Text Generator

A web application that generates human-like text using the GPT-2 large model from OpenAI. The project provides a Flask-based web interface where users enter a prompt and receive an AI-generated text completion.

Features

  • Text generation using the GPT-2 large model from HuggingFace
  • Web interface with Bootstrap styling
  • Prompt submission through a simple HTML form
  • Configurable generation parameters (see Model Parameters below)
  • Responsive design

Technologies Used

  • Python: Core programming language
  • Flask: Web framework for serving the application
  • HuggingFace Transformers: Provides the pre-trained GPT-2 model and tokenizer
  • PyTorch: Deep learning framework that powers the model
  • Bootstrap: Front-end styling and responsive design
  • HTML/CSS: Web interface design

Installation

  1. Clone the repository:

    git clone https://github.com/Shams261/Text-Generator.git
    cd Text-Generator
    
  2. Install the required packages:

    pip install flask torch transformers
    

Usage

  1. Run the Flask application (the first run downloads the GPT-2 large weights from HuggingFace, which can take a while):

    python app.py
    
  2. Open your web browser and navigate to:

    http://127.0.0.1:5000/
    
  3. Enter a prompt in the input field and click "Generate Text" to see the AI-generated completion.

Project Structure

Text-Generator/
├── app.py                   # Flask application
├── TextGenerationGpt2Huggingface.ipynb  # Jupyter notebook for model exploration
├── models/                  # Saved model files
│   └── tokenizer.pickle     # Saved tokenizer
├── templates/               # HTML templates
│   └── index.html           # Main web interface
└── README.md                # Project documentation

How It Works

The application uses a pre-trained GPT-2 large model from HuggingFace's Transformers library. When a user submits a prompt, the application performs the following steps (sketched in code after the list):

  1. Tokenizes the input text
  2. Passes it through the GPT-2 model
  3. Generates text completion with beam search and n-gram repetition prevention
  4. Returns the generated text to the user interface
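
A minimal sketch of this flow, assuming a single route and a form field named "prompt" (the route name, form field, and template variable are assumptions; the actual app.py may differ):

    from flask import Flask, render_template, request
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    app = Flask(__name__)

    # Load the GPT-2 large model and tokenizer once, at startup
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
    model = GPT2LMHeadModel.from_pretrained("gpt2-large")

    @app.route("/", methods=["GET", "POST"])
    def index():
        generated = ""
        if request.method == "POST":
            # 1. Tokenize the submitted prompt
            input_ids = tokenizer.encode(request.form["prompt"],
                                         return_tensors="pt")
            # 2-3. Run GPT-2 with beam search (parameters described below)
            output_ids = model.generate(input_ids, max_length=300,
                                        num_beams=5, no_repeat_ngram_size=2,
                                        early_stopping=True)
            # 4. Decode the tokens back into text for the template
            generated = tokenizer.decode(output_ids[0],
                                         skip_special_tokens=True)
        return render_template("index.html", generated=generated)

    if __name__ == "__main__":
        app.run(debug=True)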

Model Parameters

The text generation uses the following parameters, shown as a generate() call below the list:

  • max_length: 300 tokens
  • num_beams: 5 (beam search for better quality)
  • no_repeat_ngram_size: 2 (prevents repetition of n-grams)
  • early_stopping: True (stops generation when all beams reach EOS)
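
Continuing the sketch from How It Works, each parameter maps directly onto a keyword argument of HuggingFace's generate() method:

    # Each parameter above corresponds to one generate() keyword argument
    output_ids = model.generate(
        input_ids,
        max_length=300,           # generate at most 300 tokens in total
        num_beams=5,              # keep 5 candidate beams during search
        no_repeat_ngram_size=2,   # forbid repeating any 2-token sequence
        early_stopping=True,      # stop once every beam has produced EOS
    )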

Future Improvements

  • Add temperature control for text generation (see the sketch after this list)
  • Implement different model size options
  • Add save/export functionality for generated text
  • Incorporate user feedback mechanism
  • Add more advanced text formatting options
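
For instance, temperature control would likely mean switching generate() from beam search to sampling. A hedged sketch, where the specific values are illustrative and not part of the current code:

    # Possible future variant: sampling with a user-controlled temperature
    output_ids = model.generate(
        input_ids,
        max_length=300,
        do_sample=True,       # sample tokens instead of beam search
        temperature=0.8,      # lower = more conservative, higher = more varied
        top_k=50,             # consider only the 50 most likely next tokens
    )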

License

This project is open source and available under the MIT License.

Acknowledgements

  • OpenAI for creating the GPT-2 model
  • HuggingFace for providing access to pre-trained models
  • Flask for the web framework

Created by Shams Tabrez
