YTTP AI is a powerful desktop application that automates the entire workflow of extracting YouTube transcripts, processing them with local AI models, and creating polished documents. This tool is designed for content creators, researchers, and anyone who needs to work with YouTube video content in text format.
Key features:
- 🎥 Automatic YouTube transcript extraction
- ✂️ Intelligent text chunk splitting
- 🤖 AI-powered text processing with Ollama models
- 📝 Combined document generation (DOCX/TXT)
- 🎨 Modern animated GUI with responsive design
- ⚙️ Customizable processing settings
- 🔄 Automatic retry mechanism for reliability
- 🧹 Temporary file cleanup after processing
- 🚀 One-click installation and launch
- Python 3.8+
- Ollama installed and running
- An Ollama model of your choice (recommended: `llama3.2` or `deepseek-r1`)
For optimal performance, we recommend:
- CPU: Intel i5 or equivalent (4 cores minimum)
- RAM: 8GB+ (16GB recommended)
- GPU: 2-4GB VRAM (for GPU acceleration)
- Storage: SSD preferred
For models like `llama3.2`:
- Minimum: 4GB RAM + 2GB VRAM
- Recommended: 8GB RAM + 4GB VRAM
1. **Install Ollama** - follow the official installation instructions: https://ollama.com/download

2. **Download a model:**

   ```bash
   ollama pull llama3.2    # recommended for 2-4GB VRAM
   # or
   ollama pull deepseek-r1
   ```

3. **Run the application:**

   ```bash
   python Start.py
   ```
The application will automatically:
- Install required Python dependencies
- Create necessary directories
- Configure default settings
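The dependency auto-install step could be sketched roughly as follows. This is a hypothetical helper for illustration, not the actual `Start.py` code:

```python
import importlib.util
import subprocess
import sys

def ensure_installed(packages):
    """Install any packages that are not yet importable, using pip.

    Returns the list of packages that had to be installed.
    """
    missing = [p for p in packages if importlib.util.find_spec(p) is None]
    if missing:
        subprocess.check_call([sys.executable, "-m", "pip", "install", *missing])
    return missing
```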
- Automatic extraction of YouTube transcripts
- Configurable chunk size and overlap
- Retry mechanism for unreliable connections
- Support for multiple YouTube URL formats
- Local processing with Ollama models
- Customizable processing prompts (tailor to your specific needs)
- Typewriter effect display with adjustable speed
- Cancellable processing operations
- DOCX and TXT output formats
- Customizable document titles
- Title font size control
- Automatic filename suggestions
- Modern animated interface
- Responsive layout for all screen sizes
- Colorful theme with gradient backgrounds
- Animated progress indicators
- Inline error messages (no popups)
- Tab-based settings organization
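Support for multiple YouTube URL formats typically comes down to extracting the 11-character video ID. A minimal sketch of such a helper (hypothetical; the app's actual parsing may differ):

```python
import re

# One pattern covering common YouTube URL shapes:
# watch?v=..., youtu.be/..., embed/..., shorts/...
_YT_ID_PATTERN = r"(?:v=|embed/|youtu\.be/|shorts/)([A-Za-z0-9_-]{11})"

def extract_video_id(url):
    """Return the 11-character video ID from a YouTube URL, or None."""
    match = re.search(_YT_ID_PATTERN, url)
    return match.group(1) if match else None
```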
1. **Launch the application:**

   ```bash
   python Start.py
   ```

2. **Process a YouTube video:**
   - Enter the YouTube URL on the Start screen
   - View real-time processing on the Processing screen
   - Adjust the filename in the footer if needed
   - Click "Combine" to save the final document

3. **Customize settings:**
   - Access settings via the Settings tab
   - Adjust chunking parameters
   - Modify processing prompts (critical for optimal results)
   - Configure output options
The processing prompt is crucial for getting high-quality results. The default prompt is a basic instruction that may need adjustment for your specific use case.
- Be specific about what you want the AI to do
- Include examples of desired output format
- Specify tone (academic, casual, professional)
- Define structure requirements (bullet points, paragraphs)
- **Academic rewrite:** "Correct grammar, improve clarity, and convert to academic writing style. Maintain the original meaning while enhancing vocabulary."
- **Bullet summary:** "Summarize key points in bullet format. Include timestamps for each major topic. Keep technical terms accurate."
- **Blog post:** "Transform into a professional blog post. Add section headers. Remove filler words and repetitions."
You can access and modify the processing prompt in Settings → Processing Settings.
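Under the hood, each chunk is sent to Ollama together with your processing prompt. A minimal sketch of how such a request could be assembled against Ollama's HTTP `generate` endpoint (assuming the default localhost port; illustrative only, not the app's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model, prompt_template, chunk):
    """Assemble the JSON body for one chunk, prepending the custom prompt."""
    body = {
        "model": model,
        "prompt": f"{prompt_template}\n\n{chunk}",
        "stream": False,  # ask for one complete response, not a token stream
    }
    return json.dumps(body).encode("utf-8")

# Sending it (requires a running Ollama server):
# req = urllib.request.Request(OLLAMA_URL, data=build_request("llama3.2", "...", "..."),
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```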
- **Model selection:**
  - For 2-4GB VRAM: use `llama3.2` (2B parameters)
  - For 4-8GB VRAM: use `deepseek-r1` (7B parameters)
  - Adjust your model choice to match your hardware capabilities

- **Chunk sizing:**
  - Start with 300 words per chunk for 2B models
  - Increase to 500-700 words for larger models
  - Adjust based on your hardware capabilities

- **Prompt efficiency:**
  - Keep prompts concise but descriptive
  - Avoid redundant instructions
  - Test prompts on small chunks first
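Chunking with overlap keeps context across chunk boundaries: the last few words of each chunk are repeated at the start of the next. A minimal sketch of such a splitter (a hypothetical helper, not the app's actual implementation):

```python
def split_into_chunks(text, chunk_size=300, overlap=50):
    """Split text into word chunks of ~chunk_size words, with `overlap`
    words repeated between consecutive chunks for context."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # the final chunk already covers the rest of the text
    return chunks
```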
- **Transcript unavailable:**
  - Verify the video has captions
  - Try a different video
  - Increase the retry count in settings

- **Slow processing:**
  - Use a smaller model (e.g., `llama3.2`)
  - Reduce the chunk size
  - Close other resource-intensive applications

- **Ollama connection issues:**
  - Ensure Ollama is running (`ollama serve`)
  - Check http://localhost:11434 in your browser

- **Poor output quality:**
  - Refine your processing prompt
  - Reduce the chunk size for more focused processing
  - Try a different model
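The Ollama connection check above can also be scripted. A small sketch (hypothetical helper; assumes Ollama's default port):

```python
import urllib.error
import urllib.request

def ollama_reachable(base_url="http://localhost:11434", timeout=2.0):
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.getcode() == 200  # Ollama replies "Ollama is running"
    except (urllib.error.URLError, OSError):
        return False
```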
The application automatically clears temporary files after processing. To clear them manually:

```bash
rm -rf temp/
```

or simply delete the `temp` folder in your file manager.
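A cleanup routine like the app's could be sketched as follows (a hypothetical helper, not the actual code):

```python
import shutil
from pathlib import Path

def clear_temp(temp_dir="temp"):
    """Delete the temporary working directory if present, then recreate it empty."""
    path = Path(temp_dir)
    shutil.rmtree(path, ignore_errors=True)  # no error if the folder is missing
    path.mkdir(exist_ok=True)
```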
Contributions are welcome! Please open an issue or submit a pull request for:
- Bug fixes
- New features
- Documentation improvements
- Translation support
- Additional processing prompt templates
This project is licensed under the MIT License - see the LICENSE file for details.
Note: This application processes videos through YouTube's public API. Please respect content creators' rights and YouTube's Terms of Service when using this tool.