Clipy converts long-form content into BRAINROT.
Clipy takes long-form content and produces several short-form video clips.
- ffmpeg (for rendering the video)
- OpenAI API key (for content highlighting); costs ~$0.035 per hour of video with o3-mini
- Python packages listed in requirements.txt
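Since the highlighting cost scales roughly linearly with video length, a back-of-the-envelope estimate is easy to sketch (based on the ~$0.035/hour figure above; the function name below is mine, not part of Clipy):

```python
def estimate_highlighting_cost(video_hours: float, rate_per_hour: float = 0.035) -> float:
    """Rough OpenAI spend for content highlighting with o3-mini."""
    return video_hours * rate_per_hour

# A two-hour podcast costs roughly seven cents to highlight.
print(f"${estimate_highlighting_cost(2):.3f}")
```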
```shell
git clone https://github.com/rfheise/clipy.git
cd clipy
pip install -r requirements.txt
export OPENAI_API_KEY=<insert api key>
```
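Before running Clipy, it can help to sanity-check the prerequisites. A minimal sketch (not part of Clipy; `check_prereqs` is a hypothetical helper) that verifies ffmpeg is on PATH and the API key is exported:

```python
import os
import shutil

def check_prereqs() -> list[str]:
    """Return a list of missing prerequisites (empty means ready to go)."""
    missing = []
    if shutil.which("ffmpeg") is None:
        missing.append("ffmpeg is not on PATH")
    if not os.environ.get("OPENAI_API_KEY"):
        missing.append("OPENAI_API_KEY is not set")
    return missing

for problem in check_prereqs():
    print("warning:", problem)
```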
```shell
python -m clipy.main <optional arguments> -i <input file> -o <output directory>
```
| Flag | Description | Default Value |
|---|---|---|
| --device | Torch device used to run the models | cuda if CUDA is detected, otherwise cpu |
| --gpt-highlighting-model | GPT model used for content highlighting | o3-mini |
| --subtitle-model | Model used to generate subtitles (see OpenAI Whisper for more info) | turbo |
| --num-clips | Number of clips to output | ceiling(runtime/5) |
| --debug-mode | Runs in debug mode, which is significantly faster and caches everything but produces very poor quality output | N/A |
| -h | Shows additional configuration options | N/A |
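The --num-clips default, ceiling(runtime/5), can be sketched as follows. This is an illustration only; it assumes runtime is measured in minutes, i.e. roughly one clip per five minutes of footage (check Config.py for the actual units):

```python
import math

def default_num_clips(runtime_minutes: float) -> int:
    # Assumption: runtime is in minutes; one clip per started 5-minute block.
    return math.ceil(runtime_minutes / 5)

print(default_num_clips(60))  # an hour of content -> 12 clips
print(default_num_clips(7))   # a short video still gets 2 clips
```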
See `Config.py` for more details.
You need a GPU to run this software efficiently. It currently takes ~10 minutes to process an hour of content on my 4090 with the turbo subtitle model, and ~1.5 hours to process an hour of content on my MacBook using the CPU with the tiny.en subtitle model.
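The --device default ("cuda if cuda is detected else cpu") follows the usual PyTorch idiom. A sketch of that probe, written to degrade gracefully when torch isn't installed:

```python
try:
    import torch
    # Standard PyTorch device probe: prefer the GPU when one is visible.
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"  # no torch installed, so no GPU path anyway

print(device)
```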
You can also try gpt-4o-mini (used in debug mode) instead of o3-mini, since it costs a fraction as much. However, I've found its results to be significantly worse. You can try any other model you like, but in my experience o3-mini offers the best performance/cost trade-off.
- Automatically highlights the most interesting moments in a video
  - Currently uses ChatGPT to highlight the most interesting moments
  - This feels like a grift, and I plan on developing/finding a model that can run locally
- Crops the video around the person speaking
- Adds PIZZAZZ to the output video
  - Subtitles
  - More on the way
See `Dev-info.md` for more details.
The TalkNet & S3FD model weights and preprocessing steps are modified from this repository.

