Easily draft a connected thread of up to 5 tweets to share new papers and findings. Perfect for research labs and anyone learning to use local LLMs like LLaMA 3 — explore prompt design, app building, and local inference all in one simple project.
- Runs locally: Powered by Ollama serving LLaMA 3 models on your machine—no API keys, no data leaves your system.
- Flexible interfaces: Use the Streamlit web app or the command-line tool.
- Ideal for: Science communication and LLM experimentation.
- Python 3.8 or higher
- Ollama installed
- Install Ollama:
  - Windows/macOS: Download Ollama from https://ollama.ai
  - Linux:

    ```bash
    curl https://ollama.ai/install.sh | sh
    ```
- Pull the Llama 3 model:

  ```bash
  ollama pull llama3
  ```
- Install Python dependencies:

  ```bash
  pip install -r requirements.txt
  ```
Check your Ollama installation:

```bash
python test_ollama.py
```

This script will:
- Test the Ollama connection
- Verify the model is available
- Run a sample prompt
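For reference, here is a minimal sketch of equivalent checks written against Ollama's standard REST API on its default port; the actual `test_ollama.py` may be implemented differently:

```python
# Illustrative sketch only: the real test_ollama.py may differ.
import requests

BASE_URL = "http://localhost:11434"  # Ollama's default local endpoint

# 1. Test the Ollama connection by listing locally available models.
resp = requests.get(f"{BASE_URL}/api/tags")
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Available models:", models)

# 2. Verify the llama3 model is available.
if not any(name.startswith("llama3") for name in models):
    raise SystemExit("llama3 not found - run: ollama pull llama3")

# 3. Run a sample prompt (non-streaming generation).
gen = requests.post(
    f"{BASE_URL}/api/generate",
    json={"model": "llama3", "prompt": "Reply with one short sentence.", "stream": False},
    timeout=120,
)
gen.raise_for_status()
print("Sample response:", gen.json()["response"])
```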
Port error with `ollama serve`?
Ollama may already be running. Try:

```bash
# Windows
taskkill /F /IM ollama.exe

# Linux/macOS
pkill ollama
```

Then restart `ollama serve`.
Ollama not responding?
- Ensure the service is running
- Restart your computer if needed
- Confirm the llama3 model is pulled:

  ```bash
  ollama pull llama3
  ```
- Start Ollama (`ollama serve`)
- Launch the web app:

  ```bash
  streamlit run app.py
  ```
- Open your browser to the displayed URL (usually http://localhost:8501)
- Start Ollama (`ollama serve`)
- Run the CLI:

  ```bash
  python cli.py
  ```
- Follow the prompts
In the web app:
- Enter your text
- Select number of tweets (1–5)
- Click Generate Tweets
- View your thread
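Under the hood, this flow maps to a small amount of Streamlit plus LangChain code. The following is a hypothetical sketch of what the core of `app.py` could look like, not the actual implementation (widget names and prompt wording are assumptions):

```python
# Hypothetical sketch of app.py's core flow; illustrative only.
import streamlit as st
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")  # talks to the local Ollama instance

st.title("Tweet Thread Generator")
text = st.text_area("Enter your text")
n = st.slider("Number of tweets", min_value=1, max_value=5, value=3)

if st.button("Generate Tweets") and text.strip():
    prompt = (
        f"Summarize the following text as a thread of {n} tweets, "
        f"each under 280 characters, numbered 1/{n} through {n}/{n}:\n\n{text}"
    )
    # invoke() sends the prompt to llama3 and returns the generated thread
    st.write(llm.invoke(prompt))
```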
In the CLI:
- Run the program
- Enter your text (press Enter twice to finish)
- Enter number of tweets
- View your thread
- Optionally, generate more tweets
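A plausible skeleton for this interactive loop, again using LangChain's Ollama wrapper (illustrative only; the real `cli.py` may differ):

```python
# Hypothetical sketch of cli.py's interactive loop; names are illustrative.
from langchain_community.llms import Ollama

def read_multiline() -> str:
    """Collect lines until the user presses Enter twice (an empty line)."""
    print("Enter your text (press Enter twice to finish):")
    lines = []
    while (line := input()) != "":
        lines.append(line)
    return "\n".join(lines)

def main() -> None:
    llm = Ollama(model="llama3")  # requires a running local Ollama
    while True:
        text = read_multiline()
        n = int(input("Number of tweets (1-5): "))
        prompt = (
            f"Summarize the following text as a thread of {n} tweets, "
            f"each under 280 characters:\n\n{text}"
        )
        print(llm.invoke(prompt))
        if input("Generate more tweets? (y/n): ").strip().lower() != "y":
            break

if __name__ == "__main__":
    main()
```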
Make sure Ollama is running before starting the app. The application uses LangChain’s Ollama integration to communicate with your local Ollama instance.
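At its simplest, that integration looks like this (assuming the langchain-community package; `base_url` defaults to the local server and is shown only for clarity):

```python
from langchain_community.llms import Ollama

# base_url is explicit here; it defaults to the local Ollama server anyway.
llm = Ollama(model="llama3", base_url="http://localhost:11434")
print(llm.invoke("Say hello."))
```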
