"Your all-in-one terminal-based AI dev sidekick β engineered for speed, clarity, and control."
β¨ Powered by: GPTβ4o, Claude 3 Opus, Gemini 1.5 Pro, Groq, Mistral, and more.
π‘ Designed & crafted with precision by Vishnupriyan P R
AI Terminal Pal isn't your typical CLI toy; it's a full-blown developer productivity engine built into your terminal. Whether you're asking quick questions, analyzing codebases, generating boilerplate, or debugging a tangled mess, this tool understands your workflow. With multi-AI provider support, blazing speed, code-aware context building, and beautiful terminal output, it adapts to how you work.
- 🤖 Multi-AI Support: GPT-4o, Claude 3, Gemini 1.5, Mistral, Groq Llama3, and more
- 🧠 Contextual Intelligence: File-aware responses using `@filename.py` or auto-scan
- 🖼️ Themes & UI: Dynamic banner, themed layouts (Professional, Forest, Ocean, Minimal)
- 📋 Clipboard Smartness: Auto-copy responses; paste into code right away
- 📦 Project Integration: Code analysis, tree view, metrics, and documentation generation
- 📊 Live Stats: Track tokens, cost, speed, and query logs
- 🧪 Built-in Dev Tools: Linting, testing, formatting, and debugging (AI-assisted)
- 📤 Export Everything: Generate PDFs, logs, reports, or backups from the CLI
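The `@filename.py` context feature in effect inlines file contents into the prompt before it reaches the model. A minimal sketch of how such expansion could work; the function name, prompt format, and fencing style here are illustrative assumptions, not the tool's actual implementation:

```python
from pathlib import Path

def expand_context(prompt: str) -> str:
    """Replace @filename tokens in a prompt with the file's contents,
    wrapped in fenced blocks so the model sees clear file boundaries.
    Tokens that don't name an existing file are left untouched."""
    parts = []
    for token in prompt.split():
        if token.startswith("@") and Path(token[1:]).is_file():
            body = Path(token[1:]).read_text()
            parts.append(f"```{token[1:]}\n{body}\n```")
        else:
            parts.append(token)
    return " ".join(parts)
```

With this approach the model receives the real source rather than just a filename, which is what makes "file-aware responses" possible.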
```
ai-terminal-pal-/
├── ai_chat_assistant.py   # Main app
├── .env                   # API keys (optional, or added during /setup)
├── README.md              # You're reading this
└── requirements.txt       # All dependencies
```
```bash
git clone https://github.com/vishnupriyanpr183207/Terminal-Pal
cd Terminal-Pal
pip install -r requirements.txt
```

Set your API keys in `.env` or enter them manually via `/setup`:

```
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
CLAUDE_API_KEY=...
```

Then launch the app:

```bash
python ai_chat_assistant.py
```

```
/setup                   # Launch the setup wizard
/ask What is LangChain?  # Ask a quick AI query
/scan                    # Analyze the current project
/theme forest            # Apply the 'forest' theme
/attach app.py           # Attach a file for AI context
```

Use `/help` or `/nav` to explore all commands.
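A `.env` file is just plain `KEY=value` lines. The app may use python-dotenv or its own loader internally; here is a minimal, dependency-free sketch of how such a file could be read into the environment (the function name is an assumption):

```python
import os

def load_env(path: str = ".env") -> None:
    """Load KEY=value lines from a .env file into os.environ,
    skipping blank lines and # comments. Variables that are
    already set in the environment take precedence."""
    try:
        lines = open(path).read().splitlines()
    except FileNotFoundError:
        return  # no .env file is fine; /setup can supply keys instead
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```

Letting existing environment variables win means keys exported in your shell are not silently overwritten by the file.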
This app is multi-AI out of the box. You can pick your preferred model during /setup.
| Provider | Example Models | Max Context | Speed |
|---|---|---|---|
| OpenAI | gpt-4o, gpt-4-turbo | Up to 128k | 🔥 Fast |
| Claude | opus, sonnet, haiku | Up to 200k | 🚀 Blazing |
| Gemini | 1.5-pro, flash, pro | Up to 2M | ⚡ Snappy |
| Groq | llama3, mixtral, gemma | ~32k | ⚡ Ultra-fast |
| Mistral | codestral, mistral-large | ~32k | 💡 Smart |
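The table above maps naturally onto a small provider registry. A hypothetical sketch of how such a registry could be queried, e.g. to find providers whose context window fits a large codebase (model lists and limits are copied from the table; the structure is an assumption, not the app's internals):

```python
# Hypothetical registry mirroring the provider table; model names are
# examples and context sizes are the advertised maxima in tokens.
PROVIDERS = {
    "openai":  {"models": ["gpt-4o", "gpt-4-turbo"],      "max_context": 128_000},
    "claude":  {"models": ["opus", "sonnet", "haiku"],    "max_context": 200_000},
    "gemini":  {"models": ["1.5-pro", "flash", "pro"],    "max_context": 2_000_000},
    "groq":    {"models": ["llama3", "mixtral", "gemma"], "max_context": 32_000},
    "mistral": {"models": ["codestral", "mistral-large"], "max_context": 32_000},
}

def pick_provider(min_context: int) -> list[str]:
    """Return providers whose context window holds at least min_context tokens."""
    return [name for name, info in PROVIDERS.items()
            if info["max_context"] >= min_context]

print(pick_provider(150_000))  # → ['claude', 'gemini']
```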
Customize the terminal look with:

- 💼 Professional
- 🌊 Ocean
- 🌿 Forest
- ⚫ Minimal
Change themes using:

```
/theme forest
```

Example live stats after a query:

```
🧠 Provider: GPT-4o
⏱️ Time Taken: 1.45s
🔢 Tokens Used: 1,152
💰 Estimated Cost: $0.0042
📎 Files Contextualized: utils.py, routes.py
```

| Category | Commands |
|---|---|
| Setup | /setup, /provider, /theme, /config |
| AI Interaction | /ask, /chat, /generate, /explain, /improve |
| Project Tools | /scan, /tree, /metrics, /analyze, /deps |
| File Ops | /attach, /read, /write, /edit, /backup, /compare |
| Dev Utilities | /lint, /test, /debug, /docs, /format |
| Export/Logs | /pdf, /export, /report, /history, /logs |
| System | /status, /monitor, /clear, /exit, /update |
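The Estimated Cost figure in the live stats can be reproduced with simple per-token pricing arithmetic. A sketch under assumed, illustrative per-million-token rates (not the tool's actual price table, which depends on the provider's current pricing):

```python
# Illustrative USD prices per 1M tokens -- assumptions, not real rates.
PRICES_PER_1M = {
    "gpt-4o":      {"input": 2.50,  "output": 10.00},
    "claude-opus": {"input": 15.00, "output": 75.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one query: tokens times the
    per-token rate, with input and output priced separately."""
    p = PRICES_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

print(round(estimate_cost("gpt-4o", 900, 252), 4))  # → 0.0048
```

Splitting input and output matters because output tokens are typically several times more expensive than input tokens.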
- API keys stored locally in your config directory or .env
- No telemetry, no tracking; fully offline except for API calls
- Full transparency in logging and history
- Optional backup/restore for peace of mind
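One common way backup commands like `/backup` work is to write a timestamped copy of the file into a backup directory. A sketch under that assumption (the function name, directory, and naming scheme are all hypothetical):

```python
import shutil
import time
from pathlib import Path

def backup(path: str, backup_dir: str = ".backups") -> Path:
    """Copy `path` into `backup_dir` with a timestamp in the name,
    so successive backups never overwrite each other. Returns the
    location of the new copy."""
    src = Path(path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}.{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves timestamps/metadata
    return dest
```

Restore is then just the reverse copy from the newest timestamped file.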
```
/ask Give me a FastAPI boilerplate
/ask @views.py → optimize this
/ask Explain the difference between multiprocessing and multithreading
```

This project is licensed under the MIT License.
Created by:

- Vishnupriyan P R
- Vivek K K
- Akshaya K
- Sanjit M
Crafted using Python's finest: Rich, Pyperclip, Pillow, tiktoken, ReportLab, and others.
"Tools should disappear into the background and let you build."

– MeshMinds, caffeinated coders ☕