A beautiful, modern web application that allows you to chat with multiple AI models simultaneously. Experience conversations with Gemini, ChatGPT, Claude, and Grok all in one interface.
- Multi-AI Chat: Send one prompt to 4 different AI models simultaneously
- Real-time Responses: Watch all models respond in parallel
- Modern UI: Clean, responsive design that works on all devices
- Health Monitoring: Real-time status of all AI services
- Statistics: Track message count and response times
- Keyboard Shortcuts: Enhanced UX with keyboard controls
- Responsive Design: Perfect on desktop, tablet, and mobile
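The multi-AI fan-out described above boils down to sending one prompt to every backend concurrently and collecting each reply as it arrives. A minimal sketch of that pattern (the port mapping and the `fetch` callable are illustrative assumptions, not the project's actual code):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Assumed name-to-port mapping, matching the four backend services.
SERVICES = {"gemini": 8001, "chatgpt": 8002, "claude": 8003, "grok": 8004}

def fan_out(prompt, fetch, services=SERVICES):
    """Call fetch(name, port, prompt) for every service in parallel.

    Results are collected as each future completes, so one slow or
    failing model never blocks the others.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        futures = {
            pool.submit(fetch, name, port, prompt): name
            for name, port in services.items()
        }
        for future in as_completed(futures):
            name = futures[future]
            try:
                results[name] = future.result()
            except Exception as exc:
                results[name] = f"error: {exc}"
    return results

if __name__ == "__main__":
    # Stub fetch so the sketch runs without live backends.
    print(fan_out("Hello!", lambda name, port, p: f"{name}@{port} got: {p}"))
```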
- Python 3.8+
- Required Python packages (install via `pip install -r requirements.txt`)
- API keys for the respective services (stored in a `.env` file)
- Create a `.env` file in the root directory:

```
GEMINI_API_KEY=your_gemini_api_key_here
CHATGPT_API_KEY=your_openai_api_key_here
CLAUDE_API_KEY=your_anthropic_api_key_here
GROK_API_KEY=your_xai_api_key_here
```

On Windows:

```
# Start all backend services
start-services.bat

# Open the frontend
cd frontend
# Open index.html in your browser or use a local server
```
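Each backend reads its API key from the `.env` file at startup. As an illustrative, dependency-free sketch of what that loading involves (the project itself may use a package such as `python-dotenv` instead):

```python
import os

def load_env(path=".env"):
    """Parse KEY=value lines into os.environ, skipping blanks and comments.

    Minimal stdlib-only sketch; existing environment variables are not
    overwritten, so keys exported in the shell take precedence.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```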
On Linux/Mac:

```sh
# Make the script executable
chmod +x start-services.sh

# Start all backend services
./start-services.sh

# Open the frontend
cd frontend
# Open index.html in your browser or use a local server
```

Or start each service manually:

```sh
# Terminal 1 - Gemini
cd backend && python gemini_llm.py

# Terminal 2 - ChatGPT
cd backend && python chatgpt_llm.py

# Terminal 3 - Claude
cd backend && python claude_llm.py

# Terminal 4 - Grok
cd backend && python grok_llm.py
```

- Navigate to the `frontend` folder
- Open `index.html` in your web browser
- Click "Try Demo" to start chatting!
Or use a local server for a better experience:

```sh
cd frontend
python -m http.server 3000
# Then visit http://localhost:3000
```

Project structure:

```
multimind/
├── backend/
│   ├── models.py            # Pydantic models for API
│   ├── gemini_llm.py        # Gemini AI service (Port 8001)
│   ├── chatgpt_llm.py       # ChatGPT service (Port 8002)
│   ├── claude_llm.py        # Claude service (Port 8003)
│   └── grok_llm.py          # Grok service (Port 8004)
├── frontend/
│   ├── index.html           # Landing page
│   ├── chat.html            # Chat interface
│   ├── css/
│   │   ├── style.css        # Main styles
│   │   └── chat.css         # Chat-specific styles
│   └── js/
│       ├── main.js          # Landing page JavaScript
│       └── chat.js          # Chat functionality
├── start-services.bat       # Windows service starter
├── start-services.sh        # Linux/Mac service starter
└── README.md
```
Each AI service runs on a different port and exposes the same endpoints:

- `POST /chat` - Send a chat message
- `GET /health` - Check service health
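The `/chat` endpoint can also be called from Python. This stdlib-only sketch mirrors the request body shown in the curl example; the exact shape of the JSON response is an assumption, so adapt it to what the services actually return:

```python
import json
from urllib.request import Request, urlopen

def build_request(port, prompt, instructions="You are a helpful assistant."):
    """Build a POST /chat request for the service on the given local port."""
    body = json.dumps({"prompt": prompt, "instructions": instructions}).encode()
    return Request(
        f"http://localhost:{port}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(port, prompt, **kwargs):
    """Send the request and decode the JSON response (shape assumed)."""
    with urlopen(build_request(port, prompt, **kwargs), timeout=30) as resp:
        return json.loads(resp.read())
```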
Example request:

```sh
curl -X POST "http://localhost:8001/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Hello, how are you?",
    "instructions": "You are a helpful assistant."
  }'
```

- Hero section with smooth animations
- Feature showcase with hover effects
- Responsive design for all devices
- Call-to-action buttons
- 4-column grid layout for desktop
- Responsive stacking for mobile
- Real-time typing indicators
- Message history for each model
- Service health monitoring
- Response time statistics
- Keyboard shortcuts (Ctrl+Enter to send)
- Auto-resizing text input
- Loading states and animations
- Error handling and retry logic
- Smooth transitions and effects
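The error handling and retry logic above lives in the frontend JavaScript; as a language-neutral illustration of the idea, here is the same exponential-backoff pattern sketched in Python (names and defaults are illustrative):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.5):
    """Run fn(), retrying on any exception with exponential backoff.

    Waits base_delay, then 2x, 4x, ... between attempts; the last
    failure is re-raised to the caller.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```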
- Modify `frontend/css/style.css` for general styling
- Modify `frontend/css/chat.css` for the chat interface
- Use the CSS variables in `:root` for easy theme changes
- Create a new LLM file in `backend/`
- Add a FastAPI endpoint following the existing pattern
- Update the frontend JavaScript to include the new model
- Add the new service to the start scripts
- Services not starting: Check if ports 8001-8004 are available
- API keys not working: Verify your `.env` file configuration
- CORS issues: Make sure services are running on localhost
- Frontend not connecting: Check console for network errors
- Check individual service health: `http://localhost:800X/health`
- View browser console for JavaScript errors
- Check backend terminal outputs for Python errors
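The per-service health checks can be swept in one go. A small stdlib-only script (ports 8001-8004 assumed from the project structure; a refused connection is reported as down):

```python
from urllib.request import urlopen
from urllib.error import URLError

# Assumed name-to-port mapping for the four services.
PORTS = {"gemini": 8001, "chatgpt": 8002, "claude": 8003, "grok": 8004}

def check(port, timeout=2):
    """Return True if GET /health on the given local port answers 200."""
    try:
        with urlopen(f"http://localhost:{port}/health", timeout=timeout) as r:
            return r.status == 200
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    for name, port in PORTS.items():
        print(f"{name:8s} (:{port}) -> {'up' if check(port) else 'down'}")
```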
The interface is fully responsive and includes:
- Stacked chat layout on mobile
- Touch-friendly buttons and inputs
- Optimized spacing and typography
- Swipe gestures (future enhancement)
- Dark/Light theme toggle
- Message export functionality
- Conversation history persistence
- Custom system prompts per model
- File upload support
- Voice input/output
- Performance analytics dashboard
- Model comparison tools
This project is open source and available under the MIT License.
Contributions are welcome! Please feel free to submit a Pull Request.
Enjoy chatting with multiple AI minds! 🧠✨