A 100% free, offline meeting notes assistant that captures system audio from Teams/Zoom calls, transcribes with speaker identification, and generates intelligent summaries with action items - all running locally on your machine!
| Platform | Download | Requirements |
|---|---|---|
| Windows | 📦 Download EXE | Windows 10/11, Ollama, FFmpeg |
- Download the latest release ZIP
- Extract to any folder
- Install Ollama from https://ollama.ai
- Open terminal and run: `ollama pull llama3.2`
- Run `MeetingMind.exe`
- Done! 🎉
💡 See INSTALL.md for detailed setup instructions
- System Audio Recording - Capture audio from Teams, Zoom, or any application (Windows WASAPI loopback)
- Auto-Detect Meetings - Automatically start recording when Teams/Zoom/Meet opens
- Real-Time Transcription - See text as you speak during meetings ⚡ NEW
- Upload Support - Drag & drop existing audio/video files
- System Tray - Quick recording controls from your taskbar
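Under the hood, auto-detection amounts to polling the process list for known meeting apps. A minimal sketch of the matching step — the process names are illustrative, enumerating processes (e.g. with psutil) is assumed, and browser-based Meet would additionally need window-title checks:

```python
# Process names to watch for -- illustrative, not the app's actual list.
MEETING_PROCESSES = {"teams.exe", "ms-teams.exe", "zoom.exe"}

def detect_meeting_app(running_processes):
    """Return the first known meeting app among running process names, or None."""
    running = {name.lower() for name in running_processes}
    found = MEETING_PROCESSES & running
    return sorted(found)[0] if found else None
```

A background thread could call this every few seconds and start recording when the result changes from `None` to a match.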
- Accurate Transcription - Uses OpenAI Whisper (runs 100% offline)
- Speaker Diarization - Identifies who's speaking using pyannote-audio
- Audio Snippets - Listen to clips to identify unknown speakers
- Word-Level Timestamps - Precise timing for each word ⚡ NEW
- Manual Bookmarks - Mark important moments with Ctrl+Shift+B
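For word-level timestamps, openai-whisper's `transcribe(..., word_timestamps=True)` returns segments that each carry a `words` list with `word`/`start`/`end` keys. A small helper to flatten that structure into per-word tuples (the result dict in the test is a hand-built stand-in, not real model output):

```python
def flatten_word_timestamps(result):
    """Flatten a Whisper result (word_timestamps=True) into (word, start, end) tuples."""
    words = []
    for segment in result.get("segments", []):
        for w in segment.get("words", []):
            words.append((w["word"].strip(), w["start"], w["end"]))
    return words
```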
- Auto-Detection - AI identifies action items, decisions, questions
- Clip Extraction - Export audio clips for specific highlights
- Quick Navigation - Jump to highlighted moments
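The app's highlight detection is AI-driven; as a rough offline approximation, keyword heuristics like these can flag candidate transcript lines (the patterns are illustrative, not the app's actual rules):

```python
import re

# Illustrative trigger phrases per highlight type.
HIGHLIGHT_PATTERNS = {
    "action": re.compile(r"\b(i'll|we'll|action item|follow up|todo)\b", re.I),
    "decision": re.compile(r"\b(we decided|let's go with|agreed to)\b", re.I),
    "question": re.compile(r"\?\s*$"),
}

def classify_line(line):
    """Return the highlight types matched by a transcript line."""
    return [kind for kind, pat in HIGHLIGHT_PATTERNS.items() if pat.search(line)]
```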
- Speaker Identification - Audio clips help you name each speaker
- Clarifying Questions - AI asks questions to improve note accuracy
- Quick or Detailed Mode - Choose 3-5 or 5-10 questions
- Smart Summaries - Powered by Ollama LLMs (llama3.2, mistral, etc.)
- Meeting Templates - Standup, planning, retrospective, 1:1 templates
- Action Items - Automatically extracts tasks with assignees
- Key Decisions - Highlights important decisions made
- Ask Questions - Query your past meetings with AI
- Semantic Search - Find meetings by meaning, not just keywords
- Context-Aware - Understands follow-up questions
- Cross-Meeting Insights - Find patterns across meetings
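Semantic search typically embeds the query and each meeting, then ranks meetings by cosine similarity. The embedding model isn't specified here, so this sketch shows only the ranking step over precomputed vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_meetings(query_vec, meeting_vecs):
    """Return meeting ids sorted from most to least similar to the query."""
    scored = [(cosine_similarity(query_vec, v), mid) for mid, v in meeting_vecs.items()]
    return [mid for _, mid in sorted(scored, reverse=True)]
```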
- Meeting Statistics - Duration, participants, word count
- Speaker Analytics - Talk time per person
- Meeting Trends - Patterns by day, week, month
- Productivity Score - Based on action items and decisions
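Talk time per speaker falls straight out of the diarization segments. A sketch, assuming `(speaker, start, end)` tuples in seconds (the tuple shape is an assumption for illustration):

```python
from collections import defaultdict

def talk_time(segments):
    """Sum per-speaker speaking time from (speaker, start_sec, end_sec) segments."""
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    return dict(totals)
```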
- Slack - Send notes via webhook
- Microsoft Teams - Share to Teams channels
- Notion - Export to Notion databases
- Email - Send via SMTP
- Calendar Sync - Google Calendar & Outlook
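A Slack incoming webhook just receives a JSON body. A minimal Block Kit payload for meeting notes could be built like this (actually posting it with `requests.post` is omitted so the sketch stays offline-friendly):

```python
import json

def slack_payload(title, summary, action_items):
    """Build a minimal Slack incoming-webhook body (Block Kit) as a JSON string."""
    blocks = [
        {"type": "header", "text": {"type": "plain_text", "text": title}},
        {"type": "section", "text": {"type": "mrkdwn", "text": summary}},
    ]
    if action_items:
        items = "\n".join(f"• {item}" for item in action_items)
        blocks.append({"type": "section", "text": {"type": "mrkdwn", "text": items}})
    return json.dumps({"blocks": blocks})
```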
- Ctrl+Shift+R - Toggle recording
- Ctrl+Shift+B - Add bookmark
- Ctrl+Shift+A - Mark action item
- Ctrl+Shift+D - Mark decision
- Markdown - Clean, portable format
- HTML - Styled web pages
- JSON - For integrations
- DOCX - Word documents
- PDF - Professional reports
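A Markdown export is little more than string assembly. A sketch, assuming a meeting dict with `title`, `summary`, `decisions`, and `action_items` keys (those names are illustrative, not the app's actual schema):

```python
def to_markdown(meeting):
    """Render meeting notes as a Markdown document."""
    lines = [f"# {meeting['title']}", "", "## Summary", meeting["summary"], ""]
    if meeting.get("decisions"):
        lines += ["## Key Decisions"] + [f"- {d}" for d in meeting["decisions"]] + [""]
    if meeting.get("action_items"):
        # Checkbox syntax so items stay actionable in most Markdown viewers.
        lines += ["## Action Items"] + [f"- [ ] {a}" for a in meeting["action_items"]]
    return "\n".join(lines)
```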
- Search & Browse - Find past meetings by date, participant, or keyword
- Statistics - Track meeting duration and frequency
- Quick Reload - Re-open and export any saved meeting
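Keyword and participant filtering over saved meetings can be as simple as the following (the `notes`/`participants` keys are assumptions for illustration):

```python
def search_history(meetings, keyword=None, participant=None):
    """Filter saved meeting dicts by keyword substring and/or participant name."""
    results = []
    for m in meetings:
        if keyword and keyword.lower() not in m["notes"].lower():
            continue
        if participant and participant not in m["participants"]:
            continue
        results.append(m)
    return results
```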
- 100% Offline - Everything runs locally, no API calls
- No Data Collection - Your meetings stay on your machine
- Completely Free - No API costs, no subscriptions
- Windows 10/11 (for system audio capture)
- Ollama (local LLM runtime) - download from https://ollama.ai
- FFmpeg (for audio processing) - download from https://ffmpeg.org/download.html
- 8GB+ RAM recommended (16GB for larger models)
- Go to Releases
- Download `MeetingMind-Windows.zip`
- Extract the ZIP file
- Install Ollama and run `ollama pull llama3.2`
- Double-click `MeetingMind.exe`
- Your browser opens to http://localhost:7860
Windows:

```
# Using Chocolatey
choco install ffmpeg

# Or download from: https://ffmpeg.org/download.html
```

macOS:

```
brew install ffmpeg
```

Linux:

```
sudo apt install ffmpeg   # Ubuntu/Debian
sudo dnf install ffmpeg   # Fedora
```

Download and install Ollama from: https://ollama.ai

After installation, pull a model:

```
ollama pull llama3.2
# or
ollama pull mistral
```

Start the Ollama server:

```
ollama serve
```

💡 Keep the Ollama server running while using MeetingMind!
```
cd MeetingMind

# Create a virtual environment (recommended)
python -m venv venv

# Activate it
.\venv\Scripts\Activate.ps1   # Windows PowerShell
# or
source venv/bin/activate      # macOS/Linux

# Install packages
pip install -r requirements.txt
```

⏱️ First install may take 5-10 minutes (Whisper models are ~150MB)

```
python main.py
```

The app will open automatically in your browser at http://127.0.0.1:7860
A system tray icon will appear for quick recording controls!
1. Start Recording
   - Click "🔴 Start Recording" in the web UI, OR
   - Right-click the system tray icon → "Start Recording"
2. During the Meeting
   - Join your Teams/Zoom call as normal
   - MeetingMind captures all system audio
3. Stop Recording
   - Click "⏹️ Stop Recording" when done
   - Processing starts automatically
After processing, you'll be guided through a Q&A session:
1. Speaker Identification
   - Listen to audio clips of each speaker
   - Enter their name (e.g., "John Smith")
   - This improves meeting notes accuracy
2. Clarifying Questions
   - AI asks about unclear action items and decisions
   - Answer or skip questions as needed
   - Choose "Skip All" to generate notes immediately
3. Review Results
   - Executive summary
   - Key points and decisions
   - Action items with assignees
- Go to the "New Meeting" tab
- Upload your audio/video file (mp3, wav, m4a, mp4, etc.)
- Click "Process Upload"
- Complete Q&A workflow
- Export results
```
MeetingMind/
├── main.py                  # Main entry point
├── app.py                   # Alternative entry (runs Gradio only)
├── build.bat                # Windows EXE build script
├── build.py                 # Python build script
├── meetingmind.spec         # PyInstaller configuration
├── core/
│   ├── config.py            # Configuration management
│   ├── controller.py        # Main app controller (orchestrates all services)
│   └── events.py            # Event system for async communication
├── services/
│   ├── audio_capture.py     # WASAPI system audio recording
│   ├── transcriber.py       # Whisper transcription
│   ├── diarizer.py          # Speaker diarization
│   ├── summarizer.py        # Ollama summarization
│   ├── qa_engine.py         # Q&A generation & management
│   ├── meeting_detector.py  # Auto-detect Teams/Zoom/Meet
│   ├── templates.py         # Meeting templates (standup, planning, etc.)
│   ├── exporter.py          # Export to MD/HTML/JSON/DOCX/PDF
│   └── history.py           # Meeting history storage & search
├── ui/
│   ├── gradio_app.py        # Web interface
│   └── system_tray.py       # System tray application
├── data/
│   ├── meetings/            # Saved meeting notes
│   └── profiles/            # Speaker profiles
├── assets/                  # Icons and images
├── docs/                    # Documentation
├── requirements.txt
└── README.md
```
```
python main.py --help
```

Options:

```
--port, -p     Port to run web UI (default: 7860)
--share        Create public share link
--no-tray      Disable system tray icon
--no-browser   Don't open browser automatically
--check        Check dependencies and exit
```

In Settings tab:
- Quick Mode: 3-5 questions (faster)
- Detailed Mode: 5-10 questions (more accurate)
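The command-line flags listed above map naturally onto `argparse`. A sketch mirroring them (not the project's actual parser):

```python
import argparse

def build_parser():
    """Argument parser mirroring the documented CLI flags."""
    p = argparse.ArgumentParser(prog="main.py")
    p.add_argument("--port", "-p", type=int, default=7860, help="Port to run web UI")
    p.add_argument("--share", action="store_true", help="Create public share link")
    p.add_argument("--no-tray", action="store_true", help="Disable system tray icon")
    p.add_argument("--no-browser", action="store_true",
                   help="Don't open browser automatically")
    p.add_argument("--check", action="store_true", help="Check dependencies and exit")
    return p
```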
| Model | Size | Speed | Accuracy | Recommended For |
|---|---|---|---|---|
| tiny | 39 MB | ⚡⚡⚡⚡ | ⭐⭐ | Quick tests |
| base | 74 MB | ⚡⚡⚡ | ⭐⭐⭐ | Most users |
| small | 244 MB | ⚡⚡ | ⭐⭐⭐⭐ | Better accuracy |
| medium | 769 MB | ⚡ | ⭐⭐⭐⭐⭐ | Professional use |
| large | 1550 MB | 🐌 | ⭐⭐⭐⭐⭐ | Maximum accuracy |
Popular choices:
- llama3.2 (3B) - Fast, good quality ⚡ (Recommended)
- llama3.1 (8B) - Better quality, slower
- mistral (7B) - Good balance
Install with: `ollama pull <model-name>`
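Summarization goes through Ollama's REST API: a POST to `/api/generate` on the local server with a model name and prompt. Building the request body — the prompt wording is illustrative, and actually sending it (e.g. `requests.post`) is left out so the example runs offline:

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def summarize_request(transcript, model="llama3.2"):
    """Build the JSON body for a non-streaming Ollama /api/generate call."""
    prompt = (
        "Summarize this meeting transcript. List key decisions and "
        "action items with assignees.\n\n" + transcript
    )
    return json.dumps({"model": model, "prompt": prompt, "stream": False})
```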
- Make sure Ollama is running: `ollama serve`
- Check if the model is installed: `ollama list`
- Pull the model if needed: `ollama pull llama3.2`
- Install FFmpeg (see installation steps above)
- Restart terminal after installation
- Make sure you're on Windows 10/11
- Check audio device in Settings
- Try refreshing audio devices
- Use smaller Whisper model (tiny or base)
- For GPU acceleration, install CUDA-enabled PyTorch:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
- Use smaller models (tiny Whisper + llama3.2)
- Close other applications
- Process shorter audio files
- MP3, WAV, M4A, MP4 (audio extracted)
- WEBM, OGG, FLAC
- Any format supported by FFmpeg
- First-time users: Start with the `base` Whisper model and `llama3.2`
- Speaker ID: Clearer audio = better speaker separation
- Long meetings: Works best with meetings under 2 hours
- Multiple languages: Whisper auto-detects language
- Background noise: Better audio = better results
See CONTRIBUTING.md for guidelines.
MIT License - Free to use, modify, and distribute!
Built with:
- OpenAI Whisper - Speech recognition
- Ollama - Local LLM runtime
- Gradio - UI framework
Having issues? Check:
- This README
- GitHub Issues
- Ollama documentation: https://ollama.ai/docs
Made with ❤️ for productive meetings | 100% Free | 100% Offline | 100% Private