LiMo helps Twitch streamers and their moderators review fast-moving chat using an AI model that flags potentially harmful messages in real time. Moderators can fine-tune sensitivity, maintain whitelists for community slang, and trigger moderation actions directly from the dashboard.
- Frontend – React + Vite dashboard for moderators (`frontend/`)
- Backend – Node.js/Express API, MongoDB persistence, Twitch IRC bridge (`backend/`)
- AI Service – FastAPI microservice wrapping the existing Python model (`ai_service/`)
- Model Training Assets – Original prototype and adapter outputs (`ai_moderator.py`, `adapters/`)
```
LiMo/
├─ backend/          # Express service and Twitch integration
├─ frontend/         # React dashboard (Vite)
├─ ai_service/       # FastAPI model server that wraps ai_moderator.py
├─ adapters/         # (Optional) LoRA adapters per streamer
├─ ai_moderator.py   # Existing Python prototype (reused by the AI service)
└─ *.json            # Sample data and adapter map
```
Prerequisites:

- Node.js ≥ 18 and npm
- Python ≥ 3.10 (matching the original prototype)
- MongoDB (local or remote connection string)
- Twitch IRC credentials (bot username, OAuth token, target channel)
Set up the AI service first so the backend has something to call:

```
cd ai_service
python -m venv .venv
.venv\Scripts\activate              # Windows (macOS/Linux: source .venv/bin/activate)
pip install -r requirements.txt
uvicorn main:app --host 0.0.0.0 --port 8000
```

The service loads the base toxicity model once and responds on:

- `POST /analyze` – batch-classify chat messages
- `GET /health` – status and adapter diagnostics
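For a quick smoke test you can post a message directly. The request body below is an assumption; check the FastAPI route in `ai_service/main.py` (the module name implied by `uvicorn main:app`) for the exact schema:

```
curl -X POST http://localhost:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{"streamer_id": "my_channel", "messages": ["sample chat message"]}'
```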
If you have trained LoRA adapters, list them in `streamer_adapters.json` (from the original workflow) and the service will auto-load them per streamer.
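The exact shape of `streamer_adapters.json` comes from the original training workflow; as a purely illustrative sketch, a simple streamer-id-to-adapter-path map might look like:

```
{
  "my_channel": "adapters/my_channel",
  "another_channel": "adapters/another_channel"
}
```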
With the AI service up, start the backend:

```
cd backend
npm install
copy .env.example .env   # update values (macOS/Linux: cp .env.example .env)
npm run dev              # starts http://localhost:4000
```

Key env vars (`backend/.env.example`):

- `MONGO_URI` – connection string (e.g., `mongodb://localhost:27017/limo`)
- `STREAMER_ID` – default community id used for incoming Twitch messages
- `AI_SERVICE_URL` – usually `http://localhost:8000`
- `TWITCH_*` – bot credentials and channel for IRC ingestion
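A filled-in `backend/.env` might look like the following; all values are placeholders, and the individual `TWITCH_*` names follow the Twitch setup notes further down:

```
MONGO_URI=mongodb://localhost:27017/limo
STREAMER_ID=my_channel
AI_SERVICE_URL=http://localhost:8000
TWITCH_USERNAME=limo_bot
TWITCH_OAUTH_TOKEN=oauth:xxxxxxxxxxxxxxxx
TWITCH_CHANNEL=my_channel
```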
The backend exposes REST endpoints under `/api`, proxies moderation actions to Twitch, and creates a websocket feed on `/live` for real-time UI updates.
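To watch the feed without running the dashboard, any websocket client will do; for example, with the `wscat` npm tool (not part of this repo):

```
npx wscat -c ws://localhost:4000/live
```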
Finally, start the dashboard:

```
cd frontend
npm install
npm run dev   # served at http://localhost:5173
```

The Vite dev server proxies REST calls (`/api`) and the websocket (`/live`) to the Express backend. The dashboard shows:
- Live flagged messages with quick actions (timeout, ban, ignore)
- Recent chat stream annotated with scores and thresholds
- Settings panel for sensitivity + whitelist management per streamer
- Register a Twitch application or generate a chat OAuth token (via https://twitchapps.com/tmi/).
- Store the token as `TWITCH_OAUTH_TOKEN` (format `oauth:xxxxx`).
- Use a bot account for `TWITCH_USERNAME` and the target channel (no `#` prefix) for `TWITCH_CHANNEL`.
- When the Express server starts, it joins the channel and forwards messages through the AI service and into MongoDB.
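To confirm a token is still valid before wiring it in, you can hit Twitch's validation endpoint (strip the `oauth:` prefix for this call):

```
curl -H "Authorization: OAuth <token-without-prefix>" https://id.twitch.tv/oauth2/validate
```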
MongoDB collections:

- `messages` – chat messages with scores, actions, and whitelist hits
- `streamerconfigs` – per-streamer sensitivity and whitelist
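To inspect stored documents, assuming the example `limo` database name from the `MONGO_URI` above:

```
mongosh limo --eval 'db.messages.find().sort({ _id: -1 }).limit(5)'
```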
You can seed data manually using the `/api/moderation/messages/analyze` endpoint to test without Twitch.
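For example (the request body here is an assumption; check the Express route for the exact schema):

```
curl -X POST http://localhost:4000/api/moderation/messages/analyze \
  -H "Content-Type: application/json" \
  -d '{"streamerId": "my_channel", "username": "viewer42", "message": "sample chat message"}'
```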
Tips:

- Keep the AI microservice running before starting the backend so classification requests succeed.
- The React dashboard relies on the websocket stream for instant updates; start the backend with `npm run dev` to enable it.
- To test manual feedback loops, reuse the `ai_moderator.py` workflows to generate adapters, then place them under `adapters/<streamer_id>/` (a layout sketch follows this list).
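A typical adapter directory, assuming Hugging Face PEFT-style LoRA outputs (actual file names depend on your training workflow):

```
adapters/
└─ <streamer_id>/
   ├─ adapter_config.json
   └─ adapter_model.safetensors
```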
Planned next steps:

- Add authentication for moderator accounts.
- Extend the AI service with an endpoint to submit labeled feedback for on-demand LoRA training.
- Package the three services with Docker Compose for easier deployment.