LiMo – Real-Time AI Chat Moderator

LiMo helps Twitch streamers and their moderators review fast-moving chat using an AI model that flags potentially harmful messages in real time. Moderators can fine-tune sensitivity, maintain whitelists for community slang, and trigger moderation actions directly from the dashboard.

Architecture

  • Frontend – React + Vite dashboard for moderators (frontend/)
  • Backend – Node.js/Express API, MongoDB persistence, Twitch IRC bridge (backend/)
  • AI Service – FastAPI microservice wrapping the existing Python model (ai_service/)
  • Model Training Assets – Original prototype and adapter outputs (ai_moderator.py, adapters/)

LiMO/
├─ backend/           # Express service and Twitch integration
├─ frontend/          # React dashboard (Vite)
├─ ai_service/        # FastAPI model server that wraps ai_moderator.py
├─ adapters/          # (Optional) LoRA adapters per streamer
├─ ai_moderator.py    # Existing Python prototype (re-used by AI service)
└─ *.json             # Sample data and adapter map

Prerequisites

  • Node.js ≥ 18 and npm
  • Python ≥ 3.10 (matching the original prototype)
  • MongoDB (local or remote connection string)
  • Twitch IRC credentials (bot username, OAuth token, target channel)

1. Start the AI Microservice (FastAPI)

cd ai_service
python -m venv .venv
.venv\Scripts\activate     # Windows
source .venv/bin/activate  # macOS/Linux
pip install -r requirements.txt
uvicorn main:app --host 0.0.0.0 --port 8000

The service loads the base toxicity model once and exposes two endpoints:

  • POST /analyze – batch-classify chat messages
  • GET /health – status and adapter diagnostics
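
A quick way to verify the service is running is to call the health endpoint and send a small test batch. The /analyze payload fields below are illustrative only; check ai_service/main.py for the actual request schema.

curl http://localhost:8000/health

# Field names in this body are illustrative, not the confirmed schema
curl -X POST http://localhost:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{"streamer_id": "example_streamer", "messages": ["you are awesome", "example toxic message"]}'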

If you have trained LoRA adapters, list them in streamer_adapters.json (the adapter map from the original workflow) and the service will load them automatically per streamer.

2. Configure and Run the Backend (Express)

cd backend
npm install
copy .env.example .env   # Windows (macOS/Linux: cp .env.example .env), then update the values
npm run dev              # starts http://localhost:4000

Key env vars (backend/.env.example):

  • MONGO_URI – connection string (e.g., mongodb://localhost:27017/limo)
  • STREAMER_ID – default community id used for incoming Twitch messages
  • AI_SERVICE_URL – usually http://localhost:8000
  • TWITCH_* – bot credentials and channel for IRC ingestion
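
A filled-in .env might look like the sketch below. The MongoDB URI and AI service URL match the defaults above; the streamer id and Twitch values are placeholders you must replace with your own, and any variables not shown here follow backend/.env.example.

MONGO_URI=mongodb://localhost:27017/limo
STREAMER_ID=example_streamer
AI_SERVICE_URL=http://localhost:8000
TWITCH_USERNAME=my_bot_account
TWITCH_OAUTH_TOKEN=oauth:xxxxx
TWITCH_CHANNEL=mychannel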

The backend exposes REST endpoints under /api, proxies moderation actions to Twitch, and creates a websocket feed on /live for real-time UI updates.
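
During development you can watch the /live feed with any websocket client; for example, using the wscat CLI and assuming the default port 4000 shown above:

npx wscat -c ws://localhost:4000/live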

3. Run the React Dashboard

cd frontend
npm install
npm run dev   # served at http://localhost:5173

The Vite dev server proxies REST calls (/api) and the websocket (/live) to the Express backend. The dashboard shows:

  • Live flagged messages with quick actions (timeout, ban, ignore)
  • Recent chat stream annotated with scores and thresholds
  • Settings panel for sensitivity + whitelist management per streamer

Twitch Integration Notes

  1. Register a Twitch application or generate a chat OAuth token (via https://twitchapps.com/tmi/).
  2. Store the token as TWITCH_OAUTH_TOKEN (format oauth:xxxxx).
  3. Use a bot account for TWITCH_USERNAME and the target channel (no # prefix) for TWITCH_CHANNEL.
  4. When the Express server starts, it joins the channel, scores incoming messages through the AI service, and stores the results in MongoDB.

MongoDB Collections

  • messages – chat messages with scores, actions, whitelist hits
  • streamerconfigs – per-streamer sensitivity and whitelist

You can seed data manually using the /api/moderation/messages/analyze endpoint to test without Twitch.
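
For example, a manual curl request to that endpoint might look like the following; the body fields are illustrative, so check the backend route handler for the real schema.

# Field names are illustrative only
curl -X POST http://localhost:4000/api/moderation/messages/analyze \
  -H "Content-Type: application/json" \
  -d '{"streamer_id": "example_streamer", "message": "hello chat"}'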

Development Tips

  • Make sure the AI microservice is running before you start the backend so classification requests succeed.
  • The React dashboard relies on the websocket stream for instant updates; start the backend with npm run dev to enable it.
  • To test manual feedback loops, reuse ai_moderator.py workflows to generate adapters, then place them under adapters/<streamer_id>/.

Next Steps

  • Add authentication for moderator accounts.
  • Extend the AI service with an endpoint to submit labeled feedback for on-demand LoRA training.
  • Package the three services with Docker Compose for easier deployment.
