Twiga is a WhatsApp chatbot designed specifically for Tanzanian teachers, built by the Tanzania AI Community and open-source contributors. It aims to enhance the educational experience for educators by giving them access to generative AI. Using retrieval-augmented generation (RAG), Twiga can communicate with teachers in a natural way while combining the adaptive capabilities of LLMs with the knowledge provided in the curriculum and textbooks of the Tanzania Institute of Education (TIE). We aim to build the bot for a multitude of educational applications, such as generating exams, writing lesson plans, searching for textbook information, and more.
This project was awarded the Meta Llama Impact Grant Innovation Award 2024 for its use of Llama open-source LLMs for social good. Read our roadmap for further details on our development plans.
We would like to thank those who are sponsoring this project.
Here are a couple of screenshots. Alternatively, you can take a look at our brief demo.
We encourage you to contribute to Twiga! There is plenty of documentation describing the current architecture of Twiga, how to contribute, and how to get started in the docs/ folder.
For further support you can join our Discord to discuss directly with the community and stay up to date on what's happening, or you can contact us more formally using GitHub Discussions.
Thank you to all the people that have contributed to Twiga so far!
- The FastAPI service now exposes `GET /metrics`, backed by `prometheus-fastapi-instrumentator` and custom counters for WhatsApp webhooks, LLM calls, rate limiting, and generated messages.
- Launch the local Prometheus + Grafana stack with `docker compose -f monitoring/docker-compose.monitoring.yml up`; Prometheus listens on `:9090` and Grafana on `:4000`.
- Prometheus scrapes the Twiga app at `host.docker.internal:8000/metrics` by default; if you run the API elsewhere, edit `monitoring/prometheus/prometheus.yml` accordingly.
- Grafana auto-provisions a Prometheus datasource plus three dashboards (`FastAPI Overview`, `LLM Performance`, `Redis Overview`) from `monitoring/grafana/provisioning`.
- Example alerts (5xx rate, slow latency) ship in `monitoring/prometheus/alerts.yml`.
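The scrape target and alert rules described above can be sketched as a minimal `monitoring/prometheus/prometheus.yml`. This is an illustrative sketch, not the repository's actual file: the job name and scrape interval are assumptions, and only the default target from the list above is taken from the docs.

```yaml
# Minimal Prometheus config sketch (illustrative; see the repo's
# monitoring/prometheus/prometheus.yml for the real values).
global:
  scrape_interval: 15s        # assumed interval, adjust as needed

rule_files:
  - alerts.yml                # example alerts (5xx rate, slow latency)

scrape_configs:
  - job_name: twiga           # hypothetical job name
    metrics_path: /metrics
    static_configs:
      # Default target assumes the API runs on the host at port 8000
      - targets: ["host.docker.internal:8000"]
```

If Twiga runs outside Docker on the same machine as Prometheus, replace the target with something like `localhost:8000` instead.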
MIT License, Copyright 2024-present, Victor Oldensand


