KateChat is a universal chatbot platform, similar to chat.openai.com, that can be used as a base for customized chatbots. The platform supports multiple LLMs from various providers and allows switching between them on the fly within a chat session.
Experience KateChat in action with our live demo.
To interact with all supported AI models in the demo, you'll need to provide your own API keys for:
- AWS Bedrock - Access to Claude, Llama, and other models
- OpenAI - GPT-4, GPT-5, and other OpenAI models
- Yandex Foundation Models - YandexGPT and other Yandex models
📋 Note: API keys are stored locally in your browser by default and sent securely to our backend. See the Getting Started section below for detailed instructions on obtaining API keys.
- Multiple chat creation with pristine-chat functionality
- Chat history storage and management, with message editing and deletion
- Rich Markdown formatting: code blocks, images, MathJax formulas, etc.
- "Switch model"/"Call other model" logic to process current chat messages with another model
- Request cancellation to stop reasoning or web search
- Parallel calls to other models for the same assistant message, to compare results
- Image input support (drag & drop, copy-paste, etc.); images are stored on S3-compatible storage (`localstack` in the local dev environment)
- Reusable `@katechat/ui` package that includes basic chatbot controls.
- Usage examples are available in `examples`.
- Voice-to-voice demo for the OpenAI Realtime WebRTC API.
- Distributed message processing using an external queue (Redis); full-fledged production-like dev environment with docker-compose
- User authentication (email/password, Google OAuth, GitHub OAuth)
- Real-time communication with GraphQL subscriptions
- Support for various LLM providers:
- AWS Bedrock (Amazon, Anthropic, Meta, Mistral, AI21, Cohere...)
- OpenAI
  - Yandex Foundation Models (via the OpenAI-compatible protocol)
- RAG implementation with document (PDF, DOCX, TXT) parsing by Docling and vector embeddings stored in PostgreSQL/SQLite/MS SQL Server
- LLM tools (Web Search, Code Interpreter) support; a custom WebSearch tool is implemented using the Yandex Search API
- CI/CD pipeline with GitHub Actions to deploy the app to AWS
- Demo mode when no LLM providers are configured on the backend: `AWS_BEDROCK_...` or `OPENAI_API_...` settings are stored in local storage and sent to the backend as `x-aws-region`, `x-aws-access-key-id`, `x-aws-secret-access-key`, and `x-openai-api-key` headers (see the sketch after this list)
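As an illustration of the demo mode, here is a minimal sketch of how a client could forward locally stored credentials as request headers. The `localStorage` key names and the `getModels` field selection are assumptions for illustration, not the actual implementation:

```typescript
// Sketch: forward locally stored provider credentials as request headers.
// The localStorage key names below are hypothetical.
function providerHeaders(): Record<string, string> {
  const headers: Record<string, string> = {};
  const awsRegion = localStorage.getItem("AWS_BEDROCK_REGION");
  const awsKeyId = localStorage.getItem("AWS_BEDROCK_ACCESS_KEY_ID");
  const awsSecret = localStorage.getItem("AWS_BEDROCK_SECRET_ACCESS_KEY");
  const openaiKey = localStorage.getItem("OPENAI_API_KEY");

  if (awsRegion) headers["x-aws-region"] = awsRegion;
  if (awsKeyId) headers["x-aws-access-key-id"] = awsKeyId;
  if (awsSecret) headers["x-aws-secret-access-key"] = awsSecret;
  if (openaiKey) headers["x-openai-api-key"] = openaiKey;
  return headers;
}

// Example: attach the headers to a GraphQL request (field selection is illustrative).
fetch("/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json", ...providerHeaders() },
  body: JSON.stringify({ query: "{ getModels { id name } }" }),
});
```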
- Check and fix message deletion (mode: "this one and following")
- Create a ChatFiles table, move images there, calculate each image's predominant color, and store it in the DB
- Custom models support (enter an ARN for Bedrock models; endpoint/API key for OpenAI-like APIs, e.g. gpt-oss-20b)
- Finish drag & drop support to allow dropping into the chat window (@katechat/ui)
- Add voice-to-voice interaction for OpenAI realtime models, add basic controls to @katechat/ui, and extend the OpenAI protocol in the main API.
- Add custom MCP tool support
- OpenAI - MCP
- Bedrock - custom wrapper
- Switch OpenAI "gpt-image..." models to the Responses API; use an image placeholder and, instead of waiting for the response in a loop, use a new requests queue with `setTimeout` and `publishMessage` for the result
- Add support for the Google Vertex AI provider
- Add DeepSeek API support
- Rename the `document-processor` service to `tasks-processor` to perform the following tasks:
- Add a custom code interpreter tool implementation:
```python
import json

from services.code_executor import CodeExecutor

# Constants: modules the sandboxed interpreter is allowed to import
ALLOWED_MODULES = {
    'pandas', 'numpy', 'matplotlib', 'PIL', 'cv2', 'moviepy', 'json', 'csv', 'datetime', 'math',
    'openpyxl', 'scipy', 'seaborn', 'networkx', 'tiktoken', 'scikit-learn', 'plotly',
    'bokeh', 'beautifulsoup4', 'sqlalchemy', 'scapy', 'dpkt', 'pytesseract', 'python-docx', 'python-pptx',
    'manim', 'importlib-metadata', 'schemdraw'
}


def code_interpreter_handler(event, context):
    executor = CodeExecutor()
    code = event.get('code')
    input_files = event.get('input_files', [])
    chat_session_id = event.get('chat_session_id')
    available_tokens = event.get('available_tokens', 16000)

    if not code:
        return {
            'statusCode': 400,
            'body': json.dumps({'error': 'No code provided'})
        }

    # Fetch any referenced input files, then execute the code in the sandbox
    file_metadata = executor.download_input_files(input_files)
    result = executor.execute_code(code, file_metadata, chat_session_id, available_tokens)
    return result
```
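A hypothetical invocation event for this handler might look like `{"code": "print(1 + 1)", "input_files": [], "chat_session_id": "...", "available_tokens": 16000}`; the `(event, context)` signature suggests an AWS Lambda-style entry point, though that is an assumption.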
- `@katechat/ui` chatbot demo with animated UI and custom action buttons (`plugins={[Actions]}`) in chat, to call a weather report tool or fill in a form
- Add SerpApi for Web Search (new setting in UI)
- Python API (FastAPI)
- MySQL: check whether https://github.com/stephenc222/mysql_vss/ could be used for RAG
- Rust API sync: add image generation support, Library, admin API
- React with TypeScript
- Mantine UI library
- Apollo Client for GraphQL
- GraphQL code generation
- Real-time updates with GraphQL subscriptions (WebSockets)
- Node.js with TypeScript
- TypeORM for persistence
- Express.js for API server
- GraphQL with Apollo Server
- AWS Bedrock for AI model integrations
- OpenAI API for AI model integrations
- Jest for testing
The project consists of several parts:
- API - Node.js GraphQL API server. There is also an alternative backend API implementation in Rust; a Python one is planned.
- Client - Universal web interface
- Database - any TypeORM compatible RDBMS (PostgreSQL, MySQL, SQLite, etc.)
- Redis - for message queue and caching (optional, but recommended for production)
- Node.js (v20+)
- AWS Account with Bedrock access (instructions below)
- OpenAI API Account (instructions below)
- Yandex Foundation Models API key.
- Docker and Docker Compose (optional, for development environment)
- Create an AWS Account
- Visit AWS Sign-up
- Follow the instructions to create a new AWS account
- You'll need to provide a credit card and phone number for verification
- Enable AWS Bedrock Access
- Log in to the AWS Management Console
- Search for "Bedrock" in the services search bar
- Click on "Amazon Bedrock"
- Click on "Model access" in the left navigation
- Select the models you want to use (e.g., Claude, Llama 2)
- Click "Request model access" and follow the approval process
- Create an IAM User for API Access
- Go to the IAM Console
- Click "Users" in the left navigation and then "Create user"
- Enter a user name (e.g., "bedrock-api-user")
- For permissions, select "Attach policies directly"
- Search for and select "AmazonBedrockFullAccess"
- Complete the user creation process
- Generate Access Keys
- From the user details page, navigate to the "Security credentials" tab
- Under "Access keys", click "Create access key"
- Select "Command Line Interface (CLI)" as the use case
- Click through the confirmation and create the access key
- IMPORTANT: Download the CSV file or copy the "Access key ID" and "Secret access key" values immediately. You won't be able to view the secret key again.
- Configure Your Environment
- Open the `.env` file in the `api` directory
- Add your AWS credentials:
```
AWS_BEDROCK_REGION=us-east-1 # or your preferred region
AWS_BEDROCK_ACCESS_KEY_ID=your_access_key_id
AWS_BEDROCK_SECRET_ACCESS_KEY=your_secret_access_key
```
- Verify AWS Region Availability
- Not all Bedrock models are available in every AWS region
- Check the AWS Bedrock documentation for model availability by region
- Make sure to set `AWS_BEDROCK_REGION` to a region that supports your desired models
- Create an OpenAI Account
- Visit OpenAI's website
- Click "Sign Up" and create an account
- Complete the verification process
- Generate API Key
- Log in to your OpenAI account
- Navigate to the API keys page
- Click "Create new secret key"
- Name your API key (e.g., "KateChat")
- Copy the API key immediately - it won't be shown again
- Configure Your Environment
- Open the `.env` file in the `api` directory
- Add your OpenAI API key:
```
OPENAI_API_KEY=your_openai_api_key
OPENAI_API_URL=https://api.openai.com/v1 # Default OpenAI API URL
```
- Note on API Usage Costs
- OpenAI charges for API usage based on the number of tokens processed
- Different models have different pricing tiers
- Monitor your usage through the OpenAI dashboard
- Consider setting up usage limits to prevent unexpected charges
- Clone the repository
```
git clone https://github.com/artiz/kate-chat.git
cd kate-chat
```
- Set up environment variables
```
cp api/.env.example api/.env
cp api-rust/.env.example api-rust/.env
cp client/.env.example client/.env
```
Edit the `.env` files with your configuration settings.
- Start the production-like environment using Docker
Add the following to your /etc/hosts file:
```
127.0.0.1 katechat.dev.com
```
Then run the following commands:
```
export COMPOSE_BAKE=true
npm run install:all
npm run build:client
docker compose up --build
```
The app will be available at http://katechat.dev.com
To run the projects in development mode:
```
npm run install:all
docker compose up redis localstack postgres mysql mssql -d
npm run dev
```
For the document processor:
```
python -m venv document-processor/.venv
source document-processor/.venv/bin/activate
pip install -r document-processor/requirements.txt
npm run dev:document_processor
```
- Server
```
cd api-rust
diesel migration run
cargo build
cargo run
```
- Client
```
APP_API_URL=http://localhost:4001 APP_WS_URL=http://localhost:4002 npm run dev:client
```
- Create new migration
```
docker compose up redis localstack postgres mysql mssql -d
npm run migration:generate <migration name>
```
- Apply migrations (run automatically at app start, but the command can be used for testing)
```
npm run migration:run
```
NOTE: do not update more than one table definition at once; SQLite sometimes applies migrations incorrectly due to the creation of "temporary_xxx" tables.
```
npm run install:all
npm run build
```
API server:
```
docker build -t katechat-api ./ -f api/Dockerfile
docker run --env-file=./api/.env -p4000:4000 katechat-api
```
Client:
```
docker build -t katechat-client --build-arg APP_API_URL=http://localhost:4000 --build-arg APP_WS_URL=http://localhost:4000 ./ -f client/Dockerfile
docker run -p3000:80 katechat-client
```
All-in-one service:
```
docker build -t katechat-app ./ -f infrastructure/services/katechat-app/Dockerfile
docker run -it --rm --pid=host --env-file=./api/.env \
  --env PORT=80 \
  --env NODE_ENV=production \
  --env ALLOWED_ORIGINS="*" \
  --env REDIS_URL="redis://host.docker.internal:6379" \
  --env S3_ENDPOINT="http://host.docker.internal:4566" \
  --env SQS_ENDPOINT="http://host.docker.internal:4566" \
  --env DB_URL="postgres://katechat:katechat@host.docker.internal:5432/katechat" \
  --env CALLBACK_URL_BASE="http://localhost" \
  --env FRONTEND_URL="http://localhost" \
  --env DB_MIGRATIONS_PATH="./db-migrations/*-*.js" \
  -p80:80 katechat-app
```
Document processor:
```
DOCKER_BUILDKIT=1 docker build -t katechat-document-processor ./ -f infrastructure/services/katechat-document-processor/Dockerfile
docker run -it --rm --pid=host --env-file=./document-processor/.env \
  --env PORT=8080 \
  --env NODE_ENV=production \
  --env REDIS_URL="redis://host.docker.internal:6379" \
  --env S3_ENDPOINT="http://host.docker.internal:4566" \
  --env SQS_ENDPOINT="http://host.docker.internal:4566" \
  -p8080:8080 katechat-document-processor
```
The API is available at the /graphql endpoint with the following main queries/mutations:
Queries:
- `currentUser` - Get current authenticated user
- `getChats` - Get list of user's chats with pagination
- `getChatById` - Get a specific chat
- `getChatMessages` - Get messages for a specific chat
- `getModelServiceProviders` - Get list of available AI model providers
- `getModels` - Get list of available AI models
Mutations:
- `login` - Authenticate a user
- `register` - Register a new user
- `createChat` - Create a new chat
- `updateChat` - Update chat details
- `deleteChat` - Delete a chat
- `createMessage` - Send a message and generate AI response
- `deleteMessage` - Delete a message
Subscriptions:
- `newMessage` - Real-time updates for new messages in a chat
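For illustration, here is a minimal client-side subscription sketch using the `graphql-ws` package; the WebSocket URL, argument name, and selected fields are assumptions, not the exact schema:

```typescript
import { createClient } from "graphql-ws";

// Hypothetical endpoint; adjust to your deployment.
const client = createClient({ url: "ws://localhost:4000/graphql" });

client.subscribe(
  {
    // The chatId argument and field selection are illustrative.
    query: `subscription OnNewMessage($chatId: ID!) {
      newMessage(chatId: $chatId) { id content }
    }`,
    variables: { chatId: "my-chat-id" },
  },
  {
    next: (msg) => console.log("new message:", msg.data),
    error: (err) => console.error("subscription error:", err),
    complete: () => console.log("subscription closed"),
  }
);
```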
Authentication is handled via JWT tokens. When a user logs in or registers, they receive a token that must be included in the Authorization header for subsequent requests.
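For example, a minimal authenticated request could look like this (the `Bearer` scheme and field selection are assumptions for illustration):

```typescript
// Sketch: call the GraphQL API with the JWT obtained from login/register.
async function fetchCurrentUser(token: string) {
  const res = await fetch("http://localhost:4000/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // token returned by login/register
    },
    // Field selection here is illustrative, not the exact schema.
    body: JSON.stringify({ query: "{ currentUser { id email } }" }),
  });
  const { data } = await res.json();
  return data.currentUser;
}
```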
- Fork the repository
- Create your feature branch: `git checkout -b feature/my-new-feature`
- Commit your changes: `git commit -am 'Add some feature'`
- Push to the branch: `git push origin feature/my-new-feature`
- Submit a pull request
KateChat includes an admin dashboard for managing users and viewing system statistics. Admin access is controlled by the email addresses specified in the `DEFAULT_ADMIN_EMAILS` environment variable.
- User Management: View all registered users with pagination and search
- System Statistics: Monitor total users, chats, and models
- Role-based Access: Automatic admin role assignment for specified email addresses
- Set the `DEFAULT_ADMIN_EMAILS` environment variable in your `.env` file:
```
DEFAULT_ADMIN_EMAILS=admin@example.com,another-admin@example.com
```
- Users with these email addresses will automatically receive admin privileges (see the sketch after this list) upon:
- Registration
- Login (existing users)
- OAuth authentication (Google/GitHub)
- Admin users can access the dashboard at `/admin` in the web interface
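A minimal sketch of how such role assignment could work; the function and variable names are hypothetical, not the actual implementation:

```typescript
// Parse DEFAULT_ADMIN_EMAILS once at startup (hypothetical helper).
const adminEmails = new Set(
  (process.env.DEFAULT_ADMIN_EMAILS ?? "")
    .split(",")
    .map((e) => e.trim().toLowerCase())
    .filter(Boolean)
);

// Would be called on registration, login, and OAuth sign-in.
function resolveRole(email: string): "admin" | "user" {
  return adminEmails.has(email.toLowerCase()) ? "admin" : "user";
}
```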