KateChat - Universal AI Chat Interface

KateChat is a universal chat bot platform similar to chat.openai.com that can be used as a base for customized chat bots. The platform supports multiple LLM models from various providers and allows switching between them on the fly within a chat session.


🚀 Live Demo

Experience KateChat in action with our live demo:

Try KateChat Demo →

Getting Started with Demo

To interact with all supported AI models in the demo, you'll need to provide your own API keys for:

  • AWS Bedrock - Access to Claude, Llama, and other models
  • OpenAI - GPT-4, GPT-5, and other OpenAI models
  • Yandex Foundation Models - YandexGPT and other Yandex models

📋 Note: API keys are stored locally in your browser by default and sent securely to our backend. See the Getting Started section below for detailed instructions on obtaining API keys.

Features

  • Creation of multiple chats, each starting from a pristine state
  • Chat history storage and management, message editing/deletion
  • Rich markdown formatting: code blocks, images, MathJax formulas etc.
  • "Switch model"/"Call other model" logic to process current chat messages with another model
  • Request cancellation to stop reasoning or web search
  • Parallel calls to other models for the same assistant message, to compare results
  • Image input support (drag & drop, copy-paste, etc.); images are stored on S3-compatible storage (LocalStack in the local dev environment)
  • Reusable @katechat/ui package that includes basic chatbot controls.
    • Usage examples are available in examples.
    • Voice-to-voice demo for OpenAI realtime WebRTC API.
  • Distributed messages processing using external queue (Redis), full-fledged production-like dev environment with docker-compose
  • User authentication (email/password, Google OAuth, GitHub OAuth)
  • Real-time communication with GraphQL subscriptions
  • Support for various LLM providers (AWS Bedrock, OpenAI, Yandex Foundation Models)
  • RAG implementation with document (PDF, DOCX, TXT) parsing by Docling and vector embeddings stored in PostgreSQL/SQLite/MS SQL Server
  • LLM tools (Web Search, Code Interpreter) support, custom WebSearch tool implemented using Yandex Search API
  • CI/CD pipeline with GitHub Actions to deploy the app to AWS
  • Demo mode: when no LLM providers are configured on the backend, AWS_BEDROCK_... or OPENAI_API_... settings are stored in local storage and sent to the backend as "x-aws-region", "x-aws-access-key-id", "x-aws-secret-access-key", and "x-openai-api-key" headers
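
In demo mode, a client could forward its locally stored keys roughly like this. A minimal sketch: the header names come from this README, while the storage keys and the `buildDemoHeaders` helper are hypothetical.

```typescript
// Sketch: attach locally stored provider keys as request headers (demo mode).
// Header names come from this README; storage keys and helper are hypothetical.
type DemoHeaders = Record<string, string>;

function buildDemoHeaders(store: Record<string, string | undefined>): DemoHeaders {
  const mapping: Record<string, string> = {
    AWS_BEDROCK_REGION: "x-aws-region",
    AWS_BEDROCK_ACCESS_KEY_ID: "x-aws-access-key-id",
    AWS_BEDROCK_SECRET_ACCESS_KEY: "x-aws-secret-access-key",
    OPENAI_API_KEY: "x-openai-api-key",
  };
  const headers: DemoHeaders = {};
  for (const [key, header] of Object.entries(mapping)) {
    const value = store[key];
    if (value) headers[header] = value; // skip keys the user never entered
  }
  return headers;
}
```

Keys the user never entered are simply omitted, so the backend can fall back to its own configuration per provider.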

ISSUES

  • Check and fix message deletion ("this one and following" mode)

TODO

  • Create a ChatFiles table, move images there, calculate each image's predominant color and store it in the DB
  • Custom models support (enter an ARN for Bedrock models; endpoint/API key for OpenAI-like APIs, e.g. gpt-oss-20b)
  • Finish drag & drop support to allow dropping into the chat window (katechat/ui)
  • Add voice-to-voice interaction for OpenAI realtime models, put basic controls to katechat/ui and extend OpenAI protocol in main API.
  • Add custom MCP tool support
    • OpenAI - MCP
    • Bedrock - custom wrapper
  • Switch OpenAI "gpt-image..." models to the Responses API: use an image placeholder, do not wait for the response in a loop but use a new request queue with setTimeout and publishMessage with the result
  • Add support for Google Vertex AI provider
  • Add Deepseek API support
  • Rename the document-processor service to tasks-processor to perform the following tasks:
  • Add a custom code interpreter tool implementation
import json

from services.code_executor import CodeExecutor

# Modules the sandboxed interpreter is allowed to import
ALLOWED_MODULES = {
    'pandas', 'numpy', 'matplotlib', 'PIL', 'cv2', 'moviepy', 'json', 'csv', 'datetime', 'math',
    'openpyxl', 'scipy', 'seaborn', 'networkx', 'tiktoken', 'scikit-learn', 'plotly',
    'bokeh', 'beautifulsoup4', 'sqlalchemy', 'scapy', 'dpkt', 'pytesseract', 'python-docx', 'python-pptx',
    'manim', 'importlib-metadata', 'schemdraw'
}


def code_interpreter_handler(event, context):
    """Execute user-supplied code against any input files and return the result."""
    executor = CodeExecutor()

    code = event.get('code')
    input_files = event.get('input_files', [])
    chat_session_id = event.get('chat_session_id')
    available_tokens = event.get('available_tokens', 16000)

    if not code:
        return {
            'statusCode': 400,
            'body': json.dumps({'error': 'No code provided'})
        }

    file_metadata = executor.download_input_files(input_files)
    result = executor.execute_code(code, file_metadata, chat_session_id, available_tokens)

    return result
  • @katechat/ui chat bot demo with animated UI and custom action buttons (plugins={[Actions]}) in chat to call a weather report tool or fill in a form
  • Add SerpApi for Web Search (new setting in UI)
  • Python API (FastAPI)
  • MySQL: check whether https://github.com/stephenc222/mysql_vss/ could be used for RAG
  • Rust API sync: add images generation support, Library, admin API

Tech Stack

Frontend

  • React with TypeScript
  • Mantine UI library
  • Apollo Client for GraphQL
  • GraphQL code generation
  • Real-time updates with GraphQL subscriptions (WebSockets)

Backend

  • Node.js with TypeScript
  • TypeORM for persistence
  • Express.js for API server
  • GraphQL with Apollo Server
  • AWS Bedrock for AI model integrations
  • OpenAI API for AI model integrations
  • Jest for testing

Project Structure

The project consists of several parts:

  1. API - Node.js GraphQL API server. An alternative backend implementation in Rust also exists; a Python one is planned.
  2. Client - Universal web interface
  3. Database - any TypeORM compatible RDBMS (PostgreSQL, MySQL, SQLite, etc.)
  4. Redis - for message queue and caching (optional, but recommended for production)

Getting Started

Prerequisites

  • Node.js (v20+)
  • AWS Account with Bedrock access (instructions below)
  • OpenAI API Account (instructions below)
  • Yandex Foundation Models API key.
  • Docker and Docker Compose (optional, for development environment)

AWS Bedrock API keys retrieval

  1. Create an AWS Account

    • Visit AWS Sign-up
    • Follow the instructions to create a new AWS account
    • You'll need to provide a credit card and phone number for verification
  2. Enable AWS Bedrock Access

    • Log in to the AWS Management Console
    • Search for "Bedrock" in the services search bar
    • Click on "Amazon Bedrock"
    • Click on "Model access" in the left navigation
    • Select the models you want to use (e.g., Claude, Llama 2)
    • Click "Request model access" and follow the approval process
  3. Create an IAM User for API Access

    • Go to the IAM Console
    • Click "Users" in the left navigation and then "Create user"
    • Enter a user name (e.g., "bedrock-api-user")
    • For permissions, select "Attach policies directly"
    • Search for and select "AmazonBedrockFullAccess"
    • Complete the user creation process
  4. Generate Access Keys

    • From the user details page, navigate to the "Security credentials" tab
    • Under "Access keys", click "Create access key"
    • Select "Command Line Interface (CLI)" as the use case
    • Click through the confirmation and create the access key
    • IMPORTANT: Download the CSV file or copy the "Access key ID" and "Secret access key" values immediately. You won't be able to view the secret key again.
  5. Configure Your Environment

    • Open the .env file in the api directory
    • Add your AWS credentials:
      AWS_BEDROCK_REGION=us-east-1  # or your preferred region
      AWS_BEDROCK_ACCESS_KEY_ID=your_access_key_id
      AWS_BEDROCK_SECRET_ACCESS_KEY=your_secret_access_key
  6. Verify AWS Region Availability

    • Not all Bedrock models are available in every AWS region
    • Check the AWS Bedrock documentation for model availability by region
    • Make sure to set the AWS_BEDROCK_REGION to a region that supports your desired models
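
As a quick sanity check after step 5, a start-up script might verify that all three Bedrock variables are present before the API boots. A minimal sketch: the variable names come from this README, but the `missingBedrockVars` helper is illustrative, not the backend's actual code.

```typescript
// Sketch: verify the Bedrock .env settings from step 5 are present at start-up.
// Variable names come from this README; the helper itself is illustrative.
const REQUIRED_BEDROCK_VARS = [
  "AWS_BEDROCK_REGION",
  "AWS_BEDROCK_ACCESS_KEY_ID",
  "AWS_BEDROCK_SECRET_ACCESS_KEY",
];

function missingBedrockVars(env: Record<string, string | undefined>): string[] {
  // Return the names of required variables that are unset or empty
  return REQUIRED_BEDROCK_VARS.filter((name) => !env[name]);
}

// Usage:
//   const missing = missingBedrockVars(process.env);
//   if (missing.length) console.warn(`Bedrock disabled, missing: ${missing.join(", ")}`);
```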

OpenAI API keys retrieval

  1. Create an OpenAI Account

    • Visit OpenAI's website
    • Click "Sign Up" and create an account
    • Complete the verification process
  2. Generate API Key

    • Log in to your OpenAI account
    • Navigate to the API keys page
    • Click "Create new secret key"
    • Name your API key (e.g., "KateChat")
    • Copy the API key immediately - it won't be shown again
  3. Configure Your Environment

    • Open the .env file in the api directory
    • Add your OpenAI API key:
      OPENAI_API_KEY=your_openai_api_key
      OPENAI_API_URL=https://api.openai.com/v1  # Default OpenAI API URL
  4. Note on API Usage Costs

    • OpenAI charges for API usage based on the number of tokens processed
    • Different models have different pricing tiers
    • Monitor your usage through the OpenAI dashboard
    • Consider setting up usage limits to prevent unexpected charges
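
To confirm the key works against the configured base URL, you can build a request to the standard chat-completions endpoint. A hedged sketch: the request shape follows the public OpenAI REST API, the model name is only an example, and the helper is illustrative.

```typescript
// Sketch: build a chat-completions request against the configured OPENAI_API_URL.
// The request shape follows the public OpenAI REST API; the model is an example.
function buildOpenAIRequest(apiUrl: string, apiKey: string, prompt: string) {
  return {
    url: `${apiUrl}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // example model
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage:
//   const { url, init } = buildOpenAIRequest(apiUrl, apiKey, "ping");
//   const response = await fetch(url, init);
```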

Installation

  1. Clone the repository
git clone https://github.com/artiz/kate-chat.git
cd kate-chat
  2. Set up environment variables
cp api/.env.example api/.env
cp api-rust/.env.example api-rust/.env
cp client/.env.example client/.env

Edit the .env files with your configuration settings.

  3. Start the production-like environment using Docker

Add the following to your /etc/hosts file:

127.0.0.1       katechat.dev.com

Then run the following commands:

export COMPOSE_BAKE=true
npm run install:all
npm run build:client
docker compose up --build

The app will be available at http://katechat.dev.com

Development Mode

To run the projects in development mode:

Default Node.js API/Client

npm run install:all
docker compose up redis localstack postgres mysql mssql -d
npm run dev

Documents processor (Python)

python -m venv document-processor/.venv
source document-processor/.venv/bin/activate
pip install -r document-processor/requirements.txt
npm run dev:document_processor

Rust API (experiment)

  1. Server
cd api-rust
diesel migration run
cargo build
cargo run
  2. Client
APP_API_URL=http://localhost:4001  APP_WS_URL=http://localhost:4002 npm run dev:client

API DB Migrations

  • Create new migration
docker compose up redis localstack postgres mysql mssql -d
npm run migration:generate <migration name>
  • Apply migrations (automated at app start but could be used to test)
npm run migration:run

NOTE: do not update more than one table definition per migration; SQLite sometimes applies migrations incorrectly due to the creation of intermediate "temporary_xxx" tables.

Production Build

npm run install:all
npm run build

Docker Build

docker build -t katechat-api ./ -f api/Dockerfile  
docker run --env-file=./api/.env  -p4000:4000 katechat-api 
docker build -t katechat-client --build-arg APP_API_URL=http://localhost:4000 --build-arg APP_WS_URL=http://localhost:4000 ./ -f client/Dockerfile  
docker run -p3000:80 katechat-client

All-in-one service

docker build -t katechat-app ./ -f infrastructure/services/katechat-app/Dockerfile

docker run -it --rm --pid=host --env-file=./api/.env \
 --env PORT=80 \
 --env NODE_ENV=production \
 --env ALLOWED_ORIGINS="*" \
 --env REDIS_URL="redis://host.docker.internal:6379" \
 --env S3_ENDPOINT="http://host.docker.internal:4566" \
 --env SQS_ENDPOINT="http://host.docker.internal:4566" \
 --env DB_URL="postgres://katechat:katechat@host.docker.internal:5432/katechat" \
 --env CALLBACK_URL_BASE="http://localhost" \
 --env FRONTEND_URL="http://localhost" \
 --env DB_MIGRATIONS_PATH="./db-migrations/*-*.js" \
 -p80:80 katechat-app

Document processor

DOCKER_BUILDKIT=1 docker build -t katechat-document-processor ./ -f infrastructure/services/katechat-document-processor/Dockerfile

docker run -it --rm --pid=host --env-file=./document-processor/.env \
 --env PORT=8080 \
 --env NODE_ENV=production \
 --env REDIS_URL="redis://host.docker.internal:6379" \
 --env S3_ENDPOINT="http://host.docker.internal:4566" \
 --env SQS_ENDPOINT="http://host.docker.internal:4566" \
 -p8080:8080 katechat-document-processor

API Documentation

GraphQL API

Available at the /graphql endpoint with the following main queries/mutations:

Queries

  • currentUser - Get current authenticated user
  • getChats - Get list of user's chats with pagination
  • getChatById - Get a specific chat
  • getChatMessages - Get messages for a specific chat
  • getModelServiceProviders - Get list of available AI model providers
  • getModels - Get list of available AI models

Mutations

  • login - Authenticate a user
  • register - Register a new user
  • createChat - Create a new chat
  • updateChat - Update chat details
  • deleteChat - Delete a chat
  • createMessage - Send a message and generate AI response
  • deleteMessage - Delete a message

Subscriptions

  • newMessage - Real-time updates for new messages in a chat
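
Besides Apollo Client, any HTTP client can call the queries and mutations above with a plain POST to /graphql. A minimal sketch: the `getChats` operation name matches this README, but the selected fields and the `$limit` variable are hypothetical, not the actual schema.

```typescript
// Sketch: wrap a GraphQL operation and its variables into a POST body for /graphql.
function buildGraphQLBody(
  query: string,
  variables: Record<string, unknown> = {}
): string {
  return JSON.stringify({ query, variables });
}

// Operation name matches this README; selected fields are hypothetical.
const GET_CHATS = `
  query GetChats($limit: Int) {
    getChats(limit: $limit) { id title }
  }
`;

// Usage:
//   await fetch("/graphql", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: buildGraphQLBody(GET_CHATS, { limit: 20 }),
//   });
```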

Authentication

Authentication is handled via JWT tokens. When a user logs in or registers, they receive a token that must be included in the Authorization header for subsequent requests.
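
In practice, a client helper can merge the stored token into each request's headers. A sketch; the `withAuth` helper is illustrative, the `Bearer` scheme follows the description above.

```typescript
// Sketch: merge the JWT received at login/register into outgoing request headers.
function withAuth(
  token: string,
  headers: Record<string, string> = {}
): Record<string, string> {
  return { ...headers, Authorization: `Bearer ${token}` };
}

// Usage:
//   fetch("/graphql", {
//     method: "POST",
//     headers: withAuth(token, { "Content-Type": "application/json" }),
//     body,
//   });
```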

Screenshots

Rich Formatting

Images Generation

Call Other Model

RAG (Retrieval-Augmented Generation)

Contributing

  1. Fork the repository
  2. Create your feature branch: git checkout -b feature/my-new-feature
  3. Commit your changes: git commit -am 'Add some feature'
  4. Push to the branch: git push origin feature/my-new-feature
  5. Submit a pull request

Admin Dashboard

KateChat includes an admin dashboard for managing users and viewing system statistics. Admin access is controlled by email addresses specified in the DEFAULT_ADMIN_EMAILS environment variable.

Admin Features

  • User Management: View all registered users with pagination and search
  • System Statistics: Monitor total users, chats, and models
  • Role-based Access: Automatic admin role assignment for specified email addresses

Configuring Admin Access

  1. Set the DEFAULT_ADMIN_EMAILS environment variable in your .env file:

    DEFAULT_ADMIN_EMAILS=admin@example.com,another-admin@example.com
  2. Users with these email addresses will automatically receive admin privileges upon:

    • Registration
    • Login (existing users)
    • OAuth authentication (Google/GitHub)
  3. Admin users can access the dashboard at /admin in the web interface
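
The role assignment described above boils down to a case-insensitive membership check against the comma-separated list. A sketch; the variable format comes from this README, while the `isAdminEmail` helper is illustrative rather than the backend's actual code.

```typescript
// Sketch: decide whether an email should receive the admin role, based on the
// comma-separated DEFAULT_ADMIN_EMAILS value. Helper is illustrative.
function isAdminEmail(email: string, defaultAdminEmails?: string): boolean {
  if (!defaultAdminEmails) return false; // no admins configured
  return defaultAdminEmails
    .split(",")
    .map((e) => e.trim().toLowerCase())
    .includes(email.trim().toLowerCase());
}
```

Trimming and lower-casing both sides keeps the check tolerant of whitespace in the .env value and of mixed-case addresses from OAuth providers.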
