💬 Feature: Real-Time AI Chat via Socket.IO #15

@abhishek-nexgen-dev

Description

This feature adds real-time chat support using Socket.IO. It allows users to send messages to different AI models and receive complete AI-generated responses in real time.

✅ Key Features

  • ✅ Full responses delivered in a single message (no token-by-token streaming)
  • ✅ Easy to integrate with any frontend
  • ✅ Works with local models (Ollama) and cloud models (Groq, OpenAI, etc.)

✅ Supported AI Providers

| Provider  | Type   | Notes                                   |
|-----------|--------|-----------------------------------------|
| Ollama    | Local  | Uses installed models (e.g., `llama3`)  |
| Groq      | Cloud  | Requires user API key                   |
| OpenAI    | Cloud  | Supports GPT-3.5, GPT-4                 |
| Anthropic | Cloud  | Supports Claude models                  |
| Other     | Custom | Can be extended with custom logic       |

βš™οΈ How It Works

πŸ“₯ userMessage Event (Client ➑️ Server)

The frontend sends a message to the AI model via socket:

{
  "model": {
    "provider": "ollama",         // or "groq", "openai", "custom"
    "name": "llama3",             // model name
    "isPaid": false               // optional
  },
  "messages": [
    { "role": "user", "content": "Tell me a fun fact about space." }
  ],
  "apiKey": "user-api-key-or-local-key"
}
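Emitting this event from the client can be sketched as follows. Only the event name and payload shape come from this spec; the helper `buildUserMessage`, the server URL, and the example API key are illustrative assumptions.

```javascript
// Build the userMessage payload described above.
// buildUserMessage is a hypothetical convenience helper, not part of the API.
function buildUserMessage(provider, name, content, apiKey) {
  return {
    model: { provider, name, isPaid: false }, // isPaid is optional
    messages: [{ role: "user", content }],
    apiKey,
  };
}

// With socket.io-client (assumed transport; URL is an example):
//   import { io } from "socket.io-client";
//   const socket = io("http://localhost:3000");
//   socket.emit(
//     "userMessage",
//     buildUserMessage("ollama", "llama3", "Tell me a fun fact about space.", "local-key")
//   );
```

Keeping payload construction in a small helper makes it easy to validate or extend (e.g., multi-turn `messages` arrays) before emitting.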

📤 aiResponse Event (Server ➡️ Client)

The server responds with the full AI-generated reply:

{
  "model": "llama3",
  "response": "Space smells like seared steak, according to astronauts!",
  "timestamp": "2025-10-12T14:00:00Z"
}
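A client-side listener for this event might look like the sketch below. The `"[model] response"` display convention and the `formatAiResponse` helper are assumptions; the event name and fields are from the payload above.

```javascript
// Format an aiResponse payload for display.
// The "[model] text (timestamp)" layout is just an example convention.
function formatAiResponse(payload) {
  return `[${payload.model}] ${payload.response} (${payload.timestamp})`;
}

// With socket.io-client (assumed):
//   socket.on("aiResponse", (payload) => {
//     console.log(formatAiResponse(payload));
//   });
```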

🚫 error Event (Server ➡️ Client)

If something goes wrong:

{
  "error": true,
  "message": "Invalid API key or model not found."
}
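Since success and failure arrive on different events, a client can route them with a small shape guard. The helper name `isErrorPayload` is hypothetical; the `error: true` flag matches the payload above.

```javascript
// Returns true when a payload matches the error event's shape above.
function isErrorPayload(payload) {
  return (
    payload !== null &&
    typeof payload === "object" &&
    payload.error === true &&
    typeof payload.message === "string"
  );
}

// With socket.io-client (assumed):
//   socket.on("error", (payload) => {
//     if (isErrorPayload(payload)) console.error(payload.message);
//   });
```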
