✨ Feat: AI Model Configuration Management (Create, Update, Get) #13

@abhishek-nexgen-dev

Description

Create backend API routes for authenticated users to:

  • Create, read, and update their AI model settings.
  • Support multiple AI providers and models.
  • Store API keys encrypted for security.
  • Mark each model as paid or free.
  • Set a system-wide AI assistant prompt.

Details

  • Users can save their AI providers, models, and encrypted API keys.
  • Users can retrieve their current configuration; API keys are returned only in encrypted form, and only when explicitly requested.
  • Users can update their settings anytime.
  • API keys are encrypted and never exposed in plain text.
  • Only logged-in users can access their own data.
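The issue does not pin down an encryption scheme. As one possible sketch — assuming AES-256-GCM via Node's built-in `crypto` module, with a 32-byte key supplied by whatever secret management the deployment uses — the encrypt/decrypt helpers behind `apiKeyEncrypted` might look like this (the same logic applies whether it runs client-side, as the update section suggests, or server-side):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt an API key with AES-256-GCM. The 12-byte IV and 16-byte auth tag
// are prepended to the ciphertext so the output is self-contained.
function encryptApiKey(plainKey: string, key: Buffer): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plainKey, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

function decryptApiKey(encoded: string, key: Buffer): string {
  const raw = Buffer.from(encoded, "base64");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const ciphertext = raw.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

GCM is a reasonable default here because it authenticates the ciphertext, so a tampered stored key fails loudly at decryption time instead of silently producing garbage.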

🔹 1. Create AI Model Configuration

Create a secure API endpoint that allows a logged-in user to save their AI model configuration.

Endpoint

POST /api/v1/create/ai-model-config

Request Body

{
  "models": [
    {
      "provider": "OpenAI",
      "type": "chat",
      "model": "gpt-4",
      "apiKeyEncrypted": "<encrypted-api-key-openai>",
      "isPaid": true
    },
    {
      "provider": "Anthropic",
      "type": "chat",
      "model": "claude-2",
      "apiKeyEncrypted": "<encrypted-api-key-anthropic>",
      "isPaid": true
    },
    {
      "provider": "Ollama",
      "type": "chat",
      "model": "llama3",
      "apiKeyEncrypted": null,
      "isPaid": false
    }
  ],
  "system_prompt": "You are a helpful AI assistant."
}
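Since the route accepts user-supplied JSON, the handler should validate the body before persisting it. A minimal runtime check matching the shape above (a real implementation might use a schema library instead; the names `ModelEntry` and `AiModelConfig` are illustrative, not from the issue):

```typescript
// Shape of the POST /api/v1/create/ai-model-config body described above.
interface ModelEntry {
  provider: string;
  type: string;
  model: string;
  apiKeyEncrypted: string | null; // null for local providers such as Ollama
  isPaid: boolean;
}

interface AiModelConfig {
  models: ModelEntry[];
  system_prompt: string;
}

// Minimal runtime validation of an untrusted request body.
function validateConfigBody(body: unknown): body is AiModelConfig {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  if (typeof b.system_prompt !== "string") return false;
  const models = b.models;
  if (!Array.isArray(models)) return false;
  return models.every((m: any) =>
    typeof m?.provider === "string" &&
    typeof m?.type === "string" &&
    typeof m?.model === "string" &&
    (typeof m?.apiKeyEncrypted === "string" || m?.apiKeyEncrypted === null) &&
    typeof m?.isPaid === "boolean"
  );
}
```

The route would reject anything failing this check with a 400 before touching the database.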

🔹 2. Get AI Model Configuration

Implement a secure API route that allows an authenticated user to retrieve their saved AI model configuration.

Endpoint

GET  /api/v1/get/ai-model-config

This configuration includes:

  • A list of AI providers and their selected models
  • An isPaid flag for each model (to manage billing or usage restrictions)
  • A customizable system_prompt used to influence AI assistant behavior

⚠️ API keys must never be returned in plaintext.

🧾 Example Response

✅ Standard Response (No Encrypted Keys Returned)

{
  "models": [
    {
      "provider": "OpenAI",
      "type": "chat",
      "model": "gpt-4",
      "isPaid": true
    },
    {
      "provider": "Anthropic",
      "type": "chat",
      "model": "claude-2",
      "isPaid": true
    },
    {
      "provider": "Ollama",
      "type": "chat",
      "model": "llama3",
      "isPaid": false
    }
  ],
  "system_prompt": "You are a helpful AI assistant."
}
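Producing the standard response is just a matter of stripping `apiKeyEncrypted` from each model before serialization. A small sketch (the function name is illustrative):

```typescript
// Remove apiKeyEncrypted from every model before sending the standard
// response, so keys never leave the server unless explicitly requested.
function toPublicConfig(config: {
  models: Array<{ apiKeyEncrypted?: string | null; [k: string]: unknown }>;
  system_prompt: string;
}) {
  return {
    models: config.models.map(({ apiKeyEncrypted, ...rest }) => rest),
    system_prompt: config.system_prompt,
  };
}
```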

🔐 Optional: Response Including Encrypted Keys (if needed by the client)

{
  "models": [
    {
      "provider": "OpenAI",
      "type": "chat",
      "model": "gpt-4",
      "isPaid": true,
      "apiKeyEncrypted": "<encrypted-string>"
    }
  ],
  "system_prompt": "You are a helpful AI assistant."
}
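One way to support both response shapes is a single Express-style handler with an opt-in flag (here a hypothetical `?includeKeys=true` query parameter; `req.user` is assumed to be populated by auth middleware, and `store` stands in for the real data layer):

```typescript
type Handler = (req: any, res: any) => void;

// GET /api/v1/get/ai-model-config handler sketch. Encrypted keys are
// included only when the caller opts in; they are ciphertext either way.
function makeGetConfigHandler(store: Map<string, any>): Handler {
  return (req, res) => {
    if (!req.user) return res.status(401).json({ error: "Unauthorized" });
    const config = store.get(req.user.id);
    if (!config) return res.status(404).json({ error: "No configuration found" });
    const includeKeys = req.query?.includeKeys === "true";
    const models = config.models.map((m: any) =>
      includeKeys ? m : (({ apiKeyEncrypted, ...rest }: any) => rest)(m)
    );
    res.json({ models, system_prompt: config.system_prompt });
  };
}
```

Defaulting to the stripped shape keeps the safer behavior as the path of least resistance for clients.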

🔹 3. Update AI Model Configuration

Endpoint

PUT  /api/v1/update/ai-model-config

Provide an API endpoint that allows a logged-in user to update their existing AI model configuration. This route enables users to modify:

  • AI provider or model details
  • API keys (must be encrypted)
  • Paid/free flags
  • The system-wide AI assistant prompt

This ensures the configuration remains flexible, secure, and personalized to the user’s preferences.


🧾 Request Body

The structure is the same as for the POST route:

{
  "models": [
    {
      "provider": "OpenAI",
      "type": "chat",
      "model": "gpt-4",
      "apiKeyEncrypted": "<new-or-existing-encrypted-key>",
      "isPaid": true
    },
    {
      "provider": "Ollama",
      "type": "chat",
      "model": "llama3",
      "apiKeyEncrypted": null,
      "isPaid": false
    }
  ],
  "system_prompt": "You are a helpful AI assistant."
}

⚠️ API keys must already be encrypted client-side before sending.
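Since the PUT body mirrors the POST body, the update step can replace the stored configuration wholesale. A sketch of that step, assuming a hypothetical per-user store and treating an update to a nonexistent configuration as an error (whether that should instead create one is a design decision the issue leaves open):

```typescript
// Replace the user's stored AI model configuration (full-replace semantics,
// matching the PUT body being identical to the POST body).
function applyConfigUpdate(
  store: Map<string, { models: unknown[]; system_prompt: string }>,
  userId: string,
  body: { models: unknown[]; system_prompt: string }
) {
  if (!store.has(userId)) throw new Error("No existing configuration to update");
  store.set(userId, { models: body.models, system_prompt: body.system_prompt });
  return store.get(userId)!;
}
```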
