AI-Powered Backend Service

This repository provides the backend for AI and LLM models, integrating Pinecone for vector database search and OpenAI's GPT-3.5-turbo for generating conversational responses. The service is built with NestJS and is designed to handle real-time user queries securely and efficiently.
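
For illustration, a website frontend might call the service along these lines. The /chat route, request and response shape, x-api-key header name, and port 3000 are assumptions made for this sketch, not the repository's documented API.

    // Hypothetical client call: route, payload shape, and header name are illustrative assumptions.
    async function askChatbot(question: string): Promise<string> {
      const response = await fetch('http://localhost:3000/chat', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'x-api-key': '<your-backend-api-key>', // the API_KEY configured on the backend
        },
        body: JSON.stringify({ question }),
      });
      if (!response.ok) {
        throw new Error(`Chatbot request failed with status ${response.status}`);
      }
      const data = await response.json();
      return data.answer; // assumed response field
    }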

Project Overview

Check out the video demo of the project:

Watch the video

Key Features

  • AI-driven responses: Leverages OpenAI's GPT-3.5-turbo to generate relevant responses based on user queries.
  • Pinecone Vector Database: Retrieves relevant documents from Pinecone to provide context for AI responses.
  • API Key-based Authentication: Protects the API from unauthorized access using an API key (see the sketch after this list).
  • Rate Limiting: Applies rate limiting to prevent abuse and ensure service availability.
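
The last two features could be implemented along these lines in NestJS. This is a minimal sketch, assuming the key arrives in an x-api-key header and that rate limiting uses @nestjs/throttler (v5+ configuration); the repository's actual guard and limits may differ.

    // api-key.guard.ts (illustrative): reject requests whose key does not match API_KEY
    import { CanActivate, ExecutionContext, Injectable, UnauthorizedException } from '@nestjs/common';

    @Injectable()
    export class ApiKeyGuard implements CanActivate {
      canActivate(context: ExecutionContext): boolean {
        const request = context.switchToHttp().getRequest();
        const apiKey = request.headers['x-api-key']; // header name is an assumption
        if (!apiKey || apiKey !== process.env.API_KEY) {
          throw new UnauthorizedException('Invalid API key');
        }
        return true;
      }
    }

    // app.module.ts (illustrative): allow at most 10 requests per minute per client
    import { Module } from '@nestjs/common';
    import { APP_GUARD } from '@nestjs/core';
    import { ThrottlerModule, ThrottlerGuard } from '@nestjs/throttler';

    @Module({
      imports: [ThrottlerModule.forRoot([{ ttl: 60_000, limit: 10 }])],
      providers: [{ provide: APP_GUARD, useClass: ThrottlerGuard }],
    })
    export class AppModule {}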

Project Purpose

The main purpose of this project is to provide backend functionality for AI and LLM models, such as a chatbot that can be embedded on a personal or business website. Users can ask questions, and the chatbot retrieves relevant information from a Pinecone vector database before returning a meaningful response generated by OpenAI's GPT-3.5-turbo model.

The project includes API key authentication and rate-limiting mechanisms to secure the service from malicious use and abuse while maintaining a smooth user experience.
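
To make that flow concrete, the sketch below wires a retrieve-then-generate pipeline with the official OpenAI and Pinecone Node.js clients, using the environment variables listed under Setup Instructions. It is a simplified illustration rather than the service's actual code; the embedding model, topK value, metadata field name, and function names are assumptions.

    // chat.service.ts (illustrative): retrieve context from Pinecone, then generate with GPT-3.5-turbo
    import OpenAI from 'openai';
    import { Pinecone } from '@pinecone-database/pinecone';

    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
    const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
    const index = pinecone.index(process.env.PINECONE_INDEX_NAME!);

    export async function answerQuestion(question: string): Promise<string> {
      // 1. Embed the user query (embedding model is an assumption).
      const embedded = await openai.embeddings.create({
        model: 'text-embedding-ada-002',
        input: question,
      });

      // 2. Retrieve the most relevant documents from Pinecone to use as context.
      const results = await index.query({
        vector: embedded.data[0].embedding,
        topK: 3,
        includeMetadata: true,
      });
      const context = results.matches
        .map((match) => String(match.metadata?.text ?? '')) // 'text' metadata field is an assumption
        .join('\n');

      // 3. Ask GPT-3.5-turbo to answer using the retrieved context.
      const completion = await openai.chat.completions.create({
        model: 'gpt-3.5-turbo',
        messages: [
          { role: 'system', content: `Answer the question using this context:\n${context}` },
          { role: 'user', content: question },
        ],
      });
      return completion.choices[0].message.content ?? '';
    }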


Setup Instructions

Prerequisites

To run this project locally, ensure you have the following installed:

  • Node.js (v14.x or later)
  • npm (v6.x or later) or yarn
  • A Pinecone account (with an API key)
  • An OpenAI account (with an API key)

Installation

  1. Clone the repository:

    git clone https://github.com/WalberMelo/ai-api-models.git
    cd ai-api-models
  2. Set up environment variables (typically in a .env file at the project root):

    PINECONE_API_KEY=<your-pinecone-api-key>
    PINECONE_INDEX_NAME=<your-pinecone-index-name>
    OPENAI_API_KEY=<your-openai-api-key>
    API_KEY=<your-backend-api-key>
    DOMAIN_ORIGIN=http://localhost:5174 # or your frontend URL
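
  3. Install dependencies and start the server. Assuming the standard NestJS scripts, this is typically npm install followed by npm run start:dev.

Inside the application, DOMAIN_ORIGIN is presumably the allowed CORS origin for the embedding frontend. A minimal bootstrap under that assumption might look like the sketch below; it is not necessarily this repository's exact main.ts, and port 3000 is simply the NestJS default rather than a documented value.

    // main.ts (illustrative): restrict CORS to the configured frontend origin
    import { NestFactory } from '@nestjs/core';
    import { AppModule } from './app.module';

    async function bootstrap() {
      const app = await NestFactory.create(AppModule);
      app.enableCors({ origin: process.env.DOMAIN_ORIGIN }); // only allow the embedding site
      await app.listen(3000); // port is an assumption
    }
    bootstrap();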
    
    

Author

🟣 Walber Melo

License

This project is licensed under the MIT License - see the LICENSE file for details.
