This repository provides the backend for an AI/LLM-powered chatbot service. It integrates Pinecone for vector database search and OpenAI's GPT-3.5-turbo for generating conversational responses. The service is built with NestJS and designed to handle real-time user queries securely and efficiently.
Check out the video demo of the project:
- AI-driven responses: Leverages OpenAI's GPT-3.5-turbo to generate relevant responses based on user queries.
- Pinecone Vector Database: Retrieves relevant documents from Pinecone to provide context for AI responses.
- API Key-based Authentication: Protects the API from unauthorized access using an API key.
- Rate Limiting: Implements rate-limiting to prevent abuse and ensure service availability.
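As a rough illustration of the API-key check, the core logic might look like the sketch below. The header name `x-api-key` and the function name are assumptions, not the repo's actual code; in a NestJS service this logic would typically live inside a `CanActivate` guard, with rate limiting commonly handled by a package such as `@nestjs/throttler`.

```typescript
// Hypothetical sketch of the API-key check (header name is an assumption).
// In the NestJS service this would normally sit inside a CanActivate guard.
export function isAuthorized(
  headers: Record<string, string | undefined>,
  expectedKey: string
): boolean {
  const provided = headers["x-api-key"];
  // Reject missing or mismatched keys; a constant-time comparison
  // is preferable in production to avoid timing side channels.
  return provided !== undefined && provided === expectedKey;
}
```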
The main purpose of this project is to provide the backend for AI and LLM applications such as a chatbot that can be embedded on a personal or business website. Users ask questions, and the chatbot retrieves relevant information from a Pinecone vector database before returning a meaningful response generated by OpenAI's GPT-3.5-turbo model.
The project includes API key authentication and rate-limiting mechanisms to secure the service from malicious use and abuse while maintaining a smooth user experience.
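The retrieve-then-generate flow described above can be sketched as a small orchestration function. The names (`answerQuery`, `Embed`, `Search`, `Generate`) and the prompt format are illustrative assumptions; in the real service the stubs would be wired to the embedding call, the Pinecone query, and the GPT-3.5-turbo completion respectively.

```typescript
// Illustrative types for the three pluggable steps (assumed names).
type Embed = (text: string) => Promise<number[]>;      // user query -> embedding vector
type Search = (vector: number[]) => Promise<string[]>; // vector -> matched document texts
type Generate = (prompt: string) => Promise<string>;   // prompt -> model response

// Hypothetical sketch of the retrieve-then-generate flow.
export async function answerQuery(
  query: string,
  embed: Embed,
  search: Search,
  generate: Generate
): Promise<string> {
  const vector = await embed(query);        // 1. embed the user's question
  const docs = await search(vector);        // 2. retrieve context from Pinecone
  // 3. ask the model, grounding it in the retrieved documents
  const prompt = `Context:\n${docs.join("\n")}\n\nQuestion: ${query}`;
  return generate(prompt);
}
```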
To run this project locally, ensure you have the following installed:
- Node.js (v14.x or later)
- npm (v6.x or later) or yarn
- A Pinecone account (with an API key)
- An OpenAI account (with an API key)
Clone the repository:

```bash
git clone https://github.com/your-username/chatbot-backend.git
cd chatbot-backend
```
Set up environment variables:
```env
PINECONE_API_KEY=<your-pinecone-api-key>
PINECONE_INDEX_NAME=<your-pinecone-index-name>
OPENAI_API_KEY=<your-openai-api-key>
API_KEY=<your-backend-api-key>
DOMAIN_ORIGIN=http://localhost:5174 # or your frontend URL
```
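The service needs all of these variables at startup; a minimal sketch of a fail-fast check (the helper name `requireEnv` is an assumption, not the repo's actual code) could look like:

```typescript
// Hypothetical startup check: fail fast when a required variable is missing.
export function requireEnv(
  name: string,
  env: Record<string, string | undefined>
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at bootstrap, e.g.:
//   const pineconeKey = requireEnv("PINECONE_API_KEY", process.env);
```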
🟣 Walber Melo
This project is licensed under the MIT License - see the LICENSE file for details.