## Create a virtual environment and install the requirements:

virtualenv venv
source venv/bin/activate
pip install -r requirements.txt

## Add your Pinecone credentials (for example in a `.env` file in the project root):

PINECONE_API_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
PINECONE_API_ENV = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
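
At runtime the code presumably reads these values from the environment; a minimal sketch of loading them with python-dotenv (the python-dotenv dependency and the `.env` location are assumptions, not confirmed by this README):

```python
# Minimal sketch: load the Pinecone credentials at runtime.
# Assumes python-dotenv is installed and the values live in a .env file
# in the project root.
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current working directory

PINECONE_API_KEY = os.environ.get("PINECONE_API_KEY")
PINECONE_API_ENV = os.environ.get("PINECONE_API_ENV")
```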
Download the quantized model from the link provided in the model folder and keep it in the model directory:

## Download the Llama 2 Model:
llama-2-7b-chat.ggmlv3.q4_0.bin
## From the following link:
https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/tree/main
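
If you prefer to script the download, something like the following should fetch the file into the `model/` directory (the `huggingface_hub` dependency is an assumption; downloading the file manually from the link above works just as well):

```python
# Sketch: fetch the quantized GGML file into the local model/ directory.
# Assumes the huggingface_hub package is installed.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGML",
    filename="llama-2-7b-chat.ggmlv3.q4_0.bin",
    local_dir="model",
)
print(model_path)  # local path to the downloaded model file
```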
## Run the following command to build the index:

python store_index.py
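
store_index.py is the indexing step: it loads the source documents, splits them into chunks, embeds them, and upserts the vectors into Pinecone. Below is a rough sketch of that flow, assuming the classic langchain 0.0.x and pinecone-client v2 APIs that the PINECONE_API_ENV variable implies; the data directory, embedding model, and index name are placeholders, not the repository's exact code:

```python
# Rough sketch of an indexing script (not the repository's exact store_index.py).
# Assumes langchain 0.0.x, pinecone-client v2, pypdf, and sentence-transformers.
import os

import pinecone
from dotenv import load_dotenv
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Pinecone

load_dotenv()

# Load and chunk the source PDFs (the data/ folder is an assumption).
loader = DirectoryLoader("data/", glob="*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20)
chunks = splitter.split_documents(documents)

# Embed the chunks and upsert them into Pinecone.
# The index ("llama2-chatbot" here) is assumed to already exist in your project.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_API_ENV"],
)
Pinecone.from_documents(chunks, embeddings, index_name="llama2-chatbot")
```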
## Finally, run the following command to start the app:

python app.py
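
app.py is the Flask front end: it reconnects to the Pinecone index, loads the quantized Llama 2 model, and serves a chat route. A hedged sketch along those lines (same langchain 0.0.x / pinecone-client v2 assumptions as above; the route, port, embedding model, and index name are illustrative, not the repository's exact code):

```python
# Rough sketch of the Flask app (not the repository's exact app.py).
# Assumes langchain 0.0.x, pinecone-client v2, ctransformers, and flask.
import os

import pinecone
from dotenv import load_dotenv
from flask import Flask, request
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import CTransformers
from langchain.vectorstores import Pinecone

load_dotenv()
app = Flask(__name__)

# Reconnect to the index built by store_index.py (index name is an assumption).
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_API_ENV"],
)
docsearch = Pinecone.from_existing_index("llama2-chatbot", embeddings)

# Load the quantized GGML model downloaded into the model directory.
llm = CTransformers(
    model="model/llama-2-7b-chat.ggmlv3.q4_0.bin",
    model_type="llama",
    config={"max_new_tokens": 512, "temperature": 0.7},
)
qa = RetrievalQA.from_chain_type(llm=llm, retriever=docsearch.as_retriever())

@app.route("/get", methods=["POST"])
def chat():
    # Answer the user's question using retrieval-augmented generation.
    question = request.form["msg"]
    return qa.run(question)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```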
Now, open up localhost in your browser.

## Tech stack used:

- Python
- LangChain
- Flask
- Meta Llama2
- Pinecone
