An assistant application for the visually challenged
This project aims to build software that helps people with partial visual impairment navigate their surroundings. This is a simple, basic version of the application.
How to run the program on your local device?
- Download the code files from this repository as a zip file.
- Install the requirements from requirements.txt. Note that requirements.txt lists several additional modules that are not needed for the application in its present state, but they will come in handy with future updates.
- Download the Llama model from Hugging Face. For instructions, refer to the following article -> https://medium.com/@lucnguyen_61589/llama-2-using-huggingface-part-1-3a29fdbaa9ed
- Download the YOLO .pt file for your choice of weights. The current version of this application uses yolov8n.
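The steps above might look like the following on a typical setup. This is a sketch, not an exact script: `<repo-url>` and `<repo-directory>` are placeholders for this repository, and the Llama model ID shown is only an example (some Llama repositories on Hugging Face are gated and require accepting a license first).

```shell
# 1. Get the code (or download the zip from the repository page and unzip it)
git clone <repo-url>
cd <repo-directory>

# 2. Install the Python dependencies
pip install -r requirements.txt

# 3. Download a Llama model from Hugging Face
# (example model ID; see the linked article for the full walkthrough)
huggingface-cli download meta-llama/Llama-2-7b-chat-hf

# 4. Fetch the YOLOv8n weights; ultralytics downloads the .pt file
# automatically on first use
python -c "from ultralytics import YOLO; YOLO('yolov8n.pt')"
```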
Feel free to customise the code to align with your set of requirements!
Note:
- Reply generation can be slow at runtime due to the large size of Llama models, even after quantisation.
- The print statements in the code are there for manual verification: they confirm that each step has completed, which simplifies debugging.