Welcome to the LLM-Based Web Search project! This project leverages a Large Language Model (LLM) to perform real-time search and retrieval of information from the web. It showcases the power of combining LLMs with web scraping and retrieval-augmented generation (RAG) techniques to provide accurate and contextually relevant information.
You can access the demo of the Streamlit app and experience the capabilities of LLM-based web search firsthand.
Try the Demo on Hugging Face Spaces
The backend for this app is also hosted on Hugging Face Spaces. You can explore the API that powers the LLM-based search and see how the magic happens behind the scenes.
Check Out the Backend API
This project utilizes advanced generative models to interpret search queries and fetch relevant data from the web. The process involves:
- Search & Retrieval: The app scrapes web content based on user queries.
- Content Processing: Extracted content is then processed and filtered.
- LLM Generation: The cleaned and concatenated content is fed into an LLM, which generates insightful responses.
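The three stages above can be sketched as plain Python functions. This is a minimal illustration, not the project's actual code: the function names, the separator, and the character limit are all assumptions, and the scraping and LLM calls are left as hypothetical stubs.

```python
import re


def clean_content(raw_text: str) -> str:
    """Content processing: collapse runs of whitespace and drop empty lines."""
    lines = [re.sub(r"\s+", " ", line).strip() for line in raw_text.splitlines()]
    return "\n".join(line for line in lines if line)


def build_prompt(query: str, documents: list[str], max_chars: int = 4000) -> str:
    """Concatenate the cleaned documents into one context block for the LLM."""
    context = "\n---\n".join(clean_content(doc) for doc in documents)[:max_chars]
    return (
        "Answer the query using the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Query: {query}"
    )


# The surrounding steps would wrap these helpers, roughly:
#   docs = scrape_web(query)                           # hypothetical search & retrieval
#   answer = llm.generate(build_prompt(query, docs))   # hypothetical LLM call
```

Truncating the concatenated context (here to `max_chars`) is one simple way to keep the prompt within the model's context window; the real project may chunk or rank content instead.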
The frontend of this project is built with Streamlit, providing a simple and interactive interface. Users can input queries and receive detailed responses in real-time, powered by the LLM.
The backend is developed using Flask, serving as an API that handles the heavy lifting of web scraping, content processing, and interaction with the LLM.
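A minimal Flask API for this setup might look like the sketch below. The route name, request/response shapes, and the two helper functions are illustrative stand-ins for the project's real scraping and LLM modules.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


# Hypothetical helpers standing in for the real scraping and LLM code.
def scrape_and_clean(query: str) -> str:
    return f"(scraped context for: {query})"


def generate_answer(query: str, context: str) -> str:
    return f"Answer to '{query}' based on: {context}"


@app.route("/search", methods=["POST"])
def search():
    # Validate the JSON body before doing any heavy lifting.
    data = request.get_json(silent=True) or {}
    query = data.get("query", "").strip()
    if not query:
        return jsonify({"error": "Missing 'query' field"}), 400
    context = scrape_and_clean(query)
    return jsonify({"answer": generate_answer(query, context)})


if __name__ == "__main__":
    app.run(port=5000)
```

Keeping the scraping and LLM calls behind a single POST endpoint lets the Streamlit frontend stay a thin client that only sends a query and renders the answer.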