WebChat is an intelligent Streamlit application that allows users to have interactive conversations with the content of any website. Powered by Google's Gemini AI, it scrapes website content, creates embeddings, and lets you query the information in a natural, conversational manner.
- Chat with Any Website: Input a URL, and WebChat will process its content so you can ask questions about it.
- Multi-Model Intelligence: Integrates multiple Gemini models for robustness.
  - Primary: `gemini-2.5-flash`
  - Fallback 1: `gemini-2.5-flash-lite`
  - Fallback 2: `gemini-2.0-flash`
- Automatic Fallback: If a model is exhausted or unavailable, the system transparently switches to the next available model.
- Context-Aware: Remembers your conversation history for follow-up questions.
- Source-Grounded: Responses are generated strictly based on the website's content to minimize hallucinations.
- Interactive UI: Clean Streamlit interface with a typewriter effect for responses.
- Python 3.8+
- Pipenv (recommended) or pip
- A Google Gemini API Key
- Clone the Repository

  ```bash
  git clone https://github.com/LaxmiNarayana31/webchat.git
  cd webchat
  ```

- Install Dependencies

  Using Pipenv:

  ```bash
  pipenv install
  pipenv shell
  ```

- Environment Setup

  Create a `.env` file in the root directory:

  ```bash
  touch .env
  ```

  Add your Gemini API Key:

  ```
  GEMINI_API_KEY=your_actual_api_key_here
  ```
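At runtime, application code typically reads this key from the environment. A minimal sketch of such a lookup (the helper name here is illustrative, not the repo's actual code; it assumes the `.env` values have been loaded into the process environment, e.g. via python-dotenv at startup):

```python
import os

def get_gemini_api_key() -> str:
    # Hypothetical helper: fetch the key defined in .env.
    # Fails fast with a clear message if the key is missing.
    key = os.getenv("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set; check your .env file.")
    return key
```

Failing fast here surfaces a missing key immediately, rather than as an opaque authentication error on the first Gemini call.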
- Run the Application:

  ```bash
  streamlit run main.py
  ```

- Load a Website: In the sidebar, enter a URL (e.g., `https://example.com`) and click Load Website.
- Start Chatting: Once loaded, use the chat input to ask questions about the page content.
- View History: Previous chats are saved in the sidebar for easy switching.
- `app/`: Main application code.
  - `helper/`: Helper classes for AI (`ai_helper.py`), LLM (`llm_helper.py`), and utilities.
  - `streamlit/`: UI components (`streamlit_app.py`).
- `main.py`: Entry point for the application.
The application uses a custom `GoogleGeminiLLM` class that implements a robust fallback mechanism. It iterates through the following models in order:

1. `gemini-2.5-flash` (high performance)
2. `gemini-2.5-flash-lite` (lightweight fallback)
3. `gemini-2.0-flash` (legacy fallback)
This ensures high availability even if you hit the rate limits of the primary model.
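The fallback loop can be sketched as follows. This is a minimal illustration, not the repo's actual `GoogleGeminiLLM` implementation: `call_model` stands in for the real Gemini API call and is assumed to raise an exception when a model is rate-limited or unavailable.

```python
MODEL_CHAIN = [
    "gemini-2.5-flash",       # primary: high performance
    "gemini-2.5-flash-lite",  # fallback 1: lightweight
    "gemini-2.0-flash",       # fallback 2: legacy
]

def generate_with_fallback(prompt, call_model):
    """Try each model in order; on failure, move on to the next one.

    `call_model(model_name, prompt)` is a hypothetical stand-in for the
    real Gemini API call.
    """
    last_error = None
    for model in MODEL_CHAIN:
        try:
            return call_model(model, prompt)
        except Exception as err:  # e.g. quota exhausted, model unavailable
            last_error = err      # remember the failure, try the next model
    raise RuntimeError(f"All Gemini models failed: {last_error}")
```

Because the loop only advances on an exception, a healthy primary model serves every request, and the fallbacks add latency only when they are actually needed.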