This project provides a full-stack AI assistant for job search queries, featuring a user-friendly React frontend and a LangChain + LangGraph backend powered by Ollama. The backend exposes a Flask endpoint for AI interaction, while the frontend offers a seamless user experience.
This project can be run either directly on your local machine or containerized using Docker.
Follow these steps to get the application running directly on your local machine.
It is highly recommended to use a Python virtual environment to manage dependencies.
From the project's root directory:
python3 -m venv ~/.venv/venv
source ~/.venv/venv/bin/activate
pip install -r requirements.txt
This will create and activate a virtual environment, then install all necessary Python packages.
The frontend is a React application developed with Vite.
From the frontend/ directory:
cd frontend
npm install
This will install all necessary JavaScript dependencies for the frontend.
The application uses Ollama to serve a local Large Language Model (LLM).
a. Install Ollama:
If you don't have Ollama installed globally on your system, follow the instructions on their official website: https://ollama.com/
b. Start Ollama Server and Pull a Model:
Start the Ollama server. This usually runs in the background. Open a new terminal for this.
ollama serve
While ollama serve occupies that terminal, open another terminal and pull the desired LLM model.
For systems with 8GB of VRAM (such as an NVIDIA RTX 3070 Ti), llama3.1 is a good choice (it requires approximately 4.7GB of VRAM). You can also try gemma3 if you encounter memory issues.
ollama pull llama3.1
# Or a smaller model, e.g.:
ollama pull gemma3
Note: The jobsearch_app/agent.py file is configured to use the gemma3 model by default (OLLAMA_MODEL = "gemma3"). If you pull a different model (e.g., llama3.1), you must update this variable in agent.py accordingly.
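If you pulled llama3.1 instead of the default, the change in agent.py would look like this (a sketch; only the variable name OLLAMA_MODEL and its default value are confirmed by the note above):

```python
# jobsearch_app/agent.py (excerpt)
OLLAMA_MODEL = "llama3.1"  # default is "gemma3"; set this to the model you pulled with `ollama pull`
```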
A local SQLite database containing an AI job dataset is set up using a dedicated Docker container. The database file (ai_jobs.db) will be created in the data/ directory of this project.
a. Build the SQLite Docker Image:
From the project's root directory:
docker build -t sqlite-db -f database_container/Dockerfile .
b. Run the SQLite Docker Container:
This will start the database container, which now includes the ai_jobs.db database automatically populated from ai_job_dataset.csv during the build process. It mounts the data/ directory to persist the database file, and the container will automatically restart if stopped.
docker run -d --name sqlite-jobsearch -v $(pwd)/data:/app/data --restart unless-stopped sqlite-db
Ensure your virtual environment is activated and the SQLite Docker container is running. Then, from the project's root directory, run the Flask application.
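Before starting the backend, you can optionally sanity-check that data/ai_jobs.db was populated, using a few lines of Python (a sketch; the table name ai_job_dataset is taken from the example queries later in this README):

```python
import sqlite3

def count_rows(db_path: str, table: str = "ai_job_dataset") -> int:
    """Return the row count of a table in a SQLite database file."""
    with sqlite3.connect(db_path) as conn:
        # Quote the table name since identifiers cannot be parameterized in SQL
        (n,) = conn.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()
    return n
```

For example, count_rows("data/ai_jobs.db") should return a positive number once the build step has imported ai_job_dataset.csv.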
From the project's root directory:
source ~/.venv/venv/bin/activate
python -m jobsearch_app.main
The Flask application will start on http://0.0.0.0:5000. It includes a background check for the Ollama server status. Ensure Ollama is running (ollama serve in a separate terminal) and the sqlite-jobsearch container is active. For the full UI experience, you will also need to run the frontend application.
Alternatively, you can run the entire application, including Ollama, within a Docker container.
From the project's root directory, build the Docker image. This process will install dependencies, Ollama, and pull the llama3 model (which may take some time).
docker build -t jobsearch-ai-app .
Once the image is built, run the container, mapping the necessary ports (Flask app on 5000, Ollama on 11434).
docker run -p 5000:5000 -p 11434:11434 --name jobsearch-agent-container jobsearch-ai-app
Note: The ollama serve process runs in the background within the container, and the llama3 model is pulled during the build phase to ensure readiness.
1. Launching the Frontend UI:
To interact with the full-stack application, navigate to the frontend/ directory in a new terminal and start the development server:
cd frontend
npm run dev
The application will typically open in your browser at http://localhost:5173 (or another available port). This UI provides an intuitive way to send queries to the AI assistant.
2. Direct API Interaction (using curl):
For direct API testing or integration, you can still send POST requests to the /predict endpoint. The agent handles both general job search queries and SQL-specific questions to the ai_job_dataset table.
Example using curl (from a new terminal, with the Python virtual environment activated):
source ~/.venv/venv/bin/activate
curl -X POST -H "Content-Type: application/json" -d '{"query": "What are the key skills for a Python developer in 2025 and which companies are hiring for them?"}' http://localhost:5000/predict
Example SQL Query via Agent:
source ~/.venv/venv/bin/activate
curl -X POST -H "Content-Type: application/json" -d '{"query": "How many rows are in the ai_job_dataset table?"}' http://localhost:5000/predict
The backend will return a JSON response containing the agent's generated answer.
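The same request can be made from Python. Here is a minimal client sketch (it assumes only the /predict endpoint and the {"query": ...} payload shown above; the exact shape of the JSON reply is not specified in this README):

```python
import json
from urllib import request

def ask_agent(query: str, url: str = "http://localhost:5000/predict") -> dict:
    """POST a job-search query to the Flask /predict endpoint and return the parsed JSON reply."""
    payload = json.dumps({"query": query}).encode("utf-8")
    req = request.Request(url, data=payload, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, ask_agent("How many rows are in the ai_job_dataset table?") sends the same request as the second curl command above.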
This application features a multi-agent system powered by LangChain and LangGraph, capable of routing user queries to specialized agents based on detected keywords. This allows for more targeted and effective responses to different types of job search inquiries.
- Purpose: Handles general job search queries, providing comprehensive answers based on its foundational knowledge. This is the default agent when no specific keywords for other agents are detected.
- Interaction: Simply ask your job search-related questions naturally.
- Example Query: "What are the common skills required for a machine learning engineer in 2025 and what are the top companies hiring?"
- Purpose: Interacts with the ai_job_dataset SQLite database to answer data-specific questions. This agent is ideal for extracting structured information, performing counts, or listing data entries.
- Interaction: Use keywords like "database", "sql", "query", "table", "count", "list", "top", "group by", or "join" in your questions. You can ask about the database directly.
- Example Queries:
- "How many jobs are in the ai_job_dataset table?"
- "List the top 5 companies by job count in the database."
- "Show me the schema of the ai_job_dataset table."
- Purpose: Conducts real-time web searches using the Tavily API to fetch current information, news, or broader context that might not be available in the LLM's training data or the local database.
- Interaction: Trigger this agent by including keywords such as "search", "find information", "latest news", "what is", "how to", "current events", "who is", "when did", "forecast", or "weather" in your query.
- Example Queries:
- "Search for the latest trends in remote software development jobs."
- "What is the average salary for a data scientist in New York City?"
- "Find information about new AI regulations."
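The keyword-based routing described above can be sketched roughly as follows (a simplified illustration built from the keyword lists in the Interaction notes; the actual logic and agent names in jobsearch_app/agent.py may differ):

```python
# Keyword lists taken from the Interaction notes above
SQL_KEYWORDS = {"database", "sql", "query", "table", "count",
                "list", "top", "group by", "join"}
WEB_KEYWORDS = {"search", "find information", "latest news", "what is", "how to",
                "current events", "who is", "when did", "forecast", "weather"}

def route(query: str) -> str:
    """Pick an agent based on keywords detected in the user's query."""
    q = query.lower()
    if any(kw in q for kw in SQL_KEYWORDS):
        return "sql_agent"
    if any(kw in q for kw in WEB_KEYWORDS):
        return "web_search_agent"
    return "general_agent"  # default when no specific keywords are detected
```

For instance, "How many rows are in the ai_job_dataset table?" contains "table" and would be routed to the SQL agent, while a question with none of the keywords falls through to the general job search agent.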
