This project demonstrates how to use the local Ollama API to interact with an AI model. It includes instructions to install the necessary tools, download a model, and run a sample chat script.
Download and install Ollama from the official site: https://ollama.com
Once installed, make sure Ollama is running. You can start it by clicking the app icon on your machine, or by running ollama serve in a terminal.
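
If you want to confirm the server is up before continuing, here is a minimal Python sketch. It assumes Ollama's default local endpoint (http://localhost:11434) and that the requests package is available:

```python
import requests

# Ollama's local server listens on http://localhost:11434 by default.
# Its root endpoint replies with "Ollama is running" when the service is up.
try:
    response = requests.get("http://localhost:11434", timeout=5)
    print(response.text)  # expected output: "Ollama is running"
except requests.ConnectionError:
    print("Ollama is not reachable. Start the app or run: ollama serve")
```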

Run the following command to download an LLM model:
ollama run gemma3:1b # Downloads the gemma3:1b model if it's not already present, then starts an interactive chat session.
or
ollama run deepseek-r1:1.5b
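
As an aside, models can also be pulled programmatically through the local API's /api/pull endpoint, which streams download progress as JSON lines. A rough sketch, assuming the default endpoint and the requests package:

```python
import json
import requests

# POST /api/pull asks Ollama to download a model; progress arrives
# as a stream of JSON lines until the final {"status": "success"}.
with requests.post(
    "http://localhost:11434/api/pull",
    json={"model": "gemma3:1b"},
    stream=True,
) as response:
    for line in response.iter_lines():
        if line:
            print(json.loads(line).get("status", ""))
```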
Run the following command to check that the model downloaded successfully:
ollama list
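
The same check works over the API: the /api/tags endpoint returns the locally stored models as JSON. For example (same assumptions as above):

```python
import requests

# GET /api/tags lists the models stored locally, mirroring `ollama list`.
response = requests.get("http://localhost:11434/api/tags", timeout=5)
for model in response.json().get("models", []):
    print(model["name"])
```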
Install the packages listed in the project's requirements.txt file:
pip install -r requirements.txt
Open the project directory in any browser, or use the Live Server VS Code extension. Then run one of the sample scripts:
python v1/chat-with-llm.py
or
python v2/server-of-llm-v2.py
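
For reference, a minimal single-turn chat against the local API looks roughly like the sketch below. This is not necessarily what the scripts above do; it assumes the default endpoint, the requests package, and the gemma3:1b model pulled earlier:

```python
import requests

# POST /api/chat sends a conversation and returns the model's reply.
# "stream": False asks Ollama for one complete JSON response instead
# of a token-by-token stream.
payload = {
    "model": "gemma3:1b",
    "messages": [{"role": "user", "content": "Hello! What can you do?"}],
    "stream": False,
}
response = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["message"]["content"])
```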


