Ollama Chat API Script

This project demonstrates how to use the local Ollama API to interact with an AI model. It includes instructions to install the necessary tools, download a model, and run a sample chat script.

Step 1: Install Ollama

Download and install Ollama from the official site:

👉 https://ollama.com/download

Once installed, make sure Ollama is running; you can start it by clicking the app icon on your machine.
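
To confirm the server is up, you can hit it from Python. This is a minimal sketch, not part of the repo's scripts; it assumes the requests package is installed and that Ollama is listening on its default port, 11434:

  import requests

  # By default, Ollama answers with a plain health message at its root URL.
  try:
      response = requests.get("http://localhost:11434", timeout=5)
      print(response.text)  # prints "Ollama is running" when the server is up
  except requests.exceptions.ConnectionError:
      print("Ollama is not reachable. Start the app and try again.")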

Step 2: Download an LLM model

Run the following command to download an LLM model:

ollama run gemma3:1b  # This command downloads the gemma3:1b model if it’s not already present.

or

ollama run deepseek-r1:1.5b  

Run the following command to check that the model was downloaded successfully:

ollama list 
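
If you prefer to check from code, Ollama also exposes the installed models through its /api/tags endpoint. A small sketch, again assuming the requests package and the default port:

  import requests

  # /api/tags returns JSON describing every locally installed model.
  data = requests.get("http://localhost:11434/api/tags", timeout=5).json()
  for model in data.get("models", []):
      print(model["name"])  # e.g. "gemma3:1b" or "deepseek-r1:1.5b"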

Step 3: Install Packages

Install the packages listed in the project's requirements.txt file:

  pip install -r requirements.txt

Step 4: Run and Chat

Serve frontend

Open the project directory in any browser, or use the Live Server extension in VS Code.

Serve backend

python v1/chat-with-llm.py

or

python v2/server-of-llm-v2.py
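
The repo's scripts aren't reproduced here, but a minimal non-streaming chat call against Ollama's /api/chat endpoint looks roughly like this. The model name and prompt below are placeholders; use whichever model you downloaded in Step 2:

  import requests

  payload = {
      "model": "gemma3:1b",  # placeholder: any model shown by `ollama list`
      "messages": [
          {"role": "user", "content": "Explain Ollama in one sentence."}
      ],
      "stream": False,  # ask for one complete JSON reply instead of a stream
  }

  response = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
  response.raise_for_status()
  print(response.json()["message"]["content"])  # the assistant's reply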

Demo v2
