Adding short-term memory to a local Ollama chat bot using model tools, and using agentic protocols to give a locally run LLM a pseudo-increase in its context window.


anw10/memory-llama

To run this project, first install Ollama, then install the Ollama Python library (used to access the Ollama API) with `pip install ollama`.

Next, download a model of your choice to run locally, e.g. `ollama pull qwen3:14b`. Remember to set this exact model name in the `llama.py` file.
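The repository's `llama.py` isn't reproduced here, so the following is only a minimal sketch of the general idea, assuming the `ollama` Python library's chat and tool-calling interface: the model is given a hypothetical `remember` tool it can call to stash facts outside the transcript, and those facts are re-injected on later turns so older messages can be dropped without losing them.

```python
# Minimal sketch (assumed API usage; the real llama.py may differ).
import ollama

MODEL = "qwen3:14b"     # must match the model pulled with `ollama pull`
memory: list[str] = []  # the short-term memory store

def remember(fact: str) -> str:
    """Save a fact to short-term memory (exposed to the model as a tool)."""
    memory.append(fact)
    return f"Saved: {fact}"

# First turn: offer the tool so the model can decide to store the fact.
messages = [{"role": "user", "content": "My name is Ada. Please remember that."}]
response = ollama.chat(model=MODEL, messages=messages, tools=[remember])

# Execute any tool calls the model made and record the results in the transcript.
for call in response.message.tool_calls or []:
    if call.function.name == "remember":
        result = remember(**call.function.arguments)
        messages.append(response.message)
        messages.append({"role": "tool", "name": "remember", "content": result})

# Later turn: re-inject the saved facts so they survive context truncation.
system = {"role": "system", "content": "Known facts: " + "; ".join(memory)}
followup = ollama.chat(
    model=MODEL,
    messages=[system, {"role": "user", "content": "What is my name?"}],
)
print(followup.message.content)
```

Because the saved facts are replayed in a compact system message rather than kept as full chat turns, the transcript can be trimmed aggressively, which is what produces the pseudo-increase in the effective context window.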
