LLMASP is a framework that combines the natural-language capabilities of Large Language Models (LLMs) with the reasoning power of Answer Set Programming (ASP), a form of declarative programming oriented towards hard search problems.
Install the package from PyPI with pip, following the instructions on the package page.
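For instance, a typical install looks like the following (the PyPI package name `llmasp` is assumed here from the repository name; check the package page for the exact name):

```shell
# Install LLMASP from PyPI (package name assumed to be "llmasp")
pip install llmasp
```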
Install poetry following the official documentation.
```shell
git clone https://github.com/LewAshby/llmasp.git
cd llmasp
poetry install
```

Download and install Ollama following the official documentation, then run
```shell
ollama serve
```

Make sure you have pulled the model before calling it, e.g.
```shell
ollama pull llama3
```

See the Ollama library for all available models.
If you are running Ollama locally, the default address is http://localhost:11434; for more details about Ollama's OpenAI compatibility, see the Ollama documentation. If you installed Ollama on an external server, create an SSH local port forward.
```shell
ssh -L <local_port>:<remote_server>:<remote_port> <user>@<remote_server>
```

Note that Ollama binds to 127.0.0.1 on port 11434 by default.
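As a concrete sketch, suppose Ollama runs on a remote host reachable as `remote-server` (the hostname and username below are placeholders). Since Ollama binds to 127.0.0.1 on the remote machine, the forward target should be `localhost` as seen from that machine:

```shell
# Forward local port 11434 to the remote machine's loopback port 11434,
# where Ollama is listening ("user" and "remote-server" are placeholders).
ssh -L 11434:localhost:11434 user@remote-server
```

After this, http://localhost:11434 on your local machine reaches the remote Ollama instance.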
Use the command-line help for usage instructions.
```shell
cd llmasp
poetry run llmasp --help
```

Run the example case:

```shell
poetry run llmasp --example -m llama3 -s http://localhost:11434/v1
```
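Since LLMASP talks to Ollama through its OpenAI-compatible endpoint, you can sanity-check the server before running the example (a sketch, assuming the `openai` Python package is installed and `llama3` has been pulled; Ollama ignores the `api_key` value, but the client requires one):

```python
from openai import OpenAI

# Point the OpenAI client at the local Ollama server's OpenAI-compatible API.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Send a trivial request to confirm the server and model are reachable.
response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)
```

If this prints a reply, the `-s http://localhost:11434/v1` server argument above will work as well.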