A locally hosted ML study assistant that runs entirely on your own machine: no API keys, no internet connection, and no data sent to external servers. Built with Python and Ollama, with inference accelerated by a local NVIDIA GPU via CUDA.
- Conversational explanations of machine learning concepts
- Analogies tailored to make abstract ideas concrete
- On-demand quizzing with the `quiz` command
- Fully offline after initial model download
- Python
- Ollama
- Llama 3.2 (local model)
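As a rough sketch of how a Python app like this talks to the local model, the snippet below uses the official `ollama` Python client. The function names and prompt wording are illustrative assumptions, not the actual code in `main.py`:

```python
def build_messages(concept: str) -> list[dict]:
    """Build the chat payload sent to the local model (prompt wording is illustrative)."""
    return [
        {"role": "system", "content": "You are a patient ML tutor. Explain with concrete analogies."},
        {"role": "user", "content": f"Explain this machine learning concept: {concept}"},
    ]

def explain(concept: str) -> str:
    """Ask the local Llama 3.2 model for an explanation via the Ollama Python client."""
    import ollama  # pip install ollama; requires a running Ollama server
    response = ollama.chat(model="llama3.2", messages=build_messages(concept))
    return response["message"]["content"]
```

Because the model runs behind the local Ollama server, this call never leaves your machine.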
- Ollama installed
- Python 3.x
- NVIDIA GPU recommended for faster inference
- Clone this repository
- Install dependencies: `pip install -r requirements.txt`
- Pull the model: `ollama pull llama3.2`
- Run: `python main.py`
Type any machine learning concept to get an explanation.
Type `quiz` at any time to be quizzed on the last topic discussed.
Type `quit` or `exit` to close the assistant.
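The command handling described above can be sketched as a small dispatch function. The names below (`Action`, `handle_input`) are illustrative, not the actual identifiers in `main.py`:

```python
from enum import Enum

class Action(Enum):
    EXPLAIN = "explain"  # send the text to the model as a concept to explain
    QUIZ = "quiz"        # quiz the user on the last topic discussed
    EXIT = "exit"        # close the assistant

def handle_input(text: str) -> Action:
    """Map raw user input to one of the assistant's three actions."""
    command = text.strip().lower()
    if command in ("quit", "exit"):
        return Action.EXIT
    if command == "quiz":
        return Action.QUIZ
    return Action.EXPLAIN
```

Anything that is not a recognized command falls through to the explain path, so plain concept names need no special syntax.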