AI process miner and systems integration optimizer - Proof of Concept
Before installing, ensure you have:
- Node.js v18 or higher (a runtime version check sketch follows this list)
- CMake (required for node-llama-cpp)
- A GGUF format LLaMA model
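If you want the app to fail fast when the runtime is too old, a guard like the one below is one option. This is a minimal sketch, not part of the existing codebase; the error message is illustrative.

```typescript
// Hypothetical startup guard - not part of the existing codebase.
// node-llama-cpp and this project require Node.js v18 or higher.
const [major] = process.versions.node.split(".").map(Number);
if (major < 18) {
  throw new Error(`Node.js v18 or higher is required, but ${process.version} was found.`);
}
```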
- Create a `Models/Llama-3.2-3B-GGUF` directory in the project root (automatically created on first run)
- Download the GGUF model:

  ```
  # For Windows PowerShell
  New-Item -ItemType Directory -Force -Path "Models/Llama-3.2-3B-GGUF"
  Invoke-WebRequest -Uri "https://huggingface.co/TheBloke/llama2-3.2b-gguf/resolve/main/llama2_3.2b.Q4_K_M.gguf" -OutFile "Models/Llama-3.2-3B-GGUF/llama2_3.2b.Q4_K_M.gguf"

  # For Linux/macOS
  mkdir -p Models/Llama-3.2-3B-GGUF
  curl -L "https://huggingface.co/TheBloke/llama2-3.2b-gguf/resolve/main/llama2_3.2b.Q4_K_M.gguf" -o "Models/Llama-3.2-3B-GGUF/llama2_3.2b.Q4_K_M.gguf"
  ```
- Verify the file exists and has the exact name `llama2_3.2b.Q4_K_M.gguf`; the sketch after this list loads the model from that path.
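As a quick sanity check that the directory and filename line up, the following sketch loads the model from the path used above. It assumes the node-llama-cpp v3 API (`getLlama`, `loadModel`, `LlamaChatSession`); older 2.x releases use a different constructor-based API. The script name and prompt are illustrative only.

```typescript
// verify-model.ts - illustrative sketch, assuming the node-llama-cpp v3 API.
import { existsSync } from "node:fs";
import path from "node:path";
import { getLlama, LlamaChatSession } from "node-llama-cpp";

// Path from the download steps above - filename must match exactly.
const modelPath = path.join("Models", "Llama-3.2-3B-GGUF", "llama2_3.2b.Q4_K_M.gguf");

if (!existsSync(modelPath)) {
  throw new Error(`Model not found at ${modelPath} - check the directory and exact filename.`);
}

const llama = await getLlama();
const model = await llama.loadModel({ modelPath });
const context = await model.createContext();
const session = new LlamaChatSession({ contextSequence: context.getSequence() });

// A short prompt confirms the model loads and responds end to end.
console.log(await session.prompt("Say hello in one short sentence."));
```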
Windows:
- Download from https://cmake.org/download/
- Run the Windows x64 installer
- During installation, select "Add CMake to the system PATH"
Linux: