A local, private, and uncensored AI chat client with a hacker terminal aesthetic.
Blacky AI allows you to run powerful AI models (like Mistral or Dolphin) entirely on your own computer. No data leaves your machine. It consists of two parts:

- The Engine: `llama.cpp` (the "brain" that processes text).
- The Client: Blacky AI (the "face": a cool terminal-style app you interact with).
- 100% Private: Runs offline on your localhost.
- Uncensored: Compatible with uncensored models (no "I cannot do that").
- Hacker Console: Minimalist black & white terminal UI.
- Auto-Start: Launches the AI engine automatically when you open the app.
- Real-Time Streaming: Watch the AI type out answers character by character.
- Stop Button: Interrupt generation instantly if the AI goes off-track.
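Under the hood, the client streams tokens from llama.cpp's bundled HTTP server (`llama-server`). Once everything below is installed and running, you can watch the same stream yourself with `curl`. This is just an illustrative sketch: the endpoint and JSON fields come from llama-server's HTTP API, and port 8080 is an assumption; adjust it to whatever port your build of Blacky AI starts the server on.

```shell
# Peek at the wire protocol (requires the engine server to be running).
# -N disables curl's output buffering so tokens appear as they stream in.
curl -sN http://127.0.0.1:8080/completion \
  -d '{"prompt": "Hello", "n_predict": 32, "stream": true}' \
  || echo "server not running"
```

Each `data:` line in the response carries a JSON fragment with the next chunk of generated text, which is what the client renders character by character.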
Before starting, open your terminal and run these commands to install the necessary tools: a C++ compiler, CMake (the build tool), git, and the wxWidgets library (for the GUI).

```shell
sudo apt update
sudo apt install build-essential cmake git libwxgtk3.0-gtk3-dev libcurl4-openssl-dev
```

Follow these steps one by one.
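Before moving on, you can confirm the build tools actually landed on your PATH. This is a convenience sketch, not part of the project itself:

```shell
# Sanity check (optional): confirm the required build tools are installed.
have() { command -v "$1" >/dev/null 2>&1; }

for tool in g++ cmake git make; do
  if have "$tool"; then
    echo "found: $tool"
  else
    echo "MISSING: $tool (rerun the apt install command above)"
  fi
done
```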
Get the code onto your machine:

```shell
git clone https://github.com/yourusername/blacky-ai.git
cd blacky-ai
```

Next, we need to download and build the AI engine inside the `engine/` folder.
```shell
# Create the directory if it doesn't exist
mkdir -p engine
cd engine

# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Build it
make

# Go back to the main folder
cd ../..
```

Note: recent versions of llama.cpp have switched to CMake builds; if `make` fails with a deprecation error, run `cmake -B build && cmake --build build` from inside the `llama.cpp` folder instead.

The AI needs a "model" file (the actual brain). We recommend Dolphin Mistral (Uncensored).
- Create a models folder:

  ```shell
  mkdir -p models
  ```

- Download the model file (`.gguf` format) and save it inside `models/`.
  - Recommended Link: dolphin-2.6-mistral-7b.Q4_K_M.gguf
  - Note: this file is approximately 4 GB.
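A quick way to confirm the model landed in the right place. The filename below matches the recommended download; change `MODEL` if you picked a different `.gguf` file:

```shell
# Optional: verify the model file is where the app expects it.
MODEL="models/dolphin-2.6-mistral-7b.Q4_K_M.gguf"
if [ -f "$MODEL" ]; then
  echo "Model ready: $(du -h "$MODEL" | cut -f1)"
else
  echo "Model not found at $MODEL"
fi
```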
Your folder structure should look like this:

```
blacky-ai/
├── engine/
│   └── llama.cpp/
├── models/
│   └── dolphin-2.6-mistral-7b.Q4_K_M.gguf
├── client/
└── ...
```
Now compile the actual chat application:

```shell
cd client/wxblacky
mkdir -p build
cd build
cmake ..
make
```

You don't need to touch the server; the app handles everything!
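Blacky AI starts the engine for you, but when something goes wrong it can help to launch `llama-server` by hand and watch its log output. The flags below (`-m` for the model path, `-c` for context size, `--port`) come from llama.cpp; the binary location and port 8080 are assumptions here: depending on how you built the engine, the binary may live under `engine/llama.cpp/build/bin/` instead.

```shell
# Manual engine start, for debugging only; the app normally does this itself.
SERVER="engine/llama.cpp/llama-server"   # may be build/bin/llama-server instead
MODEL="models/dolphin-2.6-mistral-7b.Q4_K_M.gguf"

if [ -x "$SERVER" ] && [ -f "$MODEL" ]; then
  "$SERVER" -m "$MODEL" -c 2048 --port 8080
else
  echo "Build the engine (step 2) and download the model (step 3) first."
fi
```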
- Run the App (assuming you are inside `client/wxblacky/build`):

  ```shell
  ./wx_blacky
  ```

- Chat: Type your message and press Enter.
- Stop: Click the `[stop]` button to interrupt the AI.
- Exit: Just close the window. The background AI server will shut down automatically.
"AI SERVER FAILED TO START... [ERROR]"
- Check that you downloaded the model to the exact path `blacky-ai/models/dolphin-2.6-mistral-7b.Q4_K_M.gguf`.
- Make sure `llama-server` was built successfully in step 2.
"No response from AI"
- Ensure you have enough RAM (at least 8GB recommended).
- Check your internet connection (only needed for the initial model download, not for chatting).
Input appears twice?
- This was a known bug in older versions. Pull the latest code and rebuild.
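If the app starts but chat hangs, you can probe the engine directly. `llama-server` exposes a `/health` endpoint; port 8080 is an assumption here (use whichever port Blacky AI launches the server on):

```shell
# Ask the engine directly whether it is alive; prints a small JSON status
# when the server is up.
curl -s http://127.0.0.1:8080/health || echo "engine not reachable on port 8080"
```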