Blacky AI

A local, private, and uncensored AI chat client with a hacker terminal aesthetic.

Blacky AI lets you run powerful AI models (such as Mistral or Dolphin) entirely on your own computer. No data leaves your machine. It consists of two parts:

  1. The Engine: llama.cpp (The "Brain" that processes text).
  2. The Client: Blacky AI (The "Face" - a cool terminal-style app you interact with).

Features

  • 100% Private: Runs entirely offline; nothing leaves localhost.
  • Uncensored: Compatible with uncensored models (no "I cannot do that").
  • Hacker Console: Minimalist black & white terminal UI.
  • Auto-Start: Launches the AI engine automatically when you open the app.
  • Real-Time Streaming: Watch the AI type out answers character by character (see the curl sketch after this list).
  • Stop Button: Interrupt generation instantly if the AI goes off-track.
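
Under the hood, the client talks to llama.cpp's built-in HTTP server over localhost. Once the engine is running, you can reproduce the streaming behavior yourself with curl; a minimal sketch, assuming the server's default port (8080) and its /completion endpoint:

# -N disables buffering so the streamed chunks print as they arrive
curl -N http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, who are you?", "n_predict": 64, "stream": true}'

With "stream": true the server sends the reply as a series of data: chunks as tokens are generated; cancelling that request mid-stream is the standard way a client implements a stop button.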

Prerequisites (What you need installed)

Before starting, open your terminal and run these commands to install necessary tools.

Ubuntu / Debian / Linux Mint

You need a C++ compiler, CMake (the build tool), the wxWidgets library (for the GUI), and libcurl (for HTTP).

sudo apt update
sudo apt install build-essential cmake git libwxgtk3.0-gtk3-dev libcurl4-openssl-dev
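
Note: on newer releases (Debian 12, Ubuntu 23.04 and later) the wxWidgets development package was renamed. If apt cannot find libwxgtk3.0-gtk3-dev, this variant should work:

# Same tools, newer wxWidgets package name
sudo apt install build-essential cmake git libwxgtk3.2-dev libcurl4-openssl-dev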

Installation Guide

Follow these steps one by one.

1. Clone the Repository

Get the code onto your machine.

git clone https://github.com/yourusername/blacky-ai.git
cd blacky-ai

2. Set Up the AI Engine (llama.cpp)

We need to download and build the AI engine inside the engine/ folder.

# Create directory if it doesn't exist
mkdir -p engine
cd engine

# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Build it
make
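
Note: recent versions of llama.cpp have dropped the Makefile build in favor of CMake. If make fails with a deprecation error, build it like this instead (the llama-server binary then lands in build/bin/ rather than the repo root):

# CMake build, as documented by llama.cpp itself
cmake -B build
cmake --build build --config Release -j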

(Go back to the main folder)

cd ../..

3. Download a Model

The AI needs a "model" file (the actual brain). We recommend Dolphin Mistral (Uncensored).

  1. Create a models folder:
    mkdir -p models
  2. Download the model file (.gguf format) and save it inside models/ (see the wget example after this list).
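
For example, a quantized Dolphin Mistral build is hosted on Hugging Face (TheBloke/dolphin-2.6-mistral-7B-GGUF at the time of writing; verify the exact file name on the model page). The Q4_K_M file is roughly 4 GB:

# Download straight into models/ (URL is an assumption; check the model page)
wget -P models/ "https://huggingface.co/TheBloke/dolphin-2.6-mistral-7B-GGUF/resolve/main/dolphin-2.6-mistral-7b.Q4_K_M.gguf"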

Your folder structure should look like this:

blacky-ai/
├── engine/
│   └── llama.cpp/
├── models/
│   └── dolphin-2.6-mistral-7b.Q4_K_M.gguf
├── client/
└── ...

4. Build the Blacky AI Client

Now compile the actual chat application.

cd client/wxblacky
mkdir -p build
cd build
cmake ..
make
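
Tip: without an explicit build type, CMake may produce an unoptimized binary. If the app feels sluggish, a Release build is one flag away (standard CMake, nothing project-specific):

cmake -DCMAKE_BUILD_TYPE=Release ..
make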

How to Use

You don't need to touch the server. The app handles everything!

  1. Run the App:

    ./wx_blacky

    (Assuming you are inside client/wxblacky/build)

  2. Chat: Type your message and press Enter.

  3. Stop: Click the [stop] button to interrupt the AI.

  4. Exit: Just close the window. The background AI server will shut down automatically.


Troubleshooting

"AI SERVER FAILED TO START... [ERROR]"

  • Check if you downloaded the model to the exact path blacky-ai/models/dolphin-2.6-mistral-7b.Q4_K_M.gguf.
  • Make sure llama-server was built successfully in step 2.
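
If it still fails, start the engine by hand so you can read its error output directly. A sketch run from the blacky-ai/ root, assuming a Makefile build and the default port (with a CMake build the binary sits at engine/llama.cpp/build/bin/llama-server; use whatever port the client expects):

./engine/llama.cpp/llama-server \
  -m models/dolphin-2.6-mistral-7b.Q4_K_M.gguf \
  --port 8080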

"No response from AI"

  • Ensure you have enough RAM (at least 8GB recommended).
  • Check your internet connection (only needed for the initial model download, not for chatting).

Input appears twice?

  • This was a known bug in older versions. Pull the latest code and rebuild.
