PatchMind is an AI-enhanced code editor with inline LLM support, real-time prompt workflows, and RAG-based file context awareness.
- LLM Chat Integration — Chat with Gemini or Ollama-backed models
- Inline Editing — Apply prompts directly to open documents
- Prompt Manager — Save, reuse, and batch-apply prompts to files
- Real-Time Context — File tree with checkboxes for RAG context control
- Token Awareness — Visual context token tracking and budgeting
- Themes & Fonts — Customizable editor appearance
- Multi-Provider Support — Easily switch between LLMs
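The token budgeting mentioned above can be approximated with a simple heuristic. The sketch below is illustrative only and is not PatchMind's actual counter; the ~4 characters per token ratio is a common rule of thumb, and `estimate_tokens`/`fits_budget` are hypothetical helpers.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, round(len(text) / chars_per_token))


def fits_budget(files: dict[str, str], budget: int = 8192) -> bool:
    """Check whether the combined context of the selected files stays under budget."""
    return sum(estimate_tokens(body) for body in files.values()) <= budget
```

A real counter would use the active model's tokenizer, but a cheap estimate like this is enough to drive a visual budget indicator.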
- Python 3.11+
- Poetry (recommended for development) or pip
- An Ollama instance running (locally or via Docker)
There are two main ways to install and run PatchMind locally:
1. Using Poetry (Recommended)
```bash
# Clone the repository (if you haven't already)
# git clone <repository_url>
# cd patchmind-ide

# Install dependencies using Poetry
poetry install --no-dev

# Run the application
poetry run python -m pm
```

2. Using Pip
```bash
# Clone the repository (if you haven't already)
# git clone <repository_url>
# cd patchmind-ide

# Install dependencies using pip
pip install -r requirements.txt

# Run the application
python -m pm
```

Using Docker Compose:

This method runs both the PatchMind IDE and an Ollama instance in separate containers using Docker Compose. This is useful for isolating dependencies and ensuring a consistent environment.
Prerequisites:
- Docker (Install Docker)
- Docker Compose (Install Docker Compose)
- An X Server running on your host machine (this is standard on most Linux desktops).
Steps:
1. Allow Container Access to X Server: You need to explicitly grant containers permission to connect to your host's X Server for the GUI to display. Open a terminal on your host machine and run:

   ```bash
   xhost +local:docker
   ```

   (Note: This command allows any local user running Docker containers to connect to your X server. This is generally safe for local development, but be aware of the security implications if you run untrusted containers. You can revoke permission later with `xhost -local:docker`.)
2. Build and Run: Navigate to the project's root directory (where `docker-compose.yml` is located) in your terminal and run:

   ```bash
   docker compose up --build
   ```

   This command will:

   - Build the `patchmind-ide` Docker image based on the `Dockerfile` (if not already built).
   - Download the `ollama/ollama` image (if not already present).
   - Start both the `patchmind-ide` and `ollama` containers.

   The PatchMind IDE GUI should then appear on your desktop.
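The repository ships its own `docker-compose.yml`; the fragment below is only an illustrative sketch of the shape such a file might take, assembled from the container names, image, volume, and `OLLAMA_HOST` variable mentioned in this README. It is not the project's actual file, and the mount paths are assumptions.

```yaml
services:
  ollama:
    image: ollama/ollama
    container_name: patchmind-ollama
    volumes:
      - ./ollama_data:/root/.ollama      # persists pulled models on the host

  patchmind-ide:
    build: .
    container_name: patchmind-ide-app
    environment:
      - DISPLAY=${DISPLAY}
      - OLLAMA_HOST=http://ollama:11434
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix    # X11 socket so the GUI can display
    depends_on:
      - ollama
```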
3. Using Ollama:

   - The IDE container is configured to connect to Ollama at `http://ollama:11434`.
   - You will need to pull models into the Ollama container after it starts. You can do this by opening another terminal and running:

     ```bash
     docker exec -it patchmind-ollama ollama pull llama3:8b  # Or any other model you need
     ```

   - The IDE should then be able to list and use the models pulled into the `ollama` container.
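Listing pulled models works through Ollama's REST API: a `GET` to `/api/tags` returns the locally available models as JSON. The sketch below parses a response of that shape; the sample payload and `list_model_names` helper are illustrative, not PatchMind's actual client code.

```python
import json


def list_model_names(tags_json: str) -> list[str]:
    """Extract model names from an Ollama GET /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]


# Illustrative payload in the shape /api/tags returns; in the Compose setup
# the real request would go to http://ollama:11434/api/tags.
sample = '{"models": [{"name": "llama3:8b"}, {"name": "mistral:7b"}]}'
print(list_model_names(sample))  # → ['llama3:8b', 'mistral:7b']
```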
4. Stopping:

   - Press `Ctrl+C` in the terminal where `docker compose up` is running.
   - To ensure containers are fully stopped and removed, run:

     ```bash
     docker compose down
     ```

   - Ollama models will be persisted in the `./ollama_data` volume on your host.
Troubleshooting Docker:
- GUI doesn't appear / X11 connection error: Ensure `xhost +local:docker` was run correctly and your `DISPLAY` environment variable is set properly on the host. Check the container logs (`docker logs patchmind-ide-app`). You might need to adjust volume mounts for X11 authentication (`XAUTHORITY`) depending on your system setup.
- Cannot connect to Ollama: Verify the `ollama` container is running (`docker ps`). Check the `OLLAMA_HOST` environment variable in the `docker-compose.yml` file and ensure the IDE's Ollama client uses it (this might require code adjustments if the client library doesn't automatically pick up `OLLAMA_HOST`).
- Auto-format: `black .`
- Lint: `ruff .`
```
pm/
├── core/              # Logic, config, services
├── ui/                # Qt widgets & dialogs
├── handlers/          # Connects UI events to Core logic
├── assets/            # QSS stylesheets, etc.
└── __main__.py        # Entry point
Dockerfile             # For building the Docker image
docker-compose.yml     # For running with Docker Compose
pyproject.toml         # Poetry config
README.md              # This file
...
```
- Follows PEP 8, enforced with `black` & `ruff`
- Use `Signal`/`Slot` correctly, with type annotations
- Never modify the GUI from non-main threads; use signals
- See `DEVELOPER_GUIDELINES.md` for more details.
MIT © Kal Aeolian
