🚀 AgentDock

Your Personal AI Workstation

Run powerful AI models locally with zero compromise on privacy. A beautiful desktop application that brings enterprise-grade language models to your machine with OpenAI-compatible APIs.


🎯 Features • 📥 Installation • 🚀 Quick Start • 📚 Documentation • 🤝 Contributing

AgentDock Dashboard


🌟 What is AgentDock?

AgentDock is not just another AI tool: it is your complete AI infrastructure, running entirely on your machine. Imagine the power of ChatGPT, but with full control, zero cost, and complete privacy. That's AgentDock.

🎯 The Problem We Solve

  • 💸 Tired of API costs? Run unlimited AI requests without paying per token
  • 🔒 Privacy concerns? Your data never leaves your machine
  • 🌐 Need offline AI? Work anywhere, no internet required
  • 🔧 Want customization? Fine-tune and switch models instantly
  • 🚀 Developer-friendly? A drop-in replacement for OpenAI's API

✨ Why AgentDock?

"The easiest way to run local AI models with a professional-grade API that works with all your existing tools and code."

  • Works with LangChain, AutoGPT, Continue.dev, and more
  • Drop-in replacement for OpenAI's API
  • Beautiful UI + Powerful API

🎯 Features

🤖 AI Model Management

Smart Model Recommendations

  • 🧠 Hardware Detection: Automatically detects your CPU, RAM, GPU (NVIDIA/AMD/Intel)
  • 📊 Compatibility Scoring: Each model gets a compatibility score based on your hardware
  • ⚡ Performance Estimates: See expected tokens/second before downloading
  • 🎯 Curated Selection: Pre-filtered models that actually work well
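The scoring idea can be sketched in a few lines. This is a hypothetical heuristic for illustration only (the actual recommendation logic may weigh hardware differently): does the model, plus headroom for the KV cache and runtime, fit in the largest available memory pool?

```python
def compatibility_score(model_size_gb: float, ram_gb: float, vram_gb: float = 0.0) -> int:
    """Rough 0-100 heuristic (illustrative, not AgentDock's real formula)."""
    needed = model_size_gb * 1.25            # weights plus ~25% headroom
    budget = max(ram_gb, vram_gb)            # largest pool that could hold the weights
    if needed > budget:
        return 0                             # model won't load at all
    # More free headroom after loading -> higher score, capped at 100
    return min(100, int(100 * (budget - needed) / budget) + 50)
```

For example, a 4 GB model on a 16 GB machine scores near the top, while a 40 GB model scores zero.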

One-Click Model Download

  • πŸ” HuggingFace Integration: Search 100,000+ models
  • πŸ“¦ Smart Filtering: Only shows GGUF models compatible with llama.cpp
  • ⏬ Download Queue: Parallel downloads with progress tracking
  • πŸ’Ύ Disk Space Checks: Warns before downloading if space is low

🔌 OpenAI-Compatible API

# That's it. Your code doesn't change.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5000/v1",  # Point to AgentDock
    api_key="sk-agentdock-admin"
)

# Works exactly like OpenAI
response = client.chat.completions.create(
    model="llama-2-7b-chat",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    stream=True  # Streaming support!
)

What You Get:

  • ✅ /v1/chat/completions - Chat completions with streaming
  • ✅ /v1/models - List available models
  • ✅ Bearer token authentication
  • ✅ Works with LangChain, LlamaIndex, AutoGPT, Continue.dev
  • ✅ Network accessible - use from other devices on your LAN

🎨 Beautiful Desktop Experience

💬 Chat Interface

  • Real-time streaming responses
  • Code syntax highlighting
  • Copy/paste with formatting
  • Export conversations

📊 Dashboard

  • Live system monitoring
  • GPU/CPU/RAM usage
  • API request metrics
  • Model performance stats

βš™οΈ Model Browser

  • Search & filter models
  • Hardware compatibility badges
  • One-click downloads
  • Automatic loading

🔧 Developer Tools

  • πŸ› οΈ Built-in Swagger UI - Interactive API documentation at /swagger
  • πŸ“ˆ Analytics Dashboard - Track API usage, response times, token counts
  • πŸ”‘ API Key Management - Create, revoke, and manage multiple keys
  • πŸ“ Request Logging - Full request/response logging for debugging
  • πŸ§ͺ API Playground - Test endpoints with code examples in Python, Node.js, cURL

⚡ Performance & Optimization

  • GPU Acceleration: CUDA (NVIDIA), ROCm (AMD), SYCL (Intel), Metal (Apple Silicon)
  • CPU Optimization: AVX, AVX2, and AVX-512 instruction sets
  • Memory Management: Automatic context-size optimization
  • Multi-Model Support: Switch models without restarting
  • Streaming Responses: Real-time token generation

🔒 Privacy & Security

  • 🏠 100% Local - No data ever leaves your machine
  • 🔐 API Authentication - Bearer token security
  • 🚫 No Telemetry - We don't track anything
  • 🗃️ Data Control - All conversations stored locally
  • 🔒 Offline Capable - Works without internet after setup

📥 Installation

🎯 For End Users (Recommended)

Get started in 3 minutes:

  1. Download the installer

    • Visit Releases
    • Choose your platform:
      • 🪟 Windows: AgentDock-Setup-x.x.x.exe (Installer) or .exe (Portable)
      • 🍎 macOS: AgentDock-x.x.x.dmg (Intel) or .arm64.dmg (Apple Silicon)
      • 🐧 Linux: AgentDock-x.x.x.AppImage (Universal) or .deb (Debian/Ubuntu)
  2. Install & Launch

    • Run the installer
    • AgentDock starts automatically
    • First launch: App detects your GPU and downloads optimal llama.cpp binaries (~100-300MB)
  3. Download a Model

    • Go to Models tab
    • Click Recommended for You
    • Download suggested model (or search for others)
    • Model auto-loads when download completes
  4. Start Using

    • Chat: Test the model in the chat interface
    • API: Your OpenAI-compatible API is running at http://localhost:5000/v1
    • Swagger: Explore API docs at http://localhost:5000/swagger

💻 For Developers

Prerequisites: the .NET 8 SDK, Node.js (with npm), and Git.

Quick Setup:

# Clone the repository
git clone https://github.com/KauanCerqueira/AgentDock.git
cd AgentDock

# Automatic setup (detects GPU, downloads binaries, installs deps)
# Windows PowerShell:
.\setup.ps1

# macOS/Linux:
chmod +x setup.sh && ./setup.sh

# Start development server
npm run dev

What the setup script does:

  1. Detects your GPU (NVIDIA → CUDA, AMD → ROCm, Intel → SYCL, None → CPU)
  2. Downloads appropriate llama.cpp binaries from official releases
  3. Installs all npm dependencies
  4. Sets up the backend and frontend
  5. You're ready to code!
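Step 1 can be approximated by probing for vendor tools on PATH. A minimal sketch, assuming the vendor CLIs below ship with their drivers (the real setup.ps1/setup.sh may detect hardware differently):

```python
import shutil

def detect_backend() -> str:
    """Guess a llama.cpp build flavor from vendor tooling on PATH (illustrative)."""
    if shutil.which("nvidia-smi"):                            # NVIDIA driver tools
        return "cuda"
    if shutil.which("rocminfo") or shutil.which("rocm-smi"):  # AMD ROCm stack
        return "rocm"
    if shutil.which("sycl-ls"):                               # Intel oneAPI tools
        return "sycl"
    return "cpu"                                              # safe fallback
```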

🚀 Quick Start

Starting the Application

Option 1: Development Mode (for contributors)

npm run dev

This starts:

  • 🔧 Backend API: http://localhost:5000
  • 🎨 Frontend UI: http://localhost:5173
  • 🖥️ Electron app in development mode

Option 2: Production Build

npm run build
npm start

Option 3: Backend Only (if you just want the API)

cd src/AgentDock.Backend
dotnet run

Using the API

1. Python Example (Most Popular)

from openai import OpenAI

# Connect to AgentDock
client = OpenAI(
    base_url="http://localhost:5000/v1",
    api_key="sk-agentdock-admin"
)

# Simple completion
response = client.chat.completions.create(
    model="llama-2-7b-chat.Q2_K.gguf",
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}
    ],
    temperature=0.7,
    max_tokens=500
)

print(response.choices[0].message.content)

# Streaming example
stream = client.chat.completions.create(
    model="llama-2-7b-chat.Q2_K.gguf",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

2. Node.js / TypeScript

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:5000/v1',
  apiKey: 'sk-agentdock-admin'
});

async function chat() {
  const response = await client.chat.completions.create({
    model: 'llama-2-7b-chat.Q2_K.gguf',
    messages: [
      { role: 'user', content: 'Explain async/await in JavaScript' }
    ]
  });
  
  console.log(response.choices[0].message.content);
}

chat();

3. cURL (Terminal)

curl http://localhost:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-agentdock-admin" \
  -d '{
    "model": "llama-2-7b-chat.Q2_K.gguf",
    "messages": [
      {
        "role": "user",
        "content": "What is the meaning of life?"
      }
    ],
    "temperature": 0.8,
    "max_tokens": 200
  }'

4. LangChain Integration

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Point to AgentDock
llm = ChatOpenAI(
    base_url="http://localhost:5000/v1",
    api_key="sk-agentdock-admin",
    model="llama-2-7b-chat.Q2_K.gguf"
)

# Use anywhere in LangChain
messages = [HumanMessage(content="Translate 'hello' to French")]
response = llm.invoke(messages)
print(response.content)

Network Access (Use from Other Devices)

Your AgentDock API can be accessed from any device on your network:

  1. Find your machine's IP address

    • Windows: ipconfig → look for IPv4 Address
    • macOS/Linux: ifconfig or ip addr → look for inet
    • Example: 192.168.1.100
  2. Update your base URL

    client = OpenAI(
        base_url="http://192.168.1.100:5000/v1",  # Use your IP
        api_key="sk-agentdock-admin"
    )
  3. Firewall: Ensure port 5000 is allowed through your firewall

Use Cases:

  • 📱 Run AgentDock on a powerful desktop, access from laptop/tablet
  • 🔬 Share with team members on the same network
  • 🏠 Run on a home server, access from anywhere in your house
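A quick way to confirm the API is reachable from another device is to request /v1/models with the bearer key. A minimal sketch using only the standard library (make_models_request is a helper invented here; substitute your own LAN IP and key):

```python
import urllib.request

def make_models_request(host: str, key: str = "sk-agentdock-admin") -> urllib.request.Request:
    """Build an authenticated GET request for AgentDock's /v1/models endpoint."""
    return urllib.request.Request(
        f"http://{host}:5000/v1/models",
        headers={"Authorization": f"Bearer {key}"},
    )

# On the remote device (uncomment and use your machine's LAN IP):
# with urllib.request.urlopen(make_models_request("192.168.1.100"), timeout=5) as resp:
#     print(resp.read().decode()[:200])
```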

📚 Documentation

πŸ—οΈ Architecture

┌──────────────────────────────────────────────────────────────┐
│                          AgentDock                           │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌─────────────┐      ┌──────────────┐     ┌──────────────┐  │
│  │  Electron   │◄────►│   React UI   │◄───►│ .NET Backend │  │
│  │  Desktop    │      │  (Frontend)  │     │    (API)     │  │
│  └─────────────┘      └──────────────┘     └──────┬───────┘  │
│                                                   │          │
│                                          ┌────────▼───────┐  │
│                                          │   llama.cpp    │  │
│                                          │  (Inference)   │  │
│                                          └────────┬───────┘  │
│                                                   │          │
│                                          ┌────────▼───────┐  │
│                                          │   AI Models    │  │
│                                          │    (.gguf)     │  │
│                                          └────────────────┘  │
│                                                              │
└──────────────────────────────────────────────────────────────┘

Technology Stack:

  • Desktop: Electron, Node.js
  • Frontend: React 18, TypeScript, TailwindCSS, Vite, React Router
  • Backend: .NET 8, ASP.NET Core, Swagger/OpenAPI
  • AI Engine: llama.cpp (CPU/CUDA/ROCm/Metal/Vulkan)
  • Models: GGUF format (Llama, Mistral, Phi, etc.)

📂 Project Structure

AgentDock/
│
├── electron/                      # Electron desktop application
│   ├── main.js                    # Main process (app lifecycle)
│   ├── preload.js                 # Preload scripts (security bridge)
│   └── dev.js                     # Development launcher
│
├── src/
│   ├── AgentDock.Backend/         # .NET 8 Web API
│   │   ├── Controllers/           # API endpoints
│   │   │   ├── ChatController.cs         # Chat completions
│   │   │   ├── ModelsController.cs       # Model management
│   │   │   ├── OpenAIController.cs       # OpenAI-compatible routes
│   │   │   ├── AnalyticsController.cs    # Usage analytics
│   │   │   └── ...
│   │   │
│   │   ├── Services/              # Business logic
│   │   │   ├── SettingsService.cs
│   │   │   ├── SystemMonitorService.cs
│   │   │   ├── LogsService.cs
│   │   │   └── ...
│   │   │
│   │   ├── Infrastructure/        # External integrations
│   │   │   ├── Llama/
│   │   │   │   ├── LlamaCppService.cs        # llama.cpp HTTP client
│   │   │   │   └── LlamaLifecycleService.cs  # Process management
│   │   │   │
│   │   │   └── HuggingFace/
│   │   │       ├── HuggingFaceService.cs     # Model search & details
│   │   │       ├── ModelDownloadManager.cs   # Download queue
│   │   │       └── ModelRecommendationService.cs
│   │   │
│   │   ├── Core/                  # Domain models
│   │   │   ├── Interfaces/
│   │   │   └── Models/
│   │   │
│   │   ├── models/                # AI model files (.gguf)
│   │   ├── bin/llama/             # llama.cpp binaries
│   │   └── appsettings.json       # Configuration
│   │
│   └── AgentDock.UI/              # React frontend
│       ├── src/
│       │   ├── components/        # Reusable UI components
│       │   │   ├── Layout.tsx
│       │   │   ├── ModelBrowser.tsx
│       │   │   ├── ModelDetailsDrawer.tsx
│       │   │   └── ui/            # shadcn/ui components
│       │   │
│       │   ├── pages/             # Main application pages
│       │   │   ├── Dashboard.tsx
│       │   │   ├── Chat.tsx
│       │   │   ├── Models.tsx
│       │   │   ├── DownloadManager.tsx
│       │   │   ├── DownloadedModels.tsx
│       │   │   ├── APIPlayground.tsx
│       │   │   ├── Analytics.tsx
│       │   │   └── Settings.tsx
│       │   │
│       │   ├── api/               # API client
│       │   ├── hooks/             # Custom React hooks
│       │   ├── lib/               # Utilities
│       │   ├── locales/           # i18n translations
│       │   └── types/             # TypeScript types
│       │
│       └── public/                # Static assets
│
├── llama.cpp/                     # llama.cpp binaries (zips)
│   ├── llama-b7648-bin-win-cpu-x64.zip
│   ├── llama-b7648-bin-win-cuda-12.4-x64.zip
│   ├── llama-b7648-bin-win-vulkan-x64.zip
│   └── models/                    # Optional: model storage
│
├── package.json                   # npm dependencies & scripts
├── electron-builder.json          # Electron build configuration
├── setup.ps1                      # Windows setup script
├── setup.sh                       # Linux/macOS setup script
└── README.md                      # You are here!

🔑 API Endpoints

OpenAI-Compatible Routes

  • POST /v1/chat/completions - Create chat completion (streaming supported)
  • GET /v1/models - List available models

Management Routes

  • GET /api/models - List local GGUF models
  • GET /api/models/search - Search HuggingFace models
  • GET /api/models/suggestions - Get hardware-based recommendations
  • POST /api/models/download - Start model download
  • GET /api/models/download/{id} - Check download progress
  • GET /api/models/downloaded - List downloaded models
  • GET /api/analytics - Get API usage analytics
  • GET /api/engine/health - Check llama.cpp server health
  • GET /swagger - Interactive API documentation

βš™οΈ Configuration

appsettings.json (Backend Configuration)

{
  "Llama": {
    "BaseUrl": "http://127.0.0.1:8080",
    "Port": 8080,
    "Host": "127.0.0.1",
    "DefaultModel": "llama-2-7b-chat.Q2_K.gguf",
    "ModelsPath": "models",
    "ExecutablePath": "bin/llama/llama-server.exe",
    "GpuLayers": 35,              // 0 for CPU-only, 35+ for GPU
    "ContextSize": 4096,          // Model context window
    "RequestTimeout": 300
  },
  "Security": {
    "ApiKey": "sk-agentdock-admin"  // Change this!
  }
}

Environment Variables (Optional)

# Override default configuration
LLAMA_PORT=8080
LLAMA_GPU_LAYERS=35
API_KEY=your-secure-key-here
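A launcher or client script might resolve these overrides with simple environment-variable fallbacks, along these lines (an illustrative sketch; the backend's actual precedence rules may differ):

```python
import os

# Defaults mirroring appsettings.json; environment variables win when set.
DEFAULTS = {"LLAMA_PORT": "8080", "LLAMA_GPU_LAYERS": "35", "API_KEY": "sk-agentdock-admin"}

def resolve(name: str) -> str:
    """Return the environment override if present, else the baked-in default."""
    return os.environ.get(name, DEFAULTS[name])
```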

💾 System Requirements

Minimum Requirements

  • OS: Windows 10/11, macOS 11+, Ubuntu 20.04+
  • CPU: x64 processor with AVX support
  • RAM: 8 GB (can run small models)
  • Storage: 10 GB + model sizes (2-40 GB per model)
  • GPU: Optional (CPU-only works fine)

Recommended for Best Performance

  • RAM: 16-32 GB (for 7B-13B models)
  • GPU: NVIDIA RTX 3060+ (12 GB VRAM) or AMD RX 6800+
  • Storage: SSD with 50+ GB free

GPU Acceleration Support

  • NVIDIA: CUDA 12.4+ (GeForce GTX 1060+, RTX series, Tesla, A100)
  • AMD: ROCm 5.0+ (RX 6000+, Radeon VII, MI series)
  • Intel: SYCL/oneAPI (Arc A-series, Iris Xe)
  • Apple: Metal (M1, M2, M3, all variants)
  • Universal: Vulkan (any GPU with Vulkan 1.2+)

🤝 Contributing

We love contributions! AgentDock is a community-driven project and we welcome developers of all skill levels.

🌟 Ways to Contribute

  • πŸ› Report Bugs: Found an issue? Open a bug report
  • πŸ’‘ Suggest Features: Have an idea? Request a feature
  • πŸ“ Improve Docs: Documentation can always be better
  • 🌍 Translate: Help us reach more users (i18n support built-in!)
  • 🎨 Design: UI/UX improvements welcome
  • πŸ’» Code: Implement features, fix bugs, optimize performance

🚀 Development Workflow

  1. Fork & Clone

    git clone https://github.com/YOUR_USERNAME/AgentDock.git
    cd AgentDock
  2. Setup Development Environment

    ./setup.ps1    # Windows
    ./setup.sh     # Linux/macOS
  3. Create a Feature Branch

    git checkout -b feature/amazing-new-feature
  4. Make Your Changes

    • Write clean, readable code
    • Follow existing code style
    • Add comments for complex logic
    • Update documentation if needed
  5. Test Your Changes

    npm run dev        # Test in development mode
    npm run build      # Ensure production build works
  6. Commit with Conventional Commits

    git commit -m "feat: add amazing new feature"

    Commit Types:

    • feat: New feature
    • fix: Bug fix
    • docs: Documentation changes
    • style: Code formatting (no logic changes)
    • refactor: Code refactoring
    • perf: Performance improvements
    • test: Adding tests
    • chore: Build/tooling changes
  7. Push & Create PR

    git push origin feature/amazing-new-feature

    Then open a Pull Request on GitHub with a clear description.

📋 Code Review Process

  1. Automated Checks: CI/CD runs tests and builds
  2. Code Review: Maintainers review your code
  3. Feedback: We may request changes
  4. Approval: Once approved, we merge!
  5. Release: Your contribution ships in the next release

🎯 Good First Issues

New to the project? Look for issues labeled good first issue.

📜 Code of Conduct

We follow the Contributor Covenant. Be respectful, inclusive, and constructive.


❓ FAQ

Q: Do I need to pay for anything?

A: No! AgentDock is 100% free and open-source. You only pay for the electricity to run it on your machine. No subscriptions, no API costs.

Q: Is my data private?

A: Absolutely. Everything runs locally on your machine. No data is ever sent to external servers (except when downloading models from HuggingFace, which is a one-time thing).

Q: Can I use this commercially?

A: Yes! AgentDock is MIT licensed. Use it however you wantβ€”personal, commercial, enterprise. Just keep the license file.

Q: What models can I use?

A: Any GGUF model from HuggingFace or elsewhere. Popular choices:

  • Llama 2 (7B, 13B, 70B)
  • Mistral (7B)
  • Phi-2 (2.7B - great for low-end hardware)
  • Code Llama (7B, 13B, 34B)
  • Mixtral (8x7B)

Q: How much RAM do I need?

A: Depends on the model:

  • 2-3B models: 4-6 GB RAM
  • 7B models: 8-12 GB RAM
  • 13B models: 16-24 GB RAM
  • 70B models: 64+ GB RAM (or use smaller quantizations)

AgentDock shows you compatibility before downloading!
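These figures follow a rough rule of thumb (an approximation, not AgentDock's exact check): a quantized model's weights take about parameter count times bits-per-weight divided by eight, plus headroom for the KV cache and runtime.

```python
def approx_ram_gb(params_billions: float, bits_per_weight: float = 4.5,
                  overhead: float = 1.3) -> float:
    """Very rough memory estimate for a quantized GGUF model.
    E.g. 7B at ~4.5 bits -> ~3.9 GB of weights, ~5.1 GB with headroom."""
    weights_gb = params_billions * bits_per_weight / 8
    return round(weights_gb * overhead, 1)
```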

Q: Do I need a powerful GPU?

A: No! AgentDock works great on CPU-only. GPU just makes it faster. Even a GTX 1060 can give you 5-10x speedup.

Q: Can I use this with LangChain/AutoGPT/etc?

A: Yes! Just point the base_url to http://localhost:5000/v1. Any tool that supports OpenAI's API will work.

Q: How do I update models?

A: Just download a new one from the Models page. You can have multiple models and switch between them instantly.


πŸ—ΊοΈ Roadmap

🚀 Coming Soon

  • Multi-Model Support - Run multiple models simultaneously
  • Model Fine-Tuning - UI for LoRA fine-tuning
  • Voice Input/Output - TTS and STT integration
  • Plugins System - Extend functionality with plugins
  • Cloud Sync - Sync settings across devices (optional)
  • Docker Support - Run AgentDock in containers
  • Function Calling - OpenAI function calling API
  • Vision Models - Support for LLaVA and other vision models
  • Model Merging - Merge multiple models in the UI

💭 Under Consideration

  • Collaborative workspaces
  • Built-in RAG (Retrieval Augmented Generation)
  • Model quantization tools
  • Prompt template library
  • Mobile companion app

Vote on features: GitHub Discussions


📄 License

MIT License

Copyright (c) 2024-2026 AgentDock Contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

See LICENSE file for full details.


πŸ™ Acknowledgments

AgentDock stands on the shoulders of giants:

  • llama.cpp - The incredible C++ inference engine that makes this all possible
  • OpenAI - For the API specification that became the industry standard
  • HuggingFace - For hosting and democratizing AI models
  • Electron - For making cross-platform desktop apps easy
  • React - For the amazing UI framework
  • .NET - For the powerful backend framework
  • Tailwind CSS - For making styling actually enjoyable

And to all our contributors who make AgentDock better every day! 💖


📞 Community & Support


⭐ Star Us on GitHub!

If AgentDock helps you, please consider giving us a star. It helps others discover the project!

Star History Chart


Made with ❤️ by developers, for developers

Privacy-first β€’ Open-source β€’ Community-driven

⬆ Back to Top

About

Local LLM execution platform with an OpenAI-compatible API, enabling seamless integration with existing applications. Built with a .NET backend and a React-based desktop interface.
