Deploy fleets of specialized AI agents across 12 LLM providers simultaneously.
- Node.js 18 or higher
- npm 9 or higher
- At least one LLM provider API key (see Providers)
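If you're unsure which Node.js version you're running, a quick preflight check from Node itself can confirm it before you install. This snippet is illustrative and not part of the repo:

```typescript
// Illustrative preflight check (not part of Agentis): verify the local
// Node.js runtime meets the minimum version before installing.
const [major] = process.versions.node.split(".").map(Number);
if (major < 18) {
  throw new Error(`Node.js 18+ required, found ${process.versions.node}`);
}
console.log(`Node.js ${process.versions.node} OK`);
```

`node --version` and `npm --version` on the command line tell you the same thing.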
```bash
# 1. Clone the repository
git clone https://github.com/Dhwanil25/Agentis.git
cd Agentis

# 2. Install dependencies
npm install

# 3. Start the development server
npm run dev
```

Open http://localhost:5173 in your browser.
When you open Agentis for the first time, you'll see a setup prompt asking for an API key.
Minimum to get started: an Anthropic API key (`sk-ant-...`).
- Go to Settings → Providers
- Paste your Anthropic API key
- Click Save
That's it. You can now run tasks in Chat and Universe.
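If a pasted key is rejected, the most common cause is a wrong-provider key. Anthropic keys carry the `sk-ant-` prefix mentioned above; a sketch of that shape check (the helper below is illustrative, not part of Agentis):

```typescript
// Illustrative only: Agentis validates keys in Settings. This sketch just
// shows the expected shape of an Anthropic key (the "sk-ant-" prefix).
function looksLikeAnthropicKey(key: string): boolean {
  return key.startsWith("sk-ant-") && key.length > "sk-ant-".length;
}

console.log(looksLikeAnthropicKey("sk-ant-api03-example")); // true
console.log(looksLikeAnthropicKey("sk-proj-example"));      // false
```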
- Click Chat in the sidebar
- Select a persona (e.g., `dev`, `analyst`, `writer`)
- Type a task and press Enter
The agent will run a multi-step pipeline and stream the output live.
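The pipeline shape can be sketched roughly as below. The step names and `runPipeline` function are hypothetical, chosen for illustration, and not Agentis's actual API:

```typescript
// Hypothetical sketch of a multi-step agent pipeline that streams output
// as each step completes. Step names are illustrative, not Agentis's API.
type Step = (input: string) => string;

const steps: Record<string, Step> = {
  plan: (task) => `plan for: ${task}`,
  execute: (plan) => `result of: ${plan}`,
  summarize: (result) => `summary: ${result}`,
};

function* runPipeline(task: string): Generator<string> {
  let current = task;
  for (const [name, step] of Object.entries(steps)) {
    current = step(current);
    yield `[${name}] ${current}`; // streamed to the UI as each step finishes
  }
}

for (const chunk of runPipeline("audit the login flow")) {
  console.log(chunk);
}
```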
- Click Universe in the sidebar
- Type a complex task (e.g., "Research the top 5 AI coding tools and write a comparison report")
- Click Launch
Multiple specialized agents spawn in parallel — each working a different angle, then synthesizing into one output.
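The fan-out/synthesize pattern behind Universe can be sketched as below. The agent roles and the merge step are illustrative assumptions, not Agentis's actual implementation:

```typescript
// Hypothetical sketch of the fan-out/synthesize pattern: several agents
// work the same task from different angles concurrently, then one pass
// merges their partial results. Roles and merge logic are illustrative.
type Agent = { role: string; run: (task: string) => Promise<string> };

const agents: Agent[] = ["researcher", "analyst", "writer"].map((role) => ({
  role,
  run: async (task) => `${role} findings on "${task}"`,
}));

async function launchUniverse(task: string): Promise<string> {
  // Fan out: all agents run in parallel.
  const results = await Promise.all(agents.map((a) => a.run(task)));
  // Synthesize: merge the partial outputs into one report.
  return results.join("\n");
}

launchUniverse("compare AI coding tools").then((report) => console.log(report));
```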
Agentis supports 12 LLM providers. Each one is optional. See Providers for keys and setup.
```bash
npm run build
```

Output goes to `dist/`. Deploy anywhere static files are served: Vercel, Netlify, Cloudflare Pages, or self-hosted nginx.
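For self-hosted nginx, note that a single-page app needs a fallback to `index.html` so client-side routes resolve. A minimal sketch, assuming you copied the build output to `/var/www/agentis/dist` (paths are assumptions):

```nginx
server {
    listen 80;
    root /var/www/agentis/dist;  # wherever you copied the build output
    index index.html;

    location / {
        # SPA fallback: unknown paths serve index.html so client-side
        # routing can take over instead of returning 404.
        try_files $uri $uri/ /index.html;
    }
}
```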
| Topic | Doc |
|---|---|
| Multi-agent system | universe.md |
| All 12 providers | providers.md |
| Persistent memory | memory.md |
| Skills system | skills.md |
| Workflow templates | workflows.md |
| Codebase architecture | architecture.md |