Give your coding agents a canvas to express themselves.
Elenweave is a shared canvas where coding agents and humans collaborate. The core app exposes a surface for coding agents to express complex ideas as a rich graph on a canvas. The agent controls the canvas via a skill, while a human can manipulate it by manually placing nodes or by using an embedded AI.
This project is early stage and likely has bugs. It has only been tested against Codex so far; however, the skill is general and should work with other agents.
Move coding agents beyond the terminal and markdown files. The terminal is still the fastest way to build a software app, but it is not the right surface for building an understanding of the code. Coding agents have accelerated the pace of development.
- A visual workspace for projects and boards
- A collaboration surface for human input + AI-generated structure
- A lightweight local app with file-backed server mode or browser-only client mode
- You can control the app via a coding agent: ask the agent to create documentation on a specific topic, or a plan, and it will update the app using the skill
- You can directly use the canvas in your browser.
- The same board can be read and updated by both an agent and a human
- You can either run it locally or self-host it as a server, in which case it uses the file system as the storage backend
- Or you can run it in client-only mode, in which case it uses browser IndexedDB as storage; more details below
- Create and edit boards with rich node types (text, forms, charts, code, markdown, media)
- Use `MermaidBlock` and `SvgBlock` nodes for diagram-driven explanations
- Ask AI to generate board actions using the `ew-actions/v1` plan contract
- Add follow-up AI interactions through `TextInput` and `OptionPicker` nodes
- Attach image/audio/text assets to nodes
- Use realtime Gemini voice mode from the app panel (experimental)
- Persist work by project/board (server mode) or IndexedDB (client mode)
Install globally (optional):

```
npm i -g elenweave-app
```

Run (recommended, no global install needed):

```
npx elenweave-app
```

Run after global install:

```
elenweave-app
```

Install the Codex skill (skill repo: https://github.com/the-code-rider/elenweave-skill):

```
npx skills add the-code-rider/elenweave-skill
```

Default behavior with no flags (`npx elenweave-app`):
- mode: `server`
- host: `127.0.0.1`
- port: `8787`
- data root: `~/.elenweave` (Windows: `%USERPROFILE%\.elenweave`)
- API routes enabled (`/api/*`)
- browser runtime injected as `storageMode: "server"`
- seed policy default: `first-run`
- seed read-only mode default: `off`
- read-only fork default: `local`
Open:
http://127.0.0.1:8787/
Development run from the repo:

```
npm run server
```

Run as the packaged app (published):

```
npx elenweave-app
```

Or run with file-watch:

```
npm run dev
```

CLI mode:

```
npm run server:cli -- --mode server --host 0.0.0.0 --port 8080
# or
node server/cli.js --mode server --host 0.0.0.0 --port 8080
```

Client-only runtime (browser IndexedDB, API disabled):

```
npx elenweave-app --mode client
# or
npm run server:cli -- --mode client
```

If startup fails with `EADDRINUSE` on `127.0.0.1:8787`, either stop the process using that port or start on a different port:

```
npm run server:cli -- --port 8788
```

Run from this repo:

```
npm run desktop:install
npm run desktop:dev
```

Build the Windows installer:

```
npm run desktop:build:win
```

Desktop runtime behavior:
- Starts the local Elenweave server (`server/cli.js`) automatically
- Opens the app at `http://127.0.0.1:<port>/app/index.html`
- Exposes a localhost API for coding agents at `http://127.0.0.1:<port>/api/*`
- If the preferred port is busy, falls forward to the next free port up to `maxPort`
Desktop config file:

- Path: `%APPDATA%\Elenweave\config.json`
- Created automatically on first run
- Example shape: `desktop/config.example.json`
Supported desktop config keys:
- `port`: preferred start port (default `8787`)
- `maxPort`: upper bound for fallback scan (default `8899`)
- `mode`: runtime mode (`server` default)
- `dataDir`: server storage root (default `~/.elenweave`)
- `configPath`: path to app/AI config JSON (maps to `ELENWEAVE_CONFIG`)
- `aiConfigPath`: optional dedicated AI config JSON (maps to `ELENWEAVE_AI_CONFIG`)
- `envOverrides`: additional env vars passed to the local server process
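Put together, a minimal desktop config using these keys might look like the following; all values are illustrative, and `desktop/config.example.json` in the repo remains the authoritative example:

```json
{
  "port": 8787,
  "maxPort": 8899,
  "mode": "server",
  "dataDir": "~/.elenweave",
  "envOverrides": { "ELENWEAVE_SEED_POLICY": "first-run" }
}
```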
Desktop-only in-app settings panel:
- Open `Settings` in the sidebar
- Use the `Desktop` section to view runtime info, edit `port`/`dataDir`/`configPath`/`aiConfigPath`, save the config, and restart the server
When running the local server, AI keys can be loaded from environment variables or a config file, so browser requests are served through the local `/api/ai/*` endpoints.
See:
- `docs/SERVER.md`
- `docs/seed.md`
- `docs/followup.md`
- `docs/public-catalog.md`
- `server/config.example.json`
Configured in `app/index.html` (and overridden by `server/index.js` when served by the local server).
Accepted params:
| Param | Type | Required | Description |
|---|---|---|---|
| `storageMode` | `'client' \| 'server'` | Yes | Storage source-of-truth mode. |
| `serverBase` | `string` | No | Base URL for server API in server mode. Defaults to same-origin (`''`). |
| `seedReadOnlyMode` | `'off' \| 'all' \| 'projects'` | No | Hosted seed read-only mode injected by server. |
| `seedReadOnlyProjectIds` | `string[]` | No | Read-only project IDs when `seedReadOnlyMode='projects'`. |
| `readOnlyFork` | `'off' \| 'local'` | No | Read-only edit behavior (`local` = browser IndexedDB fork). |
| `experimentalHandControls` | `boolean` | No | Enable/disable MediaPipe hand-controls feature availability at runtime. |
| `handControlsModelBaseUrl` | `string` | No | Optional base URL for `hand_landmarker.task` (defaults to hosted MediaPipe model). |
| `publicProjectsCatalogUrl` | `string` | No | Optional URL to a public project catalog JSON. Enables the in-app Download panel. |
Mode behavior:
- `storageMode: "server"`: the app uses `/api/projects/*` and file-backed server storage.
- `storageMode: "client"`: the app uses browser IndexedDB only and does not call server APIs.
- Toggle from `Tools` -> `Hand Controls` (`Off` / `On (exp)`).
- The feature is off by default and does not change existing mouse/keyboard/touch controls.
- When enabled, browser camera access is required; disabling stops the camera stream immediately.
- URL kill switch: append `?hand=off` to force-disable for that session URL.
- URL force-enable: append `?hand=on` to auto-enable at startup.
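The URL override could be interpreted along these lines (a sketch only; the function name is illustrative, not from the Elenweave source):

```javascript
// Map the documented ?hand=off / ?hand=on query parameter to an override.
function handControlOverride(href) {
  const v = new URL(href).searchParams.get('hand');
  if (v === 'on') return 'force-enable';
  if (v === 'off') return 'force-disable';
  return null; // no override: fall back to the in-app toggle state
}
```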
Set the catalog URL to enable the download panel:
```
ELENWEAVE_PUBLIC_CATALOG_URL=https://example.com/catalog.json npm run server
```

The catalog JSON shape:

```json
{
  "projects": [
    {
      "id": "catalog-project-id",
      "name": "Project name",
      "description": "Short description",
      "publisher": "Publisher name",
      "publishedAt": "2026-01-01T00:00:00Z",
      "updatedAt": "2026-01-10T00:00:00Z",
      "version": "1.0.0",
      "coverUrl": "https://example.com/cover.png",
      "tags": ["tag-a", "tag-b"],
      "manifestUrl": "https://example.com/project-manifest.json"
    }
  ]
}
```

The manifest JSON shape:
```json
{
  "id": "catalog-project-id",
  "name": "Project name",
  "description": "Short description",
  "publisher": "Publisher name",
  "publishedAt": "2026-01-01T00:00:00Z",
  "updatedAt": "2026-01-10T00:00:00Z",
  "version": "1.0.0",
  "coverUrl": "https://example.com/cover.png",
  "tags": ["tag-a", "tag-b"],
  "assets": [
    { "id": "asset-1", "name": "logo", "mimeType": "image/png", "file": "assets/logo.png" }
  ],
  "boards": [
    { "name": "Intro board", "file": "boards/intro.json" }
  ]
}
```

Boards can be inlined instead of `file`:
```json
{
  "name": "Intro board",
  "payload": { "nodes": [], "edges": [], "nodeOrder": [], "notifications": [] }
}
```

File lookup order:

1. `ELENWEAVE_CONFIG` (explicit path, CLI: `--config`)
2. `ELENWEAVE_AI_CONFIG` (legacy explicit path)
3. `~/.elenweave/config.json` (default location)
4. `./config.json`
5. `./server/config.json`
Accepted params:
| Param | Type | Description |
|---|---|---|
| `openaiApiKey` | `string` | OpenAI key |
| `geminiApiKey` | `string` | Gemini key |
| `googleApiKey` | `string` | Gemini-compatible Google key |
| `openaiModel` | `string` | Default OpenAI model used for AI requests |
| `geminiModel` | `string` | Default Gemini model used for AI requests |
| `openai.apiKey` | `string` | OpenAI key (nested form) |
| `gemini.apiKey` | `string` | Gemini key (nested form) |
| `openai.model` | `string` | Default OpenAI model (nested form) |
| `gemini.model` | `string` | Default Gemini model (nested form) |
| `openaiDefaultModel` | `string` | OpenAI default model (top-level alias) |
| `geminiDefaultModel` | `string` | Gemini default model (top-level alias) |
| `openai.defaultModel` | `string` | OpenAI default model (nested alias) |
| `gemini.defaultModel` | `string` | Gemini default model (nested alias) |
| `providers.openai.apiKey` | `string` | OpenAI key (provider map form) |
| `providers.gemini.apiKey` | `string` | Gemini key (provider map form) |
| `providers.openai.model` | `string` | Default OpenAI model (provider map form) |
| `providers.gemini.model` | `string` | Default Gemini model (provider map form) |
| `providers.openai.defaultModel` | `string` | OpenAI default model (provider alias) |
| `providers.gemini.defaultModel` | `string` | Gemini default model (provider alias) |
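For instance, a minimal flat-form config file could look like the following; the key values and model names are placeholders, and `server/config.example.json` remains the authoritative shape:

```json
{
  "openaiApiKey": "sk-...",
  "geminiApiKey": "...",
  "openaiModel": "gpt-4o-mini",
  "geminiModel": "gemini-2.0-flash"
}
```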
| Variable | Default | Description |
|---|---|---|
| `HOST` | `127.0.0.1` | Server bind host |
| `PORT` | `8787` | Server bind port |
| `ELENWEAVE_RUNTIME_MODE` | `server` | Runtime mode: `server` or `client` |
| `ELENWEAVE_DATA_DIR` | `~/.elenweave` | Data root for projects/boards/assets |
| `ELENWEAVE_LOCK_TIMEOUT_MS` | `5000` | Lock wait timeout (ms) |
| `ELENWEAVE_LOCK_RETRY_MS` | `50` | Lock retry interval (ms) |
| `ELENWEAVE_SEED_DIR` | (unset) | Native seed directory (data-root snapshot) |
| `ELENWEAVE_SEED_JSON` | (unset) | Portable JSON seed file |
| `ELENWEAVE_SEED_POLICY` | `first-run` | Seed apply policy: `first-run`, `always`, `versioned` |
| `ELENWEAVE_SEED_VERSION` | (unset) | Seed version used with `versioned` policy |
| `ELENWEAVE_SEED_READONLY` | `off` | Seed read-only mode: `off`, `all`, `projects` |
| `ELENWEAVE_READONLY_FORK` | `local` | Read-only fork behavior: `local` or `off` |
| `ELENWEAVE_EXPERIMENTAL_HAND_CONTROLS` | `true` | Enable or disable MediaPipe hand-controls runtime toggle |
| `ELENWEAVE_HAND_CONTROLS_MODEL_BASE_URL` | (unset) | Optional base URL containing `hand_landmarker.task` |
| `ELENWEAVE_CONFIG` | (unset) | Path to app/AI config JSON |
| `ELENWEAVE_AI_CONFIG` | (unset) | Path to AI config JSON |
| `ELENWEAVE_OPENAI_API_KEY` | (unset) | Preferred OpenAI key env var |
| `OPENAI_API_KEY` | (unset) | OpenAI key env var |
| `ELENWEAVE_GEMINI_API_KEY` | (unset) | Preferred Gemini key env var |
| `GEMINI_API_KEY` | (unset) | Gemini key env var |
| `GOOGLE_API_KEY` | (unset) | Gemini-compatible Google key env var |
| `ELENWEAVE_OPENAI_MODEL` | (unset) | Default OpenAI model for server-side AI proxy |
| `ELENWEAVE_GEMINI_MODEL` | (unset) | Default Gemini model for server-side AI proxy |
| `ELENWEAVE_OPENAI_DEFAULT_MODEL` | (unset) | Alias for OpenAI default model env var |
| `ELENWEAVE_GEMINI_DEFAULT_MODEL` | (unset) | Alias for Gemini default model env var |
AI key precedence:

- OpenAI: `ELENWEAVE_OPENAI_API_KEY` -> `OPENAI_API_KEY` -> config file values
- Gemini: `ELENWEAVE_GEMINI_API_KEY` -> `GEMINI_API_KEY` -> `GOOGLE_API_KEY` -> config file values

AI model precedence:

- OpenAI: `ELENWEAVE_OPENAI_MODEL` -> `ELENWEAVE_OPENAI_DEFAULT_MODEL` -> config file values -> app fallback
- Gemini: `ELENWEAVE_GEMINI_MODEL` -> `ELENWEAVE_GEMINI_DEFAULT_MODEL` -> config file values -> app fallback
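As a sketch, the OpenAI side of this precedence could be expressed as follows (flat config keys only; `resolveOpenAI` and the fallback string are illustrative names, not from the Elenweave source):

```javascript
// Apply the documented env-var -> config-file -> app-fallback precedence.
function resolveOpenAI(env, config = {}) {
  const apiKey =
    env.ELENWEAVE_OPENAI_API_KEY ||
    env.OPENAI_API_KEY ||
    config.openaiApiKey ||
    null;
  const model =
    env.ELENWEAVE_OPENAI_MODEL ||
    env.ELENWEAVE_OPENAI_DEFAULT_MODEL ||
    config.openaiModel ||
    'app-fallback-model'; // stand-in for the app's built-in fallback
  return { apiKey, model };
}
```

The Gemini side is analogous, with `GOOGLE_API_KEY` as an extra key fallback.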
- `app/`: client app and AI/UI logic
- `server/`: static hosting + REST APIs for projects/boards/assets/AI proxy
- `docs/`: architecture, server behavior, and AI feature docs
