From 2df81bf3dfba7f0e2ad6f5b8ee7c2a3eb2d39ace Mon Sep 17 00:00:00 2001 From: HiranoMasaaki Date: Wed, 25 Feb 2026 09:48:33 +0000 Subject: [PATCH] docs: rewrite CLI reference to match current implementation Complete rewrite covering all commands including the new expert management subcommands, with accurate arguments, options, and defaults. Co-Authored-By: Claude Opus 4.6 --- docs/references/cli.md | 520 ++++++++++++++++++++++------------------- 1 file changed, 286 insertions(+), 234 deletions(-) diff --git a/docs/references/cli.md b/docs/references/cli.md index 2b0ceb69..a5082826 100644 --- a/docs/references/cli.md +++ b/docs/references/cli.md @@ -4,367 +4,419 @@ sidebar: order: 2 --- +## Command Overview + +``` +perstack +├── start Interactive TUI for developing and testing experts +├── run Headless execution with JSON event output +├── log View execution history and events +├── install Pre-collect tool definitions for faster startup +└── expert Manage experts on Perstack API + ├── list List draft scopes + ├── create Create a new draft scope + ├── delete Delete a draft scope + ├── push Push local expert definitions to a draft ref + ├── refs List draft refs + ├── version Assign a version to a draft ref + ├── versions List published versions + ├── publish Make an expert scope public + ├── unpublish Make an expert scope private + └── yank Deprecate a specific version +``` + ## Running Experts ### `perstack start` -Interactive workbench for developing and testing Experts. +Interactive workbench for developing and testing experts. ```bash perstack start [expertKey] [query] [options] ``` **Arguments:** -- `[expertKey]`: Expert key (optional — prompts if not provided) -- `[query]`: Input query (optional — prompts if not provided) -Opens a text-based UI for iterating on Expert definitions. See [Running Experts](../using-experts/running-experts.md). 
+| Argument      | Required | Description                           |
+| ------------- | -------- | ------------------------------------- |
+| `[expertKey]` | No       | Expert key (prompts if not provided)  |
+| `[query]`     | No       | Input query (prompts if not provided) |

 ### `perstack run`

-Headless execution for production and automation.
+Headless execution for production and automation. Outputs JSON events to stdout.

 ```bash
 perstack run <expertKey> <query> [options]
 ```

 **Arguments:**

-- `<expertKey>`: Expert key (required)
-  - Examples: `my-expert`, `@org/my-expert`, `@org/my-expert@1.0.0`
-- `<query>`: Input query (required)

-Outputs JSON events to stdout.
+| Argument      | Required | Description                                                           |
+| ------------- | -------- | --------------------------------------------------------------------- |
+| `<expertKey>` | Yes      | Expert key (e.g., `my-expert`, `@org/my-expert`, `@org/expert@1.0.0`) |
+| `<query>`     | Yes      | Input query                                                           |

-## Shared Options
+**`run`-only option:**

-Both `start` and `run` accept the same options:
+| Option             | Description                                                                 |
+| ------------------ | --------------------------------------------------------------------------- |
+| `--filter <types>` | Filter events by type (comma-separated, e.g., `completeRun,stopRunByError`) |

-### Model and Provider
+### Shared Options

-| Option                  | Description  | Default             |
-| ----------------------- | ------------ | ------------------- |
-| `--provider <provider>` | LLM provider | `anthropic`         |
-| `--model <model>`       | Model name   | `claude-sonnet-4-5` |
+Both `start` and `run` accept the following options:

-Providers: `anthropic`, `google`, `openai`, `deepseek`, `ollama`, `azure-openai`, `amazon-bedrock`, `google-vertex`
+#### Model and Provider

-### Execution Control
+| Option                        | Default             | Description                                                           |
+| ----------------------------- | ------------------- | --------------------------------------------------------------------- |
+| `--provider <provider>`       | `anthropic`         | LLM provider                                                          |
+| `--model <model>`             | `claude-sonnet-4-5` | Model name                                                            |
+| `--reasoning-budget <budget>` | -                   | Reasoning budget (`minimal`, `low`, `medium`, `high`, or token count) |

-| Option              | Description                       | Default  |
-| ------------------- | --------------------------------- | -------- |
-| `--max-retries <n>` | Max retry attempts per generation | `5`      |
-| `--timeout <ms>`    | Timeout per generation (ms)       | `300000` |
+Providers: `anthropic`, `google`, `openai`, `deepseek`, `ollama`, `azure-openai`, `amazon-bedrock`, `google-vertex`

-### Reasoning
+#### Execution Control

-| Option                        | Description                                                                                    | Default |
-| ----------------------------- | ---------------------------------------------------------------------------------------------- | ------- |
-| `--reasoning-budget <budget>` | Reasoning budget for native LLM reasoning (`minimal`, `low`, `medium`, `high`, or token count) | -       |
+| Option              | Default  | Description                       |
+| ------------------- | -------- | --------------------------------- |
+| `--max-retries <n>` | `5`      | Max retry attempts per generation |
+| `--timeout <ms>`    | `300000` | Timeout per generation (ms)       |

-### Configuration
+#### Configuration

-| Option                  | Description             | Default                |
-| ----------------------- | ----------------------- | ---------------------- |
-| `--config <path>`       | Path to `perstack.toml` | Auto-discover from cwd |
-| `--env-path <paths...>` | Environment file paths  | `.env`, `.env.local`   |
+| Option                  | Default                | Description             |
+| ----------------------- | ---------------------- | ----------------------- |
+| `--config <path>`       | Auto-discover from cwd | Path to `perstack.toml` |
+| `--env-path <paths...>` | `.env`, `.env.local`   | Environment file paths  |

-### Job and Run Management
+#### Job and Run Management

 | Option                         | Description                                                 |
 | ------------------------------ | ----------------------------------------------------------- |
-| `--job-id <id>`                | Custom Job ID for new Job (default: auto-generated)         |
-| `--continue`                   | Continue latest Job with new Run                            |
-| `--continue-job <jobId>`       | Continue specific Job with new Run                          |
+| `--job-id <id>`                | Custom job ID (default: auto-generated)                     |
+| `--continue`                   | Continue latest job with new run                            |
+| `--continue-job <jobId>`       | Continue specific job with new run                          |
 | `--resume-from <checkpointId>` | Resume from specific checkpoint (requires `--continue-job`) |

-**Combining options:**
-
 ```bash
-# Continue latest Job from its latest checkpoint
+# Continue latest job from its latest checkpoint
 --continue

-# Continue specific Job from its latest checkpoint
+# Continue specific job from its latest checkpoint
 --continue-job <jobId>

-# Continue specific Job from a specific checkpoint
+# Continue specific job from a specific checkpoint
 --continue-job <jobId> --resume-from <checkpointId>
 ```

-**Note:** `--resume-from` requires `--continue-job` (Job ID must be specified). You can only resume from the Coordinator Expert's checkpoints.
+`--resume-from` requires `--continue-job`. You can only resume from the coordinator expert's checkpoints.

-### Interactive
+#### Interactive

-| Option                               | Description                                 |
-| ------------------------------------ | ------------------------------------------- |
+| Option                               | Description                                 |
+| ------------------------------------ | ------------------------------------------- |
 | `-i, --interactive-tool-call-result` | Treat query as interactive tool call result |

-Use with `--continue` to respond to interactive tool calls from the Coordinator Expert.
+Use with `--continue` to respond to interactive tool calls from the coordinator expert.

-### Output Filtering (`run` only)
+#### Other

-| Option             | Description                                                                 |
-| ------------------ | --------------------------------------------------------------------------- |
-| `--filter <types>` | Filter events by type (comma-separated, e.g., `completeRun,stopRunByError`) |
+| Option      | Description            |
+| ----------- | ---------------------- |
+| `--verbose` | Enable verbose logging |

-### Other
+## Debugging and Inspection

-| Option      | Description                                                |
-| ----------- | ---------------------------------------------------------- |
-| `--verbose` | Enable verbose logging (see [Verbose Mode](#verbose-mode)) |
+### `perstack log`

-## Verbose Mode
+View execution history and events for debugging.
-The `--verbose` flag enables detailed logging for debugging purposes, showing additional runtime information in the output.
+```bash
+perstack log [options]
+```

-## Examples
+When called without options, shows a summary of the latest job (max 100 events).

-```bash
-# Basic execution (creates new Job)
-npx perstack run my-expert "Review this code"
+**Options:**

-# With model options
-npx perstack run my-expert "query" \
-  --provider google \
-  --model gemini-2.5-pro
+| Option                        | Default | Description                                     |
+| ----------------------------- | ------- | ----------------------------------------------- |
+| `--job <jobId>`               | -       | Show events for a specific job                  |
+| `--run <runId>`               | -       | Show events for a specific run                  |
+| `--checkpoint <checkpointId>` | -       | Show checkpoint details                         |
+| `--step <step>`               | -       | Filter by step number (e.g., `5`, `>5`, `1-10`) |
+| `--type <type>`               | -       | Filter by event type                            |
+| `--errors`                    | -       | Show only error-related events                  |
+| `--tools`                     | -       | Show only tool call events                      |
+| `--delegations`               | -       | Show only delegation events                     |
+| `--filter <expression>`       | -       | Simple filter expression                        |
+| `--json`                      | -       | Output as JSON                                  |
+| `--pretty`                    | -       | Pretty-print JSON output                        |
+| `--verbose`                   | -       | Show full event details                         |
+| `--take <n>`                  | `100`   | Number of events to display (`0` for all)       |
+| `--offset <n>`                | `0`     | Number of events to skip                        |
+| `--context <n>`               | -       | Include N events before/after matches           |
+| `--messages`                  | -       | Show message history for checkpoint             |
+| `--summary`                   | -       | Show summarized view                            |
+
+**Event types:**
+
+`startRun`, `callTools`, `resolveToolResults`, `callDelegate`, `stopRunByError`, `retry`, `completeRun`, `continueToNextStep`
+
+**Filter expression syntax:**

-# Continue Job with follow-up
-npx perstack run my-expert "initial query"
-npx perstack run my-expert "follow-up" --continue
+```bash
+--filter '.type == "completeRun"'
+--filter '.stepNumber > 5'
+--filter '.toolCalls[].skillName == "base"'
+```

-# Continue specific Job from latest checkpoint
-npx perstack run my-expert "continue" --continue-job job_abc123
+**Step range syntax:**

-# Continue specific Job from specific checkpoint
-npx perstack run my-expert "retry with different approach" \
-  --continue-job job_abc123 \
-  --resume-from checkpoint_xyz
+```bash
+--step 5      # Exact step
+--step ">5"   # Greater than 5
+--step ">=5"  # Greater than or equal to 5
+--step "1-10" # Range (inclusive)
+```

-# Custom Job ID for new Job
-npx perstack run my-expert "query" --job-id my-custom-job
+## Performance Optimization

-# Respond to interactive tool call
-npx perstack run my-expert "user response" --continue -i
+### `perstack install`

-# Custom config
-npx perstack run my-expert "query" \
-  --config ./configs/production.toml \
-  --env-path .env.production
+Pre-collect tool definitions to enable instant LLM inference.

-# Registry Experts
-npx perstack run tic-tac-toe "Let's play!"
-npx perstack run @org/expert@1.0.0 "query"
+```bash
+perstack install [options]
 ```

-## Debugging and Inspection
+By default, Perstack initializes MCP skills at runtime to discover their tool definitions. This can add 500ms-6s startup latency per skill. `perstack install` solves this by:

-### `perstack log`
+1. Initializing all skills once and collecting their tool schemas
+2. Caching the schemas in a `perstack.lock` file
+3. Enabling the runtime to start LLM inference immediately using cached schemas
+4. Deferring actual MCP connections until tools are called

-View execution history and events for debugging.
+**Options:**
+
+| Option                  | Default                | Description             |
+| ----------------------- | ---------------------- | ----------------------- |
+| `--config <path>`       | Auto-discover from cwd | Path to `perstack.toml` |
+| `--env-path <paths...>` | `.env`, `.env.local`   | Environment file paths  |
+
+The lockfile is optional. If not present, skills are initialized at runtime.
+
+## Expert Management
+
+The `expert` command group manages experts on the Perstack API.
 ```bash
-perstack log [options]
+perstack expert <command> [options]
 ```

-**Purpose:**
+**Parent options (inherited by all subcommands):**

-Inspect job/run execution history and events for debugging. This command is designed for both human inspection and AI agent usage, making it easy to diagnose issues in Expert runs.
+| Option             | Default                    | Description  |
+| ------------------ | -------------------------- | ------------ |
+| `--api-key <key>`  | `PERSTACK_API_KEY` env var | API key      |
+| `--base-url <url>` | `https://api.perstack.ai`  | API base URL |

-**Default Behavior:**
+### `expert list`

-When called without options, shows a summary of the latest job with:
-- "(showing latest job)" indicator when no `--job` specified
-- "Storage: <path>" showing where data is stored
-- Maximum 100 events (use `--take 0` for all)
+List draft scopes.

-**Options:**
+```bash
+perstack expert list [options]
+```

-| Option                        | Description                                           |
-| ----------------------------- | ----------------------------------------------------- |
-| `--job <jobId>`               | Show events for a specific job                        |
-| `--run <runId>`               | Show events for a specific run                        |
-| `--checkpoint <checkpointId>` | Show checkpoint details                               |
-| `--step <step>`               | Filter by step number (e.g., `5`, `>5`, `1-10`)       |
-| `--type <type>`               | Filter by event type                                  |
-| `--errors`                    | Show only error-related events                        |
-| `--tools`                     | Show only tool call events                            |
-| `--delegations`               | Show only delegation events                           |
-| `--filter <expression>`       | Simple filter expression                              |
-| `--json`                      | Output as JSON (machine-readable)                     |
-| `--pretty`                    | Pretty-print JSON output                              |
-| `--verbose`                   | Show full event details                               |
-| `--take <n>`                  | Number of events to display (default: 100, 0 for all) |
-| `--offset <n>`                | Number of events to skip (default: 0)                 |
-| `--context <n>`               | Include N events before/after matches                 |
-| `--messages`                  | Show message history for checkpoint                   |
-| `--summary`                   | Show summarized view                                  |
-
-**Event Types:**
-
-| Event Type           | Description                  |
-| -------------------- | ---------------------------- |
-| `startRun`           | Run started                  |
-| `callTools`          | Tool calls made              |
-| `resolveToolResults` | Tool results received        |
-| `callDelegate`       | Delegation to another expert |
-| `stopRunByError`     | Error occurred               |
-| `retry`              | Generation retry             |
-| `completeRun`        | Run completed                |
-| `continueToNextStep` | Step transition              |
-
-**Filter Expression Syntax:**
-
-Simple conditions are supported:
+| Option            | Description    |
+| ----------------- | -------------- |
+| `--filter <name>` | Filter by name |
+| `--take <n>`      | Limit results  |
+| `--skip <n>`      | Offset         |

-```bash
-# Exact match
---filter '.type == "completeRun"'
+### `expert create`

-# Numeric comparison
---filter '.stepNumber > 5'
---filter '.stepNumber >= 5'
---filter '.stepNumber < 10'
+Create a new draft scope.

-# Array element matching
---filter '.toolCalls[].skillName == "base"'
+```bash
+perstack expert create <name> --app <appId>
 ```

-**Step Range Syntax:**
+| Argument | Required | Description       |
+| -------- | -------- | ----------------- |
+| `<name>` | Yes      | Expert scope name |
+
+| Option          | Required | Description    |
+| --------------- | -------- | -------------- |
+| `--app <appId>` | Yes      | Application ID |
+
+### `expert delete`
+
+Delete a draft scope.

 ```bash
---step 5 # Exact step number
---step ">5" # Greater than 5
---step ">=5" # Greater than or equal to 5
---step "1-10" # Range (inclusive)
+perstack expert delete <draftScopeId>
 ```

-**Examples:**
+| Argument         | Required | Description    |
+| ---------------- | -------- | -------------- |
+| `<draftScopeId>` | Yes      | Draft scope ID |
+
+### `expert push`
+
+Push local expert definitions to a draft ref.
 ```bash
-# Show latest job summary
-perstack log
+perstack expert push <draftScopeId> [options]
+```

-# Show all events for a specific job
-perstack log --job abc123
+| Argument         | Required | Description    |
+| ---------------- | -------- | -------------- |
+| `<draftScopeId>` | Yes      | Draft scope ID |

-# Show events for a specific run
-perstack log --run xyz789
+| Option            | Description             |
+| ----------------- | ----------------------- |
+| `--config <path>` | Path to `perstack.toml` |

-# Show checkpoint details with messages
-perstack log --checkpoint cp123 --messages
+Reads experts from `perstack.toml` and creates a new draft ref.

-# Show only errors
-perstack log --errors
+### `expert refs`

-# Show tool calls for steps 5-10
-perstack log --tools --step "5-10"
+List draft refs for a draft scope.

-# Filter by event type
-perstack log --job abc123 --type callTools
+```bash
+perstack expert refs <draftScopeId> [options]
+```
+
+| Argument         | Required | Description    |
+| ---------------- | -------- | -------------- |
+| `<draftScopeId>` | Yes      | Draft scope ID |

-# JSON output for automation
-perstack log --job abc123 --json
+| Option       | Description   |
+| ------------ | ------------- |
+| `--take <n>` | Limit results |
+| `--skip <n>` | Offset        |

-# Error diagnosis with context
-perstack log --errors --context 5
+### `expert version`

-# Filter with expression
-perstack log --filter '.toolCalls[].skillName == "base"'
+Assign a semantic version to a draft ref.
-# Summary view
-perstack log --summary
+```bash
+perstack expert version <draftScopeId> <draftRefId> <version> [options]
 ```

-**Output Format:**
+| Argument         | Required | Description                      |
+| ---------------- | -------- | -------------------------------- |
+| `<draftScopeId>` | Yes      | Draft scope ID                   |
+| `<draftRefId>`   | Yes      | Draft ref ID                     |
+| `<version>`      | Yes      | Semantic version (e.g., `1.0.0`) |

-Terminal output (default) shows human-readable format with colors:
+| Option            | Description                  |
+| ----------------- | ---------------------------- |
+| `--tag <tag>`     | Version tag (e.g., `latest`) |
+| `--readme <path>` | Path to README file          |

+### `expert versions`
+
+List published versions for an expert scope. Does not require an API key for public experts.
+
+```bash
+perstack expert versions <name>
 ```
-Job: abc123 (completed)
-Expert: my-expert@1.0.0
-Started: 2024-12-23 10:30:15
-Steps: 12
-
-Events:
-─────────────────────────────────────────────
-[Step 1] startRun 10:30:15
-  Expert: my-expert@1.0.0
-  Query: "Analyze this code..."
-
-[Step 2] callTools 10:30:18
-  Tools: read_file, write_file
-
-[Step 3] resolveToolResults 10:30:22
-  ✓ read_file: Success
-  ✗ write_file: Permission denied
-─────────────────────────────────────────────
+| Argument | Required | Description       |
+| -------- | -------- | ----------------- |
+| `<name>` | Yes      | Expert scope name |
+
+### `expert publish`
+
+Make an expert scope public.
+
+```bash
+perstack expert publish <name>
 ```
-JSON output (`--json`) for machine parsing:
-
-```json
-{
-  "job": { "id": "abc123", "status": "completed" },
-  "events": [
-    { "type": "startRun", "stepNumber": 1 }
-  ],
-  "summary": {
-    "totalEvents": 15,
-    "errorCount": 0
-  }
-}
-```

-## Performance Optimization
+| Argument | Required | Description       |
+| -------- | -------- | ----------------- |
+| `<name>` | Yes      | Expert scope name |

-### `perstack install`
+### `expert unpublish`

-Pre-collect tool definitions to enable instant LLM inference.
+Make an expert scope private.
 ```bash
-perstack install [options]
+perstack expert unpublish <name>
 ```

-**Purpose:**
+| Argument | Required | Description       |
+| -------- | -------- | ----------------- |
+| `<name>` | Yes      | Expert scope name |

-By default, Perstack initializes MCP skills at runtime to discover their tool definitions. This can add 500ms-6s startup latency per skill. `perstack install` solves this by:
+### `expert yank`

-1. Initializing all skills once and collecting their tool schemas
-2. Caching the schemas in a `perstack.lock` file
-3. Enabling the runtime to start LLM inference immediately using cached schemas
-4. Deferring actual MCP connections until tools are called
+Deprecate a specific expert version.

-**Options:**
+```bash
+perstack expert yank <expertKey>
+```
+
+| Argument      | Required | Description                                       |
+| ------------- | -------- | ------------------------------------------------- |
+| `<expertKey>` | Yes      | Expert key with version (e.g., `my-expert@1.0.0`) |
+
+## Environment Variables

-| Option                  | Description             | Default                |
-| ----------------------- | ----------------------- | ---------------------- |
-| `--config <path>`       | Path to `perstack.toml` | Auto-discover from cwd |
-| `--env-path <paths...>` | Environment file paths  | `.env`, `.env.local`   |
+| Variable                | Description                                                |
+| ----------------------- | ---------------------------------------------------------- |
+| `PERSTACK_API_KEY`      | API key for `expert` commands                              |
+| `PERSTACK_STORAGE_PATH` | Storage directory for job/run data (default: `./perstack`) |

-**Example:**
+## Examples

 ```bash
-# Generate lockfile for current project
-perstack install
+# Basic execution
+perstack run my-expert "Review this code"

-# Generate lockfile for specific config
-perstack install --config ./configs/production.toml
+# Interactive TUI
+perstack start

-# Re-generate after adding new skills
-perstack install
-```
+# With model options
+perstack run my-expert "query" --provider google --model gemini-2.5-pro

-**Output:**
+# Continue job with follow-up
+perstack run my-expert "initial query"
+perstack run my-expert "follow-up" --continue

-Creates `perstack.lock` in the same directory as `perstack.toml`. This file contains:
+# Continue specific job from specific checkpoint
+perstack run my-expert "retry" --continue-job job_abc --resume-from cp_xyz

-- All expert definitions (including resolved delegates from registry)
-- All tool definitions for each expert's skills
+# Respond to interactive tool call
+perstack run my-expert "user response" --continue -i

-**When to run:**
+# Custom config and env
+perstack run my-expert "query" --config ./production.toml --env-path .env.production

-- After adding or modifying skills in `perstack.toml`
-- After updating MCP server dependencies
-- Before deploying to production for faster startup
+# Registry experts
+perstack run @org/expert@1.0.0 "query"

-**Note:** The lockfile is optional. If not present, skills are initialized at runtime as usual.
+# Generate lockfile
+perstack install
+
+# Expert lifecycle
+perstack expert create my-expert --app app_123
+perstack expert push clxxx --config ./perstack.toml
+perstack expert version clxxx rfxxx 1.0.0 --tag latest
+perstack expert versions my-expert
+perstack expert publish my-expert
+perstack expert yank my-expert@1.0.0
+perstack expert unpublish my-expert
+perstack expert delete clxxx
+
+# View execution logs
+perstack log
+perstack log --job abc123 --errors --context 5
+perstack log --json --pretty
+```
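+
+### Scripting with JSON events
+
+Because `perstack run` writes JSON events to stdout, its output can be post-processed with standard tools. The sketch below is illustrative, not part of the CLI: it assumes the stream is line-delimited (one JSON event per line) and relies only on the `type` field that `--filter` and `perstack log --type` match on.
+
+```bash
+# Reduce a run to its terminal events, then print each event's type with jq.
+# Assumes one JSON event per line; `type` is the documented event-type field.
+perstack run my-expert "query" --filter completeRun,stopRunByError \
+  | jq -r '.type'
+```
+
+Exiting non-zero when a `stopRunByError` event appears (e.g., by piping through `grep -q stopRunByError`) is one way to surface failed runs in CI.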