feat: add Smart Dispatcher for automatic agent type selection #20
Conversation
Implements LLM-based automatic agent type selection when `agentType` is not specified in task creation. The dispatcher analyzes task descriptions and selects the most appropriate agent type with confidence scoring.

Features:
- SmartDispatcher service with in-memory cache and configurable TTL
- Uses Claude Haiku for low-latency dispatch (~500 ms)
- Confidence threshold with fallback to a default agent type
- MCP `task_create` now accepts optional `agentType`
- REST `POST /api/v1/tasks` supports optional `agentType`
- New REST endpoint `POST /api/v1/tasks/dispatch` for preview
- CLI command: `aistack agent auto <description>`
  - `--dry-run`: preview selection without executing
  - `--confirm`: ask before executing
  - `--provider`/`--model`: override LLM settings

Configuration (`aistack.config.json`):
- `smartDispatcher.enabled` (default: `true`)
- `smartDispatcher.cacheEnabled` (default: `true`)
- `smartDispatcher.cacheTTLMs` (default: `3600000`)
- `smartDispatcher.confidenceThreshold` (default: `0.7`)
- `smartDispatcher.fallbackAgentType` (default: `'coder'`)
- `smartDispatcher.maxDescriptionLength` (default: `1000`)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
📝 Walkthrough

Adds a new SmartDispatcher LLM service and integrates it across the CLI, MCP server/tools, and web API so that tasks created without an explicit `agentType` are routed to an automatically selected agent type.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant API as CLI/Web API
    participant SmartDispatcher
    participant LLM as LLMProvider
    participant Cache
    User->>API: submit task (no agentType)
    API->>SmartDispatcher: dispatch(description)
    alt cache hit
        SmartDispatcher->>Cache: lookup(key)
        Cache-->>SmartDispatcher: cached decision
        SmartDispatcher-->>API: DispatchDecision (cached=true)
    else cache miss
        SmartDispatcher->>LLM: selectAgentType(prompt)
        LLM-->>SmartDispatcher: {agentType, confidence, reasoning}
        SmartDispatcher->>SmartDispatcher: apply threshold/fallback
        SmartDispatcher->>Cache: store decision (if enabled)
        SmartDispatcher-->>API: DispatchDecision (cached=false)
    end
    API->>API: create task with resolved agentType
    API-->>User: return task result + optional dispatch metadata
```
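The dispatch flow in the diagram above can be sketched in TypeScript. This is a hypothetical illustration — the names, signatures, and cache-key format are assumptions, not the actual SmartDispatcher API:

```typescript
// Hypothetical sketch of the cache-hit / cache-miss / threshold-fallback
// flow shown in the sequence diagram; not the real SmartDispatcher code.
interface DispatchDecision {
  agentType: string;
  confidence: number;
  reasoning: string;
  cached: boolean;
}

type LlmCall = (description: string) => DispatchDecision;

function dispatch(
  description: string,
  cache: Map<string, DispatchDecision>,
  callLlm: LlmCall,
  threshold = 0.7,
  fallback = 'coder',
): DispatchDecision {
  const key = `dispatch:${description}`;
  const hit = cache.get(key);
  if (hit) return { ...hit, cached: true }; // cache hit path

  const decision = callLlm(description); // cache miss: ask the LLM
  // Below the confidence threshold, fall back to the default agent type.
  if (decision.confidence < threshold) {
    decision.agentType = fallback;
    decision.reasoning += ' (below threshold; using fallback)';
  }
  cache.set(key, { ...decision });
  return { ...decision, cached: false };
}
```

A second call with the same description returns the cached decision with `cached: true`, matching the `alt cache hit` branch above.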
```mermaid
sequenceDiagram
    participant MCPClient
    participant MCPServer
    participant TaskTools
    participant SmartDispatcher
    participant Memory
    MCPClient->>MCPServer: POST /tasks (optional agentType)
    MCPServer->>TaskTools: createTaskTools(..., smartDispatcher, config)
    TaskTools->>TaskTools: validate input
    alt agentType provided
        TaskTools->>TaskTools: use provided agentType
    else missing agentType
        TaskTools->>SmartDispatcher: dispatch(description)
        SmartDispatcher-->>TaskTools: resolved agentType + dispatchInfo
    end
    TaskTools->>Memory: create/store task with resolved agentType
    TaskTools-->>MCPServer: task creation result (includes dispatchInfo)
    MCPServer-->>MCPClient: response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/tasks/smart-dispatcher.ts`:
- Around line 311-319: The getCacheKey method currently computes a 32-bit hash
which can collide; replace the hashing logic in getCacheKey(description: string)
with a direct use of the (already truncated) description to form the cache key
(e.g., return a stable prefix plus the description) so cached entries map
uniquely to their original description; ensure you still return a string and
preserve the existing `dispatch:` prefix.
- Around line 217-272: The parseResponse function logs the entire LLM response
(content) and uses a greedy JSON regex; update parseResponse to (1) use a
non-greedy JSON extraction (e.g., /\{[\s\S]*?\}/) to avoid matching across extra
braces and improve parsing, and (2) stop logging raw content in log.warn —
instead log a capped preview (e.g., first N chars) or a redacted snippet before
calling log.warn and when building the returned reasoning; keep existing
behavior for agentType normalization (validAgentTypes, config.fallbackAgentType)
and error handling but replace full content with the safe preview in both
log.warn and the returned reasoning string.
In `@src/web/routes/tasks.ts`:
- Around line 69-99: Update the CreateTaskRequest type declarations so agentType
is optional (agentType?: string) in both places where that type is defined (the
backend CreateTaskRequest and the frontend/API CreateTaskRequest used by
clients) so callers can omit agentType and rely on the auto-dispatch logic in
router.post('/api/v1/tasks') which reads body?.agentType; update both type
declarations and run the TypeScript build to fix any type errors that result.
🧹 Nitpick comments (1)
src/mcp/tools/task-tools.ts (1)
96-99: Prefer SmartDispatcher config as fallback when `config` isn't provided.

If callers pass a dispatcher but omit `config`, the fallback can drift from the dispatcher's own configuration.

♻️ Suggested tweak
```diff
- if (!agentType) {
-   agentType = config?.smartDispatcher?.fallbackAgentType ?? 'coder';
- }
+ if (!agentType) {
+   agentType =
+     config?.smartDispatcher?.fallbackAgentType ??
+     smartDispatcher?.getConfig().fallbackAgentType ??
+     'coder';
+ }
```
```typescript
  private parseResponse(content: string): DispatchDecision {
    try {
      // Try to extract JSON from the response
      const jsonMatch = content.match(/\{[\s\S]*\}/);
      if (!jsonMatch) {
        throw new Error('No JSON found in response');
      }

      const parsed = JSON.parse(jsonMatch[0]) as {
        agentType?: string;
        confidence?: number;
        reasoning?: string;
      };

      // Validate required fields
      if (!parsed.agentType || typeof parsed.agentType !== 'string') {
        throw new Error('Invalid or missing agentType');
      }

      const validAgentTypes = [
        'coder', 'researcher', 'tester', 'reviewer', 'adversarial',
        'architect', 'coordinator', 'analyst', 'devops', 'documentation',
        'security-auditor',
      ];

      // Normalize agent type
      const normalizedType = parsed.agentType.toLowerCase().replace(/[_\s]/g, '-');
      const agentType = validAgentTypes.includes(normalizedType)
        ? normalizedType
        : this.config.fallbackAgentType;

      return {
        agentType,
        confidence: typeof parsed.confidence === 'number'
          ? Math.max(0, Math.min(1, parsed.confidence))
          : 0.5,
        reasoning: typeof parsed.reasoning === 'string'
          ? parsed.reasoning
          : 'No reasoning provided',
        cached: false,
        latencyMs: 0,
      };
    } catch (error) {
      log.warn('Failed to parse LLM response', {
        content,
        error: error instanceof Error ? error.message : String(error),
      });

      return {
        agentType: this.config.fallbackAgentType,
        confidence: 0,
        reasoning: `Failed to parse response: ${error instanceof Error ? error.message : String(error)}`,
        cached: false,
        latencyMs: 0,
      };
    }
  }
```
Avoid logging raw LLM response content.
Line 260 logs the full content, which can include user-provided details and become a PII leak. Prefer a capped preview/length. Also, a non‑greedy JSON match helps avoid parse failures when extra braces appear.
🔧 Proposed fix (redact logs + non‑greedy JSON match)
```diff
-      const jsonMatch = content.match(/\{[\s\S]*\}/);
+      const jsonMatch = content.match(/\{[\s\S]*?\}/);
```

```diff
-      log.warn('Failed to parse LLM response', {
-        content,
-        error: error instanceof Error ? error.message : String(error),
-      });
+      log.warn('Failed to parse LLM response', {
+        contentPreview: content.slice(0, 200),
+        contentLength: content.length,
+        error: error instanceof Error ? error.message : String(error),
+      });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
  private parseResponse(content: string): DispatchDecision {
    try {
      // Try to extract JSON from the response
      const jsonMatch = content.match(/\{[\s\S]*?\}/);
      if (!jsonMatch) {
        throw new Error('No JSON found in response');
      }
      const parsed = JSON.parse(jsonMatch[0]) as {
        agentType?: string;
        confidence?: number;
        reasoning?: string;
      };
      // Validate required fields
      if (!parsed.agentType || typeof parsed.agentType !== 'string') {
        throw new Error('Invalid or missing agentType');
      }
      const validAgentTypes = [
        'coder', 'researcher', 'tester', 'reviewer', 'adversarial',
        'architect', 'coordinator', 'analyst', 'devops', 'documentation',
        'security-auditor',
      ];
      // Normalize agent type
      const normalizedType = parsed.agentType.toLowerCase().replace(/[_\s]/g, '-');
      const agentType = validAgentTypes.includes(normalizedType)
        ? normalizedType
        : this.config.fallbackAgentType;
      return {
        agentType,
        confidence: typeof parsed.confidence === 'number'
          ? Math.max(0, Math.min(1, parsed.confidence))
          : 0.5,
        reasoning: typeof parsed.reasoning === 'string'
          ? parsed.reasoning
          : 'No reasoning provided',
        cached: false,
        latencyMs: 0,
      };
    } catch (error) {
      log.warn('Failed to parse LLM response', {
        contentPreview: content.slice(0, 200),
        contentLength: content.length,
        error: error instanceof Error ? error.message : String(error),
      });
      return {
        agentType: this.config.fallbackAgentType,
        confidence: 0,
        reasoning: `Failed to parse response: ${error instanceof Error ? error.message : String(error)}`,
        cached: false,
        latencyMs: 0,
      };
    }
  }
```
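The greedy vs. non-greedy distinction the review draws matters when the model wraps the JSON in extra text containing braces. A quick self-contained illustration (the sample `content` string is invented for the demo):

```typescript
// Demonstrates why the review prefers a non-greedy JSON match: with
// trailing brace-containing text, the greedy regex spans too far to parse.
const content = 'Here you go: {"agentType": "coder", "confidence": 0.9} done {note}';

const greedy = content.match(/\{[\s\S]*\}/);  // spans to the LAST '}'
const lazy = content.match(/\{[\s\S]*?\}/);   // stops at the FIRST '}'

function tryParse(s: string | undefined): unknown | null {
  if (!s) return null;
  try { return JSON.parse(s); } catch { return null; }
}

const greedyResult = tryParse(greedy?.[0]); // invalid JSON: includes '} done {note}'
const lazyResult = tryParse(lazy?.[0]);     // parses the flat JSON object
```

Note the trade-off: the non-greedy form stops at the first `}`, so it only suits flat, single-level JSON like the dispatcher's expected `{agentType, confidence, reasoning}` response; it would truncate nested objects.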
src/tasks/smart-dispatcher.ts
Outdated
```typescript
  private getCacheKey(description: string): string {
    // Simple hash function for cache key
    let hash = 0;
    for (let i = 0; i < description.length; i++) {
      const char = description.charCodeAt(i);
      hash = ((hash << 5) - hash) + char;
      hash = hash & hash; // Convert to 32-bit integer
    }
    return `dispatch:${hash}`;
  }
```
Cache key collisions can return wrong decisions.
Line 312–318 uses a 32‑bit hash that can collide and misroute cached results. Since descriptions are already truncated, using the description directly avoids collisions with minimal cost.
♻️ Proposed fix (use description as key)
```diff
   private getCacheKey(description: string): string {
-    // Simple hash function for cache key
-    let hash = 0;
-    for (let i = 0; i < description.length; i++) {
-      const char = description.charCodeAt(i);
-      hash = ((hash << 5) - hash) + char;
-      hash = hash & hash; // Convert to 32-bit integer
-    }
-    return `dispatch:${hash}`;
+    return `dispatch:${description}`;
   }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
  private getCacheKey(description: string): string {
    return `dispatch:${description}`;
  }
```
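The collision concern raised above can be quantified with the birthday bound: for a 32-bit key space, the probability of at least one collision reaches 50% at roughly √(2·ln 2·2³²) ≈ 77,000 distinct descriptions. A small sketch of the standard approximation (illustrative, not part of the PR):

```typescript
// Birthday-bound estimate: probability of at least one collision among n
// uniformly distributed k-bit hashes, using p ≈ 1 - exp(-n² / (2 · 2^k)).
function collisionProbability(n: number, bits = 32): number {
  const space = Math.pow(2, bits);
  return 1 - Math.exp((-n * n) / (2 * space));
}
```

For a dispatch cache capped at ~1000 entries the risk is small (p ≈ 10⁻⁴), but any single collision silently returns the wrong decision, which is why the review prefers the exact description as the key.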
```diff
 router.post('/api/v1/tasks', async (_req, res, params) => {
   const body = params.body as CreateTaskRequest | undefined;

-  if (!body?.agentType) {
-    throw badRequest('Agent type is required');
-  }
+  let agentType = body?.agentType;
+  let dispatchInfo: { agentType: string; confidence: number; reasoning: string; cached: boolean } | undefined;
+
+  // Auto-dispatch if no agent type specified
+  if (!agentType && body?.input) {
+    const dispatcher = getSmartDispatcher(config);
+    if (dispatcher.isEnabled()) {
+      const dispatchResult = await dispatcher.dispatch(body.input);
+      if (dispatchResult.success && dispatchResult.decision) {
+        agentType = dispatchResult.decision.agentType;
+        dispatchInfo = {
+          agentType: dispatchResult.decision.agentType,
+          confidence: dispatchResult.decision.confidence,
+          reasoning: dispatchResult.decision.reasoning,
+          cached: dispatchResult.decision.cached,
+        };
+      }
+    }
+  }
+
+  // Fallback to default agent type
+  if (!agentType) {
+    agentType = config.smartDispatcher?.fallbackAgentType ?? 'coder';
+  }

   const manager = getManager();
-  const task = manager.createTask(body.agentType, body.input, body.sessionId);
+  const task = manager.createTask(agentType, body?.input, body?.sessionId);
```
🧩 Analysis chain (repository: blackms/aistack)

🏁 Script executed:

```shell
#!/bin/bash
# Search for CreateTaskRequest definitions
rg -n 'interface CreateTaskRequest|type CreateTaskRequest' --type ts
```

🏁 Script executed:

```shell
#!/bin/bash
# Read the CreateTaskRequest definitions
echo "=== src/web/types.ts ==="
sed -n '97,110p' src/web/types.ts
echo ""
echo "=== web/src/api/types.ts ==="
sed -n '123,136p' web/src/api/types.ts
```
Make agentType optional in CreateTaskRequest types.
Both src/web/types.ts and web/src/api/types.ts define agentType: string as required, but the route now supports auto-dispatch when agentType is missing. Update the type to agentType?: string in both files so TypeScript consumers can submit requests without specifying an agent type and let the auto-dispatcher determine it.
- Add dispatchModel config option (default: claude-3-5-haiku-20241022)
- Replace simple hash with FNV-1a algorithm for better cache key distribution
- Update configEquals to include dispatchModel comparison
- Fix integration test to be order-agnostic for task listing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
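The commit above swaps the shift-based hash for FNV-1a. A minimal 32-bit FNV-1a sketch over a string looks like the following — this is an illustrative implementation, not necessarily the one committed (the PR's variant and key formatting may differ):

```typescript
// Illustrative 32-bit FNV-1a hash over UTF-16 code units; the committed
// implementation may differ (e.g. byte-wise input or a 64-bit variant).
function fnv1a32(input: string): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    // Multiply by the FNV prime (16777619 = 2^24 + 2^8 + 0x93) via shifts,
    // keeping the intermediate sum exact and truncating to 32 bits.
    hash = (hash + ((hash << 1) + (hash << 4) + (hash << 7) + (hash << 8) + (hash << 24))) >>> 0;
  }
  return hash >>> 0;
}

// Hypothetical cache-key helper mirroring the `dispatch:` prefix convention.
function cacheKey(description: string): string {
  return `dispatch:${fnv1a32(description).toString(16)}`;
}
```

FNV-1a distributes similar inputs better than the `(hash << 5) - hash` scheme it replaces, though as a 32-bit hash it is still subject to the collision caveat raised earlier in the review.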
Update default dispatch model from claude-3-5-haiku to claude-haiku-4-5-20251001.

Available Claude 4.5 models:
- claude-haiku-4-5-20251001 (default, fast & cost-effective)
- claude-sonnet-4-5-20250929 (balanced intelligence & speed)
- claude-opus-4-5-20251101 (maximum intelligence)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/tasks/smart-dispatcher.ts`:
- Around line 296-302: In cacheDecision, avoid storing the original
DispatchDecision reference; create and store a deep copy of the decision before
calling this.cache.set (use a safe deep-clone approach such as structuredClone
or a JSON-based clone) so external callers cannot mutate cached state; also
ensure the corresponding getter (the method that reads from this.cache, e.g.,
wherever getCacheKey / cache lookup uses the cached entry) returns a copy of the
cached decision rather than the stored object to fully prevent outside mutation.
```typescript
  private cacheDecision(description: string, decision: DispatchDecision): void {
    const key = this.getCacheKey(description);
    this.cache.set(key, {
      decision,
      expiresAt: Date.now() + this.config.cacheTTLMs,
    });
  }
```
Prevent cache entries from being mutated by external callers.
The cached decision shares the same object reference returned to callers; mutations could corrupt cache state.
🛡️ Proposed fix (store a copy)
```diff
-    this.cache.set(key, {
-      decision,
-      expiresAt: Date.now() + this.config.cacheTTLMs,
-    });
+    this.cache.set(key, {
+      decision: { ...decision },
+      expiresAt: Date.now() + this.config.cacheTTLMs,
+    });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
  private cacheDecision(description: string, decision: DispatchDecision): void {
    const key = this.getCacheKey(description);
    this.cache.set(key, {
      decision: { ...decision },
      expiresAt: Date.now() + this.config.cacheTTLMs,
    });
  }
```
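The fix above shallow-copies on write; a fuller defense deep-copies on both write and read so neither the caller's object nor the returned object can mutate cached state. A hypothetical sketch of that pattern (names are invented; `structuredClone` is available in Node 17+):

```typescript
// Hypothetical cache wrapper illustrating copy-on-write plus copy-on-read;
// not the actual SmartDispatcher code.
interface DispatchDecision {
  agentType: string;
  confidence: number;
  reasoning: string;
  cached: boolean;
}

class DecisionCache {
  private cache = new Map<string, { decision: DispatchDecision; expiresAt: number }>();

  set(key: string, decision: DispatchDecision, ttlMs: number): void {
    // Deep-copy on write so later mutations by the caller don't leak in.
    this.cache.set(key, {
      decision: structuredClone(decision),
      expiresAt: Date.now() + ttlMs,
    });
  }

  get(key: string): DispatchDecision | undefined {
    const entry = this.cache.get(key);
    if (!entry || entry.expiresAt < Date.now()) return undefined;
    // Deep-copy on read so callers can't mutate the cached state.
    return structuredClone(entry.decision);
  }
}
```

For a flat object like `DispatchDecision` the spread copy in the suggestion is equivalent; the deep clone only matters if the decision ever gains nested fields.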
- Add tests for cache cleanup when exceeding 1000 entries
- Add tests for cleanCache() expired entry removal
- Add tests for selectAgentType error when no provider
- Add tests for parseResponse edge cases (missing/invalid agentType, missing reasoning)
- Add tests for singleton configEquals edge cases (undefined configs)
- Add tests for provider creation failure handling

Coverage improved:
- Statements: 92.54% → 100%
- Branches: 80.95% → 94.66%
- Functions: 94.11% → 100%

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Summary

- When `agentType` is not specified, the dispatcher selects one automatically
- Explicitly passing `agentType` still works as before

Features

Smart Dispatcher Service

Integration Points

- MCP `task_create` tool now accepts optional `agentType`
- `POST /api/v1/tasks` supports optional `agentType`
- `POST /api/v1/tasks/dispatch` endpoint for preview
- `aistack agent auto <description>` command

CLI Options

Configuration

```json
{
  "smartDispatcher": {
    "enabled": true,
    "cacheEnabled": true,
    "cacheTTLMs": 3600000,
    "confidenceThreshold": 0.7,
    "fallbackAgentType": "coder",
    "maxDescriptionLength": 1000
  }
}
```

Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Behavior Changes
Tests