This project is an AI workflow engine that turns a natural-language intent into an executable workflow: a planner (LLM) produces a plan, which is compiled into a multi-step workflow, and an executor runs each step by calling tools.
Tools are pluggable—in-process (e.g. Notion), remote HTTP (e.g. MCP Tavily), and stdio subprocesses (e.g. a small prime-check MCP server)—and are registered in a central registry so the executor can invoke them in a uniform way.
The AI Workflow Engine bridges probabilistic AI (LLMs) and deterministic software execution:
- User Intent → Natural language query
- Planner (LLM) → Translates intent into a structured Plan
- Workflow Compilation → Plan compiled into immutable, executable Workflow
- Executor → Runs workflow step-by-step, calling tools
Key Features: Tool discovery from MCP servers, dynamic planner prompts, workflow persistence, transport-agnostic tools (in-process/HTTP/stdio).
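The transport-agnostic tool design can be sketched as follows. This is a minimal illustration with hypothetical names (the real project's types may differ): every tool, whether in-process, HTTP-backed, or a stdio subprocess, is registered behind the same async signature, so the executor can invoke any of them uniformly.

```typescript
// Hypothetical sketch of a transport-agnostic tool registry; names are
// illustrative, not the project's actual API.
type ToolHandler = (input: Record<string, unknown>) => Promise<unknown>;

class ToolRegistry {
  private tools = new Map<string, ToolHandler>();

  register(name: string, handler: ToolHandler): void {
    this.tools.set(name, handler);
  }

  call(name: string, input: Record<string, unknown>): Promise<unknown> {
    const handler = this.tools.get(name);
    if (!handler) throw new Error(`Unknown tool: ${name}`);
    return handler(input); // same call path for every transport
  }
}

// Example in-process tool mirroring the prime-check MCP server's purpose.
const registry = new ToolRegistry();
registry.register("is_prime", async (input) => {
  const n = Number(input.n);
  if (!Number.isInteger(n) || n < 2) return { isPrime: false };
  for (let i = 2; i * i <= n; i++) {
    if (n % i === 0) return { isPrime: false };
  }
  return { isPrime: true };
});
```

An HTTP or stdio tool would register a handler that forwards the input to the remote server; the executor never needs to know which transport is behind a name.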
Example Queries:
- "Check if 101 is a prime number and write the result to Notion"
- "Search for Arsenal FC latest match analysis and post it on Notion"
Note: The LLM is used only once to translate the query into a plan. The rest executes deterministically.
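That single LLM boundary can be sketched as below. The shapes are hypothetical (the project's actual Plan schema may differ): the raw LLM output is parsed and validated exactly once, and everything downstream is ordinary deterministic code.

```typescript
// Hypothetical Plan shapes; the real schema may differ.
interface PlanStep {
  tool: string;
  input: Record<string, unknown>;
}

interface Plan {
  steps: PlanStep[];
}

// The only probabilistic input crosses this function once.
function parsePlan(rawOutput: string): Plan {
  const data = JSON.parse(rawOutput); // throws on malformed JSON
  if (!Array.isArray(data.steps)) throw new Error("Plan must have a steps array");
  for (const step of data.steps) {
    if (typeof step.tool !== "string") throw new Error("Each step needs a tool name");
  }
  return data as Plan;
}

// Raw output an LLM might return for "Check if 101 is a prime number".
const plan = parsePlan('{"steps":[{"tool":"is_prime","input":{"n":101}}]}');
```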
For detailed architecture documentation, see CASESTUDY.md.
```mermaid
sequenceDiagram
  actor User
  participant Application
  participant MCPServers@{ "type": "collections" }
  participant ExternalWorld@{ "type": "collections" }
  participant LLM
  Note over Application: Application started
  Note over Application: Initialize ToolRegistry
  Note over Application, MCPServers: Tool discovery
  Application->>MCPServers: GET /tools/list
  MCPServers->>Application: Array<{ name: string, inputSchema: JsonSchema }>
  Note over Application: Load tool details into ToolRegistry
  User-->>Application: User prompt/query
  Note over Application: Generates a prompt from the user input
  Application-->>LLM: Sends prompt
  LLM-->>Application: Returns rawOutput
  Note over Application: Parses and validates the rawOutput into a Workflow
  Note over Application: Executor starts executing the Workflow
  loop Required number of tool calls for the Workflow
    Application->>MCPServers: POST /tools/call or JSON over stdio
    MCPServers->>ExternalWorld: Side effect (e.g. writing to a file) or fetching data
    ExternalWorld->>MCPServers: Data from ExternalWorld
    MCPServers->>Application: { result: JSON }
  end
  Application-->>User: Final response
```
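The executor loop in the diagram can be sketched as follows. The shapes and names here are hypothetical: the transport is injected, so the same loop serves HTTP MCP servers (POST /tools/call) and stdio subprocesses (JSON over stdin/stdout) alike.

```typescript
// Hypothetical executor-loop sketch; not the project's actual implementation.
type Transport = (
  tool: string,
  input: Record<string, unknown>
) => Promise<{ result: unknown }>;

interface Step {
  tool: string;
  input: Record<string, unknown>;
}

async function executeWorkflow(steps: Step[], transport: Transport): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const step of steps) {
    // One tools/call round trip per workflow step, as in the loop above.
    const { result } = await transport(step.tool, step.input);
    results.push(result);
  }
  return results;
}

// Stand-in transport for this sketch; a real one would talk to an MCP server.
const mockTransport: Transport = async (tool) => ({ result: `${tool}: ok` });
```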
Prerequisites: Node.js v18+, npm/yarn, Docker & Docker Compose
Environment Variables (create `.env` file):

```
OPENAI_API_KEY=your_openai_api_key_here
NOTION_API_KEY=your_notion_api_key_here
NOTION_DATABASE_ID=your_notion_database_id_here
TAVILLY_API_KEY=your_tavily_api_key_here   # Optional
MCP_TAVILY_URL=http://localhost:13001      # Optional, defaults shown
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/ai_workflow_dev
```

```shell
# 1. Clone and install
git clone <repository-url>
cd ai-workflow-engine
npm install

# 2. Set up environment variables (create .env file)
# See Prerequisites section above

# 3. Start database and run migrations
npm run db:start
npx prisma migrate dev

# 4. (Optional) Start MCP Tavily server
npm run dev:mcp-tavily

# 5. Start application
npm run dev
```

The application runs on http://localhost:13099.
Test endpoints:

```shell
# Health check
curl http://localhost:13099/health

# Create and execute workflow
curl -X POST http://localhost:13099/run \
  -H "Content-Type: application/json" \
  -d '{"intent": "Check if 101 is a prime number"}'

# Create plan only
curl -X POST http://localhost:13099/plan \
  -H "Content-Type: application/json" \
  -d '{"intent": "Search for Arsenal FC match analysis"}'
```

API endpoints:
- `GET /health` - Health check
- `POST /plan` - Create plan from intent (no execution). Body: `{ "intent": "Your query here" }`
- `POST /run` - Create plan, compile workflow, save to DB, and execute. Body: `{ "intent": "Your query here" }`
Database commands:

```shell
npm run db:start     # Start database
npm run db:stop      # Stop database
npm run db:logs      # View logs
npm run db:console   # Open console
```

Type checking: `npm run build`
License: ISC