robroyhobbs/aigne-framework

What is AIGNE Framework

AIGNE Framework is a functional AI application development framework designed to simplify and accelerate the process of building modern applications. It combines functional programming features, powerful artificial intelligence capabilities, and modular design principles to help developers easily create scalable solutions. AIGNE Framework is also deeply integrated with the Blocklet ecosystem, providing developers with a wealth of tools and resources.

Key Features

  • Modular Design: With a clear modular structure, developers can easily organize code, improve development efficiency, and simplify maintenance.
  • TypeScript Support: Comprehensive TypeScript type definitions are provided, ensuring type safety and enhancing the developer experience.
  • Multiple AI Model Support: Built-in support for OpenAI, Gemini, Claude and other mainstream AI models, easily extensible to support additional models.
  • Flexible Workflow Patterns: Support for sequential, concurrent, routing, handoff and other workflow patterns to meet various complex application requirements.
  • MCP Protocol Integration: Seamless integration with external systems and services through the Model Context Protocol.
  • Code Execution Capabilities: Support for executing dynamically generated code in a secure sandbox, enabling more powerful automation capabilities.
  • Blocklet Ecosystem Integration: Closely integrated with the Blocklet ecosystem, providing developers with a one-stop solution for development and deployment.

Quick Start

Installation

```shell
# Using npm
npm install @aigne/core

# Using yarn
yarn add @aigne/core

# Using pnpm
pnpm add @aigne/core
```

Usage Example

```typescript
import { AIAgent, AIGNE } from "@aigne/core";
import { OpenAIChatModel } from "@aigne/core/models/openai-chat-model.js";

const model = new OpenAIChatModel({
  apiKey: process.env.OPENAI_API_KEY,
  model: process.env.DEFAULT_CHAT_MODEL || "gpt-4-turbo",
});

function transferToAgentB() {
  return agentB;
}

function transferToAgentA() {
  return agentA;
}

const agentA = AIAgent.from({
  name: "AgentA",
  instructions: "You are a helpful agent.",
  outputKey: "A",
  skills: [transferToAgentB],
});

const agentB = AIAgent.from({
  name: "AgentB",
  instructions: "Only speak in Haikus.",
  outputKey: "B",
  skills: [transferToAgentA],
});

const aigne = new AIGNE({ model });

const userAgent = aigne.invoke(agentA);

const response = await userAgent.invoke("transfer to agent b");
// output
// {
//   B: "Agent B awaits here,  \nIn haikus I shall speak now,  \nWhat do you seek, friend?",
// }
```

Packages

  • examples - Example project demonstrating how to use different agents to handle various tasks.
  • packages/core - Core package providing the foundation for building AIGNE applications.

Documentation

Architecture

AIGNE Framework supports various workflow patterns to address different AI application needs. Each workflow pattern is optimized for specific use cases:

Sequential Workflow

Use Cases: Processing multi-step tasks that require a specific execution order, such as content generation pipelines, multi-stage data processing, etc.

```mermaid
flowchart LR
in(In)
out(Out)
conceptExtractor(Concept Extractor)
writer(Writer)
formatProof(Format Proof)

in --> conceptExtractor --> writer --> formatProof --> out

classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;

class in inputOutput
class out inputOutput
class conceptExtractor processing
class writer processing
class formatProof processing
```
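The sequential pattern boils down to chaining agent invocations, where each agent's output becomes the next agent's input. The sketch below illustrates only the control flow: the `invoke` helper and the `Agent` type are stand-ins for a real AIGNE setup (they replace live model calls with plain functions), and the agent names are taken from the diagram for illustration.

```typescript
// Stand-in for a model-backed agent; a real AIGNE agent would call the chat model.
type Agent = { name: string; run: (input: string) => string };

// Stub for aigne.invoke: resolves the agent's output for the given input.
const invoke = async (agent: Agent, input: string): Promise<string> =>
  agent.run(input);

// Agents mirroring the diagram: extractor -> writer -> proofreader.
const conceptExtractor: Agent = {
  name: "ConceptExtractor",
  run: (input) => `concepts(${input})`,
};
const writer: Agent = {
  name: "Writer",
  run: (input) => `draft(${input})`,
};
const formatProof: Agent = {
  name: "FormatProof",
  run: (input) => `polished(${input})`,
};

// Sequential workflow: each step consumes the previous step's output.
async function runSequential(input: string): Promise<string> {
  const concepts = await invoke(conceptExtractor, input);
  const draft = await invoke(writer, concepts);
  return invoke(formatProof, draft);
}
```

Because each call awaits the previous one, execution order is guaranteed, which is exactly what content pipelines rely on.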

Concurrency Workflow

Use Cases: Scenarios requiring simultaneous processing of multiple independent tasks to improve efficiency, such as parallel data analysis, multi-dimensional content evaluation, etc.

```mermaid
flowchart LR
in(In)
out(Out)
featureExtractor(Feature Extractor)
audienceAnalyzer(Audience Analyzer)
aggregator(Aggregator)

in --> featureExtractor --> aggregator
in --> audienceAnalyzer --> aggregator
aggregator --> out

classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;

class in inputOutput
class out inputOutput
class featureExtractor processing
class audienceAnalyzer processing
class aggregator processing
```
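Since the two analyzers are independent, the pattern maps naturally onto `Promise.all`: both branches run in parallel and an aggregation step merges their results. The sketch below stubs the agents with plain async functions (not AIGNE's actual API); the names mirror the diagram.

```typescript
// Stub analyzers standing in for model-backed agents.
type Analyzer = (input: string) => Promise<string[]>;

const featureExtractor: Analyzer = async (input) => [`feature of ${input}`];
const audienceAnalyzer: Analyzer = async (input) => [`audience of ${input}`];

// Concurrency workflow: run independent agents in parallel, then aggregate.
async function runConcurrent(
  input: string,
): Promise<{ features: string[]; audience: string[] }> {
  const [features, audience] = await Promise.all([
    featureExtractor(input),
    audienceAnalyzer(input),
  ]);
  // Aggregator step: merge both branches into one result object.
  return { features, audience };
}
```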

Router Workflow

Use Cases: Scenarios where requests need to be routed to different specialized processors based on input content type, such as intelligent customer service systems, multi-functional assistants, etc.

```mermaid
flowchart LR
in(In)
out(Out)
triage(Triage)
productSupport(Product Support)
feedback(Feedback)
other(Other)

in ==> triage
triage ==> productSupport ==> out
triage -.-> feedback -.-> out
triage -.-> other -.-> out

classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;

class in inputOutput
class out inputOutput
class triage processing
class productSupport processing
class feedback processing
class other processing
```
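The router pattern has two parts: a triage step that classifies the input, and a lookup that dispatches to the matching handler. In the sketch below the triage agent is stubbed with keyword matching (a real triage agent would classify with the model), and the handler names follow the diagram; none of this is AIGNE's actual API.

```typescript
type Handler = (input: string) => string;

// Triage stub: a real triage agent would classify the request with the model.
function triage(input: string): "product" | "feedback" | "other" {
  if (/bug|error|crash/i.test(input)) return "product";
  if (/love|hate|suggest/i.test(input)) return "feedback";
  return "other";
}

// One specialized handler per branch of the diagram.
const handlers: Record<string, Handler> = {
  product: (q) => `product-support: ${q}`,
  feedback: (q) => `feedback: ${q}`,
  other: (q) => `general: ${q}`,
};

// Router workflow: triage picks the branch, then that handler runs.
function route(input: string): string {
  return handlers[triage(input)](input);
}
```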

Handoff Workflow

Use Cases: Scenarios requiring control transfer between different specialized agents to solve complex problems, such as expert collaboration systems. The Usage Example in the Quick Start above demonstrates this pattern with two agents handing off to each other.

```mermaid
flowchart LR

in(In)
out(Out)
agentA(Agent A)
agentB(Agent B)

in --> agentA --transfer to b--> agentB --> out

classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;

class in inputOutput
class out inputOutput
class agentA processing
class agentB processing
```

Reflection Workflow

Use Cases: Scenarios requiring self-assessment and iterative improvement of output quality, such as code reviews, content quality control, etc.

```mermaid
flowchart LR
in(In)
out(Out)
coder(Coder)
reviewer(Reviewer)

in --Ideas--> coder ==Solution==> reviewer --Approved--> out
reviewer ==Rejected==> coder

classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;

class in inputOutput
class out inputOutput
class coder processing
class reviewer processing
```
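Reflection is a bounded loop: the coder produces a solution, the reviewer either approves it or sends it back with feedback, and the loop repeats until approval or a round limit. The sketch below stubs both agents with deterministic functions (the reviewer approves on the third revision) purely to show the loop shape; it is not AIGNE's actual API.

```typescript
type Review = { approved: boolean; feedback: string };

// Stub coder: each round produces a new revision of the solution.
const coder = (task: string, revision: number): string =>
  `solution v${revision} for ${task}`;

// Stub reviewer: deterministically approves once revision 3 is reached.
const reviewer = (solution: string): Review =>
  solution.includes("v3")
    ? { approved: true, feedback: "looks good" }
    : { approved: false, feedback: "needs work" };

// Reflection workflow: loop coder -> reviewer until approved, with a round cap
// so a never-satisfied reviewer cannot loop forever.
function runReflection(
  task: string,
  maxRounds = 5,
): { solution: string; rounds: number } {
  let solution = "";
  for (let round = 1; round <= maxRounds; round++) {
    solution = coder(task, round);
    if (reviewer(solution).approved) return { solution, rounds: round };
  }
  return { solution, rounds: maxRounds };
}
```

The round cap is the important design choice: without it, a strict reviewer turns the refinement loop into an unbounded (and costly) model-call cycle.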

Code Execution Workflow

Use Cases: Scenarios requiring dynamically generated code execution to solve problems, such as automated data analysis, algorithmic problem solving, etc.

```mermaid
flowchart LR

in(In)
out(Out)
coder(Coder)
sandbox(Sandbox)

coder -.-> sandbox
sandbox -.-> coder
in ==> coder ==> out

classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;

class in inputOutput
class out inputOutput
class coder processing
class sandbox processing
```
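The control flow is: a coder agent generates a code snippet, a sandbox evaluates it in isolation, and the result flows back out. The sketch below stubs the coder with a canned snippet and uses Node's built-in `node:vm` module for the sandbox step. Note that `node:vm` isolates scope but is not a hard security boundary; it is used here only to illustrate the pattern, not as a substitute for the framework's secure sandbox.

```typescript
import vm from "node:vm";

// Stub coder: a real coder agent would generate this snippet with the model.
const coder = (task: string): string =>
  task === "sum 1..10"
    ? "Array.from({ length: 10 }, (_, i) => i + 1).reduce((a, b) => a + b, 0)"
    : "null";

// Sandbox step: evaluate generated code in an isolated vm context with a timeout.
// node:vm is NOT a hard security boundary; production sandboxing needs stronger isolation.
function runInSandbox(code: string): unknown {
  const context = vm.createContext({}); // fresh context, no host globals exposed
  return vm.runInContext(code, context, { timeout: 100 });
}

// Code-execution workflow: coder generates code, sandbox runs it, result flows out.
function runCodeExecution(task: string): unknown {
  return runInSandbox(coder(task));
}
```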

Examples

MCP Server Integration

  • Puppeteer MCP Server - Learn how to leverage Puppeteer for automated web scraping through the AIGNE Framework.
  • SQLite MCP Server - Explore database operations by connecting to SQLite through the Model Context Protocol.
  • GitHub - Interact with GitHub repositories using the GitHub MCP Server.

Workflow Patterns

  • Workflow Router - Implement intelligent routing logic to direct requests to appropriate handlers based on content.
  • Workflow Sequential - Build step-by-step processing pipelines with guaranteed execution order.
  • Workflow Concurrency - Optimize performance by processing multiple tasks simultaneously with parallel execution.
  • Workflow Handoff - Create seamless transitions between specialized agents to solve complex problems.
  • Workflow Reflection - Enable self-improvement through output evaluation and refinement capabilities.
  • Workflow Orchestration - Coordinate multiple agents working together in sophisticated processing pipelines.
  • Workflow Code Execution - Safely execute dynamically generated code within AI-driven workflows.
  • Workflow Group Chat - Share messages and interact with multiple agents in a group chat environment.

Contributing and Releasing

AIGNE Framework is an open source project and welcomes community contributions. We use release-please for version management and release automation.

License

This project is licensed under the Elastic License 2.0 (Elastic-2.0); see the LICENSE file for details.

Community and Support

AIGNE Framework has a vibrant developer community offering various support channels:

  • Documentation Center: Comprehensive official documentation to help developers get started quickly.
  • Technical Forum: Exchange experiences with global developers and solve technical problems.
