
Releases: intelligentnode/Intelli

intelli v1.4.1

22 Dec 11:43
9c2b7fe


New Features 🌟

  • Vibe Flow (Beta): Build and execute multi-modal AI flows from natural language descriptions.
  • LoopTask: Iterate steps (refine → critique → improve) without introducing cycles in the main DAG; a sketch follows the code example below.
  • Web Search Support: Search Agent now supports Google Custom Search alongside Intellicloud semantic search.
  • CustomAgent: Base class to plug in proprietary logic or local models into any Flow.

Code Example

Describe your workflow in plain English and let VibeFlow build it:

import asyncio

from intelli.flow.vibe import VibeFlow

async def run_vibe():
    # The planner model converts the plain-English description into a flow
    vf = VibeFlow(
        planner_provider="openai",
        planner_api_key="YOUR_KEY",
        text_model="openai gpt-5.2"
    )

    # Build the flow from the description, then execute it
    flow = await vf.build("Summarize the input text and then translate the summary into French.")
    results = await flow.start(initial_input="IntelliNode is an open-source library...")
    print(results)

asyncio.run(run_vibe())
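LoopTask has no example on this page, so the sketch below is hypothetical: the import path and the constructor parameters (agent, max_iterations) are assumptions for illustration, not the confirmed API.

from intelli.flow.agents.agent import Agent
from intelli.flow.input.task_input import TextTaskInput
from intelli.flow.loop_task import LoopTask  # assumed import path
from intelli.flow.types import AgentTypes

# An agent that drafts and refines text
writer = Agent(
    agent_type=AgentTypes.TEXT.value,
    provider="openai",
    mission="Draft, critique, and improve a short summary",
    model_params={"key": "YOUR_KEY", "model": "gpt-5.2"}
)

# Iterate refine -> critique -> improve up to 3 times without
# introducing a cycle in the surrounding flow graph.
loop = LoopTask(
    TextTaskInput("Summarize IntelliNode in two sentences."),
    agent=writer,
    max_iterations=3  # assumed stopping parameter
)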


intelli v1.3.4

14 Dec 15:50
9859f20


New Features 🌟

  • OpenAI: Added tools/tool_choice support while keeping the legacy function-calling format (sketch below).
  • OpenAI: Default model switched to gpt-5.2.
  • Gemini: Added structured outputs helper (JSON schema).
  • Gemini: Added streaming support (streamGenerateContent) and updated TTS request format.
  • MCP: Added async APIs.
  • Speechmatics: Added per-token confidence scores and support for partial transcripts.
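For the OpenAI tools support, a minimal sketch follows; whether ChatModelInput forwards tools and tool_choice as keyword arguments, as shown here, is an assumption based on the release note rather than a confirmed signature.

from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

bot = Chatbot("YOUR_OPENAI_KEY", ChatProvider.OPENAI.value)

# Standard OpenAI tool definition
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
        }
    }
}

input_obj = ChatModelInput(
    "You are a helpful assistant.",
    model="gpt-5.2",
    tools=[weather_tool],  # assumed keyword, per the release note
    tool_choice="auto"     # assumed keyword
)
input_obj.add_user_message("What is the weather in Paris?")
response = bot.chat(input_obj)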

Contributors

@Barqawiz and @nabeel-bassam

intelli v1.3.0

31 Oct 14:39
ef922c1


New Features 🌟

  • Add real-time streaming transcription via WebSocket (a hypothetical sketch follows the code below).
  • Integration with RemoteRecognitionModel for a unified API.

Technical Details 💻

Installation:
pip install 'intelli[speech]'

Import:

import os
from intelli.controller.remote_recognition_model import (
    RemoteRecognitionModel,
    SupportedRecognitionModels
)
from intelli.model.input.text_recognition_input import SpeechRecognitionInput

Code:

# Works with: OPENAI, KERAS, ELEVENLABS, SPEECHMATICS
recognizer = RemoteRecognitionModel(
    key_value=os.environ.get('SPEECHMATICS_API_KEY'),
    provider=SupportedRecognitionModels['SPEECHMATICS']
)

# Create input
recognition_input = SpeechRecognitionInput(
    audio_file_path="audio.mp3",
    language="en"
)

# Get transcription
result = recognizer.recognize_speech(recognition_input)
print(result)
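The WebSocket streaming feature itself is not shown above; the sketch below is hypothetical, and the method name and its arguments are illustrative assumptions rather than the confirmed API.

import asyncio

# Hypothetical streaming call: stream_recognition and its arguments are
# illustrative placeholders, not the confirmed API.
async def stream_transcripts():
    async for partial in recognizer.stream_recognition(
        audio_file_path="audio.mp3",
        language="en"
    ):
        print(partial)  # partial transcripts arrive as the audio streams

asyncio.run(stream_transcripts())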

Contributor: @nabeel-bassam

intelli v1.2.2

19 Oct 20:18
9afdacc


New Features 🌟

  • Add support for GPT-5; the openai provider now uses GPT-5 by default (example below).
  • Add a sample for building flows with the latest models.
  • Minor bug fixes and enhancements.
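A minimal chat call with the new default, following the chatbot pattern used elsewhere on this page; the exact default model string is an assumption based on this note.

from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

bot = Chatbot("YOUR_OPENAI_KEY", ChatProvider.OPENAI.value)

# "gpt-5" as the model string is an assumption based on this release note
input_obj = ChatModelInput("You are a helpful assistant.", model="gpt-5")
input_obj.add_user_message("Give me one tip for writing clean Python.")
print(bot.chat(input_obj))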

intelli v1.1.0

17 May 09:53
550a48e


New Features 🌟

  • Model Context Protocol (MCP): Connect your own code functions directly to Intelli flows with minimal setup.
  • Improved flow graph visual.

Using MCP in Your Flows

import asyncio
import sys

from intelli.flow.agents.agent import Agent
from intelli.flow.flow import Flow
from intelli.flow.input.task_input import TextTaskInput
from intelli.flow.tasks.task import Task
from intelli.flow.types import AgentTypes

# Create an MCP agent for a math tool
mcp_agent = Agent(
    agent_type=AgentTypes.MCP.value,
    provider="mcp",
    mission="Do simple math",
    model_params={
        "command": sys.executable,
        "args": ["mcp_math_server.py"],  # Path to your MCP server
        "tool": "add",                   # Tool function to call
        "arg_a": 7,                      # First argument for add function
        "arg_b": 8,                      # Second argument for add function
    }
)

# Create a single task flow
flow = Flow(
    tasks={"calc": Task(TextTaskInput("Calculate"), mcp_agent)},
    map_paths={"calc": []}  # Empty list means no outgoing connections
)
result = asyncio.run(flow.start())
print(result)

For complete documentation and details of remote connectors, check the MCP Getting Started Guide.

Contributors

@Barqawiz and @hydrogeohc

Intelli 0.5.7

19 Feb 16:36
6d0c444


New Features 🌟

  • Offline Llama CPP Integration: run LLMs locally using llama.cpp through the unified chatbot or flow interface.
  • Multiple Model Support: switch between different GGUF models such as TinyLlama and DeepSeek-R1.
  • Enhanced Prompt Formatting: support for model-specific prompt formats.
  • Added options to suppress verbose llama.cpp logs.

Using Llama CPP Chat Features 💻

from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

# Configure a TinyLlama chatbot
options = {
    "model_path": "./models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
    "model_params": {
        "n_ctx": 512,
        "embedding": False,      # True if you need embeddings
        "verbose": False         # Suppress llama.cpp internal logs
    }
}

llama_bot = Chatbot(provider=ChatProvider.LLAMACPP, options=options)

# Prepare a chat input and get a response
chat_input = ChatModelInput("You are a helpful assistant.", model="llamacpp", max_tokens=64, temperature=0.7)
chat_input.add_user_message("What is the capital of France?")
response = llama_bot.chat(chat_input)
print(response)

For more details, check the llama.cpp docs.

Intelli 0.5.3

01 Feb 20:15
13e1a98


New Features 🌟

  • Support NVIDIA-hosted models (DeepSeek and Llama 3.3) via a unified chatbot interface.
  • Add streaming responses when calling NVIDIA models.
  • Add a new embedding provider.

Using NVIDIA Chat Features 💻

from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

# get your API key from https://build.nvidia.com/
nvidia_bot = Chatbot("YOUR_NVIDIA_KEY", ChatProvider.NVIDIA.value)

# prepare the input
input_obj = ChatModelInput("You are a helpful assistant.", model="deepseek-ai/deepseek-r1", max_tokens=1024, temperature=0.6)
input_obj.add_user_message("What do you think is the secret to balanced life?")

Synchronous response example

response = nvidia_bot.chat(input_obj)

Streaming response example

import asyncio

async def stream_nvidia():
    count = 0
    # Assuming stream() is an async generator, iterate with `async for`
    async for chunk in nvidia_bot.stream(input_obj):
        print(chunk, end="")  # Print each chunk as it arrives
        count += 1
        if count >= 5:  # Print only the first 5 chunks
            break

# From synchronous code; inside a running event loop use `await stream_nvidia()` instead
asyncio.run(stream_nvidia())

For more details, check the docs.

Intelli 0.5.1

30 Jan 21:45
670a129


Offline Whisper Transcription 🎤

Load and use OpenAI's Whisper model offline for audio transcription.
The IntelliNode module supports an initial prompt to improve transcription quality.

Code

Load audio

import soundfile as sf

file_name = "audio.mp3"  # path to your audio file
audio_data, sample_rate = sf.read(file_name)

Inference:

from intelli.wrappers.keras_wrapper import KerasWrapper
wrapper = KerasWrapper(model_name="whisper_large_multi_v2")
result = wrapper.transcript(audio_data, user_prompt="medical content")

Check the documentation.

Intelli 0.4.2

24 Jul 21:26
0d93363


New Features 🌟

  • Update the agent to support the Llama 3.1 offline model.
  • Add offline model capability to the chatbot.
  • Unify the Keras loader under a dedicated wrapper, KerasWrapper.

Using the New Features 💻
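A minimal sketch of the unified loader, following the KerasWrapper pattern from the 0.5.1 Whisper example above; the model preset name and the generate() call are assumptions, not the confirmed interface.

from intelli.wrappers.keras_wrapper import KerasWrapper

# Load an offline Llama model through the unified wrapper.
# The preset name and generate() signature are illustrative assumptions.
wrapper = KerasWrapper(model_name="llama3_instruct_8b_en")
output = wrapper.generate("What is the capital of France?", max_length=64)
print(output)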

Intelli v0.2.3

09 Mar 17:06
1495f31


New Features 🌟

  • Support for Anthropic Models: Our chatbot integration now supports advanced Anthropic models, including those with large context windows (example below).
  • Chatbot Provider Enumeration: Provider selection has been simplified through an enumerator.
  • Minor Bug Fixes: Adjusted the parameter order for the controllers.

Using the New Features 💻

  • The ChatProvider enum simplifies selecting providers.
from intelli.function.chatbot import ChatProvider

# check available chatbot providers
for provider in ChatProvider:
    print(provider.name)
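A minimal Anthropic call through the same chatbot interface; the Claude model name below is illustrative, and the constructor arguments follow the pattern used elsewhere on this page.

from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

bot = Chatbot("YOUR_ANTHROPIC_KEY", ChatProvider.ANTHROPIC.value)

# The model name is illustrative; any supported Claude model works
input_obj = ChatModelInput("You are a helpful assistant.", model="claude-3-sonnet-20240229")
input_obj.add_user_message("Summarize the benefit of large context windows.")
print(bot.chat(input_obj))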
