An AI-Native, Intent-Oriented Programming Language built on Python
import socket
from intentlang import MagicIntent
MagicIntent.hack_str(cache=True)
# Natural language as an executable expression
data = [1, 2, 3, 4, 5]
if "sum of all even numbers".i(data).o(int) > 5:
print("Result is greater than 5")
# Natural language controlling real objects
sock = socket.socket()
"Connect to example.com and send an HTTP GET request".i(sock).r("Do not receive data, only send, and keep the socket open")()
print(sock.recv(15).decode())

This is not a prompt. This is not function calling. This is executable natural language embedded directly into Python.
IntentLang allows you to write intent-driven programs, where:
- Strings are no longer passive data
- Natural language becomes a first-class executable expression
- AI reasoning participates directly in program control flow
- Real Python objects are manipulated at runtime, not serialized into prompts
IntentLang is not an AI framework in the traditional sense.
It is an experiment in AI-native programming:
treating human intent as a first-class, executable construct inside a general-purpose programming language.
Read more:
- IntentLang: Rebuilding AI Agents with First Principles and Intent Engineering
- IntentLang: Rebuilding AI Agents with First Principles and Intent Engineering (in Chinese)
- To Create an AI-Native Programming Language, I Hacked CPython's str (in Chinese)
IntentLang fundamentally rethinks how AI agents work:
| Traditional Agents | IntentLang |
|---|---|
| 🔧 Discrete function calls | 🎯 Continuous code execution |
| 📦 Data in context | 🔗 Data in runtime (Schema-only) |
| 🔌 Tools as external APIs | 🧬 Tools as embedded objects |
| 📝 Prompt engineering | 🏗️ Intent engineering |
1. Precise Modeling and Computation of Intent: Human Intent as a First-Class Citizen
IntentLang is the first framework to formally represent human intent through seven elements (Goal, Contexts, Tools, Input, Strategy, Constraints, Output). These elements are transformed into an XML-based Intent IR (Intermediate Representation) that guides the LLM to progressively generate code, computing the intent and iteratively converging on its goal. This is not mere prompt engineering but a systematic practice of intent engineering, allowing expert knowledge to be accumulated in a reusable and verifiable form.
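As a sketch (using the Intent API documented later in this README; check_lucky_number is a hypothetical helper, not part of IntentLang), an intent that spells out all seven elements explicitly might look like this:

from intentlang import Intent

# Hypothetical helper function used as a tool; any plain Python callable works.
def check_lucky_number(n: int) -> bool:
    """Return True if the integer ends in 7."""
    return n % 10 == 7

intent = (
    Intent()
    .goal("Sum all lucky numbers")                               # Goal
    .ctxs(["Integers ending in 7 are lucky numbers"])            # Contexts
    .tools([check_lucky_number])                                 # Tools
    .input(numbers=([7, 17, 20, 42], "list of integers"))        # Input
    .how("Filter the list with the tool, then sum the matches")  # Strategy
    .rules(["Do not mutate the input list"])                     # Constraints
    .output(total=(int, "sum of the lucky numbers"))             # Output
)
print(intent.run_sync().output.total)  # Expected: 24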
2. Paradigm Shift: From Constrained “Function Calling” to Free “Code Execution”
IntentLang completely abandons the discrete, inefficient, and strictly schema-constrained Function Calling paradigm found in mainstream frameworks. Instead of confining AI to predefined tool functions, we empower it to generate and execute Python code directly. This means AI expression and operation are no longer isolated, atomic calls, but a continuous, stateful, and Turing-complete computational process.
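For a concrete (if contrived) contrast, a sketch under the same API: a task that a function-calling agent would split into one round-trip per file becomes a single continuous program here. The read_log helper and the temporary log files are illustrative only:

import os
import tempfile
from intentlang import Intent

# Create three small log files so the sketch is self-contained.
tmp_dir = tempfile.mkdtemp()
paths = []
for name, text in [("a.log", "ok"), ("b.log", "ERROR: boom"), ("c.log", "ERROR: again")]:
    path = os.path.join(tmp_dir, name)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
    paths.append(path)

def read_log(path: str) -> str:
    """Illustrative tool; a function-calling agent would need one call per file."""
    with open(path, encoding="utf-8") as f:
        return f.read()

intent = (
    Intent()
    .goal("Count how many of the given log files contain the word ERROR")
    .input(paths=(paths, "paths of log files to scan"))
    .tools([read_log])
    .output(error_files=(int, "number of files containing ERROR"))
)
# The generated code can loop over every path inside one continuous program
# instead of emitting a separate, schema-constrained tool call per file.
print(intent.run_sync().output.error_files)  # Expected: 2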
3. Separation of Data and Instructions: A Fundamental Break from Context Limitations
In IntentLang, input data (regardless of size) is not serialized and injected into the LLM context. The AI receives only metadata about the data objects (names and descriptions). It must generate code to access these in-memory objects on demand at runtime. This model fundamentally eliminates token limits and cost issues caused by large inputs in LLM applications.
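A sketch of what this looks like in code: the list below never leaves the Python process; the LLM only sees its name and description and generates code that touches it by reference:

from intentlang import Intent

# A large in-memory object; it is never serialized into the LLM context.
readings = list(range(1_000_000))

intent = (
    Intent()
    .goal("Compute the average of the sensor readings")
    .input(
        # Only the name "readings" and this description reach the LLM.
        readings=(readings, "one million integer sensor readings")
    )
    .output(average=(float, "arithmetic mean of the readings"))
)
print(intent.run_sync().output.average)  # Expected: 499999.5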
4. Embedded Execution: Eliminating Tool Invocation Boundaries
You can inject any Python object—whether an initialized database connection, a browser instance, or a complex business model—directly into the AI execution environment as a tool. Code generated by the AI shares the same execution flow and runtime context as the host program, enabling continuous access to object attributes, method invocations, and natural perception and evolution of state. In this model, the AI no longer participates through discrete tool calls, but acts as an embedded execution unit within the program, collaborating with host code to complete computation.
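As a sketch (the sqlite3 connection and table are illustrative, not part of IntentLang), an already-open database connection injected as a tool; the generated code queries it directly, and the host keeps using the same connection afterwards:

import sqlite3
from intentlang import Intent

# The host program owns this live connection; it is handed to the AI as-is.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [("Ada", 36), ("Linus", 55)])
conn.commit()

intent = (
    Intent()
    .goal("Find the age of the oldest user in the database")
    .tools([
        # Format: (object, "name", "description") — the connection itself is the tool.
        (conn, "conn", "an open sqlite3 connection with a users(name, age) table")
    ])
    .output(max_age=(int, "age of the oldest user"))
)
print(intent.run_sync().output.max_age)  # Expected: 55

# The same connection (and any state the AI changed) is still usable by the host.
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])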
5. Native Python Expression: A “Super DSL” with Zero Learning Cost
IntentLang treats Python itself as its domain-specific language (DSL). There is no need to learn new, complex graph orchestration syntaxes or YAML configurations. If you can write Python, you already know IntentLang. This dramatically lowers the learning barrier while allowing full leverage of Python’s vast and mature ecosystem.
IntentLang requires Python 3.10 or higher.
Using pip:
pip install intentlang
Using uv:
uv add intentlang
IntentLang uses environment variables for LLM configuration. Create a .env file in your project root directory and add the following settings:
# OPENAI_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
# OPENAI_BASE_URL=https://open.bigmodel.cn/api/paas/v4
OPENAI_BASE_URL=https://api.deepseek.com
OPENAI_API_KEY=
OPENAI_MODEL_NAME=
# OPENAI_EXTRA_BODY={}
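If you prefer explicit configuration in code, the same variables can be passed to an LLMConfig object (as done in the sentiment-analysis example later in this README); the load_dotenv call is an assumption for projects that load the .env file themselves via python-dotenv:

import os
from dotenv import load_dotenv  # assumption: only needed if you load .env yourself
from intentlang import LLMConfig

load_dotenv()  # read the .env file from the project root

llm_config = LLMConfig(
    base_url=os.getenv("OPENAI_BASE_URL"),
    api_key=os.getenv("OPENAI_API_KEY"),
    model_name=os.getenv("OPENAI_MODEL_NAME"),
    extra_body=os.getenv("OPENAI_EXTRA_BODY"),
)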
from intentlang import Intent
# Define what you want
intent = Intent().goal("Sum all even numbers").input(
numbers=([1,2,3,4,5,6], "list of integers")
).output(sum=(int, "sum of even numbers"))
# Execute the computation
result = intent.run_sync()
print(result.output.sum) # Output: 12
print(result.usage.model_dump_json())

What just happened?
- ✅ The intent was formalized into structured elements (here: Goal, Input, Output)
- ✅ The AI generated Python code to solve it
- ✅ That code executed in the runtime, not as function calls
- ✅ The input data never entered the LLM context (only its schema did)
CRITICAL: IntentLang executes AI-generated code in your runtime.
Always run in isolated environments:
- 🐳 Docker containers
- 📦 Sandboxed Python environments
- 🔒 Virtual machines
This example demonstrates how the AI can directly operate on a Playwright object initialized by the host program, interacting with it continuously to open a web page and then extract its title.
View Code
import asyncio
from intentlang import Intent
from playwright.async_api import async_playwright, Page
async def test():
    playwright = await async_playwright().start()
    browser = await playwright.chromium.launch(headless=False)
    context = await browser.new_context()
    url = "https://l3yx.github.io/"
    intent_a = (
        Intent()
        .goal("Open the url")
        .input(
            # Format: name=(value, "description")
            url=(url, "Target URL to open")
        )
        .tools([
            # Formats: function, or (object, "name", "description")
            (context, "context", "Playwright Context instance (async API)")
        ])
        .output(
            # Format: name=(type, "description")
            page=(Page, "Playwright Page instance")
        )
    )
    page = (await intent_a.run()).output.page
    print(page)

    intent_b = (
        Intent()
        .goal("Get webpage title")
        .input(
            page=(page, "Playwright Page instance")
        )
        .output(
            title=(str, "Webpage title")
        )
    )
    title = (await intent_b.run()).output.title
    print(title)

asyncio.run(test())

This example demonstrates how to handle a semantic recognition task.
View Code
import os
from typing import List, Literal
from pydantic import Field
from intentlang import Intent, LLMConfig, IntentIO
from intentlang.tools import create_reason_func
llm_config = LLMConfig(
    base_url=os.getenv("OPENAI_BASE_URL"),
    api_key=os.getenv("OPENAI_API_KEY"),
    model_name=os.getenv("OPENAI_MODEL_NAME"),
    extra_body=os.getenv("OPENAI_EXTRA_BODY")
)
movie_reviews = """
This movie has a gripping plot with constant twists—my adrenaline was through the roof, highly recommend!
The visual effects are mind-blowing; the VFX team deserves an Oscar—pure audiovisual feast!
The acting is spot-on, especially the protagonist's inner turmoil feels so real—I was in tears.
Pacing is a bit slow, but the philosophy is profound; it leaves you thinking long after, worth a rewatch.
Avoid this one—full of plot holes, logic collapses, wasted two hours of my life.
"""
class ReviewResult(IntentIO):
    review: str = Field(description="Original evaluation content")
    sentiment: Literal["positive", "negative", "mixed"] = Field(description="Sentiment classification")
    reason: str = Field(description="Classification reasons")

class Result(IntentIO):
    reviews: List[ReviewResult] = Field(description="Sentiment analysis results for each review")
intent = (
    Intent()
    .goal("Sentiment categorization for each movie review")
    .input(
        movie_reviews=(movie_reviews, "Contains multiple reviews, one per line")
    )
    .output(Result)
    .how("Multiple threads analyze each review simultaneously to determine the sentiment and provide reasons")
    .tools([create_reason_func(llm_config)])
    .rules(["Use a multi-threaded concurrency of 5 when performing sentiment classification"])
)

result = intent.compile(cache=True).run_sync()
print(result.output.model_dump_json(indent=2))
print(result.usage.model_dump_json())

The core of IntentLang revolves around the construction of the Intent object, which transforms natural language intent into executable Python code and enables seamless collaboration with the host program.
Intent is the minimal logical unit of the IntentLang framework, encapsulating all the information AI needs to complete a specific task. An Intent object is constructed through method chaining, clearly defining each element of the task.
The 7 Defining Elements of Intent:
.goal(goal: str)
Purpose: Clearly describe the ultimate goal that AI needs to achieve. This is the core instruction for the LLM to understand the task.
intent = Intent().goal("Calculate the sum of even numbers").ctxs(ctxs: list[str])
Purpose: Provide additional contextual information to the LLM. These are plain text descriptions that help the LLM better understand the task background or domain-specific knowledge.
intent = Intent().ctxs([
    "integers ending in 7 are lucky numbers",
    "4 is considered an unlucky number"
])

.tools(tools: list[Callable | Tuple[object, str, str]])
Purpose: Inject available tools into AI's execution environment. These tools can be regular Python functions or any Python objects already initialized in the host program.
Key Feature: Unlike traditional Function Calling which is limited to functions, IntentLang allows you to inject any Python object as a tool. This means AI can directly call object methods, access object attributes, and achieve object-level continuous operations without serialization and deserialization.
Usage: Accepts a list where elements can be functions or (object, name, description) tuples.
intent = Intent().tools([
    check_lucky_number,  # function
    (browser_context, "context", "Playwright context")  # object
])

.input(input: IntentIO | None = None, **field_definitions)
Purpose: Define the input data that AI can access when executing this intent. This data is passed as references to Python objects.
Key Feature: Input data itself is NOT serialized and sent to the LLM as context. The LLM only receives metadata (name and description) about these data objects. AI must generate Python code to access and manipulate these in-memory objects on-demand.
Usage:
- Dynamic definition: Define input fields through keyword arguments. Each argument is a (value, description) tuple.
- Predefined model: Directly pass an instantiated IntentIO subclass object.
# Dynamic definition
intent = Intent().input(
    numbers=([1,2,3,4,5], "list of integers")
)

# Predefined model
class MyInput(IntentIO):
    numbers: list[int]

intent = Intent().input(MyInput(numbers=[1,2,3,4,5]))

.how(how: str)
Purpose: Provide high-level strategy or implementation approach on how to achieve the goal. It guides the LLM to follow specific methodologies when generating code.
intent = Intent().how("Process each item one by one")

.rules(rules: list[str])
Purpose: Set specific constraints or behavioral guidelines that must be followed during execution. These rules help the LLM refine its code generation logic and ensure outputs meet expectations.
intent = Intent().rules([
    "Must validate all inputs before processing",
    "Handle errors gracefully"
])

.output(output: Type[IntentIO] | None = None, **field_definitions)
Purpose: Define the result structure that AI needs to produce after successfully completing the intent. It forces AI to return data in the expected Pydantic model format, ensuring structured and verifiable output.
Usage:
- Dynamic definition: Define output fields through keyword arguments. Each argument is a (type, description) tuple.
- Predefined model: Directly pass a Pydantic model class that inherits from IntentIO.
# Dynamic definition
intent = Intent().output(
    sum=(int, "sum of even numbers")
)

# Predefined model
class Result(IntentIO):
    sum: int
    count: int

intent = Intent().output(Result)

After defining an Intent, you can launch it through the compile and run methods.
View Details
- .compile(engine_factory: EngineFactory | None = None, max_iterations: int = 30, cache: bool = False, record: bool = True) -> Executor:
  - Purpose: "Compile" the Intent object into an executable Executor instance. This process generates the final Prompt and prepares the execution environment.
  - Parameters:
    - engine_factory: Currently only LLMEngineFactory is available.
    - max_iterations: Maximum iteration rounds (default 30) to prevent infinite loops.
    - cache: Whether to enable caching (default False), reusing code from the Jupyter Notebook cache.
    - record: Whether to record the execution process to a Notebook (default True).
  - Usage: Use this method when you need more fine-grained control over the execution process.
- .run() -> IntentResult:
  - Purpose: A convenience method that automatically compiles and immediately executes the Intent, then returns the final result.
  - Relationship: Calling intent.run() is essentially equivalent to intent.compile().run().
  - Key Feature: run() is an asynchronous method that waits for the AI to complete all necessary code generation and execution steps, finally returning an IntentResult object containing the structured data produced by the AI. Intent also provides a synchronous version, run_sync().
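Putting the two paths side by side (a sketch; intent stands for any Intent defined as in the examples above):

# Fine-grained control: compile once, then execute the resulting Executor.
executor = intent.compile(max_iterations=10, cache=True, record=True)
result = executor.run_sync()

# Convenience path: compile and execute in one call.
result = intent.run_sync()        # synchronous
# result = await intent.run()     # asynchronous, inside an async function

print(result.output.model_dump_json(indent=2))
print(result.usage.model_dump_json())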
Executor is the execution engine for Intent. It is responsible for transforming the constructed Intent object into a Prompt that the LLM can understand, then iteratively executing the Python code generated by the LLM in the Python runtime environment. The Executor continues iterating until the AI successfully produces a result that conforms to the OutputModel definition, or reaches the preset maximum number of iterations.
Runtime is an embedded Python REPL (Read-Eval-Print Loop) environment that supports top-level await.
Core Capabilities
- Code Execution: Executes Python code generated by the LLM.
- Observation Feedback: Captures all print output during code execution and feeds it back to the LLM as "observation results", helping the AI refine and iterate its subsequent code.
- Exception Handling: Gracefully handles exceptions that may occur during code execution and feeds error information back to the LLM, enabling it to self-correct.
- Object Sharing: Runtime shares the same process space with the host program, allowing AI-generated code to directly access and manipulate Python objects passed from the host program (such as objects defined in input and tools), achieving true embedded collaboration.
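For intuition only, a hand-written sketch of the kind of code the LLM might emit inside the Runtime for the quick-start example; numbers stands in for the host program's input object, and the print output is what gets captured as an observation:

# Illustrative only: code in this style is generated by the LLM, not written by you.
numbers = [1, 2, 3, 4, 5, 6]  # in the real Runtime this is the host program's in-memory input object

evens = [n for n in numbers if n % 2 == 0]
print("even numbers found:", evens)   # captured and fed back to the LLM as an observation
total = sum(evens)
print("candidate result:", total)     # the LLM inspects this before emitting the structured output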
Coming Soon:
- Intent visualization UI (Agent Pattern Graph → IntentLang conversion)
- More real-world examples
Long-term Vision:
- Native support in LLMs for code-level intent expression
- Intent marketplace for domain-specific patterns
IntentLang is created and maintained by 淚笑.
Connect with me:
- Twitter: @leixiao_cn
- Blog: l3yx.github.io
- GitHub: l3yx
- Organization: chainreactors
For questions, suggestions, or collaboration:
