Merged
4 changes: 2 additions & 2 deletions .github/actionlint-config.yaml
Original file line number Diff line number Diff line change
@@ -4,7 +4,7 @@
rules:
# Enable all rules by default
all: true

# Disable specific rules if needed
# actions-have-safe-quotes: disable
# expression-syntax: disable
@@ -14,6 +14,6 @@ shellcheck:
# Enable shellcheck for run steps
enabled: true

# Pyflakes configuration
pyflakes:
enabled: false
12 changes: 6 additions & 6 deletions CONTRIBUTING.md
@@ -30,15 +30,15 @@ Thank you for your interest in contributing to Agentics! This document provides
```bash
uv run pytest
```

Also, to ensure the [version is correctly computed from Git tags](#versioning-scheme)
try running:

```bash
uvx --with uv-dynamic-versioning hatchling version
```


## Pre-commit Hooks

We use pre-commit hooks to ensure code quality and consistency. These hooks automatically run checks before each commit.
@@ -145,8 +145,8 @@ The test report will be saved as `report.html` in the project root for later ana

### Running Tests with Coverage

**Code coverage** measures the percentage of your codebase that is exercised by tests.
It's an important metric that helps you understand how thoroughly your tests exercise
your code, and how much code is not exercised at all.


26 changes: 13 additions & 13 deletions README.md
@@ -33,7 +33,7 @@ Install Agentics in your current env, set up your environment variable, and run
```bash
uv pip install agentics-py
```
Set up your `.env` using the required parameters for your LLM provider of choice. Use [.env_sample](.env_sample) as a reference.

Find out more
👉 **Getting Started**: [docs/getting_started.md](docs/getting_started.md)
@@ -90,7 +90,7 @@ genre, explanation = await classify_genre(

## 📘 Documentation and Notebooks

Complete documentation available [here](./docs/index.md)

| Notebook | Description |
|---|---|
@@ -120,25 +120,25 @@ Apache 2.0

## 👥 Authors

**Project Lead**
- Alfio Massimiliano Gliozzo (IBM Research) — gliozzo@us.ibm.com

**Core Contributors**
- Nahuel Defosse (IBM Research) — nahuel.defosse@ibm.com
- Junkyu Lee (IBM Research) — Junkyu.Lee@ibm.com
- Naweed Aghmad Khan (IBM Research) — naweed.khan@ibm.com
- Christodoulos Constantinides (IBM Watson) — Christodoulos.Constantinides@ibm.com
- Mustafa Eyceoz (Red Hat) — Mustafa.Eyceoz@partner.ibm.com

---


## 🧠 Conceptual Overview

Most “agent frameworks” let untyped text flow through a pipeline. Agentics flips that: **types are the interface**.
Workflows are expressed as transformations between structured states, with predictable schemas and composable operators.

Because every step is a typed transformation, you can **compose** workflows safely (merge and compose types/instances, chain transductions, and reuse `@transducible` functions) without losing semantic structure.

Agentics makes it natural to **scale out**: apply transformations over collections with async `amap`, and aggregate results with `areduce`.

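The `amap`/`areduce` pattern described above can be illustrated with a plain-Python sketch. All names below (`Review`, `Sentiment`, `amap_sketch`) are hypothetical; this is not the actual Agentics API, just the shape of the idea under stated assumptions:

```python
from dataclasses import dataclass
import asyncio

# Illustrative sketch only: typed states as dataclasses, an async map over a
# collection (analogous to `amap`), and a simple aggregation (analogous to
# `areduce`). The real Agentics operators are LLM-backed; these are stand-ins.

@dataclass
class Review:
    text: str

@dataclass
class Sentiment:
    label: str

async def classify(state: Review) -> Sentiment:
    # Stand-in for an LLM-backed typed transduction.
    positive = "great" in state.text.lower()
    return Sentiment(label="positive" if positive else "negative")

async def amap_sketch(fn, states):
    # Apply a typed transformation concurrently over a collection of states.
    return await asyncio.gather(*(fn(s) for s in states))

async def main() -> int:
    reviews = [Review("Great movie"), Review("Terrible plot")]
    sentiments = await amap_sketch(classify, reviews)
    # Aggregate the typed results (the `areduce` analog).
    return sum(s.label == "positive" for s in sentiments)
```

Because each step consumes and produces a declared type, chained steps can be checked against their schemas rather than against free-form text.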
@@ -158,8 +158,8 @@ Core operations:

Agentics implements **Logical Transduction Algebra**, described in:

- Alfio Gliozzo, Naweed Khan, Christodoulos Constantinides, Nandana Mihindukulasooriya, Nahuel Defosse, Junkyu Lee.
*Transduction is All You Need for Structured Data Workflows* (August 2025).
arXiv:2508.15610 — https://arxiv.org/abs/2508.15610


20 changes: 10 additions & 10 deletions agentics_full_course/project_submission_guidelines.md
@@ -49,8 +49,8 @@ Your project folder should be named after your project and organized as follows:

Create a **5-minute recorded video** presenting your project.

- Introduce the project goals, methods, and key results.
- Optionally include a demo or short code walkthrough with slides.
- You will present the same video **in person** during the student workshop.

---
@@ -59,20 +59,20 @@ Create a **5-minute recorded video** presenting your project.

Schedule a meeting with professors **at least two weeks before the final submission** to receive feedback on:

- The draft of your short paper
- Your runnable code and documentation
- Your recorded video presentation

---

## ✅ Submission Checklist

- [ ] Conference-style project paper (PDF)
- [ ] Runnable code in `applications/<project_name>`
- [ ] `README.md` with install/test instructions
- [ ] Documentation in `docs/` folder
- [ ] 5-minute recorded presentation video
- [ ] Faculty feedback meeting completed

---

3 changes: 3 additions & 0 deletions pyproject.toml
@@ -40,6 +40,9 @@ dependencies = [
"mellea",
"plotly>=6.5.0",
"rich>=13.0.0,<14.0.0",
"duckdb>=1.4.3",
"pandas>=2.3.3",
"async-lru>=2.0.5",
]


94 changes: 53 additions & 41 deletions src/agentics/core/agentics.py
@@ -623,11 +623,14 @@ async def llm_call(input: AGString) -> AGString:
reasoning=self.reasoning,
**self.crew_prompt_params,
)
transduced_results = await pt.execute(
*input_prompts,
description=f"Transducing {self.__name__} << {'AG[str]' if not isinstance(other, AG) else other.__name__}",
transient_pbar=self.transient_pbar,
)
chunks = chunk_list(input_prompts, chunk_size=self.amap_batch_size)
transduced_results = []
for chunk in chunks:
transduced_results += await pt.execute(
*chunk,
description=f"Transducing {self.__name__[:30]} << {'AG[str]' if not isinstance(other, AG) else other.__name__[:30]}",
transient_pbar=self.transient_pbar,
)
except Exception as e:
transduced_results = self.states

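The `chunk_list` helper called above is not shown in this diff; the following is a minimal sketch of the behavior its call site appears to assume (an assumption, not the actual implementation):

```python
def chunk_list(items, chunk_size):
    # Split `items` into consecutive chunks of at most `chunk_size` elements,
    # preserving order; the last chunk may be shorter. A missing or
    # non-positive chunk_size falls back to a single chunk (no batching).
    if not chunk_size or chunk_size <= 0:
        return [items]
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]
```

Batching the prompts this way bounds how many inputs each `pt.execute` call processes at once, governed by `amap_batch_size`.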
@@ -660,14 +663,15 @@ async def llm_call(input: AGString) -> AGString:
output_state_dict = dict([output_state])
else:
output_state_dict = output_state.model_dump()

merged = self.atype(
**(
(self[i].model_dump() if len(self) > i else {})
| other[i].model_dump()
| output_state_dict
)
data = (
(self[i].model_dump() if len(self) > i else {})
| other[i].model_dump()
| output_state_dict
)
allowed = self.atype.model_fields.keys() # pydantic v2
filtered = {k: v for k, v in data.items() if k in allowed}
merged = self.atype(**filtered)

output.states.append(merged)
# elif is_str_or_list_of_str(other):
elif isinstance(other, list):
@@ -682,15 +686,18 @@

if self.provide_explanations and isinstance(other, AG):
target_explanation = AG(atype=Explanation)
output.prompt_template = None
output.transduce_fields = None
target_explanation.instructions = f"""
You have been presented with two Pydantic Objects:
a left object that was logically derived from a right object.
Your task is to provide a detailed explanation of how the left object was derived from the right object."""
target_explanation = await (
target_explanation << output.compose_states(other)
)

self.explanations = target_explanation.states
You have previously transduced an object of type {self.atype.__name__} (target) from an object of type {other.atype.__name__} (source).
Now look back at both objects and provide a detailed explanation of how each field of the target object was logically derived from the source object.
Provide short, concise, data-grounded explanations, field by field, avoiding redundancy.
If you think the transduction was wrong or not logically supported by the source object, say so clearly in the explanation and provide a low confidence score (0.0).
Provide a high confidence score (1.0) only if you are certain that the transduction is logically correct and fully supported by the source object.
"""
explanation = await (target_explanation << output.compose_states(other))

self.explanations = explanation.states
self.states = output.states
return self
else:
@@ -1058,35 +1065,40 @@ def merge_states(self, other: AG) -> AG:
Merge states of two AGs pairwise

"""
merged = self.clone()
merged.states = []
merged.explanations = []
merged.atype = merge_pydantic_models(
self.atype,
other.atype,
name=f"Merged{self.atype.__name__}#{other.atype.__name__}",
)
for self_state in self:
for other_state in other:
if len(self) == len(other):
merged = self.clone()
merged.states = []
merged.explanations = []
merged.atype = merge_pydantic_models(
self.atype,
other.atype,
name=f"Merged{self.atype.__name__}#{other.atype.__name__}",
)
for self_state, other_state in zip(self, other):
merged.states.append(
merged.atype(**other_state.model_dump(), **self_state.model_dump())
)
return merged
else:
raise ValueError(
f"Cannot merge states of AGs with different lengths: {len(self)} != {len(other)}"
)

def compose_states(self, other: AG) -> AG:
"""
Compose states of two AGs pairwise.

"""
merged = self.clone()
merged.states = []
merged.explanations = []
merged.atype = self.atype @ other.atype

for self_state in self:
for other_state in other:
merged.states.append(merged.atype(right=other_state, left=self_state))
return merged
composed = self.clone()
composed.states = []
composed.explanations = []
composed.atype = self.atype @ other.atype

for self_state, other_state in zip(self.states, other.states):
composed.states.append(
composed.atype(source=other_state, target=self_state)
)
return composed

async def map_atypes(self, other: AG) -> ATypeMapping:
if self.verbose_agent:
2 changes: 1 addition & 1 deletion src/agentics/core/async_executor.py
@@ -373,7 +373,7 @@ def __init__(

async def _execute(self, input: str) -> BaseModel:
instructions = f"""
You are a Logical Transducer. Your goal is to generate a JSON object that strictly
conforms to the Output Pydantic schema below:

{self.atype.model_json_schema()}
55 changes: 49 additions & 6 deletions src/agentics/core/atype.py
@@ -202,6 +202,50 @@ def infer_pydantic_type(dtype: Any, sample_values: pd.Series = None) -> Any:


def pydantic_model_from_dict(dict) -> type[BaseModel]:
"""
Create a dynamic Pydantic model class from a sample dictionary.

This utility inspects the provided mapping and generates a new `pydantic.BaseModel`
subclass whose fields correspond to the dictionary keys. For each key, the field
type is inferred from the sample value using `infer_pydantic_type(...)`, and the
resulting field is created with a default of `None` (i.e., optional-by-default in
practice, depending on the inferred type).

Field names are normalized via `sanitize_field_name(...)` to ensure they are valid
Python identifiers and compatible with Pydantic model field naming rules.

The model class name is synthesized as:
"AType#<key1>:<key2>:...:<keyN>"

Parameters
----------
dict : Mapping[str, Any]
A representative dictionary whose keys define field names and whose values
are used to infer field types.

Returns
-------
type[BaseModel]
A newly created Pydantic model class (subclass of `BaseModel`) with fields
derived from the input dictionary.

Notes
-----
- This function uses only the *sample values* present in the input mapping to
infer types; it does not scan multiple rows/records unless you pass richer
`sample_values` to `infer_pydantic_type` yourself.
- All fields are created with `Field(default=None)`, which makes them effectively
nullable unless additional validation is enforced by the inferred type.
- If two different keys sanitize to the same field name, the latter will overwrite
  the former in the generated `fields` dict.

Examples
--------
>>> Sample = pydantic_model_from_dict({"reviewId": 123, "reviewText": "Great!"})
>>> obj = Sample(reviewId=1, reviewText="Nice movie")
>>> obj.model_dump()
{'reviewId': 1, 'reviewText': 'Nice movie'}
"""
model_name = "AType#" + ":".join(dict.keys())
fields = {}

@@ -256,7 +300,7 @@ def create_pydantic_model(
Dynamically create a Pydantic model from a list of field definitions.

Args:
fields: A list of (field_name, type_name, description) tuples.
fields: A list of (field_name, type_name, description, required) tuples.
name: Optional name of the model.

Returns:
@@ -281,7 +325,6 @@
model_name = name

field_definitions = {}
print(fields)
for field_name, type_name, description, required in fields:
ptype = type_mapping[type_name] if type_name in type_mapping else Any
if required:
@@ -592,8 +635,8 @@ def compose_types(A, B, *, name=None):

Composite = create_model(
name,
left=(Optional[A], None),
right=(Optional[B], None),
target=(Optional[A], None),
source=(Optional[B], None),
__base__=BaseModel,
)

@@ -626,7 +669,7 @@ def _istype_matmul(A, B):
def _instance_matmul(a: BaseModel, b: BaseModel):
"""
INSTANCE composition:
a @ b → Composite(left=a, right=b)
a @ b → Composite(target=a, source=b)
"""
if not isinstance(b, BaseModel):
raise TypeError(f"Cannot compose instance {a} with {b}")
@@ -637,7 +680,7 @@ def _instance_matmul(a: BaseModel, b: BaseModel):
CompositeModel = A @ B

# Build structural composite
return CompositeModel(left=a, right=b)
return CompositeModel(target=a, source=b)


BaseModel.__matmul__ = _instance_matmul
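The `target`/`source` convention used by `_instance_matmul` can be sketched without Pydantic. The `State` and `Composite` classes below are hypothetical stand-ins, illustrating only the operator convention:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Composite:
    # Mirrors the composed model's two fields: the left operand of `@`
    # becomes `target`, the right operand becomes `source`.
    target: Any
    source: Any

@dataclass
class State:
    value: Any

    def __matmul__(self, other: "State") -> Composite:
        # a @ b -> Composite(target=a, source=b)
        return Composite(target=self, source=other)

pair = State(1) @ State(2)
```

In the real code the composite class itself is built dynamically with `create_model`, so each pair of types gets its own schema.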