# AI Agent Generator

A PyTorch-based framework that automatically generates AI agents tailored to user needs. Provide a simple natural-language prompt, and the system constructs a transformer-based agent (with embeddings, attention, and slot-filling) ready to be deployed as a conversational assistant for industry, customer support, education, and more. (Work in progress: the transformer components come first; the agent-workflow generation is not yet built.)
## Table of Contents

- Overview
- Features
- Architecture
- Getting Started
- Usage
- Data & Training
- Project Structure
- Contributing
- License
- References
## Overview

This project lets you turn a one-line prompt (e.g., “Build a customer-support agent for the educational sector”) into a fully operational transformer-based AI agent. The pipeline:
- Prompt Parsing: Extract high-level intent and required details (industry, domain, slots).
- Agent Generation: Auto-assemble embeddings, encoder–decoder layers, and slot classifiers.
- Deployment Ready: Export the trained model and inference loop for integration into chat applications.
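To make the first step concrete, here is a minimal sketch of what prompt parsing could look like. The function name, keyword tables, and output shape are illustrative assumptions, not the project's actual API:

```python
# Hypothetical sketch of the prompt-parsing step: map a natural-language
# prompt to a high-level intent plus the slots the agent will need.
# The keyword tables below stand in for a real intent classifier.
def parse_prompt(prompt: str) -> dict:
    prompt_lower = prompt.lower()

    industries = ["education", "retail", "healthcare", "finance"]
    intents = {
        "support": ["support", "questions", "help"],
        "sales": ["sales", "sell"],
        "feedback": ["feedback", "survey"],
    }

    # Pick the first industry keyword found, and the first matching intent.
    industry = next((w for w in industries if w in prompt_lower), None)
    intent = next((name for name, kws in intents.items()
                   if any(kw in prompt_lower for kw in kws)), "support")

    return {"intent": intent,
            "slots": {"industry": industry, "urgency": None, "product": None}}

result = parse_prompt("Build a customer-support agent for the educational sector")
# result["intent"] == "support", result["slots"]["industry"] == "education"
```

A real implementation would likely replace the keyword tables with a learned classifier, but the contract — prompt in, intent + slot schema out — is the same.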
## Features

- **Prompt-Driven Agent Creation**: Build a specialized chatbot by simply describing your use case.
- **Rich Embeddings**: Token + role + turn-index embeddings with scaling, normalization, and dropout.
- **Dynamic Slot-Filling**: Configurable slots (industry, urgency, product, etc.) extracted via classification and span prediction.
- **Prefix-Caching Decoder**: Efficient incremental generation of clarifying questions and final responses.
- **Modular Design**: Swap in new attention heads, embeddings, or slot schemas with minimal changes.
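The "rich embeddings" feature can be sketched as follows — sum token, role, and turn-index embeddings, scale by √d_model, then normalize and apply dropout. The class and argument names are assumptions; the project's `RichEmbeddings` may differ in detail:

```python
# Minimal sketch of token + role + turn-index embeddings with scaling,
# normalization, and dropout (names are illustrative assumptions).
import math
import torch
import torch.nn as nn

class RichEmbeddings(nn.Module):
    def __init__(self, vocab_size, num_roles, max_turns, d_model, dropout=0.1):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.role = nn.Embedding(num_roles, d_model)  # e.g. user / agent / system
        self.turn = nn.Embedding(max_turns, d_model)  # dialogue turn index
        self.norm = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)
        self.scale = math.sqrt(d_model)

    def forward(self, tokens, roles, turns):
        # Scale token embeddings as in the original Transformer, then add
        # the role and turn signals before normalizing.
        x = self.tok(tokens) * self.scale + self.role(roles) + self.turn(turns)
        return self.drop(self.norm(x))

emb = RichEmbeddings(vocab_size=100, num_roles=3, max_turns=10, d_model=32)
tokens = torch.randint(0, 100, (2, 5))        # (batch, seq_len)
roles = torch.zeros(2, 5, dtype=torch.long)
turns = torch.ones(2, 5, dtype=torch.long)
out = emb(tokens, roles, turns)               # shape (2, 5, 32)
```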
## Architecture

- **Prompt Parser**: CLI or API to parse a user prompt into an intent + slot schema.
- **Agent Builder**: Programmatically instantiate `TransformerChatbot` with slot definitions.
- **Embedding Layer**: `RichEmbeddings`: tokens + roles + turns.
- **Transformer Core**: Encoder–decoder stacks with multi-head attention and feed-forward networks.
- **Slot Classifier & Extractor**: Heads to mark slots as filled/missing and retrieve values.
- **Inference Engine**: Dialogue manager to run multi-turn Q&A, fill slots, and finalize the agent.
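As an illustration of the slot-classifier component, here is one plausible shape for a filled/missing head: pool the encoder states and emit one probability per slot. This is a sketch under assumed names, not the project's actual `slot_classifier.py`:

```python
# Illustrative slot-classifier head: from pooled encoder states, predict
# for each slot whether it is filled or still missing (names are assumptions).
import torch
import torch.nn as nn

class SlotFilledClassifier(nn.Module):
    def __init__(self, d_model, num_slots):
        super().__init__()
        # One filled/missing logit per slot.
        self.head = nn.Linear(d_model, num_slots)

    def forward(self, encoder_states):
        # Mean-pool over the sequence dimension, then classify each slot.
        pooled = encoder_states.mean(dim=1)       # (batch, d_model)
        return torch.sigmoid(self.head(pooled))   # (batch, num_slots), in [0, 1]

clf = SlotFilledClassifier(d_model=32, num_slots=3)  # e.g. industry, urgency, product
probs = clf(torch.randn(2, 5, 32))
missing = probs < 0.5  # slots below threshold still need a clarifying question
```

The inference engine would loop over `missing`, ask a clarifying question for each unfilled slot, and stop once every probability clears the threshold.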
## Getting Started

### Prerequisites

- Python 3.7+
- PyTorch 1.7+
- NumPy

### Installation
```bash
git clone https://github.com/yourusername/ai-agent-generator.git
cd ai-agent-generator
python -m venv .venv
source .venv/bin/activate   # or .venv\Scripts\activate on Windows
pip install -r requirements.txt
```

## Usage

```bash
# Example: create an agent for educational customer support
python run_agent_generator.py \
  --prompt "Build an AI agent for answering student questions in the educational sector" \
  --output_dir ./agents/education_bot
```

This produces:

- `model.pth`: trained weights
- `config.json`: slot definitions and hyperparameters
- `inference.py`: script to run the chat interface
### Custom Slot Schemas

Pass a JSON schema to define custom slots:

```json
{
  "slots": ["industry", "department", "urgency"],
  "intents": ["support", "sales", "feedback"]
}
```

Use `--schema my_schema.json` when running `run_agent_generator.py`.
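For reference, a schema file like the one above could be loaded and validated with a few lines of Python. The function name and validation rules here are assumptions matching the JSON keys shown, not the project's actual loader:

```python
# Hypothetical loader for a custom slot schema; validates that the file
# defines the "slots" and "intents" lists used in the example above.
import json

def load_schema(path: str) -> dict:
    with open(path) as f:
        schema = json.load(f)
    for key in ("slots", "intents"):
        if key not in schema or not isinstance(schema[key], list):
            raise ValueError(f"schema must define a '{key}' list")
    return schema
```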
## Data & Training

- **Prepare Dialogues**: Multi-turn conversations with slot annotations.
- **Tokenize & Tag**: Generate token, role, and turn tensors.
- **Train**:
  ```bash
  python train.py --data data/dialogues.json
  ```
- **Evaluate**: Slot accuracy, dialogue length, and response quality.
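The "Tokenize & Tag" step above can be sketched as follows: flatten an annotated dialogue into parallel token, role, and turn-index tensors. The vocabulary handling and role mapping are simplified assumptions:

```python
# Sketch of tokenize-and-tag: turn an annotated dialogue into parallel
# token, role, and turn-index tensors (simplified; whitespace tokenization
# and a growing word-level vocabulary are assumptions).
import torch

ROLE_IDS = {"user": 0, "agent": 1}

def dialogue_to_tensors(dialogue, vocab):
    tokens, roles, turns = [], [], []
    for turn_idx, (role, text) in enumerate(dialogue):
        for word in text.lower().split():
            tokens.append(vocab.setdefault(word, len(vocab)))  # assign new ids on the fly
            roles.append(ROLE_IDS[role])
            turns.append(turn_idx)
    return torch.tensor(tokens), torch.tensor(roles), torch.tensor(turns)

vocab = {}
dialogue = [("user", "My laptop broke"), ("agent", "Which model is it")]
tok, rol, trn = dialogue_to_tensors(dialogue, vocab)
# tok, rol, trn are aligned 1-D tensors, one entry per token
```

These three tensors are exactly what the rich embedding layer consumes: one embedding lookup per stream, summed per token.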
## Project Structure

```
ai-agent-generator/
├── src/
│   ├── embeddings.py            # RichEmbeddings
│   ├── positional_encoding.py
│   ├── attention.py             # MultiHeadAttention & ScaledDotProduct
│   ├── transformer.py           # TransformerChatbot
│   ├── slot_classifier.py
│   ├── run_agent_generator.py
│   └── train.py
├── agents/                      # Saved generated agents
├── data/                        # Sample dialogue datasets
├── requirements.txt
└── README.md
```
## Contributing

- Fork the repo
- Create a branch (`git checkout -b feature/new-agent`)
- Commit your changes
- Push to your fork and open a PR

Please add tests and update documentation for new features.
## License

Feel free to adapt and extend this framework to generate AI agents for any domain!
## References

- Vaswani, A. et al. (2017). "Attention Is All You Need." NeurIPS.
- "The Annotated Transformer" by Harvard NLP
- "The Illustrated Transformer" by Jay Alammar
- PyTorch's official Transformer tutorial