A powerful orchestration system that leverages Large Language Models (LLMs) to interpret natural language requests and orchestrate containerized services.
AI Orchestrator combines the power of LLMs with containerized microservices to create an intelligent workflow automation system. The system interprets high-level natural language requests, determines the appropriate processing pipeline, and executes containerized tasks in the optimal sequence, all without requiring technical syntax knowledge from users.
"Summarize this article" → Text Summarization Container → Concise Summary
- 🧠 LLM-Powered Decision Engine: Uses Groq's API to analyze user requests and determine the optimal container execution strategy
- 📦 Containerized Microservices: Modular processing tasks isolated in Docker containers for maximum flexibility and scalability
- 🔄 Smart Orchestration: Automatically executes containers in the optimal sequence based on request semantics
- 💻 Dual Interfaces: Access via an intuitive web UI or a powerful command-line interface
- ⚠️ Resilient Fallback: Rule-based container selection keeps the system operational even when the LLM API is unavailable
- ⚡ Parallel Execution: Compatible tasks can run concurrently for improved performance
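The rule-based fallback can be as simple as keyword matching against the request. A minimal sketch, assuming keyword lists and the `fallback_select_containers` helper are illustrative rather than the project's actual implementation:

```python
# Hypothetical rule-based fallback: map request keywords to container
# names so the system still works when the LLM API is unreachable.
KEYWORD_MAP = {
    "summar": "text_summarization",
    "sentiment": "sentiment_analysis",
    "positive": "sentiment_analysis",
    "clean": "data_cleaning",
}

def fallback_select_containers(request: str) -> list[str]:
    """Return container names whose keywords appear in the request."""
    request = request.lower()
    selected = []
    for keyword, container in KEYWORD_MAP.items():
        if keyword in request and container not in selected:
            selected.append(container)
    return selected
```

The real decision engine would only take this path after the Groq call fails; the keyword table here is deliberately tiny.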
```
ai_orchestrator/
├── containers/               # Containerized microservices
│   ├── data_cleaning/        # Text normalization service
│   ├── sentiment_analysis/   # Sentiment analysis service
│   └── text_summarization/   # Text summarization service
├── orchestrator/             # Core orchestration logic
│   ├── app.py                # Flask web application
│   ├── orchestrator.py       # Container orchestration logic
│   ├── llm_integration.py    # LLM decision engine
│   └── templates/            # Web interface templates
├── .env                      # Environment variables (API keys)
├── requirements.txt          # Python dependencies
└── run.sh                    # Startup script
```
- Python 3.9+
- Docker and Docker Compose
- Groq API key (for LLM integration)
1. Clone the repository

   ```bash
   git clone https://github.com/yourusername/AI_Orchestrator_with_Containers.git
   cd AI_Orchestrator_with_Containers
   ```

2. Create and activate a virtual environment

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Configure environment variables

   ```bash
   cp .env.example .env
   # Edit the .env file to add your Groq API key:
   # LLM_API_KEY=your_groq_api_key_here
   ```

5. Build the Docker containers

   ```bash
   # Using the automated script
   ./run.sh

   # Or manually
   docker build -t ai-orchestrator/data-cleaning containers/data_cleaning/
   docker build -t ai-orchestrator/sentiment-analysis containers/sentiment_analysis/
   docker build -t ai-orchestrator/text-summarization containers/text_summarization/
   ```

6. Start the web server

   ```bash
   python orchestrator/app.py
   ```

7. Open your browser and navigate to `http://localhost:5000`

8. Enter your natural language request in the input field and submit
Process text directly:

```bash
python orchestrator/orchestrator.py --request "Clean this text and analyze sentiment" --input-text "This is AMAZING! I love this product so much!!!"
```

Process from a file and save to an output file:

```bash
python orchestrator/orchestrator.py --request "Summarize this text" --input-file some_article.txt --output-file summary.txt
```

Advanced usage with parameters:

```bash
# Summarize to a specific number of sentences
python orchestrator/orchestrator.py --request "Summarize this text to 2 sentences" --input-file long_article.txt

# Enable parallel execution
python orchestrator/orchestrator.py --request "Clean this text" --input-text "Text to clean" --parallel
```

The AI Orchestrator currently supports the following containerized services:
| Service | Description | Example Command |
|---|---|---|
| Text Summarization | Extracts key sentences to create concise summaries | "Summarize this article" |
| Data Cleaning | Normalizes text by removing special characters and standardizing format | "Clean this dataset" |
| Sentiment Analysis | Analyzes emotional tone with positive/negative scoring | "What's the sentiment of this review?" |
```bash
python orchestrator/orchestrator.py --request "Clean this text" --input-file messy_text.txt --output-file cleaned_text.txt
```

Input:

```
THIS is a SAMPLE text with LOTS of CAPITALS!!!
It also has extra  spaces, and @#$%&* special characters.
```

Output:

```
this is a sample text with lots of capitals it also has extra spaces and special characters
```
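The cleaning shown above can be approximated in a few lines of standard-library Python. This is a sketch of the observable behavior, not the container's actual code:

```python
import re

def clean_text(text: str) -> str:
    """Lowercase, strip special characters, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # drop special characters
    text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
    return text

messy = ("THIS is a SAMPLE text with LOTS of CAPITALS!!!\n"
         "It also has extra  spaces, and @#$%&* special characters.")
print(clean_text(messy))
# → this is a sample text with lots of capitals it also has extra spaces and special characters
```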
```bash
python orchestrator/orchestrator.py --request "Is this review positive?" --input-text "This product is amazing! I love it."
```

Output:

```json
{
  "score": 0.845,
  "positive_words": 2,
  "negative_words": 0,
  "classification": "positive"
}
```

```bash
python orchestrator/orchestrator.py --request "Summarize this article to 3 sentences" --input-file long_article.txt
```

The system consists of three main components:
- Orchestrator Engine: Core logic that processes requests, makes decisions, and manages container lifecycle
- LLM Integration: Connects to Groq's API for natural language understanding and decision making
- Container Services: Independent microservices that perform specific data processing tasks
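Executing a container service amounts to building a `docker run` invocation and piping text through it. A sketch under the assumption that images are tagged as in the build commands above; the `run_container` helper is illustrative, not the orchestrator's actual code:

```python
import subprocess

def build_run_command(image: str) -> list[str]:
    """Compose the docker run invocation for a processing container."""
    return ["docker", "run", "--rm", "-i", f"ai-orchestrator/{image}"]

def run_container(image: str, text: str) -> str:
    """Pipe text through a container via stdin/stdout (requires Docker)."""
    result = subprocess.run(build_run_command(image), input=text,
                            capture_output=True, text=True, check=True)
    return result.stdout

# Example (needs Docker and the built images):
# cleaned = run_container("data-cleaning", "Some MESSY text!!!")
```

`--rm` discards the container after each run, and `-i` keeps stdin open so text can be streamed in without mounting files.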
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  User Request   │────▶│  Orchestrator   │────▶│  LLM Decision   │
└─────────────────┘     │     Engine      │◀────│     Engine      │
                        └────────┬────────┘     └─────────────────┘
                                 │
                                 ▼
                        ┌─────────────────┐
                        │    Container    │
                        │    Execution    │
                        └────────┬────────┘
                                 │
                        ┌────────┴────────┐
                        ▼                 ▼
              ┌─────────────────┐   ┌─────────────────┐
              │ Data Processing │   │     Result      │
              │   Containers    │   │   Formatting    │
              └─────────────────┘   └─────────────────┘
```
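The flow above amounts to a sequential pipeline: the decision engine returns an ordered list of stages, and each stage's output becomes the next stage's input. A minimal sketch with stand-in stage functions (the real system invokes Docker containers at each step):

```python
def run_pipeline(stages, text):
    """Apply each processing stage in order, threading the text through."""
    for stage in stages:
        text = stage(text)
    return text

# Stand-in stages for illustration only.
clean = lambda t: t.lower().strip()
summarize = lambda t: t.split(".")[0] + "."

result = run_pipeline([clean, summarize], "  FIRST sentence. Second sentence.  ")
# → "first sentence."
```

Parallel execution relaxes this loop: stages with no data dependency on each other can run concurrently and have their results merged afterwards.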
- Container Build Failures: Ensure Docker is running and you have sufficient permissions
- LLM API Errors: Verify your Groq API key is valid and properly set in the `.env` file
- Missing Dependencies: Run `pip install -r requirements.txt` to install required packages
- File Mounting Issues: For Docker-related errors, check your Docker installation and permissions
- Groq Client Errors: If you see "proxies" errors, update the groq package with `pip install groq==0.4.1`
For more detailed logging:

```bash
export LOG_LEVEL=DEBUG
python orchestrator/orchestrator.py --request "Your request" --input-text "Your text"
```

To create a new container:
1. Create a directory in `containers/` (e.g., `containers/new_service/`)

2. Implement your processing script with standard input/output handling:

   ```python
   import sys

   if __name__ == "__main__":
       # Read from a file argument or stdin
       if len(sys.argv) > 1:
           with open(sys.argv[1], 'r') as f:
               data = f.read()
       else:
           data = sys.stdin.read()

       # Process data (replace with your service's logic)
       result = your_processing_function(data)

       # Write to a file argument or stdout
       if len(sys.argv) > 2:
           with open(sys.argv[2], 'w') as f:
               f.write(result)
       else:
           print(result)
   ```

3. Create a Dockerfile

4. Build the container:

   ```bash
   docker build -t ai-orchestrator/new-service containers/new_service/
   ```

5. Register the container in `orchestrator/llm_integration.py`
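Registration could be as simple as adding an entry to a service registry that the decision engine consults when planning a pipeline. A hypothetical sketch; the actual structure in `llm_integration.py` may differ:

```python
# Hypothetical registry: service name -> (Docker image, description the
# LLM sees when choosing which services to run).
SERVICE_REGISTRY = {
    "data_cleaning": ("ai-orchestrator/data-cleaning",
                      "Normalizes text and removes special characters"),
    "sentiment_analysis": ("ai-orchestrator/sentiment-analysis",
                           "Scores text as positive or negative"),
    "text_summarization": ("ai-orchestrator/text-summarization",
                           "Extracts key sentences into a summary"),
    "new_service": ("ai-orchestrator/new-service",
                    "Describe what your new service does here"),
}

def image_for(service: str) -> str:
    """Look up the Docker image for a registered service."""
    return SERVICE_REGISTRY[service][0]
```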
Contributions are welcome! Here's how you can help improve the AI Orchestrator:
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
For questions or issues, please open an issue on GitHub or contact the project maintainers.
Made with ❤️ by Tariq Omer