A modular, production-ready framework for building AI-powered applications. LMP provides a unified API for text processing, web search, podcast generation, RAG (Retrieval-Augmented Generation), and multi-agent orchestration.
Building AI applications often requires juggling multiple APIs, managing complex pipelines, and handling various data formats. LMP abstracts this complexity into a clean, composable architecture:
- Unified Interface — One API to access text processing, search, RAG, podcasts, and agents
- Provider Agnostic — Works with OpenAI, Anthropic, Together AI, Groq, and more via OpenRouter
- Production Ready — FastAPI-based services with proper error handling, caching, and logging
- Modular Design — Use only what you need; each program works independently
- Summarization — Generate concise summaries of any text
- Metadata Extraction — Extract topics, entities, sentiment, dates, and locations
- Web Scraping — Extract and process content from any URL
- Custom Prompts — Process text with your own prompts
- Web Search — Search the internet via Exa and Jina APIs
- Grounding — Verify and source claims in text
- Query Generation — Get suggested follow-up queries
- Full Pipeline — Create podcasts from text with voice synthesis
- Multiple Voices — Various voice options via PlayHT
- Music Generation — AI-generated music via FAL Stable Audio
- Episode Management — Create, store, and manage podcast episodes
- Document Q&A — Answer questions from your documents
- Vector Search — ChromaDB integration for semantic search
- Embeddings — OpenAI and Together AI embedding support
- Turn Strategies — Round robin, popcorn, moderated, random, most busy
- Tool Integration — Dynamic tool generation from OpenAPI specs
- Agent Capabilities — Track and manage agent skills
- Model Context Protocol — FastMCP server integration for Claude and other clients
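The turn strategies listed above decide which agent speaks next in a multi-agent conversation. A conceptual sketch of three of them (strategy names come from the list; the function and its logic are illustrative, not LMP's actual implementation in `conversation.py`):

```python
import random

def next_speaker(agents, strategy, last_index=None, busyness=None):
    """Pick the index of the next agent to speak (illustrative only)."""
    if strategy == "round_robin":
        # Cycle through agents in a fixed order.
        return 0 if last_index is None else (last_index + 1) % len(agents)
    if strategy == "random":
        # Any agent may speak next.
        return random.randrange(len(agents))
    if strategy == "most_busy":
        # Hand the turn to the agent with the most pending work.
        return max(range(len(agents)), key=lambda i: busyness[i])
    raise ValueError(f"unknown strategy: {strategy}")
```

Popcorn and moderated strategies would add state (the current speaker nominates, or a moderator agent decides), which is omitted here for brevity.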
```bash
# Clone the repository
git clone https://github.com/chatxbt/lmp-sh.git
cd lmp-sh

# Install dependencies
make install
```

Create a `.env` file with your API keys:
```env
# Required
OPENROUTER_API_KEY=your_key
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1

# Search (at least one)
EXA_API_KEY=your_key
JINA_API_KEY=your_key

# Optional - for additional features
FAL_API_KEY=your_key        # Music/image generation
ANTHROPIC_API_KEY=your_key  # Direct Anthropic access
TOGETHER_API_KEY=your_key   # Together AI models

# Storage (optional)
POSTGRES_CONNECTION_STRING=your_connection_string
SUPABASE_URL=your_url
SUPABASE_KEY=your_key
```

```bash
# Start the main API server
make run-lmp-api

# Or run in development mode with hot reload
make run-lmp-api-dev
```

The API will be available at http://localhost:8000, with interactive documentation at the root URL.
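The server can be called from any HTTP client. As a convenience, here is a minimal Python helper built on the standard library; the paths and payloads follow the curl examples below, and error handling is omitted for brevity:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000"

def build_post(path, payload):
    """Construct a JSON POST request for an LMP endpoint."""
    return request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def call(path, payload):
    """Send the request and decode the JSON response."""
    with request.urlopen(build_post(path, payload)) as resp:
        return json.load(resp)

# Example (requires the server to be running):
# summary = call("/txt/summarize", {"text": "Your long text content here..."})
```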
```bash
# Summarize text
curl -X POST http://localhost:8000/txt/summarize \
  -H "Content-Type: application/json" \
  -d '{"text": "Your long text content here..."}'

# Extract metadata
curl -X POST http://localhost:8000/txt/metadata \
  -H "Content-Type: application/json" \
  -d '{"text": "Analyze this text for topics, entities, and sentiment"}'

# Process webpage
curl -X POST http://localhost:8000/txt/webpage \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/article"}'

# Custom prompt processing
curl -X POST http://localhost:8000/txt/prompt \
  -H "Content-Type: application/json" \
  -d '{"text": "Input text", "prompt": "Translate to French"}'
```
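The text endpoints share one request shape (a JSON body POSTed to a `/txt/*` route), so a small dispatch table keeps client code tidy. This is a sketch: the route strings mirror the curl examples in this section, while the helper itself is illustrative:

```python
import json

# Route table for the text-processing endpoints shown in this section.
TXT_ROUTES = {
    "summarize": "/txt/summarize",
    "metadata": "/txt/metadata",
    "webpage": "/txt/webpage",
    "prompt": "/txt/prompt",
    "ground": "/txt/ground",
}

def txt_payload(operation, **fields):
    """Return (path, JSON body) for a text-processing operation."""
    if operation not in TXT_ROUTES:
        raise KeyError(f"unknown text operation: {operation}")
    return TXT_ROUTES[operation], json.dumps(fields)
```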
```bash
# Ground text with sources
curl -X POST http://localhost:8000/txt/ground \
  -H "Content-Type: application/json" \
  -d '{"text": "Claims to verify with sources"}'
```

```bash
# Search the web
curl -X POST http://localhost:8000/internet/search \
  -H "Content-Type: application/json" \
  -d '{"query": "latest developments in AI"}'
```

```bash
# List available voices
curl http://localhost:8000/podcast/voices

# Create a new podcast
curl -X POST http://localhost:8000/podcast \
  -H "Content-Type: application/json" \
  -d '{"name": "Tech Weekly", "description": "Weekly tech news podcast"}'

# Generate an episode
curl -X POST http://localhost:8000/podcast/{podcast_id}/episode \
  -H "Content-Type: application/json" \
  -d '{"title": "AI News Roundup", "content": "This week in AI..."}'

# Generate music
curl -X POST http://localhost:8000/podcast/music \
  -H "Content-Type: application/json" \
  -d '{"prompt": "upbeat tech podcast intro", "duration_seconds": 30}'
```

```bash
# Query your documents
curl -X POST http://localhost:8000/rag/query \
  -H "Content-Type: application/json" \
  -d '{"query": "What does the documentation say about authentication?"}'
```

```bash
curl http://localhost:8000/health
```

```
lmp-sh/
├── src/
│   ├── programs/              # Core AI programs
│   │   ├── txt.py             # Text processing
│   │   ├── internet.py        # Web search
│   │   ├── conversation.py    # Multi-agent orchestration
│   │   ├── podcast/           # Podcast generation
│   │   └── rag/               # RAG implementations
│   │
│   ├── libs/                  # Shared libraries
│   │   ├── ai/                # AI provider integrations
│   │   │   ├── ell_flow.py    # ELL language model flows
│   │   │   ├── jina_ai.py     # Jina search/read/rerank
│   │   │   └── exa_ai.py      # Exa search
│   │   ├── db/                # Database connectors
│   │   └── settings.py        # Configuration
│   │
│   ├── services/
│   │   └── apis/              # FastAPI services
│   │       ├── lmp/           # Main aggregated API
│   │       ├── txt/           # Text processing API
│   │       ├── podcast/       # Podcast API
│   │       ├── search/        # Search API
│   │       └── rag/           # RAG API
│   │
│   ├── functions/             # Utility functions
│   │   ├── text/              # Text utilities
│   │   └── audio/             # Audio processing
│   │
│   └── mcp/                   # Model Context Protocol
│       └── servers/           # MCP server implementations
│
├── Makefile                   # Build and run commands
├── Dockerfile                 # Container configuration
└── pyproject.toml             # Python dependencies
```
| Variable | Description | Default |
|---|---|---|
| `RUNTIME_ENV` | Environment (`dev`, `stg`, `prd`) | `dev` |
| `DOCS_ENABLED` | Enable OpenAPI docs | `true` |
| `CORS_ORIGIN_LIST` | Allowed CORS origins | `*` (dev) |
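These settings can be applied with plain environment lookups. A sketch of how the documented defaults might be read (treating `CORS_ORIGIN_LIST` as a comma-separated list is an assumption, not a documented format):

```python
import os

def load_general_settings(env=os.environ):
    """Read the general settings, applying the defaults from the table above."""
    return {
        "runtime_env": env.get("RUNTIME_ENV", "dev"),
        # DOCS_ENABLED defaults to true; accept common truthy spellings.
        "docs_enabled": env.get("DOCS_ENABLED", "true").lower() in ("1", "true", "yes"),
        # Assumed comma-separated; "*" allows all origins in dev.
        "cors_origins": env.get("CORS_ORIGIN_LIST", "*").split(","),
    }
```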
| Variable | Description |
|---|---|
| `OPENROUTER_API_KEY` | OpenRouter API key (recommended) |
| `OPENROUTER_BASE_URL` | OpenRouter base URL |
| `DEFAULT_MODEL` | Default LLM model |
| `EMBEDDING_MODEL` | Embedding model for RAG |
| Variable | Description |
|---|---|
| `EXA_API_KEY` | Exa search API key |
| `JINA_API_KEY` | Jina AI API key |
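Because search requires at least one of these keys, a fail-fast check at startup is handy. A sketch, not LMP's actual validation:

```python
import os

def check_search_keys(env=os.environ):
    """Return the configured search providers; raise if none are set."""
    providers = [name for name, var in (("exa", "EXA_API_KEY"),
                                        ("jina", "JINA_API_KEY"))
                 if env.get(var)]
    if not providers:
        raise RuntimeError("Set EXA_API_KEY or JINA_API_KEY (at least one is required).")
    return providers
```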
```bash
make run-txt-api-dev          # Text processing only
make run-podcast-service-dev  # Podcast generation only
make run-inv3-docs-service    # Documentation service
```

```bash
# Build the image
make build-docker

# Run the container
docker run -p 8000:8000 --env-file .env lmp-service
```

```bash
make generate-requirements
```

```bash
make clean
```

- Framework: FastAPI with Pydantic
- AI Integration: ELL, LangChain, ControlFlow, Marvin
- LLM Providers: OpenRouter, OpenAI, Anthropic, Together AI, Groq
- Search: Exa, Jina AI
- Vector DB: ChromaDB, pgvector
- Audio: PlayHT, FAL Stable Audio
- Storage: S3/DigitalOcean Spaces, Supabase, PostgreSQL
MIT
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Built by the ChatXBT Team