LMP.SH — Language Model Programs

A modular, production-ready framework for building AI-powered applications. LMP provides a unified API for text processing, web search, podcast generation, RAG (Retrieval-Augmented Generation), and multi-agent orchestration.

Why LMP?

Building AI applications often requires juggling multiple APIs, managing complex pipelines, and handling various data formats. LMP abstracts this complexity into a clean, composable architecture:

  • Unified Interface — One API to access text processing, search, RAG, podcasts, and agents
  • Provider Agnostic — Works with OpenAI, Anthropic, Together AI, Groq, and more via OpenRouter
  • Production Ready — FastAPI-based services with proper error handling, caching, and logging
  • Modular Design — Use only what you need; each program works independently
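
To make the unified-interface point concrete, here is a minimal sketch of what a thin client over the HTTP API might look like. The endpoint paths come from the API reference below; the `LMPClient` class and its method names are hypothetical, not part of the project.

```python
import json
import urllib.request

class LMPClient:
    """Hypothetical thin client over the LMP HTTP API (illustrative only)."""

    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url.rstrip("/")

    def _request(self, path, payload):
        # Build a JSON POST request for the given endpoint.
        # Send it with urllib.request.urlopen(request) against a running server.
        return urllib.request.Request(
            self.base_url + path,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    def summarize(self, text):
        return self._request("/txt/summarize", {"text": text})

    def search(self, query):
        return self._request("/internet/search", {"query": query})

req = LMPClient().summarize("Your long text content here...")
print(req.full_url)  # http://localhost:8000/txt/summarize
```

Because every program sits behind the same JSON-over-HTTP surface, adding another endpoint to a client like this is one method per route.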

Features

Text Processing

  • Summarization — Generate concise summaries of any text
  • Metadata Extraction — Extract topics, entities, sentiment, dates, and locations
  • Web Scraping — Extract and process content from any URL
  • Custom Prompts — Process text with your own prompts

Internet Search

  • Web Search — Search the internet via Exa and Jina APIs
  • Grounding — Verify and source claims in text
  • Query Generation — Get suggested follow-up queries

Podcast Generation

  • Full Pipeline — Create podcasts from text with voice synthesis
  • Multiple Voices — Various voice options via PlayHT
  • Music Generation — AI-generated music via FAL Stable Audio
  • Episode Management — Create, store, and manage podcast episodes

RAG (Retrieval-Augmented Generation)

  • Document Q&A — Answer questions from your documents
  • Vector Search — ChromaDB integration for semantic search
  • Embeddings — OpenAI and Together AI embedding support

Multi-Agent Orchestration

  • Turn Strategies — Round robin, popcorn, moderated, random, most busy
  • Tool Integration — Dynamic tool generation from OpenAPI specs
  • Agent Capabilities — Track and manage agent skills
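
The turn strategies are not specified in detail here; as an illustration of the simplest one, round robin amounts to cycling through the agent list in fixed order (a standalone sketch, not the project's implementation):

```python
from itertools import cycle

def round_robin(agents):
    """Yield the next speaker in fixed order, wrapping around."""
    return cycle(agents)

turns = round_robin(["planner", "researcher", "writer"])
order = [next(turns) for _ in range(5)]
print(order)  # ['planner', 'researcher', 'writer', 'planner', 'researcher']
```

The other strategies vary only in how the next speaker is chosen: popcorn lets the current speaker pick, moderated routes every turn through a moderator agent, and random/most-busy select by chance or by queue depth.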

MCP Support

  • Model Context Protocol — FastMCP server integration for Claude and other clients

Quick Start

Installation

# Clone the repository
git clone https://github.com/chatxbt/lmp-sh.git
cd lmp-sh

# Install dependencies
make install

Environment Setup

Create a .env file with your API keys:

# Required
OPENROUTER_API_KEY=your_key
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1

# Search (at least one)
EXA_API_KEY=your_key
JINA_API_KEY=your_key

# Optional - for additional features
FAL_API_KEY=your_key              # Music/image generation
ANTHROPIC_API_KEY=your_key        # Direct Anthropic access
TOGETHER_API_KEY=your_key         # Together AI models

# Storage (optional)
POSTGRES_CONNECTION_STRING=your_connection_string
SUPABASE_URL=your_url
SUPABASE_KEY=your_key
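
A quick way to sanity-check these variables before starting the server is a standalone script like the following. This is a sketch based on the requirements listed above; the framework's own settings handling lives in src/libs/settings.py and may differ.

```python
import os

REQUIRED = ["OPENROUTER_API_KEY", "OPENROUTER_BASE_URL"]
SEARCH = ["EXA_API_KEY", "JINA_API_KEY"]  # at least one of these must be set

def check_env(env=None):
    """Raise if required settings are missing from the given mapping."""
    env = os.environ if env is None else env
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing required settings: {', '.join(missing)}")
    if not any(env.get(k) for k in SEARCH):
        raise RuntimeError("Set at least one of EXA_API_KEY or JINA_API_KEY")

check_env({
    "OPENROUTER_API_KEY": "sk-example",
    "OPENROUTER_BASE_URL": "https://openrouter.ai/api/v1",
    "EXA_API_KEY": "exa-example",
})
print("environment OK")
```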

Run the API

# Start the main API server
make run-lmp-api

# Or run in development mode with hot reload
make run-lmp-api-dev

The API will be available at http://localhost:8000 with interactive documentation at the root URL.

API Reference

Text Processing — /txt

# Summarize text
curl -X POST http://localhost:8000/txt/summarize \
  -H "Content-Type: application/json" \
  -d '{"text": "Your long text content here..."}'

# Extract metadata
curl -X POST http://localhost:8000/txt/metadata \
  -H "Content-Type: application/json" \
  -d '{"text": "Analyze this text for topics, entities, and sentiment"}'

# Process webpage
curl -X POST http://localhost:8000/txt/webpage \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/article"}'

# Custom prompt processing
curl -X POST http://localhost:8000/txt/prompt \
  -H "Content-Type: application/json" \
  -d '{"text": "Input text", "prompt": "Translate to French"}'

# Ground text with sources
curl -X POST http://localhost:8000/txt/ground \
  -H "Content-Type: application/json" \
  -d '{"text": "Claims to verify with sources"}'

Internet Search — /internet

# Search the web
curl -X POST http://localhost:8000/internet/search \
  -H "Content-Type: application/json" \
  -d '{"query": "latest developments in AI"}'

Podcast Generation — /podcast

# List available voices
curl http://localhost:8000/podcast/voices

# Create a new podcast
curl -X POST http://localhost:8000/podcast \
  -H "Content-Type: application/json" \
  -d '{"name": "Tech Weekly", "description": "Weekly tech news podcast"}'

# Generate an episode
curl -X POST http://localhost:8000/podcast/{podcast_id}/episode \
  -H "Content-Type: application/json" \
  -d '{"title": "AI News Roundup", "content": "This week in AI..."}'

# Generate music
curl -X POST http://localhost:8000/podcast/music \
  -H "Content-Type: application/json" \
  -d '{"prompt": "upbeat tech podcast intro", "duration_seconds": 30}'

RAG — /rag

# Query your documents
curl -X POST http://localhost:8000/rag/query \
  -H "Content-Type: application/json" \
  -d '{"query": "What does the documentation say about authentication?"}'
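
Behind a query like this, retrieval means embedding the query and ranking stored document chunks by vector similarity (handled by ChromaDB and the configured embedding model in this project). The core ranking step can be illustrated with plain cosine similarity over toy vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k document vectors most similar to the query."""
    scored = sorted(enumerate(doc_vecs),
                    key=lambda iv: cosine(query_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored[:k]]

docs = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
print(top_k([1.0, 0.1], docs, k=2))  # [0, 1]
```

The retrieved chunks are then passed to the LLM as context for answer generation, which is the "augmented" half of RAG.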

Health Check

curl http://localhost:8000/health

Architecture

lmp-sh/
├── src/
│   ├── programs/           # Core AI programs
│   │   ├── txt.py          # Text processing
│   │   ├── internet.py     # Web search
│   │   ├── conversation.py # Multi-agent orchestration
│   │   ├── podcast/        # Podcast generation
│   │   └── rag/            # RAG implementations
│   │
│   ├── libs/               # Shared libraries
│   │   ├── ai/             # AI provider integrations
│   │   │   ├── ell_flow.py # ELL language model flows
│   │   │   ├── jina_ai.py  # Jina search/read/rerank
│   │   │   └── exa_ai.py   # Exa search
│   │   ├── db/             # Database connectors
│   │   └── settings.py     # Configuration
│   │
│   ├── services/
│   │   └── apis/           # FastAPI services
│   │       ├── lmp/        # Main aggregated API
│   │       ├── txt/        # Text processing API
│   │       ├── podcast/    # Podcast API
│   │       ├── search/     # Search API
│   │       └── rag/        # RAG API
│   │
│   ├── functions/          # Utility functions
│   │   ├── text/           # Text utilities
│   │   └── audio/          # Audio processing
│   │
│   └── mcp/                # Model Context Protocol
│       └── servers/        # MCP server implementations
│
├── Makefile                # Build and run commands
├── Dockerfile              # Container configuration
└── pyproject.toml          # Python dependencies

Configuration

API Settings

| Variable | Description | Default |
| --- | --- | --- |
| RUNTime_ENV | Environment (dev, stg, prd) | dev |
| DOCS_ENABLED | Enable OpenAPI docs | true |
| CORS_ORIGIN_LIST | Allowed CORS origins | * (dev) |
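
CORS_ORIGIN_LIST is typically supplied as a comma-separated string. A possible way to turn it into the list FastAPI's CORS middleware expects is sketched below; the actual parsing in src/libs/settings.py may differ, and the dev-mode wildcard default is an assumption drawn from the table above.

```python
def parse_cors_origins(value, runtime_env="dev"):
    """Split a comma-separated origin list; fall back to '*' in dev."""
    if not value:
        return ["*"] if runtime_env == "dev" else []
    return [origin.strip() for origin in value.split(",") if origin.strip()]

print(parse_cors_origins("https://app.example.com, https://admin.example.com"))
# ['https://app.example.com', 'https://admin.example.com']
```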

AI Provider Settings

| Variable | Description |
| --- | --- |
| OPENROUTER_API_KEY | OpenRouter API key (recommended) |
| OPENROUTER_BASE_URL | OpenRouter base URL |
| DEFAULT_MODEL | Default LLM model |
| EMBEDDING_MODEL | Embedding model for RAG |

Search Settings

| Variable | Description |
| --- | --- |
| EXA_API_KEY | Exa search API key |
| JINA_API_KEY | Jina AI API key |

Development

Run Individual Services

make run-txt-api-dev          # Text processing only
make run-podcast-service-dev  # Podcast generation only
make run-inv3-docs-service    # Documentation service

Docker

# Build the image
make build-docker

# Run the container
docker run -p 8000:8000 --env-file .env lmp-service

Generate Requirements

make generate-requirements

Clean Build Artifacts

make clean

Tech Stack

  • Framework: FastAPI with Pydantic
  • AI Integration: ELL, LangChain, ControlFlow, Marvin
  • LLM Providers: OpenRouter, OpenAI, Anthropic, Together AI, Groq
  • Search: Exa, Jina AI
  • Vector DB: ChromaDB, pgvector
  • Audio: PlayHT, FAL Stable Audio
  • Storage: S3/DigitalOcean Spaces, Supabase, PostgreSQL

License

MIT

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Authors

Built by the ChatXBT Team
