This repository contains sample applications that demonstrate LLM observability and tracing capabilities using Langfuse Cloud.
- Python 3.8+
- Langfuse Cloud account (free at https://cloud.langfuse.com)
- OpenAI API key (for demo purposes)
- Visit https://cloud.langfuse.com and create a free account
- Go to Settings → API Keys
- Copy your Public Key and Secret Key
- Create a `.env` file:

```bash
cp env.example .env
# Edit .env with your actual keys
```

- Install the dependencies:

```bash
pip install -r requirements.txt
```

- Run the demos:

```bash
# Run all demos at once
python run_all_demos.py

# Or run individually
python simple_chat_demo.py
python rag_demo.py
python langchain_demo.py
```

After running the demos, visit https://cloud.langfuse.com to see:
- **Tracing**
  - Real-time traces of all LLM calls
  - Request/response details
  - Performance metrics (latency, token usage)
  - Error tracking
- **Sessions**
  - Grouped conversations and workflows
  - User interaction patterns
  - Session-level analytics
- **Analytics**
  - Token usage and costs
  - Model performance comparison
  - Usage patterns over time
- **Prompt Management**
  - Prompt versioning and testing
  - A/B testing capabilities
  - Prompt optimization insights
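Token-usage and cost figures like the ones Langfuse displays come from simple per-token arithmetic. A minimal sketch of that calculation (the per-1K prices here are hypothetical placeholders, not real model pricing):

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  prompt_price_per_1k, completion_price_per_1k):
    """Rough cost estimate from token counts. Prices are passed in because
    real per-model pricing changes; the values used below are placeholders."""
    return (prompt_tokens / 1000) * prompt_price_per_1k \
         + (completion_tokens / 1000) * completion_price_per_1k

# e.g. 1,000 prompt tokens at $0.01/1K plus 500 completion tokens at $0.03/1K
cost = estimate_cost(1000, 500, prompt_price_per_1k=0.01,
                     completion_price_per_1k=0.03)
```

Langfuse does this automatically per model; the sketch just shows where the numbers come from.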
- **simple_chat_demo.py**
  - Basic LLM conversation tracing
  - Multiple question-answer pairs
  - Performance metrics collection
- **rag_demo.py**
  - Document retrieval simulation
  - Context assembly
  - Multi-step LLM pipeline
  - Complex workflow tracing
- **langchain_demo.py**
  - LangChain integration
  - Conversation memory
  - Multi-step workflows
  - Chain composition tracing
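The retrieval-and-context-assembly steps simulated in `rag_demo.py` can be sketched roughly like this (function names and the keyword-overlap ranking are illustrative, not the actual script's implementation):

```python
def retrieve(query, documents, k=2):
    """Toy retrieval: rank documents by keyword overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def assemble_context(query, documents):
    """Build the prompt context that would be sent to the LLM."""
    chunks = retrieve(query, documents)
    return "Context:\n" + "\n---\n".join(chunks) + f"\n\nQuestion: {query}"
```

In the traced demo, each of these steps would appear as its own span nested under the pipeline's trace.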
Here are the key screenshots you should capture:
- **Langfuse Dashboard Home**
  - Overview of traces and sessions
  - Recent activity feed
- **Trace Detail View**
  - Individual trace with timing
  - Input/output data
  - Nested spans for complex workflows
- **Analytics Dashboard**
  - Token usage charts
  - Cost analysis
  - Performance metrics
- **Sessions View**
  - Grouped conversations
  - User journey mapping
- **Prompts Management**
  - Prompt library
  - Version comparison
  - A/B testing interface
```bash
# Langfuse Cloud Configuration
LANGFUSE_PUBLIC_KEY=pk-lf-your-key-here
LANGFUSE_SECRET_KEY=sk-lf-your-key-here
LANGFUSE_HOST=https://cloud.langfuse.com

# OpenAI Configuration
OPENAI_API_KEY=your-openai-api-key-here
```

You can modify the demo scripts to:
- Add your own prompts and questions
- Integrate with different LLM providers
- Add custom metrics and tags
- Test different model parameters
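As one illustration of a custom metric, you could time each call yourself and attach the result as metadata or tags on a trace. This is a generic wrapper pattern, not the Langfuse SDK's own API:

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run an LLM call and return (result, metrics). The metrics dict is the
    kind of payload you could attach to a trace as metadata/tags; its shape
    here is illustrative, not a Langfuse SDK structure."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    latency_ms = (time.perf_counter() - start) * 1000
    return result, {"latency_ms": round(latency_ms, 2),
                    "tags": ["demo", "custom-metric"]}
```

Note that Langfuse already records latency and token usage for traced calls; a wrapper like this is only needed for metrics the SDK doesn't capture for you.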
- **API key errors**
  - Ensure your `.env` file is in the same directory as the scripts
  - Verify keys are copied correctly from the Langfuse Cloud dashboard
  - Check that your Langfuse account is active
- **OpenAI API errors**
  - Verify your OpenAI API key is valid
  - Check that you have sufficient credits
  - Ensure you're using a supported model
- **Connection errors**
  - Check your internet connection
  - Verify the Langfuse host URL is correct
  - Ensure your firewall allows HTTPS connections
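For the key-related errors above, a quick stdlib-only sanity check can save some digging. A sketch (the `REQUIRED` names follow the keys shown in the configuration section; adjust to match your `env.example`):

```python
import os
from pathlib import Path

REQUIRED = ["LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY", "OPENAI_API_KEY"]

def check_env(env_path=".env"):
    """Return a list of human-readable problems; an empty list means all good."""
    problems = []
    if not Path(env_path).exists():
        problems.append(f"{env_path} not found")
    for key in REQUIRED:
        if not os.environ.get(key):
            problems.append(f"{key} is not set in the environment")
    return problems
```

Run it before the demos: if it reports missing keys, your `.env` either isn't being loaded or isn't in the directory you're running from.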
```bash
# Remove .env file and recreate
rm .env
cp env.example .env
# Edit .env with your actual keys
```

After exploring the demos:
- Integrate Langfuse into your own LLM applications
- Set up custom metrics and alerts
- Explore advanced features like evaluations and prompt management
- Consider upgrading to paid plans for production use
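To build intuition before wiring up the SDK in your own app, it helps to know that a trace is essentially a tree of timed spans. A toy stdlib-only model of that structure (illustrative only; the real Langfuse SDK builds and uploads this for you):

```python
import time
from contextlib import contextmanager

class ToyTrace:
    """Toy model of a trace as a list of timed spans (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.spans = []

    @contextmanager
    def span(self, name):
        start = time.perf_counter()
        record = {"name": name}
        self.spans.append(record)
        try:
            yield record
        finally:
            record["latency_ms"] = (time.perf_counter() - start) * 1000

trace = ToyTrace("rag-pipeline")
with trace.span("retrieve"):
    pass  # fetch documents here
with trace.span("generate"):
    pass  # call the LLM here
```

In Langfuse the equivalent spans are created for you by its instrumentation, and nested spans are what you see in the Trace Detail View.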
Happy tracing! 🚀