A privacy-focused, open-source tool for migrating your accumulated context from ChatGPT to other LLM platforms like Google Gemini or local models via Ollama. Break free from vendor lock-in while maintaining the personalized assistance you've built up over time.
- **Privacy-First**: All processing happens locally on your machine - no data leaves your computer
- **Multiple Target Platforms**: Export to Gemini Gems or Ollama Modelfiles
- **Intelligent Context Extraction**: Automatically identify projects, preferences, and technical expertise
- **Interactive Filtering**: Choose what context to include or exclude
- **Incremental Updates**: Keep your context current without re-exporting everything
- **Validation Testing**: Verify successful context transfer with generated test questions
- **CLI Tool**: Command-line interface for developers and power users
- **Security Features**: Encryption, sensitive data detection, and secure deletion
```
llm-context-exporter/
├── src/llm_context_exporter/
│   ├── core/                 # Core data models and processing logic
│   │   ├── models.py         # Data structures (UniversalContextPack, etc.)
│   │   ├── extractor.py      # Context extraction engine
│   │   └── filter.py         # Filtering and selection engine
│   ├── parsers/              # Platform-specific input parsers
│   │   ├── base.py           # Abstract parser interface
│   │   └── chatgpt.py        # ChatGPT export parser
│   ├── formatters/           # Platform-specific output formatters
│   │   ├── base.py           # Abstract formatter interface
│   │   ├── gemini.py         # Gemini Gems formatter
│   │   └── ollama.py         # Ollama Modelfile formatter
│   ├── validation/           # Validation test generation
│   │   └── generator.py      # Test question generator
│   ├── security/             # Privacy and security features
│   │   ├── encryption.py     # File encryption utilities
│   │   ├── detection.py      # Sensitive data detection
│   │   └── deletion.py       # Secure file deletion
│   └── cli/                  # Command-line interface
│       └── main.py           # CLI implementation
├── tests/                    # Test suite
│   ├── conftest.py           # Pytest fixtures
│   └── test_models.py        # Model tests
├── requirements.txt          # Python dependencies
├── setup.py                  # Package configuration
├── pytest.ini                # Test configuration
└── README.md                 # This file
```
Install from PyPI:

```bash
pip install llm-context-exporter
```

Or install from source:

```bash
# Clone the repository
git clone https://github.com/llm-context-exporter/llm-context-exporter.git
cd llm-context-exporter

# Install dependencies
pip install -r requirements.txt

# Install in development mode
pip install -e .
```

Or install with pipx:

```bash
pipx install llm-context-exporter
```

Requirements:

- Python: 3.10 or higher
- Operating System: Windows, macOS, or Linux
- Memory: At least 1GB RAM (more for large exports)
- Storage: 100MB for installation + space for your exports
For the Ollama target platform:

```bash
# Install Ollama (visit https://ollama.ai for platform-specific instructions)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull the Qwen model (recommended)
ollama pull qwen
```

For development:

```bash
pip install "llm-context-exporter[dev,test]"
```

Export your ChatGPT data:

- Go to ChatGPT Settings → Data Export
- Click "Export data" and wait for the email
- Download the ZIP file when ready
```bash
# Compare platforms to help you decide
llm-context-export compare
```

For Gemini (cloud-based):

```bash
llm-context-export export -i chatgpt_export.zip -t gemini -o ./gemini_output
```

For Ollama (local LLM):

```bash
llm-context-export export -i chatgpt_export.zip -t ollama -o ./ollama_output
```

Gemini (Gems):

- Go to gemini.google.com → Gem Manager (left sidebar)
- Click "New Gem"
- Paste the contents of `gemini_gem_instructions.txt` into the Instructions field
- Save and start using your personalized Gem!

Ollama: create your custom model:

```bash
ollama create my-context -f ./ollama_output/Modelfile
ollama run my-context
```

Validate the transfer:

```bash
llm-context-export validate -c ./output -t gemini --interactive
```

```bash
# Simple export to Gemini
llm-context-export export -i chatgpt_export.zip -t gemini -o ./output

# Export to Ollama with a specific model (Qwen as an example)
llm-context-export export -i chatgpt_export.zip -t ollama -m qwen -o ./output
```

```bash
# Choose what to include/exclude interactively
llm-context-export export -i chatgpt_export.zip -t gemini -o ./output --interactive
```

```bash
# Exclude specific topics and set a minimum relevance score
llm-context-export export -i chatgpt_export.zip -t ollama -o ./output \
  --exclude-topics "personal,private" --min-relevance 0.5
```

```bash
# Add new conversations to existing context
llm-context-export export -i new_export.zip -t gemini -o ./updated \
  --update ./previous/context.json
```

```bash
# Generate validation questions
llm-context-export validate -c ./output -t gemini

# Interactive validation with step-by-step testing
llm-context-export validate -c ./output -t gemini --interactive
```

```bash
# Check if your export file is compatible
llm-context-export compatibility -f chatgpt_export.zip -t ollama

# Check the Ollama installation
llm-context-export compatibility -t ollama
```

```bash
# Generate a package with only new information
llm-context-export delta -c new_export.zip -p ./old_context.json -o ./delta
```

The LLM Context Exporter uses a standardized Universal Context Pack format that is platform-agnostic and designed for maximum portability:
```json
{
  "version": "1.0.0",
  "created_at": "2024-01-15T10:30:00Z",
  "source_platform": "chatgpt",
  "user_profile": {
    "role": "Senior Software Engineer",
    "expertise_areas": ["Python", "Machine Learning", "Web Development"],
    "background_summary": "Experienced full-stack developer with ML expertise"
  },
  "projects": [
    {
      "name": "E-commerce Platform",
      "description": "Building a scalable e-commerce platform with microservices",
      "tech_stack": ["Python", "FastAPI", "React", "PostgreSQL"],
      "key_challenges": ["Performance optimization", "Payment integration"],
      "current_status": "In production",
      "relevance_score": 0.95
    }
  ],
  "preferences": {
    "coding_style": {"language": "Python", "style": "clean and readable"},
    "communication_style": "Direct and technical",
    "preferred_tools": ["VS Code", "Git", "Docker"],
    "work_patterns": {"methodology": "Agile", "testing": "TDD"}
  },
  "technical_context": {
    "languages": ["Python", "JavaScript", "SQL"],
    "frameworks": ["FastAPI", "React", "TensorFlow"],
    "tools": ["Docker", "Kubernetes", "Git"],
    "domains": ["Web Development", "Machine Learning"]
  }
}
```

- User Profile: Role, expertise areas, and background summary
- Projects: Detailed project information with tech stacks and challenges
- Preferences: Coding style, communication preferences, and work patterns
- Technical Context: Languages, frameworks, tools, and domain expertise
- Metadata: Version info, timestamps, and processing details
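As a dependency-free sketch, a pack like the JSON above can be loaded into simple dataclasses. The project itself validates these fields with Pydantic models in `core/models.py`; the field handling below (unknown keys silently dropped) is illustrative, not the shipped behavior:

```python
import json
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    description: str = ""
    tech_stack: list = field(default_factory=list)
    key_challenges: list = field(default_factory=list)
    current_status: str = ""
    relevance_score: float = 0.0

@dataclass
class UniversalContextPack:
    version: str
    source_platform: str
    user_profile: dict = field(default_factory=dict)
    projects: list = field(default_factory=list)
    preferences: dict = field(default_factory=dict)
    technical_context: dict = field(default_factory=dict)

def load_pack(raw: str) -> UniversalContextPack:
    """Parse a Universal Context Pack JSON string (sketch: extra keys are dropped)."""
    data = json.loads(raw)
    projects = [Project(**p) for p in data.pop("projects", [])]
    known = {k: v for k, v in data.items()
             if k in UniversalContextPack.__dataclass_fields__}
    return UniversalContextPack(projects=projects, **known)
```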
| Command | Description | Example |
|---|---|---|
| `export` | Export ChatGPT context to target platform | `llm-context-export export -i export.zip -t gemini -o ./output` |
| `validate` | Generate validation tests | `llm-context-export validate -c ./output -t gemini` |
| `compare` | Compare target platforms | `llm-context-export compare` |
| `delta` | Generate incremental update package | `llm-context-export delta -c new.zip -p old.json -o ./delta` |
| `compatibility` | Check platform compatibility | `llm-context-export compatibility -f export.zip -t ollama` |
| `info` | Show platform information | `llm-context-export info --verbose` |
| `examples` | Show usage examples | `llm-context-export examples` |
```
llm-context-export export [OPTIONS]

Options:
  -i, --input PATH              ChatGPT export file (ZIP or JSON) [required]
  -t, --target [gemini|ollama]  Target platform [required]
  -o, --output PATH             Output directory [required]
  -m, --model TEXT              Base model for Ollama (default: qwen)
  --interactive                 Enable interactive filtering
  --update PATH                 Previous context for incremental update
  --exclude-conversations TEXT  Comma-separated conversation IDs to exclude
  --exclude-topics TEXT         Comma-separated topics to exclude
  --min-relevance FLOAT         Minimum relevance score (0.0-1.0)
  --dry-run                     Preview without creating files
  --help                        Show help message
```

For comprehensive guides and references, see our documentation directory:
- CLI Usage Guide - Complete command-line reference with examples
- Library Usage Guide - Python library integration and API reference
- Context Schema - Universal Context Pack format specification
- Privacy Policy - Data handling and privacy practices
- Terms of Service - Usage terms and conditions
```python
from llm_context_exporter import ExportHandler, ExportConfig

# Simple export
config = ExportConfig(
    input_path="chatgpt_export.zip",
    target_platform="gemini",
    output_path="./output"
)

handler = ExportHandler()
results = handler.export(config)

if results["success"]:
    print(f"Export completed! Files: {results['output_files']}")
```

See the examples directory for comprehensive demonstrations:
- `library_usage_example.py` - Complete library integration examples
- `transfer_examples.py` - Successful vs. unsuccessful transfer scenarios
- `security_demo.py` - Security features demonstration
- `validation_demo.py` - Validation test generation
Project Context:

- ✅ "I'm building an e-commerce platform using FastAPI and React"
- ✅ "The main challenge is handling payment processing with Stripe"
- ✅ "We're using PostgreSQL for the database and Redis for caching"

Technical Preferences:

- ✅ "I prefer Python for backend development"
- ✅ "I use VS Code with the Python extension"
- ✅ "I follow TDD methodology and write tests first"

Domain Expertise:

- ✅ "I have experience with machine learning using TensorFlow"
- ✅ "I'm familiar with Docker and Kubernetes deployment"
- ✅ "I work primarily in web development and data science"

ChatGPT-Specific Features:

- ❌ "Use the web browsing feature to check the latest React docs"
- ❌ "Generate an image of a database schema"
- ❌ "Run this code in the code interpreter"

Temporal/Contextual References:

- ❌ "As we discussed earlier in this conversation..."
- ❌ "Based on the file you uploaded..."
- ❌ "Following up on yesterday's question..."

Personal/Sensitive Information:

- ⚠️ "My API key is sk-1234567890abcdef..." (will be detected and redacted)
- ⚠️ "My email is john@company.com" (will prompt for redaction)
- ⚠️ "The database password is secret123" (will be flagged)
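The detect-and-redact behavior above can be sketched with plain regular expressions. This is an illustration, not the shipped detector: the pattern names and regexes below are assumptions, while the real engine in `security/detection.py` covers 15+ sensitive-data types.

```python
import re

# Hypothetical patterns; the project's detection engine is more thorough.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"sk-[A-Za-z0-9]{16,}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def detect_sensitive(text: str) -> list:
    """Return (pattern_name, matched_value) pairs found in the text."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

def redact(text: str) -> str:
    """Replace each detected value with a [REDACTED:<type>] placeholder."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text
```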
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src/llm_context_exporter --cov-report=html

# Run specific test categories
pytest -m "not integration"  # Unit tests only
pytest -m integration        # Integration tests only
pytest -k "test_parser"      # Parser tests only
```

The project uses Hypothesis for property-based testing to help ensure correctness:

```bash
# Run property-based tests
pytest tests/test_*_properties.py

# Run with more examples
pytest --hypothesis-max-examples=1000
```
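The kind of invariant those property suites check can be sketched without Hypothesis, using randomized inputs from the standard library. `filter_by_relevance` here is a stand-in for the engine in `core/filter.py`, not its real API:

```python
import random

def filter_by_relevance(items, min_relevance):
    """Keep only items whose relevance_score meets the threshold (stand-in)."""
    return [i for i in items if i["relevance_score"] >= min_relevance]

random.seed(0)
for _ in range(1000):
    items = [{"relevance_score": random.random()}
             for _ in range(random.randrange(20))]
    threshold = random.random()
    kept = filter_by_relevance(items, threshold)
    # Property 1: nothing below the threshold survives.
    assert all(i["relevance_score"] >= threshold for i in kept)
    # Property 2: filtering is idempotent.
    assert filter_by_relevance(kept, threshold) == kept
```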
```bash
# Test compatibility of your ChatGPT export
llm-context-export compatibility -f your_export.zip

# Dry run to see what would be extracted
llm-context-export export -i your_export.zip -t gemini -o ./test --dry-run
```

The project follows a modular architecture with clear separation of concerns:
- Core Layer: Platform-agnostic data models and processing logic
- Parser Layer: Platform-specific input handling (ChatGPT, Claude, etc.)
- Formatter Layer: Platform-specific output generation (Gemini, Ollama, etc.)
- Interface Layer: CLI interface for user interaction
- Security Layer: Privacy protection and secure file handling
This design enables easy extension to support additional platforms in the future.
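The parser/formatter seam that makes this extension possible can be pictured with abstract base classes. Method names and the toy formatter below are illustrative assumptions; see `parsers/base.py` and `formatters/base.py` for the actual interfaces:

```python
from abc import ABC, abstractmethod

class BaseParser(ABC):
    """A new platform plugs in by implementing parse()."""
    @abstractmethod
    def parse(self, export_path: str) -> dict:
        """Turn a platform export into a Universal Context Pack dict."""

class BaseFormatter(ABC):
    """A new target plugs in by implementing format()."""
    @abstractmethod
    def format(self, pack: dict) -> str:
        """Render a Universal Context Pack for the target platform."""

class ToyOllamaFormatter(BaseFormatter):
    # Hypothetical, minimal rendering of a pack into a Modelfile.
    def format(self, pack: dict) -> str:
        profile = pack.get("user_profile", {})
        role = profile.get("role", "user")
        return f'FROM qwen\nSYSTEM """You assist a {role}."""'
```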
- **Local Processing**: All data processing happens on your machine
- **No Cloud Dependencies**: No data is sent to external services during processing
- **Encryption at Rest**: Context files are encrypted when saved locally
- **Sensitive Data Detection**: Automatic detection and optional redaction of PII
- **Secure Deletion**: Multi-pass overwriting of temporary files
- **Network Monitoring**: Ensures no unexpected network activity during processing
CLI Tool: Collects no data whatsoever. All processing happens locally on your machine.
- AES-256-GCM Encryption: Authenticated encryption for stored context files
- PBKDF2 Key Derivation: Secure password-based encryption keys
- Sensitive Data Patterns: Detects 15+ types of sensitive information
- Network Isolation: Monitors and prevents unexpected network calls
- Secure File Deletion: Multi-pass overwriting prevents data recovery
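The key-derivation step above can be sketched with the standard library alone. The iteration count and salt size are assumptions; the AES-256-GCM layer itself uses the `cryptography` package (its `AESGCM` primitive) and is omitted here to keep the sketch dependency-free:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes = None) -> tuple:
    """Derive a 32-byte AES-256 key from a password via PBKDF2-HMAC-SHA256.

    Returns (key, salt); store the salt alongside the ciphertext so the
    same key can be re-derived for decryption.
    """
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              600_000, dklen=32)
    return key, salt
```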
✅ Allowed:
- Export your own ChatGPT conversation data
- Use for personal context migration
- Integrate into your own projects (open source)
- Modify and distribute (subject to license)
❌ Not Allowed:
- Export other people's conversation data without permission
- Use for commercial data harvesting
- Attempt to reverse-engineer ChatGPT's algorithms
- Violate any platform's terms of service
- No Warranty: Software provided "as is" without warranty
- Platform Changes: Target platforms may change their APIs/features
- Context Quality: Results depend on your conversation content quality
- Compatibility: We can't guarantee compatibility with all export formats
- Your Responsibility: You're responsible for your data and its use
- Platform Compliance: Ensure you comply with target platform terms
- Data Accuracy: We don't guarantee perfect context extraction
We welcome contributions! Here's how to get started:
```bash
# Clone the repository
git clone https://github.com/llm-context-exporter/llm-context-exporter.git
cd llm-context-exporter

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install development dependencies
pip install -e ".[dev,test]"

# Install pre-commit hooks
pre-commit install

# Run tests to verify setup
pytest
```

- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Write tests for your changes
- Ensure all tests pass: `pytest`
- Format code: `black src tests && isort src tests`
- Commit changes: `git commit -m "Add amazing feature"`
- Push to branch: `git push origin feature/amazing-feature`
- Open a Pull Request
- New Platform Adapters: Add support for Claude, Perplexity, etc.
- Enhanced Context Extraction: Improve project and preference detection
- Documentation: Tutorials, guides, and examples
- Testing: More test coverage and edge cases
- Performance: Optimization for large exports
"Export file not found"

```bash
# Check file path and permissions
ls -la your_export.zip
llm-context-export compatibility -f your_export.zip
```

"Ollama not found"

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Verify installation
ollama --version
ollama pull qwen
```

"Permission denied"

```bash
# Check output directory permissions
mkdir -p ./output
chmod 755 ./output
```

"Context too large"

```bash
# Use filtering to reduce size
llm-context-export export -i export.zip -t gemini -o ./output \
  --min-relevance 0.7 --exclude-topics "personal,casual"
```

- Documentation: Check this README and `llm-context-export --help`
- Examples: Run `llm-context-export examples`
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- ChatGPT export parsing
- Context extraction and filtering
- Gemini Gems and Ollama formatters
- CLI interface
- Security features
- Validation testing
- Claude export support - Import from Anthropic Claude conversations
- Easy deployment - One-click hosting options for self-hosted instances
- Docker support - Simple `docker-compose up` deployment
- Advanced filtering options
- Context analytics dashboard
- Perplexity export support
- Anthropic Claude target
- OpenAI Assistant API target
- Custom LLM targets
- Batch processing
- Web interface
- Team collaboration features
- Context sharing and templates
- API for third-party integrations
- Context version control
- Multi-platform sync
- Web service stability - Improved session handling and error recovery
This project is licensed under the MIT License - see the LICENSE file for details.
- ✅ Commercial use allowed
- ✅ Modification allowed
- ✅ Distribution allowed
- ✅ Private use allowed
- ❌ No warranty provided
- ❌ No liability accepted
- OpenAI for ChatGPT and the inspiration for context portability
- Google for Gemini and the Saved Info feature
- Ollama Team for making local LLMs accessible
- Open Source Community for the amazing tools and libraries
- Beta Testers for their valuable feedback and bug reports
- Pydantic - Data validation and parsing
- Click - Command-line interface
- Rich - Beautiful terminal output
- Cryptography - Security and encryption
- Hypothesis - Property-based testing
- Reach out to me directly via GitHub DM.
- GitHub: llm-context-exporter/llm-context-exporter
- Issues: Report a Bug
- Discussions: Community Forum
Made with ❤️ for the AI community
Break free from vendor lock-in. Your context, your choice.