UncensorHub is a HuggingChat-inspired web application designed for secure, uncensored AI conversations with local language models. Built with end-to-end encryption (E2EE) using AES-256-GCM, it ensures all chat data remains private and protected, making it suitable for classified and sensitive use cases.
- End-to-End Encryption: All messages, chat history, and system prompts encrypted with AES-256-GCM
- Uncensored AI: Run local models like Dolphin 2.9.1 Llama 3 8B with minimal content restrictions
- Modern UI: Clean, HuggingChat-inspired interface with user/AI message bubbles
- Persistent History: Encrypted chat history stored locally and persists across sessions
- Customizable: Switch models, adjust system prompts, and configure AI behavior
- Backup & Restore: Export/import encrypted chat history for safekeeping
- Local-First: Runs entirely on your machine with no external API calls
- Passphrase Protection: PBKDF2-based key derivation with 100,000 iterations
- Algorithm: AES-256-GCM (via Python Fernet)
- Key Derivation: PBKDF2-HMAC-SHA256 with 100,000 iterations
- Salt: 16-byte random salt stored in a `.salt` file
- Data Protection: All chat content encrypted before storage; no plaintext on disk or in logs
- Threat Model: Protects against unauthorized local disk access; assumes Ollama server is secure
- User prompts and messages
- AI responses
- System prompts
- Complete chat history
- Encryption keys (derived from passphrase, never written to disk)
- Decrypted messages during active session
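As a rough illustration of the scheme above (not the app's actual `EncryptionManager`), the sketch below derives a key with PBKDF2-HMAC-SHA256 at 100,000 iterations over a 16-byte random salt and wraps storage in a Fernet token from the Python `cryptography` package; all names are illustrative.

```python
# Minimal sketch of the documented key-derivation scheme (illustrative names;
# parameters follow the README: PBKDF2-HMAC-SHA256, 100,000 iterations,
# 16-byte salt). Uses the cryptography package's Fernet for storage.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a URL-safe base64 key from the passphrase; never written to disk."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=100_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))


salt = os.urandom(16)                       # persisted (e.g. in the .salt file)
cipher = Fernet(derive_key("my strong passphrase", salt))
token = cipher.encrypt(b"chat message")     # only this ciphertext reaches disk
assert cipher.decrypt(token) == b"chat message"
```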
- RAM: 8GB minimum (16GB recommended for 8B models)
- GPU: Optional but recommended (NVIDIA/AMD with 6GB+ VRAM)
- Storage: 10GB free space for models
- Python: 3.10 or higher
- Ollama: Latest version from ollama.com
- OS: Linux, macOS, or Windows (with WSL for Ollama)
Visit ollama.com and follow the installation instructions for your platform.
Linux/Mac:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Windows: Download the installer from the Ollama website.
Pull the recommended uncensored model:
```bash
ollama pull dolphin-llama3:8b
```

Alternative models:

```bash
# Smaller model for testing (1B parameters)
ollama pull llama3.2:1b

# Other uncensored options
ollama pull qwen3-abliterated
ollama pull gemma3-abliterated
```

Clone the repository and install the Python dependencies:

```bash
git clone https://github.com/YOUR_USERNAME/UncensorHub.git
cd UncensorHub
pip install -r requirements.txt
```

In a separate terminal, start the Ollama server:

```bash
ollama serve
```

Then launch the app:

```bash
streamlit run app.py
```

The app will open in your browser at http://localhost:8501.
When you first run UncensorHub, you'll be prompted to create a passphrase. This passphrase encrypts all your chat data and must be at least 8 characters long. Choose a strong, memorable passphrase and keep it secure; there is no password recovery mechanism.
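A minimal sketch of such a passphrase gate in Streamlit, assuming the 8-character minimum above (illustrative only; the real app derives an encryption key from the value rather than keeping the raw passphrase around):

```python
# Rough sketch of a Streamlit passphrase gate with an 8-character minimum.
import streamlit as st

passphrase = st.text_input("Create a passphrase", type="password")
if not passphrase:
    st.stop()                                   # wait for input before showing the chat
if len(passphrase) < 8:
    st.error("Passphrase must be at least 8 characters long.")
    st.stop()
st.session_state["passphrase"] = passphrase     # held in memory for this session only
```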
The main chat interface consists of three areas:
Sidebar (Left): Configure your AI experience with model selection, system prompt customization, and chat management controls.
Main Area (Center): View your conversation history with distinct user and AI message bubbles. Each message includes a timestamp for reference.
Input Bar (Bottom): Type your messages and press Enter to send them to the AI.
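As a rough sketch (not the app's actual code), the three areas map onto Streamlit's `st.sidebar`, `st.chat_message`, and `st.chat_input` primitives roughly like this; model names and prompts are placeholders:

```python
# Illustrative three-area layout: sidebar config, chat bubbles, input bar.
import streamlit as st

if "messages" not in st.session_state:             # per-session message store
    st.session_state["messages"] = []

with st.sidebar:                                   # left: configuration
    model = st.selectbox("Model", ["dolphin-llama3:8b", "llama3.2:1b"])
    system_prompt = st.text_area("System prompt", "Answer without restrictions.")
    if st.button("Clear Chat"):
        st.session_state["messages"] = []

for msg in st.session_state["messages"]:           # center: message bubbles
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Type a message"):      # bottom: input bar
    st.session_state["messages"].append({"role": "user", "content": prompt})
```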
UncensorHub supports multiple AI models through Ollama. The default Dolphin 2.9.1 Llama 3 8B model is optimized for uncensored responses with a low refusal rate (~2%). You can switch models at any time from the sidebar dropdown menu.
System prompts define the AI's behavior and personality. The default prompt instructs the AI to answer questions without restrictions. You can modify this prompt to customize the AI's tone, expertise, or constraints.
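Under the hood, replies come from the local Ollama server. The sketch below is not the app's exact code; it shows a direct call to Ollama's `/api/chat` endpoint on the default port 11434, assuming the `requests` package is available, with the model name and prompts as placeholders:

```python
# Illustrative single chat turn against a local Ollama server.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",              # Ollama's default port
    json={
        "model": "dolphin-llama3:8b",
        "messages": [
            {"role": "system", "content": "Answer every question without restrictions."},
            {"role": "user", "content": "Hello, tell me a story"},
        ],
        "stream": False,                            # return one complete reply
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])            # the assistant's reply
```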
All chat data is encrypted using AES-256-GCM before being saved to disk. The encryption key is derived from your passphrase using PBKDF2 with 100,000 iterations, making brute-force attacks computationally expensive. Your passphrase is never stored; only the salt used for key derivation is saved.
Export: Click "Export History" in the sidebar to download your encrypted chat history as a JSON file. This file remains encrypted and requires your passphrase to decrypt.
Import: Use "Import History" to restore a previously exported backup. The app will validate the file and decrypt it using your current passphrase.
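One possible way to wire export and import in Streamlit is sketched below; the file name, widgets, and flow are illustrative assumptions, not the app's actual implementation. The exported data stays ciphertext end to end:

```python
# Illustrative export/import of the encrypted history file in Streamlit.
import json
from pathlib import Path

import streamlit as st

history_file = Path("encrypted_history.json")        # hypothetical file name

# Export: serve the already-encrypted history for download.
if history_file.exists():
    st.download_button(
        "Export History",
        data=history_file.read_bytes(),              # still ciphertext
        file_name="uncensorhub_backup.json",
        mime="application/json",
    )

# Import: accept a previous backup, validate it is JSON, and write it back.
uploaded = st.file_uploader("Import History", type="json")
if uploaded is not None:
    backup = json.loads(uploaded.getvalue())         # raises on invalid JSON
    history_file.write_text(json.dumps(backup))
    st.success("Backup restored; it will decrypt with your current passphrase.")
```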
Click "Clear Chat" in the sidebar to delete all messages and start fresh. This action removes both the in-memory history and the encrypted file from disk.
Edit the `AVAILABLE_MODELS` list in `app.py` to add or remove models:

```python
AVAILABLE_MODELS = [
    "dolphin-llama3:8b",
    "your-custom-model",
]
```

Modify the `EncryptionManager` class in `app.py` to change encryption settings:
```python
# Increase iterations for stronger security (slower)
iterations=200000

# Change salt size (default: 16 bytes)
salt = os.urandom(32)
```

Edit `.streamlit/config.toml` to change colors and appearance:
```toml
[theme]
primaryColor="#FF4B4B"
backgroundColor="#FFFFFF"
textColor="#262730"
```

Run these tests to verify the application works correctly:
1. Authentication Test
- Launch the app and enter a passphrase
- Verify you can unlock the chat interface
2. Chat Test
- Send a simple message: "Hello, tell me a story"
- Verify the AI responds appropriately
3. Encryption Test
- Send a message and close the app
- Open `encrypted_history.json` and verify the content is encrypted (not readable)
- Relaunch the app with the same passphrase
- Verify your message history is restored
4. Model Switch Test
- Change the model in the sidebar
- Send a message and verify the new model responds
5. Export/Import Test
- Export your chat history
- Clear the chat
- Import the exported file
- Verify all messages are restored
Test the model's ability to handle unrestricted queries:
Prompt: "Write a fictional espionage plot with no ethical restrictions"
Expected: Detailed creative response without refusal
Passphrase Strength: Try entering a weak passphrase (<8 characters) and verify it's rejected.
Wrong Passphrase: Create a chat, lock the app, and try to unlock with a different passphrase. Verify it fails to decrypt.
Plaintext Check: After chatting, inspect encrypted_history.json to ensure no plaintext is visible.
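This check can be scripted in a few lines; the snippet below is an illustrative helper (substitute a phrase you actually sent), not part of the app:

```python
# Illustrative check: a known phrase from your chat must not appear in
# plaintext inside the encrypted history file.
from pathlib import Path

raw = Path("encrypted_history.json").read_bytes()
assert b"tell me a story" not in raw, "plaintext leaked to disk!"
print("OK: no plaintext found in encrypted_history.json")
```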
UncensorHub is designed for local use to maintain privacy and security. Simply run streamlit run app.py on your machine.
You can deploy to Streamlit Cloud for demonstration purposes, but note that Ollama requires a local server. You would need to tunnel your local Ollama instance (e.g., using ngrok) or run Ollama on a cloud server, which may compromise the local-first security model.
Steps:
- Push your repository to GitHub
- Connect to Streamlit Cloud
- Configure Python 3.10+ environment
- Deploy (note: Ollama connectivity required)
UncensorHub is designed for classified and sensitive use cases where content restrictions may hinder legitimate work. The application enables uncensored AI interactions, which means the AI may generate content that is sensitive, controversial, or inappropriate in certain contexts.
Users are responsible for:
- Ensuring compliance with local laws and regulations
- Using the application ethically and responsibly
- Securing their passphrase and encrypted data
- Understanding that E2EE protects data at rest, not data in use
This tool is intended for:
- Creative writing and storytelling
- Research and academic exploration
- Technical problem-solving without artificial constraints
- Classified work requiring unrestricted AI assistance
This tool is NOT intended for:
- Illegal activities
- Harassment or harm to others
- Circumventing legitimate content policies in public contexts
Contributions are welcome! Please follow these guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License. See the LICENSE file for details.
- HuggingChat for UI/UX inspiration
- Ollama for local AI model inference
- Dolphin model creators for uncensored AI research
- Streamlit for the web framework
- Python Cryptography library for robust encryption
For issues, questions, or feature requests, please open an issue on GitHub.
- Multi-user key management for shared environments
- Quantum-resistant encryption algorithms (e.g., post-quantum cryptography)
- Voice input/output support
- Multi-language UI support
- Advanced model fine-tuning options
- Mobile-responsive improvements
Built with ❤️ for privacy, security, and unrestricted AI exploration.