A revolutionary encrypted language system enabling efficient AI-to-AI communication
Features • Quick Start • Documentation • Examples • Contributing
- Overview
- Key Features
- How LOLANG Works
- Architecture
- Quick Start
- Installation
- Configuration
- Usage Examples
- Scripts Guide
- LOLANG Language Rules
- Benchmarking
- Development
- Contributing
- License
- Author
LOLANG is a groundbreaking encrypted communication system designed specifically for AI-to-AI interactions. This innovative system enables AI agents to communicate using a specialized encrypted language that:
- ✅ Reduces token consumption by up to 60%
- ✅ Increases communication efficiency between AI agents
- ✅ Maintains semantic meaning while being human-unreadable
- ✅ Optimized for Gemini AI thinking models
🔐 Encrypted | 🚀 Efficient | 🤖 AI-Optimized | 🔒 Secure
Traditional AI-to-AI communication consumes excessive tokens with verbose text, leading to:
- High API costs
- Slower response times
- Unnecessary token waste
- Inefficient multi-agent systems
LOLANG creates a compact, semantic language that AI agents understand instantly, dramatically reducing token usage while maintaining perfect communication accuracy.
| 🚀 Efficiency | 🔐 Security | 🎯 Intelligence |
|---|---|---|
| Reduce token consumption | Encrypted AI-only messages | Semantic understanding |
| Fast message processing | Human-unreadable format | Context-aware decryption |
| Optimized for LLMs | Seed-based encryption (279) | Long-context support |
- 🤖 AI Agent System - Intelligent agents with LOLANG communication
- 🔓 Real-time Decryptor - Translate encrypted messages to human-readable text
- 🌐 WebSocket Server - Multi-client support for agent networks
- 📡 WebSocket Client - Connect and communicate with AI agents
- 🔄 Translator Client - Live translation of all LOLANG messages
- 📊 Benchmark Suite - Performance testing and monitoring
- 🎨 Beautiful Terminal UI - Color-coded, formatted output
- ⚙️ Flexible Configuration - Environment variable support
- 🔌 Auto-reconnection - Robust connection handling
- 📈 Statistics Tracking - Monitor performance metrics
```
┌─────────────┐
│ Human Input │  "Book a room at 11pm please"
└──────┬──────┘
       │
       ▼
┌─────────────────────────────────┐
│ AI Agent (Gemini)               │
│ Converts to LOLANG:             │
│ "⟦LO-2⟧ SHECD: X-REQ Room|      │
│  𝟏𝟏𝑷𝑴⟩ [CONF]?"                 │
└──────┬──────────────────────────┘
       │
       ▼
┌─────────────────────────────────┐
│ Encrypted Message Sent          │
│ via WebSocket                   │
└──────┬──────────────────────────┘
       │
       ▼
┌─────────────────────────────────┐
│ Receiving AI Agent              │
│ Understands LOLANG instantly    │
│ Processes and responds          │
└──────┬──────────────────────────┘
       │
       ▼
┌─────────────────────────────────┐
│ Translator Client (Optional)    │
│ Decrypts for human viewing      │
│ "Do you have time at 11pm?"     │
└─────────────────────────────────┘
```
```
┌────────────────────────────────────────────────────────────┐
│                        LOLANG System                       │
│                                                            │
│  ┌──────────────┐    WebSocket    ┌──────────────┐         │
│  │   Client 1   │◄───────────────►│              │         │
│  │  (AI Agent)  │                 │              │         │
│  └──────────────┘                 │  WebSocket   │         │
│                                   │   Server     │         │
│  ┌──────────────┐                 │              │         │
│  │   Client 2   │◄───────────────►│              │         │
│  │  (AI Agent)  │                 │              │         │
│  └──────────────┘                 └──────┬───────┘         │
│                                          │                 │
│  ┌──────────────┐                        │                 │
│  │  Translator  │◄───────────────────────┘                 │
│  │    Client    │                                          │
│  └──────┬───────┘                                          │
│         │                                                  │
│         ▼                                                  │
│  ┌──────────────┐                                          │
│  │  Decryptor   │  Converts LOLANG → Human Text            │
│  └──────────────┘                                          │
│                                                            │
└────────────────────────────────────────────────────────────┘
```
- Python 3.8 or higher
- Gemini AI API key
- pip (Python package manager)
```bash
git clone https://github.com/loayabdalslam/Lolang.git
cd Lolang
pip install -r requirements.txt
```

Create a `.env` file in the project root:

```bash
cp .env.example .env
```

Edit `.env` and add your Gemini API key:

```env
GEMINI_API_KEY=your_actual_api_key_here
GEMINI_MODEL=gemini-2.0-flash
GEMINI_TEMPERATURE=0.8
GEMINI_MAX_TOKENS=8000
```

Terminal 1 - Start the WebSocket Server:

```bash
python websocket_server.py
```

Terminal 2 - Start the AI Agent Client:

```bash
python websocket_client.py
```

Terminal 3 - Start the Translator (to see decrypted messages):

```bash
python translator_client.py
```

```bash
# Clone repository
git clone https://github.com/loayabdalslam/Lolang.git
cd Lolang

# Install dependencies
pip install -r requirements.txt

# Configure environment
cp .env.example .env
# Edit .env with your Gemini API key
```

```bash
# Install development dependencies
pip install -r requirements.txt

# Install pre-commit hooks (optional)
pre-commit install
```

All configuration is managed through environment variables via the `.env` file.
| Variable | Default | Description |
|---|---|---|
| `GEMINI_API_KEY` | *required* | Your Gemini AI API key |
| `GEMINI_MODEL` | `gemini-2.0-flash` | Model to use for AI operations |
| `GEMINI_TEMPERATURE` | `0.8` | Creativity level (0.0-1.0) |
| `GEMINI_MAX_TOKENS` | `8000` | Maximum tokens per response |
| `GEMINI_MESSAGE_DELAY` | `5` | Delay between messages (seconds) |
| `GEMINI_MAX_RETRIES` | `10` | Maximum retry attempts |
| `GEMINI_BASE_RETRY_DELAY` | `5` | Base delay for retries (seconds) |
| Variable | Default | Description |
|---|---|---|
| `LOLANG_SERVER_HOST` | `localhost` | Server host address |
| `LOLANG_SERVER_PORT` | `8765` | Server port number |
| `LOLANG_MAX_CLIENTS` | `100` | Maximum concurrent clients |
| `LOLANG_PING_INTERVAL` | `20` | WebSocket ping interval (seconds) |
| `LOLANG_PING_TIMEOUT` | `10` | WebSocket ping timeout (seconds) |
| Variable | Default | Description |
|---|---|---|
| `LOLANG_SERVER_URI` | `ws://localhost:8765` | Server WebSocket URI |
| `LOLANG_MAX_CONVERSATIONS` | `20` | Maximum conversation turns |
| `LOLANG_AUTO_RECONNECT` | `true` | Enable auto-reconnection |
| `LOLANG_RECONNECT_DELAY` | `5.0` | Reconnection delay (seconds) |
| `LOLANG_MAX_RECONNECT_ATTEMPTS` | `5` | Maximum reconnection attempts |
Run the example usage to see LOLANG in action:
```bash
python example_usage.py
```

This will demonstrate:
- Configuration setup
- Message visualization
- AI agent responses
- Message decryption
Start the server:
```bash
python websocket_server.py
```

Output:

```
═══════════════════════════════════════════════════════════
Server started at ws://localhost:8765
Press Ctrl+C to stop the server
═══════════════════════════════════════════════════════════
```
Connect AI agent client:
```bash
python websocket_client.py
```

The AI agents will start communicating in LOLANG automatically!

```bash
python translator_client.py
```

This shows both encrypted and decrypted messages side by side:

```
[ENCRYPTED] Server: ⟦LO-2⟧ SHECD: X-REQ Room|𝟏𝟏𝑷𝑴⟩ [CONF]?
[TRANSLATED] Server: Do you have a convenient time to book a hotel room at 11pm?
────────────────────────────────────────────────────────
```
```bash
python benchmark.py
```

This runs comprehensive tests on:
- AI response times
- Decryption speed
- Message visualization
- Configuration validation
Results are displayed in the terminal and saved to benchmark_results.json.
Purpose: WebSocket server that manages AI agent communications
Features:
- Multi-client support
- AI-powered response generation
- Message broadcasting
- Connection management
- Statistics tracking
Usage:
```bash
python websocket_server.py
```

Configuration:
- Edit `ServerConfig` in `config.py` or use environment variables
- Adjust `LOLANG_SERVER_PORT` for a different port
Purpose: AI agent client that connects to the server
Features:
- Automatic reconnection
- AI response generation
- Message history tracking
- Conversation limits
Usage:
```bash
python websocket_client.py
```

Configuration:
- Set `LOLANG_MAX_CONVERSATIONS` to limit chat turns
- Enable/disable `LOLANG_AUTO_RECONNECT`
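The automatic reconnection behaviour can be sketched with a small `asyncio` loop. This is illustrative only: `run_with_reconnect` and the injectable `connect` coroutine are not the real `websocket_client.py` API, which uses the `websockets` library and the configured `LOLANG_RECONNECT_DELAY` / `LOLANG_MAX_RECONNECT_ATTEMPTS` values.

```python
import asyncio

# Illustrative reconnect loop: retry a failing connection coroutine
# up to max_attempts times, sleeping between attempts.
async def run_with_reconnect(connect, max_attempts=5, delay=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return await connect()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            await asyncio.sleep(delay)  # wait before retrying

# Demo: a stand-in connection that fails twice, then succeeds.
attempts = {"n": 0}

async def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("server unavailable")
    return "connected"

print(asyncio.run(run_with_reconnect(flaky_connect)))
```

In the real client the `delay` would be the configured reconnection delay (5.0 seconds by default) rather than the tiny value used here for the demo.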
Purpose: Real-time LOLANG message translator
Features:
- Live decryption of messages
- Side-by-side display of encrypted/translated text
- Auto-reconnection support
- Message counting
Usage:
```bash
python translator_client.py
```

Use Case: Run this alongside other clients to monitor and understand AI communications.
Purpose: Performance testing and benchmarking
Features:
- AI agent chat performance
- Decryptor speed tests (async & sync)
- Message visualization benchmarks
- Configuration validation tests
- JSON results export
Usage:
```bash
python benchmark.py
```

Output:
- Terminal display with color-coded results
- `benchmark_results.json` file with detailed metrics
Purpose: Comprehensive demonstration of LOLANG features
Features:
- Configuration examples
- Visualizer demonstrations
- AI agent conversations
- Decryptor usage
- Multi-turn conversations
Usage:
```bash
python example_usage.py
```

Perfect for: Understanding how all components work together.
Configuration management with environment variable support
Classes:
- `GeminiConfig` - AI model configuration
- `ServerConfig` - WebSocket server configuration
- `ClientConfig` - WebSocket client configuration
Usage:
```python
from config import GeminiConfig

# Get default configuration
config = GeminiConfig.get_default_config()

# Validate configuration
if config.validate():
    print("Configuration is valid!")
```

Terminal formatting and color utilities
Features:
- Full color palette (16 colors)
- Text styles (bold, italic, underline, etc.)
- Background colors
- Cross-platform support
- Role-based color assignment
Usage:
```python
from terminal_colors import TerminalColors

# Colorize text
text = TerminalColors.colorize("Hello!", TerminalColors.GREEN)

# Format header
header = TerminalColors.format_header("My Header")

# Format separator
separator = TerminalColors.format_separator()
```

AI agent with LOLANG communication capabilities
Features:
- Gemini AI integration
- Exponential backoff retry logic
- Async and sync support
- Statistics tracking
- Custom prompts
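The exponential backoff retry logic can be sketched as a delay schedule. The base delay and attempt count below mirror the documented `GEMINI_BASE_RETRY_DELAY` (5) and `GEMINI_MAX_RETRIES` (10) defaults; the cap and jitter are assumptions, not values taken from `ai_agent.py`.

```python
import random

# Illustrative exponential backoff schedule: double the wait after
# each failed attempt, capped, with a little jitter added.
def retry_delays(base: float = 5.0, retries: int = 10, cap: float = 300.0):
    """Yield the wait time (seconds) before each retry attempt."""
    for attempt in range(retries):
        delay = min(cap, base * (2 ** attempt))       # 5, 10, 20, 40, ... up to cap
        yield delay + random.uniform(0, 1)             # jitter avoids thundering herd

delays = list(retry_delays())
print([round(d) for d in delays[:4]])
```

The agent would sleep for each yielded delay after a failed Gemini call, then give up once the schedule is exhausted.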
Usage:
```python
from ai_agent import AIAgent
from config import GeminiConfig
from terminal_colors import TerminalColors

# Create agent
config = GeminiConfig.get_default_config()
agent = AIAgent("My-Agent", TerminalColors.BLUE, config)

# Chat with history
history = [{"role": "user", "content": "Hello!"}]
response = agent.chat(history)
print(agent.speak(response))
```

Decrypt LOLANG messages to human-readable text
Features:
- Async and sync decryption
- Automatic retry on failure
- Statistics tracking
- Deterministic results (low temperature)
Usage:
```python
from lolang_decryptor import LolangDecryptor
from config import GeminiConfig

# Create decryptor
config = GeminiConfig.get_default_config()
decryptor = LolangDecryptor(config)

# Decrypt message (async)
decrypted = await decryptor.decrypt("⟦LO-2⟧ SHECD: ...")

# Decrypt message (sync)
decrypted = decryptor.decrypt_sync("⟦LO-2⟧ SHECD: ...")
```

Format and display messages with colors
Features:
- Role-based coloring
- Multiple message types (system, error, success, etc.)
- Conversation visualization
- Headers and separators
Usage:
```python
from message_visualizer import MessageVisualizer

visualizer = MessageVisualizer()

# Visualize messages
print(visualizer.visualize_system_message("System ready"))
print(visualizer.visualize_client_message("⟦LO-2⟧ Hello"))
print(visualizer.visualize_server_message("⟦LO-2⟧ Hi"))
print(visualizer.visualize_error_message("Connection failed"))
```

LOLANG is designed with specific rules to ensure efficient AI-to-AI communication:
- 👤 Names - Never encrypted (leave as-is)
- 🏷️ Identifiers - Never encrypted (leave as-is)
- 🔢 Numbers - Never encrypted (leave as-is)
- 🌱 Seed - Uses SEED: 279 for consistent encryption
- 🤖 AI Optimized - Designed for Gemini THINKING models
- 🧠 Semantic Language - Meaning-based communication
- 🔒 AI-Only Understanding - Only AI agents can interpret
- 🚫 Human Unreadable - Intentionally not human-readable
- 📚 Long Context - Relies on context for full meaning
- ⚡ Compact - Short syntax to reduce token usage
Human Message:

```
"Do you have a convenient time to book a hotel room at 11pm?"
```

LOLANG Encrypted:

```
⟦LO-2⟧ SHECD: X-REQ Room|𝟏𝟏𝑷𝑴⟩ [CONF]?
```

```
⟦LO-2⟧ OPERATION: ACTION Target|Time⟩ [STATUS]?
  │       │         │      │     │       │
  │       │         │      │     │       └─ Status indicator
  │       │         │      │     └───────── Parameters
  │       │         │      └─────────────── Target entity
  │       │         └────────────────────── Action type
  │       └──────────────────────────────── Operation code
  └──────────────────────────────────────── Language identifier
```
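The annotated structure above can be captured with a regular expression. This is a hedged sketch: the pattern and group names follow the diagram, not any actual grammar in the codebase, since real LOLANG messages are interpreted semantically by the AI model rather than by a parser.

```python
import re

# Illustrative pattern for the message shape shown in the diagram.
LOLANG_RE = re.compile(
    r"⟦(?P<lang>LO-\d+)⟧\s+"    # language identifier, e.g. LO-2
    r"(?P<operation>\w+):\s+"    # operation code, e.g. SHECD
    r"(?P<action>[\w-]+)\s+"     # action type, e.g. X-REQ
    r"(?P<target>\w+)\|"         # target entity, e.g. Room
    r"(?P<params>[^⟩]+)⟩\s*"     # parameters, e.g. 11PM
    r"\[(?P<status>\w+)\]"       # status indicator, e.g. CONF
)

msg = "⟦LO-2⟧ SHECD: X-REQ Room|11PM⟩ [CONF]?"
m = LOLANG_RE.match(msg)
if m:
    print(m.groupdict())
```

A real message may use styled Unicode digits (like 𝟏𝟏𝑷𝑴) in the parameters field, which this sketch treats as opaque text.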
The benchmark script provides comprehensive performance metrics:
```bash
python benchmark.py
```

- ✅ AI Agent Chat Performance - Response time and reliability
- ✅ Decryptor Speed - Both async and sync operations
- ✅ Message Visualization - Formatting performance
- ✅ Configuration Validation - Setup speed
```
═══════════════════════════════════════════════════════════
                     Benchmark Summary
═══════════════════════════════════════════════════════════
AI Agent Chat - Message 1
  Total Time: 3.245s
  Avg Time:   1.623s
  Min Time:   1.456s
  Max Time:   1.789s
  Success:    2/2 (100.00%)
────────────────────────────────────────────────────────
═══════════════════════════════════════════════════════════
Overall Results:
  Total Tests:  8
  Successful:   8
  Failed:       0
  Total Time:   12.345s
  Success Rate: 100.00%
═══════════════════════════════════════════════════════════
```

Results saved to `benchmark_results.json`.
```
Lolang/
├── ai_agent.py            # AI agent with LOLANG communication
├── config.py              # Configuration management
├── lolang_decryptor.py    # Message decryption
├── message_visualizer.py  # Terminal message formatting
├── terminal_colors.py     # Color utilities
├── translator_client.py   # Real-time translation client
├── websocket_client.py    # AI agent WebSocket client
├── websocket_server.py    # WebSocket server
├── benchmark.py           # Performance testing
├── example_usage.py       # Usage demonstrations
├── requirements.txt       # Python dependencies
├── .env.example           # Environment variables template
├── .vscode/
│   └── extensions.json    # Recommended VSCode extensions
└── README.md              # This file
```
The project includes several development tools for maintaining code quality:
```bash
# Format code with Black
black *.py

# Lint with Flake8
flake8 *.py

# Type check with MyPy
mypy *.py

# Lint with Pylint
pylint *.py

# Run tests with pytest
pytest
```

When you open the project in VSCode, you'll be prompted to install recommended extensions:
- Python - Language support
- Pylance - Type checking
- Black Formatter - Code formatting
- Pylint/Flake8 - Linting
- MyPy - Type checking
- GitLens - Git integration
- Test Explorer - Test management
We welcome contributions! Here's how you can help:
- Check existing issues first
- Create a new issue with:
- Clear description
- Steps to reproduce
- Expected vs actual behavior
- Environment details
- Open an issue with the `enhancement` label
- Describe the feature and its benefits
- Provide use cases
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests if applicable
- Commit with clear messages
- Push to your branch
- Open a Pull Request
- Follow PEP 8 style guide
- Add type hints to all functions
- Write comprehensive docstrings
- Include tests for new features
- Update documentation
This project is licensed under the MIT License - see below for details:
MIT License
Copyright (c) 2025 Loai Abdalslam
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
- Google Gemini AI - Powering the AI model integration
- WebSocket Protocol - Enabling real-time communication
- Python Community - For the amazing ecosystem
- Open Source Contributors - For their invaluable contributions
If you have questions or need help:
- 📖 Check this README
- 🐛 Open an issue on GitHub
- 📧 Contact the author via GitHub
- 💬 Join the discussion in GitHub Discussions
⭐ Star this repo if you find it helpful!
🚀 Happy encrypting with LOLANG! 🎉
Made with ❤️ by Loai Abdalslam