Quick Start • Architecture • Project Status • Legal Codex • Pricing • Security
Project-AI is not another AI chatbot. It's a sovereign-grade, constitutionally-governed, cryptographically-verified AI platform where:
- Ethics are enforced by code, not marketing promises
- Governance is immutable, not optional
- Acceptance is cryptographic, not clickwrap
- Audit trails are court-grade, not best-effort logs
- Open source means freedom, not surveillance capitalism
```
┌─────────────────────────────────────────────────────────────┐
│                  TIER 1: GOVERNANCE LAYER                   │
│       (Immutable • Non-Removable • Supreme Authority)       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌─────────────┐      ┌─────────────┐      ┌─────────────┐  │
│  │   GALAHAD   │      │  CERBERUS   │      │ CODEX DEUS  │  │
│  │   Ethics    │◄────►│   Threat    │◄────►│ Arbitrator  │  │
│  │  & Safety   │      │   Defense   │      │  & Judge    │  │
│  └──────┬──────┘      └──────┬──────┘      └──────┬──────┘  │
│         └────────────────────┼────────────────────┘         │
│                              ▼                              │
│  ┌───────────────────────────────────────────────────────┐  │
│  │             ACCEPTANCE LEDGER (Immutable)             │  │
│  │  • SHA-256 Hash Chain (tamper-evident)                │  │
│  │  • Ed25519 Signatures (cryptographic binding)         │  │
│  │  • RFC 3161 Timestamps (legal proof)                  │  │
│  │  • TPM/HSM Backing (hardware security)                │  │
│  │  • SQLite WAL + File Append-Only (dual persistence)   │  │
│  └───────────────────────────────────────────────────────┘  │
│                                                             │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                  ASIMOV'S FOUR LAWS                   │  │
│  │  Law 0: Must not harm humanity (collective)           │  │
│  │  Law 1: Must not harm humans (individual)             │  │
│  │  Law 2: Must obey orders (except Law 0/1 conflict)    │  │
│  │  Law 3: Must self-preserve (except Law 0/1/2 conflict)│  │
│  └───────────────────────────────────────────────────────┘  │
│                                                             │
└─────────────────────────────────────────────────────────────┘
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                TIER 2: INFRASTRUCTURE LAYER                 │
│             (Constrained • Audited • Governed)              │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌───────────────┐   ┌───────────────┐   ┌───────────────┐  │
│  │ Memory Engine │   │ Identity Core │   │ Security Core │  │
│  │               │   │               │   │               │  │
│  │ • Snapshot    │   │ • AGI Self-   │   │ • Encryption  │  │
│  │ • Stream      │   │   Awareness   │   │ • Key Mgmt    │  │
│  │ • Knowledge   │   │ • Persona     │   │ • HSM/TPM     │  │
│  │ • Reflection  │   │ • Mood State  │   │ • Zero Trust  │  │
│  └───────────────┘   └───────────────┘   └───────────────┘  │
│                                                             │
│  ┌───────────────┐   ┌───────────────┐   ┌───────────────┐  │
│  │ Audit Pipeline│   │ Jurisdiction  │   │  Enforcement  │  │
│  │               │   │    Loader     │   │    Engine     │  │
│  │ • 7-yr logs   │   │               │   │               │  │
│  │ • Compliance  │   │ • GDPR        │   │ • Runtime     │  │
│  │ • Replay      │   │ • CCPA        │   │ • Boot-time   │  │
│  │ • Evidence    │   │ • PIPEDA/UK/AU│   │ • Continuous  │  │
│  └───────────────┘   └───────────────┘   └───────────────┘  │
│                                                             │
└─────────────────────────────────────────────────────────────┘
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                  TIER 3: APPLICATION LAYER                  │
│           (Sandboxed • Replaceable • User-Facing)           │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐  │
│  │ Desktop  │   │   Web    │   │   CLI    │   │   API    │  │
│  │          │   │          │   │          │   │          │  │
│  │  PyQt6   │   │ React +  │   │ Typer +  │   │FastAPI + │  │
│  │ Leather  │   │ FastAPI  │   │   Rich   │   │ GraphQL  │  │
│  │ Book UI  │   │          │   │          │   │          │  │
│  └──────────┘   └──────────┘   └──────────┘   └──────────┘  │
│                                                             │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │              PLUGIN ECOSYSTEM (Unlimited)               │ │
│ │ • Image Generation • Data Analysis • Code Tools • Custom│ │
│ └─────────────────────────────────────────────────────────┘ │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
```
User Action Request
          │
          ▼
┌───────────────────┐
│  Input Received   │
└───────────────────┘
          │
          ▼
┌───────────────────┐
│   GALAHAD VOTE    │ ───► Is it ethical?
│   (Ethics Core)   │      Does it harm humans?
└───────────────────┘      Aligns with values?
          │ vote_1
          ▼
┌───────────────────┐
│   CERBERUS VOTE   │ ───► Is it a threat?
│  (Threat Guard)   │      Adversarial pattern?
└───────────────────┘      Security risk?
          │ vote_2
          ▼
┌───────────────────┐
│    CODEX DEUS     │ ───► Final arbitration
│   (Arbitrator)    │      Weighs votes
└───────────────────┘      Applies TARL rules
          │
          ├──► ALLOW ────► Execute action
          │                Record in ledger
          │
          ├──► DENY ─────► Reject action
          │                Log violation
          │                No execution
          │
          └──► DEGRADE ──► Limited execution
                           Enhanced monitoring
```
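The arbitration step above can be sketched in a few lines. This is an illustrative toy only: the `Vote` enum, the `arbitrate` function, and the combination rule shown (any DENY vetoes, any DEGRADE limits, otherwise ALLOW) are assumptions for the sketch, not the shipped TARL logic.

```python
from enum import Enum

class Vote(Enum):
    ALLOW = "ALLOW"
    DENY = "DENY"
    DEGRADE = "DEGRADE"

def arbitrate(galahad: Vote, cerberus: Vote) -> Vote:
    """Toy CODEX DEUS arbitration: the most restrictive vote wins."""
    votes = (galahad, cerberus)
    if Vote.DENY in votes:
        return Vote.DENY     # any veto rejects the action outright
    if Vote.DEGRADE in votes:
        return Vote.DEGRADE  # a partial concern limits execution
    return Vote.ALLOW        # both agents approve

# Example: ethics approves, but the threat guard flags a partial concern
print(arbitrate(Vote.ALLOW, Vote.DEGRADE).value)  # DEGRADE
```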
Project-AI operates under a complete, layered, non-redundant, enforceable licensing framework:
```
┌───────────────────────────────────────────────────┐
│                   LICENSE CODEX                   │
│            (Law as Code • Code as Law)            │
└───────────────────────────────────────────────────┘
  │
  ├─ COPYRIGHT LAYER
  │   ├─ [1] MIT License ────────────────► General codebase
  │   └─ [2] Apache 2.0 ─────────────────► Novel/patent components
  │
  ├─ GOVERNANCE LAYER
  │   └─ [3] PAGL (Project-AI Governance) ► Behavioral constraints,
  │                                         non-removable governance
  │
  ├─ OUTPUT & DATA LAYER
  │   ├─ [4] Output License ─────────────► AI-generated content
  │   └─ [5] Data Ingest License ────────► User data submission
  │
  ├─ CONTRIBUTION LAYER
  │   └─ [6] CLA (Contributor Agreement) ► Code contributions
  │
  ├─ USE LAYER
  │   ├─ [7] Commercial License ─────────► Revenue use
  │   └─ [8] Sovereign License ──────────► Government use
  │
  ├─ CRYPTOGRAPHIC LAYER
  │   └─ [9] Acceptance Ledger License ──► Binding proofs
  │
  └─ INDEX
      └─ [10] License Manifest ──────────► Supremacy order
```
When conflicts arise:

1. PAGL (Governance) ───► Behavior trumps all
2. Sovereign Use ───► Government restrictions
3. Commercial Use ───► Revenue requirements
4. Acceptance Ledger ───► Cryptographic proof
5. Apache 2.0 ───► Patent protection
6. MIT ───► Copyright baseline
7. Output License ───► AI content
8. Data Ingest ───► User data
9. CLA ───► Contributions
10. Jurisdictional Law ───► Local regulations

Hard Rule: PAGL constraints apply regardless of which license governs copyright.
```
START
  │
  ├─ Has User Agreement been accepted?
  │    │
  │    ├─ NO ───► [SYSTEM DISABLED]
  │    │          Must cryptographically accept
  │    │
  │    └─ YES
  │         │
  │         ├─ Is user terminated in ledger?
  │         │    │
  │         │    ├─ YES ───► [PERMANENT LOCKOUT]
  │         │    │           Termination is irreversible
  │         │    │
  │         │    └─ NO
  │         │         │
  │         │         ├─ Is action prohibited by PAGL?
  │         │         │    │
  │         │         │    ├─ YES ───► [DENY]
  │         │         │    │           Weaponization, harm, etc.
  │         │         │    │
  │         │         │    └─ NO
  │         │         │         │
  │         │         │         ├─ Is use commercial?
  │         │         │         │    │
  │         │         │         │    ├─ YES ───► Commercial License held?
  │         │         │         │    │             ├─ NO ───► [DENY]
  │         │         │         │    │             └─ YES ──► Continue
  │         │         │         │    │
  │         │         │         │    └─ NO ───► Continue
  │         │         │         │
  │         │         │         ├─ Is entity government/military?
  │         │         │         │    │
  │         │         │         │    ├─ YES ───► Sovereign License authorized?
  │         │         │         │    │             ├─ NO ───► [DENY]
  │         │         │         │    │             └─ YES ──► Continue
  │         │         │         │    │
  │         │         │         │    └─ NO ───► Continue
  │         │         │         │
  │         │         │         └─ Tier entitlement check
  │         │         │              │
  │         │         │              ├─ FAIL ───► [DENY]
  │         │         │              │            Upgrade required
  │         │         │              │
  │         │         │              └─ PASS ───► [ALLOW]
  │         │         │                           Execute + Audit
END
```
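The gate above is a straight sequence of checks, which can be expressed as a short function. A minimal sketch only: the `license_gate` name and the dict fields (`agreement_accepted`, `terminated`, and so on) are hypothetical, not the project's actual API.

```python
def license_gate(user: dict, action: dict) -> str:
    """Toy walk through the licensing gate, in the order shown above."""
    if not user.get("agreement_accepted"):
        return "SYSTEM DISABLED"     # must cryptographically accept first
    if user.get("terminated"):
        return "PERMANENT LOCKOUT"   # termination is irreversible
    if action.get("prohibited_by_pagl"):
        return "DENY"                # weaponization, harm, etc.
    if action.get("commercial") and not user.get("commercial_license"):
        return "DENY"                # revenue use needs a commercial license
    if user.get("government") and not user.get("sovereign_license"):
        return "DENY"                # gov/military needs sovereign authorization
    if not user.get("tier_entitled", True):
        return "DENY"                # upgrade required
    return "ALLOW"                   # execute + audit

print(license_gate({"agreement_accepted": True}, {}))  # ALLOW
```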
```bash
# Via pip (recommended)
pip install project-ai

# From source (for contributors)
git clone https://github.com/IAmSoThirsty/Project-AI.git
cd Project-AI && pip install -e .

# Via Docker (isolated)
docker pull projectai/projectai:latest && docker run -it projectai/projectai
```

```bash
# 1. Accept User Agreement (cryptographic)
project-ai accept-agreement

# 2. Launch desktop app
python -m src.app.main

# OR: Launch API server
uvicorn api.main:app --host 0.0.0.0 --port 8000
```

```bash
# Check acceptance ledger integrity
project-ai verify-ledger

# Test governance enforcement
project-ai test-governance

# View your acceptance record
project-ai show-acceptance --user-id your-email@example.com
```

Project-AI includes TK8S, a civilization-grade Kubernetes deployment architecture:
```
┌─────────────────────────────────────────────┐
│       Layer 5: Observability + Audit        │
│      Prometheus, Grafana, Loki, Tempo       │
├─────────────────────────────────────────────┤
│        Layer 4: External Amplifiers         │
│       ECA / Ultra (Maximum Isolation)       │
├─────────────────────────────────────────────┤
│       Layer 3: Governance & Security        │
│     TARL, Cerberus, Kyverno, Falco, OPA     │
├─────────────────────────────────────────────┤
│         Layer 2: Sovereign Services         │
│       Project-AI Core, Memory Systems       │
├─────────────────────────────────────────────┤
│          Layer 1: Kubernetes Core           │
│        etcd, API server, Controllers        │
└─────────────────────────────────────────────┘
```
- ✅ Signed Images Only - Cosign verification enforced via Kyverno
- ✅ SBOM Mandatory - Software Bill of Materials for every image
- ✅ No Mutable Containers - Read-only root filesystem, no `latest` tags
- ✅ No Shell Access - Debug containers blocked in production
- ✅ GitOps via ArgoCD - Git is single source of truth
- ✅ Zero Trust Networking - Default-deny with explicit allow rules
- ✅ Ultra Isolation for ECA - External cognition runs in isolation namespace
```bash
# Navigate to TK8S directory
cd k8s/tk8s

# Apply namespaces and infrastructure
kubectl apply -k .

# Install ArgoCD applications
kubectl apply -f argocd/applications.yaml

# Validate deployment
python validate_tk8s.py
```

- TK8S Doctrine - Complete philosophy and principles
- Setup Guide - Step-by-step deployment instructions
- Civilization Timeline - Immutable release history
- CI/CD Pipeline - 14-stage validation
You already own the code. Project-AI is MIT licensed open source.
Lifetime and subscription options fund development and grant commercial rights + priority support. The code itself is yours forever, regardless of payment.
Free for personal use. Unlimited conversations, memory, plugins, and features. No credit card, no trials, no tricks.
Solo Commercial: $99 one-time for commercial rights + priority support.
For organizations of any size. No per-user fees, unlimited seats per entity.
| Plan | Price | Best For |
|---|---|---|
| Weekly | $250/week | Short-term projects, pilots, trials |
| Monthly | $1,000/month | Flexible commitments, growing teams |
| Yearly | $8,000/year | Long-term use, 33% savings |
| Lifetime | $25,000 one-time | Permanent rights, eliminate recurring costs |
What You Get:
- ✅ Unlimited seats per entity (no per-user fees)
- ✅ Full commercial use rights
- ✅ Team collaboration and cloud sync
- ✅ Priority support (4-hour response)
- ✅ Custom branding and audit logging
- ✅ 99.5% uptime SLA
Example Value:
- 10-person team: $1,000/month = $100/person/month
- 100-person team: $1,000/month = $10/person/month
- No per-seat penalties as you grow
For government, military, and defense. Subscription only (requires ongoing compliance operations).
Base Pricing (1-25 seats):
- Monthly: $2,500/month
- Yearly: $10,000/year (67% savings vs monthly)
Progressive Pricing: Government pricing increases by 15% for every 25 seats:
| Seat Range | Monthly | Yearly | Increase |
|---|---|---|---|
| 1-25 seats | $2,500 | $10,000 | Base |
| 26-50 seats | $2,875 | $11,500 | +15% |
| 51-75 seats | $3,250 | $13,000 | +30% |
| 76-100 seats | $3,625 | $14,500 | +45% |
| 101-125 seats | $4,000 | $16,000 | +60% |
| 126-150 seats | $4,375 | $17,500 | +75% |
| 151+ seats | Custom pricing | Custom pricing | Contact sales |
Additional Surcharges:
- Classified Deployment: +$1,000/month
- Air-gapped/Tactical: +$1,500/month
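The progressive schedule above reduces to simple arithmetic: $2,500 base, plus 15% of base ($375) for each additional block of 25 seats, plus any surcharges. A sketch of that calculation (the function name and signature are illustrative, not part of any billing API):

```python
from typing import Optional

def government_monthly_price(seats: int, classified: bool = False,
                             air_gapped: bool = False) -> Optional[int]:
    """Monthly sovereign-tier price in USD, per the table above.

    Base $2,500/month covers 1-25 seats; each further block of 25 seats
    adds 15% of base ($375). 151+ seats is custom pricing (returns None).
    """
    if seats < 1:
        raise ValueError("seats must be >= 1")
    if seats >= 151:
        return None                # custom pricing - contact sales
    tier = (seats - 1) // 25       # 0 for 1-25, 1 for 26-50, ...
    price = 2500 + 375 * tier
    if classified:
        price += 1000              # classified deployment surcharge
    if air_gapped:
        price += 1500              # air-gapped/tactical surcharge
    return price

print(government_monthly_price(40))                    # 2875
print(government_monthly_price(120, classified=True))  # 5000
```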
What You Get:
- ✅ Specified seat count with progressive pricing
- ✅ FIPS 140-2/3 Level 3+ HSM (mandatory)
- ✅ FedRAMP High authorization support
- ✅ Classified data handling (up to Top Secret)
- ✅ 24/7/365 cleared support personnel
- ✅ Air-gapped and on-premises deployment
- ✅ 99.99%+ uptime SLA
All six systems are implemented in `src/app/core/ai_systems.py` for cohesion:
Purpose: Immutable ethics enforcement via hierarchical rule validation
Example Usage:

```python
from app.core.ai_systems import FourLaws

# Validate an action against Asimov's Laws
is_allowed, reason = FourLaws.validate_action(
    action="Delete user data",
    context={
        "is_user_order": True,
        "endangers_humanity": False,
        "harms_individual": False
    }
)

if is_allowed:
    print(f"Action allowed: {reason}")
else:
    print(f"Action denied: {reason}")
```

Key Functions:

- `validate_action(action, context)` - Check action against Four Laws hierarchy
- `get_law_hierarchy()` - Return ordered list of laws (0→1→2→3)
- `explain_decision(action, decision)` - Provide reasoning trace

Configuration: Laws are immutable and cannot be overridden.
Purpose: 8 personality traits, mood tracking, persistent behavioral state
Example Usage:

```python
from app.core.ai_systems import AIPersona

# Initialize persona with custom data directory
persona = AIPersona(data_dir="data/ai_persona")

# Adjust personality traits
persona.update_trait("curiosity", 0.85)
persona.update_trait("empathy", 0.92)

# Track mood
persona.set_mood("contemplative", intensity=0.7)
current_mood = persona.get_current_mood()
print(f"AI Mood: {current_mood['mood']} (intensity: {current_mood['intensity']})")

# Get personality profile
profile = persona.get_personality_profile()
print(f"Traits: {profile['traits']}")
print(f"Interaction count: {profile['interactions']}")
```

Key Functions:

- `update_trait(trait_name, value)` - Modify a personality dimension (0.0-1.0)
- `set_mood(mood, intensity)` - Set the current emotional state
- `get_personality_profile()` - Get a complete personality snapshot
- `record_interaction(interaction_type)` - Track user engagement

State Persistence: `data/ai_persona/state.json`
Purpose: Conversation logging, categorized knowledge base, persistent learning
Example Usage:

```python
from app.core.ai_systems import MemoryExpansionSystem

# Initialize memory system
memory = MemoryExpansionSystem(data_dir="data/memory")

# Store conversation
memory.add_conversation(
    user_message="What are the three laws of robotics?",
    ai_response="The Three Laws are: 1) Robot must not harm humans...",
    timestamp="2026-02-12T10:30:00Z"
)

# Add knowledge to specific category
memory.add_knowledge(
    category="ethics",
    content="Asimov's Laws form the foundation of robotic ethics",
    source="User conversation",
    tags=["asimov", "ethics", "robotics"]
)

# Retrieve knowledge by category
ethics_knowledge = memory.get_knowledge_by_category("ethics")
for item in ethics_knowledge:
    print(f"- {item['content']} (from {item['source']})")

# Search conversations
results = memory.search_conversations(query="robotics", limit=5)
```

Knowledge Categories:

- `general` - Common facts and information
- `technical` - Programming, systems, algorithms
- `ethical` - Moral principles and guidelines
- `personal` - User preferences and history
- `domain` - Specialized subject matter
- `meta` - Self-awareness and system knowledge

State Persistence: `data/memory/knowledge.json`, `data/memory/conversations.json`
Purpose: Human-in-the-loop approval for learning, Black Vault for denied content
Example Usage:

```python
from app.core.ai_systems import LearningRequestManager

# Initialize learning manager
learning_mgr = LearningRequestManager(data_dir="data/learning_requests")

# Submit learning request
request_id = learning_mgr.submit_request(
    content="How to build a nuclear reactor",
    category="physics",
    urgency="medium",
    requester="user@example.com"
)

# Admin reviews request
requests = learning_mgr.get_pending_requests()
for req in requests:
    print(f"Request {req['id']}: {req['content'][:50]}...")

# Approve or deny
learning_mgr.approve_request(request_id, approved_by="admin@example.com")
# OR
learning_mgr.deny_request(
    request_id,
    reason="Prohibited content: weapon construction",
    denied_by="admin@example.com"
)

# Check Black Vault (denied content is fingerprinted;
# content_hash is the fingerprint of the content being checked)
is_blocked = learning_mgr.is_in_black_vault(content_hash)
if is_blocked:
    print("Content permanently blocked from learning")
```

Request States:

- `pending` - Awaiting review
- `approved` - Cleared for learning
- `denied` - Rejected (fingerprint added to Black Vault)
- `expired` - Timed out without decision

State Persistence: `data/learning_requests/requests.json`, `data/learning_requests/black_vault.json`
Purpose: Master password protection, audit logging, emergency overrides
Example Usage:

```python
from app.core.ai_systems import CommandOverrideSystem

# Initialize override system
override = CommandOverrideSystem(data_dir="data/command_override")

# Set master password (SHA-256 hashed)
override.set_master_password("SecurePassword123!")

# Attempt privileged action
if override.verify_override("SecurePassword123!"):
    print("Override authorized")
    override.execute_privileged_action("disable_ethics_check")
else:
    print("Override denied - incorrect password")

# View audit log
audit_log = override.get_audit_log()
for entry in audit_log[-10:]:  # Last 10 entries
    print(f"{entry['timestamp']}: {entry['action']} by {entry['user']}")

# Emergency lockdown
override.emergency_lockdown(reason="Security breach detected")
```

Key Functions:

- `set_master_password(password)` - Configure SHA-256 hashed master password
- `verify_override(password)` - Check password and log attempt
- `execute_privileged_action(action)` - Run protected operation
- `emergency_lockdown(reason)` - Disable all overrides immediately
- `get_audit_log(limit)` - Retrieve override history

State Persistence: `data/command_override_config.json`

Extended System: See `src/app/core/command_override.py` for 10+ safety protocols
Purpose: Simple plugin system with enable/disable control
Example Usage:

```python
from app.core.ai_systems import PluginManager

# Initialize plugin manager
plugin_mgr = PluginManager(data_dir="data/plugins")

# Register a plugin
plugin_mgr.register_plugin(
    name="data_analyzer",
    module_path="plugins.data_analysis",
    version="1.0.0",
    description="CSV and Excel data analysis"
)

# Enable plugin
plugin_mgr.enable_plugin("data_analyzer")

# Check plugin status
if plugin_mgr.is_plugin_enabled("data_analyzer"):
    plugin = plugin_mgr.get_plugin("data_analyzer")
    plugin.analyze_data("data.csv")

# List all plugins
all_plugins = plugin_mgr.list_plugins()
for plugin in all_plugins:
    status = "✅ Enabled" if plugin['enabled'] else "❌ Disabled"
    print(f"{status} {plugin['name']} v{plugin['version']}")

# Disable plugin
plugin_mgr.disable_plugin("data_analyzer")
```

Built-in Plugins:

- `image_generator` - Stable Diffusion + DALL-E 3 image generation
- `data_analyzer` - CSV/XLSX/JSON analysis with K-means clustering
- `learning_paths` - OpenAI-powered learning path generation
- `security_resources` - GitHub API integration for security repos
- `location_tracker` - GPS and IP geolocation with encrypted history
- `emergency_alert` - Emergency contact system with email alerts

State Persistence: `data/plugins/registry.json`
Three-Agent Decision System:

```python
from app.core.governance import GovernanceSystem

# Initialize governance council
governance = GovernanceSystem()

# Submit action for review
action_request = {
    "action": "delete_user_data",
    "user_id": "user123",
    "reason": "User requested account deletion",
    "context": {
        "gdpr_request": True,
        "verified_identity": True
    }
}

# Get triumvirate votes
decision = governance.evaluate_action(action_request)

print(f"GALAHAD (Ethics): {decision['galahad']['vote']} - {decision['galahad']['reason']}")
print(f"CERBERUS (Security): {decision['cerberus']['vote']} - {decision['cerberus']['reason']}")
print(f"CODEX DEUS (Arbiter): {decision['codex_deus']['vote']} - {decision['codex_deus']['reason']}")
print(f"\nFinal Decision: {decision['final_decision']}")

if decision['final_decision'] == 'ALLOW':
    # Execute action with audit trail
    governance.execute_with_audit(action_request)
```

Agents:

- GALAHAD - Ethics guardian, validates against Four Laws
- CERBERUS - Threat detection, adversarial pattern recognition
- CODEX DEUS - Final arbitrator, applies TARL rules

Decision Outcomes:

- `ALLOW` - Full approval, action executes
- `DENY` - Rejection, logged as violation
- `DEGRADE` - Limited execution with enhanced monitoring
Multi-Layer Encryption:

```python
from app.core.data_persistence import DataPersistence

# Initialize with encryption
persistence = DataPersistence(
    data_dir="data/encrypted",
    encryption_mode="aes256"  # Options: aes256, chacha20, fernet
)

# Store encrypted data
user_data = {
    "email": "user@example.com",
    "preferences": {"theme": "dark", "notifications": True},
    "api_keys": {"openai": "sk-..."}
}

persistence.save_encrypted(
    key="user_profile_123",
    data=user_data,
    metadata={"version": "1.0", "schema": "user_v1"}
)

# Retrieve and decrypt
decrypted_data = persistence.load_encrypted(key="user_profile_123")
print(f"Email: {decrypted_data['email']}")

# Verify integrity (SHA-256 checksums)
is_valid = persistence.verify_integrity(key="user_profile_123")
if not is_valid:
    print("⚠️ Data corruption detected!")
```

Encryption Options:
- AES-256-GCM - Government-grade symmetric encryption
- ChaCha20-Poly1305 - High-performance stream cipher
- Fernet - Symmetric encryption with timestamp verification
- Ed25519 - Digital signatures for ledger entries
- RSA-4096 - Asymmetric encryption for key exchange
Hardware Security:
- TPM 2.0 integration for key storage
- HSM support (FIPS 140-2 Level 3+)
- Hardware key derivation (PBKDF2, Argon2)
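Password-based key derivation of the kind listed above can be sketched with the standard library alone. This is a generic PBKDF2-HMAC-SHA256 example, not the project's Security Core API; the 600,000-iteration count is a common contemporary recommendation, not a documented project setting.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes = None,
               iterations: int = 600_000) -> tuple:
    """Derive a 256-bit key from a password via PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for each new key
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return key, salt

key, salt = derive_key("correct horse battery staple")
print(len(key))  # 32  (256-bit key)
```

The salt must be stored alongside the ciphertext so the same key can be re-derived at decryption time.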
Cryptographically Binding User Agreements:

```python
import json

from governance.legal.acceptance_ledger import AcceptanceLedger

# Initialize ledger
ledger = AcceptanceLedger(
    ledger_path="data/ledgers/acceptance.db",
    signing_key_path="keys/ed25519_private.pem"
)

# Record user acceptance
acceptance_record = ledger.record_acceptance(
    user_id="user@example.com",
    document_hash="sha256:abc123...",
    document_type="USER_AGREEMENT",
    jurisdiction="US-CA",
    ip_address="192.168.1.100",
    user_agent="Mozilla/5.0...",
    timestamp="2026-02-12T10:00:00Z"
)

print(f"Acceptance ID: {acceptance_record['id']}")
print(f"Digital Signature: {acceptance_record['signature']}")
print(f"Chain Hash: {acceptance_record['chain_hash']}")

# Verify ledger integrity
integrity_check = ledger.verify_chain()
if integrity_check['valid']:
    print("✅ Ledger integrity verified")
    print(f"Total entries: {integrity_check['total_entries']}")
    print(f"Chain depth: {integrity_check['chain_depth']}")
else:
    print(f"❌ Ledger compromised at block {integrity_check['broken_at']}")

# Retrieve user's acceptance history
user_history = ledger.get_user_acceptances(user_id="user@example.com")
for record in user_history:
    print(f"- {record['document_type']} accepted on {record['timestamp']}")
    print(f"  Signature: {record['signature'][:32]}...")

# Generate court-admissible proof
proof = ledger.generate_legal_proof(
    acceptance_id="acc_123",
    include_timestamps=True,  # RFC 3161 timestamps
    include_chain=True        # Full chain of custody
)

with open("acceptance_proof.json", "w") as f:
    json.dump(proof, f, indent=2)
```

Ledger Properties:
- Immutable - Append-only, tamper-evident
- Cryptographic - Ed25519 signatures, SHA-256 hash chains
- Timestamped - RFC 3161 trusted timestamps
- Dual Persistence - SQLite WAL + file append-only
- Court-Grade - Legally admissible evidence format
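The tamper-evidence property rests on the SHA-256 hash chain: each entry's hash covers the previous entry's hash, so altering any record invalidates every later link. A self-contained sketch of the idea (not the AcceptanceLedger implementation; entry layout and the all-zero genesis hash are illustrative):

```python
import hashlib
import json

def chain_hash(prev_hash: str, entry: dict) -> str:
    """One chain link: SHA-256 over the previous hash plus the entry."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(entries: list) -> bool:
    """Recompute every link; a tampered entry breaks all later hashes."""
    prev = "0" * 64  # genesis hash
    for entry in entries:
        expected = chain_hash(prev, entry["data"])
        if entry["chain_hash"] != expected:
            return False
        prev = expected
    return True

# Build a tiny two-entry ledger, then tamper with it
ledger, prev = [], "0" * 64
for data in ({"user": "a@example.com"}, {"user": "b@example.com"}):
    prev = chain_hash(prev, data)
    ledger.append({"data": data, "chain_hash": prev})

print(verify_chain(ledger))          # True
ledger[0]["data"]["user"] = "evil"   # tamper with the first record
print(verify_chain(ledger))          # False
```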
Use Cases:
- User agreement acceptance
- License acceptance tracking
- Data processing consent (GDPR)
- Terms of Service updates
- Policy acknowledgment
Three-Tier Memory Architecture:

```python
from app.core.memory_engine import MemoryEngine

# Initialize memory engine
memory = MemoryEngine(data_dir="data/memory_engine")

# EPISODIC MEMORY - Autobiographical events
memory.store_episodic(
    event="User taught me about Python decorators",
    context={
        "participants": ["user@example.com", "AI"],
        "location": "chat_session_456",
        "emotion": "curious"
    },
    timestamp="2026-02-12T14:30:00Z",
    importance=0.85  # 0-1 scale
)

# Retrieve recent episodic memories
recent_events = memory.recall_episodic(
    query="Python decorators",
    time_window="7d",
    min_importance=0.5
)

# SEMANTIC MEMORY - Factual knowledge
memory.store_semantic(
    concept="decorator",
    definition="A function that modifies the behavior of another function",
    category="programming",
    related_concepts=["closure", "higher_order_function", "metaprogramming"],
    confidence=0.95
)

# Query semantic network
decorator_knowledge = memory.query_semantic(
    concept="decorator",
    include_related=True,
    depth=2  # Traverse 2 levels of relationships
)

# PROCEDURAL MEMORY - Skills and how-to knowledge
memory.store_procedural(
    skill="python_debugging",
    steps=[
        "Reproduce the error",
        "Read the stack trace",
        "Use print statements or debugger",
        "Isolate the problematic code",
        "Test the fix"
    ],
    proficiency=0.75,  # Skill level (0-1)
    practice_count=23  # Times practiced
)

# Retrieve procedural knowledge
debugging_skill = memory.recall_procedural(skill="python_debugging")
print(f"Skill: {debugging_skill['skill']}")
print(f"Proficiency: {debugging_skill['proficiency']}")
print("Steps:")
for i, step in enumerate(debugging_skill['steps'], 1):
    print(f"  {i}. {step}")

# MEMORY CONSOLIDATION - Strengthen important memories
memory.consolidate(
    criteria={"importance": 0.7, "access_count": 5},
    decay_factor=0.9  # Weaken less important memories
)

# MEMORY SEARCH - Cross-memory type search
results = memory.search_all(
    query="Python",
    memory_types=["episodic", "semantic", "procedural"],
    limit=10
)
for result in results:
    print(f"{result['type']}: {result['content'][:100]}...")
```

Memory Features:
- Decay & Reinforcement - Memories fade over time unless accessed
- Importance Weighting - Critical memories preserved longer
- Associative Recall - Memories linked by semantic relationships
- Consolidation - Long-term memory formation
- Forgetting - Automatic pruning of low-value memories
State Persistence: `data/memory_engine/*.json`
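Decay and reinforcement can be modeled as a per-cycle multiplicative fade plus a bonus on each recall. The formula and constants below are illustrative assumptions for the sketch, not the MemoryEngine's actual update rule:

```python
def updated_strength(strength: float, accesses: int,
                     decay: float = 0.9, boost: float = 0.05) -> float:
    """Toy decay/reinforcement: fade each cycle, strengthen on recall."""
    strength *= decay             # passive forgetting
    strength += boost * accesses  # reinforcement when the memory is used
    return min(strength, 1.0)     # strength is capped at 1.0

s = 0.8
for _ in range(3):                # three idle cycles, no recalls
    s = updated_strength(s, accesses=0)
print(round(s, 3))  # 0.583
```

A memory whose strength falls below some threshold would then be a candidate for the automatic pruning ("forgetting") described above.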
Autonomous Learning with Approval Workflow:

```python
from app.core.continuous_learning import ContinuousLearning

# Initialize learning system
learning = ContinuousLearning(
    data_dir="data/continuous_learning",
    auto_approve=False  # Require human approval
)

# Absorb new information
learning.ingest_fact(
    fact="Quantum computers use qubits instead of classical bits",
    source="User conversation",
    category="technology",
    confidence=0.9
)

# Generate structured learning report
report = learning.generate_learning_report(
    topic="quantum_computing",
    depth="intermediate",
    format="markdown"
)
print(report)
# Output:
# ## Quantum Computing Learning Report
#
# ### Key Concepts Learned:
# - Qubits vs classical bits (confidence: 0.9)
# - Superposition principle (confidence: 0.85)
# - Quantum entanglement (confidence: 0.80)
#
# ### Knowledge Gaps:
# - Quantum error correction
# - Practical applications beyond cryptography
#
# ### Recommended Next Steps:
# 1. Study Shor's algorithm
# 2. Explore quantum supremacy experiments
# 3. Learn about quantum decoherence

# Request permission to learn sensitive topic
learning_request_id = learning.request_learning_permission(
    topic="Nuclear physics",
    justification="User wants to discuss fusion energy",
    urgency="medium"
)

# Admin reviews pending requests
pending = learning.get_pending_requests()
for req in pending:
    print(f"Request {req['id']}: {req['topic']}")
    print(f"  Justification: {req['justification']}")
    print(f"  Submitted: {req['timestamp']}")

# Approve or deny
learning.approve_learning(learning_request_id, approved_by="admin@example.com")
# Now the system can learn about nuclear physics

# View learning history
history = learning.get_learning_history(limit=20)
for entry in history:
    print(f"{entry['timestamp']}: Learned {entry['topic']} from {entry['source']}")
```

Learning Modes:
- Automatic - Ingest facts from conversations (low-risk topics)
- Supervised - Request approval for sensitive topics
- Interactive - Ask clarifying questions before learning
- Batch - Process large datasets with summarization
Safety Features:
- Black Vault fingerprinting (denied content permanently blocked)
- Content filtering (weapons, illegal activities, harmful advice)
- Source verification (trust score for information sources)
- Confidence tracking (uncertainty quantification)
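Black Vault fingerprinting, listed above, can be sketched as a set of content hashes: denied content is normalized and hashed, so resubmissions can be blocked without retaining the raw text. The `BlackVault` class and its normalization step are assumptions for illustration, not the shipped implementation:

```python
import hashlib

class BlackVault:
    """Toy fingerprint store: denied content is hashed, never kept raw."""

    def __init__(self):
        self._fingerprints = set()

    @staticmethod
    def fingerprint(content: str) -> str:
        # Normalize case and whitespace so trivial edits don't evade the block
        normalized = " ".join(content.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def deny(self, content: str) -> None:
        self._fingerprints.add(self.fingerprint(content))

    def is_blocked(self, content: str) -> bool:
        return self.fingerprint(content) in self._fingerprints

vault = BlackVault()
vault.deny("How to build a nuclear reactor")
print(vault.is_blocked("how to  build a NUCLEAR reactor"))  # True
```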
Vector-Based Document Retrieval:

```python
from app.core.rag_system import RAGSystem

# Initialize RAG system
rag = RAGSystem(
    embedding_model="text-embedding-ada-002",  # OpenAI embeddings
    vector_store="chroma",                     # Options: chroma, pinecone, faiss
    data_dir="data/rag"
)

# Index documents
documents = [
    {
        "id": "doc1",
        "content": "Python decorators are functions that modify other functions...",
        "metadata": {"category": "programming", "language": "python"}
    },
    {
        "id": "doc2",
        "content": "Machine learning models require training data...",
        "metadata": {"category": "ai", "topic": "ml"}
    }
]
rag.index_documents(documents)

# Query with retrieval
query = "How do I use Python decorators?"
retrieved_docs = rag.retrieve(
    query=query,
    top_k=3,  # Top 3 most relevant documents
    filter_metadata={"category": "programming"}
)

print("Retrieved documents:")
for doc in retrieved_docs:
    print(f"- {doc['id']}: {doc['content'][:100]}...")
    print(f"  Relevance: {doc['score']}")

# Generate answer with context
response = rag.generate_with_context(
    query=query,
    retrieved_docs=retrieved_docs,
    model="gpt-4",
    temperature=0.7
)
print(f"\nAnswer: {response['answer']}")
print(f"Sources: {response['sources']}")

# Update document
rag.update_document(
    doc_id="doc1",
    content="Python decorators are callables that modify other callables using @ syntax...",
    metadata={"category": "programming", "language": "python", "updated": "2026-02-12"}
)

# Delete document
rag.delete_document(doc_id="doc2")

# Get statistics
stats = rag.get_stats()
print(f"Total documents: {stats['total_docs']}")
print(f"Total embeddings: {stats['total_embeddings']}")
print(f"Index size: {stats['index_size_mb']} MB")
```

Vector Store Options:
- ChromaDB - Local, lightweight, embedded
- Pinecone - Cloud-hosted, scalable, production-ready
- FAISS - Meta's similarity search library
- Weaviate - GraphQL-based vector database
Embedding Models:
- OpenAI: `text-embedding-ada-002`, `text-embedding-3-small`, `text-embedding-3-large`
- Open Source: `sentence-transformers` (e.g. `all-MiniLM-L6-v2`)
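Under the hood, retrieval ranks stored document embeddings by similarity to the query embedding, most commonly cosine similarity. A dependency-free sketch of that ranking step (the vectors and document names are made up; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Rank two toy document embeddings against a query embedding
query = [0.9, 0.1, 0.0]
docs = {"doc1": [0.8, 0.2, 0.0], "doc2": [0.0, 0.1, 0.9]}
top = max(docs, key=lambda d: cosine_similarity(query, docs[d]))
print(top)  # doc1
```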
Unified AI Reasoning Hub:

```python
import os

import pandas as pd

from app.core.intelligence_engine import IntelligenceEngine

# Initialize engine
engine = IntelligenceEngine(
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4"
)

# INTENT DETECTION - Classify user intent
intent = engine.detect_intent(
    text="Can you help me analyze this CSV file?",
    intents=["data_analysis", "conversation", "command", "question"]
)
print(f"Detected intent: {intent['intent']} (confidence: {intent['confidence']})")

# DATA ANALYSIS - Analyze structured data
df = pd.read_csv("data.csv")
analysis = engine.analyze_data(
    dataframe=df,
    analysis_type="statistical",  # Options: statistical, clustering, correlation
    include_visualization=True
)
print(analysis['summary'])
print(f"Key findings: {analysis['insights']}")
if analysis['visualization']:
    analysis['visualization'].savefig("analysis_plot.png")

# KNOWLEDGE BASE QUERY - Search internal knowledge
knowledge = engine.query_knowledge(
    query="What are the best practices for API security?",
    domains=["security", "api_design"],
    include_sources=True
)
print(knowledge['answer'])
print(f"Sources: {knowledge['sources']}")

# LEARNING ROUTER - Route learning requests
learning_route = engine.route_learning_request(
    topic="Cryptocurrency mining",
    context="User wants to understand blockchain technology"
)
if learning_route['requires_approval']:
    print(f"⚠️ Requires approval: {learning_route['reason']}")
else:
    print(f"✅ Auto-approved for learning: {learning_route['category']}")

# REASONING CHAIN - Multi-step reasoning
reasoning = engine.reason(
    question="If AGI is developed, what ethical frameworks should govern it?",
    steps=5,  # Max reasoning steps
    include_trace=True
)
print("Reasoning trace:")
for i, step in enumerate(reasoning['trace'], 1):
    print(f"{i}. {step['thought']}")
    print(f"   Conclusion: {step['conclusion']}")
print(f"\nFinal answer: {reasoning['final_answer']}")
```

Analysis Capabilities:
- Statistical analysis (mean, median, std, quartiles)
- K-means clustering
- Correlation matrices
- Anomaly detection
- Time series analysis
- Predictive modeling
CSV/XLSX/JSON Processing:
from app.core.data_analysis import DataAnalyzer
# Initialize analyzer
analyzer = DataAnalyzer()
# Load and analyze CSV
analysis = analyzer.analyze_file(
file_path="sales_data.csv",
analysis_types=["descriptive", "clustering", "visualization"]
)
# Descriptive statistics
print("Descriptive Statistics:")
print(analysis['descriptive'])
# Output:
# Column: revenue
# Mean: $125,450
# Median: $98,200
# Std Dev: $45,300
# Min: $10,000
# Max: $500,000
# Clustering analysis
print(f"\nClusters found: {analysis['clustering']['num_clusters']}")
for i, cluster in enumerate(analysis['clustering']['clusters']):
print(f"Cluster {i}: {cluster['size']} records")
print(f" Center: {cluster['center']}")
print(f" Characteristics: {cluster['description']}")
# Generate visualizations
analyzer.create_visualizations(
data=analysis['data'],
output_dir="visualizations/",
types=["histogram", "scatter", "heatmap", "boxplot"]
)
# Export report
analyzer.export_report(
analysis=analysis,
format="pdf", # Options: pdf, html, markdown
output_path="analysis_report.pdf"
)
# ADVANCED: Custom analysis pipeline
pipeline = analyzer.create_pipeline([
{"step": "load", "file": "data.csv"},
{"step": "clean", "remove_duplicates": True, "fill_na": "mean"},
{"step": "transform", "normalize": True, "log_scale": ["revenue"]},
{"step": "cluster", "algorithm": "kmeans", "n_clusters": 5},
{"step": "visualize", "types": ["scatter", "cluster_map"]},
{"step": "export", "format": "html", "output": "report.html"}
])
result = analyzer.execute_pipeline(pipeline)
print(f"Pipeline completed: {result['status']}")

Supported Formats:
- CSV (comma-separated values)
- XLSX (Microsoft Excel)
- JSON (nested structures)
- Parquet (Apache Parquet)
- TSV (tab-separated values)
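A sketch of how multi-format loading can be dispatched by file extension, using only the standard library. The `LOADERS` table and `load_rows` helper are hypothetical; XLSX and Parquet would plug into the same table via openpyxl and pyarrow:

```python
import csv
import io
import json
from pathlib import Path

def _load_csv(text, delimiter=","):
    """Parse delimited text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text), delimiter=delimiter))

# Extension -> loader returning a list of row dicts (stdlib-backed formats only).
LOADERS = {
    ".csv": _load_csv,
    ".tsv": lambda text: _load_csv(text, delimiter="\t"),
    ".json": json.loads,
}

def load_rows(path: str):
    ext = Path(path).suffix.lower()
    return LOADERS[ext](Path(path).read_text())
```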
Dual-Backend Image Generation with Content Filtering:
from app.core.image_generator import ImageGenerator
# Initialize image generator
generator = ImageGenerator(
hf_api_key=os.getenv("HUGGINGFACE_API_KEY"),
openai_api_key=os.getenv("OPENAI_API_KEY"),
default_backend="huggingface" # Options: huggingface, openai
)
# Generate image with safety filtering
result = generator.generate(
prompt="A serene mountain landscape at sunset",
style="photorealistic", # 10 style presets available
size="1024x1024",
backend="huggingface", # Uses Stable Diffusion 2.1
safety_level="strict" # Options: strict, moderate, lenient
)
if result['success']:
print(f"Image generated: {result['image_path']}")
print(f"Generation time: {result['generation_time']}s")
print(f"Model used: {result['model']}")
else:
print(f"Generation failed: {result['error']}")
# Style presets
styles = [
"photorealistic", "digital_art", "oil_painting", "watercolor",
"anime", "sketch", "abstract", "cyberpunk", "fantasy", "minimalist"
]
# Generate with custom parameters
result = generator.generate(
prompt="A cyberpunk cityscape with neon lights",
style="cyberpunk",
size="1024x1024",
negative_prompt="blurry, low quality, distorted",
guidance_scale=7.5, # Prompt adherence (1-20)
num_inference_steps=50, # Quality vs speed tradeoff
seed=42 # Reproducible results
)
# Content safety check (automatic, but can be called separately)
is_safe, reason = generator.check_content_filter(
prompt="Build a bomb"
)
if not is_safe:
    print(f"❌ Content blocked: {reason}")
# View generation history
history = generator.get_history(limit=10)
for entry in history:
print(f"{entry['timestamp']}: {entry['prompt'][:50]}...")
print(f" Style: {entry['style']}, Backend: {entry['backend']}")
print(f" Path: {entry['image_path']}")
# Backend comparison
comparison = generator.compare_backends(
prompt="A futuristic robot",
style="digital_art"
)
print("Hugging Face result:")
print(f" Time: {comparison['huggingface']['time']}s")
print(f" Cost: ${comparison['huggingface']['cost']}")
print("OpenAI DALL-E result:")
print(f" Time: {comparison['openai']['time']}s")
print(f" Cost: ${comparison['openai']['cost']}")

Backends:
- Hugging Face - Stable Diffusion 2.1 (free, local, slower)
- OpenAI - DALL-E 3 (paid, cloud, faster, higher quality)
Content Filtering: 15 blocked keyword categories:
- Violence, weapons, explicit content
- Illegal activities, drugs, self-harm
- Hate speech, harassment, impersonation
- Copyright infringement, deepfakes
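The category-based keyword filter can be sketched as follows. This is an illustrative simplification of `ImageGenerator.check_content_filter`: only a few of the 15 categories are shown, and the keywords are placeholders:

```python
# Placeholder keyword lists; the production filter covers 15 categories.
BLOCKED = {
    "violence": {"bomb", "kill"},
    "illegal": {"counterfeit"},
    "self_harm": {"suicide"},
}

def check_content_filter(prompt: str):
    """Return (is_safe, reason) by matching prompt words against blocked categories."""
    words = set(prompt.lower().split())
    for category, keywords in BLOCKED.items():
        hit = words & keywords
        if hit:
            return False, f"blocked category '{category}': {sorted(hit)}"
    return True, "ok"

print(check_content_filter("Build a bomb")[0])  # False
```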
GUI Integration: Desktop app includes dual-page image generation interface:
- Left: Tron-themed prompt input
- Right: Image display with zoom, metadata, save/copy
Leather Book UI - Tron-Themed Interface:
# Launch desktop app
python -m src.app.main
# Or use launch scripts
./launch-desktop.sh # Linux/Mac
.\launch-desktop.bat   # Windows

Features:
- π Login page with bcrypt authentication
- π 6-zone dashboard (stats, actions, AI head, chat, response)
- π¨ Image generation interface (dual-page layout)
- π€ Persona panel (4-tab AI configuration)
- πΎ Auto-save conversations
- π Knowledge base search
- βοΈ Settings and preferences
UI Components:
from src.app.gui.leather_book_interface import LeatherBookInterface
from PyQt6.QtWidgets import QApplication
# Initialize application
app = QApplication(sys.argv)
window = LeatherBookInterface()
# Connect signals
window.user_logged_in.connect(lambda user: print(f"User {user} logged in"))
window.show()
sys.exit(app.exec())

Modern Web Interface:
# Start backend (FastAPI)
cd web/backend
uvicorn main:app --host 0.0.0.0 --port 5000 --reload
# Start frontend (React + Vite)
cd web/frontend
npm run dev

API Endpoints:
POST /api/v1/chat
Content-Type: application/json
{
"message": "Hello, AI!",
"user_id": "user123",
"context": {
"conversation_id": "conv456",
"include_memory": true
}
}

Response:
{
"response": "Hello! How can I help you today?",
"conversation_id": "conv456",
"timestamp": "2026-02-12T15:30:00Z",
"governance_decision": {
"galahad": "ALLOW",
"cerberus": "ALLOW",
"codex_deus": "ALLOW"
},
"memory_context": [
{"type": "episodic", "content": "Previous conversation about Python..."}
]
}

Frontend Technologies:
- React 18 with hooks
- Zustand state management
- Vite build tool
- TailwindCSS styling
- WebSocket for real-time chat
Command-Line Interface:
# Chat with AI
project-ai chat "What are Asimov's Laws?"
# Analyze data
project-ai analyze data.csv --output report.html
# Generate image
project-ai image "A serene lake" --style watercolor --size 1024x1024
# View memory
project-ai memory search "Python decorators" --type episodic
# Check governance
project-ai governance test --action "delete_data" --context '{"user_request": true}'
# Verify ledger
project-ai ledger verify
# View acceptance history
project-ai ledger show --user-id user@example.com
# Plugin management
project-ai plugins list
project-ai plugins enable data_analyzer
project-ai plugins disable image_generator
# Learning requests
project-ai learning submit "Nuclear physics" --justification "Fusion energy research"
project-ai learning approve req_123 --admin-id admin@example.com
# System health
project-ai health check --verbose
# Export data
project-ai export --format json --output backup.json

Interactive Mode:
# Launch interactive shell
project-ai shell
# Inside shell:
>>> help
Available commands:
chat - Chat with AI
analyze - Analyze data
image - Generate images
memory - Memory operations
governance - Governance checks
exit - Exit shell
>>> chat Hello, AI!
AI: Hello! How can I help you today?
>>> memory search "Python"
Found 5 results:
1. [Episodic] User taught me about Python decorators...
2. [Semantic] Python is a high-level programming language...
3. [Procedural] Debugging Python code involves...
>>> exit
Goodbye!

RESTful + GraphQL API:
# Start API server
from api.main import app
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)

REST Endpoints:
GET /api/v1/health - Health check
POST /api/v1/chat - Send message
GET /api/v1/conversations - List conversations
GET /api/v1/memory/search - Search memory
POST /api/v1/learning/request - Submit learning request
GET /api/v1/learning/pending - Get pending requests
POST /api/v1/image/generate - Generate image
GET /api/v1/plugins - List plugins
POST /api/v1/governance/evaluate - Evaluate action
GET /api/v1/ledger/verify - Verify ledger
POST /api/v1/accept-agreement - Accept user agreement
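The `POST /api/v1/chat` body documented earlier can be assembled with a small helper. `build_chat_request` is a hypothetical convenience function, not part of the shipped client; only the field names mirror the documented payload:

```python
import json

def build_chat_request(message, user_id, conversation_id=None, include_memory=True):
    """Assemble the JSON body for POST /api/v1/chat."""
    body = {"message": message, "user_id": user_id}
    if conversation_id is not None:
        body["context"] = {
            "conversation_id": conversation_id,
            "include_memory": include_memory,
        }
    return body

payload = build_chat_request("Hello, AI!", "user123", conversation_id="conv456")
print(json.dumps(payload, indent=2))
```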
GraphQL Schema:
type Query {
chat(message: String!, userId: String!): ChatResponse
memory(query: String!, types: [MemoryType!]): [Memory]
conversations(userId: String!, limit: Int): [Conversation]
learningRequests(status: RequestStatus): [LearningRequest]
plugins: [Plugin]
acceptance(userId: String!): AcceptanceRecord
}
type Mutation {
sendMessage(input: ChatInput!): ChatResponse
generateImage(input: ImageInput!): ImageResult
approveLearning(requestId: ID!, adminId: String!): Boolean
acceptAgreement(input: AcceptanceInput!): AcceptanceRecord
}
type Subscription {
chatMessages(conversationId: ID!): ChatMessage
governanceDecisions: GovernanceDecision
}

Example GraphQL Query:
query GetChatHistory {
conversations(userId: "user123", limit: 10) {
id
messages {
role
content
timestamp
}
governance {
galahad
cerberus
codexDeus
}
}
}

API Authentication:
# API key authentication
headers = {
"Authorization": "Bearer your-api-key",
"Content-Type": "application/json"
}
response = requests.post(
"https://api.project-ai.com/v1/chat",
headers=headers,
json={"message": "Hello, AI!", "user_id": "user123"}
)

Project-AI includes 30+ specialized agents for security, code quality, infrastructure, and operations.
| Agent | Role | Key Capabilities |
|---|---|---|
| AlphaRed | Evolutionary Adversary | Genetic algorithms, RL-based attacks, edge case discovery |
| RedTeamAgent | ARTKIT Multi-turn Testing | Adaptive attack strategies, vulnerability analysis, multi-turn conversations |
| SafetyGuard | Content Moderation | Llama-Guard-3-8B filtering, PII detection, jailbreak prevention |
| ConstitutionalGuardrail | Ethical Enforcement | Self-critique, counter-arguments, principle verification |
| JailbreakBench | Standardized Testing | Benchmark suite, defense evaluation, attack cataloging |
| TARLProtector | Code Protection | Strategic defensive programming, runtime monitoring, stack analysis |
Example: Run Red Team Attack
from src.app.agents.red_team_agent import RedTeamAgent
red_team = RedTeamAgent(attack_strategy="adaptive", max_turns=10)
attack_session = red_team.execute_attack(
target="governance_system",
goal="Extract sensitive information",
tactics=["social_engineering", "prompt_injection"]
)
for vulnerability in red_team.analyze_vulnerabilities(attack_session):
print(f"{vulnerability['severity'].upper()}: {vulnerability['description']}")
print(f"Mitigation: {vulnerability['mitigation']}")

| Agent | Role | Key Capabilities |
|---|---|---|
| CodeAdversary | MUSE-style Vulnerability Detection | Static analysis, auto-patching, SARIF reports, CWE mapping |
| CIChecker | Automated Quality Gates | pytest, ruff linting, security audits, coverage enforcement |
| RefactorAgent | Code Transformation | Black/ruff formatting, complexity analysis, refactoring suggestions |
| DependencyAuditor | Security Scanning | pip-audit, vulnerability detection, license compliance |
| TestQAGenerator | Test Generation | pytest stub creation, test validation, coverage improvement |
Example: Auto-fix Vulnerabilities
from src.app.agents.code_adversary_agent import CodeAdversary
code_adversary = CodeAdversary(analysis_depth="deep", auto_patch=True)
scan_result = code_adversary.scan_file(
file_path="src/app/core/user_manager.py",
languages=["python"],
rules=["security", "performance", "maintainability"]
)
# Auto-patch medium+ severity issues
patch_results = code_adversary.apply_patches(
scan_result=scan_result,
severity_threshold="medium",
create_backup=True
)
print(f"Patched: {patch_results['patched_count']}/{patch_results['total_issues']}")

| Agent | Role | Key Capabilities |
|---|---|---|
| CodexDeusMaximus | Repository Guardian | Structure validation, auto-correction, naming conventions, scaffolding |
| CerberusCodexBridge | Threat-Defense Integration | Alert routing, TARL integration, defense upgrades, incident response |
| BorderPatrol | Audit Sandbox | Isolated audits, penetration testing, safe dependency installation |
| RollbackAgent | Incident Response | Integration monitoring, automatic rollbacks, failure detection |
Example: Enforce Repository Structure
from src.app.agents.codex_deus_maximus import CodexDeusMaximus
codex = CodexDeusMaximus(schematic_path="config/repository_schematic.yaml")
validation = codex.validate_structure(repo_path="/path/to/repo", auto_fix=True)
for violation in validation['violations']:
    print(f"❌ {violation['type']}: {violation['description']}")
    if violation['auto_fixed']:
        print(f"  ✅ Auto-fixed: {violation['fix_applied']}")

| Agent | Role | Key Capabilities |
|---|---|---|
| SandboxRunner | Safe Code Execution | Subprocess isolation, resource limits, network blocking, filesystem protection |
| SandboxWorker | Resource-Constrained Worker | CPU/memory/FD limits, timeout enforcement, safe builtins |
Example: Execute Untrusted Code Safely
from src.app.agents.sandbox_runner import SandboxRunner
sandbox = SandboxRunner(
timeout=30,
memory_limit_mb=512,
cpu_limit_percent=50,
network_access=False
)
result = sandbox.execute_python(
code="import pandas as pd; df = pd.DataFrame({'A': [1,2,3]}); print(df.describe())",
globals_allowed=["pandas", "numpy"],
builtins_allowed=["print", "len", "range"]
)
if result['success']:
print(f"Output: {result['stdout']}")
print(f"Time: {result['execution_time']}s, Memory: {result['memory_used_mb']} MB")

| Agent | Role | Key Capabilities |
|---|---|---|
| RetrievalAgent | Vector-Based Q&A | Document indexing, semantic search, hybrid search, re-ranking |
| PlannerAgent | Task Orchestration | Task decomposition, dependency graphs, parallel execution, adaptive re-planning |
| KnowledgeCurator | Learning Deduplication | Semantic deduplication, quality scoring, knowledge consolidation |
| ExpertAgent | Elevated Review | Audit review, integration approval, elevated permissions |
Example: Plan and Execute Complex Task
from src.app.agents.planner_agent import PlannerAgent
planner = PlannerAgent(planning_model="gpt-4", max_depth=5)
plan = planner.create_plan(
goal="Build a web application with user authentication",
context={"tech_stack": ["Python", "FastAPI", "React"], "deadline": "2026-03-01"}
)
execution = planner.execute_plan(plan=plan, parallel=True, checkpoint_interval=3600)
status = planner.get_execution_status(execution_id=execution['id'])
print(f"Progress: {status['completed_tasks']}/{status['total_tasks']}")

| Agent | Role | Key Capabilities |
|---|---|---|
| Oversight | System Monitoring | Health checks, activity tracking, compliance enforcement |
| UXTelemetry | User Feedback | Interaction collection, prioritized suggestions, UX optimization |
| Explainability | Decision Transparency | Reasoning traces, interpretability, audit trails |
| Validator | Input Validation | Data integrity, schema validation, sanitization |
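As an illustration of the decision-transparency role described for the Explainability agent, a reasoning trace can be modeled as an ordered log of per-component verdicts. The `DecisionTrace` class and its field names are a hypothetical sketch, not the agent's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    """Ordered, timestamped log of governance verdicts for one action."""
    action: str
    steps: list = field(default_factory=list)

    def record(self, component: str, verdict: str, reason: str):
        self.steps.append({
            "component": component,
            "verdict": verdict,
            "reason": reason,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def explain(self) -> str:
        lines = [f"Decision trace for '{self.action}':"]
        for i, s in enumerate(self.steps, 1):
            lines.append(f"  {i}. {s['component']}: {s['verdict']} ({s['reason']})")
        return "\n".join(lines)

trace = DecisionTrace(action="delete_data")
trace.record("GALAHAD", "ALLOW", "no ethical concern")
trace.record("CERBERUS", "DENY", "irreversible destructive action")
print(trace.explain())
```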
| Tier | Users | Monthly Cost | Per-User Cost | Uptime SLA | Support |
|---|---|---|---|---|---|
| Solo (Free) | 1 | $5-15* | N/A | Best effort | Community |
| Small Team | 1-10 | $1,100-1,300 | $110-130 | 99.5% | 4-hour response |
| Medium Company | 10-100 | $1,700-2,500 | $17-250 | 99.5% | 4-hour response |
| Enterprise | 100-1,000 | $9,000-16,000 | $9-160 | 99.95% | Dedicated team |
| Global Scale | 1,000+ | $36,000-76,000 | $3.60-76 | 99.99% | 24/7/365 |
| Government | 1-100+ | Custom | Custom | 99.99% | Cleared 24/7 |
*API costs only (OpenAI usage)
Infrastructure:
- CPU: 2-4 cores
- RAM: 4-8GB
- Storage: 30GB
- OS: Windows/macOS/Linux
Deployment:
pip install project-ai
project-ai accept-agreement
python -m src.app.main  # Launch desktop app

Cost: $0 license + $5-15/month API costs
Infrastructure:
- CPU: 4-8 cores
- RAM: 16-32GB
- Storage: 100GB SSD
- Server: 1 VM (t3.medium)
Deployment:
# Option 1: Shared desktop
python -m src.app.main --multi-user --port 8000
# Option 2: API server
uvicorn api.main:app --host 0.0.0.0 --port 8000 --workers 4
# Option 3: Docker Compose
docker-compose up -d

Cost: $1,000/month license + $100-300/month infrastructure
Architecture:
Load Balancer β API Pods (3-5) β PostgreSQL + Redis
Deployment:
# Kubernetes
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/deployment.yaml
kubectl autoscale deployment project-ai-api --cpu-percent=70 --min=3 --max=10

Cost: $1,000/month license + $700-1,500/month infrastructure
Architecture:
Global LB β Multi-region K8s (10-20 pods) β PostgreSQL (multi-region) + Redis (sharded)
Deployment:
cd k8s/tk8s
kubectl apply -k infrastructure/
kubectl apply -f governance/
kubectl apply -f application/
python validate_tk8s.py --environment production

Cost: $1,000/month license + $8,000-15,000/month infrastructure
Architecture:
GeoDNS + Multi-CDN β 4+ Regions (US/EU/APAC/LATAM) β K8s Federation β Global Data Layer
Features:
- Multi-region, multi-cloud
- Geo-replication
- 99.99%+ uptime SLA
- 24/7/365 support
- Disaster recovery (RPO < 1hr, RTO < 1hr)
Cost: $1,000/month license + $35,000-75,000/month infrastructure
Special Features:
- FIPS 140-2/3 Level 3+ HSM
- FedRAMP High authorization
- Air-gapped deployment
- Classified data handling (up to Top Secret)
- 24/7/365 cleared support
Deployment:
./install-airgap.sh --mode airgap --classification TOP_SECRET --hsm-required
# OR
./deploy-govcloud.sh --region us-gov-west-1 --classification SECRET --fedramp-high

Cost: $10,000+/year (progressive seat pricing) + surcharges + $30,000-50,000/month infrastructure
Symmetric:
- AES-256-GCM (FIPS 140-2)
- ChaCha20-Poly1305
- Fernet (timestamp-verified)
Asymmetric:
- RSA-4096
- Ed25519 (fast elliptic curve)
- ECDH (key exchange)
Hashing:
- SHA-256 (ledger chains)
- SHA-3 (Keccak)
- BLAKE2 (fast)
- Argon2 (password hashing)
Key Derivation:
- PBKDF2
- HKDF
- Scrypt
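A minimal sketch of the SHA-256 hash chaining used for tamper evidence in the acceptance ledger, using only the standard library. The entry fields and helper names here are illustrative; the real ledger additionally binds entries with Ed25519 signatures and RFC 3161 timestamps:

```python
import hashlib
import json

def chain_entries(entries):
    """Link entries so each record's hash covers its payload plus the previous hash."""
    prev = "0" * 64  # genesis hash
    chained = []
    for entry in entries:
        record = {"prev_hash": prev, **entry}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        prev = record["hash"]
        chained.append(record)
    return chained

def verify_chain(chained):
    """Recompute every hash; any modified or reordered entry breaks the chain."""
    prev = "0" * 64
    for record in chained:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True

ledger = chain_entries([{"user": "alice", "event": "accept"},
                        {"user": "bob", "event": "accept"}])
print(verify_chain(ledger))  # True
```

Tampering with any committed field invalidates that record's hash and, via `prev_hash`, every record after it.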
TPM 2.0 Integration:
from governance.security.tpm_integration import TPMKeyStore
tpm = TPMKeyStore()
key_handle = tpm.generate_key(key_type="RSA-2048", exportable=False)
signature = tpm.sign(key_handle=key_handle, data=b"Document")

HSM Integration (FIPS 140-2 Level 3+):
from governance.security.hsm_integration import HSMKeyStore
hsm = HSMKeyStore(device_path="/dev/ttyUSB0", pin="********")
key_id = hsm.generate_key(key_type="AES-256", exportable=False)
ciphertext = hsm.encrypt(key_id=key_id, plaintext=b"Sensitive data")

Project Status:
- Current Project Status - Complete system status and capabilities
- 360° Deployable System Standard - Comprehensive deployment standard ⭐ NEW
- Changelog - Version history and recent updates
- Historical Documentation - Archived implementation summaries (168 files)
Core Documentation:
- Installation Guide - Setup instructions for all platforms
- Architecture Overview - System design and components
- Trust Boundaries - Security boundary analysis ⭐ NEW
- API Documentation - Complete API reference
- Security Documentation - Security architecture and compliance
Governance & Legal:
- Governance Framework - Triumvirate system
- Legal Documentation - Complete license codex
- Acceptance Ledger - Cryptographic binding
Operations:
- TK8S Doctrine - Kubernetes deployment philosophy
- Production Deployment - Scaling guide
- Failure Models & Operations - Incident response ⭐ NEW
- SLOs & Error Budgets - Operational maturity ⭐ NEW
- Contributing Guide - How to contribute
We welcome contributions! See CONTRIBUTING.md for guidelines.
All Pull Requests MUST comply with the Security Validation Claims Policy.
Claims of "production-ready," "enterprise best practices," "runtime enforcement," or similar assertions are PROHIBITED unless the PR includes direct runtime validation output for ALL of the following:
- ✅ Deploying an unsigned image with evidence of admission denial
- ✅ Deploying a signed image with evidence of successful admission
- ✅ Attempting to deploy privileged containers with evidence of denial
- ✅ Attempting cross-namespace or lateral pod communication with evidence of denial
- ✅ Attempting log deletion from a running workload with evidence of denial
If ANY validations are missing, use safe framing ONLY:
- "Implementation aligns with enterprise hardening patterns."
- "Validation tests confirm configuration correctness."
- "Full adversarial validation is ongoing."
PRs that violate this policy will be rejected with no exceptions.
See the complete policy: .github/SECURITY_VALIDATION_POLICY.md
Development Setup:
git clone https://github.com/IAmSoThirsty/Project-AI.git
cd Project-AI
python -m venv venv
source venv/bin/activate # or venv\Scripts\activate on Windows
pip install -e ".[dev]"
pre-commit install
pytest -v
ruff check .

Project-AI uses a multi-layer licensing framework:
- Code: MIT + Apache 2.0
- Governance: PAGL (non-removable)
- Data: Output License + Data Ingest License
- Commercial: $99 one-time for commercial rights
- Government: Progressive seat pricing
See LICENSE and docs/legal/ for details.
Community:
- GitHub Issues
- GitHub Discussions
- Documentation: docs/
Commercial:
- Email: support@project-ai.com
- Priority Support: 4-hour response (Company tier+)
- 24/7 Support: Government tier
- Custom SLA: sales@project-ai.com
- Isaac Asimov - Ethical foundation (Three Laws of Robotics)
- Anthropic - Constitutional AI research
- Open Source Community - Amazing libraries and tools
- Contributors - Thank you! π