SECURITY.md (ymcbzrgn/NeuralForge)
Security Policy

πŸ›‘οΈ Security Philosophy

NeuralForge is built with a security-first mindset:

  • 100% Local Execution: No code ever leaves your machine
  • Zero Telemetry: No usage data collection
  • Sandboxed Models: AI models run in restricted environments
  • Verified Adapters: Cryptographic signatures for community content
  • Open Source: Full code transparency for security audits

Supported Versions

We provide security updates for the following versions:

| Version | Supported        | End of Life |
|---------|------------------|-------------|
| 1.x.x   | ✅ Active Support | TBD         |
| 0.9.x   | ⚠️ Security Only  | 2025-06-30  |
| < 0.9   | ❌ No Support     | Already EOL |

🚨 Reporting a Vulnerability

For Security Issues

DO NOT create public GitHub issues for security vulnerabilities!

Instead, please report them via:

  1. Email: security@neuralforge.dev
  2. Encrypted Email: Use our PGP key (see below)
  3. GitHub Security Advisories: Private reporting

What to Include

Please provide:

  • Description of the vulnerability
  • Steps to reproduce
  • Potential impact assessment
  • Suggested fix (if any)
  • Your contact information

Response Timeline

  • Initial Response: Within 24 hours
  • Status Update: Within 72 hours
  • Fix Timeline: Based on severity (see below)
  • Public Disclosure: Coordinated after fix

Severity Levels & Response

| Severity | Criteria                                 | Fix Timeline | Example                     |
|----------|------------------------------------------|--------------|-----------------------------|
| Critical | Remote code execution, data exfiltration | 24-48 hours  | Model sandbox escape        |
| High     | Local privilege escalation, DoS          | 3-5 days     | Adapter verification bypass |
| Medium   | Limited information disclosure           | 1-2 weeks    | Memory leak exposing data   |
| Low      | Minor issues, theoretical attacks        | Next release | Verbose error messages      |
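For tooling that tracks advisory deadlines, the severity table maps naturally onto a small lookup. The sketch below is illustrative only (the "next release" window is modeled as 30 days) and is not part of the NeuralForge codebase:

```java
import java.time.Duration;

// Illustrative mapping of the severity table to fix-window upper bounds.
public class SeverityPolicy {
    public enum Severity { CRITICAL, HIGH, MEDIUM, LOW }

    /** Upper bound of the fix window per severity; LOW is modeled as a 30-day release cycle. */
    public static Duration fixWindow(Severity s) {
        switch (s) {
            case CRITICAL: return Duration.ofHours(48);
            case HIGH:     return Duration.ofDays(5);
            case MEDIUM:   return Duration.ofDays(14);
            default:       return Duration.ofDays(30); // LOW: next scheduled release
        }
    }

    public static void main(String[] args) {
        System.out.println(fixWindow(Severity.CRITICAL).toHours()); // 48
    }
}
```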

πŸ”’ Security Features

1. Model Sandboxing

All AI models run in restricted environments:

Models cannot:

  • Access the file system (except the model directory)
  • Make network connections
  • Execute system commands
  • Access environment variables
  • Read process memory
  • Spawn new processes
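The file-system restriction above is typically enforced by resolving `..` segments before comparing a requested path against the model directory. A minimal sketch, with hypothetical class and directory names:

```java
import java.nio.file.Path;

// Hypothetical sketch: deny model code any file access outside its own model directory.
public class ModelSandbox {
    private final Path modelDir;

    public ModelSandbox(Path modelDir) {
        this.modelDir = modelDir.toAbsolutePath().normalize();
    }

    /** True only for paths inside the model directory (after resolving ".." and "."). */
    public boolean mayRead(Path requested) {
        Path resolved = requested.toAbsolutePath().normalize();
        return resolved.startsWith(modelDir);
    }

    public static void main(String[] args) {
        ModelSandbox sb = new ModelSandbox(Path.of("/app/models/codegen"));
        System.out.println(sb.mayRead(Path.of("/app/models/codegen/weights.bin")));      // true
        System.out.println(sb.mayRead(Path.of("/app/models/codegen/../../etc/passwd"))); // false
    }
}
```

Normalizing before the prefix check is the important step: without it, a traversal path like `model/../../etc/passwd` would pass a naive `startsWith` test.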

2. Adapter Verification

Community adapters undergo verification:

```bash
# Signature verification
neuralforge verify-adapter my-adapter.lora
```

Checks performed:

✓ Cryptographic signature valid
✓ No malicious patterns detected
✓ Size within limits
✓ Structure validated
✓ Sandbox test passed

3. Input Sanitization

All user inputs are sanitized:

```java
public class InputSanitizer {
    // Defenses applied to every user-supplied string before processing:
    //  - SQL injection prevention (parameterized queries only)
    //  - Command injection blocking (no shell interpolation)
    //  - Path traversal protection (normalize, then allowlist)
    //  - XXE attack prevention (external entity resolution disabled)
    //  - SSRF protection (no outbound requests derived from user input)
    public String sanitize(String input) {
        // Fail closed: reject suspicious input rather than trying to "clean" it.
        if (input.contains("\0") || input.contains("..")) {
            throw new IllegalArgumentException("rejected unsafe input");
        }
        return input;
    }
}
```

4. Secure Communication

Internal IPC uses secure channels:

IPC Security:
β”œβ”€β”€ Named pipes (not network sockets)
β”œβ”€β”€ Process isolation
β”œβ”€β”€ Message authentication
β”œβ”€β”€ No external network access
└── Encrypted sensitive data
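Message authentication on a local channel is commonly done by attaching an HMAC tag, keyed per session, to each frame, so a forged or modified message is rejected. A hedged sketch (the session key exchange itself is out of scope here):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

// Hypothetical sketch of IPC message authentication with HMAC-SHA256.
public class IpcAuth {
    public static byte[] tag(byte[] key, byte[] message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message);
    }

    /** Constant-time comparison avoids leaking tag bytes through timing. */
    public static boolean accept(byte[] key, byte[] message, byte[] receivedTag) throws Exception {
        return MessageDigest.isEqual(tag(key, message), receivedTag);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "per-session-key".getBytes();
        byte[] msg = "complete:line=42".getBytes();
        byte[] t = tag(key, msg);
        System.out.println(accept(key, msg, t));                 // true: authentic frame
        System.out.println(accept(key, "forged".getBytes(), t)); // false: tag mismatch
    }
}
```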

πŸ› οΈ Security Best Practices

For Users

  1. Keep Updated

```bash
# Check for security updates
neuralforge --check-updates

# Update to latest version
neuralforge --update
```

  2. Verify Downloads

```bash
# Verify installer checksum
sha256sum neuralforge-installer.exe
# Compare with official checksum
```

  3. Adapter Safety

```bash
# Only install verified adapters
neuralforge install-adapter --verified-only

# Check adapter source
neuralforge info adapter-name
```

  4. Workspace Isolation

```bash
# Use separate workspaces for sensitive projects
neuralforge --workspace ~/secure-project
```
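The download-verification step can also be done programmatically: hash the file and compare against the published checksum. A sketch using the JDK's `MessageDigest` (the `HexFormat` helper needs Java 17+); the file contents here are placeholders:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Sketch: compute a file's SHA-256 and compare it to the published checksum.
public class ChecksumCheck {
    public static String sha256Hex(Path file) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(file));
        return HexFormat.of().formatHex(digest);
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("installer", ".bin");
        Files.write(tmp, "demo bytes".getBytes());
        String actual = sha256Hex(tmp);
        String published = actual; // in practice: the checksum from the release page
        System.out.println(published.equalsIgnoreCase(actual)); // true
    }
}
```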

For Developers

  1. Secure Coding

```java
// Always validate inputs
public void processCode(String code) {
    validateInput(code);  // Never skip!
    sanitizeCode(code);
    // ... process
}
```

  2. Dependency Management

```bash
# Regular dependency audits
npm audit
./gradlew dependencyCheckAnalyze

# Update vulnerable dependencies immediately
npm audit fix
```

  3. Secrets Management

NEVER commit:
├── API keys
├── Passwords
├── Private keys
├── Personal data
└── Proprietary code

  4. Testing Security

```bash
# Run security tests
npm run test:security
./gradlew securityTest

# Penetration testing
npm run pentest
```
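The "never commit secrets" rule is often backed by a pre-commit scan that flags credential-shaped lines. A hypothetical sketch with illustrative (not exhaustive) patterns:

```java
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical pre-commit scan: flag lines that look like hardcoded credentials.
public class SecretScan {
    private static final List<Pattern> PATTERNS = List.of(
        Pattern.compile("(?i)api[_-]?key\\s*[:=]\\s*['\"][A-Za-z0-9_\\-]{16,}['\"]"),
        Pattern.compile("(?i)password\\s*[:=]\\s*['\"][^'\"]+['\"]"),
        Pattern.compile("-----BEGIN (RSA |EC )?PRIVATE KEY-----")
    );

    public static boolean looksLikeSecret(String line) {
        return PATTERNS.stream().anyMatch(p -> p.matcher(line).find());
    }

    public static void main(String[] args) {
        System.out.println(looksLikeSecret("password = \"hunter2\"")); // true
        System.out.println(looksLikeSecret("int count = 0;"));         // false
    }
}
```

A real setup would wire this into a hook (e.g. via a tool like gitleaks or a custom script) and fail the commit on any match.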

πŸ” Security Architecture

Threat Model

```mermaid
graph TD
    A[User Input] -->|Sanitized| B[Editor]
    B -->|IPC| C[Backend]
    C -->|Sandboxed| D[AI Models]

    E[Adapter] -->|Verified| C
    F[File System] -->|Restricted Access| C
    G[Network] -->|Blocked| D

    H[Attacker] -.->|Blocked| A
    H -.->|Blocked| E
    H -.->|Blocked| G
```

Security Boundaries

  1. Process Isolation

    • Editor process (Electron)
    • Backend process (Java)
    • Model inference (Sandboxed)
  2. File System Restrictions

Allowed Paths:
β”œβ”€β”€ ~/workspace (read/write)
β”œβ”€β”€ ~/.neuralforge/config (read/write)
β”œβ”€β”€ ~/.neuralforge/adapters (read only)
└── /app/models (read only)

Blocked Paths:
β”œβ”€β”€ System directories
β”œβ”€β”€ Other user directories
β”œβ”€β”€ Network mounts
└── Sensitive locations
  3. Network Isolation
Network Access:
β”œβ”€β”€ Editor β†’ Backend: Local IPC only
β”œβ”€β”€ Backend β†’ Models: In-process only
β”œβ”€β”€ Models β†’ External: BLOCKED
└── Adapter Download: Verified HTTPS only

🚫 Known Security Limitations

Current Limitations

  1. Electron Security

    • Based on Chromium (inherits its vulnerabilities)
    • Mitigation: Regular Electron updates
  2. Java Dependencies

    • Third-party libraries may have vulnerabilities
    • Mitigation: Automated dependency scanning
  3. Model Security

    • Models could theoretically memorize training data
    • Mitigation: Only train on public, licensed code

Not Protected Against

  • Physical access to machine
  • Compromised operating system
  • Malicious user with legitimate access
  • Side-channel attacks (timing, power)
  • Advanced persistent threats (APTs)

πŸ“‹ Security Checklist

For Every Release

  • All dependencies updated
  • Security scan passed
  • No sensitive data in code
  • Sandbox tests passed
  • Input validation complete
  • Error messages sanitized
  • Logs don't leak information
  • Cryptographic signatures updated

For Adapter Publishers

  • No proprietary code used
  • No hardcoded secrets
  • Dataset properly licensed
  • Verification signature included
  • Size under 100MB limit
  • Documentation included
  • No personal data exposed
  • Security scan passed

πŸ” Security Audits

Internal Audits

  • Frequency: Quarterly
  • Scope: Full codebase
  • Tools: OWASP dependency check, SonarQube, CodeQL

External Audits

  • Frequency: Annually
  • Scope: Critical components
  • Reports: Published publicly

Bug Bounty Program

Coming soon! We plan to launch a bug bounty program for security researchers.


πŸ“ž Security Contacts

PGP Key Fingerprint

1234 5678 90AB CDEF 1234 5678 90AB CDEF 1234 5678

πŸ›οΈ Security Hall of Fame

We thank the following security researchers for responsibly disclosing vulnerabilities:

| Researcher       | Vulnerability   | Date | Severity |
|------------------|-----------------|------|----------|
| *Your name here* | Help us improve | -    | -        |

πŸ“š Security Resources

Tools We Use

  • Static Analysis: SonarQube, CodeQL
  • Dependency Scanning: OWASP Dependency Check
  • Runtime Protection: Java Security Manager
  • Sandboxing: System.SecurityManager + Custom Policies

🀝 Responsible Disclosure

We believe in responsible disclosure and will:

  1. Work with researchers to understand issues
  2. Provide credit (unless anonymity requested)
  3. Fix vulnerabilities promptly
  4. Coordinate disclosure timing
  5. Never pursue legal action against good-faith researchers

βš–οΈ Legal

Safe Harbor

We consider security research authorized if you:

  • Make good faith effort to avoid harm
  • Only test against your own instances
  • Don't access others' data
  • Report findings promptly
  • Give us reasonable time to fix

We won't pursue legal action if these guidelines are followed.


Updates to This Policy

This security policy may be updated. Major changes will be announced via:

  • GitHub releases
  • Discord announcement
  • Security mailing list

Last Updated: October 2024


Security is everyone's responsibility. Thank you for helping keep NeuralForge safe for all users.
