NeuralForge is built with a security-first mindset:
- 100% Local Execution: No code ever leaves your machine
- Zero Telemetry: No usage data collection
- Sandboxed Models: AI models run in restricted environments
- Verified Adapters: Cryptographic signatures for community content
- Open Source: Full code transparency for security audits
We provide security updates for the following versions:
| Version | Supported | End of Life |
|---|---|---|
| 1.x.x | ✅ Active Support | TBD |
| 0.9.x | ⚠️ Security fixes only | 2025-06-30 |
| < 0.9 | ❌ No Support | Already EOL |
DO NOT create public GitHub issues for security vulnerabilities!
Instead, please report them via:
- Email: security@neuralforge.dev
- Encrypted Email: Use our PGP key (see below)
- GitHub Security Advisories: Private reporting
Please provide:
- Description of the vulnerability
- Steps to reproduce
- Potential impact assessment
- Suggested fix (if any)
- Your contact information
Our response timeline:
- Initial Response: Within 24 hours
- Status Update: Within 72 hours
- Fix Timeline: Based on severity (see below)
- Public Disclosure: Coordinated after fix
| Severity | Criteria | Fix Timeline | Example |
|---|---|---|---|
| Critical | Remote code execution, data exfiltration | 24-48 hours | Model sandbox escape |
| High | Local privilege escalation, DoS | 3-5 days | Adapter verification bypass |
| Medium | Limited information disclosure | 1-2 weeks | Memory leak exposing data |
| Low | Minor issues, theoretical attacks | Next release | Verbose error messages |
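The severity table above doubles as a triage rule. A minimal sketch of it as code (the class and method names are hypothetical; the hour values simply mirror the table, with LOW modeled as "next release"):

```java
// Illustrative mapping of the severity table to target fix windows.
public class SeverityPolicy {
    public enum Severity { CRITICAL, HIGH, MEDIUM, LOW }

    /** Upper bound of the fix window in hours; -1 means "next scheduled release". */
    public static int maxFixHours(Severity s) {
        switch (s) {
            case CRITICAL: return 48;      // 24-48 hours
            case HIGH:     return 5 * 24;  // 3-5 days
            case MEDIUM:   return 14 * 24; // 1-2 weeks
            default:       return -1;      // LOW: next release
        }
    }

    public static void main(String[] args) {
        System.out.println(maxFixHours(Severity.CRITICAL)); // 48
    }
}
```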
All AI models run in restricted environments. Models cannot:
- Access the file system (except the model directory)
- Make network connections
- Execute system commands
- Access environment variables
- Read process memory
- Spawn new processes

Community adapters undergo verification:
```bash
# Signature verification
neuralforge verify-adapter my-adapter.lora

# Checks performed:
✅ Cryptographic signature valid
✅ No malicious patterns detected
✅ Size within limits
✅ Structure validated
✅ Sandbox test passed
```

All user inputs are sanitized:
```java
public class InputSanitizer {
    // Protections applied to all user input:
    // - SQL injection prevention
    // - Command injection blocking
    // - Path traversal protection
    // - XXE attack prevention
    // - SSRF protection
}
```

Internal IPC uses secure channels:
```
IPC Security:
├── Named pipes (not network sockets)
├── Process isolation
├── Message authentication
├── No external network access
└── Encrypted sensitive data
```

- **Keep Updated**
  ```bash
  # Check for security updates
  neuralforge --check-updates

  # Update to latest version
  neuralforge --update
  ```

- **Verify Downloads**
  ```bash
  # Verify installer checksum
  sha256sum neuralforge-installer.exe

  # Compare with official checksum
  ```

- **Adapter Safety**
  ```bash
  # Only install verified adapters
  neuralforge install-adapter --verified-only

  # Check adapter source
  neuralforge info adapter-name
  ```

- **Workspace Isolation**
  ```bash
  # Use separate workspaces for sensitive projects
  neuralforge --workspace ~/secure-project
  ```

- **Secure Coding**
  ```java
  // Always validate inputs
  public void processCode(String code) {
      validateInput(code); // Never skip!
      sanitizeCode(code);
      // ... process
  }
  ```

- **Dependency Management**
  ```bash
  # Regular dependency audits
  npm audit
  ./gradlew dependencyCheckAnalyze

  # Update vulnerable dependencies immediately
  npm audit fix
  ```

- **Secrets Management**
  ```
  NEVER commit:
  ├── API keys
  ├── Passwords
  ├── Private keys
  ├── Personal data
  └── Proprietary code
  ```

- **Testing Security**
  ```bash
  # Run security tests
  npm run test:security
  ./gradlew securityTest

  # Penetration testing
  npm run pentest
  ```

```mermaid
graph TD
    A[User Input] -->|Sanitized| B[Editor]
    B -->|IPC| C[Backend]
    C -->|Sandboxed| D[AI Models]
    E[Adapter] -->|Verified| C
    F[File System] -->|Restricted Access| C
    G[Network] -->|Blocked| D
    H[Attacker] -.->|Blocked| A
    H -.->|Blocked| E
    H -.->|Blocked| G
```
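The Editor → Backend channel relies on message authentication. A minimal sketch of that step using an HMAC tag, assuming a shared per-session key (key exchange and message framing are not specified here, and the class name is hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of HMAC-based message authentication for a local IPC channel.
public class IpcAuth {
    public static byte[] tag(byte[] key, byte[] message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message);
    }

    /** Constant-time comparison avoids leaking match length via timing. */
    public static boolean verify(byte[] key, byte[] message, byte[] expectedTag) throws Exception {
        return MessageDigest.isEqual(tag(key, message), expectedTag);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "per-session-shared-key".getBytes(StandardCharsets.UTF_8);
        byte[] msg = "{\"op\":\"complete\"}".getBytes(StandardCharsets.UTF_8);
        byte[] t = tag(key, msg);
        System.out.println(verify(key, msg, t));  // true
        msg[0] ^= 1;                              // tamper with the payload
        System.out.println(verify(key, msg, t));  // false
    }
}
```

`MessageDigest.isEqual` is used deliberately instead of `Arrays.equals`, which short-circuits on the first mismatched byte.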
- **Process Isolation**
  - Editor process (Electron)
  - Backend process (Java)
  - Model inference (Sandboxed)
- **File System Restrictions**

  ```
  Allowed Paths:
  ├── ~/workspace (read/write)
  ├── ~/.neuralforge/config (read/write)
  ├── ~/.neuralforge/adapters (read only)
  └── /app/models (read only)

  Blocked Paths:
  ├── System directories
  ├── Other user directories
  ├── Network mounts
  └── Sensitive locations
  ```
- **Network Isolation**

  ```
  Network Access:
  ├── Editor → Backend: Local IPC only
  ├── Backend → Models: In-process only
  ├── Models → External: BLOCKED
  └── Adapter Download: Verified HTTPS only
  ```
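The file-system restrictions above come down to a normalize-then-allowlist check. A minimal illustrative sketch (class name and paths are hypothetical, not NeuralForge's actual enforcement code):

```java
import java.nio.file.Path;
import java.util.List;

// Sketch of an allowed-path check: normalize first, then prefix-match
// against allowlisted roots.
public class PathPolicy {
    private final List<Path> allowed;

    public PathPolicy(List<Path> allowedRoots) {
        // Normalize roots once so "a/./b" and "a/b" compare equal.
        this.allowed = allowedRoots.stream()
                .map(p -> p.toAbsolutePath().normalize())
                .toList();
    }

    public boolean isAllowed(Path requested) {
        // Normalize BEFORE checking, so "workspace/../../etc/passwd" is rejected.
        Path p = requested.toAbsolutePath().normalize();
        return allowed.stream().anyMatch(p::startsWith);
    }

    public static void main(String[] args) {
        PathPolicy policy = new PathPolicy(List.of(Path.of("/home/user/workspace")));
        System.out.println(policy.isAllowed(Path.of("/home/user/workspace/src/Main.java"))); // true
        System.out.println(policy.isAllowed(Path.of("/home/user/workspace/../.ssh/id_rsa"))); // false
    }
}
```

Normalizing before the prefix check is what defeats `../` traversal; checking first and normalizing later would not.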
- **Electron Security**
  - Based on Chromium (inherits its vulnerabilities)
  - Mitigation: Regular Electron updates
- **Java Dependencies**
  - Third-party libraries may have vulnerabilities
  - Mitigation: Automated dependency scanning
- **Model Security**
  - Models could theoretically memorize training data
  - Mitigation: Only train on public, licensed code
The following threats are considered out of scope:
- Physical access to the machine
- Compromised operating system
- Malicious user with legitimate access
- Side-channel attacks (timing, power)
- Advanced persistent threats (APTs)
Before each release, we verify:
- All dependencies updated
- Security scan passed
- No sensitive data in code
- Sandbox tests passed
- Input validation complete
- Error messages sanitized
- Logs don't leak information
- Cryptographic signatures updated
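Two of the checklist items, sanitized error messages and logs that don't leak information, can be approximated with a redaction filter applied before a line reaches the log. A minimal sketch with illustrative (not exhaustive) patterns:

```java
import java.util.regex.Pattern;

// Sketch: mask obvious secrets in log lines before they are written.
public class LogRedactor {
    private static final Pattern SECRETS = Pattern.compile(
            "(?i)(api[_-]?key|token|password|secret)\\s*[=:]\\s*\\S+");

    public static String redact(String line) {
        return SECRETS.matcher(line).replaceAll("$1=****");
    }

    public static void main(String[] args) {
        System.out.println(redact("login failed: password=hunter2 user=alice"));
        // login failed: password=**** user=alice
    }
}
```

A real deployment would hook this into the logging framework's filter chain rather than calling it by hand at every log site.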
Community adapter submissions must confirm:
- No proprietary code used
- No hardcoded secrets
- Dataset properly licensed
- Verification signature included
- Size under 100MB limit
- Documentation included
- No personal data exposed
- Security scan passed
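The "verification signature" requirement above can be sketched with a generic detached-signature check. This is illustrative only: the actual adapter format, signature algorithm, and key distribution are not specified here, and the class name is hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Sketch of a detached-signature check over an adapter's raw bytes.
public class AdapterSignature {
    public static byte[] sign(PrivateKey key, byte[] adapterBytes) throws Exception {
        Signature sig = Signature.getInstance("SHA256withECDSA");
        sig.initSign(key);
        sig.update(adapterBytes);
        return sig.sign();
    }

    public static boolean verify(PublicKey key, byte[] adapterBytes, byte[] signature) throws Exception {
        Signature sig = Signature.getInstance("SHA256withECDSA");
        sig.initVerify(key);
        sig.update(adapterBytes);
        return sig.verify(signature);
    }

    public static void main(String[] args) throws Exception {
        KeyPair pair = KeyPairGenerator.getInstance("EC").generateKeyPair();
        byte[] adapter = "fake adapter payload".getBytes(StandardCharsets.UTF_8);
        byte[] signature = sign(pair.getPrivate(), adapter);
        System.out.println(verify(pair.getPublic(), adapter, signature)); // true
    }
}
```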
Internal security audits:
- Frequency: Quarterly
- Scope: Full codebase
- Tools: OWASP dependency check, SonarQube, CodeQL
External penetration tests:
- Frequency: Annually
- Scope: Critical components
- Reports: Published publicly
Coming soon! We plan to launch a bug bounty program for security researchers.
- Email: security@neuralforge.dev
- PGP Key: Download
- Response Time: 24 hours for critical issues
- Fingerprint: `1234 5678 90AB CDEF 1234 5678 90AB CDEF 1234 5678`
We thank the following security researchers for responsibly disclosing vulnerabilities:
| Researcher | Vulnerability | Date | Severity |
|---|---|---|---|
| Your name here | Help us improve | - | - |
- Static Analysis: SonarQube, CodeQL
- Dependency Scanning: OWASP Dependency Check
- Runtime Protection: Java Security Manager
- Sandboxing: `java.lang.SecurityManager` + custom policies
We believe in responsible disclosure and will:
- Work with researchers to understand issues
- Provide credit (unless anonymity requested)
- Fix vulnerabilities promptly
- Coordinate disclosure timing
- Never pursue legal action against good-faith researchers
We consider security research authorized if you:
- Make good faith effort to avoid harm
- Only test against your own instances
- Don't access others' data
- Report findings promptly
- Give us reasonable time to fix
We won't pursue legal action if these guidelines are followed.
This security policy may be updated. Major changes will be announced via:
- GitHub releases
- Discord announcement
- Security mailing list
Last Updated: October 2024
Security is everyone's responsibility. Thank you for helping keep NeuralForge safe for all users.