# Contributing to NeuralForge

First off, thank you for considering contributing to NeuralForge! This is an early-stage open-source project, and we welcome contributors. 🚀

> Current Status: Early development phase. Core architecture and documentation are being established.
## Table of Contents

- Code of Conduct
- Getting Started
- Development Setup
- Contribution Guidelines
- Commit Message Rules
- Pull Request Process
- Code Standards
- Testing Requirements
- Documentation
- Getting Help
## Code of Conduct

This project follows standard open-source conduct principles. By participating, you are expected to uphold:

- Respectful communication
- Constructive feedback
- An inclusive environment
- Professional interactions
- Privacy-focused development
Report unacceptable behavior via GitHub Issues.
## Getting Started

### Prerequisites

- Java 21+ (GraalVM recommended)
- Node.js 18+
- Git
- 16GB RAM for building (8GB minimum)
- ~20GB free disk space
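A quick way to confirm the toolchain is available (a sketch: it only checks that each command is on your PATH, not that the versions above are met):

```shell
# Report whether each prerequisite tool is on PATH (versions per the list above).
status=$(for tool in java node git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: MISSING"
  fi
done)
echo "$status"
```

Run `java -version`, `node --version`, and `git --version` afterwards to verify the exact versions.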
### Contribution Workflow

1. Fork the repository
2. Create a feature branch
3. Write clean, tested code
4. Follow the commit message rules
5. Update documentation
6. Submit a pull request
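The branch-and-commit part of that workflow can be sketched end-to-end in a throwaway local repository (the branch and commit names are hypothetical placeholders; against the real project you would fork and clone first, as described under Development Setup):

```shell
# Throwaway repo standing in for your fork (all names are hypothetical).
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo && cd demo
git -c user.email=you@example.com -c user.name=You commit -q --allow-empty -m "chore: initial commit"

# Create a feature branch
git checkout -q -b feat/my-improvement

# Make a change, then commit it atomically with a conventional message
echo "change" > notes.txt
git add notes.txt
git -c user.email=you@example.com -c user.name=You commit -q -m "feat(notes): add contributor notes"

branch=$(git rev-parse --abbrev-ref HEAD)
subject=$(git log -1 --pretty=%s)
echo "$branch"
echo "$subject"
```

From there, `git push origin <branch>` on your fork and open the pull request in the GitHub UI.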
## Development Setup

### Fork and Clone

```bash
# Fork via GitHub UI, then:
git clone https://github.com/YOUR_USERNAME/neuralforge.git
cd neuralforge
git remote add upstream https://github.com/neuralforge/neuralforge.git
```

### Install Dependencies

```bash
# Frontend dependencies
npm install

# Backend dependencies
./gradlew build

# Download development models (small versions)
npm run download-models:dev
```

### Run in Development Mode

```bash
# Start backend in dev mode
./gradlew bootRun

# Start frontend with hot reload
npm run dev

# Run in integrated mode
npm run start:dev
```

### Verify Your Setup

```bash
# Run all tests
npm test
./gradlew test

# Lint check
npm run lint
./gradlew checkstyle
```

## Contribution Guidelines

### What We're Looking For

- Performance improvements (reduce RAM/CPU usage)
- Model optimization (quantization, caching)
- New language support
- Bug fixes
- Security enhancements
- New adapters for frameworks
- UI/UX improvements
- Documentation improvements
- Test coverage increase
- Accessibility features
### What We're Not Looking For

- Features requiring cloud services
- Proprietary code or models
- Changes that increase RAM usage significantly
- Code without tests
- Breaking changes without discussion
## Commit Message Rules

🚨 ABSOLUTE RULES - NEVER BREAK THESE:
1. NEVER use the word "Claude" in ANY commit message
2. NEVER use the word "Claude" in code comments
3. NEVER reference AI assistance in commits
4. NEVER identify automated contributions
5. ALWAYS write commits as if YOU wrote the code
### Format

```
<type>(<scope>): <subject>

<body>

<footer>
```
### Types

- feat: New feature
- fix: Bug fix
- perf: Performance improvement
- refactor: Code restructuring
- docs: Documentation only
- style: Formatting, missing semicolons, etc.
- test: Adding missing tests
- chore: Maintenance tasks
- security: Security improvements
### Good Examples

✅ feat(completion): add multi-model routing logic
✅ fix(memory): resolve leak in adapter loading
✅ perf(inference): optimize KV cache usage
✅ docs(api): update REST endpoint documentation
✅ test(router): add unit tests for model selection

### Bad Examples

❌ Claude implements feature # NEVER mention AI
❌ AI suggested optimization # NEVER mention AI
❌ update stuff # Too vague
❌ fix # No context
❌ WIP # Don't commit WIP

### Best Practices

- Atomic commits: One logical change per commit
- Present tense: "add feature" not "added feature"
- Imperative mood: "move cursor" not "moves cursor"
- No period: No period at end of subject line
- Body explains why: Not what (that's in the diff)
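Putting the format and best practices together, a complete commit message might look like this (the scope, wording, and issue number are hypothetical illustrations, not a real commit):

```
perf(inference): reduce KV cache memory footprint

Reuse pre-allocated buffers instead of allocating per request,
which lowers steady-state RAM usage during long completions.
The diff shows what changed; this body explains why.

Closes #123
```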
## Pull Request Process

### Before Submitting

1. Update from upstream:

   ```bash
   git fetch upstream
   git rebase upstream/main
   ```

2. Run the full test suite:

   ```bash
   npm run test:full
   ./gradlew test
   ```

3. Check code quality:

   ```bash
   npm run lint:fix
   ./gradlew spotlessApply
   ```

4. Update documentation:
   - Update README if needed
   - Add/update API docs
   - Update CHANGELOG.md
### PR Template

```markdown
## Description

Brief description of what this PR does

## Type of Change

- [ ] Bug fix
- [ ] New feature
- [ ] Performance improvement
- [ ] Documentation update

## Testing

- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] Manual testing completed

## Checklist

- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Comments added for complex code
- [ ] Documentation updated
- [ ] No warnings generated
- [ ] Dependent changes merged

## Performance Impact

- RAM usage: [no change | +X MB | -X MB]
- CPU usage: [no change | +X% | -X%]
- Inference speed: [no change | +X ms | -X ms]

## Screenshots (if applicable)
```

### Review Process

- Automated checks must pass
- Code review by at least one maintainer
- Performance benchmarks verified
- Documentation reviewed
- Merged via squash or rebase (no merge commits)
## Code Standards

### Java Style

```java
// File header (required)
/*
 * Copyright 2024 NeuralForge Contributors
 * Licensed under Apache License 2.0
 */
package dev.neuralforge.service;

// Imports (organized; never use wildcard imports)
import java.util.Objects;
import org.springframework.stereotype.Service;

/**
 * Class documentation required
 *
 * @author YourName (optional)
 * @since 1.0.0
 */
public class ExampleService {
    // Constants first
    private static final int MAX_TOKENS = 2048;

    // Fields with Javadoc
    /** Model inference engine */
    private final InferenceEngine engine;

    // Constructor with validation
    public ExampleService(InferenceEngine engine) {
        this.engine = Objects.requireNonNull(engine);
    }

    // Methods under 30 lines
    public Completion complete(String prompt) {
        // Comments for complex logic
        validatePrompt(prompt);

        // Clear variable names
        TokenizedInput tokens = tokenize(prompt);
        InferenceResult result = engine.infer(tokens);
        return mapToCompletion(result);
    }
}
```

### TypeScript Style

```typescript
/**
 * File header with copyright
 */

// Organized imports
import { Component } from '@core';
import type { Config } from '@types';

// Interfaces over types when possible
interface ServiceConfig {
  readonly endpoint: string;
  timeout?: number;
}

// Comprehensive JSDoc
/**
 * Manages AI completion requests
 * @class CompletionService
 */
export class CompletionService {
  // Explicit types, no 'any'
  private readonly config: ServiceConfig;

  constructor(config: ServiceConfig) {
    this.config = Object.freeze(config);
  }

  // Async/await over callbacks
  async requestCompletion(
    context: CodeContext
  ): Promise<Completion> {
    // Guard clauses early
    if (!context.isValid()) {
      throw new Error('Invalid context');
    }

    // Descriptive variable names
    const sanitizedInput = this.sanitize(context);
    const response = await this.send(sanitizedInput);
    return this.parse(response);
  }
}
```

### Resource Usage

```java
// ✅ GOOD: Efficient resource usage
public class EfficientService {
    private static final int CACHE_SIZE = 1000; // Limited cache

    private final Cache<String, Result> cache =
        Caffeine.newBuilder()
            .maximumSize(CACHE_SIZE)
            .expireAfterAccess(5, TimeUnit.MINUTES)
            .build();
}

// ❌ BAD: Wasteful resource usage
public class WastefulService {
    private final Map<String, Result> cache =
        new HashMap<>(); // Unbounded growth!
}
```

## Testing Requirements

### Coverage Targets

- Unit Tests: 80% minimum
- Integration Tests: Core flows covered
- Performance Tests: No regression allowed
- Security Tests: All inputs validated
### Java Test Example

```java
@Test
@DisplayName("Should route to StableCode for long context")
void testLongContextRouting() {
    // Given
    CodeContext context = CodeContext.builder()
        .fileLength(5000)
        .contextWindow(16000)
        .build();

    // When
    Model selected = router.route(context);

    // Then
    assertThat(selected.getName())
        .isEqualTo("stablecode-3b");
}
```

### TypeScript Test Example

```typescript
describe('CompletionService', () => {
  it('should handle network timeouts gracefully', async () => {
    // Arrange
    const service = new CompletionService({
      endpoint: 'http://localhost:8080',
      timeout: 100
    });

    // Act & Assert
    await expect(service.requestCompletion(slowContext))
      .rejects
      .toThrow('Request timeout');
  });
});
```

### Performance Testing

```bash
# Run benchmarks before and after changes
npm run benchmark
./gradlew jmh

# Memory profiling
npm run profile:memory
./gradlew profiling
```

## Documentation

### Requirements

- All public APIs must have JSDoc/Javadoc
- Complex algorithms need explanatory comments
- Include examples in documentation
- Keep README up-to-date
### Structure

```
docs/
├── api/            # API reference
├── guides/         # How-to guides
├── architecture/   # System design
└── adapters/       # Adapter creation
```
### Writing Style

- Clear and concise
- Include code examples
- Explain the "why" not just "what"
- Keep beginner-friendly
## Contributing Adapters

### Creating an Adapter

1. Create the adapter scaffold:

   ```bash
   npm run create-adapter -- --name my-framework
   ```

2. Train on quality data:

   ```python
   # Use only Apache/MIT licensed code
   dataset = load_dataset("apache-licensed-only")
   train_adapter(dataset, epochs=3)
   ```

3. Validate performance:

   ```bash
   npm run validate-adapter -- --path ./my-adapter.lora
   ```

4. Submit to the marketplace:

   ```bash
   npm run submit-adapter -- --public
   ```

### Adapter Requirements

- Must improve the base model by >5%
- Size must be <100MB
- Training data must be licensed appropriately
- Must include documentation
- Must pass security scan
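A rough pre-submission sanity check against the size and improvement thresholds above (a sketch with made-up numbers; substitute your adapter's measured values — this is not the project's `validate-adapter` tool):

```shell
# Hypothetical measurements for an adapter (substitute real values).
size_mb=42          # adapter file size; requirement: < 100 MB
improvement_pct=6.3 # benchmark gain over the base model; requirement: > 5%

[ "$size_mb" -lt 100 ] && echo "size: ok" || echo "size: too large"
awk -v p="$improvement_pct" 'BEGIN { print (p > 5) ? "improvement: ok" : "improvement: insufficient" }'
```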
## Getting Help

Since this is an early-stage project, please use:
- GitHub Issues: Report bugs and request features
- GitHub Discussions: Ask questions and share ideas
- Pull Requests: Code contributions
Note: Additional community channels will be established as the project grows.
### Recognition

- Contributors get credited in CONTRIBUTORS.md
- Significant contributors will receive recognition in releases
- All contributions are valued, regardless of size
## Project Values

- Privacy First: No telemetry, ever
- Performance Obsessed: Every MB matters
- Community Driven: Users shape the product
- Transparent: Open development process
- Inclusive: Welcome all skill levels
## License

By contributing, you agree that your contributions will be licensed under the Apache License 2.0.
## Questions?

Feel free to:
- Open an issue for bugs
- Start a discussion for features
- Email maintainers for security issues (see SECURITY.md)
Thank you for considering contributing to NeuralForge! 🎉
Every contribution, no matter how small, helps build a better privacy-focused AI coding assistant.