Thanks for your interest in contributing to Neurix. This guide will help you get started.
## Getting Started

- Fork the repository
- Clone your fork:

  ```sh
  git clone https://github.com/your-username/neurix.git
  cd neurix
  ```

- Install dependencies:

  ```sh
  npm install
  ```

- Run the development server:

  ```sh
  npx tauri dev
  ```
## Prerequisites

- Node.js 18+
- Rust 1.77+
- Tauri CLI v2 (`cargo install tauri-cli`)
- Android SDK + NDK (for mobile builds)
## Project Structure

```
src/                 # React frontend (TypeScript)
  pages/             # Page components
  components/        # Reusable UI components
  context/           # React context providers
  services/          # Tauri IPC service layer
  theme/             # Design tokens and theme
src-tauri/           # Rust backend
  src/commands/      # Tauri IPC command handlers
  src/inference/     # Model loading and token generation
  src/models/        # Model catalog, download, and management
  src/chat/          # Chat history storage
```
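To illustrate how the layers connect: functions in `src-tauri/src/commands/` are ordinary Rust functions exposed to the frontend over Tauri IPC, and the `src/services/` layer wraps the corresponding `invoke` calls. The sketch below is hypothetical (the function name and return type are illustrative, not taken from the actual codebase); the `#[tauri::command]` attribute is commented out so the snippet compiles without the tauri crate.

```rust
// Hypothetical sketch of the command-handler pattern used in src-tauri/src/commands/.
// In the real project the function would carry the #[tauri::command] attribute and be
// registered with tauri::generate_handler!.

// #[tauri::command]
fn list_models() -> Result<Vec<String>, String> {
    // A real handler would read the model catalog; this stub returns placeholder data.
    Ok(vec!["example-model-q4_k_m".to_string()])
}

fn main() {
    let models = list_models().expect("command failed");
    println!("{:?}", models);
}
```

On the frontend, the matching service function would call `invoke("list_models")` and return the result to the UI, keeping IPC details out of the components.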
## Development Workflow

- Create a feature branch from `main`:

  ```sh
  git checkout -b feature/your-feature
  ```

- Make your changes, then run lint and type checks:

  ```sh
  npm run lint
  npx tsc --noEmit
  cargo check --manifest-path src-tauri/Cargo.toml
  ```

- Test on desktop:

  ```sh
  npx tauri dev
  ```

- Commit with a clear message describing what changed and why.
- Open a pull request against `main`.
## Code Style

- Frontend: Biome handles formatting and linting. Run `npm run format` before committing.
- Backend: Standard Rust formatting. Run `cargo fmt` in `src-tauri/`.
- Keep components small and focused.
- Avoid unnecessary abstractions.
## Reporting Bugs

Open a GitHub issue with:
- Steps to reproduce
- Expected vs actual behavior
- Device/OS info (especially for Android issues)
- Screenshots if relevant
## Adding Models

To add a model to the catalog, edit `src-tauri/src/models/catalog.rs`. Requirements:

- Must be available as a GGUF Q4_K_M quantization on Hugging Face
- Must use a Llama-compatible architecture (works with candle's `quantized_llama`)
- Should be under 4 GB (phone-friendly)
- Must have an accessible `tokenizer.json` (preferably in the GGUF repo)
- Add the appropriate chat template to the `ChatTemplate` enum and the `format_prompt` function
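The chat-template step above might look roughly like the following. This is a hedged sketch only: the actual `ChatTemplate` variants and `format_prompt` signature in `src-tauri/src/models/catalog.rs` may differ, and the `MyModel` variant and its prompt format are invented for illustration.

```rust
// Hypothetical sketch of adding a chat template; variant names and the
// format_prompt signature are assumptions, not the project's actual code.

enum ChatTemplate {
    ChatMl,
    // Hypothetical new variant for the model being added:
    MyModel,
}

fn format_prompt(template: &ChatTemplate, user_msg: &str) -> String {
    match template {
        ChatTemplate::ChatMl => format!(
            "<|im_start|>user\n{user_msg}<|im_end|>\n<|im_start|>assistant\n"
        ),
        // Llama-2-style instruction wrapping, as an example format only.
        ChatTemplate::MyModel => format!("[INST] {user_msg} [/INST]"),
    }
}

fn main() {
    let prompt = format_prompt(&ChatTemplate::MyModel, "Hello");
    println!("{prompt}");
}
```

Whatever the real signature is, the key point stands: every catalog entry needs a template variant so the inference layer can wrap user messages in the format the model was trained on.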
## License

By contributing, you agree that your contributions will be licensed under the MIT License.