Your AI. Your pipeline. Zero code.
A complete MLOps suite built for makers, teams, and enterprises. Humigence provides zero-config, GPU-aware fine-tuning with surgical precision and complete reproducibility.
- Interactive Wizard: Step-by-step configuration with Basic/Advanced modes
- Smart GPU Detection: Automatic detection and selection of available GPUs
- Dual-GPU Training: Multi-GPU support with Unsloth + TorchRun
- Training Recipes: QLoRA (4-bit), LoRA (FP16/BF16), Full Fine-tuning
- Intelligent Batching: Auto-fit batch size to available VRAM
- Complete Reproducibility: Config snapshots and reproduce scripts
- Built-in Evaluation: Curated prompts and quality gates
- Artifact Export: Structured outputs with run summaries
- GPU: NVIDIA GPU with CUDA support (RTX 5090, RTX 4080, etc.)
- RAM: 8GB+ recommended
- Storage: 10GB+ for models and datasets
- Python: 3.8+ with PyTorch
```bash
# Clone the repository
git clone https://github.com/your-username/humigence.git
cd humigence

# Install dependencies
pip install -r requirements.txt

# Set up Unsloth (required for training)
python3 training/unsloth/setup_humigence_unsloth.py
```

```bash
# Launch the interactive wizard
python3 cli/main.py

# The wizard will guide you through:
# 1. Model selection
# 2. Dataset configuration
# 3. Training parameters
# 4. GPU selection (single or multi-GPU)
# 5. Launch training
```

The Humigence wizard guides you through:
- Setup Mode: Basic (essential config) or Advanced (full control)
- Hardware Detection: Automatic GPU, CPU, and memory detection
- Model Selection: Choose from supported models or custom paths
- Dataset Loading: Auto-detection from `~/humigence_data/` or custom paths
- Training Recipe: QLoRA, LoRA, or Full Fine-tuning
- GPU Selection: Single-GPU auto-selection or multi-GPU prompting
Humigence intelligently handles GPU selection:
- Single GPU: Automatically selects and uses the available GPU
- Multiple GPUs: Prompts you to choose:
```text
Training Mode:
> Multi-GPU Training (all available GPUs)
  Single GPU Training (choose specific GPU)
```
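The selection logic amounts to a small decision: with one GPU there is nothing to ask, with several the user picks a mode. A minimal sketch of that logic (illustrative, not the actual CLI code):

```python
def choose_training_mode(gpu_count, prompt=None):
    """Return 'single' or 'multi' based on the number of visible GPUs.

    `prompt` is a callable that asks the user; it is only invoked when
    there is actually a choice to make (more than one GPU).
    """
    if gpu_count <= 1:
        return "single"  # nothing to choose: use the one available GPU
    # Multiple GPUs: ask the user, defaulting to multi-GPU training.
    answer = prompt() if prompt is not None else "multi"
    return "multi" if answer == "multi" else "single"

print(choose_training_mode(1))                          # -> single
print(choose_training_mode(2, prompt=lambda: "multi"))  # -> multi
```

Keeping the prompt injectable like this makes the branch easy to test without an interactive terminal.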
```text
Humigence Training Starting...
Configuration Loaded: [all settings]
GPU Detection: 2x RTX 5090 detected
Training Mode: Multi-GPU Training
Loading model: Qwen/Qwen2.5-0.5B
LoRA adapters applied
Loading dataset: wikitext2 (10,000 samples)
Starting training with TorchRun...
Training complete - adapters saved.
```

- Qwen/Qwen2.5-0.5B: ~0.5B parameters (recommended for testing)
- microsoft/Phi-2: 2.7B parameters
- TinyLlama/TinyLlama-1.1B-Chat-v1.0: 1.1B parameters
- Custom Models: Any HuggingFace model or local path
- JSONL Format: Line-by-line JSON with instruction/output pairs
- Auto-Detection: Scans the `~/humigence_data/` directory
- Custom Paths: Specify any local dataset file
- Sample Datasets: Includes demo datasets for testing
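Before a run, each dataset line can be sanity-checked against the expected instruction/output shape. A minimal validator sketch (a hypothetical helper, not part of the shipped CLI):

```python
import json

REQUIRED_KEYS = {"instruction", "output"}

def validate_jsonl(text):
    """Return a list of error messages for a JSONL dataset (empty list = valid)."""
    errors = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if not line.strip():
            continue  # skip blank lines
        try:
            record = json.loads(line)
        except json.JSONDecodeError as exc:
            errors.append(f"line {lineno}: invalid JSON ({exc.msg})")
            continue
        if not isinstance(record, dict):
            errors.append(f"line {lineno}: not a JSON object")
            continue
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            errors.append(f"line {lineno}: missing keys {sorted(missing)}")
    return errors
```

Running it over a file is one line: `validate_jsonl(open("data.jsonl").read())`.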
```json
{"instruction": "What is machine learning?", "output": "Machine learning is a subset of artificial intelligence..."}
{"instruction": "Explain quantum computing", "output": "Quantum computing uses quantum mechanical phenomena..."}
```

Minimum:
- GPU: NVIDIA GPU with 8GB+ VRAM
- RAM: 16GB+ system RAM
- Storage: 20GB+ free space

Recommended:
- GPU: RTX 4080/4090/5090 or better
- RAM: 32GB+ system RAM
- Storage: 50GB+ free space
- Dual-GPU: RTX 5090 + RTX 5090 (tested)
- Memory: 16GB+ VRAM per GPU recommended
- Training: Automatic TorchRun distribution
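Multi-GPU runs are launched through TorchRun. A sketch of how such a launch command could be assembled (the flags are standard `torchrun` options; the helper and script path are illustrative, based on the repo layout below):

```python
import shlex

def build_torchrun_cmd(num_gpus, script, config_path):
    """Assemble a command line launching `script` across `num_gpus` local processes."""
    if num_gpus < 2:
        # Single-GPU runs don't need the launcher at all.
        return ["python3", script, "--config", config_path]
    return [
        "torchrun",
        f"--nproc_per_node={num_gpus}",  # one worker process per GPU
        "--standalone",                  # single-node rendezvous, no master address needed
        script,
        "--config", config_path,
    ]

cmd = build_torchrun_cmd(2, "training/unsloth/train_lora_dual.py",
                         "config/default_config.json")
print(shlex.join(cmd))
```

Each spawned worker then picks its GPU from the `LOCAL_RANK` environment variable that torchrun sets.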
```text
humigence/
├── cli/
│   ├── main.py                # Main CLI entry point
│   ├── config_wizard.py       # Interactive configuration wizard
│   └── lora_wizard.py         # LoRA-specific wizard
├── training/
│   └── unsloth/               # Unsloth integration
│       ├── wizard.py          # Unsloth training wizard
│       └── train_lora_dual.py # Multi-GPU training script
├── pipelines/
│   └── lora_trainer.py        # Training pipeline
├── utils/
│   ├── device.py              # Hardware detection
│   ├── dataset_loader.py      # Dataset utilities
│   └── validators.py          # Data validation
├── config/
│   └── default_config.json    # Default configuration
└── runs/                      # Training outputs
    └── humigence/
        ├── config.snapshot.json
        ├── adapters/          # LoRA weights
        └── artifacts.zip      # Complete export
```
Essential configuration with sensible defaults:
- Learning Rate: 2e-4
- Epochs: 1
- Gradient Accumulation: 4
- LoRA Rank: 16
- LoRA Alpha: 32
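As a rough illustration, these Basic-mode defaults could be expressed in a config file like this (the field names are assumptions for illustration, not the actual `default_config.json` schema):

```json
{
  "learning_rate": 2e-4,
  "epochs": 1,
  "gradient_accumulation_steps": 4,
  "lora": {
    "rank": 16,
    "alpha": 32
  }
}
```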
Full control over all parameters:
- LoRA configuration (rank, alpha, dropout)
- Training hyperparameters
- Data processing options
- Evaluation settings
```text
# Automatically selected when 1 GPU detected
Single GPU detected - using GPU 0: RTX 5090
Launching single-GPU training...
```

```text
# Prompts when multiple GPUs detected
2 GPUs detected - choose training mode
> Multi-GPU Training (all available GPUs)
  Single GPU Training (choose specific GPU)
```

- Curated Prompts: 5 diverse evaluation questions
- Model Inference: Generation with temperature sampling
- Quality Gates: Loss thresholds and evaluation metrics
- Status Tracking: ACCEPTED.txt or REJECTED.txt files
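A quality gate of this kind boils down to comparing a metric against a threshold and dropping a status file into the run directory. A sketch (the threshold default and function are illustrative, not Humigence's actual gate):

```python
from pathlib import Path

def apply_quality_gate(run_dir, eval_loss, max_loss=2.5):
    """Write ACCEPTED.txt or REJECTED.txt into run_dir based on a loss threshold.

    max_loss=2.5 is an illustrative default, not the tool's real gate value.
    """
    run_dir = Path(run_dir)
    accepted = eval_loss <= max_loss
    status = run_dir / ("ACCEPTED.txt" if accepted else "REJECTED.txt")
    status.write_text(f"eval_loss={eval_loss:.4f} (threshold {max_loss})\n")
    return accepted
```

For example, `apply_quality_gate("runs/humigence", eval_loss=1.8)` would write `ACCEPTED.txt` and return `True`.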
# View training progress
tail -f runs/humigence/training.log
# Check evaluation results
cat runs/humigence/eval_results.jsonl
# View run summary
cat runs/humigence/run_summary.jsonEvery training run generates:
- Config Snapshot: Complete configuration in JSON
- Reproduce Script: One-click rerun capability
- Artifact Archive: Complete export of all outputs
- Run Summary: Structured metadata for tracking
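The snapshot-and-reproduce idea fits in a few lines: freeze the exact config as JSON, then emit a script that feeds it back to the trainer. A sketch (the helper is hypothetical; the paths mirror the repo layout above):

```python
import json
import stat
from pathlib import Path

def write_reproducibility_artifacts(run_dir, config):
    """Save a config snapshot and a one-click reproduce script for a run."""
    run_dir = Path(run_dir)
    run_dir.mkdir(parents=True, exist_ok=True)

    # 1. Freeze the exact configuration used for this run.
    snapshot = run_dir / "config.snapshot.json"
    snapshot.write_text(json.dumps(config, indent=2, sort_keys=True))

    # 2. Emit a script that points the trainer back at the snapshot.
    script = run_dir / "reproduce.sh"
    script.write_text(
        "#!/usr/bin/env bash\n"
        f"python3 training/unsloth/train_lora_dual.py --config {snapshot}\n"
    )
    script.chmod(script.stat().st_mode | stat.S_IXUSR)  # make it executable
    return snapshot, script
```

Because the snapshot is sorted, deterministic JSON, two runs with identical settings produce byte-identical snapshots, which makes diffs between runs trivial.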
```bash
# Rerun any training
./runs/humigence/reproduce.sh

# Or use the config directly
python3 training/unsloth/train_lora_dual.py --config runs/humigence/config.snapshot.json
```

Core dependencies are constrained for stability:
```text
transformers>=4.41.0,<5.0.0
torch>=2.1.0
unsloth @ git+https://github.com/unslothai/unsloth.git
rich>=13.0.0
inquirer>=3.1.0
```

```bash
# Install in development mode
pip install -e .

# Run tests
python3 -m pytest tests/

# Run specific test
python3 test_gpu_selection.py
```

We welcome contributions! Please see CONTRIBUTING.md for details.
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes
- Add tests if applicable
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Unsloth for fast LoRA training
- HuggingFace for the transformers library
- Microsoft for PEFT and LoRA implementations
- The open-source ML community
| Feature | Humigence CLI | Other Tools |
|---|---|---|
| Setup | Interactive wizard | Manual config |
| GPU Detection | Automatic | Manual |
| Multi-GPU | Built-in TorchRun | Complex setup |
| Reproducibility | Complete snapshots | Partial |
| Evaluation | Built-in prompts | External tools |
| Artifacts | Structured export | Manual collection |
GPU not detected:

```bash
# Check CUDA installation
python3 -c "import torch; print(torch.cuda.is_available())"

# Check GPU visibility
nvidia-smi
```

Out of memory:

```bash
# Reduce batch size in config
# Or use QLoRA for memory efficiency
```

Training fails:

```bash
# Check logs
cat runs/humigence/training.log

# Verify dataset format
head -5 ~/humigence_data/your_dataset.jsonl
```

- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Wiki
- Interactive configuration wizard
- Single and multi-GPU training
- QLoRA and LoRA support
- Built-in evaluation
- Complete reproducibility
- RAG implementation
- EnterpriseGPT integration
- Batch inference
- Context length optimization
- Web UI interface
- Model serving
- Distributed training across nodes
- Advanced evaluation metrics
- Model compression
- Deployment automation
Built with ❤️ for the AI community
Humigence โ Your AI. Your pipeline. Zero code.