⚡ Optimize RichardsGlu::compute_gradients by removing unnecessary clones #28
google-labs-jules[bot] wants to merge 239 commits into main from
Conversation
Added demo with zoom
fix(readme): correct repo URL and directory path in Quick Start
* isolate data loading
* pair
* encode to bytes for vocab
* data loading from json
* data loading from csv
* csv files added
* cargo run works!
* cargo update and remove a redundant paren in dataset_loader
---------
Co-authored-by: anshumanpatil <info@anshumanpatil.com>
Co-authored-by: Nikhil Sriram <nikhil.sriram5@gmail.com>
Co-authored-by: hobs <github@totalgood.com>
CI script to build and test
Fix Readme Page Badge
…0468487085254059
Optimized `EPropTrainer::apply_update`, `train_step`, and `train_step_classification` to use `ndarray::ArrayBase::scaled_add` instead of `Zip` iteration or allocating arithmetic (sketched below).
- Measured ~16% speedup on weight updates (512x512 matrices) vs Zip iteration.
- >2x speedup vs allocating arithmetic (as described in task).

Fixed build errors in `src/models/llm.rs` and `src/attention/sliding_window_attention.rs` to enable testing:
- Resolved mutable borrow conflicts in `LLM` by using `std::mem::take` for scratch buffers.
- Fixed `accumulated_param_grads` scope issue.
- Updated `SlidingWindowAttention` to `rand` 0.9-compatible usage and fixed a use-after-move error.
- Implemented missing `Layer` trait methods for `SlidingWindowAttention`.
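A minimal sketch of the `scaled_add` change described above, assuming an ndarray-backed weight matrix; the function and variable names (`apply_update`, `weights`, `grads`, `lr`) are illustrative, not the crate's actual API:

```rust
use ndarray::Array2;

// Apply an SGD-style update: weights -= lr * grads.
fn apply_update(weights: &mut Array2<f32>, grads: &Array2<f32>, lr: f32) {
    // Allocating arithmetic (slowest: builds a temporary matrix):
    // *weights = &*weights - &(grads * lr);

    // Zip iteration (no allocation, but an explicit elementwise loop):
    // ndarray::Zip::from(&mut *weights)
    //     .and(grads)
    //     .for_each(|w, &g| *w -= lr * g);

    // scaled_add: a fused AXPY-style call, self += alpha * rhs, no temporaries.
    weights.scaled_add(-lr, grads);
}
```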
refactor(attention): restructure sliding window attention gradient computation
fix(attention): handle learned predictor initialization in poly attention
test: add iterator import for transformer block stability tests
style: update floating-point literals in pade sweep test
…-12003919183454948406
- Implemented `TitansMAG` struct and `Layer` trait.
- Implemented `new`, `forward` (segment-based), `compute_gradients` (manual BPTT), `apply_gradients` (a toy segment loop is sketched after this list).
- Fixed `SlidingWindowAttention` by removing duplicate placeholder trait methods and adding `Clone`.
- Verified implementation with unit tests for forward pass and gradient shapes.
- Fixed shape mismatch issues in `TitansMAG` gradient computation logic.
- Ensured correctness of gating mechanism and memory updates.
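A toy schematic of segment-based processing in the spirit of the forward pass above; the arithmetic here is made up for illustration, and the real `TitansMAG` gating, neural-memory update, and manual BPTT are far richer:

```rust
use ndarray::{s, Array1, Array2, Axis};

// Process a (seq_len, d_model) input in fixed-length segments, carrying a
// memory state across segment boundaries.
fn forward_segments(x: &Array2<f32>, seg_len: usize) -> Array2<f32> {
    assert!(seg_len > 0);
    let d_model = x.shape()[1];
    let seq_len = x.shape()[0];
    let mut memory = Array1::<f32>::zeros(d_model);
    let mut out = Array2::<f32>::zeros(x.raw_dim());
    let mut start = 0;
    while start < seq_len {
        let end = (start + seg_len).min(seq_len);
        let seg = x.slice(s![start..end, ..]);
        // Toy per-row update: mix the carried memory into each output row.
        for (i, row) in seg.outer_iter().enumerate() {
            out.slice_mut(s![start + i, ..]).assign(&(&row + &memory));
        }
        // Toy memory update: exponential moving average of segment means.
        memory = &memory * 0.9 + &seg.mean_axis(Axis(0)).unwrap() * 0.1;
        start = end;
    }
    out
}
```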
…47790043192176996
Add target_ci directories to .gitignore to prevent committing build artifacts generated during continuous integration runs. This includes compiled binaries, dependency files, and build metadata that should not be tracked in version control.
- Reformat code in llm.rs and titans/mag.rs to improve readability
- Add AGENTS.md with comprehensive development guidelines including build commands, code style, testing, and best practices
…or ops
- Replace `tracing::info` with `tracing::debug` for high-volume logs in adaptive p-change and high-loss scenarios to reduce output verbosity.
- Remove an unnecessary info log in `apply_gradients` that was checking a config flag.
- Optimize tensor operations in the TitansMAG layer: use direct method references (`mapv(Self::sigmoid)`), simplify elementwise expressions, and remove redundant variable resets.
- Apply minor style improvements: use shorthand field initialization and remove an unused import.
Add titans.rs as a new module in the models directory and remove the old titans/mod.rs structure. Introduce a new memory module with hybrid and engram submodules for enhanced memory management capabilities. Add nextest configuration for improved test execution with custom timeouts.
- Replace vec with array for repeat pattern in test
- Remove unused variables and early allocations in backward passes
- Simplify conditional logic using match expressions
- Use struct update syntax for config initialization
- Remove unused constants and struct fields
- Eliminate unnecessary scoped blocks and variable shadowing
- Improve iterator usage with enumerate and take
…urves
- Add AdaptiveScalar type supporting fixed values or Richards curve modulation
- Implement set_training_progress method throughout the layer hierarchy for adaptive parameters
- Add CLI arguments for adaptive modulation of ce_weight, min_snr_gamma, and MoH thresholds
- Integrate training progress tracking into training loops and forward passes
- Update diffusion training to use adaptive scalars for loss weights and SNR gamma
- Validated that TitansMAL correctly processes input through NeuralMemory and then SlidingWindowAttention.
- Added a `#[cfg(test)]` module to `src/memory/titans/mal.rs` to verify the forward-pass output shape.
- Note: Integration tests could not be run due to pre-existing compilation errors in `src/layers/transformer/block.rs` and `model_config` (unrelated to this task).
Introduce the AdaptiveScalar enum to modulate MoH activation thresholds based on training progress. This enables dynamic gating behavior where thresholds can be fixed, follow a Richards curve, or be learned. The modulation is integrated into MoHGating, the training pipeline, and all block types (Transformer, Diffusion, HRM, LRM, Mamba2). Update configuration structures to include a `moh_threshold_modulation` field and modify initialization to use `Box<RichardsCurve>` for proper ownership. Add proptest regression tests for adaptive behavior.
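A hedged sketch of what an enum like this might look like; the variant set, the `value` method, and the `RichardsCurve` fields below are assumptions rather than the repo's actual definitions (the learned variant mentioned above is omitted):

```rust
// Hypothetical stand-in for the repo's Richards curve; the real type differs.
#[derive(Clone)]
struct RichardsCurve {
    a: f32,  // lower asymptote
    k: f32,  // upper asymptote
    b: f32,  // growth rate
    q: f32,  // horizontal shift
    nu: f32, // asymmetry
}

impl RichardsCurve {
    // Generalized logistic: A + (K - A) / (1 + Q * exp(-B * t))^(1 / nu)
    fn eval(&self, t: f32) -> f32 {
        self.a + (self.k - self.a) / (1.0 + self.q * (-self.b * t).exp()).powf(1.0 / self.nu)
    }
}

#[derive(Clone)]
enum AdaptiveScalar {
    // A constant threshold, unchanged over training.
    Fixed(f32),
    // Boxed so the enum stays small and owns the curve state.
    Richards(Box<RichardsCurve>),
}

impl AdaptiveScalar {
    // Resolve the scalar for a training progress value in [0, 1].
    fn value(&self, progress: f32) -> f32 {
        match self {
            AdaptiveScalar::Fixed(v) => *v,
            AdaptiveScalar::Richards(curve) => curve.eval(progress),
        }
    }
}
```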
- Add predict_with_limit method to control generation length
- Eliminate unnecessary allocations in neural memory by using row views
- Optimize engram memory with a scratch buffer and Zip operations (see the sketch after this list)
- Improve hybrid memory with cached positional encoding and scratch buffers
- Add contrastive margin and gradient methods to adaptive residuals
- Remove outdated attention documentation file
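A small sketch of the scratch-buffer pattern mentioned for the engram memory, assuming ndarray; `EngramMemory` and `blend` are hypothetical names, not this repo's API:

```rust
use ndarray::{Array2, Zip};

struct EngramMemory {
    // Preallocated once and reused every call, so the hot path allocates nothing.
    scratch: Array2<f32>,
}

impl EngramMemory {
    fn new(rows: usize, cols: usize) -> Self {
        Self { scratch: Array2::zeros((rows, cols)) }
    }

    // Blend two matrices elementwise into the scratch buffer in place.
    fn blend(&mut self, a: &Array2<f32>, b: &Array2<f32>, alpha: f32) -> &Array2<f32> {
        Zip::from(&mut self.scratch)
            .and(a)
            .and(b)
            .for_each(|s, &x, &y| *s = alpha * x + (1.0 - alpha) * y);
        &self.scratch
    }
}
```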
…e cloning
Replaced full struct cloning with manual lightweight construction of the RichardsCurve instance. This avoids copying heavy heap-allocated fields (optimizer, grad_norm_history, gamma, bias) when creating temporary scaled views for gating operations.
Performance improvement: ~83% reduction in execution time (from ~1.72 µs to ~288 ns per call).
Added benchmark `benches/richards_curve_bench.rs` to verify the improvement.
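A sketch of that "manual lightweight construction" idea: instead of `self.clone()`, build a temporary that copies only the cheap fields the gating math reads. The field names are taken from the commit message, but the real struct (which also carries an optimizer) surely differs:

```rust
#[derive(Clone, Default)]
struct RichardsCurve {
    nu: f32, // cheap scalar shape parameter the gating path actually reads
    // Heavy heap-allocated state the temporary scaled view never touches:
    gamma: Vec<f32>,
    bias: Vec<f32>,
    grad_norm_history: Vec<f32>,
}

impl RichardsCurve {
    // Before: `let mut c = self.clone(); c.nu *= scale;` clones every Vec.
    // After: copy one f32 and leave the heavy fields empty.
    fn scaled_for_gating(&self, scale: f32) -> RichardsCurve {
        RichardsCurve {
            nu: self.nu * scale,
            ..RichardsCurve::default() // heavy fields stay empty
        }
    }
}
```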
…915794032518470943
- Implement `HeadCache` for managing Key/Value states in attention heads (a sketch follows this list).
- Update `PolyAttention` to support an incremental forward pass with caching.
- Optimize `LLM::forward_with_limit` to pass only new tokens during generation.
- Add dynamic switching between sequential/parallel execution in the attention forward pass to optimize for small batches.
- Add benchmark `benches/inference.rs` validating ~28x speedup.
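An illustrative sketch of per-head K/V caching for incremental decoding, in the spirit of the `HeadCache` named above; the struct layout and `append` method are assumptions, not this repo's API:

```rust
use ndarray::{concatenate, Array2, Axis};

#[derive(Default)]
struct HeadCache {
    keys: Option<Array2<f32>>,   // (tokens_so_far, head_dim)
    values: Option<Array2<f32>>, // (tokens_so_far, head_dim)
}

impl HeadCache {
    // Append K/V rows for newly generated tokens, then return the full cached
    // sequences so attention covers the whole prefix while the model only
    // computes projections for the new tokens.
    fn append(&mut self, new_k: Array2<f32>, new_v: Array2<f32>) -> (&Array2<f32>, &Array2<f32>) {
        self.keys = Some(match self.keys.take() {
            Some(k) => concatenate(Axis(0), &[k.view(), new_k.view()]).expect("head_dim mismatch"),
            None => new_k,
        });
        self.values = Some(match self.values.take() {
            Some(v) => concatenate(Axis(0), &[v.view(), new_v.view()]).expect("head_dim mismatch"),
            None => new_v,
        });
        (self.keys.as_ref().unwrap(), self.values.as_ref().unwrap())
    }
}
```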
…ation-4029054152255653832
- Replaced `.cloned()` with conditional borrowing for cached fields (x1, x2, swish, gated)
- Added benchmarks for RichardsGlu gradients
- Achieved ~7.4% performance improvement in gradient computation
Review by RecurseML
🔍 Review performed on 2f07c31..34ffc48
✨ No bugs found, your code is sparkling clean
✅ Files analyzed, no issues (3)
• Cargo.toml
• benches/richards_glu_bench.rs
• src/richards/richards_glu.rs
This PR optimizes `RichardsGlu::compute_gradients` by eliminating unnecessary `Array2` cloning operations when using cached values.

💡 What:
- Modified `src/richards/richards_glu.rs` to use a conditional borrowing pattern. Instead of cloning the cached `Option<Array2<f32>>`, the code now borrows the cached reference if available, or creates a new owned array (stored in a local variable) and borrows it if not.
- Downstream computations now operate on `&Array2<f32>` references.
- Added `richards_glu_bench` to verify performance.

🎯 Why:
- The previous code used `.cloned().unwrap_or_else(...)`, which forced a full matrix copy even when the cache was present. This caused significant allocation and memory-copy overhead during training.

📊 Measured Improvement:
- ~7.4% faster `compute_gradients` (from ~31.46 ms to ~29.13 ms).

PR created automatically by Jules for task 4074737496829598444 started by @ryancinsight
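A minimal sketch of that conditional-borrowing pattern, assuming ndarray; `gradient_step`, the `cache` parameter, and the `recompute` helper are hypothetical stand-ins for the cached `x1`/`x2`/`swish`/`gated` fields:

```rust
use ndarray::Array2;

fn recompute(x: &Array2<f32>) -> Array2<f32> {
    // Stand-in for the real cached computation (e.g. the swish activation).
    x.mapv(|v| v / (1.0 + (-v).exp()))
}

fn gradient_step(cache: &Option<Array2<f32>>, x: &Array2<f32>) -> Array2<f32> {
    // Before: `.cloned().unwrap_or_else(|| recompute(x))` copied the matrix
    // even on a cache hit.

    // After: borrow on a hit; allocate into a local only on a miss.
    let recomputed; // owns the value only when the cache is empty
    let swish: &Array2<f32> = match cache {
        Some(cached) => cached,
        None => {
            recomputed = recompute(x);
            &recomputed
        }
    };

    // Downstream math works on the borrowed view with no extra copy.
    swish * 2.0
}
```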
High-level PR Summary
This PR optimizes the `compute_gradients` method in `RichardsGlu` by replacing expensive cloning operations with conditional borrowing. Instead of cloning cached `Array2<f32>` matrices using `.cloned().unwrap_or_else(...)`, the code now uses a pattern that borrows cached references when available or creates owned values in local variables when cache misses occur. This eliminates unnecessary memory allocations and copies during the training forward pass, achieving approximately a 7.4% performance improvement (from ~31.46 ms to ~29.13 ms). A new benchmark is included to measure the optimization impact.

⏱️ Estimated Review Time: 5-15 minutes
💡 Review Order Suggestion
1. `Cargo.toml`
2. `benches/richards_glu_bench.rs`
3. `src/richards/richards_glu.rs`