Commit 36edcce
feat: multi-arch builds, env refactoring, new features, and expanded tests (#307)
* Support Ampere with cutlass-based FFA_FA4 (#287)
* Support Ampere with cutlass-based FFA_FA4 (#287)
* Update v1.1.0 overview (#285)
* Update v1.1.0 public blogs (#281)
* [HotFix]: fix proxy (#284)
* Add DSA attention interface in extensions (#283)
See merge request: !1
* support multi-arch CUDA builds (Hopper + Blackwell)
Refactor build system to accept comma-separated compute capabilities
via MAGI_ATTENTION_BUILD_COMPUTE_CAPABILITY (e.g. "90,100"). Add
helper functions parse_compute_capabilities, get_gencode_flags, and
resolve_build_capabilities in setup.py. Update CMakeLists.txt to
accept MAGI_CUDA_ARCHITECTURES and strip PyTorch-injected gencode
flags that may reference unsupported architectures.
Made-with: Cursor
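A minimal sketch of how the comma-separated capability list could be turned into gencode flags; the actual `parse_compute_capabilities` / `get_gencode_flags` helpers in setup.py may differ in details such as arch suffixes (e.g. `sm_90a`):
```python
# Hedged sketch, not the exact setup.py implementation.
import os


def parse_compute_capabilities(raw: str) -> list[str]:
    """Split e.g. "90,100" into ["90", "100"], dropping empty entries."""
    return [cc.strip() for cc in raw.split(",") if cc.strip()]


def get_gencode_flags(capabilities: list[str]) -> list[str]:
    """Emit one -gencode flag per requested architecture (plain sm_XX shown)."""
    return [f"-gencode=arch=compute_{cc},code=sm_{cc}" for cc in capabilities]


caps = parse_compute_capabilities(
    os.environ.get("MAGI_ATTENTION_BUILD_COMPUTE_CAPABILITY", "90,100")
)
print(get_gencode_flags(caps))
# ['-gencode=arch=compute_90,code=sm_90', '-gencode=arch=compute_100,code=sm_100']
```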
* add global_window_size to infer_attn_mask_from_cu_seqlens
Allow every query in a sample to always attend to the first
global_window_size key tokens in addition to the sliding window,
useful for architectures that require prefix tokens (e.g. sink tokens)
to be globally visible. Update docs with the new parameter.
Made-with: Cursor
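The visibility rule can be stated in a few lines; this is a sketch of the semantics only, not the library's mask-inference code (a causal sliding window is assumed):
```python
def is_visible(q_idx: int, k_idx: int, window_size: int, global_window_size: int) -> bool:
    """Key k_idx is visible to query q_idx if it is a global prefix (sink) token
    or falls inside the causal sliding window."""
    in_global_prefix = k_idx < global_window_size
    in_sliding_window = q_idx - window_size < k_idx <= q_idx
    return in_global_prefix or in_sliding_window


# window_size=4, global_window_size=2: query 10 sees sink tokens 0-1 plus keys 7-10
print([k for k in range(12) if is_visible(10, k, 4, 2)])  # [0, 1, 7, 8, 9, 10]
```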
* add scm install script
* skip fa4_ffa_precompile
* add distributed roll API for MTP support
Introduce a P2P-based `roll` operation that cyclically shifts dispatched
local tensors along the sequence dimension without materialising the full
global tensor (O(N/P) memory instead of O(N)). Primarily designed for
Multi-Token Prediction (MTP) where labels are shifted relative to inputs.
- New `functional/roll.py` with `roll_p2p` implementation and autograd support
- Expose `roll` in public API (`magi_attention.api`)
- Clean up import paths: import `roll_func` directly from `functional.roll`
instead of re-exporting through `functional.dispatch`
- Add `roll` section to API reference and quickstart docs
- Allow optional `num_heads_q/kv`, `head_dim` override in
`make_flex_key_for_new_mask_after_dispatch`
- Add comprehensive tests (`tests/test_functional/test_roll.py`)
Made-with: Cursor
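For intuition, a single-process reference of the roll semantics used for MTP label shifting; the distributed `roll` in `magi_attention.api` computes the same result per rank via P2P without gathering the global tensor (its exact signature is not shown here):
```python
import torch

tokens = torch.arange(8)                  # global token ids 0..7
labels = torch.roll(tokens, shifts=-1)    # labels[i] = tokens[i + 1], last position wraps
print(labels)                             # tensor([1, 2, 3, 4, 5, 6, 7, 0])
# The P2P roll produces each rank's slice of `labels` directly from the
# dispatched local shards, keeping per-rank memory at O(N/P).
```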
* polish tests for roll
* dynamic pad token
* ceil_div cleanup
* polish code
* support uneven shard
* refactor: move `ceil_div` to `magi_attention/utils/_utils.py`
Consolidate the `ceil_div` helper into the shared utils module instead
of defining it locally in `api/functools.py`, so that meta/solver code
can reuse it without circular imports.
Made-with: Cursor
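For reference, `ceil_div` is the standard integer ceiling division:
```python
def ceil_div(a: int, b: int) -> int:
    return (a + b - 1) // b


assert ceil_div(10, 4) == 3  # 10 tokens split into chunks of 4 -> 3 chunks
```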
* simplify uneven_shard: remove virtual padding, use real chunk sizes
Replace the previous "virtual metadata padding" approach with a simpler
design where `total_seqlen` is used as-is (no padding at all):
- Use `ceil_div` for num_chunks so the last chunk can be smaller
- Remove `actual_total_seqlen_q/k` parameters from all interfaces
- MinHeapDispatchAlg now reports `is_equal_num_workloads=False` and
uses `ceil_div` for the per-bucket job limit
- Simplify dispatch/undispatch: no zero-size virtual chunks, so
`torch.split` works directly with `chunk_actual_sizes`
- Remove virtual padding logic from `magi_attn_flex_key` and
`make_flex_key_for_new_mask_after_dispatch`
Made-with: Cursor
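A minimal sketch of the resulting chunking: `num_chunks` comes from `ceil_div`, the last chunk carries the remainder, and the real sizes feed `torch.split` directly (the exact code in the dispatch path may differ):
```python
def chunk_actual_sizes(total_seqlen: int, chunk_size: int) -> list[int]:
    num_chunks = (total_seqlen + chunk_size - 1) // chunk_size  # ceil_div
    sizes = [chunk_size] * num_chunks
    sizes[-1] = total_seqlen - chunk_size * (num_chunks - 1)    # last chunk may be smaller
    return sizes


print(chunk_actual_sizes(total_seqlen=10, chunk_size=4))  # [4, 4, 2]
# torch.split(x, [4, 4, 2]) then works without any zero-size virtual chunks
```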
* support variable chunk sizes in roll P2P
Extract `_compute_segments` to handle source-segment calculation for
both uniform and variable (last-chunk-smaller) layouts. Refactor
`_roll_p2p_impl` to iterate segments generically, replacing the
previous special-case branches for r==0 and r>0.
Add comprehensive uneven-shard tests: aligned/non-aligned shifts,
cross-last-chunk wrapping, negative/large shifts, edge cases
(last_chunk_size=1), larger sequences, and backward correctness.
Made-with: Cursor
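A hedged sketch of the segment mapping for a cyclic roll over variable chunk sizes; the real `_compute_segments` interface is not shown, only the index arithmetic it has to get right:
```python
from itertools import accumulate


def compute_segments(chunk_sizes: list[int], shifts: int, dst_rank: int) -> list[tuple[int, int, int]]:
    """Return (src_rank, src_local_start, length) segments that fill dst_rank's
    rolled output, where out[i] = in[(i - shifts) % total] (torch.roll semantics)."""
    offsets = [0, *accumulate(chunk_sizes)]   # rank r owns global [offsets[r], offsets[r+1])
    total = offsets[-1]
    segments = []
    i, end = offsets[dst_rank], offsets[dst_rank + 1]
    while i < end:
        src = (i - shifts) % total
        r = next(r for r in range(len(chunk_sizes)) if offsets[r] <= src < offsets[r + 1])
        run = min(offsets[r + 1] - src, end - i)  # stop at source-chunk or output-chunk boundary
        segments.append((r, src - offsets[r], run))
        i += run
    return segments


# chunks [4, 4, 2], shift +1: rank 0's output starts with the wrapped last token of rank 2
print(compute_segments([4, 4, 2], shifts=1, dst_rank=0))  # [(2, 1, 1), (0, 0, 3)]
```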
* update pipeline tests for simplified uneven_shard
Remove virtual metadata padding logic from test_pipeline and
test_pipeline_sdpa: no longer need `compute_pad_size`/`apply_padding`
imports or `actual_total_seqlen_q/k` variables, since the uneven_shard
path now uses original total_seqlen directly.
Made-with: Cursor
* fix rank error in roll
* add caching for DistAttnRuntimeKey hash and infer_attn_mask_from_cu_seqlens
Cache the hash of DistAttnRuntimeKey via __hash__ override to avoid
repeated hashing of all fields on every dict lookup. Also add lru_cache
to infer_attn_mask_from_cu_seqlens to skip redundant mask inference for
repeated cu_seqlens patterns.
Made-with: Cursor
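Both tricks are standard Python; a hedged sketch with illustrative names (not the actual MagiAttention classes):
```python
from dataclasses import dataclass, field
from functools import lru_cache


@dataclass(frozen=True)
class RuntimeKeySketch:
    fields: tuple
    _hash: int = field(init=False, repr=False, compare=False)

    def __post_init__(self):
        object.__setattr__(self, "_hash", hash(self.fields))  # hash every field exactly once

    def __hash__(self) -> int:
        return self._hash  # O(1) on every dict lookup instead of re-hashing all fields


@lru_cache(maxsize=None)
def infer_mask_sketch(cu_seqlens: tuple[int, ...]) -> list[tuple[int, int]]:
    # stand-in for the real mask inference; repeated cu_seqlens patterns hit the cache
    return list(zip(cu_seqlens[:-1], cu_seqlens[1:]))


cache: dict[RuntimeKeySketch, str] = {RuntimeKeySketch((1, 2, 3)): "runtime"}
print(infer_mask_sketch((0, 4, 10)))  # [(0, 4), (4, 10)]
```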
* improve type annotations for DistAttnRuntimeDictManager
Add precise type hints for return types, parameters, and internal data
structures. Import DistAttnRuntimeMgr for proper typing and remove the
resolved TODO comment.
Made-with: Cursor
* refactor dispatch/undispatch with autograd Functions to reduce memory
Replace the concat-all-then-scatter approach with custom autograd
Functions (_DispatchFunc / _UndispatchFunc). Forward dispatch now
selects local chunks directly (O(shard_seqlen) alloc) instead of
building a full permuted tensor (O(total_seqlen)). Backward uses
all_gather_v + unpermute, mirroring the inverse path.
Made-with: Cursor
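A single-process sketch of the memory-saving idea: forward only materialises the local shard, backward re-expands the gradient into the global layout. The real `_DispatchFunc` uses `all_gather_v` + unpermute across ranks rather than the local zero-fill shown here:
```python
import torch


class _DispatchSketch(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x_global: torch.Tensor, start: int, length: int) -> torch.Tensor:
        ctx.total, ctx.start, ctx.length = x_global.shape[0], start, length
        return x_global[start : start + length].clone()   # only the local shard is allocated

    @staticmethod
    def backward(ctx, grad_local: torch.Tensor):
        grad_global = grad_local.new_zeros((ctx.total, *grad_local.shape[1:]))
        grad_global[ctx.start : ctx.start + ctx.length] = grad_local
        # in the distributed version this step is an all_gather_v + unpermute
        return grad_global, None, None


x = torch.randn(8, 4, requires_grad=True)
y = _DispatchSketch.apply(x, 2, 3)    # this "rank" owns rows 2..4
y.sum().backward()
print(x.grad.sum(dim=1))              # non-zero only for rows 2..4
```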
* fix partial_dsink contiguity before backward communication
Ensure partial_dsink is contiguous before communication in the backward
pass to avoid potential issues with non-contiguous tensor layouts.
Made-with: Cursor
* polish code
* fix last chunk_size for uneven_shard
* fast build
* fast build
* install ffa build
* enhance dist runtime dict
* mem save ag
* simple p2p
* simple p2p
* logging
* reduce log
* sequential dispatch uneven
* fix seq bug
* add tests
* support fa4
* install scm
* fix install on no gpu env
* fix
* fix install scm
* fix ffa fa4 bug
* collect all wheel
* fix build
* fix build
* install wheel
* fix install order
* switch flash-attention submodule to magi-flash-attention and remove build patches
- Point submodule to git@code.byted.org:seed/magi-flash-attention.git (main)
with namespace renamed to magi_flash_attn_3 to avoid TORCH_LIBRARY conflict
- Remove hopper_makefile_wrapper.mk (wheel support now handled in collect step)
- Remove patch_create_block_mask.py (should be upstreamed into submodule)
- Simplify install_flash_attn_cute.sh accordingly
Made-with: Cursor
* update flash-attention submodule and clean up build scripts
- Update submodule to latest main (2c1b058) which includes:
- Rename torch library namespace from flash_attn_3 to magi_flash_attn_3
- Support headless build in create_block_mask setup.py
- Support MAGI_WHEEL_DIR in hopper Makefile
- Remove redundant hopper wheel collection from install_flash_attn_cute.sh
(now handled by upstream Makefile when MAGI_WHEEL_DIR is set)
Made-with: Cursor
* no build ffa
* increase max func
* increase max func
* add set -e to install_flash_attn_cute.sh to fail fast on errors
Made-with: Cursor
* no_overlap impl
* improve no_overlap path: pre-build merged_attn_arg, enhance logging, and fix test filtering
- Pre-build merged_attn_arg in CalcMeta.__post_init__ instead of computing it
on every forward/backward call in the no_overlap path
- Fix seqlen_k_local calculation in DistAttnSolver to use host_k_ranges_global
- Add detailed logging for OverlapConfig, CalcMeta, and DistAttnRuntime;
move verbose remote_attn_args logging to DEBUG level
- Add skip_if_world_size_filtered decorator for proper subprocess-level skip
instead of early-return inside test body
- Change num_heads test filter to underscore-separated format (e.g. 8_8)
and support tuple values in should_run_test_case
Made-with: Cursor
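The pre-build pattern simply moves work from the hot path into construction; a hedged sketch with illustrative field names (not the real CalcMeta layout):
```python
from dataclasses import dataclass, field


@dataclass
class CalcMetaSketch:
    local_attn_arg: dict
    remote_attn_args: list[dict]
    merged_attn_arg: dict = field(init=False)

    def __post_init__(self):
        # built once at construction, not on every forward/backward call
        merged: dict = {}
        for arg in (self.local_attn_arg, *self.remote_attn_args):
            merged.update(arg)
        self.merged_attn_arg = merged


meta = CalcMetaSketch({"q_ranges": [(0, 4)]}, [{"k_ranges": [(0, 8)]}])
print(meta.merged_attn_arg)
```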
* no overlap support fa4
* add sdpa_online, consolidate pipeline tests, dist_attn and solver updates
Made-with: Cursor
* add is_partial_grad option to undispatch: use reduce_scatter in backward
When is_partial_grad=True, the backward of undispatch uses
dist.reduce_scatter to sum partial gradients across ranks before
scattering, instead of simply selecting local chunks. This supports
scenarios where each rank holds a partial gradient contribution
(e.g. partial attention output gradients) that must be aggregated.
The parameter is threaded through the full API stack:
undispatch_func -> DistAttnRuntimeMgr.undispatch_qo/kv -> undispatch()
Also adds unit tests covering forward round-trip, default backward,
and partial-grad backward (both random and uniform) with even/uneven shards.
Made-with: Cursor
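A single-process illustration of why the reduce-scatter path differs from plain chunk selection; the real backward uses `dist.reduce_scatter` across ranks, simulated here with a stacked tensor:
```python
import torch

world_size, shard_len, hidden = 4, 3, 2
# partial_grads[r] = rank r's partial gradient over the *full* sequence
partial_grads = torch.randn(world_size, world_size * shard_len, hidden)

r = 1
full_grad = partial_grads.sum(dim=0)                                       # "reduce": sum all contributions
reduce_scatter_grad = full_grad[r * shard_len : (r + 1) * shard_len]       # "scatter": keep own chunk

select_only_grad = partial_grads[r, r * shard_len : (r + 1) * shard_len]   # is_partial_grad=False path
print(torch.allclose(reduce_scatter_grad, select_only_grad))               # generally False
```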
* install ffa
* refactor: merge csrc/utils into extensions, unify C++/Python backend switching, and align interfaces
- Merge csrc/utils/ into csrc/extensions/, eliminating the separate
flexible_flash_attention_utils_cuda module. All FFA utils (argsort_ranges,
unique_consecutive_pairs, compute_sparse_load_metadata, etc.) are now part
of magi_attn_ext built via CMake.
- Centralize C++/Python backend switching in common/__init__.py instead of
scattered if-blocks at the bottom of each implementation file. Add Protocol
definitions (protocols.py) to enforce interface alignment between backends.
- Align all pybind11 bindings with Python ground truth: fix parameter names,
remove C++-only methods (to_string, sort_ranges, reserve, clear,
get_q/k/d_range), remove .export_values() on AttnMaskType, add missing
__iter__ on AttnRectangles, and add Google-style docstrings to all public
methods on both sides.
- Replace ffa_utils alias with direct magi_attn_ext imports across all files.
- Convert test_common/ tests from unittest.TestCase to plain pytest classes
with a conftest.py backend fixture (params=["python", "cpp"]) so every
test automatically runs against both backends.
- Regenerate magi_attn_ext.pyi with updated signatures and docstrings.
Made-with: Cursor
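A hedged sketch of how a `typing.Protocol` can pin both backends to one interface (names are illustrative; the real protocols.py covers the FFA utils and range types):
```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class RangesBackend(Protocol):
    def merge(self) -> "RangesBackend": ...
    def total_seqlen(self) -> int: ...


class PyRanges:  # stand-in for the Python ground-truth class
    def __init__(self, ranges: list[tuple[int, int]]):
        self.ranges = ranges

    def merge(self) -> "PyRanges":
        return self

    def total_seqlen(self) -> int:
        return sum(e - s for s, e in self.ranges)


# a static checker (mypy) or an isinstance() check against the Protocol flags any
# backend whose method names drift from the Python ground truth
assert isinstance(PyRanges([(0, 4), (4, 10)]), RangesBackend)
```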
* add readme
* refactor: centralise env vars into magi_attention/env/ package
Move all MAGI_ATTENTION_* environment variable accessors from scattered
locations (__init__.py, comm/__init__.py, common/__init__.py,
functional/*.py, common/jit/*.py) into a dedicated magi_attention/env/
package with three submodules:
- env/general.py — runtime toggles, kernel backend, precision, etc.
- env/comm.py — communication flags (hierarchical, qo_comm, etc.)
- env/build.py — JIT/build settings (cache, workspace, nvcc, etc.)
All ~100 call sites across magi_attention/, tests/, and exps/ are
updated to use the new `env.general.xxx()` / `env.comm.xxx()` style.
The old _env.py is removed. The top-level __init__.py and
comm/__init__.py no longer re-export env-var functions.
Also adds `!magi_attention/env/` to .gitignore so the package is not
caught by the `env/` virtualenv exclusion rule.
Made-with: Cursor
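The accessor style looks roughly like the sketch below; the concrete function and variable names under `env.general` / `env.comm` are assumptions here, only the pattern is implied by the commit:
```python
import os


def _env_flag(name: str, default: str = "0") -> bool:
    return os.environ.get(name, default).lower() in ("1", "true", "on")


# e.g. an accessor that would live in magi_attention/env/comm.py
def is_hierarchical_comm_enable() -> bool:  # name and env var are illustrative assumptions
    return _env_flag("MAGI_ATTENTION_HIERARCHICAL_COMM")


# call sites then read as: env.comm.is_hierarchical_comm_enable()
print(is_hierarchical_comm_enable())
```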
* clear api
* remove redundant __all__ from magi_attn_interface.py
Public exports are managed solely by api/__init__.py; the per-module
__all__ was duplicating that responsibility and adding maintenance burden.
Made-with: Cursor
* feat: add MAGI_ATTENTION_LOG_LEVEL env var to control package-wide logging
- Add log_level() helper in env/general.py supporting DEBUG/INFO/WARN/ERROR/CRITICAL (default: WARN)
- Configure the root magi_attention logger at import time based on the env var
- Replace custom MagiAttentionJITLogger with standard getLogger(__name__) so JIT
logger participates in the magi_attention logger hierarchy
- Add INFO-level logging throughout JIT build pipeline (core.py, _flex_flash_attn_jit.py)
Made-with: Cursor
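A hedged sketch of the env-driven logger setup (the actual `log_level()` helper and import-time wiring in env/general.py may differ):
```python
import logging
import os

_LEVELS = {"DEBUG": logging.DEBUG, "INFO": logging.INFO, "WARN": logging.WARNING,
           "ERROR": logging.ERROR, "CRITICAL": logging.CRITICAL}


def log_level() -> int:
    raw = os.environ.get("MAGI_ATTENTION_LOG_LEVEL", "WARN").upper()
    return _LEVELS.get(raw, logging.WARNING)


# configured once at package import time; submodule loggers created via
# logging.getLogger(__name__) inherit this level through the logger hierarchy
logging.getLogger("magi_attention").setLevel(log_level())
```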
* support no_chunk_size
* minor fix
* fix: resolve pre-commit lint errors for Python 3.10+ match syntax
- Upgrade flake8 6.1.0 -> 7.3.0 (pyflakes 3.4+ with match support)
- Upgrade ruff v0.1.5 -> v0.11.4, add --target-version=py310
- Add --python-version=3.10 to mypy args
- Add # noqa: F811 for intentional conditional re-imports in common/__init__.py
- Add # noqa: E402 for necessary non-top-level imports
- Add # noqa: F824 for read-only global/nonlocal declarations
Made-with: Cursor
* update submodule
* fix tests
* patch fix
* fix cu131
* add chinese docs
* log env
* scm install fix
* patch fix
* fix scm
* merge main
* lint
* chore: point flash-attention submodule at littsk fork, drop install hotfixes
Use https://github.com/littsk/flexible-flash-attention on branch magi_attn_blackwell_support; multi-arch create_block_mask gencode and hopper Makefile build_ext live in the submodule. Remove redundant runtime patches from install_flash_attn_cute.sh.
Made-with: Cursor
* docs: update MAGI_ATTENTION_BUILD_COMPUTE_CAPABILITY description
Reflect that this env variable now also affects create_block_mask builds, and document comma-separated multi-arch support (e.g. 90,100).
Made-with: Cursor
* chore: bump flash-attention submodule (platform tag fix)
Picks up ce387e5 which fixes get_platform() in hopper/setup.py to use
platform.machine() instead of hardcoded x86_64.
Made-with: Cursor
* feat: support CUSTOM_ARCH for cross-platform wheel builds
Detect host CPU architecture from CUSTOM_ARCH env var (defaults to
uname -m) and derive MAGI_WHEEL_PLAT_NAME (e.g. linux_aarch64).
Pass --plat-name to all bdist_wheel / pip wheel invocations so that
sub-package wheels (create_block_mask_cuda, magi_to_hstu_cuda, ffa_fa3)
and the main magi_attention wheel carry the correct platform tag.
Made-with: Cursor
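A minimal sketch of the platform-tag derivation (the actual build-script logic may differ; `CUSTOM_ARCH` falls back to the host architecture):
```python
import os
import platform


def wheel_plat_name() -> str:
    arch = os.environ.get("CUSTOM_ARCH", platform.machine())  # e.g. "aarch64" or "x86_64"
    return f"linux_{arch}"


print(wheel_plat_name())  # e.g. "linux_aarch64"; passed as --plat-name to bdist_wheel
```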
* fix: use setup.py bdist_wheel for --plat-name instead of pip --build-option
pip wheel dropped --build-option support in newer versions, causing
"no such option" errors on SCM builds. Switch all sub-package wheel
builds to python setup.py bdist_wheel --plat-name + cp to wheel dir.
Made-with: Cursor
* fix: FA4 mask tile size resolution and sink+FA4 infinite loop in tests
1. _resolve_tile_sizes / _resolve_fa4_tile_sizes now return (128, 128)
on SM100+, since the tile_m/tile_n in FA4AttnArg represent the mask
block tile (not the kernel tile). The kernel internally doubles
tile_m via sparse_tile_m in _make_fa4_args_dict. Only SM80/SM90
need to query get_tile_sizes_by_backend for headdim-dependent tiles.
2. Improved the SM100 tile size validation error message in FA4AttnArg
to include actual vs expected values and the sparse_tile_m note.
3. Added BACKENDS (excluding FA4) to all 6 sink-bearing attn_configs
in test_pipeline.py. FA4 does not support sink, so these configs
would cause get_next_valid_comb to loop forever (all flag combos
rejected by _is_valid_flag_comb while the generator never exhausts
due to cycle_times=-1).
Made-with: Cursor
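A hedged sketch of the resolution rule from point 1 (the real `_resolve_fa4_tile_sizes` takes more context, and the SM80/SM90 values below are stand-ins for `get_tile_sizes_by_backend`):
```python
def resolve_fa4_mask_tile_sizes(sm_version: int, head_dim: int) -> tuple[int, int]:
    if sm_version >= 100:
        # mask block tile, not the kernel tile; the kernel doubles tile_m
        # internally via sparse_tile_m
        return (128, 128)
    # SM80 / SM90: headdim-dependent tiles from the kernel backend
    # (illustrative stand-in values only)
    return (64, 128) if head_dim <= 128 else (64, 64)


print(resolve_fa4_mask_tile_sizes(100, 128))  # (128, 128)
```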
* chore: point flash-attention submodule back to demonatic upstream
PR #9 merged all changes into demonatic/flash-attention, switch
submodule URL back from littsk fork and update pointer to merge commit.
Made-with: Cursor
* lint
* fix ut
* more clear code
* fix ci
* fix: resolve pre-commit lint failures (black, flake8, ruff)
Made-with: Cursor
* fix: resolve CI test failures (dispatch, submask, pipeline alignment)
- test_dispatch_solver: fix wrong assertTrue -> assertFalse for MinHeap
- test_gt_dispatcher: use Python AttnRanges for sub_mask comparisons to
avoid C++/Python cross-type equality failure
- test_pipeline: add native_grpcoll invalidation rules for uneven_shard
and small hidden_size_kv configs; pass num_heads/head_dim in test_config
Made-with: Cursor
* fix: proof-reading corrections (typos, grammar, and phrasing)
Agent-Logs-Url: https://github.com/SandAI-org/MagiAttention/sessions/32611ac4-4154-4596-b276-d3f6d07fdf05
Co-authored-by: Strivin0311 <61719042+Strivin0311@users.noreply.github.com>
* increase timeout
* fix ci
* fix: replace einops.repeat with native ops in sink_bwd for torch.compile compatibility
einops.repeat hashes its axes_lengths kwargs internally, which fails
under torch.compile(dynamic=True) because SymInt is not hashable.
Made-with: Cursor
* fix: replace all einops calls in sink_bwd with native PyTorch ops
einops internally hashes tensor shapes for recipe caching, which is
incompatible with SymInt under torch.compile(dynamic=True). Replace
rearrange, reduce, and repeat with equivalent permute/sum/unsqueeze.
Made-with: Cursor
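A sketch of the kind of replacement described (shapes are illustrative, not the actual sink_bwd tensors):
```python
import torch

x = torch.randn(2, 8, 64)          # (batch, heads, dim)

# einops.repeat(x, "b h d -> b h r d", r=4)  ~>  native unsqueeze + expand
x_rep = x.unsqueeze(2).expand(-1, -1, 4, -1)

# einops.reduce(x, "b h d -> b d", "sum")    ~>  native sum over the head dim
x_sum = x.sum(dim=1)

# einops.rearrange(x, "b h d -> h b d")      ~>  native permute
x_perm = x.permute(1, 0, 2)

# none of these hash shape values, so they stay SymInt-friendly under
# torch.compile(dynamic=True)
print(x_rep.shape, x_sum.shape, x_perm.shape)
```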
* fix max logits dtype error
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Strivin0311 <61719042+Strivin0311@users.noreply.github.com>
1 parent 1816199 · commit 36edcce
156 files changed
Lines changed: 20865 additions & 4198 deletions
File tree
- .cursor/skills
- debug-test-failures
- magi-code-philosophy
- docs
- locale
- zh_CN/LC_MESSAGES
- blog
- user_guide
- source
- _templates
- blog
- user_guide
- exps
- attn/profile_ffa
- dist_attn
- dyn_simulate
- tests
- extensions
- magi_attn_extensions
- tests
- magi_attention
- api
- common
- jit
- comm
- primitive
- grpcoll
- csrc
- extensions
- flexible_flash_attention
- utils
- env
- functional
- meta
- collection
- container
- solver
- testing
- utils
- scripts
- tests
- test_api
- test_attn_solver
- test_attn
- test_common
- test_dispatch
- test_dist_runtime_mgr
- test_functional
- test_utils