Changes from all commits (380 commits)

0339ad1
Change PjRt to use new copy of coordination service.
mwhittaker Dec 8, 2025
2a6de35
[Pallas SC] Allow semaphores to be returned by SCS kernels
sharadmv Dec 9, 2025
890ccd2
Deprecate `with mesh:` context manager. Use `with jax.set_mesh(mesh):…
yashk2810 Dec 9, 2025
c74b2a5
Add a jax network transfer benchmark script
Google-ML-Automation Dec 9, 2025
249b7a7
Add ResultHandler.wrap which allows us to avoid exploding PRNG keys.
pschuh Dec 9, 2025
eff53a5
[Pallas] Device Id dict to mesh fastpath for power of twos
Google-ML-Automation Dec 9, 2025
9be13f9
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 9, 2025
3bec82d
[Mosaic GPU] Remove `{Least,Most}ReplicatedExpression` constructors.
bchetioui Dec 9, 2025
e288c27
[Mosaic GPU] Support `FragmentedArray.broadcast_in_dim` for splat to …
allanrenucci Dec 9, 2025
1bfd8dc
[maint] clean up jnp.arange implementation
jakevdp Dec 9, 2025
ea5aee9
Update generated stubs.
cantonios Dec 9, 2025
2f62fb1
Fix back compat test to ignore warnings
yashk2810 Dec 9, 2025
50bdc72
[pallas:mosaic_gpu] Slightly tweaked the error messages in a few places
superbobry Dec 9, 2025
6cd6cc7
[export] Add backwards compatibility tests for serialized exports
gnecula Dec 5, 2025
14f072b
Use ifrt::AttributeMap::Get instead of directly accessing map
krishnaharidasan Dec 9, 2025
64b16f0
Merge pull request #33758 from gnecula:export_back_compat
Google-ML-Automation Dec 9, 2025
f7e3bdb
Mention pmap is in maintenance mode and point to shard_map and the mi…
yashk2810 Dec 9, 2025
6cb91ed
[mosaic] Added documentation and a few useful methods to tpu.tiled at…
superbobry Dec 9, 2025
863e4e7
Remove multiprocess tests from TPU presubmit due to latency.
emilyfertig Dec 9, 2025
3a36656
Adds docs on multi-controller JAX fault tolerance.
mwhittaker Sep 30, 2025
f1bc1a5
[test] fix signatures test for NumPy nightly
jakevdp Dec 9, 2025
eeab9e4
Merge pull request #33819 from jakevdp:arange-followup
Google-ML-Automation Dec 9, 2025
b8cab94
[test] remove pre-NumPy 2.0 API skips
jakevdp Dec 9, 2025
4e15a87
Merge pull request #32218 from mwhittaker:fault_tolerance_docs
Google-ML-Automation Dec 9, 2025
3e56355
Merge pull request #33834 from jakevdp:numpy-nightly-sig
Google-ML-Automation Dec 9, 2025
75697ef
Remove dynamic shapes. Dead weight at this point.
dougalm Nov 23, 2025
c006c30
jnp.arange: deprecate passing complex arguments
jakevdp Dec 9, 2025
b39d43a
Merge pull request #33836 from jakevdp:old-np-apis
Google-ML-Automation Dec 9, 2025
53ca7bb
Merge pull request #33565 from jax-ml:remove-dynamic-shapes
Google-ML-Automation Dec 9, 2025
f374387
Merge pull request #33837 from jakevdp:arange-complex-dep
Google-ML-Automation Dec 9, 2025
548eaa5
[Pallas TPU] Allow closed over scalars in core_map code
sharadmv Dec 9, 2025
4257c62
Remove one sized mesh axis from spmd_axis_name during comparison with…
yashk2810 Dec 10, 2025
8186c19
[mutable-arrays] allow internal ref effects in mlir.lower_fun
mattjj Dec 10, 2025
be1f505
Merge pull request #33841 from mattjj:bjp
Google-ML-Automation Dec 10, 2025
4952b21
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 10, 2025
d9a7388
[Mosaic GPU][NFC] Refactor `IsTransferable.holds` to use pattern matc…
allanrenucci Dec 10, 2025
3aeefe5
jnp.arange: avoid device transfer when possible
jakevdp Dec 10, 2025
723b90e
[export] Add backwards compatibility test for memory_space
gnecula Dec 10, 2025
3a28e93
[Pallas:MGPU] Add a lowering rule for lax.clamp
apaszke Dec 10, 2025
15ba1b7
[Mosaic GPU][NFC] Remove `Tautological` from constraint reduction.
allanrenucci Dec 10, 2025
9ec840f
[Pallas:SC] Adds a test that verifies pl.kernel outputs are placed in…
brianwa84 Dec 10, 2025
e43f4cb
[Pallas:MGPU] Expose `fragmented_array.Replicated` as part of the pub…
allanrenucci Dec 10, 2025
637982e
[MGPU] Add support for broadcast on major dim in WGStridedFragLayout.
golechwierowicz Dec 10, 2025
c7bc9b2
[Pallas:MGPU] Properly restore the pytree structure when unflattening…
apaszke Dec 10, 2025
11bb981
[Pallas:MGPU] Support not tiled transposed loads in `swap` LANE lower…
allanrenucci Dec 10, 2025
1a78757
Add a missing libtpu version skip in SC Pallas tests
apaszke Dec 10, 2025
f837ebc
Merge pull request #32357 from akshay-babbar:fix-local-window-size-ma…
Google-ML-Automation Dec 10, 2025
6a1397e
Merge pull request #33848 from gnecula:export_memory_space
Google-ML-Automation Dec 10, 2025
3a37e92
[pmap] Add `default_pmap_sharding` to migrate users away from `PmapSh…
danielsuo Dec 10, 2025
d774b64
Remove some deprecated BUILD aliases, close visibility of some others.
hawkinsp Dec 10, 2025
6d55ebd
Remove another instance of access to AttributeMap::Map
krishnaharidasan Dec 10, 2025
1681186
Merge pull request #33846 from jakevdp:arange-transfer
Google-ML-Automation Dec 10, 2025
cd4c077
Widen visibility of //jax/experimental:transfer.
hawkinsp Dec 10, 2025
76cb182
[hijax] handle symbolic zeros generically in hijax lin rule
mattjj Dec 10, 2025
64a8e0d
Temporarily pin nightly libtpu version to `0.0.31.dev20251209` to unb…
ybaturina Dec 10, 2025
cdf4a66
Removal of disassemble_into_single_device_arrays in favor of a new
pschuh Dec 10, 2025
93b138c
[pmap] Remove `default_pmap_sharding`
danielsuo Dec 10, 2025
8390648
Merge pull request #33869 from mattjj:hijax-null-accums
Google-ML-Automation Dec 10, 2025
b12fd6b
Modify _batched_device_put_impl to batch cross-host transfers + enabl…
rao-ashish Dec 8, 2025
86fe2ab
Add TPU v7 runners to nightly and continuous job
quoctruong Dec 10, 2025
1e14a15
Add missing description of numpy.concatenate behavior for axis=None.
carlosgmartin Dec 11, 2025
a80c856
Update EnzymeJaX visibility
wsmoses Dec 11, 2025
e186559
Merge pull request #33873 from carlosgmartin:numpy_concatenate_axis_n…
Google-ML-Automation Dec 11, 2025
66b796c
Check that axis_name input to pcast is either a tuple or a str.
rdyro Dec 11, 2025
e45f9d9
If there is a mesh in ctx and operand.mesh is empty, then make sure t…
yashk2810 Dec 11, 2025
aad7325
Fix spmd_axis_name == explicit_mesh_axes assert when there are multip…
yashk2810 Dec 11, 2025
f030b73
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 11, 2025
db65e54
[export] Fix export back compat serialization test.
gnecula Dec 11, 2025
39010da
Merge pull request #33878 from gnecula:export_fix_back_compat
Google-ML-Automation Dec 11, 2025
b6fb016
[mosaic] Add a canonicalization pattern pushing memref.dim through tp…
superbobry Dec 11, 2025
b30c9de
Remove some deprecated BUILD aliases.
hawkinsp Dec 11, 2025
81fe5dd
Integrate Triton up to 8d445186
loislo Dec 11, 2025
03256b3
[Mosaic GPU] Support loading transposed refs to WGMMA_TRANSPOSED layout.
justinjfu Dec 11, 2025
35e2b33
Add a vlog for the launch_id_key JAX chooses to aid debugging.
Google-ML-Automation Dec 11, 2025
1d4be40
[test] assertAllClose: improve error message on failure
jakevdp Dec 11, 2025
7e410a6
Merge pull request #33889 from jakevdp:test-util-msg
Google-ML-Automation Dec 11, 2025
4efd782
Fix _split_transpose_rule to correctly instantiate zeros
yashk2810 Dec 11, 2025
1c0bb94
Colocated python perf optimization.
Google-ML-Automation Dec 11, 2025
d7e026f
Add entropy function to jax.scipy.stats.poisson
Ma-gi-cian Nov 26, 2025
23fc5e8
Merge pull request #33757 from ROCm:register_rocm_to_block_scaled_dot…
Google-ML-Automation Dec 11, 2025
f3d83cd
Support Hijax types in emit_pipeline.
Dec 11, 2025
952e0d6
[Pallas TPU] Add dma_granule_size_bytes to SC info
sharadmv Dec 11, 2025
75e7243
[no-thunks] Implement FlatTree, an internal representation of pytrees.
dougalm Dec 10, 2025
a838950
Add device dict support to Pallas TPU interpret mode
sharadmv Dec 11, 2025
cc17df3
Simplify adding call location to custom options.
krishnaharidasan Dec 12, 2025
6312a47
Fix links in contributing.md to use HTTPS
partev Dec 8, 2025
356ad2b
[test] add test of advanced indexing with empty lists
jakevdp Dec 12, 2025
e99ee84
Merge pull request #33905 from jakevdp:index-empty-list
Google-ML-Automation Dec 12, 2025
ae9a27e
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 12, 2025
6cd2bc6
[Mosaic GPU] Add limited support for reshapes of tiled layouts
apaszke Dec 12, 2025
a5a333f
[MGPU] Split test_broadcast_major_dim into two tests
golechwierowicz Dec 12, 2025
8572231
[Mosaic GPU] Add support for indexing untiled dims
apaszke Dec 12, 2025
8ab5e07
[test] fix old TODO related to scipy v1.17
jakevdp Dec 12, 2025
dfe1de0
[Mosaic] NFC: Move ops in tpu.td to tpu_ops.td.
WindQAQ Dec 12, 2025
32eb215
Reverts cc17df39c680feddef7f12d5ac7e0cb18d39e8f1
krishnaharidasan Dec 12, 2025
a33d837
Remove type parameters. Type checker doesn't like them
dougalm Dec 11, 2025
60a0f0f
[sparse] make caveat at top more prominent
jakevdp Dec 12, 2025
1651178
Merge pull request #33890 from jax-ml:flat-tree
Google-ML-Automation Dec 12, 2025
3d9edef
Simplify adding call location to custom options.
krishnaharidasan Dec 12, 2025
d0fbc68
Merge pull request #33795 from partev:patch-3
Google-ML-Automation Dec 12, 2025
95048f2
Merge pull request #33915 from jakevdp:scipy-test-fix
Google-ML-Automation Dec 12, 2025
3ef63ce
[mutable-arrays] ignore InternalMutableArrayEffect in partial_eval ha…
mattjj Dec 12, 2025
1a6bb95
Merge pull request #33924 from mattjj:while-loop-internal-ref-effect
Google-ML-Automation Dec 12, 2025
94ae97f
Remove use of deprecated XLA CPU flags.
dsharletg Dec 12, 2025
6fccabe
Reverts 890ccd23c3728a152ad551e187cea3777af9e435
yashk2810 Dec 12, 2025
b5cb67e
Update Shardy to e8435cb5c0b852b0e249b3fbf5f42dd51988afc9. Fix jax typo
yashk2810 Dec 12, 2025
bc63996
Add support and tests for sharded -> unreduced operation.
yashk2810 Dec 12, 2025
a99f76f
Error out if ShapeDtypeStruct gets a sharding that's not compatible w…
yashk2810 Dec 13, 2025
481a569
Add sharding/shape/dtype checks in si_vjp
yashk2810 Dec 13, 2025
0d58415
Make sure correct mesh is maintained on the out_avals of pallas_call
yashk2810 Dec 13, 2025
8ce3512
Fix tree equality error message in si_vjp
yashk2810 Dec 13, 2025
404d644
Remove config.vjp3 and vjp3 API since it's now replaced by jax.vjp
yashk2810 Dec 13, 2025
6ea96b5
[vjp3] remove ad.backward_pass and non-fancy HOP transposes
mattjj Dec 11, 2025
cd50850
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 13, 2025
925512e
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 14, 2025
eb535dd
Automated Code Change
Google-ML-Automation Dec 14, 2025
164d8bb
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 15, 2025
238f05d
[Pallas:MGPU] Support not tiled 3D+ transposed in `swap` LANE lowerin…
allanrenucci Dec 15, 2025
8daf02a
[Pallas:MGPU] Add Pallas lowering for `lax.reshape_p` under WG semantic.
allanrenucci Dec 15, 2025
253ff54
Deprecate a number of rarely-used jax.core APIs
jakevdp Dec 13, 2025
9cfdc46
[test] fix toeplitz test under scipy 1.17
jakevdp Dec 15, 2025
a2e14e4
[Mosaic GPU] Implement GPU module unloading for Mosaic GPU custom calls.
allanrenucci Dec 15, 2025
3863a14
[Pallas:MGPU] Fix `swap_p` lowering rule for splat under LANE semantics.
allanrenucci Dec 15, 2025
8f2c25c
Merge pull request #33931 from jakevdp:core-deps
Google-ML-Automation Dec 15, 2025
d337b27
Add a JAX config to disable the preemption service.
mwhittaker Dec 15, 2025
90007e0
[pallas:mosaic] Allowed specifying tiling for SC_*_SUBCORE kernels
superbobry Dec 15, 2025
f88f6ec
Reverts 64a8e0de42681bdc26b89dba30fc2c979869ea82
ybaturina Dec 15, 2025
01c6db4
[pxla] Deprecate `jax.interpreters.pxla` symbols.
danielsuo Dec 15, 2025
8f431f6
Merge pull request #33956 from jakevdp:fix-scipy-test
Google-ML-Automation Dec 15, 2025
74174af
Merge pull request #33922 from jakevdp:sparse-note
Google-ML-Automation Dec 15, 2025
a8983f8
Expose profiler advanced configuration as a Python dict.
lukebaumann Dec 15, 2025
b4a3e16
Delete si_vjp from JAX. Dead weight at this point. It is now replaced…
yashk2810 Dec 15, 2025
a4bd50b
Add batch_size=0 support to jax.lax.map.
carlosgmartin Dec 15, 2025
d85a95d
Merge pull request #33965 from carlosgmartin:lax_map_batch_size_zero
Google-ML-Automation Dec 15, 2025
52ee821
[Pallas TPU] Add mesh axis info to pallas call metadata
sharadmv Dec 16, 2025
e914ced
[PjRt-IFRT] Create `ifrt::PjRtExecutable` only from `ifrt::PjRtCompil…
hyeontaek Dec 16, 2025
6395b5f
Remove jax_custom_vjp_disable_shape_check config option
yashk2810 Dec 16, 2025
d5df192
[pmap] Add more detailed documentation about `int` array indexing in …
danielsuo Dec 16, 2025
dd62dd7
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 16, 2025
dda9095
Suppress "healthcheck too slow" for OpsTest.test_select_n under ASAN.
belitskiy Dec 16, 2025
b8747f9
Disable the linalg test target under TSAN on CPU due to timeouts.
belitskiy Dec 16, 2025
6860386
[Mosaic GPU][NFC] Remove duplicate lowering rule for `vector.Broadcas…
allanrenucci Dec 16, 2025
3cbe137
[export] Fix the "with mesh" deprecation warning
gnecula Dec 11, 2025
f5a09b8
[Mosaic GPU] Add support for all kinds of TMA reductions.
dimitar-asenov Dec 16, 2025
21b8652
Update `rules_ml_toolchain` version.
ybaturina Dec 16, 2025
58fbb8a
Merge pull request #33885 from gnecula:fix_tests
Google-ML-Automation Dec 16, 2025
06d576b
Remove more deprecated BUILD aliases.
hawkinsp Dec 16, 2025
bb33967
Add Guidance for GCS Fuse for Compilation Cache in JAX
Google-ML-Automation Dec 16, 2025
b841f5b
Refactor indexing code
jakevdp Dec 16, 2025
ed4d825
Fix spmd_axis_name assert with explicit_mesh_axis in presence of mult…
yashk2810 Dec 16, 2025
ba024e3
Remove dynamic grid bounds restriction in Pallas Mosaic with memory s…
subhankarshah Dec 16, 2025
ad2b914
[Pallas] Allow no mesh context if just signaling on core axis
sharadmv Dec 16, 2025
8d4e9c1
Merge pull request #33552 from Ma-gi-cian:poisson-entropy
Google-ML-Automation Dec 17, 2025
9a391d7
Add `to_cotangent_aval` to HiType and use it by default in bwd pass
yashk2810 Dec 17, 2025
691bf92
[pmap] Created `NamedSharding` arrays when `jax_pmap_shmap_merge=True…
danielsuo Dec 17, 2025
4826ca7
Make c-api topology and PjRtClient versions produce identical platfor…
pschuh Dec 17, 2025
27d29b9
Add `reshard` to shard_map under full explicit mode if the input aval…
yashk2810 Dec 17, 2025
871245d
Add shape checks to `ct_check` function too
yashk2810 Dec 17, 2025
4e1d8f9
Merge pull request #33929 from mattjj:remove-backward-pass
Google-ML-Automation Dec 17, 2025
6940903
Dispatch to `reshard` in `broadcast_in_dim` if only sharding is chang…
yashk2810 Dec 17, 2025
9630a2b
Use `to_cotangent_aval()` in `SymbolicZero` check in _flatten_bwd
yashk2810 Dec 17, 2025
e63d2a4
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 17, 2025
b8d3757
[Pallas/interpreter] Refactor re-useable functionality out of the mai…
Google-ML-Automation Dec 17, 2025
b44a511
Reverts 58fbb8a3b7d4ecb860c4cf678cae1dd6f1079846
gnecula Dec 17, 2025
7f7b35e
Skip test_scalar_debug_check in tpu_sparsecore_pallas_debug_check_test.
belitskiy Dec 17, 2025
2040612
Propagate effects in the abstract eval rule for custom_vmap_p.
Google-ML-Automation Dec 17, 2025
39e6ee1
Export pyproject.toml in the BUILD file
superbobry Dec 17, 2025
86116f8
Causing segfaults in python_callback_test
belitskiy Dec 17, 2025
d3a8052
Skip async store reduction tests for int64/uint64 when x64 is disabled.
belitskiy Dec 17, 2025
44f67b0
Add `concrete_mesh` to reshard_p in pallas lowering rules which was m…
yashk2810 Dec 17, 2025
4c671ca
[Pallas MGPU] Adding support for squeezed block dims in the pipeline …
Rifur13 Dec 17, 2025
4408574
Add --dist=loadfile to pytest command in run_pytest_tpu.sh.
belitskiy Dec 17, 2025
579009c
[CI] Modify TPU v7x jobs to include bazel, and exclude some python ve…
MichaelHudgins Dec 17, 2025
254918c
Fix ct_check to account for `None` cotangents too
yashk2810 Dec 17, 2025
7b63d9d
[pallas] Deprecated `pltpu.ANY` in favor of `pl.ANY`
superbobry Dec 17, 2025
c593739
Update `rules_ml_toolchain` version to remove redundant `fake_nvshmem…
ybaturina Dec 17, 2025
5e9d77f
Merge pull request #33752 from jakevdp:indexing-refactor
Google-ML-Automation Dec 17, 2025
0193356
remove batching.primitive_batchers
mattjj Dec 17, 2025
fdef595
Add platform name to xla::ifrt::Device
ICGog Dec 17, 2025
b16514b
dont use primitive_batchers in batching.py
mattjj Dec 17, 2025
3e38924
Merge pull request #34012 from mattjj:remove-primitive-batchers
Google-ML-Automation Dec 17, 2025
3c4dd57
[JAX] Track the layout defaultness more precisely for arrays created …
hyeontaek Dec 17, 2025
4751be5
[Mosaic TPU] Support reshape which folds last two dims when the last …
yueshengys Dec 18, 2025
f81c201
[pallas:mosaic] Access memory spaces via `pltpu` directly
superbobry Dec 18, 2025
286e358
Automated Code Change
Google-ML-Automation Dec 18, 2025
97c557d
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 18, 2025
3349e7a
[Pallas] Allow inferring the backend from the provided compiler params.
bchetioui Dec 18, 2025
b9b2d4f
[pallas:mosaic] Recover `memory_space` from the aval in `aval_to_ir_t…
superbobry Dec 18, 2025
c1188e7
Add __getitem__ to backwards compatible shims.
Google-ML-Automation Dec 18, 2025
315bb93
[MGPU] Doc fix and shape size fix for broadcast in WGSplatFragLayout.
golechwierowicz Dec 18, 2025
69a6cca
[mpmd] Fix stage_id field nanobind compatibility
Google-ML-Automation Dec 18, 2025
e2815d5
[pallas:mosaic] `pltpu.emit_pipeline` now accepts block specs in HBM
superbobry Dec 18, 2025
987a025
Update JAX tests for separate input_striding and input_tiling on tran…
hawkinsp Dec 18, 2025
3f12502
Prepare for JAX release 0.8.2
belitskiy Dec 18, 2025
b8cd917
Reverts c1188e740cc019e85fa8a393c3b07eca18c8d7de
yashk2810 Dec 18, 2025
9af7216
[Pallas MGPU] Simplify how we keep track of the current output slices…
Rifur13 Dec 18, 2025
cd1d057
[pallas:mosaic] Use `pl.delay` instead of the deprecated `pltpu.delay`
superbobry Dec 18, 2025
eac6699
Fix `sub` jvp rule to broadcast shardings correctly so that tangent s…
yashk2810 Dec 18, 2025
369cceb
Merge in 'release/0.8.2'.
belitskiy Dec 18, 2025
c617efc
Drop into full manual mode via shard_map in pallas batching rule when…
yashk2810 Dec 18, 2025
2c96bd3
Add jax_disable_bwd_checks to disable bwd pass checks
yashk2810 Dec 18, 2025
317be93
Merge 'main' into postrelease/0.8.2
belitskiy Dec 18, 2025
b4a983c
Pin the NumPy version for the type/lint presubmit more strictly.
belitskiy Dec 18, 2025
8887722
Skip PickleTest.testPickleOfKeyArray0 on python3.11.
belitskiy Dec 18, 2025
6669f55
Merge pull request #34033 from belitskiy:postrelease/0.8.2
Google-ML-Automation Dec 18, 2025
45c0a4c
Suppress FutureWarning from TensorFlow imports regarding `np.object`.
belitskiy Dec 18, 2025
d167399
anselm refs
mattjj Dec 19, 2025
f53e1f3
[JAX] Refresh a custom layout if a buffer is copied across clients or…
hyeontaek Dec 19, 2025
662f93c
Update the requirements' lock files post the JAX 0.8.2 release.
belitskiy Dec 19, 2025
ea14daa
Fix multi_broadcast_in_dim which was doing replicated -> unreduced ca…
yashk2810 Dec 19, 2025
996cbeb
anselm refs
mattjj Dec 19, 2025
b54bdaf
Automated Code Change
Google-ML-Automation Dec 19, 2025
842804d
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 19, 2025
fe0c066
[Pallas:MGPU] Add support for indexing untiled dimensions under WG se…
allanrenucci Dec 19, 2025
b4447d9
[Mosaic GPU] Add support for indexing into splat array.
allanrenucci Dec 19, 2025
8c0c43c
Complete reorganization of tutorials and guides
melissawm Dec 10, 2025
55c215d
[MGPU] Add WGStridedFragLayout broadcast support on major dim.
golechwierowicz Dec 19, 2025
585016c
internal changes
al-bus Dec 19, 2025
d00f43f
[Mosaic GPU] Add a sanity check ensuring that layout inference only e…
bchetioui Dec 19, 2025
7bc428b
Automated Code Change
cushon Dec 19, 2025
e27c8dd
[dep] remove several dozen finalized deprecations for v0.9.0
jakevdp Dec 19, 2025
a26a544
Remove an erroneously added license.
belitskiy Dec 19, 2025
89401bd
Default make_mesh to Explicit axes for the upcoming 0.9.0 release
yashk2810 Dec 19, 2025
a82d51d
Merge pull request #33828 from rao-ashish:asrao/cross_host_transfer_b…
Google-ML-Automation Dec 19, 2025
0d0d491
Merge pull request #33884 from melissawm:advanced-ad
Google-ML-Automation Dec 19, 2025
cfc997c
Enable using custom hermetic NCCL version.
ybaturina Dec 19, 2025
f29de06
Merge pull request #34058 from jakevdp:finalize-deps
Google-ML-Automation Dec 19, 2025
a9dac11
[dep] remove deprecated `jax_safer_randint` configuration for JAX v0.9.0
Dec 19, 2025
f314722
[dep] finalize deprecation of lax.dot positional arguments for JAX v0…
Dec 19, 2025
d6944d3
[dep] remove references to the already-deprecated interpolation argument
jakevdp Dec 19, 2025
1085d3a
Remove pre-0.8.2 version guards.
belitskiy Dec 19, 2025
82f02eb
[dep] Remove deprecated jax.lib submodules for JAX v0.9.0.
Dec 19, 2025
3f8f860
Merge pull request #34060 from jakevdp:interpolation-dep
Google-ML-Automation Dec 19, 2025
ec5015f
[hijax] upgrade VJPHiPrimitive.def_fwd to support symbolic zeros
mattjj Dec 19, 2025
69518c5
Merge pull request #34065 from mattjj:custom-vjp3-revival
Google-ML-Automation Dec 19, 2025
4f0f93e
Integrate LLVM at llvm/llvm-project@7d381f2a5634
Google-ML-Automation Dec 19, 2025
514b7c4
[dep] make jax_default_dtype_bits config a no-op for JAX v0.9.0.
Dec 19, 2025
dd7ff37
Support weakref_lru_cache.evict.
pschuh Dec 20, 2025
d3ed2f4
Add replicated -> unreduced test coverage
yashk2810 Dec 20, 2025
20bdca7
Fix a broken sharded -> unreduced test
yashk2810 Dec 20, 2025
6469f17
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 20, 2025
0ad46d9
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 21, 2025
f29645b
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 22, 2025
2da9713
[Mosaic GPU] Add `nvshmemx_cumodule_finalize` support.
allanrenucci Dec 22, 2025
6d41fa0
[Autotuner] Prepare the CuDNN fusion test for the new autotuner.
derdrdirk Dec 22, 2025
c25a514
Add programmatic profiling test with session_id support in JAX.
subhamsoni-google Dec 22, 2025
9fe0430
[Mosaic] Extend tpu.pack_elementwise to support non-32-bit integers.
WindQAQ Dec 22, 2025
5173d18
Adjust tile_n assert to accommodate 2 cta in tcgen05 blockscale mma.
Google-ML-Automation Dec 22, 2025
82ae1b1
Update XLA dependency to use revision http://github.com/openxla/xla/c…
Google-ML-Automation Dec 23, 2025
5c2e646
Add iree_metal to platforms with buffer donation support
robtaylor Dec 24, 2025
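
A minimal sketch of the `with mesh:` migration named in commit 890ccd2 above. The replacement context manager `jax.set_mesh` is quoted from the commit title itself; the mesh construction and the work done under it are illustrative assumptions, not code from this PR.

```python
import jax
import jax.numpy as jnp

# Build a 1D mesh over all available devices (illustrative setup).
mesh = jax.make_mesh((jax.device_count(),), ('x',))

# Deprecated:
#     with mesh:
#         ...
# Replacement quoted in commit 890ccd2:
with jax.set_mesh(mesh):
    y = jnp.zeros(8)  # runs with `mesh` as the ambient mesh
```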
13 changes: 10 additions & 3 deletions .bazelrc
@@ -178,20 +178,21 @@ common:clang --copt=-Wno-error=c23-extensions
common:cuda_v12 --repo_env=HERMETIC_CUDA_VERSION="12.9.1"
common:cuda_v12 --repo_env=HERMETIC_CUDNN_VERSION="9.8.0"
common:cuda_v12 --repo_env=HERMETIC_NVSHMEM_VERSION="3.3.9"
common:cuda_v12 --repo_env=HERMETIC_NCCL_VERSION="2.27.7"
# "sm" means we emit only cubin, which is forward compatible within a GPU generation.
# "compute" means we emit both cubin and PTX, which is larger but also forward compatible to future GPU generations.
common:cuda_v12 --repo_env HERMETIC_CUDA_COMPUTE_CAPABILITIES="sm_50,sm_60,sm_70,sm_80,sm_90,sm_100,compute_120"

common:cuda_v13 --repo_env=HERMETIC_CUDA_VERSION="13.0.0"
common:cuda_v13 --repo_env=HERMETIC_CUDNN_VERSION="9.12.0"
common:cuda_v13 --repo_env=HERMETIC_NVSHMEM_VERSION="3.3.20"
common:cuda_v13 --repo_env=HERMETIC_NCCL_VERSION="2.27.7"
common:cuda_v13 --repo_env HERMETIC_CUDA_COMPUTE_CAPABILITIES="sm_75,sm_80,sm_90,sm_100,compute_120"

common:cuda_common --repo_env TF_NEED_CUDA=1
common:cuda_common --repo_env TF_NCCL_USE_STUB=1
common:cuda_common --@local_config_cuda//:enable_cuda
common:cuda_common --@local_config_cuda//cuda:include_cuda_libs=true
common:cuda_common --@cuda_driver//:include_cuda_umd_libs=true

# Force the linker to set RPATH, not RUNPATH. When resolving dynamic libraries,
# ld.so prefers in order: RPATH, LD_LIBRARY_PATH, RUNPATH. JAX sets RPATH to
Expand All @@ -218,7 +219,8 @@ common:cuda --config=cuda12

# This config is used for building targets with CUDA/NVSHMEM libraries from stubs.
common:cuda_libraries_from_stubs --@local_config_cuda//cuda:include_cuda_libs=false
common:cuda_libraries_from_stubs --@cuda_driver//:include_cuda_umd_libs=false

common:hermetic_cuda_umd --@cuda_driver//:include_cuda_umd_libs=true

# common CUDA and other C++ targets with Clang
common:build_cuda_with_clang --@local_config_cuda//:cuda_compiler=clang
@@ -396,6 +398,9 @@ common:rbe --spawn_strategy=remote,worker,standalone,local
common:rbe --remote_download_toplevel
test:rbe --test_env=USER=anon

common:rbe_cpu_pool --repo_env=REMOTE_GPU_TESTING=0
common:rbe_gpu_pool --repo_env=REMOTE_GPU_TESTING=1

# RBE configs for Linux x86
# Set the remote worker pool
common:rbe_linux_x86_64_base --remote_instance_name=projects/tensorflow-testing/instances/default_instance
@@ -416,7 +421,9 @@ common:rbe_linux_x86_64 --config=rbe_linux_x86_64_base
common:rbe_linux_x86_64 --config=ci_linux_x86_64

common:rbe_linux_x86_64_cuda_common --config=rbe_linux_x86_64_base
common:rbe_linux_x86_64_cuda_common --repo_env=REMOTE_GPU_TESTING=1
common:rbe_linux_x86_64_cuda_common --config=rbe_gpu_pool
# Update UMD version when RBE CUDA driver is updated.
common:rbe_linux_x86_64_cuda_common --repo_env=HERMETIC_CUDA_UMD_VERSION="13.0.1"

common:rbe_linux_x86_64_cuda12 --config=rbe_linux_x86_64_cuda_common
common:rbe_linux_x86_64_cuda12 --config=ci_linux_x86_64_cuda12
1 change: 1 addition & 0 deletions .github/actionlint.yaml
@@ -16,4 +16,5 @@ self-hosted-runner:
- "linux-x86-ct6e-180-8tpu" # Linux X86 TPU runner using ct6e-hightpu-8t machine with 2x4 topology.
- "linux-x86-ct6e-180-4tpu" # Linux X86 TPU runner using ct6e-hightpu-4t machine with 2x2 topology.
- "linux-x86-ct4p-240-4tpu" # Linux X86 TPU runner using ct4p-hightpu-4t machine with 2x2x1 topology.
- "linux-x86-tpu7x-224-4tpu" # Linux X86 TPU runner using tpu7x-224 machine with 4 TPU chips (8 cores) and 2x2x1 topology.
- "linux-x86_64-cirrascale-64-8gpu-amd-mi250" # AMD runner
10 changes: 6 additions & 4 deletions .github/workflows/bazel_cuda_h100_b200.yml
@@ -41,10 +41,10 @@ jobs:
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46
with:
files: |
jax/jax/_src/pallas/mosaic_gpu/**
jax/jax/experimental/mosaic/gpu/**
jax/jaxlib/mosaic/dialect/gpu/**
jax/jaxlib/mosaic/gpu/**
jax/_src/pallas/mosaic_gpu/**
jax/experimental/mosaic/gpu/**
jaxlib/mosaic/dialect/gpu/**
jaxlib/mosaic/gpu/**
.github/workflows/bazel_cuda_h100_b200.yml
- name: List all changed files
env:
@@ -76,6 +76,7 @@ jobs:
bazel test \
--config=ci_linux_x86_64_cuda \
--config=ci_rbe_cache \
--config=hermetic_cuda_umd \
--repo_env=HERMETIC_PYTHON_VERSION="3.14" \
--repo_env=HERMETIC_CUDNN_VERSION="9.11.0" \
--repo_env=HERMETIC_CUDA_UMD_VERSION="13.0.0" \
@@ -120,6 +121,7 @@ jobs:
bazel test \
--config=ci_linux_x86_64_cuda \
--config=ci_rbe_cache \
--config=hermetic_cuda_umd \
--repo_env=HERMETIC_PYTHON_VERSION="3.14" \
--repo_env=HERMETIC_CUDNN_VERSION="9.11.0" \
--repo_env=HERMETIC_CUDA_UMD_VERSION="13.0.0" \
3 changes: 3 additions & 0 deletions .github/workflows/bazel_cuda_presubmit.yml
@@ -73,6 +73,9 @@ jobs:
- python: "3.14"
enable-x64: 1
jaxlib-version: "pypi_latest"
# Exclude CUDA 12 on jaxlib head because it's too slow.
- cuda-version: "12"
jaxlib-version: "head"
name: "Bazel single accelerator ${{ format('{0}', 'CUDA tests') }}"
# End Presubmit Naming Check github-cuda-presubmits
with:
2 changes: 1 addition & 1 deletion .github/workflows/build_artifacts.yml
@@ -115,7 +115,7 @@ jobs:
name: "${{ inputs.artifact }},
${{ (contains(inputs.runner, 'linux-x86') && 'linux x86') ||
(contains(inputs.runner, 'linux-arm64') && 'linux arm64') ||
(contains(inputs.runner, 'windows-x86') && 'windows x86') }}, py ${{ inputs.python }} ${{ (contains(inputs.artifact, 'cuda') && format(', cuda {0}', inputs.cuda-version)) || '' }}, clone main XLA=${{ inputs.clone_main_xla }}"
(contains(inputs.runner, 'windows-x86') && 'windows x86') }}, py ${{ inputs.python }}${{ (contains(inputs.artifact, 'cuda') && format(', cuda {0}', inputs.cuda-version)) || '' }}, clone main XLA=${{ inputs.clone_main_xla }}"

# Map the job outputs to step outputs
outputs:
3 changes: 2 additions & 1 deletion .github/workflows/cloud-tpu-ci-nightly.yml
@@ -35,7 +35,8 @@ jobs:
tpu: [
{type: "v4-8", cores: "4", runner: "linux-x86-ct4p-240-4tpu"},
{type: "v5e-8", cores: "8", runner: "linux-x86-ct5lp-224-8tpu"},
{type: "v6e-8", cores: "8", runner: "linux-x86-ct6e-180-8tpu"}
{type: "v6e-8", cores: "8", runner: "linux-x86-ct6e-180-8tpu"},
{type: "v7x-8", cores: "8", runner: "linux-x86-tpu7x-224-4tpu"}
]
python-version: ["3.11"]
# Exclude v6e-8 tests for pypi_latest for resource constraints.
54 changes: 0 additions & 54 deletions .github/workflows/metal_plugin_ci.yml

This file was deleted.

4 changes: 3 additions & 1 deletion .github/workflows/wheel_tests_continuous.yml
@@ -241,7 +241,8 @@ jobs:
tpu-specs: [
# {type: "v3-8", cores: "4"}, # Enable when we have the v3 type available
{type: "v5e-8", cores: "8", runner: "linux-x86-ct5lp-224-8tpu"},
{type: "v6e-8", cores: "8", runner: "linux-x86-ct6e-180-8tpu"}
{type: "v6e-8", cores: "8", runner: "linux-x86-ct6e-180-8tpu"},
{type: "v7x-8", cores: "8", runner: "linux-x86-tpu7x-224-4tpu"}
]
libtpu-version-type: ["nightly"]
name: "Pytest TPU (JAX artifacts version = ${{ format('{0}', 'head') }})"
@@ -267,6 +268,7 @@
tpu-specs: [
{type: "v4-8", cores: "4", runner: "linux-x86-ct4p-240-4tpu"},
{type: "v5e-8", cores: "8", runner: "linux-x86-ct5lp-224-8tpu"},
{type: "v7x-8", cores: "8", runner: "linux-x86-tpu7x-224-4tpu"},
]
libtpu-version-type: ["nightly"]
name: "Bazel tests TPU (JAX artifacts version = ${{ format('{0}', 'head') }})"
17 changes: 12 additions & 5 deletions .github/workflows/wheel_tests_nightly_release.yml
@@ -177,7 +177,8 @@ jobs:
tpu-specs: [
# {type: "v3-8", cores: "4"}, # Enable when we have the v3 type available
{type: "v5e-8", cores: "8", runner: "linux-x86-ct5lp-224-8tpu"},
{type: "v6e-8", cores: "8", runner: "linux-x86-ct6e-180-8tpu"}
{type: "v6e-8", cores: "8", runner: "linux-x86-ct6e-180-8tpu"},
{type: "v7x-8", cores: "8", runner: "linux-x86-tpu7x-224-4tpu"}
]
libtpu-version-type: ["pypi_latest", "nightly"]
exclude:
@@ -192,13 +193,17 @@
- tpu-specs:
type: "v6e-8"
python: "3.13-nogil"
# Run min and max Python versions for v5e-8
# Run max Python versions for v5e-8
- tpu-specs:
type: "v5e-8"
python: "3.11"
- tpu-specs:
type: "v5e-8"
python: "3.12"
# Run min and max Python versions for v7x-8
- tpu-specs:
type: "v7x-8"
python: "3.12"

name: "Pytest TPU (JAX artifacts version = ${{ startsWith(github.ref_name, 'release/') && 'latest release' || 'nightly' }})"
with:
@@ -222,6 +227,7 @@
# {type: "v3-8", cores: "4"}, # Enable when we have the v3 type available
{type: "v4-8", cores: "4", runner: "linux-x86-ct4p-240-4tpu"},
{type: "v5e-8", cores: "8", runner: "linux-x86-ct5lp-224-8tpu"},
{type: "v7x-8", cores: "8", runner: "linux-x86-tpu7x-224-4tpu"},
]
libtpu-version-type: ["pypi_latest", "nightly"]
exclude:
Expand All @@ -239,12 +245,13 @@ jobs:
- tpu-specs:
type: "v4-8"
python: "3.13-nogil"
# Run min and max Python versions for v5e-8
# Run max Python versions for v5e-8
- tpu-specs:
type: "v5e-8"
python: "3.11"
python: "3.12"
# Run min and max Python versions for v7x-8
- tpu-specs:
type: "v5e-8"
type: "v7x-8"
python: "3.12"

name: "Bazel tests TPU (JAX artifacts version = ${{ startsWith(github.ref_name, 'release/') && 'latest release' || 'nightly' }})"
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -41,7 +41,7 @@ repos:
- id: mypy
files: (jax/|tests/typing_test\.py)
exclude: jax/_src/basearray.py|jax/numpy/__init__.py|jax/nn/__init__.py|jaxlib/_jax/.* # Use pyi instead
additional_dependencies: [types-requests==2.31.0, numpy>=2.2.0, scipy-stubs]
additional_dependencies: [types-requests==2.31.0, numpy~=2.3.0, scipy-stubs]
args: [--config=pyproject.toml]

- repo: https://github.com/mwouts/jupytext
52 changes: 38 additions & 14 deletions BUILD.bazel
@@ -30,20 +30,43 @@ wheel_sources(
data_srcs = ["//jax"],
py_srcs = [
"//jax",
"//jax:compilation_cache",
"//jax:experimental",
"//jax:experimental_colocated_python",
"//jax:experimental_sparse",
"//jax:experimental_buffer_callback",
"//jax:experimental_serialize_executable",
"//jax:pallas_experimental_gpu_ops",
"//jax:pallas_fuser",
"//jax:pallas_gpu_ops",
"//jax:pallas_mosaic_gpu",
"//jax:pallas_tpu_ops",
"//jax:pallas_triton",
"//jax:source_mapper",
"//jax:sparse_test_util",
"//jax/example_libraries:example_libraries",
"//jax/example_libraries:optimizers",
"//jax/example_libraries:stax",
"//jax/experimental:buffer_callback",
"//jax/experimental:checkify",
"//jax/experimental:colocated_python",
"//jax/experimental:compilation_cache",
"//jax/experimental:compute_on",
"//jax/experimental:custom_dce",
"//jax/experimental:custom_partitioning",
"//jax/experimental:fused",
"//jax/experimental:hijax",
"//jax/experimental:jet",
"//jax/experimental:layout",
"//jax/experimental:mesh_utils",
"//jax/experimental:multihost_utils",
"//jax/experimental:ode",
"//jax/experimental:pallas_experimental_gpu_ops",
"//jax/experimental:pallas_fuser",
"//jax/experimental:pallas_gpu_ops",
"//jax/experimental:pallas_mosaic_gpu",
"//jax/experimental:pallas_tpu_ops",
"//jax/experimental:pallas_triton",
"//jax/experimental:pjit",
"//jax/experimental:profiler",
"//jax/experimental:rnn",
"//jax/experimental:scheduling_groups",
"//jax/experimental:serialize_executable",
"//jax/experimental:shard_alike",
"//jax/experimental:shard_map",
"//jax/experimental:source_mapper",
"//jax/experimental:sparse_test_util",
"//jax/experimental:sparse",
"//jax/experimental:topologies",
"//jax/experimental:transfer",
"//jax/experimental:xla_metadata",
"//jax/experimental",
"//jax/_src:lax_reference",
"//jax/_src:internal_export_back_compat_test_util",
"//jax/_src:internal_export_back_compat_test_data",
@@ -123,6 +146,7 @@ genrule(
"//jax/experimental/mosaic/gpu/examples:flash_attention.py",
"//jax/experimental/mosaic/gpu/examples:matmul.py",
"//jax/_src:test_multiprocess",
"//jax/_src/pallas:pallas_test_util",
],
outs = ["wheel_additives.zip"],
cmd = "$(location @bazel_tools//tools/zip:zipper) c $@ $(SRCS)",
12 changes: 12 additions & 0 deletions CHANGELOG.md
@@ -16,9 +16,19 @@ When releasing, please add the new-release-boilerplate to docs/pallas/CHANGELOG.

## Unreleased

## JAX 0.8.2 (December 18, 2025)

* Deprecations
* `jax.lax.pvary` has been deprecated.
Please use `jax.lax.pcast(..., to='varying')` as the replacement.
* Complex arguments passed to {func}`jax.numpy.arange` now result in a
deprecation warning, because the output is poorly-defined.
* From {mod}`jax.core` a number of symbols are newly deprecated including:
`call_impl`, `get_aval`, `mapped_aval`, `subjaxprs`, `set_current_trace`,
`take_current_trace`, `traverse_jaxpr_params`, `unmapped_aval`,
`AbstractToken`, and `TraceTag`.
* All symbols in {mod}`jax.interpreters.pxla` are deprecated. These are
primarily JAX internal APIs, and users should not rely on them.

* Changes:
* jax's `Tracer` no longer inherits from `jax.Array` at runtime. However,
@@ -29,6 +39,8 @@ When releasing, please add the new-release-boilerplate to docs/pallas/CHANGELOG.
For the moment, during Python type checking, we continue to declare `Tracer`
as a subclass of `Array`, however we expect to remove this in a future
release.
* `jax.experimental.si_vjp` has been deleted.
`jax.vjp` subsumes its functionality.

## JAX 0.8.1 (November 18, 2025)

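A minimal sketch of two deprecations from the 0.8.2 changelog entry above. The `jax.lax.pcast(..., to='varying')` form is quoted from the changelog; the axis name 'x', the `axis_name` keyword, and the `shard_map` context the pvary/pcast call would run under are illustrative assumptions.

```python
import jax.numpy as jnp

# jax.numpy.arange: complex arguments now raise a deprecation warning
# because the output is poorly defined. Build a real-valued range and
# cast explicitly instead:
x = jnp.arange(5).astype(jnp.complex64)

# jax.lax.pvary -> jax.lax.pcast, inside a shard_map over mesh axis 'x':
#   before: y = jax.lax.pvary(y, 'x')
#   after:  y = jax.lax.pcast(y, axis_name='x', to='varying')
```
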
13 changes: 7 additions & 6 deletions WORKSPACE
@@ -1,6 +1,7 @@
load("//third_party:repo.bzl", "tf_http_archive", "tf_mirror_urls")

# The XLA commit is determined by third_party/xla/revision.bzl.
load("//third_party/xla:workspace.bzl", jax_xla_workspace = "repo")
load("//third_party:repo.bzl", "tf_http_archive", "tf_mirror_urls")

jax_xla_workspace()

@@ -12,15 +13,15 @@ load("@xla//:workspace3.bzl", "xla_workspace3")

xla_workspace3()

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# Initialize Hermetic toolchains
# Details: https://github.com/google-ml-infra/rules_ml_toolchain
tf_http_archive(
name = "rules_ml_toolchain",
sha256 = "8123d826b0a4c5ceda767abc8092419fcc980c3ce45fb0f438b101fb886c014c",
strip_prefix = "rules_ml_toolchain-552b53a04a86fd5cdb4d5091e7420411d8b2a045",
urls = tf_mirror_urls("https://github.com/google-ml-infra/rules_ml_toolchain/archive/552b53a04a86fd5cdb4d5091e7420411d8b2a045.tar.gz"),
sha256 = "1c2c530a054e9e8b3c811ec21ed8a687fc865bec3abbc8ff65beb829b1d67ae4",
strip_prefix = "rules_ml_toolchain-6734d2a174bf29e731d3f473743d1cc1a86100c3",
urls = tf_mirror_urls(
"https://github.com/google-ml-infra/rules_ml_toolchain/archive/6734d2a174bf29e731d3f473743d1cc1a86100c3.tar.gz",
),
)

load(
2 changes: 1 addition & 1 deletion benchmarks/mosaic/BUILD
@@ -35,7 +35,7 @@ jax_multiplatform_test(
enable_configs = ["gpu_h100"],
tags = ["notap"],
deps = [
"//jax:mosaic_gpu",
"//jax/experimental:mosaic_gpu",
"//jax/experimental/mosaic/gpu/examples:matmul",
"//third_party/py/google_benchmark",
] + py_deps("absl/testing") + py_deps("numpy"),
2 changes: 1 addition & 1 deletion build/requirements.in
@@ -18,7 +18,7 @@ wheel
# the requirements files.
jaxlib

jax-cuda12-plugin; sys_platform == "linux" and python_version<"3.14"
jax-cuda12-plugin; sys_platform == "linux"
jax-cuda13-plugin
jax-cuda12-pjrt; sys_platform == "linux"
jax-cuda13-pjrt