Masked Autoregressive (MAR) models have emerged as a promising approach to image generation, with the potential to surpass traditional autoregressive models in computational efficiency by decoding many tokens in parallel. However, their reliance on bidirectional self-attention is inherently incompatible with conventional KV caching, creating computational bottlenecks that undermine the efficiency they promise.
To address this problem, this paper studies a caching mechanism for MAR that leverages two types of redundancy:
- Token Redundancy: a large portion of tokens have nearly identical representations across adjacent decoding steps, so their features can be cached at an earlier step and reused at later steps.
- Condition Redundancy: the difference between the conditional and unconditional outputs in classifier-free guidance takes very similar values in adjacent steps, so it can likewise be cached and reused.
Based on these two redundancies, we propose LazyMAR, which introduces a dedicated caching mechanism for each (a minimal sketch of both ideas follows below). LazyMAR is training-free and plug-and-play for all MAR models. Experimental results demonstrate that our method achieves 2.83× acceleration with almost no drop in generation quality.
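To make the two ideas concrete, here is a minimal, self-contained PyTorch sketch. All names (`select_tokens_to_recompute`, `cfg_with_cached_residual`) are hypothetical illustrations of the two redundancies, not the actual LazyMAR API:

```python
import torch

def select_tokens_to_recompute(feat_prev, feat_curr, recompute_ratio=0.25):
    """Token redundancy: most tokens barely change between adjacent decoding
    steps, so only the most-changed ones need a fresh forward pass; the rest
    reuse their cached features. (Hypothetical helper, not the LazyMAR API.)"""
    # Per-token feature change between steps t-1 and t: shape (B, N)
    change = (feat_curr - feat_prev).norm(dim=-1)
    k = max(1, int(recompute_ratio * change.shape[-1]))
    # Indices of the k tokens whose representations changed the most.
    return change.topk(k, dim=-1).indices

def cfg_with_cached_residual(cond_out, cached_diff=None, uncond_out=None, scale=3.25):
    """Condition redundancy: the (conditional - unconditional) residual used by
    classifier-free guidance is similar across adjacent steps, so it can be
    cached and the unconditional pass skipped on most steps."""
    if uncond_out is not None:
        diff = cond_out - uncond_out   # refresh the residual this step
    else:
        diff = cached_diff             # reuse the residual from an earlier step
    # Standard CFG: uncond + scale * (cond - uncond) == cond + (scale - 1) * diff
    guided = cond_out + (scale - 1.0) * diff
    return guided, diff
```

In an actual decoder, both caches would be refreshed on a small subset of steps and reused in between; the schedules and thresholds described in the paper determine when refreshes happen.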
- Python >= 3.8
- CUDA >= 11.8
- PyTorch >= 2.2.2
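A quick way to check that the local environment satisfies these requirements (plain PyTorch introspection; nothing LazyMAR-specific):

```python
import torch

# Report the installed versions; expect PyTorch >= 2.2.2 built against CUDA >= 11.8.
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())
```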
Download the code:

```bash
git clone https://github.com/feihongyan1/LazyMAR.git
cd LazyMAR
```

A suitable conda environment named lazymar can be created and activated with:

```bash
conda env create -f environment.yaml
conda activate lazymar
```

Download pre-trained VAE and LazyMAR models (weights are provided by MAR):
| LazyMAR Model | FID-50K | Inception Score | #params | Download Link |
|---|---|---|---|---|
| LazyMAR-B | 2.45 | 281.3 | 208M | Dropbox |
| LazyMAR-L | 1.93 | 297.4 | 479M | Dropbox |
| LazyMAR-H | 1.69 | 299.2 | 943M | Dropbox |
VAE Model: Download the pre-trained VAE model from Dropbox
Place the downloaded checkpoints in the pretrained_models/ directory:

```
pretrained_models/
├── mar/
│   ├── mar_base/
│   │   └── checkpoint-last.pth
│   ├── mar_large/
│   │   └── checkpoint-last.pth
│   └── mar_huge/
│       └── checkpoint-last.pth
└── vae/
    └── kl16.ckpt
```
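Optionally, a small sanity check (plain Python, mirroring the layout above) confirms everything landed in the right place:

```python
import os

# Expected checkpoint locations, matching the directory tree above.
checkpoints = [
    "pretrained_models/mar/mar_base/checkpoint-last.pth",
    "pretrained_models/mar/mar_large/checkpoint-last.pth",
    "pretrained_models/mar/mar_huge/checkpoint-last.pth",
    "pretrained_models/vae/kl16.ckpt",
]

for path in checkpoints:
    status = "found" if os.path.exists(path) else "MISSING"
    print(f"{status}: {path}")
```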
Generate 50k images for evaluation on ImageNet-256:

```bash
CUDA_VISIBLE_DEVICES=0,1 torchrun --master_port=26586 --nproc_per_node=2 \
    --nnodes=1 --node_rank=0 eval.py \
    --num_images 50000 \
    --eval_bsz 128 \
    --num_iter 64 \
    --lazy_mar
```

To compare with the baseline MAR model without acceleration:
```bash
CUDA_VISIBLE_DEVICES=0,1 torchrun --master_port=26586 --nproc_per_node=2 \
    --nnodes=1 --node_rank=0 eval.py \
    --num_images 50000 \
    --eval_bsz 128 \
    --num_iter 64
```

Key arguments:

- `--lazy_mar`: Enable LazyMAR acceleration (training-free and plug-and-play)
- `--num_iter`: Number of autoregressive iterations (default: 64)
- `--eval_bsz`: Batch size for evaluation (default: 128)
- `--num_images`: Number of images to generate (default: 50000)
- `--model`: Model architecture: `mar_base`, `mar_large`, or `mar_huge` (default: `mar_huge`)
- `--cfg`: Classifier-free guidance scale (default: 3.25)
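For example, a lighter single-GPU run of mar_base with LazyMAR enabled could look as follows (only the flags documented above are used; the reduced image count and batch size are illustrative choices, not recommended evaluation settings):

```bash
CUDA_VISIBLE_DEVICES=0 torchrun --master_port=26586 --nproc_per_node=1 \
    --nnodes=1 --node_rank=0 eval.py \
    --model mar_base \
    --num_images 10000 \
    --eval_bsz 64 \
    --num_iter 64 \
    --cfg 3.25 \
    --lazy_mar
```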
LazyMAR achieves significant speedup while maintaining generation quality on ImageNet 256×256 class-conditional generation.
Testing Hardware: NVIDIA GeForce RTX 3090 GPU
| Model | Latency (GPU) | Latency (CPU) | FLOPs | Speedup | FID ↓ | IS ↑ |
|---|---|---|---|---|---|---|
| MAR-H | 1.74s | 116.61s | 69.06T | 1.00× | 1.59 | 299.1 |
| LazyMAR-H | 0.75s | 43.12s | 24.38T | 2.83× | 1.69 | 299.2 |
| MAR-L | 0.93s | 59.66s | 35.05T | 1.00× | 1.82 | 296.1 |
| LazyMAR-L | 0.40s | 22.82s | 12.52T | 2.80× | 1.93 | 297.4 |
| MAR-B | 0.47s | 28.97s | 15.49T | 1.00× | 2.32 | 281.1 |
| LazyMAR-B | 0.21s | 11.08s | 5.54T | 2.80× | 2.45 | 281.3 |
- ⚡ High Acceleration: Achieves 2.83× speedup with 64.7% FLOPs reduction and minimal quality loss
- 🔌 Training-Free & Plug-and-Play: No retraining required; simple integration via the `--lazy_mar` flag
- 💡 Dual Caching Mechanisms: Leverages both token redundancy and condition redundancy
- 💾 Memory Efficient: Intelligent caching reduces redundant computation
- Thanks to MAR for their great work and codebase upon which we build LazyMAR.
- Thanks to the community for the pre-trained VAE models.
If you have any questions, feel free to contact us through email:
If you find our work useful, please consider citing:
```bibtex
@InProceedings{Yan_2025_ICCV,
    author    = {Yan, Feihong and Wei, Qingyan and Tang, Jiayi and Li, Jiajun and Wang, Yulin and Hu, Xuming and Li, Huiqi and Zhang, Linfeng},
    title     = {LazyMAR: Accelerating Masked Autoregressive Models via Feature Caching},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {15552-15561}
}
```

This project is licensed under the MIT License - see the LICENSE file for details.
Enjoy!
