Mingluo Su1, Huan Wang1*
1Westlake University
*Corresponding author: wanghuan [at] westlake [dot] edu [dot] cn
Pruning is widely recognized as an effective method for reducing the parameters of large language models (LLMs), potentially leading to more efficient deployment and inference. One classic and prominent line of one-shot LLM pruning leverages second-order gradients (i.e., the Hessian), represented by the pioneering work SparseGPT. However, the predefined left-to-right pruning order in SparseGPT leads to suboptimal performance when the weights exhibit columnar patterns. This paper studies the effect of pruning order under the SparseGPT framework. The analyses lead us to propose ROSE, a reordered SparseGPT method that prioritizes weight columns with larger potential pruning errors so that they are processed earlier. ROSE first performs pre-pruning to identify weights that are highly likely to be pruned, and estimates both column-wise and block-wise pruning loss. The relative range of block loss is used as a metric to identify columnar layers and perform adaptive reordering for them. In the reordering operation, columns within each block are reordered in descending order of column loss, while blocks are reordered in descending order of block loss. Substantial empirical results on prevalent LLMs (LLaMA2-7B/13B/70B, LLaMA3-8B, Mistral-7B) demonstrate that ROSE surpasses the original SparseGPT and other competing pruning methods.
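For concreteness, below is a minimal PyTorch sketch of the "relative range of block loss" criterion mentioned above. The helper name, the threshold tau, and the (max - min) / mean normalization are illustrative assumptions for this sketch, not necessarily the exact formulation used in the paper.

import torch

def is_columnar_layer(block_loss: torch.Tensor, tau: float = 1.0) -> bool:
    """Flag a layer as 'columnar' (and thus worth reordering) when its
    block-wise pruning losses spread widely relative to their mean.

    block_loss: 1-D tensor with one estimated pruning loss per column block.
    tau: illustrative threshold on the relative range (an assumption here).
    """
    rel_range = (block_loss.max() - block_loss.min()) / block_loss.mean()
    return rel_range.item() > tau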
(a) Illustration of the difference between SparseGPT and ROSE. Orange represents weight importance: the darker the color, the greater the importance. In SparseGPT, the number of weights available for error compensation (shown in dark blue) decreases during pruning, limiting recovery if high-error weights are pruned late. ROSE reorders columns with potentially large pruning errors to the front so they are pruned earlier. In this way, more parameters remain available to compensate for the larger errors. (b) Steps of our proposed ROSE. Given the dense weight matrix W, we calculate the importance score matrix S and split it into blocks based on the block size BS. The smallest p% of values from each block are selected as the loss matrix L. Column loss and block loss are calculated from the loss matrix. Columns within each block are reordered in descending order of column loss, and blocks are reordered in descending order of block loss.
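The following is a minimal PyTorch sketch of the reordering step outlined in the caption. The helper name rose_reorder and the parameters block_size and p are illustrative and do not correspond to the released implementation; how the importance score matrix S is computed is specified in the paper and is simply taken as given here.

import torch

def rose_reorder(S: torch.Tensor, block_size: int = 128, p: float = 0.5) -> torch.Tensor:
    """Sketch of ROSE's column reordering (hypothetical helper, not the released API).

    S: importance-score matrix of shape (out_features, in_features).
    Returns a column permutation in which blocks are sorted by block loss
    (descending) and, within each block, columns are sorted by column loss
    (descending).
    """
    n_cols = S.shape[1]
    assert n_cols % block_size == 0, "sketch assumes the width is divisible by block_size"
    n_blocks = n_cols // block_size

    col_loss = torch.zeros(n_cols, dtype=S.dtype)
    block_loss = torch.zeros(n_blocks, dtype=S.dtype)
    for b in range(n_blocks):
        blk = S[:, b * block_size:(b + 1) * block_size]
        # "Pre-pruning": keep the smallest p% of scores in this block as the loss matrix L.
        k = max(1, int(p * blk.numel()))
        thresh = blk.flatten().kthvalue(k).values
        L = torch.where(blk <= thresh, blk, torch.zeros_like(blk))
        col_loss[b * block_size:(b + 1) * block_size] = L.sum(dim=0)
        block_loss[b] = L.sum()

    # Reorder columns within each block by column loss (descending).
    perm = torch.arange(n_cols)
    for b in range(n_blocks):
        idx = torch.argsort(col_loss[b * block_size:(b + 1) * block_size], descending=True)
        perm[b * block_size:(b + 1) * block_size] = b * block_size + idx
    # Reorder whole blocks by block loss (descending).
    block_order = torch.argsort(block_loss, descending=True)
    perm = perm.view(n_blocks, block_size)[block_order].reshape(-1)
    return perm

In a typical use, the permutation would be applied as W[:, perm] (with the Hessian permuted accordingly) before running the SparseGPT pruning pass, and inverted afterwards to restore the original column order.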
git clone https://github.com/sunshine-0903/ROSE
cd ROSE
pip install -r requirements.txt

To prune, you can run the following command:
export HF_ENDPOINT=https://hf-mirror.com
export CUDA_VISIBLE_DEVICES=your/gpu/id
python main.py \
--model_path your/model/path \
--sparsity_type unstructured \
--sparsity_ratio 0.7 \
--prune_method ROSE \
--eval_zero_shot \
--tasks winogrande boolq piqa openbookqa hellaswag arc_easy arc_challenge

- This source code is derived from the PyTorch implementations of SparseGPT, Wanda, DSnoT, Rethinking LLM Pruning, and GPTAQ. We thank the authors for their excellent open-source contributions.
- The README file is inspired by SparseSSM, OBS-Diff, and MergeMix.
If you find this work useful, please consider citing:
@article{su2026rose,
title={ROSE: Reordered SparseGPT for More Accurate One-Shot Large Language Models Pruning},
author={Su, Mingluo and Wang, Huan},
journal={arXiv preprint arXiv:2602.14751},
year={2026}
}