Description
Dear Authors,
Sorry for the intrusion once more.
As I understand it, the original GPTQ algorithm supports a range of group-wise quantization settings, such as group sizes of -1, 128, and 64. Reviewing the code (assuming my reading is correct), it appears that while batch_GPTQ itself handles arbitrary group sizes, the add_expert function in the Sub1CheckpointManager class and the make function in Sub1Linear support only row-wise quantization (group size -1) by default. Consequently, only the row-wise min_max variable is preserved for the subsequent packing step.
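To illustrate the distinction I mean: row-wise quantization keeps one min/max pair per output row, whereas group-wise quantization keeps one pair per group of input channels. A minimal NumPy sketch (the function name `minmax_per_group` is my own, not from the repo):

```python
import numpy as np

def minmax_per_group(w, group_size=-1):
    """Compute min/max per quantization group.

    w: (out_features, in_features) weight matrix.
    group_size=-1 means row-wise: one group spanning the whole row.
    """
    out_f, in_f = w.shape
    if group_size == -1:
        group_size = in_f
    g = w.reshape(out_f, in_f // group_size, group_size)
    # Shapes are (out_f, in_f // group_size): one entry per group.
    return g.min(axis=-1), g.max(axis=-1)

w = np.random.randn(8, 256).astype(np.float32)
lo_row, hi_row = minmax_per_group(w, -1)   # (8, 1): row-wise
lo_grp, hi_grp = minmax_per_group(w, 128)  # (8, 2): group-wise
```

So with group_size=128 the packing code would need to carry a (out_features, n_groups) min_max tensor rather than the single per-row pair it stores today.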
Would it be feasible to apply the LWZ algorithm to tensors that have been quantized group-wise (for instance, group_size=128 with ternary weights), and to design the sub1 packing process accordingly?
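For concreteness, here is the kind of group-wise ternary tensor I have in mind. This uses a common TWN-style thresholding scheme purely as an illustration — the actual quantizer in the repo may differ:

```python
import numpy as np

def ternary_quantize(w, group_size=128):
    """Group-wise ternary quantization: codes in {-1, 0, +1}, one scale per group.

    Returns (q, scale) where q has shape (out_features, in_features)
    and scale has shape (out_features, in_features // group_size).
    """
    out_f, in_f = w.shape
    g = w.reshape(out_f, in_f // group_size, group_size)
    scale = np.abs(g).mean(axis=-1, keepdims=True)          # per-group scale
    # TWN-style threshold: zero out small weights, keep the sign of large ones.
    q = np.where(np.abs(g) > 0.5 * scale, np.sign(g), 0.0)
    return q.reshape(out_f, in_f).astype(np.int8), scale.squeeze(-1)

q, scale = ternary_quantize(np.random.randn(4, 256).astype(np.float32), 128)
```

The question is whether the sub1 packing path could consume such a `(q, scale)` pair, i.e. ternary codes plus a per-group (rather than per-row) scale tensor.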