About the Compression #1

@SecretGuardian

Description

Hi, thanks for sharing this great work. I ran the LM PTB demo and finished both the full-embedding and KDQ-embedding training.
According to the results, the final checkpoint files for the full embedding are:

total 110508
5094892 Oct 27 23:21 model-46445.meta
79 Oct 27 23:21 checkpoint
79100812 Oct 27 23:21 model-46445.data-00000-of-00001
465 Oct 27 23:21 model-46445.index
18189654 Oct 27 19:14 graph.pbtxt

while the KDQ-embedding checkpoint files are:

total 112252
5324218 Oct 27 23:42 model-46445.meta
79 Oct 27 23:42 checkpoint
79767436 Oct 27 23:42 model-46445.data-00000-of-00001
634 Oct 27 23:42 model-46445.index
18567704 Oct 27 19:13 graph.pbtxt

It seems the KDQ model is even larger on disk. How should I estimate the real compression performance? Thanks.
(I used K=128, D=50, type=smx, share subspace=False, additive quantization=False.)
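For what it's worth, one likely explanation is that the checkpoint also stores optimizer slots and the code-learning machinery, so file size alone does not reflect the compression of the embedding table. A minimal back-of-envelope sketch of the embedding-only compression ratio, using the K and D from this issue but with placeholder values for the vocabulary size V and full-embedding dimension d (both assumptions, not taken from the demo):

```python
import math

# Assumptions (placeholders, not from the demo): vocabulary size and
# full-embedding dimension.
V = 10000
d = 300

# Settings reported in this issue.
K, D = 128, 50

# Full embedding: a V x d float32 table.
full_bits = V * d * 32

# KDQ embedding: each token stores D discrete codes, each needing
# ceil(log2(K)) bits. The code-embedding tables themselves are small and
# vocabulary-independent, so they are ignored here.
code_bits = V * D * math.ceil(math.log2(K))

print(f"approx. embedding compression ratio: {full_bits / code_bits:.1f}x")
```

Under these assumed numbers the codes are far smaller than the dense table, even though the training checkpoint (which keeps both plus optimizer state) can be larger; the saved graph and checkpoint are not a fair proxy for the deployed size.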
