I evaluated the model using the pre-trained weights (humanml_trans_dec_512_bert, 50 steps) on the HumanML3D dataset, and the results significantly outperformed the reported figures (my evaluation results are as follows).

Furthermore, I noticed that in the evaluation log (eval_humanml_humanml_trans_dec_bert_512_000600000_gscale2.5.log), located in the same folder as the pre-trained weights, the computed metrics (R-Precision, MM-Dist, Diversity) deviate considerably from the ground-truth values. Could you please elaborate on the specific reasons for this?

Thank you very much for your assistance!