I am looking through your [vicuna_benchmark_human_annotations.csv](https://github.com/bcdnlp/PRD/blob/main/data/vicuna80/vicuna_benchmark_human_annotations.csv) file, but I am not seeing answers from either PaLM-2 or Claude, which are mentioned in your paper. I only see scores given to Bard, the Guanaco models, GPT-3.5, GPT-4, and Vicuna-13B.
Could you explain why this is?