Hello, I have run
python scripts/network_inference_dataset.py -i trained_models/panda_dream_vgg_q.pth -d data/real/panda-3cam_realsense/ -o outputs -b 16 -w 8
and I am now trying to compare the "predicted pose", which I believe is in the pnp_results.csv file generated by the run.
I'm comparing the predicted x, y, z positions with the ground-truth positions in the .json annotations for each frame.
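For concreteness, this is roughly how I'm doing the comparison. It's a minimal sketch: the CSV column names (`pnp_trans_x`, etc.) and the JSON field layout here are placeholders I made up for illustration, not necessarily DREAM's actual output schema.

```python
import csv
import io
import json
import math

# Placeholder data standing in for one row of pnp_results.csv and one
# frame's .json annotation; the real field names may differ.
CSV_SAMPLE = "name,pnp_trans_x,pnp_trans_y,pnp_trans_z\nframe_0000,0.1,0.2,1.5\n"
JSON_SAMPLE = '{"objects": [{"location": [0.12, 0.19, 1.48]}]}'

def position_error(pred, gt):
    """Euclidean distance between two 3-vectors (units must match, e.g. meters)."""
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)))

# Predicted translation from the PnP results row.
row = next(csv.DictReader(io.StringIO(CSV_SAMPLE)))
pred = [float(row[k]) for k in ("pnp_trans_x", "pnp_trans_y", "pnp_trans_z")]

# Ground-truth position from the frame's annotation.
gt = json.loads(JSON_SAMPLE)["objects"][0]["location"]

err = position_error(pred, gt)
```

This just computes the per-frame translation error; the problem is that I don't know which frame's ground-truth position the CSV pose should be compared against.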

What pose is being recorded in pnp_results.csv? It doesn't seem to match either the link0 pose or the end-effector ("hand") pose well, so I am confused.