Hi Shenhan, thank you for your great work! I'm trying to refine a FLAME model from a SMPL-X estimation using VHAP. However, the photometric loss is always inf, which means nothing is rendered under the given camera parameters. I found that the global orientation and translation move the whole body (FLAME head included) drastically, so the FLAME model ends up far from its canonical position. My question is: should I first transform the FLAME model with the global orientation and translation parameters, or should I change how the FLAME parameters are initialized in tracker.py? Currently they are initialized as:
self.shape = torch.zeros(self.cfg.model.n_shape).to(self.device)
self.expr = torch.zeros(self.n_timesteps, self.cfg.model.n_expr).to(self.device)
self.neck_pose = torch.zeros(self.n_timesteps, 3).to(self.device)
self.jaw_pose = torch.zeros(self.n_timesteps, 3).to(self.device)
self.eyes_pose = torch.zeros(self.n_timesteps, 6).to(self.device)
self.translation = torch.zeros(self.n_timesteps, 3).to(self.device)
self.rotation = torch.zeros(self.n_timesteps, 3).to(self.device)
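To make the first option concrete, here is a minimal sketch of what I mean by "first transform the FLAME model": applying the global rigid transform to the FLAME vertices before rendering. I'm assuming the SMPL-X global_orient and transl are available as (T, 3) axis-angle and translation tensors (the names and shapes are my assumptions):

```python
import torch

def apply_global_transform(verts, global_orient, transl):
    """Apply a per-frame rigid transform to vertices.

    verts:         (T, V, 3) FLAME vertices in canonical space
    global_orient: (T, 3) axis-angle root rotation (assumed from SMPL-X)
    transl:        (T, 3) root translation (assumed from SMPL-X)
    """
    # Rodrigues' formula: build rotation matrices from axis-angle.
    theta = global_orient.norm(dim=-1, keepdim=True).clamp(min=1e-8)
    axis = global_orient / theta
    # Skew-symmetric cross-product matrices K for each frame.
    K = torch.zeros(verts.shape[0], 3, 3)
    K[:, 0, 1] = -axis[:, 2]; K[:, 0, 2] = axis[:, 1]
    K[:, 1, 0] = axis[:, 2];  K[:, 1, 2] = -axis[:, 0]
    K[:, 2, 0] = -axis[:, 1]; K[:, 2, 1] = axis[:, 0]
    I = torch.eye(3).expand_as(K)
    s = torch.sin(theta)[..., None]
    c = torch.cos(theta)[..., None]
    R = I + s * K + (1 - c) * (K @ K)
    # Rotate, then translate each frame's vertices.
    return verts @ R.transpose(1, 2) + transl[:, None, :]
```

Alternatively, the same values could simply be used to seed self.rotation and self.translation above instead of zeros. I'd appreciate knowing which approach fits the tracker's assumptions.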
Thank you in advance for your help, and again, fantastic work on VHAP!
Best regards.