Since the pose predictor outputs scale=[1,1,1] by default, we can only adjust the translation to make the rendered object match the size of the object in the observed image. If this understanding is correct, I have the following issues:
- What are the actual meanings of the hyper-parameters `opts.depth_offset` and `opts.rotation_offset`? Or, how should proper values be chosen for other categories?
- There seem to be a lot of numerical tricks applied to the predicted translation. In `pose_predictor.py`:

  ```python
  trans[:, :2] = trans[:, :2] * 0.1        # why multiply by 0.1?
  trans[:, 2] = trans[:, 2] + self.offset  # add the depth_offset=5.0
  ```

  and in `encoder.py`:

  ```python
  translation[:, :2] -= (pp_crop / foc_crop) * translation[:, 2:].detach()  # ??
  ```
Can you explain the logic behind these operations, to help with understanding the code?
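For reference, here is a minimal sketch of what I believe the `encoder.py` line is doing, assuming a standard pinhole model `u = foc * (x / z) + pp`; subtracting `(pp / foc) * z` from `x` cancels the principal-point term, so the projected pixel location becomes independent of the crop's principal point. All numbers below are made up for illustration:

```python
import numpy as np

# Hypothetical crop intrinsics (not values from the repo):
foc_crop = 320.0   # crop focal length (pixels)
pp_crop = 12.5     # crop principal-point offset (pixels)

x, z = 0.3, 5.0    # predicted x-translation and depth

# Projection of the raw translation under the pinhole model:
u_before = foc_crop * (x / z) + pp_crop

# The encoder.py-style adjustment:
x_adj = x - (pp_crop / foc_crop) * z

# Projection after the adjustment:
u_after = foc_crop * (x_adj / z) + pp_crop
# u_after equals foc_crop * x / z, independent of pp_crop
print(u_before, u_after)
```

If this is the intent, the adjustment re-expresses a translation predicted in crop-centered coordinates so that the crop camera's off-center principal point does not shift the object's projection. Is that reading correct?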
Moreover, the implementation of instance cycle-consistency does not match the text description and Fig. 3(a) in the paper.
In compute_match_loss() and compute_imatch_loss(), the loss terms are just the difference between the predicted coordinates computed via correspondence and the ground-truth ones. So the implementation is not a cycle-style formulation, which is clearly different from the paper. Can you explain this?
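To make the distinction concrete, here is a toy illustration of why I think the two formulations differ (`map_ab`, `map_ba` are hypothetical stand-ins for predicted correspondences, not names from this repo): a one-way loss against ground truth can be exactly zero while a true A→B→A cycle loss is not, whenever the forward and backward maps are not exact inverses.

```python
import numpy as np

rng = np.random.default_rng(0)
pts_a = rng.random((4, 2))       # query points in image A

map_ab = lambda p: p + 0.10      # predicted A -> B correspondence
map_ba = lambda p: p - 0.08      # predicted B -> A (imperfect inverse)
gt_b = pts_a + 0.10              # "ground-truth" matches in B

# What compute_match_loss() appears to do: one-way regression
# of mapped coordinates against ground truth.
one_way_loss = np.abs(map_ab(pts_a) - gt_b).mean()  # exactly 0 here

# What Fig. 3(a) describes: map A -> B -> A and penalize the
# deviation from the starting points (a genuine cycle).
cycle_loss = np.abs(map_ba(map_ab(pts_a)) - pts_a).mean()  # ~0.02 here
```

Under this reading the released code optimizes only the one-way term, so the two objectives are not equivalent; please correct me if I have misread the loss functions.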
Thanks!!!