
Question about pose translation & scale and instance cycle-consistency #7

@Bear-kai

Description

Since the pose predictor outputs scale=[1,1,1] by default, we can only adjust the translation to make the rendered object the same size as the one in the observed image. If this understanding is correct, I have the following questions:

  1. What are the actual meanings of the hyper-parameters opts.depth_offset and opts.rotation_offset? And how should one choose proper values for other categories?
  2. There seem to be several numerical tricks applied to the predicted translation. In pose_predictor.py:
    trans[:, :2] = trans[:, :2] * 0.1  # why multiply by 0.1?
    trans[:, 2] = trans[:, 2] + self.offset  # adds depth_offset = 5.0
    and in encoder.py:
    translation[:, :2] -= (pp_crop / foc_crop) * translation[:, 2:].detach()  # ??
    Could you explain the logic behind these snippets, for a better understanding of the code?
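For what it's worth, my current guess for the encoder.py line is that it compensates the principal-point shift introduced by cropping, under a pinhole model: subtracting (pp_crop / foc_crop) * Z from the x-y translation makes the projection land where it would with a centered principal point. A minimal numpy sketch of that reading (all values hypothetical, not taken from the repo):

```python
import numpy as np

def project(X, foc, pp):
    # Pinhole projection: pixel = foc * (X / Z) + pp
    return foc * X[:2] / X[2] + pp

# A hypothetical point in camera coordinates (X, Y, Z).
X = np.array([0.3, -0.2, 5.0])

# Hypothetical crop intrinsics: principal-point offset and focal length.
pp_crop = np.array([0.1, -0.05])
foc_crop = 2.0

# The compensation from encoder.py: shift the x-y translation by
# (pp_crop / foc_crop) * Z, which exactly cancels pp_crop in the projection.
X_shifted = X.copy()
X_shifted[:2] -= (pp_crop / foc_crop) * X[2]

# Projecting the shifted point with the crop's principal point gives the
# same pixel as projecting the original point with a zero principal point.
np.testing.assert_allclose(project(X_shifted, foc_crop, pp_crop),
                           project(X, foc_crop, np.zeros(2)))
```

If that reading is right, the .detach() would simply stop the gradient of this correction from flowing through the depth channel.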

Moreover, the implementation of instance cycle-consistency does not match the text description and Fig. 3(a) in the paper.
In compute_match_loss() and compute_imatch_loss(), the loss terms are simply the difference between the coordinates predicted via correspondence and the ground-truth ones. The implementation is therefore not a cycle-style formulation, which clearly differs from the paper. Could you explain this?
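To make the distinction I mean concrete, here is a minimal numpy sketch of the two formulations as I understand them (function names and maps are hypothetical, not the repo's actual code):

```python
import numpy as np

def direct_match_loss(pred_uv, gt_uv):
    # What compute_match_loss() appears to do: compare the coordinates
    # transferred via correspondence directly against ground truth.
    return ((pred_uv - gt_uv) ** 2).sum(axis=-1).mean()

def cycle_loss(uv, forward_map, backward_map):
    # What Fig. 3(a) suggests instead: map coordinates to the other
    # instance and back, then penalize the round-trip error against
    # the starting coordinates; no ground truth is needed.
    return ((backward_map(forward_map(uv)) - uv) ** 2).sum(axis=-1).mean()

# Toy example with hypothetical, exactly invertible maps: the cycle loss
# vanishes even though neither map is compared against any ground truth.
uv = np.ones((4, 2))
print(cycle_loss(uv, lambda x: x + 1.0, lambda x: x - 1.0))  # 0.0
```

The first form supervises against labels; the second is self-consistent, which is why the mismatch between code and figure confused me.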

Thanks!!!
