About the training set of InterHand26M #16
Hi, yes you're correct. The model of the original paper is trained only with IH2.6M (H) + MSCOCO. The checkpoint of this repo is trained with IH2.6M (H+M) + MSCOCO, which gives:
- bbox IoU: 86.25
- MRRPE: 26.74 mm
- MPVPE for all hand sequences: 11.55 mm
- MPJPE for all hand sequences: 13.65 mm

Let me update the arXiv paper.
All experimental results in the paper are from checkpoints trained on IH2.6M (H) + MSCOCO.
Does batch size affect the results? I use batch size 32 with 2 GPUs, and my MRRPE on HIC is only 40.11 mm.
I haven't tested with a different batch size, sorry.
I use 2 RTX 3090 GPUs with a per-GPU training batch size of 32, so the global batch size is 32 * 2. May I know your PyTorch version and the global batch size you used?
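For clarity, the global batch size under standard data-parallel training is just the per-GPU batch size times the number of GPUs, since each GPU processes its own mini-batch per optimizer step. A minimal sketch (the function name is hypothetical, not from the repo):

```python
def global_batch_size(per_gpu_batch: int, num_gpus: int) -> int:
    # Under data parallelism (e.g. PyTorch DDP), each GPU runs the
    # model on its own mini-batch, and gradients are averaged across
    # GPUs, so the effective batch size is the product of the two.
    return per_gpu_batch * num_gpus

# The setting described above: batch size 32 on each of 2 GPUs.
print(global_batch_size(32, 2))  # → 64
```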
I reproduced the most recent version of the model, which is trained with InterHand26M (H+M) and the COCO dataset. However, I found that the reproduced results are better than those in the paper. Did you use the human_aid-only (H) subset of InterHand26M in the paper?
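The (H) vs (H+M) distinction above is just which annotation subsets are kept for training. A minimal sketch of that filtering, assuming each frame's annotation carries an `annot_type` field (field names and values are assumptions for illustration, not the repo's actual data format):

```python
# Hypothetical annotation records; only the filtering logic matters.
annotations = [
    {"frame": 0, "annot_type": "human"},
    {"frame": 1, "annot_type": "machine"},
    {"frame": 2, "annot_type": "human"},
]

# (H): human-annotated frames only. (H+M): human + machine, i.e. everything.
subset_h = [a for a in annotations if a["annot_type"] == "human"]
subset_hm = annotations

print(len(subset_h), len(subset_hm))  # → 2 3
```

Training on the larger (H+M) pool is a plausible reason the reproduced numbers come out better than the paper's (H)-only results.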