
About the training set of InterHand26M #16

Open
aragakiyui611 opened this issue Oct 20, 2023 · 7 comments

Comments

aragakiyui611 commented Oct 20, 2023

[image: screenshot of reproduced evaluation results]

I reproduced the most recent version of the model, trained with InterHand26M (H+M) and the COCO dataset. However, I found that my reproduced results are better than those in the paper. Did you use only the human-annotated (H) subset of InterHand26M in the paper?

mks0601 (Collaborator) commented Oct 20, 2023

Hi, yes, you're correct. The model in the original paper is trained only with IH2.6M (H) + MSCOCO. The checkpoint in this repo is trained with IH2.6M (H+M) + MSCOCO, which gives:

bbox IoU: 86.25

MRRPE: 26.74 mm

MPVPE for all hand sequences: 11.55 mm
MPVPE for single hand sequences: 9.58 mm
MPVPE for interacting hand sequences: 12.14 mm

MPJPE for all hand sequences: 13.65 mm
MPJPE for single hand sequences: 12.75 mm
MPJPE for interacting hand sequences: 14.55 mm

Let me update the arxiv paper.
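For context on the numbers above: MPJPE and MPVPE are conventionally computed as the mean Euclidean distance between predicted and ground-truth joints (or mesh vertices) after root-joint alignment. A minimal NumPy sketch of that convention, using toy arrays (the function name and shapes are illustrative, not this repo's API):

```python
import numpy as np

def mpjpe(pred, gt, root_idx=0):
    # Root-align both sets of joints before measuring error, as is
    # conventional for InterHand2.6M-style evaluation.
    pred = pred - pred[:, root_idx:root_idx + 1]
    gt = gt - gt[:, root_idx:root_idx + 1]
    # Mean Euclidean distance, averaged over joints and samples.
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy example: 2 samples, 21 joints, 3D coordinates.
rng = np.random.default_rng(0)
gt = rng.normal(size=(2, 21, 3))
pred = gt + 1.0  # a constant offset is removed by root alignment
print(round(mpjpe(pred, gt), 6))  # → 0.0
```

MPVPE is the same computation applied to mesh vertices instead of skeleton joints; MRRPE instead measures the error of the relative right-to-left root translation, so it is not affected by per-hand root alignment.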

aragakiyui611 (Author) commented Oct 27, 2023

[image: paper results (left) vs. reproduced results (right)]

Is the HIC result in the paper also from a model trained with InterHand26M (H+M) and COCO? The left is the paper and the right is my result (H+M). Thanks!

mks0601 (Collaborator) commented Oct 27, 2023

All experimental results in the paper are from checkpoints trained on IH2.6M (H) + MSCOCO.

aragakiyui611 (Author) commented Oct 27, 2023

[image: reproduced HIC and InterHand2.6M results]
I reproduced the result: on HIC it is worse, especially MRRPE, while on InterHand2.6M it is better than the paper. Could it be the random seed, or the choice of final epoch? I tested with snapshot_6.pth.

aragakiyui611 (Author) commented Nov 2, 2023

Does batch size affect the results? I use a batch size of 32 on 2 GPUs. My MRRPE on HIC is only 40.11 mm.

mks0601 (Collaborator) commented Nov 2, 2023

I haven't tested with a different batch size, sorry.

aragakiyui611 (Author) commented Nov 18, 2023

addict 2.4.0
certifi 2023.7.22
charset-normalizer 3.3.0
chumpy 0.70
contourpy 1.1.1
cycler 0.12.0
Cython 3.0.4
easydict 1.10
einops 0.7.0
filelock 3.12.4
fonttools 4.43.0
fsspec 2023.9.2
fvcore 0.1.5.post20221221
gitdb 4.0.11
GitPython 3.1.40
huggingface-hub 0.18.0
idna 3.4
importlib-metadata 6.8.0
importlib-resources 6.1.0
iopath 0.1.10
json-tricks 3.17.3
kiwisolver 1.4.5
kornia 0.7.0
matplotlib 3.7.3
mmcv-full 1.7.1
mmpose 0.28.0
munkres 1.1.4
numpy 1.24.4
opencv-python 4.8.1.78
packaging 23.2
pandas 2.0.3
Pillow 10.0.1
pip 23.2.1
platformdirs 3.11.0
plyfile 1.0.1
portalocker 2.8.2
psutil 5.9.6
py-cpuinfo 9.0.0
pycocotools 2.0.7
pyparsing 3.1.1
python-dateutil 2.8.2
pytorch3d 0.7.4
pytz 2023.3.post1
PyYAML 6.0.1
requests 2.31.0
safetensors 0.4.0
scipy 1.10.1
seaborn 0.13.0
setuptools 68.2.2
six 1.16.0
smmap 5.0.1
smplx 0.1.28
tabulate 0.9.0
termcolor 2.3.0
thop 0.1.1.post2209072238
timm 0.9.7
tomli 2.0.1
torch 1.12.1+cu113
torchaudio 0.12.1+cu113
torchgeometry 0.1.2
torchvision 0.13.1+cu113
tqdm 4.66.1
trimesh 4.0.0
typing_extensions 4.8.0
tzdata 2023.3
ultralytics 8.0.200
urllib3 2.0.6
wheel 0.41.2
xtcocotools 1.14.3
yacs 0.1.8
yapf 0.40.2
zipp 3.17.0

I use two 3090 GPUs with a per-GPU training batch size of 32, so the global batch size is 32 * 2. May I know your PyTorch version and the global batch size you used?
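For reference, in data-parallel training the effective (global) batch size is the per-GPU batch times the number of GPUs, and one common heuristic (the linear scaling rule of Goyal et al.) adjusts the learning rate by the same ratio when the global batch differs from the one a recipe was tuned for. A minimal sketch; the base learning rate and base batch size below are hypothetical placeholders, not this repo's config values:

```python
def effective_batch_and_lr(per_gpu_batch, num_gpus, base_lr, base_batch):
    # Global batch = per-GPU batch * number of data-parallel workers.
    global_batch = per_gpu_batch * num_gpus
    # Linear scaling rule: scale the learning rate with the batch-size ratio.
    scaled_lr = base_lr * global_batch / base_batch
    return global_batch, scaled_lr

# The setup described in this thread: batch 32 per GPU on 2 GPUs.
# base_lr=1e-4 and base_batch=64 are illustrative assumptions.
print(effective_batch_and_lr(32, 2, 1e-4, 64))  # → (64, 0.0001)
```

If the reference checkpoint was trained with a different global batch (e.g. more GPUs), a mismatched effective learning rate could plausibly explain some of the metric gap.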
