
Can't reproduce original YOLOv4 AP on coco test-dev2017? #262

Closed
AhmedHisham1 opened this issue Feb 3, 2021 · 4 comments

@AhmedHisham1

Hello,
I wanted to reproduce the reported AP of the YOLOv4 model available here (608 image size), which is 43.5%.
[I am using the master branch]

I used the following parameters with the test.py file:

  • confidence threshold = 0.001
  • nms iou threshold = 0.5
  • image size = 608

and using the weights and cfg files from the link above.
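For reference, the detections file that test.py writes follows the standard COCO "results" format: a JSON array with one record per detection. A minimal sketch of what one record looks like (the field names follow the COCO results spec; the values here are purely hypothetical):

```python
import json

# One detection record in the COCO results format the evaluation
# server expects; bbox is [x, y, width, height] in absolute pixels.
detections = [
    {
        "image_id": 42,                       # hypothetical test-dev2017 image id
        "category_id": 1,                     # COCO category id (1 = person)
        "bbox": [258.1, 41.3, 348.3, 243.6],  # [x, y, w, h]
        "score": 0.92,                        # detector confidence
    }
]

# Round-trip through JSON, as the server would parse it.
parsed = json.loads(json.dumps(detections))
print(sorted(parsed[0].keys()))
```

A file with records in any other shape (e.g. [x1, y1, x2, y2] boxes or zero-based category ids) will still upload, but the scores it produces will be silently wrong, which is one common cause of a large AP gap.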

test.py produced the file detections_test-dev2017_yolov4.weights_results.json

I uploaded the file to the COCO detection evaluation server at CodaLab, and I got the following results:

overall performance
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.262
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.513
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.231
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.139
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.246
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.382
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.250
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.433
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.482
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.307
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.508
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.630
Done (t=318.36s)

So I got 26.2% AP instead of the expected 43.5%!
Am I doing something wrong? Is there something wrong with this implementation?

@WongKinYiu
Owner

The weights file and the YOLO decoder are different.
To use this repo, please download weights from https://github.com/WongKinYiu/PyTorch_YOLOv4#pretrained-models--comparison

@AhmedHisham1
Author

Thank you, your code is great.
I actually thought it was an implementation of the original YOLOv4 as-is. The name of the repo is a bit misleading; please consider adding a sentence or two to the README explaining this.

Also, it would be great if this repo could use the original weights as well. It can actually run them, and the mAP@0.5 was reasonable (especially on coco-val2017), so I would guess it needs only minor modifications.

One other question: if I train these models on custom data, can I convert the output model to TensorRT? Are all the activation functions and other details supported in that conversion?

Thanks again for the great code.

@AhmedHisham1
Author

I was also confused by your answer on issue #46:

Someone asked about YOLOv3 weights there, and you said they should run OK.

@WongKinYiu
Owner

For the original YOLOv4 implementation, you can use the u3_preview branch.

The master branch behaves the same as new_coord=1 for the new models.
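For context, the decoder difference behind this comes down to how raw network outputs are turned into boxes. Below is a sketch of the two formulations, assuming the usual YOLOv3/v4 decoding on one side and the Scaled-YOLOv4-style new_coord=1 decoding on the other; treat the exact formulas as an assumption about what this repo does, and the numeric inputs as purely illustrative:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def decode_old(tx, ty, tw, th, cx, cy, pw, ph):
    # Original YOLOv3/YOLOv4 decoding: sigmoid center offsets in [0, 1],
    # exponential scaling of the anchor width/height.
    bx = sigmoid(tx) + cx
    by = sigmoid(ty) + cy
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh

def decode_new(tx, ty, tw, th, cx, cy, pw, ph):
    # new_coord=1 style decoding (as in Scaled-YOLOv4): all outputs pass
    # through a sigmoid, center offsets span [-0.5, 1.5], and the anchor
    # width/height are scaled by a squared term instead of exp().
    bx = sigmoid(tx) * 2.0 - 0.5 + cx
    by = sigmoid(ty) * 2.0 - 0.5 + cy
    bw = pw * (sigmoid(tw) * 2.0) ** 2
    bh = ph * (sigmoid(th) * 2.0) ** 2
    return bx, by, bw, bh

# The same raw outputs decode to different boxes under each scheme,
# which is why darknet-trained weights give degraded AP on master.
box_old = decode_old(1.0, 0.0, 0.5, 0.0, 3, 4, 10, 20)
box_new = decode_new(1.0, 0.0, 0.5, 0.0, 3, 4, 10, 20)
print(box_old)
print(box_new)
```

The mAP@0.5 staying roughly intact while mAP@0.5:0.95 collapses (as in the numbers above) is consistent with this: the decoded boxes land near the right objects but with systematically wrong sizes, so they fail the stricter IoU thresholds.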
