rework pycocotools with faster-coco-eval #231
                
Hello, I suggest including the fast validation library "faster-coco-eval" for validating COCO & LVIS datasets.
The lightning.ai team has been using it for a long time as an alternative validation backend, replacing the largely unmaintained pycocotools:
https://lightning.ai/docs/torchmetrics/stable/detection/mean_average_precision.html
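As a small illustration of the torchmetrics integration linked above: `MeanAveragePrecision` exposes a `backend` argument that selects faster-coco-eval instead of pycocotools. This is a hedged sketch, not Ultralytics code; the import is done lazily so the snippet does not hard-require torchmetrics, and the exact argument set may vary between torchmetrics versions.

```python
def make_map_metric(backend="faster_coco_eval"):
    """Build a mAP metric using the faster-coco-eval backend.

    Sketch based on the torchmetrics API: MeanAveragePrecision accepts
    backend="pycocotools" (default) or backend="faster_coco_eval".
    """
    # Lazy import so merely defining this helper works without torchmetrics.
    from torchmetrics.detection import MeanAveragePrecision

    return MeanAveragePrecision(box_format="xyxy", iou_type="bbox", backend=backend)
```

The rest of the metric's usage (`update()` with predictions and targets, then `compute()`) is unchanged; only the evaluation backend differs.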
I prepared examples with this library in advance for you; they are available here:
https://github.com/MiXaiLL76/faster_coco_eval/blob/main/examples/comparison/ultralytics/colab_example.ipynb
or
https://nbviewer.org/github/MiXaiLL76/faster_coco_eval/blob/main/examples/comparison/ultralytics/colab_example.ipynb
For a quick comparison, here is the table from the end of that document:
[table: eval comparison, considering data loading]
Additional information:
I recently helped integrate it into D-FINE and RT-DETR, and it works well there.
Maybe this will help you not only train fast models, but also train them faster =)
I also want to offer you an even easier way to integrate it with your code. I implemented and tested two modules.
Why did I do this? I'm trying to tidy up the many repositories that could share these common components.
For you, I have implemented and tested validation built on these functions; to check it, review my code and update the library to version 1.6.6 (available on PyPI and conda).
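Since faster-coco-eval mirrors the pycocotools classes (`COCO` and an evaluator class), a minimal integration can select whichever backend is installed at import time. The sketch below is an assumption about the import paths (`faster_coco_eval.COCO`, `faster_coco_eval.COCOeval_faster`), based on the library's documentation; it is not the PR's actual implementation, and it degrades gracefully when neither backend is available.

```python
import importlib


def load_coco_backend():
    """Return (name, COCO, COCOeval) for the first available backend, or None.

    Candidate import paths are assumptions: faster-coco-eval documents
    COCO and COCOeval_faster as drop-in replacements for the pycocotools
    classes, so downstream evaluation code need not change.
    """
    candidates = [
        # (backend name, COCO module, COCO attr, evaluator module, evaluator attr)
        ("faster-coco-eval", "faster_coco_eval", "COCO",
         "faster_coco_eval", "COCOeval_faster"),
        ("pycocotools", "pycocotools.coco", "COCO",
         "pycocotools.cocoeval", "COCOeval"),
    ]
    for name, coco_mod, coco_attr, eval_mod, eval_attr in candidates:
        try:
            coco_cls = getattr(importlib.import_module(coco_mod), coco_attr)
            eval_cls = getattr(importlib.import_module(eval_mod), eval_attr)
            return name, coco_cls, eval_cls
        except (ImportError, AttributeError):
            continue  # backend not installed; try the next one
    return None


backend = load_coco_backend()
```

With a selected backend, the familiar flow (`COCO(ann_file)`, `loadRes(...)`, `evaluate()`, `accumulate()`, `summarize()`) stays the same; only the classes behind it change.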