Here you can find predictions on the COCO validation set from several freely available pretrained object detection models:
- EfficientDet [1]
- DetectoRS [2]
- YOLOv5
Model | COCO validation mAP(0.5:0.95) | COCO validation mAP(0.5:0.95), mirrored |
---|---|---|
EffNet-B0 | 33.6 | 33.5 |
EffNet-B1 | 39.2 | 39.2 |
EffNet-B2 | 42.5 | 42.6 |
EffNet-B3 | 45.9 | 45.5 |
EffNet-B4 | 49.0 | 48.8 |
EffNet-B5 | 50.5 | 50.2 |
EffNet-B6 | 51.3 | 51.1 |
EffNet-B7 | 52.1 | 51.9 |
DetectoRS + ResNeXt-101 | 51.5 | 51.5 |
DetectoRS + ResNet-50 | 49.6 | 49.6 |
YOLOv5x | 50.0 | N/A |
Python code that reproduces a high score on COCO validation with the WBF method is provided in `run_benchmark_coco.py`.
WBF with weights `[0, 0, 0, 0, 0, 0, 0, 0, 4, 4, 5, 5, 7, 7, 9, 9, 8, 8, 5, 5, 10]` and IoU threshold 0.7 gives 56.1 mAP on COCO validation and 56.4 on COCO test-dev.
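The 21 weights appear to map one-to-one onto the prediction sets above: two per model (the plain and mirrored columns, in table order) plus a single set for YOLOv5x, so the first eight zeros drop EffNet-B0 through B3 from the ensemble. Below is a minimal sketch of the fusion call, assuming the `ensemble-boxes` package from this repository (`pip install ensemble-boxes`); the boxes, scores, and labels are dummy placeholders, not the actual benchmark predictions:

```python
from ensemble_boxes import weighted_boxes_fusion

# One entry per prediction set; boxes are [x1, y1, x2, y2] normalized to [0, 1].
boxes_list = [
    [[0.10, 0.10, 0.50, 0.50], [0.30, 0.30, 0.70, 0.70]],  # predictions, model A
    [[0.12, 0.08, 0.52, 0.48]],                            # predictions, model B
]
scores_list = [[0.90, 0.60], [0.85]]
labels_list = [[1, 2], [1]]
weights = [2, 1]  # relative trust per prediction set, like the 21-element list above

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=weights,
    iou_thr=0.7,          # boxes overlapping above this IoU get fused
    skip_box_thr=0.0001,  # discard near-zero-confidence boxes before fusion
)
print(boxes, scores, labels)
```

In the benchmark, this fusion presumably runs once per validation image across all 21 prediction sets before the fused boxes are written out for evaluation.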
The full `pycocotools` summary for this ensemble on COCO validation:

```
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.561
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.741
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.621
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.402
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.607
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.704
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.405
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.684
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.755
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.629
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.794
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.878
```
Requirements: numpy, pandas, pycocotools
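For completeness, here is a minimal sketch of how the summary above is produced with `pycocotools`; the annotation and detection file paths are hypothetical placeholders:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: ground-truth COCO annotations and fused detections
# saved in the standard COCO results JSON format.
coco_gt = COCO('annotations/instances_val2017.json')
coco_dt = coco_gt.loadRes('wbf_predictions.json')

evaluator = COCOeval(coco_gt, coco_dt, iouType='bbox')
evaluator.evaluate()    # per-image, per-category matching
evaluator.accumulate()  # aggregate over IoU thresholds, areas, maxDets
evaluator.summarize()   # prints the 12-line AP/AR table shown above
```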