Python Image Recognition Notes (13): Computing the mAP of YOLOv4 Weights


References:

https://github.com/AlexeyAB/darknet/wiki

How to evaluate AP of YOLOv4 on the MS COCO evaluation server

  1. Set width=608 height=608 (or 512×512, or 416×416) in the [net] section of the cfg/yolov4.cfg file: https://github.com/AlexeyAB/darknet/blob/6f718c257815a984253346bba8fb7aa756c55090/cfg/yolov4.cfg#L8-L9
  2. Download and unzip the test-dev2017 dataset: http://images.cocodataset.org/zips/test2017.zip
  3. Download the list of images for the Detection task and replace the paths with your own (pointing at the images unzipped from test2017.zip; see the path-rewriting sketch after this list): https://raw.githubusercontent.com/AlexeyAB/darknet/master/scripts/testdev2017.txt
  4. Download yolov4.weights: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT
  5. The content of cfg/coco.data should look like this:
classes= 80
train = <replace with your path>/trainvalno5k.txt
valid = <replace with your path>/testdev2017.txt
names = data/coco.names
backup = backup
eval=coco
  6. Create a /results/ folder under the darknet directory
  7. Run validation: ./darknet detector valid cfg/coco.data cfg/yolov4.cfg yolov4.weights
  8. Rename /results/coco_results.json to detections_test-dev2017_yolov4_results.json and compress it into detections_test-dev2017_yolov4_results.zip (see the renaming/zipping sketch after this list)
  9. Submit detections_test-dev2017_yolov4_results.zip to the MS COCO evaluation server and select the test-dev2019 (bbox) task: https://competitions.codalab.org/competitions/20794#participate
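
For step 3, the paths inside the downloaded testdev2017.txt will not match your machine. Below is a minimal Python sketch of the path rewriting; the list-file location and the unzipped image folder are assumptions you should adjust:

import os

LIST_FILE = "testdev2017.txt"       # assumed: the list downloaded in step 3
IMAGE_DIR = "/path/to/test2017"     # assumed: folder produced by unzipping test2017.zip

# Keep only the file names from the original list, then rewrite every entry
# so it points at your local copy of the images.
with open(LIST_FILE) as f:
    names = [os.path.basename(line.strip()) for line in f if line.strip()]

with open(LIST_FILE, "w") as f:
    for name in names:
        f.write(os.path.join(IMAGE_DIR, name) + "\n")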
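
Step 8 (renaming and zipping the results file) can also be scripted; a minimal sketch, assuming the default /results/ output location from step 7:

import shutil
import zipfile

SRC = "results/coco_results.json"
DST = "results/detections_test-dev2017_yolov4_results.json"

# Rename the darknet output to the name the COCO server expects.
shutil.move(SRC, DST)

# Pack it into a zip; arcname keeps only the file name inside the archive.
with zipfile.ZipFile("results/detections_test-dev2017_yolov4_results.zip", "w",
                     compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write(DST, arcname="detections_test-dev2017_yolov4_results.json")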

You should get the following results (AP=0.435 and AP50=0.657), visible under "View scoring output log" on the submission page:

overall performance
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.435
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.657
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.473
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.267
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.467
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.533
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.342
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.549
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.580
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.403
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.617
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.713
Done (t=334.58s)
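
The evaluation server computes this table with pycocotools. The test-dev2017 ground truth is not public, but if you run the same ./darknet detector valid step on a split that does ship annotations (for example val2017), the identical metric block can be reproduced locally with COCOeval. A minimal sketch, with the annotation and detection file paths as assumptions:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ANN_FILE = "annotations/instances_val2017.json"   # assumed: ground-truth annotations
DET_FILE = "results/coco_results.json"            # assumed: output of ./darknet detector valid

coco_gt = COCO(ANN_FILE)                # load ground truth
coco_dt = coco_gt.loadRes(DET_FILE)     # load detections in COCO results format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                   # prints the same AP/AR table as above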