How can rtdetr_pytorch's COCO test/eval produce each class's own metrics in addition to the overall metrics over all classes? #535
def summarize_first_40_classes(self):
    # compute metrics over the first 40 categories
    for iou_type, coco_eval in self.coco_eval.items():
        # save the original category ID list
        _origin_catIds = copy.deepcopy(coco_eval.params.catIds)
        catIds = coco_eval.params.catIds[:40]
        coco_eval.params.catIds = catIds
        coco_eval.accumulate()
        print(f"IoU metric: {iou_type}, First 40 Categories")
        coco_eval.summarize()
        # restore the original category ID list
        coco_eval.params.catIds = _origin_catIds
Thank you very much for the prompt response, but the problem does not seem to be there. I changed the first-40 and last-40 methods in coco_eval.py as you suggested, and the results are still wrong: def summarize_first_40_classes(self): ...
I changed the test section of det_engine.py accordingly. The second full-category run is correct, which shows the restore step works, but the first-40 and last-40 results are still identical. To rule out the possibility that the first 40 and last 40 classes genuinely score the same, I tried several slices:
- the first 40 classes [0:40:1] and the first 70 classes [0:70:1] give different results;
- the first 10 classes [0:10:1] and classes 11-20 [10:20:1] give identical results;
- the first 10 classes [0:10:1] and the last 10 classes [70:80:1] give identical results;
- the first 40 classes [0:40:1] and 40 classes sampled at stride 2 [0:80:2] give identical results.
The pattern I found: no matter which m classes [n, n+m] I request by catId, the output is always the metrics for classes [0, m]. In other words, whenever catIds is sliced down to m classes for evaluation, the result is always the metrics for the first m of the 80 classes. My code may have misled you, and perhaps there is another way to achieve what I need. Do you have any suggestions regarding this finding?
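For context, this pattern is consistent with how pycocotools' COCOeval.accumulate() indexes its cached results: evalImgs is laid out in the category order used by the last evaluate() call, and accumulate() looks categories up by their position in params.catIds. After slicing catIds down to m entries, those positions are simply 0..m-1, so any m-element slice reads back the first m classes. A minimal workaround sketch (the helper name per_class_ap is hypothetical, not part of the repo): read per-class AP directly from the accumulated eval['precision'] array, which keeps its category axis, instead of re-slicing catIds and calling accumulate() again.

import numpy as np

def per_class_ap(coco_eval):
    # Sketch: per-class AP@[0.50:0.95] from an already accumulated COCOeval.
    # Assumes evaluate() and accumulate() have run once over ALL categories,
    # so eval['precision'] has shape [T, R, K, A, M]: T IoU thresholds,
    # R recall thresholds, K categories, A area ranges, M maxDets settings.
    precision = coco_eval.eval['precision']
    results = {}
    for k, cat_id in enumerate(coco_eval.params.catIds):
        # area range "all" (index 0), maxDets=100 (last index); -1 marks
        # cells with no data, exactly as summarize() treats them
        p = precision[:, :, k, 0, -1]
        results[cat_id] = float(np.mean(p[p > -1])) if (p > -1).any() else float('nan')
    return results

Averaging these per-class values over any slice of catIds then gives first-40 / last-40 style numbers without ever touching evalImgs.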
Describe the bug
To Reproduce
I modified the code in RT-DETR/rtdetr_pytorch/tools/../src/data/coco/coco_eval.py so that, in addition to the average metrics over all 80 classes, I would also get each class's own metrics plus the average metrics over the first 40 and the last 40 classes. But in the output, every class has identical metrics, and the first-40 and last-40 averages are identical too. That cannot be a coincidence, so something must be wrong. Please help me work out how to modify the code to get correct results. Here is the output:
Category ID: 88
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.620
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.862
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.665
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.412
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.693
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.831
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.209
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.617
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.723
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.555
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.786
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.901
Accumulating evaluation results...
DONE (t=2.50s).
IoU metric: bbox, Category ID: 89
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.620
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.862
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.665
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.412
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.693
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.831
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.209
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.617
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.723
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.555
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.786
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.901
Accumulating evaluation results...
DONE (t=2.91s).
IoU metric: bbox, Category ID: 90
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.620
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.862
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.665
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.412
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.693
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.831
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.209
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.617
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.723
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.555
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.786
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.901
Accumulating evaluation results...
DONE (t=17.92s).
IoU metric: bbox, First 40 Categories
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.573
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.770
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.617
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.420
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.626
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.744
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.393
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.666
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.730
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.592
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.784
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.885
Accumulating evaluation results...
DONE (t=16.82s).
IoU metric: bbox, Last 40 Categories
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.573
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.770
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.617
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.420
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.626
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.744
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.393
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.666
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.730
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.592
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.784
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.885
Here is my modified coco_eval.py:
import os
import contextlib
import copy
import numpy as np
import torch
from pycocotools.cocoeval import COCOeval
from pycocotools.coco import COCO
import pycocotools.mask as mask_util

from src.misc import dist

__all__ = ['CocoEvaluator', ]


class CocoEvaluator(object):
    def __init__(self, coco_gt, iou_types):
        assert isinstance(iou_types, (list, tuple))
        coco_gt = copy.deepcopy(coco_gt)
        self.coco_gt = coco_gt
        # ...


def convert_to_xywh(boxes):
    xmin, ymin, xmax, ymax = boxes.unbind(1)
    return torch.stack((xmin, ymin, xmax - xmin, ymax - ymin), dim=1)


def merge(img_ids, eval_imgs):
    all_img_ids = dist.all_gather(img_ids)
    all_eval_imgs = dist.all_gather(eval_imgs)
    # ...


def create_common_coco_eval(coco_eval, img_ids, eval_imgs):
    img_ids, eval_imgs = merge(img_ids, eval_imgs)
    img_ids = list(img_ids)
    eval_imgs = list(eval_imgs.flatten())
    # ...


def evaluate(coco_eval):
    '''
    Run per-image evaluation on given images and store results (a list of dict) in self.evalImgs
    :return: None
    '''
    p = coco_eval.params
    # add backward compatibility if useSegm is specified in params
    if p.useSegm is not None:
        p.iouType = 'segm' if p.useSegm == 1 else 'bbox'
        print('useSegm (deprecated) is not None. Running {} evaluation'.format(p.iouType))
    p.imgIds = list(np.unique(p.imgIds))
    if p.useCats:
        p.catIds = list(np.unique(p.catIds))
    p.maxDets = sorted(p.maxDets)
    coco_eval.params = p
    # ...
And I added the following to the det_engine code:
if coco_evaluator is not None:
    coco_evaluator.accumulate()
    coco_evaluator.summarize()
    coco_evaluator.summarize_per_class()         # per-class metrics
    coco_evaluator.summarize_first_40_classes()  # first 40 classes
    coco_evaluator.summarize_last_40_classes()   # last 40 classes
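Building on the diagnosis above, one way to get the first-40 / last-40 averages without the positional mismatch is to slice the accumulated precision array along its category axis rather than mutating params.catIds. A rough sketch of such a method for coco_eval.py (summarize_cat_slice is a hypothetical name; np is already imported in that file):

def summarize_cat_slice(self, start, stop):
    # Sketch: mean AP@[0.50:0.95] over categories [start:stop], taken from
    # the arrays filled in by accumulate(), so evalImgs is never re-indexed.
    for iou_type, coco_eval in self.coco_eval.items():
        precision = coco_eval.eval['precision']   # [T, R, K, A, M]
        p = precision[:, :, start:stop, 0, -1]    # area=all, maxDets=100
        ap = float(np.mean(p[p > -1])) if (p > -1).any() else float('nan')
        print(f"IoU metric: {iou_type}, categories [{start}:{stop}], "
              f"AP@[0.50:0.95 | all | 100] = {ap:.3f}")

Called as self.summarize_cat_slice(0, 40) and self.summarize_cat_slice(40, 80) after the usual accumulate()/summarize(), this should give genuinely different numbers for the two halves.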