rtdetr_pytorch COCO test/eval: in addition to the overall metrics over all categories, how do I get each category's own metrics? #535

Open
Xxxnh opened this issue Jan 14, 2025 · 2 comments
Xxxnh commented Jan 14, 2025

Describe the bug

I tried modifying RT-DETR/rtdetr_pytorch/tools/../src/data/coco/coco_eval.py so that, besides the averaged metrics over all 80 classes, I also get each class's own metrics plus the averaged metrics over the first 40 and over the last 40 classes. But in the output every class has identical metrics, and the first-40 and last-40 averages are identical as well. That cannot be a coincidence, so something must be wrong. Please help me work out how to change the code to get correct results. Here is the output:
Category ID: 88
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.620
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.862
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.665
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.412
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.693
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.831
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.209
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.617
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.723
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.555
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.786
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.901
Accumulating evaluation results...
DONE (t=2.50s).
IoU metric: bbox, Category ID: 89
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.620
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.862
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.665
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.412
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.693
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.831
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.209
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.617
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.723
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.555
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.786
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.901
Accumulating evaluation results...
DONE (t=2.91s).
IoU metric: bbox, Category ID: 90
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.620
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.862
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.665
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.412
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.693
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.831
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.209
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.617
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.723
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.555
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.786
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.901
Accumulating evaluation results...
DONE (t=17.92s).
IoU metric: bbox, First 40 Categories
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.573
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.770
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.617
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.420
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.626
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.744
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.393
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.666
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.730
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.592
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.784
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.885
Accumulating evaluation results...
DONE (t=16.82s).
IoU metric: bbox, Last 40 Categories
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.573
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.770
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.617
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.420
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.626
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.744
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.393
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.666
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.730
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.592
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.784
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.885

Here is my modified coco_eval.py:
import os
import contextlib
import copy
import numpy as np
import torch

from pycocotools.cocoeval import COCOeval
from pycocotools.coco import COCO
import pycocotools.mask as mask_util

from src.misc import dist

__all__ = ['CocoEvaluator', ]


class CocoEvaluator(object):
    def __init__(self, coco_gt, iou_types):
        assert isinstance(iou_types, (list, tuple))
        coco_gt = copy.deepcopy(coco_gt)
        self.coco_gt = coco_gt

        self.iou_types = iou_types
        self.coco_eval = {}
        for iou_type in iou_types:
            self.coco_eval[iou_type] = COCOeval(coco_gt, iouType=iou_type)

        self.img_ids = []
        self.eval_imgs = {k: [] for k in iou_types}

    def update(self, predictions):
        img_ids = list(np.unique(list(predictions.keys())))
        self.img_ids.extend(img_ids)

        for iou_type in self.iou_types:
            results = self.prepare(predictions, iou_type)

            # suppress pycocotools prints
            with open(os.devnull, 'w') as devnull:
                with contextlib.redirect_stdout(devnull):
                    coco_dt = COCO.loadRes(self.coco_gt, results) if results else COCO()
            coco_eval = self.coco_eval[iou_type]

            coco_eval.cocoDt = coco_dt
            coco_eval.params.imgIds = list(img_ids)
            img_ids, eval_imgs = evaluate(coco_eval)

            self.eval_imgs[iou_type].append(eval_imgs)

    def synchronize_between_processes(self):
        for iou_type in self.iou_types:
            self.eval_imgs[iou_type] = np.concatenate(self.eval_imgs[iou_type], 2)
            create_common_coco_eval(self.coco_eval[iou_type], self.img_ids, self.eval_imgs[iou_type])

    def accumulate(self):
        for coco_eval in self.coco_eval.values():
            coco_eval.accumulate()

    def summarize(self):
        for iou_type, coco_eval in self.coco_eval.items():
            print("IoU metric: {}".format(iou_type))
            coco_eval.summarize()

    def summarize_per_class(self):
        # compute metrics for each category
        for iou_type, coco_eval in self.coco_eval.items():
            catIds = coco_eval.params.catIds
            for catId in catIds:
                coco_eval.params.catIds = [catId]
                coco_eval.accumulate()
                print(f"IoU metric: {iou_type}, Category ID: {catId}")
                coco_eval.summarize()
            # restore the original category ID list
            coco_eval.params.catIds = catIds

    def summarize_first_40_classes(self):
        # compute metrics for the first 40 categories
        for iou_type, coco_eval in self.coco_eval.items():
            catIds = coco_eval.params.catIds[:40]
            coco_eval.params.catIds = catIds
            coco_eval.accumulate()
            print(f"IoU metric: {iou_type}, First 40 Categories")
            coco_eval.summarize()
            # restore the original category ID list
            coco_eval.params.catIds = coco_eval.params.catIds

    def summarize_last_40_classes(self):
        # compute metrics for the last 40 categories
        for iou_type, coco_eval in self.coco_eval.items():
            catIds = coco_eval.params.catIds[-40:]
            coco_eval.params.catIds = catIds
            coco_eval.accumulate()
            print(f"IoU metric: {iou_type}, Last 40 Categories")
            coco_eval.summarize()
            # restore the original category ID list
            coco_eval.params.catIds = coco_eval.params.catIds

    def summarize_first_70_classes(self):
        # compute metrics for the first 70 categories
        for iou_type, coco_eval in self.coco_eval.items():
            catIds = coco_eval.params.catIds[:70]
            coco_eval.params.catIds = catIds
            coco_eval.accumulate()
            print(f"IoU metric: {iou_type}, First 70 Categories")
            coco_eval.summarize()
            # restore the original category ID list
            coco_eval.params.catIds = coco_eval.params.catIds

    def summarize_last_10_classes(self):
        # compute metrics for the last 10 categories
        for iou_type, coco_eval in self.coco_eval.items():
            catIds = coco_eval.params.catIds[-10:]
            coco_eval.params.catIds = catIds
            coco_eval.accumulate()
            print(f"IoU metric: {iou_type}, Last 10 Categories")
            coco_eval.summarize()
            # restore the original category ID list
            coco_eval.params.catIds = coco_eval.params.catIds

    def prepare(self, predictions, iou_type):
        if iou_type == "bbox":
            return self.prepare_for_coco_detection(predictions)
        elif iou_type == "segm":
            return self.prepare_for_coco_segmentation(predictions)
        elif iou_type == "keypoints":
            return self.prepare_for_coco_keypoint(predictions)
        else:
            raise ValueError("Unknown iou type {}".format(iou_type))

    def prepare_for_coco_detection(self, predictions):
        coco_results = []
        for original_id, prediction in predictions.items():
            if len(prediction) == 0:
                continue

            boxes = prediction["boxes"]
            boxes = convert_to_xywh(boxes).tolist()
            scores = prediction["scores"].tolist()
            labels = prediction["labels"].tolist()

            coco_results.extend(
                [
                    {
                        "image_id": original_id,
                        "category_id": labels[k],
                        "bbox": box,
                        "score": scores[k],
                    }
                    for k, box in enumerate(boxes)
                ]
            )
        return coco_results

    def prepare_for_coco_segmentation(self, predictions):
        coco_results = []
        for original_id, prediction in predictions.items():
            if len(prediction) == 0:
                continue

            scores = prediction["scores"]
            labels = prediction["labels"]
            masks = prediction["masks"]

            masks = masks > 0.5

            scores = prediction["scores"].tolist()
            labels = prediction["labels"].tolist()

            rles = [
                mask_util.encode(np.array(mask[0, :, :, np.newaxis], dtype=np.uint8, order="F"))[0]
                for mask in masks
            ]
            for rle in rles:
                rle["counts"] = rle["counts"].decode("utf-8")

            coco_results.extend(
                [
                    {
                        "image_id": original_id,
                        "category_id": labels[k],
                        "segmentation": rle,
                        "score": scores[k],
                    }
                    for k, rle in enumerate(rles)
                ]
            )
        return coco_results

    def prepare_for_coco_keypoint(self, predictions):
        coco_results = []
        for original_id, prediction in predictions.items():
            if len(prediction) == 0:
                continue

            boxes = prediction["boxes"]
            boxes = convert_to_xywh(boxes).tolist()
            scores = prediction["scores"].tolist()
            labels = prediction["labels"].tolist()
            keypoints = prediction["keypoints"]
            keypoints = keypoints.flatten(start_dim=1).tolist()

            coco_results.extend(
                [
                    {
                        "image_id": original_id,
                        "category_id": labels[k],
                        'keypoints': keypoint,
                        "score": scores[k],
                    }
                    for k, keypoint in enumerate(keypoints)
                ]
            )
        return coco_results


def convert_to_xywh(boxes):
    xmin, ymin, xmax, ymax = boxes.unbind(1)
    return torch.stack((xmin, ymin, xmax - xmin, ymax - ymin), dim=1)


def merge(img_ids, eval_imgs):
    all_img_ids = dist.all_gather(img_ids)
    all_eval_imgs = dist.all_gather(eval_imgs)

    merged_img_ids = []
    for p in all_img_ids:
        merged_img_ids.extend(p)

    merged_eval_imgs = []
    for p in all_eval_imgs:
        merged_eval_imgs.append(p)

    merged_img_ids = np.array(merged_img_ids)
    merged_eval_imgs = np.concatenate(merged_eval_imgs, 2)

    # keep only unique (and in sorted order) images
    merged_img_ids, idx = np.unique(merged_img_ids, return_index=True)
    merged_eval_imgs = merged_eval_imgs[..., idx]

    return merged_img_ids, merged_eval_imgs


def create_common_coco_eval(coco_eval, img_ids, eval_imgs):
    img_ids, eval_imgs = merge(img_ids, eval_imgs)
    img_ids = list(img_ids)
    eval_imgs = list(eval_imgs.flatten())

    coco_eval.evalImgs = eval_imgs
    coco_eval.params.imgIds = img_ids
    coco_eval._paramsEval = copy.deepcopy(coco_eval.params)


def evaluate(coco_eval):
    '''
    Run per-image evaluation on the given images and store the results
    (a list of dicts) in coco_eval.evalImgs.
    :return: p.imgIds, evalImgs
    '''
    p = coco_eval.params
    # add backward compatibility if useSegm is specified in params
    if p.useSegm is not None:
        p.iouType = 'segm' if p.useSegm == 1 else 'bbox'
        print('useSegm (deprecated) is not None. Running {} evaluation'.format(p.iouType))
    p.imgIds = list(np.unique(p.imgIds))
    if p.useCats:
        p.catIds = list(np.unique(p.catIds))
    p.maxDets = sorted(p.maxDets)
    coco_eval.params = p

    coco_eval._prepare()
    # loop through images, area range, max detection number
    catIds = p.catIds if p.useCats else [-1]

    if p.iouType == 'segm' or p.iouType == 'bbox':
        computeIoU = coco_eval.computeIoU
    elif p.iouType == 'keypoints':
        computeIoU = coco_eval.computeOks
    coco_eval.ious = {
        (imgId, catId): computeIoU(imgId, catId)
        for imgId in p.imgIds
        for catId in catIds}

    evaluateImg = coco_eval.evaluateImg
    maxDet = p.maxDets[-1]
    evalImgs = [
        evaluateImg(imgId, catId, areaRng, maxDet)
        for catId in catIds
        for areaRng in p.areaRng
        for imgId in p.imgIds
    ]
    # this is NOT in the pycocotools code, but could be done outside
    evalImgs = np.asarray(evalImgs).reshape(len(catIds), len(p.areaRng), len(p.imgIds))
    coco_eval._paramsEval = copy.deepcopy(coco_eval.params)
    return p.imgIds, evalImgs
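
One detail in the listing above turns out to matter later: the reshape at the end of evaluate() lays evalImgs out category-major over the catIds in effect at evaluate() time, and create_common_coco_eval() then flattens that array into coco_eval.evalImgs. As a note of my own (not part of the file), the index arithmetic for the flattened list is:

    # For evalImgs of shape (K, A, I) flattened as above, the entry for category
    # slot k0, area-range slot a, and image slot i sits at:
    #     offset = k0 * len(p.areaRng) * len(p.imgIds) + a * len(p.imgIds) + i
    # Here k0 is a *position* in the catIds list used at evaluate() time, not a
    # category ID, so later lookups must use positions relative to that list.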

And in det_engine I added:

if coco_evaluator is not None:
    coco_evaluator.accumulate()
    coco_evaluator.summarize()
    coco_evaluator.summarize_per_class()         # per-class metrics
    coco_evaluator.summarize_first_40_classes()  # first 40
    coco_evaluator.summarize_last_40_classes()   # last 40

@lyuwenyu (Owner)

The logic that restores the original category ID list looks wrong; try changing it to the following:

def summarize_first_40_classes(self):
    # compute metrics for the first 40 categories
    for iou_type, coco_eval in self.coco_eval.items():
        # save the original category ID list
        _origin_catIds = copy.deepcopy(coco_eval.params.catIds)

        catIds = coco_eval.params.catIds[:40]
        coco_eval.params.catIds = catIds
        coco_eval.accumulate()
        print(f"IoU metric: {iou_type}, First 40 Categories")
        coco_eval.summarize()

        # restore the original category ID list
        coco_eval.params.catIds = _origin_catIds

Xxxnh (Author) commented Jan 15, 2025

Thank you for the quick reply, but the problem does not seem to be there. I changed the first-40 and last-40 methods in coco_eval.py exactly as you suggested, and the results are still wrong:

def summarize_first_40_classes(self):
    # compute metrics for the first 40 categories
    for iou_type, coco_eval in self.coco_eval.items():
        # save the original category ID list
        _origin_catIds = copy.deepcopy(coco_eval.params.catIds)

        catIds = coco_eval.params.catIds[:40]
        coco_eval.params.catIds = catIds
        coco_eval.accumulate()
        print(f"IoU metric: {iou_type}, First 40 Categories")
        coco_eval.summarize()

        # restore the original category ID list
        coco_eval.params.catIds = _origin_catIds

def summarize_last_40_classes(self):
    # compute metrics for the last 40 categories
    for iou_type, coco_eval in self.coco_eval.items():
        # save the original category ID list
        _origin_catIds = copy.deepcopy(coco_eval.params.catIds)

        catIds = coco_eval.params.catIds[-40:]
        coco_eval.params.catIds = catIds
        coco_eval.accumulate()
        print(f"IoU metric: {iou_type}, Last 40 Categories")
        coco_eval.summarize()

        # restore the original category ID list
        coco_eval.params.catIds = _origin_catIds

I changed the evaluation part of det_engine.py to:

if coco_evaluator is not None:
    coco_evaluator.accumulate()
    coco_evaluator.summarize()
    coco_evaluator.summarize_first_40_classes()  # first 40
    coco_evaluator.accumulate()  # rerun the full evaluation to check the restore
    coco_evaluator.summarize()   # rerun the full evaluation to check the restore
    coco_evaluator.summarize_last_40_classes()   # last 40

As you can see, the second full-category run matches the first, so the restore works; but the first-40 and last-40 results are still identical:
Accumulating evaluation results...
DONE (t=32.16s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.531
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.712
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.577
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.347
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.577
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.701
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.390
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.655
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.721
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.547
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.765
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.880
Accumulating evaluation results...
DONE (t=21.92s).
IoU metric: bbox, First 40 Categories
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.573
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.770
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.617
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.420
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.626
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.744
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.393
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.666
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.730
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.592
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.784
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.885
Accumulating evaluation results...
DONE (t=37.68s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.531
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.712
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.577
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.347
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.577
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.701
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.390
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.655
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.721
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.547
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.765
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.880
Accumulating evaluation results...
DONE (t=18.06s).
IoU metric: bbox, Last 40 Categories
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.573
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.770
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.617
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.420
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.626
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.744
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.393
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.666
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.730
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.592
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.784
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.885

To rule out the possibility that the first 40 and the last 40 classes genuinely score the same, I tried several slices: the first 40 classes [0:40:1] and the first 70 [0:70:1] give different results; the first 10 [0:10:1] and classes 11-20 [10:20:1] give the same result; the first 10 [0:10:1] and the last 10 [70:80:1] give the same result; and the first 40 [0:40:1] and every other class [0:80:2] give the same result. The pattern that emerges: whichever m classes with catIds [n, n+m] I ask for, the output is always the metrics for classes [0, m]. In other words, whenever catIds is sliced down to m classes for evaluation, the result is always the metrics of the first m of the 80 classes.
I also added print statements and confirmed that coco_eval.params.catIds really is [n, n+m] at that point, yet the reported metrics are those of [0, m]. I cannot tell where the problem lies; perhaps it is in pycocotools.cocoeval itself.
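
If so, the symptom matches a position-versus-ID mix-up: evalImgs is laid out over the original 80-class list (captured in _paramsEval when evaluate() ran), so if accumulate() computes its offset from a class's position in the new, shortened params.catIds, any m-class slice would read out the first m slots. A toy sketch of that suspicion (illustrative names only, not the actual pycocotools code):

orig_cat_ids = list(range(1, 81))   # layout used when evalImgs was built
new_cat_ids = orig_cat_ids[40:]     # e.g. restrict to the last 40 classes

for k0, cat_id in enumerate(new_cat_ids):               # k0 runs 0..39
    picked = orig_cat_ids[k0]                           # suspected lookup -> classes 1..40
    wanted = orig_cat_ids[orig_cat_ids.index(cat_id)]   # correct slot -> classes 41..80

which would return the first-m metrics for every m-class slice, exactly as observed.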

My code may have misled you, and perhaps there is another way to achieve what I need. Given these observations, do you have any suggestions?
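
One workaround I am considering, which avoids mutating params.catIds entirely: run accumulate() once over all classes, then read per-class numbers directly from coco_eval.eval['precision'], which pycocotools fills with shape [T, R, K, A, M] (IoU thresholds x recall points x categories x area ranges x maxDets). A minimal sketch, assuming the default params (area index 0 is 'all', maxDets index -1 is 100); the helper names are mine:

import numpy as np

def per_category_ap(coco_eval):
    # AP@[.50:.95 | area=all | maxDets=100] for each category,
    # taken from a single full accumulate()
    precision = coco_eval.eval['precision']   # [T, R, K, A, M]
    aps = {}
    for k, cat_id in enumerate(coco_eval.params.catIds):
        p = precision[:, :, k, 0, -1]
        p = p[p > -1]                         # -1 marks cells with no ground truth
        aps[cat_id] = float(np.mean(p)) if p.size else float('nan')
    return aps

def subset_map(coco_eval, cat_ids):
    # mean AP over an arbitrary subset, e.g. the first or last 40 categories
    idxs = [coco_eval.params.catIds.index(c) for c in cat_ids]
    p = coco_eval.eval['precision'][:, :, idxs, 0, -1]
    p = p[p > -1]
    return float(np.mean(p)) if p.size else float('nan')

Because the category axis here is indexed by position in the params.catIds that accumulate() actually ran with, slicing it directly sidesteps the offset problem described above.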
