
On the problem of mAP/AP evaluating to 0 in eval.py - solved! (If it is not a class-name capitalization issue, the fixes in this post are worth trying)

文国发
2023-12-01

This post records a problem I ran into with eval.py: mAP and AP were computed as 0.

It started while learning SSD. Running detection with the trained weights worked fine, but running eval.py to compute mAP only ever returned 0.

I made two mistakes around this one problem. The fixes below are not limited to SSD's eval.py; they apply equally to the voc_eval.py found in other projects. It took me a whole evening of digging to sort this out, so if this helps you, please leave a like!

The two mistakes:

  • 1. Modifying the voc_eval function. AP and mAP come out as zero because npos, rec, tp, fp and related values are computed incorrectly, i.e. rec ends up as nan or 0, or tp stays at 0.
    • First, the voc_eval function code (my fixed version):
def voc_eval(detpath,
             annopath,
             imagesetfile,
             classname,
             cachedir,
             ovthresh=0.5,
             use_07_metric=False):
    if not os.path.isdir(cachedir):
        os.mkdir(cachedir)
    cachefile = os.path.join(cachedir, 'annots.pkl')
    # read list of images
    with open(imagesetfile, 'r') as f:
        lines = f.readlines()
    imagenames = [x.strip() for x in lines]
    # print(imagenames)
    if not os.path.isfile(cachefile):
        # load annots
        recs = {}
        for i, imagename in enumerate(imagenames):
            recs[imagename] = parse_rec(annopath % (imagename))
            if i % 100 == 0:
                print('Reading annotation for {:d}/{:d}'.format(
                   i + 1, len(imagenames)))
        # save
        print('Saving cached annotations to {:s}'.format(cachefile))
        with open(cachefile, 'wb') as f:
            pickle.dump(recs, f)
    else:
        # load
        with open(cachefile, 'rb') as f:
            recs = pickle.load(f)

    # extract gt objects for this class
    class_recs = {}
    npos = 0
    for imagename in imagenames:
        # print(imagename)
        # for obj in recs[imagename]:
        #     if obj['name'] == classname:
        #         print('obj[name]:', obj['name'], classname)

        R = [obj for obj in recs[imagename] if obj['name'] == classname]
        # print('R', len(R))
        # for obj in recs[imagename]:
        #     print('classname:', classname, 'obj[name]:', obj['name'])
        bbox = np.array([x['bbox'] for x in R])
        # difficult = np.array([x['difficult'] for x in R]).astype(np.bool)
        difficult = np.zeros(len(R)).astype(np.bool)
        det = [False] * len(R)
        # npos = npos + sum(~difficult)  # with no difficult field, npos is just the gt count
        npos = npos + len(R)  # len(R) is the number of ground-truth objects
        class_recs[imagename] = {'bbox': bbox,
                                 'difficult': difficult,
                                 'det': det}
    print('npos', npos)

    # read dets
    detfile = detpath.format(classname)
    # print(detfile)
    with open(detfile, 'r') as f:
        lines = f.readlines()
    if any(lines) == 1:

        splitlines = [x.strip().split(' ') for x in lines]
        image_ids = [x[0] for x in splitlines]
        confidence = np.array([float(x[1]) for x in splitlines])
        BB = np.array([[float(z) for z in x[2:]] for x in splitlines])

        # sort by confidence
        sorted_ind = np.argsort(-confidence)
        sorted_scores = np.sort(-confidence)
        BB = BB[sorted_ind, :]
        image_ids = [image_ids[x] for x in sorted_ind]

        # go down dets and mark TPs and FPs
        nd = len(image_ids)
        tp = np.zeros(nd)
        fp = np.zeros(nd)
        for d in range(nd):
            R = class_recs[image_ids[d]]
            bb = BB[d, :].astype(float)
            ovmax = -np.inf
            BBGT = R['bbox'].astype(float)
            if BBGT.size > 0:
                # compute overlaps
                # intersection
                ixmin = np.maximum(BBGT[:, 0], bb[0])
                iymin = np.maximum(BBGT[:, 1], bb[1])
                ixmax = np.minimum(BBGT[:, 2], bb[2])
                iymax = np.minimum(BBGT[:, 3], bb[3])
                iw = np.maximum(ixmax - ixmin, 0.)
                ih = np.maximum(iymax - iymin, 0.)
                inters = iw * ih
                uni = ((bb[2] - bb[0]) * (bb[3] - bb[1]) +
                       (BBGT[:, 2] - BBGT[:, 0]) *
                       (BBGT[:, 3] - BBGT[:, 1]) - inters)
                overlaps = inters / uni
                ovmax = np.max(overlaps)
                jmax = np.argmax(overlaps)

            if ovmax > ovthresh:
                if not R['difficult'][jmax]:
                    if not R['det'][jmax]:
                        tp[d] = 1.
                        R['det'][jmax] = 1
                    else:
                        fp[d] = 1.
                # fp[d] = 1.
            else:
                fp[d] = 1.

        # compute precision recall
        fp = np.cumsum(fp)
        tp = np.cumsum(tp)
        rec = tp / float(npos)
        print(fp, tp, rec)
        # avoid divide by zero in case the first detection matches a difficult
        # ground truth
        prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)
        ap = voc_ap(rec, prec, use_07_metric)
    else:
        rec = -1.
        prec = -1.
        ap = -1.

    return rec, prec, ap
    • The root cause is that, when the xml annotation files contain no difficult field, the line difficult = np.array([x['difficult'] for x in R]).astype(np.bool) fails. Without difficult, npos should simply be the number of ground-truth objects; if you just comment that line out, npos stays 0 and rec = tp / float(npos) breaks. The missing field also causes the following block:
            if not R['difficult'][jmax]:
                if not R['det'][jmax]:
                    tp[d] = 1.
                    R['det'][jmax] = 1
                else:
                    fp[d] = 1.

to raise an error. The fix is as follows: having no difficult field simply means difficult is all zeros, so replace that line with difficult = np.zeros(len(R)).astype(np.bool). The original npos = npos + sum(~difficult) line can then be changed or left alone; if you change it, just use len(R) in place of the sum, since len(R) is the number of ground-truth objects. With these changes, mAP should basically no longer come out as 0.
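As a minimal sketch (assuming R, npos and the surrounding per-image loop are exactly as in the voc_eval listing above), the change amounts to these lines:

# Before: fails when the xml files have no difficult field,
# or leaves npos at 0 if the line is simply commented out,
# which makes rec = tp / float(npos) nan and AP 0.
# difficult = np.array([x['difficult'] for x in R]).astype(np.bool)
# npos = npos + sum(~difficult)

# After: no difficult field means every object counts as non-difficult,
# so difficult is all False and npos is just the number of ground-truth boxes.
difficult = np.zeros(len(R)).astype(np.bool)
npos = npos + len(R)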

**Before applying these edits, please first read the links below so you roughly understand the code of eval.py or voc_eval.py and what the key statements mean; with that background the changes in this post will be much clearer.** voc_eval.py and eval.py are essentially the same, with hardly any difference.

YOLO visualization: loss, Avg IOU, P-R, mAP, Recall (the case without xml files)
How mAP is computed in Faster R-CNN/R-FCN (a walkthrough of voc_eval.py)

  • 2. The capitalization of the class names in the xml files does not match the classnames label list in the VOC code. Fixing this requires checking how the name field is transformed when the xml is read.
    • Concretely: batch-process the xml files so that every name uses a consistent case (all upper or all lower), then find the class-label list in the VOC code and make each class name match the name values in the xml (see the sketch below). For the SSD-pytorch voc0712, you also need to remove the .lower() from name = obj.find('name').text.lower().strip(); otherwise the name read from the xml is forced to lower case and must match the casing used in the class list.
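A minimal sketch of that batch normalization, assuming a hypothetical Annotations/ directory of VOC-style xml files and that lower-case names are the target format (adjust the path and casing to your own project):

import os
import xml.etree.ElementTree as ET

ann_dir = 'Annotations'  # hypothetical path to the VOC-style xml annotation files

for fname in os.listdir(ann_dir):
    if not fname.endswith('.xml'):
        continue
    path = os.path.join(ann_dir, fname)
    tree = ET.parse(path)
    for obj in tree.getroot().iter('object'):
        name = obj.find('name')
        # unify the case so it matches the classnames list used by eval.py
        name.text = name.text.strip().lower()
    tree.write(path)  # rewrite the annotation file in place

Whichever case you pick, make sure the class-label list in the VOC code uses exactly the same spelling; if you keep mixed-case names instead, remove the .lower() in voc0712 as described above.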