Download PyTorch from pytorch/previous-versions.
conda create -n botsort_env python=3.7
conda activate botsort_env
git clone https://github.com/NirAharon/BoT-SORT.git
cd BoT-SORT
pip3 install -r requirements.txt
python3 setup.py develop
pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
# cuda=10.1; the library requires pytorch>1.6
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch
# Cython-bbox
pip3 install cython_bbox
# faiss cpu / gpu
pip3 install faiss-cpu
pip3 install faiss-gpu
If you hit the following error during installation:
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lap
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
then install the lap library directly with conda:
conda install -c conda-forge lap
If this error appears when deactivating the environment:
deactivate does not accept arguments
remainder_args: ['py37']
then simply use:
conda deactivate
1. Download the datasets in the background with wget, into space/datasets/:
nohup wget https://motchallenge.net/data/MOT17.zip &
nohup wget https://motchallenge.net/data/MOT20.zip &
unzip MOT17.zip
unzip MOT20.zip # extract
If extraction fails with the error below, the archive is incomplete and must be re-downloaded.
Archive: instantclient-basic-linux.x64-11.2.0.4.0.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
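A truncated download can be caught before unzipping; below is a minimal sketch using Python's standard zipfile module (the helper name and the example usage are my own):

```python
import zipfile

def archive_ok(path):
    """Return True if `path` is a complete, readable zip archive."""
    if not zipfile.is_zipfile(path):  # checks for the end-of-central-directory signature
        return False
    with zipfile.ZipFile(path) as zf:
        return zf.testzip() is None  # testzip returns the first corrupt member, or None

# archive_ok('MOT17.zip')  -> False means the download was truncated; re-download it
```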
2. Preprocess the datasets; the processed data goes under BOT-SORT/fast-reid/datasets/MOT17_reid:
cd space/BOT-SORT
# For MOT17
python3 fast_reid/datasets/generate_mot_patches.py --data_path ../datasets --mot 17
# For MOT20
python3 fast_reid/datasets/generate_mot_patches.py --data_path ../datasets --mot 20
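Conceptually, the patch-generation step crops every ground-truth box out of its frame and saves it as a ReID training image. A simplified sketch of that cropping step (not the script's actual code; frames are assumed to be numpy arrays and gt rows MOT-formatted as frame, id, x, y, w, h, ...):

```python
import numpy as np

def crop_patch(frame, gt_row):
    """Crop one ground-truth box (MOT format: frame, id, x, y, w, h, ...) from an image."""
    x, y, w, h = (int(round(v)) for v in gt_row[2:6])
    H, W = frame.shape[:2]
    # clip the box to the image bounds before slicing
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(W, x + w), min(H, y + h)
    return frame[y0:y1, x0:x1]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # dummy 1080p frame
patch = crop_patch(frame, [1, 7, 100.0, 200.0, 50.0, 120.0, 1, 1, 1.0])
print(patch.shape)  # (120, 50, 3)
```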
Download the pretrained models MOT17-SBS-S50 and MOT20-SBS-S50 locally and upload them to BOT-SORT/pretrained (downloading them on the server directly with wget is too slow).
Train ReID on the first half of the images/frames of MOT17/FRCNN-*; for MOT20, likewise use the first half of the images. Results are written to ./logs/MOT17/sbs-S50.
cd <BoT_SORT-dir>
conda activate botsort_env
python3 fast_reid/tools/train_net.py --config-file ./fast_reid/configs/MOT17/sbs_S50.yml MODEL.DEVICE "cuda:0"
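The half-split convention used here (first half of each sequence for ReID training, second half held out as val) can be sketched as follows, assuming 1-based frame ids as in the MOT ground-truth files:

```python
def half_split(seq_length):
    """Split 1-based frame ids into a training half and a validation half."""
    val_start = seq_length // 2
    train_frames = range(1, val_start + 1)              # 1 .. seqLength//2
    val_frames = range(val_start + 1, seq_length + 1)   # seqLength//2+1 .. seqLength
    return train_frames, val_frames

train_f, val_f = half_split(600)
print(list(train_f)[:3], list(val_f)[:3])  # [1, 2, 3] [301, 302, 303]
```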
1. Error during training:
TypeError: register_buffer() takes 3 positional arguments but 4 were given
This means the pretrained model MOT17-SBS-S50 from the previous step was not downloaded; download it and place it under the pretrained path.
2. Dataset not found:
The current data structure is deprecated.
Please put data folders such as "bounding_box_train" under "MOT17-ReID"
This means the data folders are not where the dataloader expects them. According to fast_reid/fastreid/data/datasets/mot17.py and mot20.py, MOT17_ReID should be placed under fast_reid/datasets and renamed to MOT17-ReID; MOT20_ReID should be placed under fast_reid/datasets/MOT20 and renamed to MOT20-ReID.
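Before relaunching training, the layout can be checked with a few lines of stdlib Python; the folder names below are taken from the error message and the note above, and the helper itself is just a sketch:

```python
import os

def check_reid_layout(root='fast_reid/datasets'):
    """Verify the ReID data folders sit where the BoT-SORT dataloaders look for them."""
    expected = [
        os.path.join(root, 'MOT17-ReID', 'bounding_box_train'),
        os.path.join(root, 'MOT20', 'MOT20-ReID', 'bounding_box_train'),
    ]
    missing = [p for p in expected if not os.path.isdir(p)]
    for p in missing:
        print('missing:', p)
    return not missing
```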
3. The pretrained weights download too slowly:
Downloading: "https://github.com/zhanghang1989/ResNeSt/releases/download/weights_step1/resnest50-528c19ca.pth" to /home/zhw/.cache/torch/hub/checkpoints/resnest50-528c19ca.pth
Download resnest50-528c19ca.pth locally, upload it to the server via Xftp, then move it into the .cache path with mv in Xshell:
mv /Users/Desktop/resnest50-528c19ca.pth /home/zhw/.cache/torch/hub/checkpoints
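The target directory is PyTorch's hub checkpoint cache, which can be relocated with the TORCH_HOME environment variable; a simplified stdlib helper to print where the file must go (PyTorch also honours XDG_CACHE_HOME, which is omitted here):

```python
import os

def torch_hub_checkpoint_dir():
    """Where torch.hub stores downloaded checkpoints (simplified lookup)."""
    torch_home = os.environ.get('TORCH_HOME', os.path.expanduser('~/.cache/torch'))
    return os.path.join(torch_home, 'hub', 'checkpoints')

print(torch_hub_checkpoint_dir())
```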
4. Training keeps getting interrupted before it finishes:
Connection closing...Socket close.
Connection closed by foreign host.
Use resume_or_load(resume=args.resume) in fast_reid/fastreid/engine/defaults.py and set args.resume=True; training then resumes from the last checkpoint:
parser.add_argument(
    "--resume", default=True,
    action="store_true",
    help="whether to attempt to resume from the checkpoint directory",
)
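Note that combining default=True with action="store_true" means the flag can never be switched off from the command line: omitting --resume falls back to the default (True), and passing it stores True. A quick check:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--resume", default=True,
    action="store_true",
    help="whether to attempt to resume from the checkpoint directory",
)
print(parser.parse_args([]).resume)            # True: the default applies
print(parser.parse_args(["--resume"]).resume)  # True: the flag stores True
```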
5. 'DefaultTrainer' has no attribute 'self.epoch'
Cause unknown; running the unmodified train_net.py does not trigger this error.
6. 'Address already in use'
The port is already occupied and there is no permission to kill the occupying process.
Close Xshell and reconnect.
7. Training is too slow:
Use single-machine multi-GPU parallelism to train on several GPUs at once:
# For training MOT17
python3 fast_reid/tools/train_net.py --config-file ./fast_reid/configs/MOT17/sbs_S50.yml --num-gpus=2
# For training MOT20
python3 fast_reid/tools/train_net.py --config-file ./fast_reid/configs/MOT20/sbs_S50.yml --num-gpus=2
Training with 3 GPUs gets interrupted frequently; 2 GPUs works best.
1.MOT17
Download bytetrack_x_mot17.pth.tar from ByteTrack in advance.
In tools/track.py, set --fast-reid-weights=r"logs/MOT17/sbs_S50/model_final.pth" and --fast-reid-config=r"logs/MOT17/sbs_S50/config.yaml".
- test: track the test set with the trained network:
cd <BoT-SORT_dir>
python3 tools/track.py <dataets_dir/MOT17> --default-parameters --with-reid --benchmark "MOT17" --eval "test" --fp16 --fuse
python3 tools/interpolation.py --txt_path <path_to_track_result>
# concrete example
python3 tools/track.py ../datasets/MOT17 --default-parameters --with-reid --benchmark "MOT17" --eval "test" --fp16 --fuse --fast-reid-config=r"logs/MOT17/sbs_S50/config.yaml" --fast-reid-weights=r"logs/MOT17/sbs_S50/model_final.pth"
python3 tools/interpolation.py --txt_path ./YOLOX_output/yolo_x_mix_det/track_results
<path_to_track_result>=./YOLOX_output/yolo_x_mix_det/track_results
<dataets_dir/MOT17>=../datasets/MOT17
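tools/interpolation.py post-processes the result files by filling in frames where a track temporarily disappears. The core idea, linearly interpolating the box between the two surrounding detections, can be sketched like this (a simplified stand-in, not the script's actual code; rows follow the MOT layout frame, id, x, y, w, h):

```python
def interpolate_gap(row_a, row_b):
    """Fill the frames between two detections of one track by linear interpolation.

    row_a / row_b: [frame, id, x, y, w, h], with row_b[0] > row_a[0] + 1.
    Returns one interpolated row per missing frame.
    """
    f0, f1 = int(row_a[0]), int(row_b[0])
    rows = []
    for f in range(f0 + 1, f1):
        t = (f - f0) / (f1 - f0)
        box = [(1 - t) * a + t * b for a, b in zip(row_a[2:6], row_b[2:6])]
        rows.append([f, row_a[1]] + box)
    return rows

# track 3 is missing in frames 11 and 12; fill them in
filled = interpolate_gap([10, 3, 100, 50, 40, 80], [13, 3, 130, 50, 40, 80])
for r in filled:
    print(r)
```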
- val: track the val split; val is the second half of the frames/images of each MOT/train sequence. Download bytetrack_ablation.pth.tar in advance.
cd <BoT-SORT_dir>
# BoT-SORT
python3 tools/track.py <dataets_dir/MOT17> --default-parameters --benchmark "MOT17" --eval "val" --fp16 --fuse
# concrete example
python3 tools/track.py ../datasets/MOT17 --default-parameters --benchmark "MOT17" --eval "val" --fp16 --fuse
# BoT-SORT-ReID
python3 tools/track.py ../datasets/MOT17 --default-parameters --with-reid --benchmark "MOT17" --eval "val" --fp16 --fuse
Results are written to ./YOLOX_output/yolo_x_ablation/track_results.
2.MOT20
MOT20 works the same way; download bytetrack_x_mot20.pth.tar in advance and place it under pretrained:
python3 tools/track.py ../datasets/MOT20 --default-parameters --with-reid --benchmark "MOT20" --eval "test" --fp16 --fuse --fast-reid-config=r"logs/MOT20/sbs_S50/config.yaml" --fast-reid-weights=r"logs/MOT20/sbs_S50/model_final.pth"
python3 tools/interpolation.py --txt_path ./YOLOX_output/yolo_x_mix_det/track_results
val: move MOT20-01, 02, 03 and 05 from tracker/GMC-file/MOTchallenge into tracker/GMC_file/MOT17_ablation.
cd <BoT-SORT_dir>
# BoT-SORT
python3 tools/track.py ../datasets/MOT20 --default-parameters --benchmark "MOT20" --eval "val" --fp16 --fuse --fast-reid-config=r"logs/MOT20/sbs_S50/config.yaml" --fast-reid-weights=r"logs/MOT20/sbs_S50/model_final.pth"
# BoT-SORT-ReID
python3 tools/track.py ../datasets/MOT20 --default-parameters --with-reid --benchmark "MOT20" --eval "val" --fp16 --fuse --fast-reid-config=r"logs/MOT20/sbs_S50/config.yaml" --fast-reid-weights=r"logs/MOT20/sbs_S50/model_final.pth"
Results are written to ./YOLOX_output/yolo_x_mix_mot20_ch/track_results.
First convert the image sequence into a video, then run tracking on the video:
##########img2video.py#####################
import os
import cv2

img_path = '../datasets/MOT17/test/MOT17-14-SDP/img1'
# Read one image to get the frame size for the video
img = cv2.imread('../datasets/MOT17/test/MOT17-14-SDP/img1/000001.jpg')
imgInfo = img.shape
size = (imgInfo[1], imgInfo[0])
# Number of images in the folder, used to drive the loop
img_nums = len(os.listdir(img_path))
fourcc = cv2.VideoWriter_fourcc('M', 'P', '4', 'V')  # .mp4
# fourcc = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')  # .avi
# VideoWriter arguments: 1. file name 2. codec 3. frame rate 4. frame size
video_path = './YOLOX_outputs/yolox_x_mix_det/track_vis/'
videoWrite = cv2.VideoWriter(
    os.path.join(video_path, 'test_MOT17-14-SDP_video.mp4'), fourcc, 25, size)
# Read every image in order and write it into the video at 25 fps.
# Frame files are assumed to be named with six-digit zero padding (e.g. 000001.jpg).
for i in range(img_nums):
    filename = os.path.join(img_path, '{:06d}.jpg'.format(i + 1))
    img = cv2.imread(filename)
    videoWrite.write(img)
videoWrite.release()
print("Process finished")
Put the video under ./YOLOX_output/yolo_x_ablation/track_vis; bytetrack_x_mot17.pth.tar is downloaded from ByteTrack.
cd <BoT-SORT_dir>
# Original example
python3 tools/demo.py video --path <path_to_video> -f yolox/exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --with-reid --fuse-score --fp16 --fuse --save_result
python3 tools/demo.py image --path <path_to_video> -f yolox/exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --with-reid --fuse-score --fp16 --fuse --save_result
python3 tools/demo_mot20.py video --path ./YOLOX_outputs/yolox_x_mix_mot20_ch/track_vis/test_MOT20_08_video.mp4 -f yolox/exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot20.tar --with-reid --fuse-score --fp16 --fuse --save_result --fast-reid-config=r"logs/MOT20/sbs_S50/config.yaml" --fast-reid-weights=r"logs/MOT20/sbs_S50/model_final.pth"
# Multi-class example
python3 tools/mc_demo.py video --path <path_to_video> -f yolox/exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --with-reid --fuse-score --fp16 --fuse --save_result
<path_to_video> must end in .mp4; here <path_to_video>=./YOLOX_output/yolo_x_ablation/track_vis/test_MOT17-14-SDP_video.mp4.
The tracked video is saved to ./YOLOX_output/yolo_x_ablation/track_vis/<current timestamp>/test_MOT17-14-SDP_video.mp4.
Evaluate the val tracking results (./YOLOX_output/yolo_x_ablation/track_results/) against the ground truth. First generate a gt_val_half.txt from the ground truth of the second half of every MOT17/train sequence; each file is placed under MOT17/train/<seq>/gt/:
###################gen_gt_val_half.py###################
import os
import os.path as osp
import numpy as np
from tqdm import tqdm

def mkdir_if_missing(d):
    if osp.isfile(d):
        d = osp.dirname(d)
    if not osp.exists(d):
        os.makedirs(d)

data_root = osp.join('../datasets')  # root directory of the dataset
gt_folder = osp.join(data_root, 'MOT17/train')
seqs_str = '''MOT17-02-DPM
MOT17-04-DPM
MOT17-05-DPM
MOT17-09-DPM
MOT17-10-DPM
MOT17-11-DPM
MOT17-13-DPM
MOT17-02-FRCNN
MOT17-04-FRCNN
MOT17-05-FRCNN
MOT17-09-FRCNN
MOT17-10-FRCNN
MOT17-11-FRCNN
MOT17-13-FRCNN
MOT17-02-SDP
MOT17-04-SDP
MOT17-05-SDP
MOT17-09-SDP
MOT17-10-SDP
MOT17-11-SDP
MOT17-13-SDP
'''
seqs = [seq.strip() for seq in seqs_str.split()]

def gen_gt_val():
    for seq in tqdm(seqs):
        print('start seq {}'.format(seq))
        # read the sequence length from seqinfo.ini
        seq_info = open(osp.join(gt_folder, seq, 'seqinfo.ini')).read()
        seqLength = int(seq_info[seq_info.find('seqLength=') + 10:seq_info.find('\nimWidth')])
        print(seqLength)
        gt_txt = osp.join(gt_folder, seq, 'gt', 'gt.txt')
        gt = np.loadtxt(gt_txt, dtype=np.float64, delimiter=',')
        gt = sorted(gt, key=lambda x: x[0])  # sort annotations by frame id
        save_val_gt = osp.join(gt_folder, seq, 'gt', 'gt_val_half.txt')
        val_start = seqLength // 2
        print(val_start)
        with open(save_val_gt, 'w') as f:
            for obj in gt:
                label_str = '{:d},{:d},{:d},{:d},{:d},{:d},{:d},{:d},{:.6f}\n'.format(
                    int(obj[0]), int(obj[1]), int(obj[2]), int(obj[3]),
                    int(obj[4]), int(obj[5]), int(obj[6]), int(obj[7]), obj[8])
                if obj[0] > val_start + 1:  # keep only annotations from the val (second) half
                    f.write(label_str)

if __name__ == '__main__':
    gen_gt_val()
Evaluate MOTA on the MOT17 val tracking results (mota_mot17.py):
from loguru import logger
import os
import glob
import motmetrics as mm
from collections import OrderedDict
from pathlib import Path

def compare_dataframes(gts, ts):
    accs = []
    names = []
    for k, tsacc in ts.items():
        if k in gts:
            logger.info('Comparing {}...'.format(k))
            accs.append(mm.utils.compare_to_groundtruth(gts[k], tsacc, 'iou', distth=0.5))
            names.append(k)
        else:
            logger.warning('No ground truth for {}, skipping.'.format(k))
    return accs, names

# evaluate MOTA
results_folder = './YOLOX_outputs/yolox_x_ablation/track_results'
mm.lap.default_solver = 'lap'
gt_type = '_val_half'
# gt_type = ''
print('gt_type', gt_type)
gtfiles = glob.glob(
    os.path.join('../datasets/MOT17/train', '*/gt/gt{}.txt'.format(gt_type)))
print('gt_files', gtfiles)
tsfiles = [f for f in glob.glob(os.path.join(results_folder, '*.txt'))
           if not os.path.basename(f).startswith('eval')]
logger.info('Found {} groundtruths and {} test files.'.format(len(gtfiles), len(tsfiles)))
logger.info('Available LAP solvers {}'.format(mm.lap.available_solvers))
logger.info('Default LAP solver \'{}\''.format(mm.lap.default_solver))
logger.info('Loading files.')
# ground truth is keyed by sequence name (third-from-last path component);
# results are keyed by file name with the .txt extension stripped
gt = OrderedDict([(Path(f).parts[-3], mm.io.loadtxt(f, fmt='mot15-2D', min_confidence=1)) for f in gtfiles])
ts = OrderedDict([(os.path.splitext(Path(f).parts[-1])[0], mm.io.loadtxt(f, fmt='mot15-2D', min_confidence=-1.0)) for f in tsfiles])
mh = mm.metrics.create()
accs, names = compare_dataframes(gt, ts)
logger.info('Running metrics')
metrics = ['recall', 'precision', 'num_unique_objects', 'mostly_tracked',
           'partially_tracked', 'mostly_lost', 'num_false_positives', 'num_misses',
           'num_switches', 'num_fragmentations', 'mota', 'motp', 'num_objects']
summary = mh.compute_many(accs, names=names, metrics=metrics, generate_overall=True)
# summary = mh.compute_many(accs, names=names, metrics=mm.metrics.motchallenge_metrics, generate_overall=True)
# report the raw count metrics as ratios of the relevant totals
div_dict = {
    'num_objects': ['num_false_positives', 'num_misses', 'num_switches', 'num_fragmentations'],
    'num_unique_objects': ['mostly_tracked', 'partially_tracked', 'mostly_lost']}
for divisor in div_dict:
    for divided in div_dict[divisor]:
        summary[divided] = (summary[divided] / summary[divisor])
fmt = mh.formatters
change_fmt_list = ['num_false_positives', 'num_misses', 'num_switches', 'num_fragmentations',
                   'mostly_tracked', 'partially_tracked', 'mostly_lost']
for k in change_fmt_list:
    fmt[k] = fmt['mota']
print(mm.io.render_summary(summary, formatters=fmt, namemap=mm.io.motchallenge_metric_names))
metrics = mm.metrics.motchallenge_metrics + ['num_objects']
summary = mh.compute_many(accs, names=names, metrics=metrics, generate_overall=True)
print(mm.io.render_summary(summary, formatters=mh.formatters, namemap=mm.io.motchallenge_metric_names))
logger.info('Completed')
To run the evaluation:
python tools/gen_gt_val_half.py
python tools/mota_mot17.py
python3 yolox/train.py -f yolox/exps/example/mot/yolox_x_ablation.py -d 2 -b 4 --fp16 -o -c pretrained/yolox_x.pth
python3 tools/demo.py video --path /home/sdc/zhuhuiwen_space/BOT-SORT/YOLOX_outputs/yolox_x_mix_det/track_vis/test_MOT17-14-SDP_video.mp4 -f yolox/exps/example/mot/yolox_x_mix_det.py -c /home/sdc/zhuhuiwen_space/BOT-SORT/YOLOX_outputs/yolox_x_ablation/mix_mot_ch/best_ckpt.pth.tar --fuse-score --fp16 --fuse --save_result