By Zhe Cao, Tomas Simon, Shih-En Wei, Yaser Sheikh.
Code repo for the winner of the 2016 MSCOCO Keypoints Challenge, the 2016 ECCV Best Demo Award, and a 2017 CVPR oral paper.
Watch our video results on YouTube or on our website.
We present a bottom-up approach for realtime multi-person pose estimation, without using any person detector. For more details, refer to our CVPR'17 paper, our oral presentation video recording at CVPR 2017, or our presentation slides at the ILSVRC and COCO workshop 2016.
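To make the bottom-up idea concrete, here is a small illustrative sketch (plain NumPy, not the repository's code) of the core step from the paper: a candidate limb between two detected joints is scored by averaging the part affinity field's alignment with the segment joining them. The function name, sampling count, and toy field below are our own illustration, not the reference implementation.

```python
# Illustrative sketch of the core "bottom-up" idea (not the repository's code):
# given the 2-channel part affinity field (PAF) for one limb type, score a candidate
# connection between two detected joints by averaging the PAF's alignment with the
# segment joining them. Higher scores mean the joints likely belong to the same person.
import numpy as np

def limb_score(paf_x, paf_y, joint_a, joint_b, num_samples=10):
    """Average dot product between the PAF and the unit vector from joint_a to joint_b.

    paf_x, paf_y: HxW arrays holding the x/y components of the PAF for this limb type.
    joint_a, joint_b: (x, y) pixel coordinates of two candidate joints.
    """
    a = np.asarray(joint_a, dtype=np.float64)
    b = np.asarray(joint_b, dtype=np.float64)
    vec = b - a
    norm = np.linalg.norm(vec)
    if norm < 1e-6:
        return 0.0
    unit = vec / norm

    # Sample points along the segment and accumulate alignment with the field.
    scores = []
    for t in np.linspace(0.0, 1.0, num_samples):
        x, y = (a + t * vec).round().astype(int)
        scores.append(paf_x[y, x] * unit[0] + paf_y[y, x] * unit[1])
    return float(np.mean(scores))

# Toy usage: a PAF pointing to the right along a horizontal limb scores close to 1.
H, W = 64, 64
paf_x, paf_y = np.ones((H, W)), np.zeros((H, W))
print(limb_score(paf_x, paf_y, (10, 32), (50, 32)))  # ~1.0: strongly supported limb
```

In the full pipeline, these limb scores feed a greedy bipartite matching that assembles the detected joints into person instances without ever running a person detector.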
This project is licensed under the terms of the license file included in the repository.
Thank you all for your efforts on the reimplementations! If you have a new implementation and want to share it with others, feel free to make a pull request or email me!
Testing (Matlab):
1. Run cd testing; get_model.sh to retrieve our latest MSCOCO model from our web server.
2. Use config.m and run demo.m for an example usage.

Testing (Python):
1. cd testing/python
2. ipython notebook
3. Open demo.ipynb and execute the code (a minimal pycaffe sketch of the same inference flow follows below).
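For orientation, here is a rough sketch of the inference flow that demo.ipynb walks through, assuming pycaffe is built and importable. The model paths, preprocessing constants, and output blob names below are assumptions on our part; check demo.ipynb and the downloaded deploy prototxt for the exact values.

```python
# Hypothetical sketch of the inference step in demo.ipynb. The prototxt/caffemodel
# paths are placeholders for whatever get_model.sh downloads; the blob names are
# assumptions taken from the typical two-branch deploy prototxt.
import cv2
import numpy as np
import caffe

PROTOTXT   = 'model/coco/pose_deploy.prototxt'          # placeholder path
CAFFEMODEL = 'model/coco/pose_iter_440000.caffemodel'   # placeholder path

caffe.set_mode_cpu()  # or caffe.set_mode_gpu() if a GPU build is available
net = caffe.Net(PROTOTXT, CAFFEMODEL, caffe.TEST)

# Preprocess: BGR image, values roughly in [-0.5, 0.5], channel-first layout.
img = cv2.imread('sample_image.jpg').astype(np.float32) / 256.0 - 0.5
inp = img.transpose((2, 0, 1))[np.newaxis, ...]

# Reshape the input blob to this image size (the network is fully convolutional)
# and run a forward pass.
net.blobs['data'].reshape(*inp.shape)
net.blobs['data'].data[...] = inp
out = net.forward()

# The two-branch network outputs part-confidence heatmaps and part affinity fields;
# the output blob names depend on the deploy prototxt (these are assumptions).
heatmaps = out['Mconv7_stage6_L2']  # body-part heatmaps + background channel
pafs     = out['Mconv7_stage6_L1']  # x/y components of the part affinity fields
print(heatmaps.shape, pafs.shape)
```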
Training:
1. Run cd training; bash getData.sh to obtain the COCO images in dataset/COCO/images/, the keypoint annotations in dataset/COCO/annotations/, and the COCO official toolbox in dataset/COCO/coco/.
2. Run getANNO.m in matlab to convert the annotation format from json to mat in dataset/COCO/mat/.
3. Run genCOCOMask.m in matlab to obtain the mask images for unlabeled persons (a rough Python equivalent is sketched after this list). You can use 'parfor' in matlab to speed up the code.
4. Run genJSON('COCO') to generate a json file in the dataset/COCO/json/ folder. The json files contain the raw information needed for training.
5. Run python genLMDB.py to generate your LMDB. (You can also download our LMDB for the COCO dataset (189GB file) by: bash get_lmdb.sh)
6. Run python setLayers.py --exp 1 to generate the prototxt and shell file for training.
7. Run bash train_pose.sh 0,1 (generated by setLayers.py) to start the training with two GPUs.
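As noted in step 3 above, here is a minimal Python sketch of what the mask-generation step is for, assuming the standard pycocotools API (COCO, getAnnIds, loadAnns, annToMask). genCOCOMask.m in the repository is the authoritative implementation; the annotation path and the "unlabeled person" criterion used here are our assumptions.

```python
# Minimal sketch (not the repository's genCOCOMask.m): build a binary mask covering
# person regions that have no keypoint labels, plus crowd regions, so the training
# loss can ignore them instead of treating unannotated people as background.
# Assumes pycocotools is installed via the COCO toolbox fetched in step 1.
import numpy as np
from pycocotools.coco import COCO

ann_file = 'dataset/COCO/annotations/person_keypoints_train2014.json'  # assumed path
coco = COCO(ann_file)

img_id = coco.getImgIds()[0]                    # just the first image as an example
info = coco.loadImgs(img_id)[0]
mask_out = np.zeros((info['height'], info['width']), dtype=np.uint8)

for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
    # Crowd annotations and persons with no annotated keypoints are "unlabeled":
    # mark their segmentation so the loss can be suppressed there.
    if ann.get('iscrowd', 0) == 1 or ann.get('num_keypoints', 0) == 0:
        mask_out |= coco.annToMask(ann)

# mask_out == 1 marks regions to exclude from the loss for this image.
print(img_id, mask_out.sum())
```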
Please cite the paper in your publications if it helps your research:
@inproceedings{cao2017realtime,
author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
booktitle = {CVPR},
title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
year = {2017}
}
@inproceedings{wei2016cpm,
author = {Shih-En Wei and Varun Ramakrishna and Takeo Kanade and Yaser Sheikh},
booktitle = {CVPR},
title = {Convolutional pose machines},
year = {2016}
}