Real-Time 2D Pose Human Keypoint Detection (Python/Android/C++ Demo)



Contents

2D Pose Human Keypoint Detection (Python/Android/C++ Demo)

1. Human Keypoint Datasets

(1) COCO Dataset

(2) MPII Dataset

(3) Keypoint Diagrams

2. Human Keypoint Demo (Python Version)

3. Human Keypoint Demo (C++ Version)

4. Human Keypoint Demo (Android Version)

5. Human Keypoint Detection Results


Human keypoint detection, also known as human pose estimation, is a relatively fundamental task in computer vision and serves as an upstream task for human action recognition, behavior analysis, human-computer interaction, and more. It is usually subdivided into single-person/multi-person keypoint detection and 2D/3D keypoint detection; some algorithms also track the keypoints after detection, which is known as human pose tracking.

This post provides Python code for 2D Pose and C++ inference code, as well as an Android Demo APP. The Android Demo integrates a lightweight human detection model and human pose estimation model and runs in real time on an ordinary phone.

Recommended Repos

    Personal repo (Python version): https://github.com/PanJinquan/Human-Keypoints-Detection
    Personal repo (C++ version): https://github.com/PanJinquan/Human-Pose-Estimation-Lite-cpp
    Personal repo (Android version, CPU/GPU acceleration supported): https://github.com/PanJinquan/Human-Pose-Estimation-Lite-cpp/tree/master/data/app
    "Simple Baselines for Human Pose Estimation and Tracking": https://github.com/Microsoft/human-pose-estimation.pytorch
    "Deep High-Resolution Representation Learning for Human Pose Estimation": https://github.com/leoxiaobin/deep-high-resolution-net.pytorch

1. Human Keypoint Datasets

(1) COCO Dataset

Download: http://cocodataset.org/

COCO human keypoint annotations label at most 17 full-body keypoints per person; an image contains 2 people on average and at most 13 people.

Distribution of the number of annotated keypoints per person: people with 11-15 annotated keypoints are the most common (close to 70,000), followed by 6-10 (more than 40,000), then 16-17, 2-5, and 1.
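As a quick way to inspect these annotations, the sketch below loads a COCO person-keypoint annotation file with pycocotools and reads the per-person keypoints. The annotation file path is an assumption and depends on where you unpack the dataset.

# Minimal sketch: reading COCO person keypoint annotations with pycocotools.
from pycocotools.coco import COCO

ann_file = "annotations/person_keypoints_val2017.json"  # assumed local path
coco = COCO(ann_file)

# Images that contain at least one annotated person
person_cat_id = coco.getCatIds(catNms=["person"])[0]
img_ids = coco.getImgIds(catIds=[person_cat_id])

img_id = img_ids[0]
ann_ids = coco.getAnnIds(imgIds=[img_id], catIds=[person_cat_id], iscrowd=False)
for ann in coco.loadAnns(ann_ids):
    # 'keypoints' is a flat list [x1, y1, v1, ..., x17, y17, v17];
    # v=0: not labeled, v=1: labeled but not visible, v=2: labeled and visible
    print(ann["num_keypoints"], ann["keypoints"][:6])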

(2) MPII Dataset

Download: http://human-pose.mpi-inf.mpg.de/#download

MPII annotates 16 full-body keypoints per person together with their visibility. It contains 28,821 people in the training set and 11,701 in the test set, covering 409 human activities. Annotations are provided in MATLAB .mat struct format. Person boxes are annotated with a center and a scale, where the scale is defined relative to a 200-pixel person height (i.e., the person height in pixels divided by 200).
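Because the MPII scale is defined against a 200-pixel person height, a pixel-space box can be recovered from (center, scale) as sketched below; the square box shape is a simplifying assumption (real pipelines usually reshape it to the pose network's input aspect ratio), not part of the official annotation format.

# Minimal sketch: convert an MPII (center, scale) annotation into a pixel box.
def mpii_center_scale_to_box(center, scale, pixel_std=200.0):
    cx, cy = center
    side = scale * pixel_std              # person height in pixels
    return [cx - side / 2.0, cy - side / 2.0, side, side]   # [x, y, w, h]

# Example: center (300, 400), scale 1.5  ->  [150.0, 250.0, 300.0, 300.0]
print(mpii_center_scale_to_box((300, 400), 1.5))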

(3) Keypoint Diagrams

Keypoint definitions for each dataset (the keypoint diagram images from the original table are omitted):
COCO

# Pairs of keypoints that swap when the image is flipped horizontally (used for data augmentation during training)

flip_pairs=[[1, 2], [3, 4], [5, 6], [7, 8],[9, 10], [11, 12], [13, 14], [15, 16]]

# Keypoint index pairs forming the skeleton connections (used for drawing)

skeleton =[[15, 13], [13, 11], [16, 14], [14, 12], [11, 12], [5, 11], [6, 12], [5, 6], [5, 7], [6, 8], [7, 9], [8, 10], [0, 1], [0, 2], [1, 3], [2, 4]]

# Meaning of each keypoint index

"keypoints": { 0: "nose", 1: "left_eye", 2: "right_eye", 3: "left_ear", 4: "right_ear", 5: "left_shoulder", 6: "right_shoulder", 7: "left_elbow", 8: "right_elbow", 9: "left_wrist", 10: "right_wrist", 11: "left_hip", 12: "right_hip", 13: "left_knee", 14: "right_knee", 15: "left_ankle", 16: "right_ankle" }

MPII

# Pairs of keypoints that swap when the image is flipped horizontally (used for data augmentation during training)
flip_pairs = [[0, 5], [1, 4], [2, 3], [10, 15], [11, 14], [12, 13]]

# Keypoint index pairs forming the skeleton connections (used for drawing)

skeleton=[[0, 1], [1, 2], [3, 4], [4, 5], [2, 6], [6, 3], [12, 11], [7, 12], [11, 10], [13, 14], [14, 15], [8, 9], [8, 7], [6, 7], [7, 13]]

# Meaning of each keypoint index
"keypoints": {0: "r_ankle", 1: "r_knee", 2: "r_hip", 3: "l_hip", 4: "l_knee", 5: "l_ankle", 6: "pelvis", 7: "thorax", 8: "upper_neck", 9: "head_top", 10: " r_wrist", 11: "r_elbow", 12: "r_shoulder", 13: "l_shoulder", 14: "l_elbow", 15: "l_wrist" }

Human3.6M

Kinect
JointType_SpineBase = 0 # spine base
JointType_SpineMid = 1 # spine mid
JointType_Neck = 2 # neck
JointType_Head = 3 # head
JointType_ShoulderLeft = 4
JointType_ElbowLeft = 5
JointType_WristLeft = 6
JointType_HandLeft = 7
JointType_ShoulderRight = 8
JointType_ElbowRight = 9
JointType_WristRight = 10
JointType_HandRight = 11
JointType_HipLeft = 12
JointType_KneeLeft = 13
JointType_AnkleLeft = 14
JointType_FootLeft = 15
JointType_HipRight = 16
JointType_KneeRight = 17
JointType_AnkleRight = 18
JointType_FootRight = 19
JointType_SpineShoulder = 20
JointType_HandTipLeft = 21
JointType_ThumbLeft = 22
JointType_HandTipRight = 23
JointType_ThumbRight = 24
JointType_Count = 25
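To visualize predictions using the COCO definition above, the sketch below draws keypoints and skeleton lines with OpenCV. The (17, 3) keypoint layout with a per-point confidence score and the 0.3 score threshold are assumptions about a typical model output, not a format fixed by the repos above.

# Minimal sketch: draw COCO-style keypoints and skeleton with OpenCV.
import cv2
import numpy as np

SKELETON = [[15, 13], [13, 11], [16, 14], [14, 12], [11, 12], [5, 11], [6, 12],
            [5, 6], [5, 7], [6, 8], [7, 9], [8, 10], [0, 1], [0, 2], [1, 3], [2, 4]]

def draw_pose(image, kpts, thresh=0.3):
    # kpts: (17, 3) array of (x, y, score); low-score points are skipped
    for x, y, score in kpts:
        if score >= thresh:
            cv2.circle(image, (int(x), int(y)), 3, (0, 255, 0), -1)
    for a, b in SKELETON:
        if kpts[a][2] >= thresh and kpts[b][2] >= thresh:
            cv2.line(image, (int(kpts[a][0]), int(kpts[a][1])),
                     (int(kpts[b][0]), int(kpts[b][1])), (255, 0, 0), 2)
    return image

# Example on a blank canvas with dummy keypoints
canvas = np.zeros((480, 640, 3), dtype=np.uint8)
dummy = np.concatenate([np.random.randint(50, 400, (17, 2)), np.ones((17, 1))], axis=1)
cv2.imwrite("pose_demo.jpg", draw_pose(canvas, dummy))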

2. Human Keypoint Demo (Python Version)

git clone https://github.com/PanJinquan/Human-Keypoints-Detection  (inference code only)

python demo.py
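demo.py handles the full pipeline in the repo; the sketch below only illustrates the typical top-down flow such a demo follows (detect person, crop, run the pose network, map keypoints back). The detect_persons and pose_net callables are hypothetical placeholders, not functions exported by Human-Keypoints-Detection.

# Minimal sketch of a top-down 2D pose pipeline; detect_persons and pose_net
# are hypothetical placeholders, not functions from the repo.
import cv2

def estimate_poses(image, detect_persons, pose_net, input_size=(192, 256)):
    results = []
    for (x, y, w, h) in detect_persons(image):       # person boxes in pixels
        crop = cv2.resize(image[int(y):int(y + h), int(x):int(x + w)], input_size)
        kpts = pose_net(crop)                        # (17, 3): x, y in crop coords + score
        # Map keypoints from the resized crop back to original image coordinates
        kpts[:, 0] = kpts[:, 0] * (w / input_size[0]) + x
        kpts[:, 1] = kpts[:, 1] * (h / input_size[1]) + y
        results.append(kpts)
    return results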

3. Human Keypoint Demo (C++ Version)

git clone https://github.com/PanJinquan/Human-Pose-Estimation-Lite-cpp

bash build.sh

This is the C++ inference code for a lightweight human pose estimation (2D Pose) model; inference is done with the TNN framework.

The lightweight model is an improved version based on MobileNet V2.
It is trained on the COCO dataset; MPII data is also supported.
OpenCL inference acceleration is supported, and the model runs in real time on an ordinary phone.
The repo does not integrate a human detection model; the pose model takes the raw image as input. Cropping with a human detection box first gives better pose results.
For a lightweight human detection model, see Object-Detection-Lite-cpp.
The code is for learning and exchange only and has not been heavily optimized for performance.
Model training code is not provided yet.
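Most of these lightweight pose models output one heatmap per keypoint; the sketch below shows a common way to decode such heatmaps into keypoint coordinates with a plain argmax (without the quarter-offset refinement many implementations add). The heatmap shape and 4x output stride are assumptions about a typical setup, not values taken from this repo.

# Minimal sketch: decode (K, H, W) heatmaps into K keypoints via argmax.
import numpy as np

def decode_heatmaps(heatmaps, stride=4):
    num_kpts, h, w = heatmaps.shape
    kpts = np.zeros((num_kpts, 3), dtype=np.float32)
    for k in range(num_kpts):
        py, px = np.unravel_index(np.argmax(heatmaps[k]), (h, w))
        kpts[k] = (px * stride, py * stride, heatmaps[k, py, px])  # x, y, score
    return kpts

# Example: 17 random heatmaps of size 64x48 (as produced for a 256x192 input)
print(decode_heatmaps(np.random.rand(17, 64, 48)))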

4. Human Keypoint Demo (Android Version)

The Android Demo integrates a lightweight human detection model and human pose estimation model and runs in real time on an ordinary phone.
Android Demo inference time: about 70 ms on CPU and 50 ms on GPU.

  

5. Human Keypoint Detection Results