
Commit b565c15

[Model] Add tinypose single && pipeline model (#177)
* Add tinypose model
* Add PPTinypose python API
* Fix picodet preprocess bug && Add Tinypose examples
* Update tinypose example code
* Update ppseg preprocess if condition
* Update ppseg backend support type
* Update permute.h
* Update README.md
* Update code with comments
* Move files dir
* Delete premute.cc
* Add single model pptinypose
* Delete pptinypose old code in ppdet
* Code format
* Add ppdet + pptinypose pipeline model
* Fix bug for posedetpipeline
* Change Frontend to ModelFormat
* Change Frontend to ModelFormat in __init__.py
* Add python posedetpipeline/
* Update pptinypose example dir name
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Create keypointdetection_result.md
* Create README.md
* Create README.md
* Create README.md
* Update README.md
* Update README.md
* Create README.md
* Fix det_keypoint_unite_infer.py bug
* Create README.md
* Update PP-Tinypose by comment
* Update by comment
* Add pipeline directory
* Add pptinypose dir
* Update pptinypose to align accuracy
* Add warpAffine processor
* Update GetCpuMat to GetOpenCVMat
* Add comment for pptinypose && pipeline
* Update docs/main_page.md
* Add README.md for pptinypose
* Add README for det_keypoint_unite
* Remove ENABLE_PIPELINE option
* Remove ENABLE_PIPELINE option
* Change pptinypose default backend
* PP-TinyPose Pipeline support multi PP-Detection models
* Update pp-tinypose comment
* Update by comments
* Add single test example

Co-authored-by: Jason <[email protected]>
Parent: 49ab773

62 files changed, +2583 −20 lines

CMakeLists.txt (mode 100755 → 100644, +5 −2)

```diff
@@ -184,10 +184,11 @@ file(GLOB_RECURSE DEPLOY_TRT_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastde
 file(GLOB_RECURSE DEPLOY_OPENVINO_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/backends/openvino/*.cc)
 file(GLOB_RECURSE DEPLOY_LITE_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/backends/lite/*.cc)
 file(GLOB_RECURSE DEPLOY_VISION_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/vision/*.cc)
+file(GLOB_RECURSE DEPLOY_PIPELINE_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/pipeline/*.cc)
 file(GLOB_RECURSE DEPLOY_VISION_CUDA_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/vision/*.cu)
 file(GLOB_RECURSE DEPLOY_TEXT_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/text/*.cc)
 file(GLOB_RECURSE DEPLOY_PYBIND_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/pybind/*.cc ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/*_pybind.cc)
-list(REMOVE_ITEM ALL_DEPLOY_SRCS ${DEPLOY_ORT_SRCS} ${DEPLOY_PADDLE_SRCS} ${DEPLOY_POROS_SRCS} ${DEPLOY_TRT_SRCS} ${DEPLOY_OPENVINO_SRCS} ${DEPLOY_LITE_SRCS} ${DEPLOY_VISION_SRCS} ${DEPLOY_TEXT_SRCS})
+list(REMOVE_ITEM ALL_DEPLOY_SRCS ${DEPLOY_ORT_SRCS} ${DEPLOY_PADDLE_SRCS} ${DEPLOY_POROS_SRCS} ${DEPLOY_TRT_SRCS} ${DEPLOY_OPENVINO_SRCS} ${DEPLOY_LITE_SRCS} ${DEPLOY_VISION_SRCS} ${DEPLOY_TEXT_SRCS} ${DEPLOY_PIPELINE_SRCS})
 
 set(DEPEND_LIBS "")
 
@@ -389,6 +390,7 @@ if(ENABLE_VISION)
 list(APPEND DEPLOY_VISION_SRCS ${DEPLOY_VISION_CUDA_SRCS})
 endif()
 list(APPEND ALL_DEPLOY_SRCS ${DEPLOY_VISION_SRCS})
+list(APPEND ALL_DEPLOY_SRCS ${DEPLOY_PIPELINE_SRCS})
 include_directories(${PROJECT_SOURCE_DIR}/third_party/yaml-cpp/include)
 include(${PROJECT_SOURCE_DIR}/cmake/opencv.cmake)
 
@@ -586,7 +588,8 @@ if(BUILD_FASTDEPLOY_PYTHON)
 
 if(NOT ENABLE_VISION)
 file(GLOB_RECURSE VISION_PYBIND_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/vision/*_pybind.cc)
-list(REMOVE_ITEM DEPLOY_PYBIND_SRCS ${VISION_PYBIND_SRCS})
+file(GLOB_RECURSE PIPELINE_PYBIND_SRCS ${PROJECT_SOURCE_DIR}/${CSRCS_DIR_NAME}/fastdeploy/pipeline/*_pybind.cc)
+list(REMOVE_ITEM DEPLOY_PYBIND_SRCS ${VISION_PYBIND_SRCS} ${PIPELINE_PYBIND_SRCS})
 endif()
 
 if (NOT ENABLE_TEXT)
```

docs/api/vision_results/README.md (+2 −1)

```diff
@@ -6,8 +6,9 @@ Based on the vision model task type, FastDeploy defines different result structs (`fastd
 | :----- | :--- | :---- | :------- |
 | ClassifyResult | [C++/Python docs](./classification_result.md) | Image classification result | ResNet50, MobileNetV3, etc. |
 | SegmentationResult | [C++/Python docs](./segmentation_result.md) | Image segmentation result | PP-HumanSeg, PP-LiteSeg, etc. |
-| DetectionResult | [C++/Python docs](./detection_result.md) | Object detection result | PPYOLOE, YOLOv7 series models, etc. |
+| DetectionResult | [C++/Python docs](./detection_result.md) | Object detection result | PP-YOLOE, YOLOv7 series models, etc. |
 | FaceDetectionResult | [C++/Python docs](./face_detection_result.md) | Face detection result | SCRFD, RetinaFace series models, etc. |
+| KeyPointDetectionResult | [C++/Python docs](./keypointdetection_result.md) | Keypoint detection result | PP-Tinypose series models, etc. |
 | FaceRecognitionResult | [C++/Python docs](./face_recognition_result.md) | Face recognition result | ArcFace, CosFace series models, etc. |
 | MattingResult | [C++/Python docs](./matting_result.md) | Matting result | MODNet series models, etc. |
 | OCRResult | [C++/Python docs](./ocr_result.md) | Text box detection, classification, and text recognition result | OCR series models, etc. |
```
docs/api/vision_results/keypointdetection_result.md (new file, +45)

# KeyPointDetectionResult (Keypoint Detection Result)

The KeyPointDetectionResult struct is defined in `fastdeploy/vision/common/result.h` and describes the keypoint coordinates and confidences of the targets detected in an image.

## C++ Definition

`fastdeploy::vision::KeyPointDetectionResult`

```c++
struct KeyPointDetectionResult {
  std::vector<std::array<float, 2>> keypoints;
  std::vector<float> scores;
  int num_joints = -1;
  void Clear();
  std::string Str();
};
```

- **keypoints**: member variable; the keypoint coordinates of the detected targets. `keypoints.size() = N * J`, each element holding one coordinate pair:
  - `N`: number of targets in the image
  - `J`: `num_joints` (number of keypoints per target)
  - `2`: coordinate info `[x, y]`
- **scores**: member variable; the confidence of each detected keypoint. `scores.size() = N * J`
  - `N`: number of targets in the image
  - `J`: `num_joints` (number of keypoints per target)
- **num_joints**: member variable; the number of keypoints per target
- **Clear()**: member function; clears the results stored in the struct
- **Str()**: member function; returns the information in the struct as a string (for debugging)

## Python Definition

`fastdeploy.vision.KeyPointDetectionResult`

- **keypoints** (list of list(float)): member variable; the keypoint coordinates of the detected targets. `keypoints` holds `N * J` entries, each a coordinate pair:
  - `N`: number of targets in the image
  - `J`: `num_joints` (number of keypoints per target)
  - `2`: coordinate info `[x, y]`
- **scores** (list of float): member variable; the confidence of each detected keypoint. `scores` holds `N * J` values:
  - `N`: number of targets in the image
  - `J`: `num_joints` (number of keypoints per target)
- **num_joints** (int): member variable; the number of keypoints per target
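
A short sketch of reading this layout may help. It is not part of the commit; it only assumes a `KeyPointDetectionResult` that has already been filled by a keypoint model's `Predict` call, with exactly the fields shown above:

```c++
#include <cstdio>
#include "fastdeploy/vision.h"  // declares fastdeploy::vision::KeyPointDetectionResult

// Walk the flat N x J layout: keypoints[n * J + j] is the [x, y] pair for
// joint j of target n, and scores[n * J + j] is its confidence.
void PrintKeypoints(const fastdeploy::vision::KeyPointDetectionResult& res) {
  const int J = res.num_joints;
  if (J <= 0) return;
  const int N = static_cast<int>(res.keypoints.size()) / J;
  for (int n = 0; n < N; ++n) {
    for (int j = 0; j < J; ++j) {
      const auto& kp = res.keypoints[n * J + j];
      std::printf("target %d, joint %d: (%.1f, %.1f), conf %.3f\n",
                  n, j, kp[0], kp[1], res.scores[n * J + j]);
    }
  }
}
```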

docs/api_docs/cpp/main_page.md (mode 100755 → 100644, +1)

```diff
@@ -26,5 +26,6 @@ Currently, FastDeploy supported backends listed as below,
 | Task | Model | API | Example |
 | :---- | :---- | :---- | :----- |
 | object detection | PaddleDetection/PPYOLOE | [fastdeploy::vision::detection::PPYOLOE](./classfastdeploy_1_1vision_1_1detection_1_1PPYOLOE.html) | [C++](./)/[Python](./) |
+| keypoint detection | PaddleDetection/PPTinyPose | [fastdeploy::vision::keypointdetection::PPTinyPose](./classfastdeploy_1_1vision_1_1keypointdetection_1_1PPTinyPose.html) | [C++](./)/[Python](./) |
 | image classification | PaddleClassification serials | [fastdeploy::vision::classification::PaddleClasModel](./classfastdeploy_1_1vision_1_1classification_1_1PaddleClasModel.html) | [C++](./)/[Python](./) |
 | semantic segmentation | PaddleSegmentation serials | [fastdeploy::vision::classification::PaddleSegModel](./classfastdeploy_1_1vision_1_1segmentation_1_1PaddleSegModel.html) | [C++](./)/[Python](./) |
```
+7 −1

````diff
@@ -1,3 +1,9 @@
 # Keypoint Detection API
 
-comming soon...
+## fastdeploy.vision.keypointdetection.PPTinyPose
+
+```{eval-rst}
+.. autoclass:: fastdeploy.vision.keypointdetection.PPTinyPose
+    :members:
+    :inherited-members:
+```
````

docs/api_docs/python/vision_results_cn.md (+13)

```diff
@@ -40,6 +40,19 @@ API: `fastdeploy.vision.FaceDetectionResult`, this result returns:
 - **landmarks**(list of list(float)): member variable; the landmarks of all faces detected in a single image.
 - **landmarks_per_face**(int): member variable; the number of landmarks in each face box.
 
+## KeyPointDetectionResult
+KeyPointDetectionResult is defined in `fastdeploy/vision/common/result.h` and describes the keypoint coordinates and confidences of the targets detected in an image.
+
+API: `fastdeploy.vision.KeyPointDetectionResult`, this result returns:
+- **keypoints**(list of list(float)): member variable; the keypoint coordinates of the detected targets. `keypoints` holds `N * J` coordinate pairs:
+  - `N`: number of targets in the image
+  - `J`: num_joints (number of keypoints per target)
+  - `2`: coordinate info `[x, y]`
+- **scores**(list of float): member variable; the confidence of each detected keypoint. `scores.size() = N * J`
+  - `N`: number of targets in the image
+  - `J`: num_joints (number of keypoints per target)
+- **num_joints**(int): member variable; the number of keypoints per target
+
 
 ## FaceRecognitionResult
 FaceRecognitionResult is defined in `fastdeploy/vision/common/result.h` and describes the embedding of image features produced by the face recognition model.
```

docs/api_docs/python/vision_results_en.md (+13)

```diff
@@ -40,6 +40,19 @@ API: `fastdeploy.vision.FaceDetectionResult`, The FaceDetectionResult will retur
 - **landmarks**(list of list(float)): Member variables that represent the key points of all faces detected by a single image.
 - **landmarks_per_face**(int): Member variable indicating the number of key points in each face frame.
 
+## KeyPointDetectionResult
+The KeyPointDetectionResult code is defined in `fastdeploy/vision/common/result.h` and is used to indicate the coordinates and confidence of each keypoint of the targets detected in the image.
+
+API: `fastdeploy.vision.KeyPointDetectionResult`, The KeyPointDetectionResult will return:
+- **keypoints**(list of list(float)): Member variable, representing the keypoint coordinates of the detected targets. `keypoints` holds `N * J` coordinate pairs:
+  - `N`: number of targets in the image
+  - `J`: num_joints (number of keypoints per target)
+  - `2`: coordinate info `[x, y]`
+- **scores**(list of float): Member variable, representing the confidence of the keypoint coordinates of the detected targets. `scores.size() = N * J`
+  - `N`: number of targets in the image
+  - `J`: num_joints (number of keypoints per target)
+- **num_joints**(int): Member variable, representing the number of keypoints per target
+
 ## FaceRecognitionResult
 The FaceRecognitionResult code is defined in `fastdeploy/vision/common/result.h` and is used to indicate the embedding of the image features by the face recognition model.
```
examples/vision/README.md (+1)

```diff
@@ -8,6 +8,7 @@
 | Segmentation | Semantic segmentation: takes an image and returns the class and confidence of every pixel | [SegmentationResult](../../docs/api/vision_results/segmentation_result.md) |
 | Classification | Image classification: takes an image and returns its classification result and confidence | [ClassifyResult](../../docs/api/vision_results/classification_result.md) |
 | FaceDetection | Face detection: takes an image, detects face locations, and returns box coordinates and face landmarks | [FaceDetectionResult](../../docs/api/vision_results/face_detection_result.md) |
+| KeypointDetection | Keypoint detection: takes an image and returns the coordinates and confidence of each keypoint of the persons in it | [KeyPointDetectionResult](../../docs/api/vision_results/keypointdetection_result.md) |
 | FaceRecognition | Face recognition: takes an image and returns a face-feature embedding usable for similarity computation | [FaceRecognitionResult](../../docs/api/vision_results/face_recognition_result.md) |
 | Matting | Matting: takes an image and returns the alpha value of each foreground pixel | [MattingResult](../../docs/api/vision_results/matting_result.md) |
 | OCR | Text box detection, classification, and text recognition: takes an image and returns text box coordinates, text box orientation classes, and the text content inside the boxes | [OCRResult](../../docs/api/vision_results/ocr_result.md) |
```
New file (+17)

# Keypoint Detection Models

FastDeploy currently supports deploying two kinds of keypoint detection tasks (a usage sketch follows the tables below):

| Task | Description | Model format | Example | Version |
| :---| :--- | :--- | :------- | :--- |
| Single-person keypoint detection | Deploys the PP-TinyPose series of models; the input image contains only one person | Paddle | See the [tiny_pose directory](./tiny_pose/) | [Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
| Single-/multi-person keypoint detection | Deploys the cascaded PicoDet + PP-TinyPose task; the input image first goes through the detection model to obtain per-person crops, which are then passed to PP-TinyPose for keypoint detection | Paddle | See the [det_keypoint_unite directory](./det_keypoint_unite/) | [Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |

# Pretrained Model Preparation

This document provides the following pretrained models, which developers can download and use directly.

| Model | Description | Model format | Version |
| :--- | :--- | :------- | :--- |
| [PP-TinyPose-128x96](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_128x96_infer.tgz) | Single-person keypoint detection model | Paddle | [Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
| [PP-TinyPose-256x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz) | Single-person keypoint detection model | Paddle | [Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
| [PicoDet-S-Lcnet-Pedestrian-192x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_192x192_infer.tgz) + [PP-TinyPose-128x96](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_128x96_infer.tgz) | Single-person keypoint detection cascade configuration | Paddle | [Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
| [PicoDet-S-Lcnet-Pedestrian-320x320](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz) + [PP-TinyPose-256x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz) | Multi-person keypoint detection cascade configuration | Paddle | [Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
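
For orientation, a minimal C++ sketch of loading one of the single-person models above is shown here. It is not part of this commit; the `PPTinyPose` constructor arguments (model, params, and config file paths) and the `Predict` call are assumptions based on the `fastdeploy::vision::keypointdetection::PPTinyPose` API referenced in `docs/api_docs/cpp/main_page.md`:

```c++
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  // Assumed layout of the extracted PP_TinyPose_256x192_infer.tgz archive.
  const std::string dir = "PP_TinyPose_256x192_infer";
  auto model = fastdeploy::vision::keypointdetection::PPTinyPose(
      dir + "/model.pdmodel", dir + "/model.pdiparams", dir + "/infer_cfg.yml");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize PP-TinyPose." << std::endl;
    return -1;
  }
  auto im = cv::imread("000000018491.jpg");  // test image used in the examples
  fastdeploy::vision::KeyPointDetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }
  std::cout << res.Str() << std::endl;  // dump keypoints and scores
  return 0;
}
```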
New file (+38)

# PP-PicoDet + PP-TinyPose Joint Deployment (Pipeline)

## Model Version

- [PaddleDetection release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5)

FastDeploy currently supports deploying the following models:

- [PP-PicoDet + PP-TinyPose series models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)

## Preparing PP-TinyPose Deployment Models

For exporting PP-TinyPose and PP-PicoDet models, refer to the PaddleDetection documentation: [Model Export](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/EXPORT_MODEL.md)

**Note**: The exported inference model consists of three files, `model.pdmodel`, `model.pdiparams`, and `infer_cfg.yml`; FastDeploy reads the preprocessing configuration needed at inference time from the yaml file.

## Downloading Pretrained Models

For developers' convenience, some exported PP-PicoDet + PP-TinyPose (Pipeline) models are provided below and can be downloaded and used directly.

| Scenario | Model | Parameter file size | AP (business dataset) | AP (COCO Val single-/multi-person) | Single-/multi-person inference time (FP32) | Single-/multi-person inference time (FP16) |
|:-------------------------------|:--------------------------------- |:----- |:----- | :----- | :----- | :----- |
| Single-person configuration | [PicoDet-S-Lcnet-Pedestrian-192x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_192x192_infer.tgz) + [PP-TinyPose-128x96](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_128x96_infer.tgz) | 4.6MB + 5.3MB | 86.2% | 52.8% | 12.90ms | 9.61ms |
| Multi-person configuration | [PicoDet-S-Lcnet-Pedestrian-320x320](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz) + [PP-TinyPose-256x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz) | 4.6MB + 5.3MB | 85.7% | 49.9% | 47.63ms | 34.62ms |

**Notes**
- The accuracy of the keypoint detection models is measured on the detection boxes produced by the corresponding pedestrian detection model.
- Flip is disabled during accuracy testing, and the detection confidence threshold is 0.5.
- Speed is measured on a Qualcomm Snapdragon 865, with 4-thread arm8 inference.
- Pipeline latency includes the models' preprocessing, inference, and postprocessing.
- For a fair comparison, images with more than 6 people are excluded from the multi-person accuracy test.

For more information see the [PP-TinyPose official documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)

## Detailed Deployment Documents

- [Python deployment](python)
- [C++ deployment](cpp)
New file (+14): CMakeLists.txt for the C++ example

```cmake
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.12)

# Path to the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add the FastDeploy header directories
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/det_keypoint_unite_infer.cc)
# Link against the FastDeploy libraries
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
```
New file (+84)

# PP-PicoDet + PP-TinyPose (Pipeline) C++ Deployment Example

This directory provides `det_keypoint_unite_infer.cc`, a `single-image multi-person keypoint detection` example that deploys the multi-person configuration (PP-PicoDet + PP-TinyPose) on CPU/GPU, as well as on GPU with TensorRT acceleration. Follow the steps below to complete the deployment.

> **Note**: For standalone deployment of a single PP-TinyPose model, see [PP-TinyPose single model](../../tiny_pose/cpp/README.md).

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy environment requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the prebuilt deployment library and samples code matching your development environment; see [FastDeploy prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, run the following commands in this directory to build and test:

```bash
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-gpu-0.3.0.tgz
tar xvf fastdeploy-linux-x64-gpu-0.3.0.tgz
cd fastdeploy-linux-x64-gpu-0.3.0/examples/vision/keypointdetection/tiny_pose/cpp/
mkdir build
cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/../../../../../../../fastdeploy-linux-x64-gpu-0.3.0
make -j

# Download the PP-TinyPose and PP-PicoDet model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
tar -xvf PP_TinyPose_256x192_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
tar -xvf PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/000000018491.jpg

# CPU inference
./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 0
# GPU inference
./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 1
# TensorRT inference on GPU
./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 2
```

The visualized result looks like this:
<div align="center">
  <img src="https://user-images.githubusercontent.com/16222477/196393343-eeb6b68f-0bc6-4927-871f-5ac610da7293.jpeg" width="359" height="423" />
</div>

The commands above apply to Linux or macOS only. For using the SDK on Windows, see:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## PP-TinyPose C++ Interface

### PPTinyPose class

```c++
fastdeploy::pipeline::PPTinyPose(
    fastdeploy::vision::detection::PPYOLOE* det_model,
    fastdeploy::vision::keypointdetection::PPTinyPose* pptinypose_model)
```

Loads and initializes the PPTinyPose pipeline model.

**Parameters**

> * **det_model**(fastdeploy::vision::detection::PPYOLOE*): an initialized detection model, see [Detection](../../../detection/paddledetection/README.md); only the PaddleDetection series is supported for now
> * **pptinypose_model**(fastdeploy::vision::keypointdetection::PPTinyPose*): an initialized keypoint detection model, see [PP-TinyPose](../../tiny_pose/README.md)

#### Predict function

> ```c++
> PPTinyPose::Predict(cv::Mat* im, KeyPointDetectionResult* result)
> ```
>
> Model prediction interface: takes an image and directly returns the keypoint detection result.
>
> **Parameters**
>
> > * **im**: input image; note that it must be in HWC, BGR format
> > * **result**: keypoint detection result, including keypoint coordinates and the confidence of each keypoint; see [Vision model prediction results](../../../../../docs/api/vision_results/) for a description of KeyPointDetectionResult

### Class Member Attributes
#### Postprocessing Parameters
> > * **detection_model_score_threshold**(float): score threshold used to filter the detection boxes before they are fed to the PP-TinyPose model

- [Model description](../../)
- [Python deployment](../python)
- [Vision model prediction results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend](../../../../../docs/cn/faq/how_to_change_backend.md)
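
To make the interface above concrete, here is a minimal usage sketch. It is not the shipped `det_keypoint_unite_infer.cc`; the detector class name (`PicoDet`), the pipeline header path, and the single-model constructor arguments (model/params/config paths) are assumptions based on the API table in `docs/api_docs/cpp/main_page.md` and the constructor shown above:

```c++
#include <iostream>
#include "fastdeploy/vision.h"
#include "fastdeploy/pipeline.h"  // assumed header exposing fastdeploy::pipeline::PPTinyPose

int main() {
  namespace vis = fastdeploy::vision;
  // Assumed directory layout of the archives downloaded in the steps above.
  const std::string det_dir = "PP_PicoDet_V2_S_Pedestrian_320x320_infer";
  const std::string pose_dir = "PP_TinyPose_256x192_infer";

  auto det_model = vis::detection::PicoDet(
      det_dir + "/model.pdmodel", det_dir + "/model.pdiparams", det_dir + "/infer_cfg.yml");
  auto pose_model = vis::keypointdetection::PPTinyPose(
      pose_dir + "/model.pdmodel", pose_dir + "/model.pdiparams", pose_dir + "/infer_cfg.yml");

  // Pipeline: the detector proposes person boxes, PP-TinyPose runs on each crop.
  auto pipeline = fastdeploy::pipeline::PPTinyPose(&det_model, &pose_model);
  pipeline.detection_model_score_threshold = 0.5f;  // filter boxes before pose estimation

  auto im = cv::imread("000000018491.jpg");
  vis::KeyPointDetectionResult res;
  if (!pipeline.Predict(&im, &res)) {
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }
  std::cout << res.Str() << std::endl;
  return 0;
}
```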
