
Commit 870551f

[Docs] Improve docs related to Ascend inference (#1227)
* Add Readme for vision results
* Add Readme for vision results
* Add Readme for vision results
* Add Readme for vision results
* Add Readme for vision results
* Add Readme for vision results
* Add Readme for vision results
* Add Readme for vision results
* Add Readme for vision results
* Add Readme for vision results
* Add comments to create API docs
* Improve OCR comments
* fix conflict
* Fix OCR Readme
* Fix PPOCR readme
* Fix PPOCR readme
* fix conflict
* Improve ascend readme
* Improve ascend readme
* Improve ascend readme
* Improve ascend readme
1 parent 522e96b commit 870551f

File tree: 12 files changed (+106 / -58 lines)

docs/cn/build_and_install/huawei_ascend.md

Lines changed: 10 additions & 2 deletions
@@ -118,5 +118,13 @@


 ## VI. Ascend Deployment Demo Reference
-- For deploying the PaddleClas classification model with C++ on Huawei Ascend NPU, refer to: [PaddleClas Huawei Ascend NPU C++ Deployment Example](../../../examples/vision/classification/paddleclas/cpp/README.md)
-- For deploying the PaddleClas classification model with Python on Huawei Ascend NPU, refer to: [PaddleClas Huawei Ascend NPU Python Deployment Example](../../../examples/vision/classification/paddleclas/python/README.md)
+
+| Model Series | C++ Deployment Example | Python Deployment Example |
+| :----------- | :--------------------- | :------------------------ |
+| PaddleClas | [Ascend NPU C++ Deployment Example](../../../examples/vision/classification/paddleclas/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/classification/paddleclas/python/README_CN.md) |
+| PaddleDetection | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/paddledetection/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/paddledetection/python/README_CN.md) |
+| PaddleSeg | [Ascend NPU C++ Deployment Example](../../../examples/vision/segmentation/paddleseg/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/segmentation/paddleseg/python/README_CN.md) |
+| PaddleOCR | [Ascend NPU C++ Deployment Example](../../../examples/vision/ocr/PP-OCRv3/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/ocr/PP-OCRv3/python/README_CN.md) |
+| Yolov5 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov5/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov5/python/README_CN.md) |
+| Yolov6 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov6/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov6/python/README_CN.md) |
+| Yolov7 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov7/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov7/python/README_CN.md) |

docs/en/build_and_install/huawei_ascend.md

Lines changed: 9 additions & 3 deletions
@@ -117,6 +117,12 @@


 ## Deployment demo reference
-- Deploying PaddleClas Classification Model on Huawei Ascend NPU using C++ please refer to: [PaddleClas Huawei Ascend NPU C++ Deployment Example](../../../examples/vision/classification/paddleclas/cpp/README.md)
-
-- Deploying PaddleClas classification model on Huawei Ascend NPU using Python please refer to: [PaddleClas Huawei Ascend NPU Python Deployment Example](../../../examples/vision/classification/paddleclas/python/README.md)
+| Model | C++ Example | Python Example |
+| :---- | :---------- | :------------- |
+| PaddleClas | [Ascend NPU C++ Example](../../../examples/vision/classification/paddleclas/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/classification/paddleclas/python/README.md) |
+| PaddleDetection | [Ascend NPU C++ Example](../../../examples/vision/detection/paddledetection/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/paddledetection/python/README.md) |
+| PaddleSeg | [Ascend NPU C++ Example](../../../examples/vision/segmentation/paddleseg/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/segmentation/paddleseg/python/README.md) |
+| PaddleOCR | [Ascend NPU C++ Example](../../../examples/vision/ocr/PP-OCRv3/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/ocr/PP-OCRv3/python/README.md) |
+| Yolov5 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov5/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov5/python/README.md) |
+| Yolov6 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov6/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov6/python/README.md) |
+| Yolov7 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov7/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov7/python/README.md) |
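All of the linked demos share one mechanism: the target device is selected on a `RuntimeOption` object that is then handed to the model constructor. The snippet below is a minimal illustration of that pattern for Ascend and is not part of this commit; the helper names (`use_ascend`, `use_kunlunxin`) and the PaddleClas model/image paths are assumptions that should be checked against the installed FastDeploy release and the linked example READMEs.

```python
# Illustrative sketch only: selecting the Huawei Ascend NPU through RuntimeOption.
# Assumes a FastDeploy build with Ascend support and the exported PaddleClas
# model referenced in the classification example (paths are placeholders).
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_ascend()          # target the Ascend NPU
# option.use_kunlunxin()     # the other devices follow the same pattern
# option.use_gpu() / option.use_cpu()

model = fd.vision.classification.PaddleClasModel(
    "ResNet50_vd_infer/inference.pdmodel",
    "ResNet50_vd_infer/inference.pdiparams",
    "ResNet50_vd_infer/inference_cls.yaml",
    runtime_option=option)

im = cv2.imread("ILSVRC2012_val_00000010.jpeg")
print(model.predict(im))     # classification result with top-k label ids and scores
```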

examples/vision/detection/paddledetection/cpp/README.md

Lines changed: 10 additions & 6 deletions
@@ -1,7 +1,7 @@
 English | [简体中文](README_CN.md)
 # PaddleDetection C++ Deployment Example

-This directory provides examples that `infer_xxx.cc` fast finishes the deployment of PaddleDetection models, including PPYOLOE/PicoDet/YOLOX/YOLOv3/PPYOLO/FasterRCNN/YOLOv5/YOLOv6/YOLOv7/RTMDet on CPU/GPU and GPU accelerated by TensorRT.
+This directory provides examples that `infer_xxx.cc` fast finishes the deployment of PaddleDetection models, including PPYOLOE/PicoDet/YOLOX/YOLOv3/PPYOLO/FasterRCNN/YOLOv5/YOLOv6/YOLOv7/RTMDet on CPU/GPU and GPU accelerated by TensorRT.

 Before deployment, two steps require confirmation

@@ -15,13 +15,13 @@ ppyoloe is taken as an example for inference deployment

 mkdir build
 cd build
-# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above
+# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above
 wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
 tar xvf fastdeploy-linux-x64-x.x.x.tgz
 cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
 make -j

-# Download the PPYOLOE model file and test images
+# Download the PPYOLOE model file and test images
 wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
 wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
 tar xvf ppyoloe_crn_l_300e_coco.tgz

@@ -33,12 +33,16 @@ tar xvf ppyoloe_crn_l_300e_coco.tgz
 ./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 1
 # TensorRT Inference on GPU
 ./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 2
+# Kunlunxin XPU Inference
+./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 3
+# Huawei Ascend Inference
+./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 4
 ```

 The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
 - [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

-## PaddleDetection C++ Interface
+## PaddleDetection C++ Interface

 ### Model Class

@@ -56,7 +60,7 @@

 **Parameter**

-> * **model_file**(str): Model file path
+> * **model_file**(str): Model file path
 > * **params_file**(str): Parameter file path
 > * **config_file**(str): Configuration file path, which is the deployment yaml file exported by PaddleDetection
 > * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration

@@ -73,7 +77,7 @@
 > **Parameter**
 >
 > > * **im**: Input images in HWC or BGR format
-> > * **result**: Detection result, including detection box and confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for DetectionResult
+> > * **result**: Detection result, including detection box and confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for DetectionResult

 - [Model Description](../../)
 - [Python Deployment](../python)

examples/vision/detection/paddledetection/python/README.md

Lines changed: 8 additions & 4 deletions
@@ -9,11 +9,11 @@ Before deployment, two steps require confirmation.
 This directory provides examples that `infer_xxx.py` fast finishes the deployment of PPYOLOE/PicoDet models on CPU/GPU and GPU accelerated by TensorRT. The script is as follows

 ```bash
-# Download deployment example code
+# Download deployment example code
 git clone https://github.com/PaddlePaddle/FastDeploy.git
 cd FastDeploy/examples/vision/detection/paddledetection/python/

-# Download the PPYOLOE model file and test images
+# Download the PPYOLOE model file and test images
 wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
 wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
 tar xvf ppyoloe_crn_l_300e_coco.tgz

@@ -24,14 +24,18 @@
 python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu
 # TensorRT inference on GPU (Attention: It is somewhat time-consuming for the operation of model serialization when running TensorRT inference for the first time. Please be patient.)
 python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu --use_trt True
+# Kunlunxin XPU Inference
+python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device kunlunxin
+# Huawei Ascend Inference
+python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device ascend
 ```

 The visualized result after running is as follows
 <div align="center">
   <img src="https://user-images.githubusercontent.com/19339784/184326520-7075e907-10ed-4fad-93f8-52d0e35d4964.jpg", width=480px, height=320px />
 </div>

-## PaddleDetection Python Interface
+## PaddleDetection Python Interface

 ```python
 fastdeploy.vision.detection.PPYOLOE(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)

@@ -52,7 +56,7 @@

 **Parameter**

-> * **model_file**(str): Model file path
+> * **model_file**(str): Model file path
 > * **params_file**(str): Parameter file path
 > * **config_file**(str): Inference configuration yaml file path
 > * **runtime_option**(RuntimeOption): Backend inference configuration. None by default. (use the default configuration)
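For context, the `--device ascend` flag in `infer_ppyoloe.py` ultimately builds a `RuntimeOption` and passes it to the PPYOLOE constructor documented above. A rough, self-contained sketch of that flow is shown below; it is not the shipped script, and the file names inside the tarball (`model.pdmodel`, `model.pdiparams`, `infer_cfg.yml`) as well as the 0.5 score threshold are assumptions taken from the example layout.

```python
# Rough sketch of what `infer_ppyoloe.py --device ascend` boils down to
# (illustrative only; the shipped script adds argument parsing and more options).
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_ascend()  # --device ascend; gpu/kunlunxin/cpu use the matching helpers

model_dir = "ppyoloe_crn_l_300e_coco"  # tarball downloaded in the steps above
model = fd.vision.detection.PPYOLOE(
    f"{model_dir}/model.pdmodel",
    f"{model_dir}/model.pdiparams",
    f"{model_dir}/infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("000000014439.jpg")
result = model.predict(im)

# Draw detection boxes above an (assumed) 0.5 confidence and save the image.
vis = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis)
```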

examples/vision/detection/yolov5/cpp/README.md

Lines changed: 7 additions & 5 deletions
@@ -12,12 +12,12 @@
 ```bash
 mkdir build
 cd build
-# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above
+# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above
 wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
 tar xvf fastdeploy-linux-x64-x.x.x.tgz
 cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
 make -j
-# Download the official converted yolov5 Paddle model files and test images
+# Download the official converted yolov5 Paddle model files and test images
 wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
 tar -xvf yolov5s_infer.tar
 wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

@@ -31,11 +31,13 @@
 ./infer_paddle_demo yolov5s_infer 000000014439.jpg 2
 # KunlunXin XPU inference
 ./infer_paddle_demo yolov5s_infer 000000014439.jpg 3
+# Huawei Ascend Inference
+./infer_paddle_demo yolov5s_infer 000000014439.jpg 4
 ```

 The above steps apply to the inference of Paddle models. If you want to conduct the inference of ONNX models, follow these steps:
 ```bash
-# 1. Download the official converted yolov5 ONNX model files and test images
+# 1. Download the official converted yolov5 ONNX model files and test images
 wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
 wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

@@ -53,7 +55,7 @@ The visualized result after running is as follows
 The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
 - [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

-## YOLOv5 C++ Interface
+## YOLOv5 C++ Interface

 ### YOLOv5 Class

@@ -69,7 +71,7 @@

 **Parameter**

-> * **model_file**(str): Model file path
+> * **model_file**(str): Model file path
 > * **params_file**(str): Parameter file path. Merely passing an empty string when the model is in ONNX format
 > * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
 > * **model_format**(ModelFormat): Model format. ONNX format by default

examples/vision/detection/yolov5/python/README.md

Lines changed: 5 additions & 3 deletions
@@ -22,17 +22,19 @@
 python infer.py --model yolov5s_infer --image 000000014439.jpg --device cpu
 # GPU inference
 python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu
-# TensorRT inference on GPU
+# TensorRT inference on GPU
 python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu --use_trt True
 # KunlunXin XPU inference
 python infer.py --model yolov5s_infer --image 000000014439.jpg --device kunlunxin
+# Huawei Ascend Inference
+python infer.py --model yolov5s_infer --image 000000014439.jpg --device ascend
 ```

 The visualized result after running is as follows

 <img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">

-## YOLOv5 Python Interface
+## YOLOv5 Python Interface

 ```python
 fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)

@@ -42,7 +44,7 @@

 **Parameter**

-> * **model_file**(str): Model file path
+> * **model_file**(str): Model file path
 > * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
 > * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
 > * **model_format**(ModelFormat): Model format. ONNX format by default
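The new `--device ascend` line relies on the script translating the CLI flag into a `RuntimeOption`. The following is a hypothetical, trimmed-down version of such a dispatcher; the function name `build_option`, the file names inside `yolov5s_infer`, and the exact option helpers are illustrative assumptions rather than the repository's `infer.py`.

```python
# Hypothetical mapping of the --device flag onto a FastDeploy RuntimeOption.
import argparse
import fastdeploy as fd

def build_option(device: str) -> fd.RuntimeOption:
    option = fd.RuntimeOption()
    if device == "gpu":
        option.use_gpu()
    elif device == "kunlunxin":
        option.use_kunlunxin()
    elif device == "ascend":
        option.use_ascend()
    # "cpu" keeps the default CPU backend
    return option

parser = argparse.ArgumentParser()
parser.add_argument("--model", required=True, help="path to the yolov5s_infer directory")
parser.add_argument("--image", required=True, help="path to the test image")
parser.add_argument("--device", default="cpu", choices=["cpu", "gpu", "kunlunxin", "ascend"])
args = parser.parse_args()

model = fd.vision.detection.YOLOv5(
    f"{args.model}/model.pdmodel",      # assumed file names for the exported Paddle model
    f"{args.model}/model.pdiparams",
    runtime_option=build_option(args.device),
    model_format=fd.ModelFormat.PADDLE)
```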

examples/vision/detection/yolov6/python/README.md

Lines changed: 6 additions & 3 deletions
@@ -23,6 +23,9 @@
 python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device gpu
 # KunlunXin XPU inference
 python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device kunlunxin
+# Huawei Ascend Inference
+python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device ascend
+
 ```
 If you want to verify the inference of ONNX models, refer to the following command:
 ```bash

@@ -34,15 +37,15 @@
 python infer.py --model yolov6s.onnx --image 000000014439.jpg --device cpu
 # GPU inference
 python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu
-# TensorRT inference on GPU
+# TensorRT inference on GPU
 python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu --use_trt True
 ```

 The visualized result after running is as follows

 <img width="640" src="https://user-images.githubusercontent.com/67993288/184301725-390e4abb-db2b-482d-931d-469381322626.jpg">

-## YOLOv6 Python Interface
+## YOLOv6 Python Interface

 ```python
 fastdeploy.vision.detection.YOLOv6(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)

@@ -52,7 +55,7 @@

 **Parameter**

-> * **model_file**(str): Model file path
+> * **model_file**(str): Model file path
 > * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
 > * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
 > * **model_format**(ModelFormat): Model format. ONNX format by default
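Since `YOLOv6` defaults to `ModelFormat.ONNX`, the ONNX route shown above needs nothing beyond the model path. A minimal sketch, assuming the `yolov6s.onnx` file and test image downloaded in the commands above:

```python
# Minimal sketch for the ONNX route: params_file and model_format keep their defaults.
import cv2
import fastdeploy as fd

model = fd.vision.detection.YOLOv6("yolov6s.onnx")

im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)  # detection result: boxes, scores, label ids
```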

examples/vision/detection/yolov7/cpp/README.md

Lines changed: 8 additions & 6 deletions
@@ -1,7 +1,7 @@
 English | [简体中文](README_CN.md)
 # YOLOv7 C++ Deployment Example

-This directory provides examples that `infer.cc` fast finishes the deployment of YOLOv7 on CPU/GPU and GPU accelerated by TensorRT.
+This directory provides examples that `infer.cc` fast finishes the deployment of YOLOv7 on CPU/GPU and GPU accelerated by TensorRT.

 Before deployment, two steps require confirmation

@@ -13,7 +13,7 @@
 ```bash
 mkdir build
 cd build
-# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above
+# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above
 wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
 tar xvf fastdeploy-linux-x64-x.x.x.tgz
 cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x

@@ -29,10 +29,12 @@
 ./infer_paddle_model_demo yolov7_infer 000000014439.jpg 1
 # KunlunXin XPU inference
 ./infer_paddle_model_demo yolov7_infer 000000014439.jpg 2
+# Huawei Ascend inference
+./infer_paddle_model_demo yolov7_infer 000000014439.jpg 3
 ```
 If you want to verify the inference of ONNX models, refer to the following command:
 ```bash
-# Download the official converted yolov7 ONNX model files and test images
+# Download the official converted yolov7 ONNX model files and test images
 wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov7.onnx
 wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

@@ -52,7 +54,7 @@ The visualized result after running is as follows
 The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
 - [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

-## YOLOv7 C++ Interface
+## YOLOv7 C++ Interface

 ### YOLOv7 Class

@@ -68,7 +70,7 @@

 **Parameter**

-> * **model_file**(str): Model file path
+> * **model_file**(str): Model file path
 > * **params_file**(str): Parameter file path. Merely passing an empty string when the model is in ONNX format
 > * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
 > * **model_format**(ModelFormat): Model format. ONNX format by default

@@ -86,7 +88,7 @@
 > **Parameter**
 >
 > > * **im**: Input images in HWC or BGR format
-> > * **result**: Detection results, including detection box and confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for DetectionResult
+> > * **result**: Detection results, including detection box and confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for DetectionResult
 > > * **conf_threshold**: Filtering threshold of detection box confidence
 > > * **nms_iou_threshold**: iou threshold during NMS processing
