Commit e2de3f3

[Doc] Add English version of some documents (#1221)

* Update README_CN.md
* Create README.md
* Update README.md
* Create README_CN.md
* Update README.md
* Update README_CN.md
* Update README_CN.md
* Create README.md
* Update README.md
* Update README_CN.md
* Create README.md
* Update README.md
* Update README_CN.md
* Rename examples/vision/faceid/insightface/rknpu2/cpp/README.md to examples/vision/faceid/insightface/rknpu2/README_EN.md
* Rename README_CN.md to README.md
* Rename README.md to README_EN.md
* Rename README.md to README_CN.md
* Rename README_EN.md to README.md
* Create build.md
* Create environment.md
* Create issues.md
* Update build.md
* Update environment.md
* Update issues.md
* Update build.md
* Update environment.md
* Update issues.md

1 parent cfc7af2

File tree

14 files changed: +689 −57 lines changed

docs/cn/faq/rknpu2/build.md (+1)

@@ -1,3 +1,4 @@
+[English](../../../en/faq/rknpu2/build.md) | 中文
 # FastDeploy RKNPU2引擎编译

 ## FastDeploy后端支持详情

docs/cn/faq/rknpu2/environment.md (+1)

@@ -1,3 +1,4 @@
+[English](../../../en/faq/rknpu2/environment.md) | 中文
 # FastDeploy RKNPU2推理环境搭建

 ## 简介

docs/cn/faq/rknpu2/issues.md (+1)

@@ -1,3 +1,4 @@
+[English](../../../en/faq/rknpu2/issues.md) | 中文
 # RKNPU2常见问题合集

 在使用FastDeploy的过程中大家可能会碰到很多的问题,这个文档用来记录已经解决的共性问题,方便大家查阅。

docs/en/faq/rknpu2/build.md (new file, +78)

English | [中文](../../../cn/faq/rknpu2/build.md)
# FastDeploy RKNPU2 Engine Compilation

## Backends supported by FastDeploy

FastDeploy currently supports the following backends on the RK platform:

| Backend | Platform | Supported model formats | Notes |
|:------------------|:---------------------|:-------|:-------------------------------------------|
| ONNX&nbsp;Runtime | RK356X <br> RK3588 | ONNX | Enabled by the compile switch `ENABLE_ORT_BACKEND` (ON/OFF). Default: OFF |
| RKNPU2 | RK356X <br> RK3588 | RKNN | Enabled by the compile switch `ENABLE_RKNPU2_BACKEND` (ON/OFF). Default: OFF |

## Compile the FastDeploy SDK

### Compile the FastDeploy C++ SDK on the board

Currently, RKNPU2 is only available on Linux. The following tutorial was completed on an RK3568 (Debian 10) and an RK3588 (Debian 11).

```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy

# If you are using the develop branch, type the following command
git checkout develop

mkdir build && cd build
cmake .. -DENABLE_ORT_BACKEND=ON \
         -DENABLE_RKNPU2_BACKEND=ON \
         -DENABLE_VISION=ON \
         -DRKNN2_TARGET_SOC=RK3588 \
         -DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy-0.0.0
make -j8
make install
```
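The configure step is just a set of cmake cache variables. For readers scripting builds for both SoCs, the command line above can be assembled programmatically; the helper below is a hypothetical sketch, not part of FastDeploy:

```python
# Illustrative helper (hypothetical, not shipped with FastDeploy): assemble
# the cmake configure line for a board-side build from the switches above.
def cmake_configure_cmd(enable_ort=True, enable_rknpu2=True, enable_vision=True,
                        soc="RK3588", prefix="${PWD}/fastdeploy-0.0.0"):
    onoff = lambda b: "ON" if b else "OFF"
    flags = {
        "ENABLE_ORT_BACKEND": onoff(enable_ort),
        "ENABLE_RKNPU2_BACKEND": onoff(enable_rknpu2),
        "ENABLE_VISION": onoff(enable_vision),
        "RKNN2_TARGET_SOC": soc,            # "RK3588" or "RK356X"
        "CMAKE_INSTALL_PREFIX": prefix,
    }
    # Join the -D definitions into one multi-line shell command
    return "cmake .. " + " \\\n    ".join(f"-D{k}={v}" for k, v in flags.items())

print(cmake_configure_cmd(soc="RK356X"))
```

Swapping `soc` is the only change needed to retarget the same build for RK356X boards.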
### Cross-compile the FastDeploy C++ SDK

```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy

# If you are using the develop branch, type the following command
git checkout develop

mkdir build && cd build
cmake .. -DCMAKE_C_COMPILER=/home/zbc/opt/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc \
         -DCMAKE_CXX_COMPILER=/home/zbc/opt/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-g++ \
         -DCMAKE_TOOLCHAIN_FILE=./../cmake/toolchain.cmake \
         -DTARGET_ABI=arm64 \
         -DENABLE_ORT_BACKEND=OFF \
         -DENABLE_RKNPU2_BACKEND=ON \
         -DENABLE_VISION=ON \
         -DRKNN2_TARGET_SOC=RK3588 \
         -DENABLE_FLYCV=ON \
         -DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy-0.0.0
make -j8
make install
```

### Compile the Python SDK on the board

Currently, RKNPU2 is only available on Linux. The following tutorial was completed on an RK3568 (Debian 10) and an RK3588 (Debian 11). Packaging the Python SDK depends on `wheel`, so run `pip install wheel` before compiling.

```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy

# If you are using the develop branch, type the following command
git checkout develop

cd python
export ENABLE_ORT_BACKEND=ON
export ENABLE_RKNPU2_BACKEND=ON
export ENABLE_VISION=ON
export RKNN2_TARGET_SOC=RK3588
python3 setup.py build
python3 setup.py bdist_wheel
cd dist
pip3 install fastdeploy_python-0.0.0-cp39-cp39-linux_aarch64.whl
```

docs/en/faq/rknpu2/environment.md (new file, +92)

English | [中文](../../../cn/faq/rknpu2/environment.md)
# FastDeploy RKNPU2 Inference Environment Setup

## Introduction

We need to set up the development environment before deploying models with FastDeploy. The setup is divided into two parts: the board-side inference environment and the PC-side model conversion environment.

## Board-side inference environment setup

Based on developer feedback, we provide two ways to set up the inference environment on the board: a one-click installation script, and installing the development board driver from the command line.

### Install via script

Most developers prefer not to run a long series of commands, so FastDeploy provides a one-click way to install a stable RKNN environment. Refer to the following commands to set up the board-side environment:

```bash
# Download and unzip rknpu2_device_install_1.4.0
wget https://bj.bcebos.com/fastdeploy/third_libs/rknpu2_device_install_1.4.0.zip
unzip rknpu2_device_install_1.4.0.zip

cd rknpu2_device_install_1.4.0
# On RK3588, run the following
sudo bash rknn_install_rk3588.sh
# On RK356X, run the following
sudo bash rknn_install_rk356X.sh
```

### Install via the command line

For developers who want to try the latest RK drivers, we provide a way to install them from scratch on the command line:

```bash
# Install the required packages
sudo apt update -y
sudo apt install -y python3 python3-dev python3-pip gcc cmake \
                    python3-opencv python3-numpy

# Download rknpu2
git clone https://gitee.com/mirrors_rockchip-linux/rknpu2.git

# On RK3588, run the following
sudo cp ./rknpu2/runtime/RK3588/Linux/librknn_api/aarch64/* /usr/lib
sudo cp ./rknpu2/runtime/RK3588/Linux/rknn_server/aarch64/usr/bin/* /usr/bin/

# On RK356X, run the following
sudo cp ./rknpu2/runtime/RK356X/Linux/librknn_api/aarch64/* /usr/lib
sudo cp ./rknpu2/runtime/RK356X/Linux/rknn_server/aarch64/usr/bin/* /usr/bin/
```
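Which `runtime` directory to copy from depends on the SoC. As a minimal sketch (a hypothetical helper, assuming the usual Rockchip model strings reported in `/proc/device-tree/model`), the choice can be automated:

```python
# Hypothetical helper (not part of the RKNPU2 project): pick the driver
# directory for the `cp` commands above from the board's device-tree model.
def rknpu2_runtime_dir(model_string):
    # /proc/device-tree/model reports strings like "Rockchip RK3588 EVB";
    # RK3566 and RK3568 boards both use the RK356X runtime.
    s = model_string.upper()
    if "RK3588" in s:
        return "RK3588"
    if "RK3566" in s or "RK3568" in s or "RK356X" in s:
        return "RK356X"
    raise ValueError(f"unsupported board: {model_string!r}")

# On a real board: rknpu2_runtime_dir(open("/proc/device-tree/model").read())
print(rknpu2_runtime_dir("Rockchip RK3566 EVB2 Demo V10 Board"))  # -> RK356X
```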
## Install rknn_toolkit2

rknn_toolkit2 has some tricky dependencies, so this section walks through the installation. Because it depends on a few specific package versions, it is recommended to create a virtual environment with conda. Installing conda itself is out of scope here; we focus on installing rknn_toolkit2.

### Download rknn_toolkit2

rknn_toolkit2 can usually be downloaded from git:
```bash
git clone https://github.com/rockchip-linux/rknn-toolkit2.git
```

### Download and install the required packages
```bash
sudo apt-get install libxslt1-dev zlib1g zlib1g-dev libglib2.0-0 \
libsm6 libgl1-mesa-glx libprotobuf-dev gcc g++
```

### Install the rknn_toolkit2 environment
```bash
# Create a virtual environment
conda create -n rknn2 python=3.6
conda activate rknn2

# Install numpy==1.16.6 first because rknn_toolkit2 pins a specific numpy version
pip install numpy==1.16.6

# Install the rknn_toolkit2 wheel. NOTE: the cpXY tag of the wheel must match
# the environment's Python version -- the cp38 wheel below needs a python=3.8
# environment; for the python=3.6 environment created above, use the cp36
# wheel shipped in the same packages directory instead.
cd ~/Download/rknn-toolkit2-master/packages
pip install rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl
```
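pip refuses to install a wheel whose CPython tag does not match the running interpreter, which is why the environment's Python version must match the wheel you pick. A minimal sketch of reading that tag from the filename (illustrative only, relying on the standard `{dist}-{version}-{python}-{abi}-{platform}.whl` naming convention):

```python
import sys

# Sketch (stdlib only): extract a wheel's CPython tag and compare it with
# the current interpreter. Hypothetical helper, not part of rknn_toolkit2.
def wheel_python_tag(wheel_name):
    # Wheel naming: {dist}-{version}-{python tag}-{abi tag}-{platform}.whl
    return wheel_name[:-len(".whl")].split("-")[-3]

def matches_interpreter(wheel_name):
    here = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return wheel_python_tag(wheel_name) == here

name = "rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl"
print(wheel_python_tag(name))  # -> cp38
```

Running this before `pip install` makes the "is this wheel for my Python?" check explicit instead of discovering it via a pip error.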
## Resource links

* [RKNPU2 and rknn-toolkit2 downloads for the development board (password: rknn)](https://eyun.baidu.com/s/3eTDMk6Y)

## Other documents

- [RKNN model conversion document](./export.md)

docs/en/faq/rknpu2/issues.md (new file, +47)

English | [中文](../../../cn/faq/rknpu2/issues.md)
# RKNPU2 FAQs

This document collects common problems encountered when using FastDeploy.

## Navigation

- [Dynamic link library loading issues](#dynamic-link-library-loading-issues)

## Dynamic link library loading issues

### Related issue

- [Issue 870](https://github.com/PaddlePaddle/FastDeploy/issues/870)

### Problem description

Compilation succeeds, but the following error is reported when running the program:
```text
error while loading shared libraries: libfastdeploy.so.0.0.0: cannot open shared object file: No such file or directory
```

### Analysis

The dynamic loader cannot find the library file. By default it searches /lib and /usr/lib; a library installed in any other directory works too, but you must tell the loader where it is.

### Solutions

**Temporary solution**

This solution does not touch the system configuration, but it only takes effect in the current terminal session and is lost when the terminal is closed.

```bash
source PathToFastDeploySDK/fastdeploy_init.sh
```

**Permanent solution**

The temporary solution is inconvenient because the command must be re-run in every new terminal. To make the fix permanent, execute the following:
```bash
source PathToFastDeploySDK/fastdeploy_init.sh
sudo cp PathToFastDeploySDK/fastdeploy_libs.conf /etc/ld.so.conf.d/
sudo ldconfig
```
This writes the library directory into the system loader configuration; after `ldconfig` refreshes the cache, the system can find the library.
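To see why both fixes work, it helps to enumerate the places the loader actually looks. The diagnostic below is a hypothetical, stdlib-only sketch that mirrors the loader's main sources (default directories, `LD_LIBRARY_PATH`, and the `/etc/ld.so.conf.d` entries the permanent fix adds); the real loader also consults its cache and a few other paths:

```python
import glob
import os

# Diagnostic sketch (hypothetical helper): list candidate locations for a
# shared library, approximating where the dynamic loader would search.
def find_shared_lib(name, conf_dir="/etc/ld.so.conf.d"):
    dirs = ["/lib", "/usr/lib"]                        # loader defaults
    dirs += os.environ.get("LD_LIBRARY_PATH", "").split(":")
    # Directories registered via ld.so.conf.d -- what the permanent fix
    # (`sudo cp ... /etc/ld.so.conf.d/ && sudo ldconfig`) contributes.
    for conf in glob.glob(os.path.join(conf_dir, "*.conf")):
        try:
            with open(conf) as f:
                dirs += [ln.strip() for ln in f
                         if ln.strip() and not ln.startswith("#")]
        except OSError:
            continue
    hits = []
    for d in dirs:
        if d:
            hits += glob.glob(os.path.join(d, name + "*"))
    return hits

print(find_shared_lib("libfastdeploy.so"))
```

An empty result for `libfastdeploy.so` means none of these locations knows about the SDK yet, which is exactly the situation the two solutions above fix.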
examples/vision/detection/yolov8/cpp/README.md (new file, +90)

English | [简体中文](README_CN.md)
# YOLOv8 C++ Deployment Example

This directory provides `infer.cc`, an example that quickly finishes deploying YOLOv8 on CPU/GPU, and on GPU with TensorRT acceleration.

Two steps before deployment:

1. The software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
2. Download the precompiled deployment library and sample code based on your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, FastDeploy version 1.0.3 or above (x.x.x >= 1.0.3) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# 1. Download the officially converted YOLOv8 ONNX model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
./infer_demo yolov8s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov8s.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo yolov8s.onnx 000000014439.jpg 2
```
The visualized result is as follows:

<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">

The above commands work on Linux or macOS. For the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on Huawei Ascend NPU, refer to the following document to initialize the deployment environment:
- [How to deploy on Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

## YOLOv8 C++ Interface

### YOLOv8

```c++
fastdeploy::vision::detection::YOLOv8(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const ModelFormat& model_format = ModelFormat::ONNX)
```

YOLOv8 model loading and initialization, where model_file is the exported ONNX model file.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. Defaults to the default backend configuration
> * **model_format**(ModelFormat): Model format. ONNX format by default

#### Predict function

> ```c++
> YOLOv8::Predict(cv::Mat* im, DetectionResult* result)
> ```
>
> Model prediction interface: takes an input image and produces detection results.
>
> **Parameters**
>
> > * **im**: Input image in HWC layout with BGR channels
> > * **result**: Detection results, including the detection box and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for DetectionResult

### Class Member Variables
#### Pre-processing Parameters
Users can modify the following preprocessing parameters based on actual needs to change the final inference and deployment results:

> > * **size**(vector&lt;int&gt;): The target size used in resize during preprocessing, containing two integer elements for [width, height]. Default value [640, 640]
> > * **padding_value**(vector&lt;float&gt;): The padding value applied to images during resize, containing three floating-point elements, one per channel. Default value [114, 114, 114]
> > * **is_no_pad**(bool): Whether to resize the image without padding. `is_no_pad=true` means no padding. Default `is_no_pad=false`
> > * **is_mini_pad**(bool): When enabled, the width and height after resize are set to the values nearest to the `size` member variable while keeping the padded pixel size divisible by the `stride` member variable. Default `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default `stride=32`
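As a concrete illustration of `is_mini_pad` (a sketch based on the parameter descriptions above, not FastDeploy's actual preprocessing code): the image is resized preserving aspect ratio, then each dimension is padded only up to the nearest multiple of `stride` rather than all the way to `size`:

```python
import math

# Hypothetical helper: compute the output shape of a mini-pad letterbox
# resize as described by the `size`, `stride`, and `is_mini_pad` parameters.
def mini_pad_shape(img_w, img_h, size=(640, 640), stride=32):
    r = min(size[0] / img_w, size[1] / img_h)       # aspect-preserving ratio
    new_w, new_h = round(img_w * r), round(img_h * r)
    # Pad each dimension up to the next multiple of `stride`, not to `size`
    pad_w = math.ceil(new_w / stride) * stride - new_w
    pad_h = math.ceil(new_h / stride) * stride - new_h
    return new_w + pad_w, new_h + pad_h

print(mini_pad_shape(1280, 720))  # -> (640, 384)
```

For a 1280x720 input, the full letterbox would pad to 640x640, while mini-pad stops at 640x384, so the network processes fewer padded pixels.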
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)

examples/vision/detection/yolov8/cpp/README_CN.md (+1 −1)

@@ -81,7 +81,7 @@ YOLOv8模型加载和初始化,其中model_file为导出的ONNX模型格式。
 > > * **size**(vector&lt;int&gt;): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[640, 640]
 > > * **padding_value**(vector&lt;float&gt;): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[114, 114, 114]
 > > * **is_no_pad**(bool): 通过此参数让图片是否通过填充的方式进行resize, `is_no_pad=ture` 表示不使用填充的方式,默认值为`is_no_pad=false`
-> > * **is_mini_pad**(bool): 通过此参数可以将resize之后图像的宽高这是为最接近`size`成员变量的值, 并且满足填充的像素大小是可以被`stride`成员变量整除的。默认值为`is_mini_pad=false`
+> > * **is_mini_pad**(bool): 通过此参数可以将resize之后图像的宽高设置为最接近`size`成员变量的值, 并且满足填充的像素大小是可以被`stride`成员变量整除的。默认值为`is_mini_pad=false`
 > > * **stride**(int): 配合`stris_mini_pad`成员变量使用, 默认值为`stride=32`

 - [模型介绍](../../)
