English | [简体中文](README_CN.md)
# YOLOv8 C++ Deployment Example

This directory provides an example in which `infer.cc` quickly finishes the deployment of YOLOv8 on CPU/GPU, as well as on GPU accelerated by TensorRT.
| 5 | + |
| 6 | +Two steps before deployment |
| 7 | + |
| 8 | +- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md) |
| 9 | +- 2. Download the precompiled deployment library and samples code based on your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md) |
| 10 | + |
| 11 | +Taking the CPU inference on Linux as an example, FastDeploy version 1.0.3 or above (x.x.x>=1.0.3) is required to support this model. |
| 12 | + |
| 13 | +```bash |
| 14 | +mkdir build |
| 15 | +cd build |
| 16 | +# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above |
| 17 | +wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz |
| 18 | +tar xvf fastdeploy-linux-x64-x.x.x.tgz |
| 19 | +cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x |
| 20 | +make -j |
| 21 | + |
| 22 | +# 1. Download the official converted YOLOv8 ONNX model files and test images |
| 23 | +wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx |
| 24 | +wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg |
| 25 | + |
| 26 | +# CPU inference |
| 27 | +./infer_demo yolov8s.onnx 000000014439.jpg 0 |
| 28 | +# GPU inference |
| 29 | +./infer_demo yolov8s.onnx 000000014439.jpg 1 |
| 30 | +# TensorRT inference on GPU |
| 31 | +./infer_demo yolov8s.onnx 000000014439.jpg 2 |
| 32 | +``` |
The visualized result is as follows:

<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">

The above commands work on Linux or MacOS. For the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on Huawei Ascend NPU, refer to the following document to initialize the deployment environment:
- [How to use Huawei Ascend NPU deployment](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

## YOLOv8 C++ Interface

### YOLOv8

```c++
fastdeploy::vision::detection::YOLOv8(
    const string& model_file,
    const string& params_file = "",
    const RuntimeOption& runtime_option = RuntimeOption(),
    const ModelFormat& model_format = ModelFormat::ONNX)
```

YOLOv8 model loading and initialization, where model_file is the exported ONNX model file.

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. If left as the default, the default backend configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default
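
The `runtime_option` parameter is what selects among the CPU/GPU/TensorRT modes used by the demo above. A minimal sketch, assuming the FastDeploy C++ `RuntimeOption` API (the integer flag mirrors the demo's third command-line argument; the input tensor name `images` is an assumption for the official YOLOv8 export):

```c++
#include "fastdeploy/vision.h"

// Sketch, not infer.cc itself: build a RuntimeOption from a backend flag
// (0 = CPU, 1 = GPU, 2 = GPU with TensorRT), as in the demo commands above.
fastdeploy::RuntimeOption BuildOption(int flag) {
  fastdeploy::RuntimeOption option;
  if (flag >= 1) {
    option.UseGpu();  // run inference on GPU
  }
  if (flag == 2) {
    option.UseTrtBackend();  // accelerate with TensorRT
  }
  return option;
}

// Usage sketch:
// auto model = fastdeploy::vision::detection::YOLOv8(
//     "yolov8s.onnx", "", BuildOption(2));
```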

#### Predict function

> ```c++
> YOLOv8::Predict(cv::Mat* im, DetectionResult* result)
> ```
>
> Model prediction interface. Takes an input image and outputs the detection result.
>
> **Parameter**
>
> > * **im**: Input image in HWC format with BGR channel order
> > * **result**: The detection result, including detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of DetectionResult
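
Putting the constructor and `Predict` together, a minimal end-to-end sketch (a hedged illustration rather than the contents of `infer.cc`; it assumes the FastDeploy vision header, OpenCV, and the model/image file names downloaded above):

```c++
#include <iostream>

#include "fastdeploy/vision.h"

int main() {
  // Load the ONNX model with the default (CPU) runtime option.
  auto model = fastdeploy::vision::detection::YOLOv8("yolov8s.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize model." << std::endl;
    return -1;
  }

  cv::Mat im = cv::imread("000000014439.jpg");
  fastdeploy::vision::DetectionResult res;
  // Predict takes the image and fills the result, per the signature above.
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }

  std::cout << res.Str() << std::endl;  // print boxes, scores, and labels
  cv::Mat vis = fastdeploy::vision::VisDetection(im, res);
  cv::imwrite("vis_result.jpg", vis);  // save the visualized result
  return 0;
}
```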

### Class Member Variables
#### Pre-processing Parameters
Users can modify the following preprocessing parameters based on actual needs to change the final inference and deployment results.

> > * **size**(vector&lt;int&gt;): This parameter changes the target size used during preprocessing, containing two integer elements for [width, height]. Default value [640, 640]
> > * **padding_value**(vector&lt;float&gt;): This parameter changes the padding value of images during resize, containing three floating-point elements that represent the values of the three channels. Default value [114, 114, 114]
> > * **is_no_pad**(bool): Specifies whether to resize the image without padding. `is_no_pad=true` means no padding. Default `is_no_pad=false`
> > * **is_mini_pad**(bool): This parameter sets the width and height of the image after resize to the values nearest to `size` at which the padded pixel extent is divisible by the `stride` member variable. Default `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default `stride=32`
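
To make the interaction between `size`, `is_mini_pad`, and `stride` concrete, here is a self-contained sketch of the letterbox arithmetic these parameters imply (an illustration of the idea, not FastDeploy's actual preprocessing code):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>

// Compute the padded (width, height) under is_mini_pad=true: the image is
// scaled to fit inside the target size while keeping its aspect ratio, then
// each extent is rounded up only to the nearest multiple of `stride` instead
// of all the way to the full target size.
std::pair<int, int> MiniPadShape(int img_w, int img_h, int target_w = 640,
                                 int target_h = 640, int stride = 32) {
  float r = std::min(static_cast<float>(target_w) / img_w,
                     static_cast<float>(target_h) / img_h);
  int new_w = static_cast<int>(img_w * r);  // scaled extent
  int new_h = static_cast<int>(img_h * r);
  // Round each extent up to the next multiple of `stride`.
  int pad_w = (new_w + stride - 1) / stride * stride;
  int pad_h = (new_h + stride - 1) / stride * stride;
  return {pad_w, pad_h};
}
```

For a 1920x1080 image with the defaults, the scaled extent is 640x360, and mini-padding rounds the height up only to 384 (the nearest multiple of 32) rather than to the full 640.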

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)