| Task | Description | Result Structure |
|:---- |:----------- |:---------------- |
| Detection | Object detection. Input an image, detect the positions of objects in it, and return the detected box coordinates, categories, and confidence scores |[DetectionResult](../../docs/api/vision_results/detection_result.md)|
| Segmentation | Semantic segmentation. Input an image and output the category and confidence score of each pixel |[SegmentationResult](../../docs/api/vision_results/segmentation_result.md)|
| Classification | Image classification. Input an image and output its classification result and confidence score |[ClassifyResult](../../docs/api/vision_results/classification_result.md)|
| FaceDetection | Face detection. Input an image, detect the positions of faces in it, and return the detected box coordinates and facial keypoints |[FaceDetectionResult](../../docs/api/vision_results/face_detection_result.md)|
| FaceAlignment | Face alignment (keypoint detection). Input an image and return facial keypoints |[FaceAlignmentResult](../../docs/api/vision_results/face_alignment_result.md)|
| KeypointDetection | Keypoint detection. Input an image and return the coordinates and confidence scores of human body keypoints |[KeyPointDetectionResult](../../docs/api/vision_results/keypointdetection_result.md)|
| FaceRecognition | Face recognition. Input an image and return an embedding of facial features that can be used for similarity comparison |[FaceRecognitionResult](../../docs/api/vision_results/face_recognition_result.md)|
| Matting | Matting. Input an image and return the alpha value of each pixel in the foreground |[MattingResult](../../docs/api/vision_results/matting_result.md)|
| OCR | Text box detection, classification, and text recognition. Input an image and return the text boxes' coordinates, orientation categories, and contents |[OCRResult](../../docs/api/vision_results/ocr_result.md)|
| MOT | Multi-object tracking. Input an image, detect the positions of objects in it, and return the detected box coordinates, object IDs, and class confidence scores |[MOTResult](../../docs/api/vision_results/mot_result.md)|
| HeadPose | Head pose estimation. Input an image and return the head's Euler angles |[HeadPoseResult](../../docs/api/vision_results/headpose_result.md)|
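Each `predict()` call returns one of the result structures linked above. As an illustration of how such a structure is consumed, the sketch below assumes a `DetectionResult` whose documented members are the parallel lists `boxes`, `scores`, and `label_ids`; the helper name `print_detection` is hypothetical:

```python
def print_detection(result, threshold=0.5):
    """Print each box in a FastDeploy DetectionResult above a score threshold.

    Assumes `result` comes from a detection model's predict() call;
    each box is [xmin, ymin, xmax, ymax] in pixel coordinates.
    """
    for box, label, score in zip(result.boxes, result.label_ids, result.scores):
        if score < threshold:
            continue
        x1, y1, x2, y2 = box
        print(f"class {label}: ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}) "
              f"score={score:.2f}")
```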
## FastDeploy API Design
Vision models generally share a common task paradigm. When designing its APIs (both C++ and Python), FastDeploy structures the deployment of a vision model as four steps: model loading, image preprocessing, model inference, and post-processing of the inference results.
Targeting the PaddlePaddle vision suites as well as popular external models, FastDeploy provides end-to-end deployment: users only need to prepare the model and follow these steps to complete the deployment, as in the sketch below.
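A minimal Python sketch of this flow, assuming a PP-YOLOE detection model exported as `model.pdmodel`/`model.pdiparams` with an `infer_cfg.yml` config (all file names here are placeholders); `predict()` wraps the preprocessing, inference, and post-processing steps internally:

```python
import cv2
import fastdeploy as fd

# Step 1: model loading (placeholder paths for an exported PP-YOLOE model)
model = fd.vision.detection.PPYOLOE(
    "model.pdmodel", "model.pdiparams", "infer_cfg.yml")

im = cv2.imread("test.jpg")

# Steps 2-4: predict() runs preprocessing, inference, and post-processing
result = model.predict(im)
print(result)

# Visualize the DetectionResult and save it to disk
vis_im = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized.jpg", vis_im)
```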
When deploying vision models, FastDeploy supports switching the backend inference engine with one click. Refer to [How to switch model inference engine](../../docs/cn/faq/how_to_change_backend.md).
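For instance, the backend is selected through a `RuntimeOption` passed at model construction time; a sketch, where the GPU/TensorRT combination is just one possible choice and the model file names are placeholders:

```python
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu(0)          # run inference on GPU device 0
option.use_trt_backend()   # switch the inference backend to TensorRT

# Pass the option when constructing the model
model = fd.vision.detection.PPYOLOE(
    "model.pdmodel", "model.pdiparams", "infer_cfg.yml",
    runtime_option=option)
```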
For PaddleClas model export, refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
Attention: The model exported by PaddleClas consists of two files, `inference.pdmodel` and `inference.pdiparams`. In addition, deployment requires the generic [inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/deploy/configs/inference_cls.yaml) file provided by PaddleClas, from which FastDeploy obtains the preprocessing information needed during inference. Developers can download this file directly, but should adjust its configuration parameters to their own needs, following the infer section of the corresponding PaddleClas training [config.](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4/ppcls/configs/ImageNet)
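Loading such a model in Python would look roughly like this: the two exported files and the yaml config are passed together (paths are placeholders):

```python
import cv2
import fastdeploy as fd

# The yaml file supplies the preprocessing configuration FastDeploy needs
model = fd.vision.classification.PaddleClasModel(
    "inference.pdmodel", "inference.pdiparams", "inference_cls.yaml")

result = model.predict(cv2.imread("test.jpg"), topk=1)
print(result)  # ClassifyResult: label_ids and scores
```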
For developer testing, some models exported by PaddleClas (including the inference_cls.yaml file) are provided below and can be downloaded directly.