[Doc]Update English version of some documents (#1083)
* First commit
* Add one missing translation
* deleted: docs/en/quantize.md
* Update one translation
* Update en version
* Update one translation in code
* Standardize one writing
* Standardize one writing
* Update some en version
* Fix a grammar problem
* Update en version for api/vision result
* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop
* Checkout the link in README in vision_results/ to the en documents
* Modify a title
* Add link to serving/docs/
* Finish translation of demo.md
* Update english version of serving/docs/
* Update title of readme
* Update some links
* Modify a title
* Update some links
* Update en version of java android README
* Modify some titles
* Modify some titles
* Modify some titles
* modify article to document
* update some english version of documents in examples
* Add english version of documents in examples/visions
* Sync to current branch
* Add english version of documents in examples
* Add english version of documents in examples
* Add english version of documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
examples/text/ernie-3.0/cpp/README.md (+2 -2)

@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
 Before deployment, two steps require confirmation.
-- 1. Environment of software and hardware should meet the requirements. Please refer to[FastDeploy Environment Requirements](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Based on the develop environment, download the precompiled deployment library and samples code. Please refer to [FastDeploy Precompiled Library](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Environment of software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Based on the develop environment, download the precompiled deployment library and samples code. Please refer to [FastDeploy Precompiled Library](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
 This directory provides deployment examples that seq_cls_inferve.py fast finish text classification tasks on CPU/GPU.
examples/text/ernie-3.0/python/README.md (+2 -2)

@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
 Before deployment, two steps require confirmation.
-- 1. Environment of software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl package should be installed. Please refer to [FastDeploy Python Installation](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Environment of software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. FastDeploy Python whl package should be installed. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
 This directory provides deployment examples that seq_cls_inferve.py fast finish text classification tasks on CPU/GPU.
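For orientation, the example that context line refers to drives an exported ERNIE 3.0 Paddle model through FastDeploy's generic `Runtime`. The snippet below is only a hedged sketch of that flow: the model paths, sequence length, and input tensor names (`input_ids`, `token_type_ids`) are assumptions, and the real script also tokenizes the raw text before inference.

```python
import numpy as np
import fastdeploy as fd

# Hedged sketch: paths and tensor names are assumptions, not taken from this diff.
option = fd.RuntimeOption()
option.set_model_path("ernie-3.0-medium-zh-afqmc/infer.pdmodel",
                      "ernie-3.0-medium-zh-afqmc/infer.pdiparams")
option.use_cpu()  # or option.use_gpu(0)

runtime = fd.Runtime(option)

# Placeholder token ids; a real run would produce these with a tokenizer.
input_ids = np.zeros([1, 128], dtype=np.int64)
token_type_ids = np.zeros([1, 128], dtype=np.int64)
outputs = runtime.infer({"input_ids": input_ids, "token_type_ids": token_type_ids})
print(outputs[0].shape)  # classification logits
```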
examples/text/ernie-3.0/serving/README.md (+2 -2)

@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)
 Before serving deployment, you need to confirm
-- 1. Refer to [FastDeploy Serving Deployment](../../../../../serving/README_CN.md) for hardware and software environment requirements and image pull commands of serving images.
+- 1. Refer to [FastDeploy Serving Deployment](../../../../serving/README.md) for hardware and software environment requirements and image pull commands of serving images.

(second hunk, around line 177)
-The current classification task (ernie_seqcls_model/config.pbtxt) is by default configured to run the OpenVINO engine on CPU; the sequence labelling task is by default configured to run the Paddle engine on GPU. If you want to run on CPU/GPU or other inference engines, you should modify the configuration. please refer to the [configuration document.](../../../../serving/docs/zh_CN/model_configuration.md)
+The current classification task (ernie_seqcls_model/config.pbtxt) is by default configured to run the OpenVINO engine on CPU; the sequence labelling task is by default configured to run the Paddle engine on GPU. If you want to run on CPU/GPU or other inference engines, you should modify the configuration. please refer to the [configuration document.](../../../../serving/docs/EN/model_configuration-en.md)
examples/text/uie/README.md (+1 -1)

@@ -19,7 +19,7 @@ English | [简体中文](README_CN.md)
 ## Export Deployment Models
-Before deployment, you need to export the UIE model into the deployment model. Please refer to [Export Model](https://github.com/PaddlePaddle/PaddleNLP/tree/release/2.4/model_zoo/uie#47-%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)
+Before deployment, you need to export the UIE model into the deployment model. Please refer to [Export Model](https://github.com/PaddlePaddle/PaddleNLP/tree/release/2.4/model_zoo/uie#47-%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).
examples/text/uie/python/README.md (+3 -3)

@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
 Before deployment, two steps need to be confirmed.
-- 1. The software and hardware environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 1. The software and hardware environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
 This directory provides an example that `infer.py` quickly complete CPU deployment conducted by the UIE model with OpenVINO acceleration on CPU/GPU and CPU.

(second hunk, around line 351)
-UIEModel loading and initialization. Among them, `model_file`, `params_file` are Paddle inference documents exported by trained models. Please refer to [Model export](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).`vocab_file`refers to the vocabulary file. The vocabulary of the UIE model UIE can be downloaded in [UIE configuration file](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py)
+UIEModel loading and initialization. Among them, `model_file`, `params_file` are Paddle inference documents exported by trained models. Please refer to [Model export](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).`vocab_file`refers to the vocabulary file. The vocabulary of the UIE model UIE can be downloaded in [UIE configuration file](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py).
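As a rough illustration of the constructor that changed line documents, the hedged Python sketch below wires `model_file`, `params_file`, and `vocab_file` into FastDeploy's `UIEModel`. The directory layout, the schema values, and keyword arguments such as `position_prob` are assumptions based on typical UIE examples, not on this diff.

```python
import fastdeploy as fd

model_dir = "uie-base"  # assumed directory holding the exported Paddle inference model

option = fd.RuntimeOption()
option.use_cpu()  # swap for option.use_gpu() or another backend if desired

uie = fd.text.UIEModel(
    model_file=f"{model_dir}/inference.pdmodel",     # exported model structure
    params_file=f"{model_dir}/inference.pdiparams",  # exported weights
    vocab_file=f"{model_dir}/vocab.txt",             # vocabulary file described above
    position_prob=0.5,
    max_length=128,
    schema=["时间", "选手"],  # sample extraction schema (Chinese keys: "time", "athlete")
    runtime_option=option)

results = uie.predict(["2月8日,谷爱凌夺得北京冬奥会自由式滑雪女子大跳台金牌"])
print(results)
```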
examples/text/uie/serving/README.md (+2 -2)

@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)
 Before serving deployment, you need to confirm:
-- 1. You can refer to [FastDeploy serving deployment](../../../../../serving/README_CN.md) for hardware and software environment requirements and image pull commands for serving images.
+- 1. You can refer to [FastDeploy serving deployment](../../../../serving/README.md) for hardware and software environment requirements and image pull commands for serving images.
 ## Prepare models

@@ -143,4 +143,4 @@ results:
 ## Configuration Modification
-The current configuration is by default to run the paddle engine on CPU. If you want to run on CPU/GPU or other inference engines, modifying the configuration is needed.Please refer to [Configuration Document](../../../../serving/docs/zh_CN/model_configuration.md).
+The current configuration is by default to run the paddle engine on CPU. If you want to run on CPU/GPU or other inference engines, modifying the configuration is needed.Please refer to [Configuration Document](../../../../serving/docs/EN/model_configuration-en.md).
examples/vision/README.md (+1 -1)

@@ -32,5 +32,5 @@ Targeted at the vision suite of PaddlePaddle and external popular models, FastDe
 - Model Loading
 - Calling the `predict`interface
-When deploying visual models, FastDeploy supports one-click switching of the backend inference engine. Please refer to [How to switch model inference engine](../../docs/cn/faq/how_to_change_backend.md).
+When deploying visual models, FastDeploy supports one-click switching of the backend inference engine. Please refer to [How to switch model inference engine](../../docs/en/faq/how_to_change_backend.md).
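The "one-click switching" the linked FAQ covers is done through `fastdeploy.RuntimeOption` before a model object is created. The sketch below is illustrative only: the detection model and its file names are placeholders, and the exact set of backend setters can vary between FastDeploy releases.

```python
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu(0)  # target device first (use_cpu() for CPU-only deployment)

# One call selects the backend inference engine:
option.use_trt_backend()         # TensorRT
# option.use_ort_backend()       # ONNX Runtime
# option.use_openvino_backend()  # OpenVINO (CPU)

# Placeholder vision model; FastDeploy vision models accept runtime_option uniformly.
model = fd.vision.detection.PPYOLOE(
    "model.pdmodel", "model.pdiparams", "infer_cfg.yml",
    runtime_option=option)
```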
examples/vision/classification/paddleclas/README.md (+1 -1)

@@ -21,7 +21,7 @@ Now FastDeploy supports the deployment of the following models
 ## Prepare PaddleClas Deployment Model
-For PaddleClas model export, refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
+For PaddleClas model export, refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA).
 Attention:The model exported by PaddleClas contains two files, including `inference.pdmodel` and `inference.pdiparams`. However, it is necessary to prepare the generic [inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/deploy/configs/inference_cls.yaml) file provided by PaddleClas to meet the requirements of deployment. FastDeploy will obtain from the yaml file the preprocessing information required during inference. Developers can directly download this file. But they need to modify the configuration parameters in the yaml file based on personalized needs. Refer to the configuration information in the infer section of the PaddleClas model training [config.](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4/ppcls/configs/ImageNet)
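To make the role of the three files in that paragraph concrete, here is a hedged sketch of loading them with FastDeploy's Python classification API; the directory name and test image are placeholders, and the `topk` argument is assumed from typical examples.

```python
import cv2
import fastdeploy as fd

model_dir = "ResNet50_vd_infer"  # placeholder: a directory with an exported PaddleClas model

# The three files the paragraph describes: graph, weights, and preprocessing config.
model = fd.vision.classification.PaddleClasModel(
    f"{model_dir}/inference.pdmodel",
    f"{model_dir}/inference.pdiparams",
    f"{model_dir}/inference_cls.yaml")

im = cv2.imread("test.jpg")         # any test image on disk
result = model.predict(im, topk=1)  # preprocessing follows inference_cls.yaml
print(result)
```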
Additional hunks from the A311D quantized ResNet50_vd deployment README (file header not captured in the excerpt):

 1.For the software and hardware environment, and the cross-compile environment, please refer to [FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).
 1.You can directly use the quantized model provided by FastDeploy for deployment.
+2.You can use [one-click automatical compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize model by yourself, and use the generated quantized model for deployment.(Note: The quantized classification model still needs the inference_cls.yaml file in the FP32 model folder. Self-quantized model folder does not contain this yaml file, you can copy it from the FP32 model folder to the quantized model folder.)
-更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)  (Chinese: "For more quantization-related information, see [Model Quantization](../../quantize/README.md)")
+For more information, please refer to [Model Quantization](../../quantize/README.md).
+## Deploying the Quantized ResNet50_Vd Segmentation model on A311D
+Please follow these steps to complete the deployment of the ResNet50_Vd quantization model on A311D.
+1.Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).
-2.将编译后的库拷贝到当前目录,可使用如下命令:  (Chinese: "2. Copy the compiled library to the current directory; you can use the following command:")
+2.Copy the compiled library to the current directory. You can run this line: