Commit 3db0016

committed Oct 8, 2022
add qg example
1 parent d6f460e commit 3db0016

38 files changed: +5195 -0 lines changed
 

‎examples/README.md

+1
@@ -20,6 +20,7 @@ PaddleNLP provides rich application examples covering mainstream NLP task to hel
| text_correction |[文本纠错 (Text Correction)](./text_correction/):star: |
| semantic_indexing | [语义索引 (Semantic Indexing)](./semantic_indexing/)|
| information_extraction | [信息抽取 (Information Extraction)](./information_extraction/) |
+| question_generation | [问题生成 (Question Generation)](./question_generation/) |

## NLP 系统应用 (NLP System Applications)

+5
@@ -0,0 +1,5 @@
1+
# 问题生成
2+
3+
Question Generation(QG),即问题生成,指的是给定一段上下文和答案,自动生成一个流畅且符合上下文主题的问句。问题生成技术在教育、咨询、搜索、问答等多个领域均有着巨大的应用价值。
4+
5+
PaddleNLP提供英文和中文问题生成任务示例,分别基于英文预训练语言模型[t5](./t5)和中文预训练语言模型[unimo-text](./unimo-text)。
+208
@@ -0,0 +1,208 @@
1+
# 问题生成(Question Generation)
2+
3+
## 简介
4+
5+
Question Generation(QG),即问题生成,指的是给定一段上下文(passage或sentence),自动生成一个流畅且符合上下文主题的问句。问题生成通常可以分为两个分支,即无答案问题生成(answer-agnostic question generation)和有答案问题生成(answer-aware question generation)。
6+
7+
本项目是T5在 PaddlePaddle上开源实现的有答案问题生成的例子,包含了在SQuAD数据集上微调和生成的代码。
8+
9+
## 快速开始
10+
11+
### 环境依赖
12+
13+
- nltk
14+
- evaluate
15+
16+
17+
安装方式:`pip install -r requirements.txt`
18+
19+
### 代码结构说明
20+
21+
以下是本项目主要代码结构及说明:
22+
23+
```text
24+
.
25+
├── finetune.py # 模型微调主程序入口
26+
├── generate.py # 模型生成主程序入口
27+
├── utils.py # 定义参数及一些工具函数
28+
├── requirements.txt # 环境依赖文件
29+
└── README.md # 文档说明
30+
```
31+
32+
### 数据准备
33+
34+
#### 数据加载
35+
**SQuAD**(Stanford Question Answering Dataset)数据集是一个英文问答数据集,现有的问题生成研究主要在该数据集上进行评价。**SQuAD**中的数据由段落、问题、答案3个主要部分组成,其中段落从维基百科中获取,问题和答案通过众包的方式由人工标注。
36+
37+
为了方便用户快速测试,PaddleNLP Dataset API内置了Squad数据集,一键即可完成数据集加载,示例代码如下:
38+
39+
```python
40+
from paddlenlp.datasets import load_dataset
41+
train_set, dev_set = load_dataset("squad", splits=["train_v1", "dev_v1"])
42+
```
43+
44+
#### 数据处理
45+
针对**SQuAD**数据集,我们需要将QA任务格式的数据转换成text2text形式的数据,默认构造方式如下;其他形式的输入数据,用户可以在convert_example函数中自行定义(拼接方式可参考下方示意代码)。
46+
```text
47+
answer: {answer_text} context: {context_text}
48+
question: {question_text}
49+
```
50+
具体案例如下:
51+
```text
52+
answer: the Miller–Rabin primality test context: The property of being prime (or not) is called primality. A simple but slow method of verifying the primality of a given number n is known as trial division. It consists of testing whether n is a multiple of any integer between 2 and . Algorithms much more efficient than trial division have been devised to test the primality of large numbers. These include the Miller–Rabin primality test, which is fast but has a small probability of error, and the AKS primality test, which always produces the correct answer in polynomial time but is too slow to be practical. Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. As of January 2016[update], the largest known prime number has 22,338,618 decimal digits.
53+
54+
question: What is the name of the process which confirms the primality of a number n?
55+
```
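下面给出一个最小的示意代码,演示上述text2text模版的拼接方式(与本项目utils.py中convert_example的默认构造一致,函数名仅为示意):

```python
def build_text2text_example(example):
    """将SQuAD格式的样本拼接成text2text形式的(source, target)对(示意)。"""
    answer = example["answers"][0]
    source = f"answer: {answer} context: {example['context']} </s>"
    target = f"question: {example['question']} </s>"
    return source, target
```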
56+
57+
### 模型训练
58+
59+
运行如下命令即可在训练集上进行finetune,并在验证集上进行验证
60+
61+
```shell
62+
# GPU启动,参数`--gpus`指定训练所用的GPU卡号,可以是单卡,也可以多卡
63+
# 例如使用1号和2号卡,则:`--gpus 1,2`
64+
unset CUDA_VISIBLE_DEVICES
65+
python -m paddle.distributed.launch --gpus 1,2 finetune.py \
66+
--model_name_or_path=t5-base \
67+
--dataset_name=squad \
68+
--output_dir=output \
69+
--max_source_length=1024 \
70+
--max_target_length=142 \
71+
--learning_rate=1e-4 \
72+
--num_train_epochs=6 \
73+
--logging_steps=100 \
74+
--save_steps=1000 \
75+
--seed=42 \
76+
--train_batch_size=20 \
77+
--eval_batch_size=64 \
78+
--warmup_proportion=0.1 \
79+
--ignore_pad_token_for_loss=True \
80+
--device=gpu
81+
```
82+
83+
其中参数释义如下:
84+
- `gpus` 指示了训练所用的GPU
85+
86+
- `model_name_or_path` 指示了finetune使用的预训练模型,可以是PaddleNLP提供的预训练模型,或者是本地的模型。如果使用本地的模型,则配置为本地模型的目录地址,例如: ./checkpoints/model_xx/,目录中需包含paddle模型参数model_state.pdparams。如果使用PaddleNLP提供的预训练模型,可以选择下面其中之一。
87+
88+
| PaddleNLP提供的预训练模型 |
89+
|---------------------------------|
90+
| t5-base |
91+
| t5-large |
92+
93+
- `dataset_name` 表示训练的数据集。
94+
95+
- `output_dir` 表示模型的保存路径。
96+
97+
- `max_source_length` 表示输入序列的长度,超过该长度将被截断。
98+
99+
- `max_target_length` 表示输出的最大长度。
100+
101+
- `learning_rate` 表示基础学习率大小,将与learning rate scheduler产生的值相乘作为当前学习率。
102+
103+
- `num_train_epochs` 表示训练轮数。
104+
105+
106+
107+
- `logging_steps` 表示日志打印间隔。
108+
109+
- `save_steps` 表示模型保存及评估间隔。
110+
111+
- `seed` 表示随机数生成器的种子。
112+
113+
- `train_batch_size` 表示训练每张卡上的样本数目。
114+
115+
- `eval_batch_size` 表示预测单卡上的样本数目。
116+
117+
- `warmup_proportion` 表示warmup步数占总训练步数的比例,学习率会在warmup阶段从0线性升高到基础学习率(即上面配置的learning_rate),其与学习率调度器的关系参见下方示意代码。
118+
119+
- `device` 表示使用的设备。
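下面的示意代码展示了`learning_rate`、`warmup_proportion`与学习率调度器之间的关系(数值仅为示意,finetune.py中使用的正是`LinearDecayWithWarmup`):

```python
from paddlenlp.transformers import LinearDecayWithWarmup

# 假设总训练步数为 8000 * 6(示意数值,实际为 len(train_data_loader) * num_train_epochs)
num_training_steps = 8000 * 6
learning_rate, warmup_proportion = 1e-4, 0.1

# warmup_proportion 为 0~1 之间的小数时,调度器按比例换算出 warmup 步数:
# 学习率先从 0 线性升高到 learning_rate,随后再线性衰减到 0
lr_scheduler = LinearDecayWithWarmup(learning_rate, num_training_steps, warmup_proportion)
```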
120+
121+
程序运行时将会自动进行训练和验证,训练过程中会自动保存模型在指定的`output_dir`中。如:
122+
123+
```text
124+
./output/
125+
├── t5_model_1000.pdparams
126+
│ ├── model_config.json
127+
│ ├── model_state.pdparams
128+
│ ├── special_tokens_map.json
129+
│ ├── spiece.model
130+
│ └── tokenizer_config.json
131+
└── ...
132+
```
133+
134+
**NOTE:** 如需恢复模型训练,只需指定`model_name_or_path`为本地微调模型的路径即可。
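例如,从上面`output_dir`中保存的某个checkpoint恢复(目录名仅为示意),既可以在命令行中设置`--model_name_or_path=./output/t5_model_1000.pdparams`,也可以在自己的脚本中直接加载:

```python
from paddlenlp.transformers import T5ForConditionalGeneration, T5Tokenizer

ckpt_dir = "./output/t5_model_1000.pdparams"  # 示意路径,对应上面保存的checkpoint目录
model = T5ForConditionalGeneration.from_pretrained(ckpt_dir)
tokenizer = T5Tokenizer.from_pretrained(ckpt_dir)
```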
135+
136+
### 模型预测
137+
138+
运行如下命令即可在验证集上进行测试
139+
140+
```shell
141+
# GPU启动,预测仅支持单卡
142+
export CUDA_VISIBLE_DEVICES=0
143+
python generate.py \
144+
--model_name_or_path=t5-base-finetuned-question-generation-ap \
145+
--dataset_name=squad \
146+
--output_path=generate.txt \
147+
--max_source_length=1024 \
148+
--max_target_length=142 \
149+
--decode_strategy=greedy_search \
150+
--top_k=2 \
151+
--top_p=1.0 \
152+
--num_beams=1 \
153+
--length_penalty=0.0 \
154+
--batch_size=64 \
155+
--seed=42 \
156+
--ignore_pad_token_for_loss=True \
157+
--logging_steps=100 \
158+
--device=gpu
159+
```
160+
161+
其中参数释义如下:
162+
- `model_name_or_path` 指示了预测使用的模型,可以是PaddleNLP提供的预训练模型,或者是本地的模型。如果使用本地的模型,则配置为本地模型的目录地址,例如: ./checkpoints/model_xx/,目录中需包含paddle模型参数model_state.pdparams。如果使用PaddleNLP提供的预训练模型,可以选择下面其中之一。
163+
164+
| PaddleNLP提供的预训练模型 |
165+
|---------------------------------|
166+
| t5-base |
167+
| t5-large |
168+
| mrm8488/t5-base-finetuned-question-generation-ap |
169+
170+
- `dataset_name` 表示预测的数据集。
171+
172+
- `output_path` 表示预测结果的保存路径。
173+
174+
- `max_source_length` 表示输入序列的长度,超过该长度将被截断。
175+
176+
- `max_target_length` 表示输出的最大长度。
177+
178+
- `decode_strategy` 表示预测解码时采取的策略,可选"sampling"、"greedy_search"和"beam_search"之一(调用方式见下方示意代码)。
179+
180+
- `top_k` 表示采用"sampling"解码策略时,token的概率按从大到小排序,生成的token只从前`top_k`个中进行采样。
181+
182+
- `top_p` 表示采用"sampling"解码策略时,从词表中采样并选择概率之和大于给定阈值`top_p`的token。
183+
184+
- `num_beams` 表示beam search的beam size。
185+
186+
- `length_penalty` 表示beam search生成长度的指数惩罚。
187+
188+
- `batch_size` 表示每次迭代**单卡**上的样本数目。
189+
190+
- `seed` 表示随机数生成器的种子。
191+
192+
- `logging_steps` 表示日志打印间隔。
193+
194+
- `device` 表示使用的设备。
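下面是`model.generate`使用这些解码参数的最小示意(与generate.py中的调用方式一致,示例输入仅为演示):

```python
import paddle
from paddlenlp.transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5Tokenizer.from_pretrained("t5-base")
inputs = tokenizer("answer: Paris context: Paris is the capital of France. </s>")
input_ids = paddle.to_tensor([inputs["input_ids"]])

# 贪心搜索:每步取概率最大的token,top_k/top_p不生效
preds, _ = model.generate(input_ids=input_ids, max_length=50, decode_strategy="greedy_search")

# 采样解码:只在概率最高的top_k个token、且累计概率不超过top_p的范围内采样
preds, _ = model.generate(input_ids=input_ids, max_length=50, decode_strategy="sampling", top_k=2, top_p=1.0)

# beam search:同时维护num_beams条候选序列,length_penalty为长度的指数惩罚
preds, _ = model.generate(input_ids=input_ids, max_length=50, decode_strategy="beam_search", num_beams=4, length_penalty=0.6)
```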
195+
196+
程序运行结束后会将预测生成的问题保存在`output_path`中。同时终端中会输出评估结果。
197+
198+
采用社区微调模型mrm8488/t5-base-finetuned-question-generation-ap在验证集上有如下结果:
199+
200+
| model_name_or_path | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 |
201+
| :----------------------: | :-------------: | :-------------: |:-------------: |:-------------: |
202+
| [mrm8488/t5-base-finetuned-question-generation-ap](https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap ) | 50.11 | 35.83 | 27.68 | 22.03 |
203+
204+
205+
206+
207+
## 参考文献
208+
1. Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W. and Liu, P.J., 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), pp.1-67.
+324
@@ -0,0 +1,324 @@
1+
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
import os
15+
import argparse
16+
import random
17+
import time
18+
import distutils.util
19+
from pprint import pprint
20+
from functools import partial
21+
from tqdm import tqdm
22+
import numpy as np
23+
24+
import paddle
25+
import paddle.nn as nn
26+
from paddle.io import BatchSampler, DistributedBatchSampler, DataLoader
27+
from paddlenlp.transformers import T5ForConditionalGeneration, T5Tokenizer
28+
from paddlenlp.transformers import LinearDecayWithWarmup
29+
from paddlenlp.utils.log import logger
30+
from paddlenlp.datasets import load_dataset
31+
from paddlenlp.data import Tuple, Stack, Pad
32+
from utils import convert_example, compute_metrics
33+
34+
35+
def parse_args():
36+
parser = argparse.ArgumentParser()
37+
# Required parameters
38+
parser.add_argument("--model_name_or_path",
39+
default="t5-base",
40+
type=str,
41+
required=True,
42+
help="Path to pre-trained model. ")
43+
parser.add_argument(
44+
"--dataset_name",
45+
default="squad",
46+
type=str,
47+
required=True,
48+
help="The name of the dataset to use. Selected in the list: " + "squad")
49+
parser.add_argument(
50+
"--output_dir",
51+
default="output",
52+
type=str,
53+
required=True,
54+
help=
55+
"The output directory where the model predictions and checkpoints will be written.",
56+
)
57+
parser.add_argument(
58+
"--max_source_length",
59+
default=1024,
60+
type=int,
61+
help="The maximum total input sequence length after "
62+
"tokenization.Sequences longer than this will be truncated, sequences shorter will be padded.",
63+
)
64+
parser.add_argument(
65+
"--min_target_length",
66+
default=0,
67+
type=int,
68+
help=
69+
"The minimum total sequence length for target text when generating. ")
70+
parser.add_argument(
71+
"--max_target_length",
72+
default=142,
73+
type=int,
74+
help="The maximum total sequence length for target text after "
75+
"tokenization. Sequences longer than this will be truncated, sequences shorter will be padded."
76+
"during ``evaluate`` and ``predict``.",
77+
)
78+
parser.add_argument("--learning_rate",
79+
default=1e-4,
80+
type=float,
81+
help="The initial learning rate for Adam.")
82+
parser.add_argument(
83+
"--num_train_epochs",
84+
default=3,
85+
type=int,
86+
help="Total number of training epochs to perform.",
87+
)
88+
parser.add_argument("--logging_steps",
89+
type=int,
90+
default=100,
91+
help="Log every X updates steps.")
92+
parser.add_argument("--save_steps",
93+
type=int,
94+
default=100,
95+
help="Save checkpoint every X updates steps.")
96+
parser.add_argument(
97+
"--train_batch_size",
98+
default=20,
99+
type=int,
100+
help="Batch size per GPU/CPU for training.",
101+
)
102+
parser.add_argument(
103+
"--eval_batch_size",
104+
default=12,
105+
type=int,
106+
help="Batch size per GPU/CPU for evaluation.",
107+
)
108+
parser.add_argument("--weight_decay",
109+
default=0.0,
110+
type=float,
111+
help="Weight decay if we apply some.")
112+
parser.add_argument(
113+
"--warmup_steps",
114+
default=0,
115+
type=int,
116+
help=
117+
"Linear warmup over warmup_steps. If > 0: Override warmup_proportion")
118+
parser.add_argument("--warmup_proportion",
119+
default=0.1,
120+
type=float,
121+
help="Linear warmup proportion over total steps.")
122+
parser.add_argument("--adam_epsilon",
123+
default=1e-6,
124+
type=float,
125+
help="Epsilon for Adam optimizer.")
126+
parser.add_argument(
127+
"--max_steps",
128+
default=-1,
129+
type=int,
130+
help=
131+
"If > 0: set total number of training steps to perform. Override num_train_epochs.",
132+
)
133+
parser.add_argument("--seed",
134+
default=42,
135+
type=int,
136+
help="random seed for initialization")
137+
parser.add_argument(
138+
"--device",
139+
default="gpu",
140+
type=str,
141+
choices=["cpu", "gpu", "xpu"],
142+
help="The device to select to train the model, is must be cpu/gpu/xpu.")
143+
parser.add_argument("--use_amp",
144+
default=False,
145+
type=distutils.util.strtobool,
146+
help="Enable mixed precision training.")
147+
parser.add_argument("--scale_loss",
148+
default=2**15,
149+
type=float,
150+
help="The value of scale_loss for fp16.")
151+
args = parser.parse_args()
152+
return args
153+
154+
155+
def set_seed(args):
156+
# Use the same data seed(for data shuffle) for all procs to guarantee data
157+
# consistency after sharding.
158+
random.seed(args.seed)
159+
np.random.seed(args.seed)
160+
# Maybe different op seeds(for dropout) for different procs is better. By:
161+
# `paddle.seed(args.seed + paddle.distributed.get_rank())`
162+
paddle.seed(args.seed)
163+
164+
165+
@paddle.no_grad()
166+
def evaluate(model, data_loader, tokenizer, ignore_pad_token_for_loss,
167+
min_target_length, max_target_length):
168+
model.eval()
169+
all_preds = []
170+
all_labels = []
171+
model = model._layers if isinstance(model, paddle.DataParallel) else model
172+
for batch in tqdm(data_loader, total=len(data_loader), desc="Eval step"):
173+
input_ids, _, _, labels = batch
174+
preds = model.generate(input_ids=input_ids,
175+
min_length=min_target_length,
176+
max_length=max_target_length,
177+
use_cache=True)[0]
178+
all_preds.extend(preds.numpy())
179+
all_labels.extend(labels.numpy())
180+
bleu_result, decoded_preds, decoded_labels = compute_metrics(
181+
all_preds, all_labels, tokenizer, ignore_pad_token_for_loss)
182+
logger.info(bleu_result)
183+
model.train()
184+
185+
186+
def do_train(args):
187+
paddle.set_device(args.device)
188+
if paddle.distributed.get_world_size() > 1:
189+
paddle.distributed.init_parallel_env()
190+
191+
set_seed(args)
192+
tokenizer = T5Tokenizer.from_pretrained(args.model_name_or_path)
193+
model = T5ForConditionalGeneration.from_pretrained(args.model_name_or_path)
194+
trans_func = partial(
195+
convert_example,
196+
tokenizer=tokenizer,
197+
decoder_start_token_id=model.t5.bos_token_id,
198+
max_source_length=args.max_source_length,
199+
max_target_length=args.max_target_length,
200+
ignore_pad_token_for_loss=args.ignore_pad_token_for_loss)
201+
logger.info("Loading train and dev dataset: %s" % args.dataset_name)
202+
train_set, dev_set = load_dataset(args.dataset_name,
203+
splits=["train_v1", "dev_v1"])
204+
logger.info("Loaded train and dev dataset: %s" % args.dataset_name)
205+
train_set = train_set.map(trans_func, lazy=True)
206+
train_batch_sampler = DistributedBatchSampler(
207+
train_set, batch_size=args.train_batch_size, shuffle=True)
208+
209+
batchify_fn = lambda samples, fn=Tuple(
210+
Pad(axis=0, pad_val=tokenizer.pad_token_id, dtype="int64"), # input_ids
211+
Pad(axis=0, pad_val=tokenizer.pad_token_id, dtype="int64"
212+
), # attention_mask
213+
Pad(axis=0, pad_val=tokenizer.pad_token_id, dtype="int64"
214+
), # decoder_input_ids
215+
Pad(axis=0, pad_val=tokenizer.pad_token_id, dtype="int64"), # labels
216+
): fn(samples)
217+
train_data_loader = DataLoader(dataset=train_set,
218+
batch_sampler=train_batch_sampler,
219+
num_workers=0,
220+
collate_fn=batchify_fn,
221+
return_list=True)
222+
dev_set = dev_set.map(trans_func, lazy=True)
223+
dev_batch_sampler = BatchSampler(dev_set,
224+
batch_size=args.eval_batch_size,
225+
shuffle=False)
226+
dev_data_loader = DataLoader(dataset=dev_set,
227+
batch_sampler=dev_batch_sampler,
228+
num_workers=0,
229+
collate_fn=batchify_fn,
230+
return_list=True)
231+
232+
if paddle.distributed.get_world_size() > 1:
233+
model = paddle.DataParallel(model)
234+
235+
num_training_steps = args.max_steps if args.max_steps > 0 else (
236+
len(train_data_loader) * args.num_train_epochs)
237+
warmup = args.warmup_steps if args.warmup_steps > 0 else args.warmup_proportion
238+
239+
lr_scheduler = LinearDecayWithWarmup(args.learning_rate, num_training_steps,
240+
warmup)
241+
242+
# Generate parameter names needed to perform weight decay.
243+
# All bias and LayerNorm parameters are excluded.
244+
decay_params = [
245+
p.name for n, p in model.named_parameters()
246+
if not any(nd in n for nd in ["bias", "norm"])
247+
]
248+
optimizer = paddle.optimizer.AdamW(
249+
learning_rate=lr_scheduler,
250+
beta1=0.9,
251+
beta2=0.999,
252+
epsilon=args.adam_epsilon,
253+
parameters=model.parameters(),
254+
weight_decay=args.weight_decay,
255+
apply_decay_param_fun=lambda x: x in decay_params)
256+
257+
if args.use_amp:
258+
scaler = paddle.amp.GradScaler(init_loss_scaling=args.scale_loss)
259+
global_step = 0
260+
tic_train = time.time()
261+
for epoch in tqdm(range(args.num_train_epochs), desc="Epoch"):
262+
for step, batch in tqdm(enumerate(train_data_loader),
263+
desc="Train step",
264+
total=len(train_data_loader)):
265+
global_step += 1
266+
input_ids, attention_mask, decoder_input_ids, labels = batch
267+
with paddle.amp.auto_cast(
268+
args.use_amp,
269+
custom_white_list=["layer_norm", "softmax", "gelu"]):
270+
output = model(input_ids,
271+
attention_mask,
272+
decoder_input_ids,
273+
labels=labels)
274+
loss = output[0]
275+
if args.use_amp:
276+
scaled_loss = scaler.scale(loss)
277+
scaled_loss.backward()
278+
scaler.minimize(optimizer, scaled_loss)
279+
else:
280+
loss.backward()
281+
optimizer.step()
282+
lr_scheduler.step()
283+
optimizer.clear_grad()
284+
if global_step % args.logging_steps == 0:
285+
logger.info(
286+
"global step %d/%d, epoch: %d, batch: %d, rank_id: %s, loss: %f, lr: %.10f, speed: %.4f step/s"
287+
% (global_step, num_training_steps, epoch, step,
288+
paddle.distributed.get_rank(), loss, optimizer.get_lr(),
289+
args.logging_steps / (time.time() - tic_train)))
290+
tic_train = time.time()
291+
if global_step % args.save_steps == 0 or global_step == num_training_steps:
292+
tic_eval = time.time()
293+
evaluate(model, dev_data_loader, tokenizer,
294+
args.ignore_pad_token_for_loss, args.min_target_length,
295+
args.max_target_length)
296+
logger.info("eval done total : %s s" % (time.time() - tic_eval))
297+
if paddle.distributed.get_rank() == 0:
298+
output_dir = os.path.join(
299+
args.output_dir, "t5_model_%d.pdparams" % global_step)
300+
if not os.path.exists(output_dir):
301+
os.makedirs(output_dir)
302+
# Need better way to get inner model of DataParallel
303+
model_to_save = model._layers if isinstance(
304+
model, paddle.DataParallel) else model
305+
model_to_save.save_pretrained(output_dir)
306+
tokenizer.save_pretrained(output_dir)
307+
if global_step >= num_training_steps:
308+
return
309+
if paddle.distributed.get_rank() == 0:
310+
output_dir = os.path.join(args.output_dir,
311+
"t5_model_final_%d.pdparams" % global_step)
312+
if not os.path.exists(output_dir):
313+
os.makedirs(output_dir)
314+
# Need better way to get inner model of DataParallel
315+
model_to_save = model._layers if isinstance(
316+
model, paddle.DataParallel) else model
317+
model_to_save.save_pretrained(output_dir)
318+
tokenizer.save_pretrained(output_dir)
319+
320+
321+
if __name__ == "__main__":
322+
args = parse_args()
323+
pprint(args)
324+
do_train(args)
@@ -0,0 +1,29 @@
1+
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
15+
python -m paddle.distributed.launch --gpus 4,5,6,7 finetune.py \
16+
--model_name_or_path=t5-base \
17+
--dataset_name=squad \
18+
--output_dir=output \
19+
--max_source_length=1024 \
20+
--max_target_length=142 \
21+
--learning_rate=1e-4 \
22+
--num_train_epochs=6 \
23+
--logging_steps=100 \
24+
--save_steps=1000 \
25+
--seed=42 \
26+
--train_batch_size=8 \
27+
--eval_batch_size=64 \
28+
--warmup_proportion=0.1 \
29+
--device=gpu
+240
@@ -0,0 +1,240 @@
1+
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
import sys
15+
import argparse
16+
import random
17+
import time
18+
from functools import partial
19+
from pprint import pprint
20+
import numpy as np
21+
import paddle
22+
from paddle.io import BatchSampler, DataLoader
23+
from paddlenlp.datasets import load_dataset
24+
from paddlenlp.data import Tuple, Stack, Pad
25+
from paddlenlp.transformers import T5ForConditionalGeneration, T5Tokenizer
26+
from utils import convert_example, compute_metrics
27+
28+
29+
def parse_args():
30+
parser = argparse.ArgumentParser()
31+
# Required parameters
32+
parser.add_argument("--model_name_or_path",
33+
default="t5-base",
34+
type=str,
35+
required=True,
36+
help="Path to pre-trained model. ")
37+
parser.add_argument(
38+
"--dataset_name",
39+
default="squad",
40+
type=str,
41+
required=True,
42+
help="The name of the dataset to use. Selected in the list: " + "squad")
43+
parser.add_argument(
44+
'--output_path',
45+
type=str,
46+
default='generate.txt',
47+
help='The file path where the infer result will be saved.')
48+
parser.add_argument(
49+
"--max_source_length",
50+
default=1024,
51+
type=int,
52+
help="The maximum total input sequence length after "
53+
"tokenization.Sequences longer than this will be truncated, sequences shorter will be padded.",
54+
)
55+
parser.add_argument(
56+
"--min_target_length",
57+
default=0,
58+
type=int,
59+
help=
60+
"The minimum total sequence length for target text when generating. ")
61+
parser.add_argument(
62+
"--max_target_length",
63+
default=142,
64+
type=int,
65+
help="The maximum total sequence length for target text after "
66+
"tokenization. Sequences longer than this will be truncated, sequences shorter will be padded."
67+
"during ``evaluate`` and ``predict``.",
68+
)
69+
parser.add_argument('--decode_strategy',
70+
default='greedy_search',
71+
type=str,
72+
help='The decode strategy in generation.')
73+
parser.add_argument(
74+
'--top_k',
75+
default=2,
76+
type=int,
77+
help=
78+
'The number of highest probability vocabulary tokens to keep for top-k sampling.'
79+
)
80+
parser.add_argument('--top_p',
81+
default=1.0,
82+
type=float,
83+
help='The cumulative probability for top-p sampling.')
84+
parser.add_argument('--num_beams',
85+
default=1,
86+
type=int,
87+
help='The number of beams for beam search.')
88+
parser.add_argument(
89+
'--length_penalty',
90+
default=0.6,
91+
type=float,
92+
help='The exponential penalty to the sequence length for beam search.')
93+
parser.add_argument(
94+
'--early_stopping',
95+
default=False,
96+
type=eval,
97+
help=
98+
'Whether to stop the beam search when at least `num_beams` sentences are finished per batch or not.'
99+
)
100+
parser.add_argument("--diversity_rate",
101+
default=0.0,
102+
type=float,
103+
help="The diversity of beam search. ")
104+
parser.add_argument(
105+
'--faster',
106+
action='store_true',
107+
help='Whether to process inference using faster transformer. ')
108+
parser.add_argument(
109+
'--use_fp16_decoding',
110+
action='store_true',
111+
help=
112+
'Whether to use fp16 when using faster transformer. Only works when using faster transformer. '
113+
)
114+
parser.add_argument(
115+
"--batch_size",
116+
default=64,
117+
type=int,
118+
help="Batch size per GPU/CPU for testing or evaluation.")
119+
parser.add_argument("--seed",
120+
default=42,
121+
type=int,
122+
help="random seed for initialization")
123+
parser.add_argument(
124+
"--device",
125+
default="gpu",
126+
type=str,
127+
choices=["cpu", "gpu", "xpu"],
128+
help="The device to select to train the model, is must be cpu/gpu/xpu.")
129+
parser.add_argument("--logging_steps",
130+
type=int,
131+
default=100,
132+
help="Log every X updates steps.")
133+
parser.add_argument("--is_debug",
134+
default=False,
135+
type=bool,
136+
help="Whether to debug.")
137+
parser.add_argument("--ignore_pad_token_for_loss",
default=True,
type=eval,
help="Whether to ignore the tokens corresponding to "
"padded labels in the loss computation.")
args = parser.parse_args()
138+
return args
139+
140+
141+
def set_seed(args):
142+
# Use the same data seed(for data shuffle) for all procs to guarantee data
143+
# consistency after sharding.
144+
random.seed(args.seed)
145+
np.random.seed(args.seed)
146+
# Maybe different op seeds(for dropout) for different procs is better. By:
147+
# `paddle.seed(args.seed + paddle.distributed.get_rank())`
148+
paddle.seed(args.seed)
149+
150+
151+
@paddle.no_grad()
152+
def generate(args):
153+
paddle.set_device(args.device)
154+
set_seed(args)
155+
tokenizer = T5Tokenizer.from_pretrained(args.model_name_or_path)
156+
model = T5ForConditionalGeneration.from_pretrained(args.model_name_or_path)
157+
dataset = load_dataset(args.dataset_name, splits=["dev_v1"])
158+
# dataset = load_dataset(args.dataset_name, splits=["dev_v2"])
159+
trans_func = partial(
160+
convert_example,
161+
tokenizer=tokenizer,
162+
decoder_start_token_id=model.t5.bos_token_id,
163+
max_source_length=args.max_source_length,
164+
max_target_length=args.max_target_length,
165+
ignore_pad_token_for_loss=args.ignore_pad_token_for_loss,
166+
is_train=False)
167+
168+
batchify_fn = lambda samples, fn=Tuple(
169+
Pad(axis=0, pad_val=tokenizer.pad_token_id, dtype="int64"), # input_ids
170+
Pad(axis=0, pad_val=tokenizer.pad_token_id, dtype="int64"
171+
), # attention_mask
172+
Pad(axis=0, pad_val=-100, dtype="int64"), # mem_seq_lens
173+
Pad(axis=0, pad_val=tokenizer.pad_token_id, dtype="int64"
174+
), # decoder_input_ids
175+
Pad(axis=0, pad_val=tokenizer.pad_token_id, dtype="int64"), # labels
176+
): fn(samples)
177+
178+
dataset = dataset.map(trans_func, lazy=True)
179+
180+
# debug
181+
if args.is_debug:
182+
dataset.data = dataset.data[:20]
183+
dataset.new_data = dataset.new_data[:20]
184+
185+
batch_sampler = BatchSampler(dataset,
186+
batch_size=args.batch_size,
187+
shuffle=False)
188+
data_loader = DataLoader(dataset=dataset,
189+
batch_sampler=batch_sampler,
190+
num_workers=0,
191+
collate_fn=batchify_fn,
192+
return_list=True)
193+
data_loader.pin_memory = False
194+
195+
model.eval()
196+
total_time = 0.0
197+
start_time = time.time()
198+
all_preds = []
199+
all_labels = []
200+
for step, batch in enumerate(data_loader):
201+
input_ids, _, mem_seq_lens, _, labels = batch
202+
preds, _ = model.generate(input_ids=input_ids,
203+
max_length=args.max_target_length,
204+
min_length=args.min_target_length,
205+
decode_strategy=args.decode_strategy,
206+
top_k=args.top_k,
207+
top_p=args.top_p,
208+
num_beams=args.num_beams,
209+
length_penalty=args.length_penalty,
210+
early_stopping=args.early_stopping,
211+
diversity_rate=args.diversity_rate,
212+
use_faster=args.faster)
213+
total_time += (time.time() - start_time)
214+
if step % args.logging_steps == 0:
215+
print('step %d - %.3fs/step' %
216+
(step, total_time / args.logging_steps))
217+
total_time = 0.0
218+
all_preds.extend(preds.numpy())
219+
all_labels.extend(labels.numpy())
220+
start_time = time.time()
221+
222+
bleu_result, decoded_preds, decoded_labels = compute_metrics(
223+
all_preds, all_labels, tokenizer, args.ignore_pad_token_for_loss)
224+
print("BLEU result: ", bleu_result)
225+
with open(args.output_path, 'w', encoding='utf-8') as fout:
226+
for decoded_pred in decoded_preds:
227+
fout.write(' '.join(decoded_pred) + '\n')
228+
print('Save generated result into: %s' % args.output_path)
229+
with open(args.output_path + '.reference.txt', 'w',
230+
encoding='utf-8') as fout:
231+
for decoded_label in decoded_labels:
232+
fout.write(' '.join(decoded_label) + '\n')
233+
print('Save referenced labels into: %s' % args.output_path +
234+
'.reference.txt')
235+
236+
237+
if __name__ == '__main__':
238+
args = parse_args()
239+
pprint(args)
240+
generate(args)
@@ -0,0 +1,29 @@
1+
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
15+
python generate.py \
16+
--model_name_or_path=mrm8488/t5-base-finetuned-question-generation-ap \
17+
--dataset_name=squad \
18+
--output_path=generate.txt \
19+
--max_source_length=1024 \
20+
--max_target_length=142 \
21+
--decode_strategy=greedy_search \
22+
--top_k=2 \
23+
--top_p=1.0 \
24+
--num_beams=1 \
25+
--length_penalty=0.0 \
26+
--batch_size=64 \
27+
--seed=42 \
28+
--logging_steps=20 \
29+
--device=gpu
@@ -0,0 +1,2 @@
1+
nltk==3.6.2
2+
evaluate==0.2.2
+187
@@ -0,0 +1,187 @@
1+
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
import numpy as np
15+
import nltk
16+
from paddlenlp.metrics import BLEU
17+
import evaluate
18+
19+
20+
def convert_example(example,
21+
tokenizer,
22+
decoder_start_token_id,
23+
max_source_length,
24+
max_target_length,
25+
ignore_pad_token_for_loss=True,
26+
is_train=True):
27+
"""
28+
Convert a example into necessary features.
29+
"""
30+
# Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
31+
# in one example possible giving several features when a context is long, each of those features having a
32+
# context that overlaps a bit the context of the previous feature.
33+
#NOTE: Almost the same functionality as HuggingFace's prepare_train_features function. The main difference is
34+
# that HugggingFace uses ArrowTable as basic data structure, while we use list of dictionary instead.
35+
context = example['context']
36+
question = example['question']
37+
try:
38+
answer = example['answers'][0]
39+
except:
40+
print(example['context'])
41+
print(example['question'])
42+
print(example['answers'])
43+
print(example['answer_starts'])
44+
print(example['is_impossible'])
45+
46+
input_seq = f'answer: {answer} context: {context} </s>'
47+
output_seq = f'question: {question} </s>'
48+
49+
labels = tokenizer(
50+
output_seq,
51+
max_seq_len=max_target_length,
52+
pad_to_max_seq_len=True,
53+
truncation_strategy="longest_first",
54+
)
55+
56+
output_ids = [decoder_start_token_id] + labels["input_ids"][:-1]
57+
58+
if ignore_pad_token_for_loss:
59+
labels["input_ids"] = [(l if l != tokenizer.pad_token_id else -100)
60+
for l in labels["input_ids"]]
61+
62+
if is_train:
63+
input_ids = tokenizer(input_seq,
64+
max_seq_len=max_source_length,
65+
pad_to_max_seq_len=True,
66+
truncation_strategy="longest_first",
67+
return_attention_mask=True,
68+
return_length=False)
69+
return input_ids["input_ids"], input_ids[
70+
"attention_mask"], output_ids, labels["input_ids"]
71+
else:
72+
input_ids = tokenizer(input_seq,
73+
max_seq_len=max_source_length,
74+
pad_to_max_seq_len=True,
75+
truncation_strategy="longest_first",
76+
return_attention_mask=True,
77+
return_length=True)
78+
return input_ids["input_ids"], input_ids["attention_mask"], \
79+
input_ids["length"], output_ids, labels["input_ids"]
80+
81+
82+
def compute_metrics(preds, labels, tokenizer, ignore_pad_token_for_loss=True):
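"""Decode the predicted/reference token ids, strip the "question:" template prefix, and compute corpus-level BLEU-1~4 (using the HuggingFace `evaluate` package)."""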
83+
84+
def compute_bleu(predictions,
85+
references,
86+
rouge_types=None,
87+
use_stemmer=True):
88+
bleu1 = BLEU(n_size=1)
89+
bleu2 = BLEU(n_size=2)
90+
bleu3 = BLEU(n_size=3)
91+
bleu4 = BLEU(n_size=4)
92+
assert len(predictions) == len(references)
93+
for i in range(len(predictions)):
94+
bleu1.add_inst(predictions[i], [references[i]])
95+
bleu2.add_inst(predictions[i], [references[i]])
96+
bleu3.add_inst(predictions[i], [references[i]])
97+
bleu4.add_inst(predictions[i], [references[i]])
98+
result = {
99+
'BLEU-1': bleu1.score() * 100,
100+
'BLEU-2': bleu2.score() * 100,
101+
'BLEU-3': bleu3.score() * 100,
102+
'BLEU-4': bleu4.score() * 100
103+
}
104+
return result
105+
106+
def compute_bleu_hf(predictions,
107+
references,
108+
rouge_types=None,
109+
use_stemmer=True):
110+
predictions = [' '.join(prediction) for prediction in predictions]
111+
references = [[' '.join(reference)] for reference in references]
112+
113+
bleu = evaluate.load("bleu")
114+
assert len(predictions) == len(references)
115+
bleu1_results = bleu.compute(predictions=predictions,
116+
references=references,
117+
max_order=1)
118+
bleu2_results = bleu.compute(predictions=predictions,
119+
references=references,
120+
max_order=2)
121+
bleu3_results = bleu.compute(predictions=predictions,
122+
references=references,
123+
max_order=3)
124+
bleu4_results = bleu.compute(predictions=predictions,
125+
references=references,
126+
max_order=4)
127+
128+
result = {
129+
'BLEU-1': bleu1_results['bleu'] * 100,
130+
'BLEU-2': bleu2_results['bleu'] * 100,
131+
'BLEU-3': bleu3_results['bleu'] * 100,
132+
'BLEU-4': bleu4_results['bleu'] * 100
133+
}
134+
return result
135+
136+
def post_process_text(preds, labels):
137+
preds = [pred.strip() for pred in preds]
138+
labels = [label.strip() for label in labels]
139+
preds = [pred.strip('question:') for pred in preds]
140+
labels = [label.strip('question:') for label in labels]
141+
preds = [pred.strip() for pred in preds]
142+
labels = [label.strip() for label in labels]
143+
144+
# expects newline after each sentence
145+
preds = ["\n".join(nltk.sent_tokenize(pred)) for pred in preds]
146+
labels = ["\n".join(nltk.sent_tokenize(label)) for label in labels]
147+
148+
preds = [pred.split() for pred in preds]
149+
labels = [label.split() for label in labels]
150+
151+
return preds, labels
152+
153+
def post_process_seq(seq,
154+
bos_idx,
155+
eos_idx,
156+
output_bos=False,
157+
output_eos=False):
158+
"""
159+
Post-process the decoded sequence.
160+
"""
161+
eos_pos = len(seq) - 1
162+
for i, idx in enumerate(seq):
163+
if idx == eos_idx:
164+
eos_pos = i
165+
break
166+
seq = [
167+
idx for idx in seq[:eos_pos + 1]
168+
if (output_bos or idx != bos_idx) and (output_eos or idx != eos_idx)
169+
]
170+
return seq
171+
172+
if ignore_pad_token_for_loss:
173+
labels = np.asarray(labels)
174+
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
175+
decoded_preds, decoded_labels = [], []
176+
for pred, label in zip(preds, labels):
177+
pred_id = post_process_seq(pred, tokenizer.bos_token_id,
178+
tokenizer.eos_token_id)
179+
label_id = post_process_seq(label, tokenizer.bos_token_id,
180+
tokenizer.eos_token_id)
181+
decoded_preds.append(tokenizer.decode(pred_id))
182+
decoded_labels.append(tokenizer.decode(label_id))
183+
decoded_preds, decoded_labels = post_process_text(decoded_preds,
184+
decoded_labels)
185+
# bleu_result = compute_bleu(decoded_preds, decoded_labels)
186+
bleu_result = compute_bleu_hf(decoded_preds, decoded_labels)
187+
return bleu_result, decoded_preds, decoded_labels
@@ -0,0 +1,309 @@
1+
# 问题生成
2+
3+
4+
**目录**
5+
- [问题生成](#问题生成)
6+
- [简介](#简介)
7+
- [基于预训练语言模型的问题生成](#基于预训练语言模型的问题生成)
8+
<!-- - [效果展示](#效果展示)
9+
- [开箱即用](#开箱即用) -->
10+
- [训练定制](#训练定制)
11+
- [环境依赖](#环境依赖)
12+
- [代码结构说明](#代码结构说明)
13+
- [问题生成应用定制训练全流程介绍](#问题生成定制训练全流程介绍)
14+
- [数据准备](#数据准备)
15+
- [数据加载](#数据加载)
16+
- [数据处理](#数据处理)
17+
- [从本地文件创建数据集(可选)](#从本地文件创建数据集(可选))
18+
- [模型训练](#模型训练)
19+
- [模型预测](#模型预测)
20+
- [模型转换部署](#模型转换部署)
21+
- [FasterTransformer加速及模型静态图导出](#fastertransformer加速及模型静态图导出)
22+
- [模型部署](#模型部署)
23+
- [References](#references)
24+
25+
## 简介
26+
Question Generation(QG),即问题生成,指的是给定一段上下文,自动生成一个流畅且符合上下文主题的问句。问题生成通常可以分为无答案问题生成(answer-agnostic QG)和有答案问题生成(answer-aware QG)两类,这里只关注应用更广的有答案问题生成。
27+
28+
问题生成技术在教育、咨询、搜索、推荐等多个领域均有着巨大的应用价值。具体来说,问题生成可广泛应用于问答系统语料库构建,事实性问题生成,教育行业题库生成,对话提问,聊天机器人意图理解,对话式搜索意图提问,闲聊机器人主动提问等等场景。
29+
30+
### 基于预训练语言模型的问题生成
31+
32+
基于预训练语言模型(Pretrained Language Models, PLMs)范式的问题生成是目前最常用、效果最好(SOTA)的方式。
33+
预训练模型是在超大规模的语料采用无监督或者弱监督的方式进行预训练,能够学习如何准确地理解自然语言并以自然语言的形式流畅表达,这两项都是完成文本生成任务的重要能力。
34+
35+
PaddleNLP提供了方便易用的接口,可指定模型名或模型参数文件路径通过from_pretrained()方法加载不同网络结构的预训练模型,且相应预训练模型权重下载速度快速、稳定。
36+
Transformer预训练模型汇总包含了如 ERNIE、BERT、T5、UNIMO等主流预训练模型。下面以中文unimo-text-1.0模型为例,演示如何加载预训练模型和分词器:
37+
```python
38+
from paddlenlp.transformers import UNIMOLMHeadModel, UNIMOTokenizer
model_name = "unimo-text-1.0"
model = UNIMOLMHeadModel.from_pretrained(model_name)
tokenizer = UNIMOTokenizer.from_pretrained(model_name)
42+
```
43+
<!--
44+
## 效果展示
45+
46+
## 开箱即用 -->
47+
48+
## 训练定制
49+
50+
### 环境依赖
51+
- nltk
52+
- evaluate
53+
- tqdm
54+
55+
安装方式:`pip install -r requirements.txt`
56+
57+
### 代码结构说明
58+
59+
以下是本项目主要代码结构及说明:
60+
61+
```text
62+
├── deploy # 部署
63+
│ ├── paddle_inference # PaddleInference高性能推理部署
64+
│ │ ├── inference_unimo_text.py # 推理部署脚本
65+
│ │ └── README.md # 说明文档
66+
│ └── paddle_serving
67+
│ ├── config.yml # 配置文件
68+
│ ├── pipeline_client.py # 客户端程序
69+
│ ├── pipeline_service.py # 服务器程序
70+
│ └── README.md # 说明文档
71+
├── export_model.py # 动态图参数导出静态图参数脚本
72+
├── train.py # 训练评估脚本
73+
├── utils.py # 工具函数脚本
74+
└── README.md # 说明文档
75+
```
76+
77+
### 问题生成定制训练全流程介绍
78+
接下来,我们将按数据准备、训练、预测、推理部署等四个阶段对问题生成应用的全流程进行介绍。
79+
1. **数据准备**
80+
- 如果没有已标注的数据集,我们推荐doccano数据标注工具([doccano](https://github.com/doccano/doccano))。
81+
- 如果已有标注好的本地数据集,我们需要将数据集整理为文档要求的格式,请参考[从本地文件创建数据集(可选)](#从本地文件创建数据集(可选))。
82+
83+
2. **模型训练**
84+
85+
- 数据准备完成后,可以开始使用我们的数据集对预训练模型进行微调训练。我们可以根据任务需求,调整可配置参数,选择使用GPU或CPU进行模型训练,脚本默认保存在开发集最佳表现模型。中文任务默认使用"unimo-text-1.0"模型,unimo-text-1.0还支持large模型,详见[UNIMO模型汇总](https://paddlenlp.readthedocs.io/zh/latest/model_zoo/transformers/UNIMO/contents.html),可以根据任务和设备需求进行选择。
86+
87+
88+
3. **模型预测**
89+
90+
- 训练结束后,我们可以加载保存的最佳模型进行模型测试,打印模型预测结果。
91+
92+
4. **模型转换部署**
93+
- 在现实部署场景中,我们通常不仅对模型的精度表现有要求,也需要考虑模型性能上的表现。我们可以使用模型裁剪进一步压缩模型体积,问题生成应用已提供裁剪API对上一步微调后的模型进行裁剪,模型裁剪之后会默认导出静态图模型。
94+
95+
- 模型部署需要将保存的最佳模型参数(动态图)导出成静态图参数,用于后续的推理部署。
96+
97+
- 问题生成应用提供了基于Paddle Inference的本地部署predictor,并且支持在GPU设备使用Faster Generation进行加速。
98+
99+
- 问题生成应用提供了基于Paddle Serving的服务端部署方案。
100+
101+
### 数据准备
102+
#### 数据加载
103+
[**DuReader_QG**数据集](https://www.luge.ai/#/luge/dataDetail?id=8)是一个中文问答数据集,我们使用该数据集作为应用案例进行实验。**DuReader_QG**中的数据主要由上下文、问题、答案3个主要部分组成,其任务描述为给定上下文p和答案a,生成自然语言表述的问题q,且该问题符合段落和上下文的限制。
104+
105+
为了方便用户快速测试,PaddleNLP Dataset API内置了DuReader_QG数据集,一键即可完成数据集加载,示例代码如下:
106+
107+
```python
108+
from paddlenlp.datasets import load_dataset
109+
train_ds, dev_ds = load_dataset('dureader_qg', splits=('train', 'dev'))
110+
```
111+
112+
#### 数据处理
113+
针对**DuReader_QG**数据集,我们需要将QA任务格式的数据转换成text2text形式的数据。我们默认使用模版的方式构造输入数据,默认模版如下;其他形式的输入数据,用户可以在convert_example函数中自行定义(拼接方式参见下方示意代码)。
114+
```text
115+
答案: <answer_text> 上下文: <context_text>
116+
问题: <question_text>
117+
```
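下面的示意代码展示了默认模版(`template=1`)下输入与目标序列的拼接方式,与本项目convert_example中的处理一致(函数名与字段默认值仅为示意):

```python
def build_template_example(example, sep_token="[SEP]"):
    """按默认模版(template=1)拼接(source, target)对(示意)。"""
    source = "答案:" + example["title"] + sep_token + "上下文:" + example["source"]
    target = "问题:" + example["target"]
    return source, target
```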
118+
119+
#### 从本地文件创建数据集(可选)
120+
在许多情况下,我们需要使用本地数据集来训练我们的问题生成模型,本项目支持使用固定格式本地数据集文件进行训练。
121+
使用本地文件,只需要在模型训练时指定`train_file` 为本地训练数据地址,`predict_file` 为本地测试数据地址即可。
122+
123+
本地数据集目录结构如下:
124+
125+
```text
126+
data/
127+
├── train.json # 训练数据集文件
128+
├── dev.json # 开发数据集文件
129+
└── test.json # 可选,待预测数据文件
130+
```
131+
本地数据集文件格式如下:
132+
- train.json/dev.json/test.json 文件格式:
133+
```text
134+
{
135+
"source": <context_text>,
136+
"title": <answer_text>,
137+
"target": <question_text>,
138+
}
139+
...
140+
```
141+
- train.json/dev.json/test.json 文件样例:
142+
```text
143+
{
144+
"source": "欠条是永久有效的,未约定还款期限的借款合同纠纷,诉讼时效自债权人主张债权之日起计算,时效为2年。 根据《中华人民共和国民法通则》第一百三十五条:向人民法院请求保护民事权利的诉讼时效期间为二年,法律另有规定的除外。 第一百三十七条:诉讼时效期间从知道或者应当知道权利被侵害时起计算。但是,从权利被侵害之日起超过二十年的,人民法院不予保护。有特殊情况的,人民法院可以延长诉讼时效期间。 第六十二条第(四)项:履行期限不明确的,债务人可以随时履行,债权人也可以随时要求履行,但应当给对方必要的准备时间。",
145+
"title": "永久有效",
146+
"target": "欠条的有效期是多久"
147+
}
148+
...
149+
```
150+
151+
更多数据集读取格式详见[数据集加载](https://paddlenlp.readthedocs.io/zh/latest/data_prepare/dataset_load.html#)和[自定义数据集](https://paddlenlp.readthedocs.io/zh/latest/data_prepare/dataset_self_defined.html)。
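如果希望在训练脚本之外自行加载上述格式的本地数据集,也可以使用`load_dataset`传入自定义读取函数,示意如下(假设每行为一个JSON对象,路径与函数名均为示例):

```python
import json
from paddlenlp.datasets import load_dataset

def read_local(data_path):
    # 逐行读取上述格式的本地文件,每行一个JSON对象
    with open(data_path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            example = json.loads(line)
            yield {
                "source": example["source"],
                "title": example["title"],
                "target": example.get("target", ""),
            }

train_ds = load_dataset(read_local, data_path="data/train.json", lazy=False)
```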
152+
153+
### 模型训练
154+
运行如下命令即可在样例训练集上进行finetune,并在样例验证集上进行验证。
155+
```shell
156+
# GPU启动,参数`--gpus`指定训练所用的GPU卡号,可以是单卡,也可以多卡
157+
# 例如使用1号和2号卡,则:`--gpus 1,2`
158+
unset CUDA_VISIBLE_DEVICES
159+
python -m paddle.distributed.launch --gpus "1,2" --log_dir ./unimo/finetune/log run_gen.py \
160+
--dataset_name=dureader_qg \
161+
--model_name_or_path="unimo-text-1.0" \
162+
--save_dir=./unimo/finetune/checkpoints \
163+
--output_path ./unimo/finetune/predict.txt \
164+
--logging_steps=100 \
165+
--save_steps=500 \
166+
--epochs=20 \
167+
--batch_size=16 \
168+
--learning_rate=1e-5 \
169+
--warmup_propotion=0.02 \
170+
--weight_decay=0.01 \
171+
--max_seq_len=512 \
172+
--max_target_len=30 \
173+
--do_train \
174+
--do_predict \
175+
--max_dec_len=20 \
176+
--min_dec_len=3 \
177+
--num_return_sequences=1 \
178+
--adversarial_training=None \
179+
--template=1 \
180+
--device=gpu
181+
```
182+
183+
184+
关键参数释义如下:
185+
- `gpus` 指示了训练所用的GPU,使用多卡训练可以指定多个GPU卡号,例如 --gpus "0,1"。
186+
- `dataset_name` 数据集名称,默认为`dureader_qg`
187+
- `train_file` 本地训练数据地址,数据格式必须与`dataset_name`所指数据集格式相同,默认为None。
188+
- `predict_file` 本地测试数据地址,数据格式必须与`dataset_name`所指数据集格式相同,默认为None。
189+
- `model_name_or_path` 指示了finetune使用的具体预训练模型,可以是PaddleNLP提供的预训练模型,或者是本地的预训练模型。如果使用本地的预训练模型,可以配置本地模型的目录地址,例如: ./checkpoints/model_xx/,目录中需包含paddle预训练模型model_state.pdparams。如果使用PaddleNLP提供的预训练模型,可以选择下面其中之一。
190+
| 可选预训练模型 |
191+
|---------------------------------|
192+
| unimo-text-1.0 |
193+
| unimo-text-1.0-large |
194+
195+
<!-- | T5-PEGASUS |
196+
| ernie-1.0 |
197+
| ernie-gen-base-en |
198+
| ernie-gen-large-en |
199+
| ernie-gen-large-en-430g | -->
200+
201+
- `save_dir` 表示模型的保存路径。
202+
- `output_path` 表示预测结果的保存路径。
203+
- `logging_steps` 表示日志打印间隔。
204+
- `save_steps` 表示模型保存及评估间隔。
205+
- `seed` 表示随机数生成器的种子。
206+
- `epochs` 表示训练轮数。
207+
- `batch_size` 表示每次迭代**每张卡**上的样本数目。
208+
- `learning_rate` 表示基础学习率大小,将与learning rate scheduler产生的值相乘作为当前学习率。
209+
- `weight_decay` 表示AdamW优化器中使用的weight_decay的系数。
210+
- `warmup_propotion` 表示学习率逐渐升高到基础学习率(即上面配置的learning_rate)所需要的迭代数占总步数的比例。
211+
- `max_seq_len` 模型输入序列的最大长度。
212+
- `max_target_len` 模型训练时标签的最大长度。
213+
- `min_dec_len` 模型生成序列的最小长度。
214+
- `max_dec_len` 模型生成序列的最大长度。
215+
- `do_train` 是否进行训练。
216+
- `do_predict` 是否进行预测,在验证集上会自动评估。
217+
- `device` 表示使用的设备,从gpu和cpu中选择。
218+
- `adversarial_training` 表示使用何种对抗训练策略,从['None', 'fgm', 'pgd']中选择。
219+
- `template` 表示使用的模版,从[0, 1, 2, 3]中选择,0表示不选择模版,1表示使用默认模版。
220+
221+
程序运行时将会自动进行训练和验证,训练过程中会自动保存模型在指定的`save_dir`中。如:
222+
223+
```text
224+
./unimo/finetune/checkpoints
225+
├── model_1000
226+
│ ├── model_config.json
227+
│ ├── model_state.pdparams
228+
│ ├── special_tokens_map.json
229+
│ ├── tokenizer_config.json
230+
│ └── vocab.txt
231+
└── ...
232+
```
233+
234+
**NOTE:** 如需恢复模型训练,`model_name_or_path`配置本地模型的目录地址即可。
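例如,从上面`save_dir`中保存的model_1000 checkpoint恢复(目录名仅为示意):

```python
from paddlenlp.transformers import UNIMOLMHeadModel, UNIMOTokenizer

ckpt_dir = "./unimo/finetune/checkpoints/model_1000"  # 示意路径
model = UNIMOLMHeadModel.from_pretrained(ckpt_dir)
tokenizer = UNIMOTokenizer.from_pretrained(ckpt_dir)
```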
235+
236+
### 模型预测
237+
238+
运行下方脚本可以使用训练好的模型进行预测。
239+
240+
```shell
241+
export CUDA_VISIBLE_DEVICES=0
242+
python -u run_gen.py \
243+
--dataset_name=dureader_qg \
244+
--model_name_or_path=your_model_path \
245+
--output_path=./predict.txt \
246+
--logging_steps=100 \
247+
--batch_size=16 \
248+
--max_seq_len=512 \
249+
--max_target_len=30 \
250+
--do_predict \
251+
--max_dec_len=20 \
252+
--min_dec_len=3 \
253+
--template=1 \
254+
--device=gpu
255+
```
256+
关键参数释义如下:
257+
- `output_path` 表示预测输出结果保存的文件路径,默认为./predict.txt。
258+
259+
260+
Finetuned baseline模型在DuReader_QG任务验证集上有如下结果(指标为BLEU-4):
261+
262+
| model_name | DuReaderQG |
263+
| :-----------------------------: | :-----------: |
264+
| finetuned unimo-text-1.0 | 41.08 |
265+
266+
### 模型转换部署
267+
268+
#### FasterTransformer加速及模型静态图导出
269+
270+
使用动态图训练结束之后,可以通过[静态图导出脚本](export_model.py)实现基于FasterTransformer的高性能预测加速,并将动态图参数导出成静态图参数,静态图参数保存在`output_path`指定路径中。运行方式:
271+
272+
```shell
273+
python export_model.py \
274+
--model_name_or_path ./checkpoint \
275+
--inference_model_dir ./export_checkpoint \
276+
--max_out_len 64 \
277+
--use_fp16_decoding
278+
```
279+
关键参数释义如下:
280+
281+
* `model_name_or_path`:动态图训练保存的参数路径;默认为"./checkpoint"。
282+
* `inference_model_dir`:静态图保存的参数路径;默认为"./export_checkpoint"。
283+
* `max_out_len`:最大输出长度。
284+
* `use_fp16_decoding`:是否使用fp16解码进行预测。
285+
286+
执行命令后将会自动导出模型到指定的 `inference_model_dir` 中,保存模型文件结构如下所示:
287+
288+
```text
289+
├── unimo_text.pdiparams
290+
├── unimo_text.pdiparams.info
291+
└── unimo_text.pdmodel
292+
```
293+
294+
#### 模型部署
295+
本项目提供多种不同场景的部署方案,请根据实际情况进行选择:
296+
|部署方案|特色|场景|硬件|
297+
|-|-|-|-|
298+
|Paddle Inference<br>服务端/云端|通用性|模型算法复杂<br>硬件高性能|X86 CPU<br>NVIDIA 全系列 GPU<br>龙芯/飞腾等国产CPU<br>昆仑/昇腾/海光DCU等AI加速芯片
299+
|Paddle Serving<br>服务化|高并发|大流量、高并发、低延时、高吞吐<br>资源弹性调控应对服务流量变化<br>支持模型组合、加密、热更新等|X86/Arm CPU<br>NVIDIA GPU<br>昆仑/昇腾等
300+
301+
302+
问题生成应用已打通多种场景部署方案,点击链接获取具体的使用教程。
303+
- [Paddle Inference 推理 (Python)](./deploy/paddle_inference/README.md)
304+
- [Paddle Serving 服务化部署(Python)](./deploy/paddle_serving/README.md)
305+
306+
307+
## References
308+
Zheng, Chujie, and Minlie Huang. "Exploring prompt-based few-shot learning for grounded dialog generation." arXiv preprint arXiv:2109.06513 (2021).
309+
Li, Wei, et al. "Unimo: Towards unified-modal understanding and generation via cross-modal contrastive learning." arXiv preprint arXiv:2012.15409 (2020).
@@ -0,0 +1,54 @@
1+
# Paddle Inference部署
2+
本文档将介绍如何使用[Paddle Inference](https://paddle-inference.readthedocs.io/en/latest/guides/introduction/index_intro.html#paddle-inference)工具进行问题生成应用的高性能推理部署。
3+
4+
**目录**
5+
* [背景介绍](#背景介绍)
6+
* [导出预测部署模型](#导出预测部署模型)
7+
* [基于Python预测](#基于Python预测)
8+
9+
10+
## 背景介绍
11+
Paddle Inference和主框架的Model.predict均可实现推理预测。Paddle Inference是飞桨的原生推理库,作用于服务器端和云端,提供高性能的推理能力;主框架的Model对象则是一个具备训练、测试、推理功能的神经网络。相比于Model.predict,Paddle Inference可使用MKLDNN、CUDNN、TensorRT进行预测加速;Model.predict适用于训练好的模型直接进行预测,而Paddle Inference适用于对推理性能、通用性有要求的用户,针对不同平台和不同的应用场景进行了深度的适配优化,保证模型在服务器端即训即用,快速部署。由于Paddle Inference能力直接基于飞桨的训练算子,因此它支持飞桨训练出的所有模型的推理。
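下面给出Paddle Inference Python API的最小使用示意(模型文件路径与输入数据均为假设,完整流程请以本目录下的推理脚本为准):

```python
import numpy as np
from paddle import inference

# 加载导出的静态图模型(文件名对应“FasterTransformer加速及模型静态图导出”一节的导出结果,路径为示例)
config = inference.Config("./export_checkpoint/unimo_text.pdmodel",
                          "./export_checkpoint/unimo_text.pdiparams")
config.enable_use_gpu(100, 0)  # 初始分配100MB显存,使用0号GPU
predictor = inference.create_predictor(config)

# 典型调用流程:按输入名拷入数据 -> run -> 按输出名取回结果
# (这里用全零张量占位,实际需先完成tokenize与组batch,形状以导出模型为准)
input_data = {name: np.zeros([1, 1], dtype="int64") for name in predictor.get_input_names()}
for name, data in input_data.items():
    predictor.get_input_handle(name).copy_from_cpu(data)
predictor.run()
results = [predictor.get_output_handle(name).copy_to_cpu()
           for name in predictor.get_output_names()]
```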
12+
13+
14+
15+
Paddle Inference Python端预测部署主要包含两个步骤:
16+
- 导出预测部署模型
17+
- 基于Python预测
18+
19+
20+
## 导出预测部署模型
21+
部署时需要使用预测格式的模型(即动态图转静态图操作)。预测格式模型相对训练格式模型而言,在拓扑上裁剪掉了预测不需要的算子,并且会做特定部署优化。具体操作详见[FasterTransformer加速及模型静态图导出](../../README.md)
22+
23+
## 基于Python预测
24+
<!-- 同上,高性能预测的默认输入和输出形式也为文件,可分别通过 test_path 和 save_path 进行指定,通过如下命令便可以基于Paddle Inference 进行高性能预测: -->
25+
26+
在终端输入以下命令可在GPU上进行预测:
27+
```shell
28+
python deploy/paddle_inference/inference.py \
29+
--inference_model_dir ./export_checkpoint \
30+
--model_name_or_path "unimo-text-1.0" \
31+
--predict_file predict_file_name \
32+
--output_path output_path_name \
33+
--device gpu \
34+
```
35+
36+
<!-- 在终端输入以下命令可在CPU上进行预测:
37+
```shell
38+
python deploy/paddle_inference/inference_unimo_text.py --inference_model_dir ./export_checkpoint --device cpu
39+
``` -->
40+
经静态图转换、FasterTransformer性能优化、Paddle Inference加速后的部署模型,在dureader_qg devset上的预测时间为27.74秒,相较于未优化前的169.24秒,耗时缩减为原来的16.39%。
41+
关键参数释义如下:
42+
* `inference_model_dir`:用于高性能推理的静态图模型参数路径,默认为"./export_checkpoint"。
43+
* `model_name_or_path`:tokenizer对应模型或路径,默认为"unimo-text-1.0"。
44+
* `dataset_name`:数据集名称,默认为`dureader_qg`
45+
* `predict_file`:本地预测数据地址,数据格式必须与`dataset_name`所指数据集格式相同,默认为None,当为None时默认加载`dataset_name`的dev集。
46+
* `output_path`:表示预测结果的保存路径。
47+
* `device`:推理时使用的设备,可选项["gpu"],默认为"gpu"。
48+
* `batch_size`:进行推理时的批大小,默认为16。
49+
* `precision`:当使用TensorRT进行加速推理时,所使用的TensorRT精度,可选项["fp32", "fp16"],默认为"fp32"。
50+
<!-- * `precision`:当使用TensorRT进行加速推理时,所使用的TensorRT精度,可选项["fp32", "fp16", "int8"],默认为"fp32"。 -->
51+
<!-- * `device`:推理时使用的设备,可选项["gpu", "cpu", "xpu"],默认为"gpu"。 -->
52+
<!-- * `enable_mkldnn`:当使用cpu时,选择是否使用MKL-DNN(oneDNN)进行加速推理,默认为False。 -->
53+
<!-- * `cpu_threads`:当使用cpu时,推理所用的进程数,默认为10。 -->
54+
<!-- * `use_tensorrt`:当使用gpu时,选择是否使用TensorRT进行加速推理,默认为False。 -->
@@ -0,0 +1,289 @@
1+
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
15+
import random
16+
from functools import partial
17+
18+
import numpy as np
19+
from numpy import array
20+
21+
import paddle
22+
import paddle.distributed as dist
23+
from paddle.io import DataLoader, DistributedBatchSampler, BatchSampler
24+
from paddlenlp.data import Pad
25+
26+
27+
def postprocess_response(token_ids, tokenizer):
28+
"""Post-process the decoded sequence. Truncate from the first <eos>."""
29+
eos_pos = len(token_ids)
30+
for i, tok_id in enumerate(token_ids):
31+
if tok_id == tokenizer.mask_token_id:
32+
eos_pos = i
33+
break
34+
token_ids = token_ids[:eos_pos]
35+
tokens = tokenizer.convert_ids_to_tokens(token_ids)
36+
tokens = tokenizer.merge_subword(tokens)
37+
return tokens
38+
39+
40+
def print_args(args):
41+
print('----------- Configuration Arguments -----------')
42+
for arg, value in sorted(vars(args).items()):
43+
print('%s: %s' % (arg, value))
44+
print('------------------------------------------------')
45+
46+
47+
def set_seed(seed):
48+
# Use the same data seed(for data shuffle) for all procs to guarantee data
49+
# consistency after sharding.
50+
random.seed(seed)
51+
np.random.seed(seed)
52+
# Maybe different op seeds(for dropout) for different procs is better.
53+
paddle.seed(seed + dist.get_rank())
54+
55+
56+
def convert_example(example,
57+
tokenizer,
58+
max_seq_len=512,
59+
max_target_len=128,
60+
max_title_len=256,
61+
mode='test',
62+
template=0):
63+
"""Convert all examples into necessary features."""
64+
if mode == 'pretrain' or mode == 'pretrain_test':
65+
context = example['context']
66+
answer = example['answer']
67+
target = example['target']
68+
69+
source = '答案:' + answer + tokenizer.sep_token + '上下文:' + context
70+
title = None
71+
72+
elif mode == 'train' or mode == 'test':
73+
target = None
74+
if 'source' in example and 'title' in example:
75+
source = example['source']
76+
title = None
77+
if 'title' in example.keys():
78+
title = example['title']
79+
elif 'context' in example and 'answer' in example:
80+
source = example['context']
81+
title = None
82+
if 'answer' in example.keys():
83+
title = example['answer']
84+
else:
85+
assert False, "Source and title are not in the input dictionary, nor are context and answer."
86+
if 'target' in example.keys():
87+
target = example['target']
88+
89+
if template == 1:
90+
source = '答案:' + title + tokenizer.sep_token + '上下文:' + source
91+
title = None
92+
if target:
93+
target = '问题:' + target
94+
elif template == 2:
95+
source = '答案:' + title + tokenizer.sep_token + '上下文:' + source
96+
title = None
97+
if target:
98+
target = '在已知答案的前提下,问题:' + target
99+
elif template == 3:
100+
source = '这是一个问题生成任务,根据提供的答案和上下文,来生成问题。' + title + tokenizer.sep_token + '上下文:' + source
101+
title = None
102+
if target:
103+
target = '问题:' + target
104+
105+
if mode == 'train' or mode == 'pretrain':
106+
tokenized_example = tokenizer.gen_encode(source,
107+
title=title,
108+
target=target,
109+
max_seq_len=max_seq_len,
110+
max_target_len=max_target_len,
111+
max_title_len=max_title_len,
112+
return_position_ids=True,
113+
return_length=True)
114+
target_start = tokenized_example['input_ids'].index(
115+
tokenizer.cls_token_id, 1)
116+
target_end = tokenized_example['seq_len']
117+
# Use to gather the logits corresponding to the labels during training
118+
tokenized_example['masked_positions'] = list(
119+
range(target_start, target_end - 1))
120+
tokenized_example['labels'] = tokenized_example['input_ids'][
121+
target_start + 1:target_end]
122+
123+
return tokenized_example
124+
125+
elif mode == 'test' or mode == 'pretrain_test':
126+
tokenized_example = tokenizer.gen_encode(
127+
source,
128+
title=title,
129+
max_seq_len=max_seq_len,
130+
max_title_len=max_title_len,
131+
add_start_token_for_decoding=True,
132+
return_position_ids=True,
133+
return_length=True,
134+
)
135+
136+
if 'target' in example and example['target']:
137+
tokenized_example['target'] = example['target']
138+
return tokenized_example
139+
140+
141+
def batchify_fn(batch_examples, pad_val, mode='test'):
142+
143+
def pad_mask(batch_attention_mask):
144+
batch_size = len(batch_attention_mask)
145+
max_len = max(map(len, batch_attention_mask))
146+
attention_mask = np.ones(
147+
(batch_size, max_len, max_len), dtype='float32') * -1e9
148+
for i, mask_data in enumerate(attention_mask):
149+
seq_len = len(batch_attention_mask[i])
150+
mask_data[-seq_len:, -seq_len:] = np.array(batch_attention_mask[i],
151+
dtype='float32')
152+
# In order to ensure the correct broadcasting mechanism, expand one
153+
# dimension to the second dimension (n_head of Transformer).
154+
attention_mask = np.expand_dims(attention_mask, axis=1)
155+
return attention_mask
156+
157+
pad_func = Pad(pad_val=pad_val, pad_right=False, dtype='int64')
158+
159+
input_ids = pad_func([example['input_ids'] for example in batch_examples])
160+
token_type_ids = pad_func(
161+
[example['token_type_ids'] for example in batch_examples])
162+
position_ids = pad_func(
163+
[example['position_ids'] for example in batch_examples])
164+
165+
attention_mask = pad_mask(
166+
[example['attention_mask'] for example in batch_examples])
167+
168+
seq_len = np.asarray([example['seq_len'] for example in batch_examples],
169+
dtype='int32')
170+
171+
if mode == 'train' or mode == 'pretrain':
172+
max_len = max([example['seq_len'] for example in batch_examples])
173+
masked_positions = np.concatenate([
174+
np.array(example['masked_positions']) +
175+
(max_len - example['seq_len']) + i * max_len
176+
for i, example in enumerate(batch_examples)
177+
])
178+
labels = np.concatenate([
179+
np.array(example['labels'], dtype='int64')
180+
for example in batch_examples
181+
])
182+
return input_ids, token_type_ids, position_ids, attention_mask, masked_positions, labels
183+
elif mode == 'test' or mode == 'pretrain_test':
184+
return input_ids, token_type_ids, position_ids, attention_mask, seq_len
185+
186+
187+
def create_data_loader(dataset, tokenizer, args, mode='test'):
188+
trans_func = partial(convert_example,
189+
tokenizer=tokenizer,
190+
mode='test',
191+
template=1)
192+
dataset = dataset.map(trans_func, lazy=True)
193+
if mode == 'pretrain':
194+
batch_sampler = DistributedBatchSampler(dataset,
195+
batch_size=args.batch_size,
196+
shuffle=True)
197+
elif mode == 'train':
198+
batch_sampler = DistributedBatchSampler(dataset,
199+
batch_size=args.batch_size,
200+
shuffle=True)
201+
elif mode == 'test' or mode == 'pretrain_test':
202+
batch_sampler = BatchSampler(dataset,
203+
batch_size=args.batch_size // 2,
204+
shuffle=False)
205+
collate_fn = partial(batchify_fn, pad_val=tokenizer.pad_token_id, mode=mode)
206+
data_loader = DataLoader(dataset,
207+
batch_sampler=batch_sampler,
208+
collate_fn=collate_fn,
209+
return_list=True)
210+
return dataset, data_loader
211+
212+
213+
def post_process_sum(token_ids, tokenizer):
214+
"""Post-process the decoded sequence. Truncate from the first <eos>."""
215+
eos_pos = len(token_ids)
216+
for i, tok_id in enumerate(token_ids):
217+
if tok_id == tokenizer.mask_token_id:
218+
eos_pos = i
219+
break
220+
token_ids = token_ids[:eos_pos]
221+
tokens = tokenizer.convert_ids_to_tokens(token_ids)
222+
tokens = tokenizer.merge_subword(tokens)
223+
special_tokens = ['[UNK]']
224+
tokens = [token for token in tokens if token not in special_tokens]
225+
return token_ids, tokens
226+
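# Behavior sketch (added, not part of the original file): given decoded ids such as
#   ids = [tok_1, tok_2, mask_id, pad_id, ...]        # hypothetical ids
#   token_ids, tokens = post_process_sum(ids, tokenizer)
# everything from the first mask token onwards is dropped, sub-words are merged back
# into full tokens, and any '[UNK]' pieces are filtered out before the caller joins
# the tokens into the final question string.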
227+
228+
def remove_template(instr):
229+
"""Remove template prefix of decoded sequence."""
230+
outstr = instr.strip('问题:')
231+
outstr = outstr.strip('在已知答案的前提下,问题:')
232+
return outstr
233+
234+
235+
def select_sum(ids,
236+
scores,
237+
tokenizer,
238+
max_dec_len=None,
239+
num_return_sequences=1):
240+
results = []
241+
group = []
242+
tmp = []
243+
if scores is not None:
244+
ids = ids.numpy()
245+
scores = scores.numpy()
246+
247+
if len(ids) != len(scores) or (len(ids) % num_return_sequences) != 0:
248+
raise ValueError(
249+
"the length of `ids` is {}, but the `num_return_sequences` is {}"
250+
.format(len(ids), num_return_sequences))
251+
252+
for pred, score in zip(ids, scores):
253+
pred_token_ids, pred_tokens = post_process_sum(pred, tokenizer)
254+
num_token = len(pred_token_ids)
255+
256+
target = "".join(pred_tokens)
257+
target = remove_template(target)
258+
259+
# Penalize candidates that reach max_dec_len without producing an end token.
260+
if max_dec_len is not None and num_token >= max_dec_len:
261+
score -= 1e3
262+
263+
tmp.append([target, score])
264+
if len(tmp) == num_return_sequences:
265+
group.append(tmp)
266+
tmp = []
267+
268+
for preds in group:
269+
preds = sorted(preds, key=lambda x: -x[1])
270+
results.append(preds[0][0])
271+
else:
272+
ids = ids.numpy()
273+
274+
for pred in ids:
275+
pred_token_ids, pred_tokens = post_process_sum(pred, tokenizer)
276+
num_token = len(pred_token_ids)
277+
response = "".join(pred_tokens)
278+
response = remove_template(response)
279+
280+
# TODO: Support return scores in FT.
281+
tmp.append([response])
282+
if len(tmp) == num_return_sequences:
283+
group.append(tmp)
284+
tmp = []
285+
286+
for preds in group:
287+
results.append(preds[0][0])
288+
289+
return results
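# Usage sketch (added): with beam-search output from FasterTransformer, `ids` holds one
# row per returned sequence and `scores` one score per row, so for batch size B and
# num_return_sequences K there are B*K rows. select_sum groups every K consecutive rows,
# keeps the highest-scoring candidate of each group, and returns B question strings:
#
#   results = select_sum(ids, scores, tokenizer,
#                        max_dec_len=args.max_dec_len,
#                        num_return_sequences=args.num_return_sequences)
#
# When `scores` is None (greedy decoding), the first candidate of each group is kept.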
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,266 @@
1+
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
15+
import argparse
16+
import numpy as np
17+
from pprint import pprint
18+
19+
import paddle
20+
from paddle import inference
21+
from paddlenlp.datasets import load_dataset
22+
23+
from paddlenlp.transformers import UNIMOLMHeadModel, UNIMOTokenizer
24+
from paddlenlp.ops.ext_utils import load
25+
from infer_utils import print_args, set_seed, create_data_loader, select_sum, postprocess_response, convert_example
26+
import os
27+
import time
28+
29+
30+
def setup_args():
31+
"""Setup arguments."""
32+
parser = argparse.ArgumentParser()
33+
parser.add_argument("--inference_model_dir",
34+
default="./infer_model",
35+
type=str,
36+
help="Path to save inference model of UNIMOText. ")
37+
parser.add_argument('--model_name_or_path',
38+
type=str,
39+
default='unimo-text-1.0',
40+
help='The path or shortcut name of the tokenizer.')
41+
parser.add_argument("--device",
42+
default="gpu",
43+
choices=["gpu", "cpu", "xpu"],
44+
help="Device selected for inference.")
45+
parser.add_argument(
46+
"--use_tensorrt",
47+
default=False,
48+
type=eval,
49+
choices=[True, False],
50+
help="Whether to use inference engin TensorRT when using gpu.")
51+
parser.add_argument('--enable_mkldnn',
52+
default=False,
53+
type=eval,
54+
choices=[True, False],
55+
help='Enable to use mkldnn to speed up when using cpu.')
56+
parser.add_argument('--cpu_threads',
57+
default=10,
58+
type=int,
59+
help='Number of threads to predict when using cpu.')
60+
parser.add_argument("--precision",
61+
default="fp32",
62+
type=str,
63+
choices=["fp32", "fp16", "int8"],
64+
help='The tensorrt precision.')
65+
parser.add_argument("--batch_size",
66+
type=int,
67+
default=16,
68+
help="Batch size per GPU/CPU for training.")
69+
parser.add_argument(
70+
'--output_path',
71+
type=str,
72+
default='./predict.txt',
73+
help='The file path where the infer result will be saved.')
74+
parser.add_argument('--logging_steps',
75+
type=int,
76+
default=100,
77+
help='Log every X updates steps.')
78+
parser.add_argument('--dataset_name',
79+
type=str,
80+
default='dureader_qg',
81+
help='The name of the dataset to load.')
82+
parser.add_argument("--predict_file",
83+
type=str,
84+
required=False,
85+
default=None,
86+
help="Predict data path.")
87+
parser.add_argument('--max_dec_len',
88+
type=int,
89+
default=20,
90+
help='The maximum sequence length of decoding.')
91+
parser.add_argument(
92+
'--num_return_sequences',
93+
type=int,
94+
default=1,
95+
help='The numbers of returned sequences for one input in generation.')
96+
97+
args = parser.parse_args()
98+
return args
99+
100+
101+
def setup_predictor(args):
102+
"""Setup inference predictor."""
103+
# Load FasterTransformer lib.
104+
load("FasterTransformer", verbose=True)
105+
model_file = os.path.join(args.inference_model_dir, "unimo_text.pdmodel")
106+
params_file = os.path.join(args.inference_model_dir, "unimo_text.pdiparams")
107+
if not os.path.exists(model_file):
108+
raise ValueError("not find model file path {}".format(model_file))
109+
if not os.path.exists(params_file):
110+
raise ValueError("not find params file path {}".format(params_file))
111+
config = inference.Config(model_file, params_file)
112+
if args.device == "gpu":
113+
config.enable_use_gpu(100, 0)
114+
config.switch_ir_optim()
115+
config.enable_memory_optim()
116+
config.disable_glog_info()
117+
118+
precision_map = {
119+
"fp16": inference.PrecisionType.Half,
120+
"fp32": inference.PrecisionType.Float32,
121+
"int8": inference.PrecisionType.Int8
122+
}
123+
precision_mode = precision_map[args.precision]
124+
if args.use_tensorrt:
125+
config.enable_tensorrt_engine(max_batch_size=args.batch_size,
126+
min_subgraph_size=30,
127+
precision_mode=precision_mode)
128+
elif args.device == "cpu":
129+
config.disable_gpu()
130+
if args.enable_mkldnn:
131+
config.enable_mkldnn()
132+
config.set_mkldnn_cache_capacity(10)
133+
134+
config.set_cpu_math_library_num_threads(args.cpu_threads)
135+
elif args.device == "xpu":
136+
config.enable_xpu(100)
137+
predictor = inference.create_predictor(config)
138+
return predictor
139+
140+
141+
@paddle.no_grad()
142+
def infer_one(args, predictor, inputs=None):
143+
"""Use predictor to inference."""
144+
tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
145+
146+
if not inputs:
147+
inputs = {
148+
"context":
149+
"奇峰黄山千米以上的山峰有77座,整座黄山就是一座花岗岩的峰林,自古有36大峰,36小峰,最高峰莲花峰、最险峰天都峰和观日出的最佳点光明顶构成黄山的三大主峰。",
150+
"answer": "莲花峰"
151+
}
152+
153+
inputs = '答案:' + inputs['answer'] + tokenizer.sep_token + '上下文:' + inputs[
154+
'context']
155+
data = tokenizer.gen_encode(inputs,
156+
add_start_token_for_decoding=True,
157+
return_length=True,
158+
is_split_into_words=False)
159+
160+
input_handles = {}
161+
for name in predictor.get_input_names():
162+
input_handles[name] = predictor.get_input_handle(name)
163+
if name == "attention_mask":
164+
input_handles[name].copy_from_cpu(
165+
np.expand_dims(np.asarray(data[name], dtype="float32"),
166+
axis=(0, 1)))
167+
else:
168+
input_handles[name].copy_from_cpu(
169+
np.asarray(data[name], dtype="int32").reshape([1, -1]))
170+
171+
output_handles = [
172+
predictor.get_output_handle(name)
173+
for name in predictor.get_output_names()
174+
]
175+
176+
predictor.run()
177+
178+
output = [output_handle.copy_to_cpu() for output_handle in output_handles]
179+
180+
for sample in output[0][:, :, 0].tolist():
181+
print("".join(postprocess_response(sample, tokenizer)))
182+
183+
184+
@paddle.no_grad()
185+
def infer(args, predictor, data_loader, tokenizer):
186+
print('Infer begin...')
187+
pred_ref = []
188+
total_time = 0.0
189+
start_time = time.time()
190+
for step, inputs in enumerate(data_loader, 1):
191+
input_ids, token_type_ids, position_ids, attention_mask, seq_len = inputs
192+
data = {
193+
'input_ids': input_ids,
194+
'token_type_ids': token_type_ids,
195+
'position_ids': position_ids,
196+
'attention_mask': attention_mask,
197+
'seq_len': seq_len
198+
}
199+
200+
input_handles = {}
201+
for name in predictor.get_input_names():
202+
input_handles[name] = predictor.get_input_handle(name)
203+
if name == "attention_mask":
204+
input_handles[name].copy_from_cpu(
205+
np.asarray(data[name], dtype="float32"))
206+
else:
207+
input_handles[name].copy_from_cpu(
208+
np.asarray(data[name], dtype="int32"))
209+
210+
output_handles = [
211+
predictor.get_output_handle(name)
212+
for name in predictor.get_output_names()
213+
]
214+
215+
predictor.run()
216+
217+
output = [
218+
output_handle.copy_to_cpu() for output_handle in output_handles
219+
]
220+
221+
ids = output[0]
222+
scores = output[1]
223+
224+
ids = paddle.to_tensor(ids, dtype='int32')[:, 0, :]
225+
scores = paddle.to_tensor(scores, dtype='float32')
226+
227+
total_time += (time.time() - start_time)
228+
if step % args.logging_steps == 0:
229+
print('step %d - %.3fs/step' %
230+
(step, total_time / args.logging_steps))
231+
total_time = 0.0
232+
233+
results = select_sum(ids, scores, tokenizer, args.max_dec_len,
234+
args.num_return_sequences)
235+
236+
pred_ref.extend(results)
237+
start_time = time.time()
238+
239+
with open(args.output_path, 'w', encoding='utf-8') as fout:
240+
for ref in pred_ref:
241+
fout.write(ref + '\n')
242+
243+
print('\nSave inference result into: %s' % args.output_path)
244+
245+
if 'target' in data_loader.dataset[0].keys():
246+
with open(args.output_path + '.reference.txt', 'w',
247+
encoding='utf-8') as fout:
248+
targets = [example['target'] for example in data_loader.dataset]
249+
for target in targets:
250+
fout.write(target + '\n')
251+
252+
253+
if __name__ == "__main__":
254+
args = setup_args()
255+
pprint(args)
256+
257+
predictor = setup_predictor(args)
258+
tokenizer = UNIMOTokenizer.from_pretrained(args.model_name_or_path)
259+
ds = load_dataset(args.dataset_name,
260+
splits='dev',
261+
data_files=args.predict_file)
262+
ds, data_loader = create_data_loader(ds, tokenizer, args, 'test')
263+
264+
time_begin = time.time()
265+
infer(args, predictor, data_loader, tokenizer)
266+
print('inference cost time:', time.time() - time_begin)
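# Example invocation (added; the script file name is assumed, adjust it to the actual
# file name in this directory). All flags below are defined in setup_args():
#
#   python inference.py \
#       --inference_model_dir ./infer_model \
#       --model_name_or_path unimo-text-1.0 \
#       --dataset_name dureader_qg \
#       --output_path ./predict.txt \
#       --device gpu --batch_size 16
#
# The directory passed via --inference_model_dir must contain unimo_text.pdmodel and
# unimo_text.pdiparams exported with FasterTransformer, as checked in setup_predictor().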
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,150 @@
1+
# Paddle Serving服务化部署
2+
3+
本文档将介绍如何使用[Paddle Serving](https://github.com/PaddlePaddle/Serving/blob/develop/README_CN.md)工具部署问题生成在线服务。
4+
5+
## 目录
6+
- [Paddle Serving服务化部署](#paddle-serving服务化部署)
7+
- [目录](#目录)
8+
- [背景介绍](#背景介绍)
9+
- [环境准备](#环境准备)
10+
- [安装Paddle Serving](#安装paddle-serving)
11+
<!-- - [安装FasterTokenizer文本处理加速库(可选)](#安装fastertokenizer文本处理加速库可选) -->
12+
- [模型转换](#模型转换)
13+
- [pipeline部署](#pipeline部署)
14+
- [修改配置文件](#修改配置文件)
15+
- [server启动服务](#server启动服务)
16+
- [client发送服务请求](#client发送服务请求)
17+
18+
## 背景介绍
19+
Paddle Serving 依托深度学习框架 PaddlePaddle 旨在帮助深度学习开发者和企业提供高性能、灵活易用的工业级在线推理服务。Paddle Serving 支持 RESTful、gRPC、bRPC 等多种协议,提供多种异构硬件和多种操作系统环境下推理解决方案,和多种经典预训练模型示例。集成高性能服务端推理引擎 Paddle Inference 和端侧引擎 Paddle Lite。设计并实现基于有向无环图(DAG) 的异步流水线高性能推理框架,具有多模型组合、异步调度、并发推理、动态批量、多卡多流推理、请求缓存等特性。
20+
21+
Paddle Serving Python端预测部署主要包含以下步骤:
22+
- 环境准备
23+
- 模型转换
24+
- 部署模型
25+
26+
## 环境准备
27+
### 安装Paddle Serving
28+
安装client和serving app,用于向服务发送请求:
29+
```shell
30+
pip install paddle_serving_app paddle_serving_client
31+
```
32+
安装server,用于启动服务,根据服务器设备选择安装CPU server或GPU server:
33+
34+
- 安装CPU server
35+
```shell
36+
pip install paddle_serving_server
37+
```
38+
- 安装GPU server, 注意选择跟本地环境一致的命令
39+
```shell
40+
# CUDA10.2 + Cudnn7 + TensorRT6
41+
pip install paddle-serving-server-gpu==0.8.3.post102 # -i https://pypi.tuna.tsinghua.edu.cn/simple
42+
# CUDA10.1 + TensorRT6
43+
pip install paddle-serving-server-gpu==0.8.3.post101 # -i https://pypi.tuna.tsinghua.edu.cn/simple
44+
# CUDA11.2 + TensorRT8
45+
pip install paddle-serving-server-gpu==0.8.3.post112 # -i https://pypi.tuna.tsinghua.edu.cn/simple
46+
```
47+
48+
**NOTE:**
49+
- 可以开启国内清华镜像源来加速下载
50+
- 如果要安装最新版本的PaddleServing参考[链接](https://github.com/PaddlePaddle/Serving/blob/develop/doc/Latest_Packages_CN.md)
51+
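安装完成后,可以用下面这段简单的检查脚本(此处新增,仅作参考)确认相关包已装入当前Python环境;它只导入本文档后续会用到的模块。

```python
# Optional sanity check: these imports should succeed after installation.
import paddle_serving_client          # used by the model conversion step
import paddle_serving_server          # used to start the pipeline service
from paddle_serving_server.pipeline import PipelineClient  # used by pipeline_client.py
print("Paddle Serving packages are importable.")
```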
52+
53+
<!-- ### 安装FasterTokenizer文本处理加速库(可选)
54+
如果部署环境是Linux,推荐安装faster_tokenizer可以得到更极致的文本处理效率,进一步提升服务性能。目前暂不支持Windows设备安装,将会在下个版本支持。
55+
```shell
56+
pip install faster_tokenizer
57+
``` -->
58+
59+
60+
## 模型转换
61+
62+
使用Paddle Serving做服务化部署时,需要将保存的inference模型转换为serving易于部署的模型。
63+
64+
用已安装的paddle_serving_client将静态图参数模型转换成serving格式。关于如何将训练后的动态图模型导出为静态图模型,详见[FasterTransformer加速及模型静态图导出](../../README.md)。
65+
66+
模型转换命令如下:
67+
```shell
68+
python -m paddle_serving_client.convert --dirname ./export_checkpoint \
69+
--model_filename unimo_text.pdmodel \
70+
--params_filename unimo_text.pdiparams \
71+
--serving_server ./deploy/paddle_serving/export_checkpoint_server \
72+
--serving_client ./deploy/paddle_serving/export_checkpoint_client
73+
```
74+
关键参数释义如下:
75+
* `dirname`:静态图模型文件夹地址。
76+
* `model_filename`:模型文件名。
77+
* `params_filename`:模型参数名。
78+
* `serving_server`:server的模型文件和配置文件路径,默认"serving_server"。
79+
* `serving_client`:client的配置文件路径,默认"serving_client"。
80+
81+
更多参数可通过以下命令查询:
82+
```shell
83+
python -m paddle_serving_client.convert --help
84+
```
85+
模型转换完成后,会在./deploy/paddle_serving文件夹下新增export_checkpoint_server和export_checkpoint_client两个文件夹,目录格式如下:
86+
```
87+
export_checkpoint_server/
88+
├── unimo_text.pdiparams
89+
├── unimo_text.pdmodel
90+
├── serving_server_conf.prototxt
91+
└── serving_server_conf.stream.prototxt
92+
export_checkpoint_client/
93+
├── serving_client_conf.prototxt
94+
└── serving_client_conf.stream.prototxt
95+
```
96+
97+
## pipeline部署
98+
99+
paddle_serving目录包含启动pipeline服务和发送预测请求的代码,包括:
100+
```
101+
paddle_serving/
102+
├──config.yml # 启动服务端的配置文件
103+
├──pipeline_client.py # 发送pipeline预测请求的脚本
104+
└──pipeline_service.py # 启动pipeline服务端的脚本
105+
```
106+
107+
### 修改配置文件
108+
目录中的`config.yml`文件解释了每一个参数的含义,可以根据实际需要修改其中的配置。
109+
110+
### server启动服务
111+
修改好配置文件后,执行下面命令启动服务:
112+
```shell
113+
cd deploy/paddle_serving
114+
# 启动服务,运行日志保存在log.txt
115+
python pipeline_service.py &> log.txt &
116+
```
117+
成功启动服务后,log.txt中会打印类似如下日志
118+
```
119+
--- Running analysis [ir_graph_to_program_pass]
120+
I0901 12:09:27.248943 12190 analysis_predictor.cc:1035] ======= optimize end =======
121+
I0901 12:09:27.249596 12190 naive_executor.cc:102] --- skip [feed], feed -> seq_len
122+
I0901 12:09:27.249608 12190 naive_executor.cc:102] --- skip [feed], feed -> attention_mask
123+
I0901 12:09:27.249614 12190 naive_executor.cc:102] --- skip [feed], feed -> token_type_ids
124+
I0901 12:09:27.249617 12190 naive_executor.cc:102] --- skip [feed], feed -> input_ids
125+
I0901 12:09:27.250080 12190 naive_executor.cc:102] --- skip [_generated_var_3], fetch -> fetch
126+
I0901 12:09:27.250090 12190 naive_executor.cc:102] --- skip [transpose_0.tmp_0], fetch -> fetch
127+
[2022-09-01 12:09:27,251] [ INFO] - Already cached /root/.paddlenlp/models/unimo-text-1.0/unimo-text-1.0-vocab.txt
128+
[2022-09-01 12:09:27,269] [ INFO] - tokenizer config file saved in /root/.paddlenlp/models/unimo-text-1.0/tokenizer_config.json
129+
[2022-09-01 12:09:27,269] [ INFO] - Special tokens file saved in /root/.paddlenlp/models/unimo-text-1.0/special_tokens_map.json
130+
[PipelineServicer] succ init
131+
[OP Object] init success
132+
2022/09/01 12:09:27 start proxy service
133+
```
134+
135+
### client发送服务请求
136+
执行以下命令发送问题生成服务请求:
137+
```shell
138+
cd deploy/paddle_serving
139+
python pipeline_client.py
140+
```
141+
注意执行客户端请求时关闭代理,并根据实际情况修改server_url地址(启动服务所在的机器)
142+
143+
成功运行后,输出打印如下:
144+
```
145+
time cost :0.03429532051086426 seconds
146+
--------------------
147+
input: {'context': '平安银行95511电话按9转报案人工服务。 1.寿险 :95511转1 2.信用卡 95511转2 3.平安银行 95511转3 4.一账通 95511转4转8 5.产险 95511转5 6.养老险团体险 95511转6 7.健康险 95511转7 8.证券 95511转8 9.车险报案95511转9 0.重听', 'answer': '95511'}
148+
output: 问题:平安银行人工服务电话
149+
--------------------
150+
```
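除了运行提供的脚本,也可以在自己的代码中直接发送请求。下面的片段(此处新增)基于`pipeline_client.py`的用法给出一个最小示例,假设服务已在本机启动,且使用`config.yml`中默认的`rpc_port: 18011`;请按实际部署机器修改地址。

```python
from paddle_serving_server.pipeline import PipelineClient

client = PipelineClient()
client.connect(["127.0.0.1:18011"])  # replace with the machine that runs the service

inputs = [{
    "context": "奇峰黄山千米以上的山峰有77座,整座黄山就是一座花岗岩的峰林……",  # passage text
    "answer": "莲花峰",                                                        # answer span inside the passage
}]
ret = client.predict(feed_dict={"inputs": inputs})
print(ret.value[0])  # a stringified list with one generated question per input
```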
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,59 @@
1+
#rpc端口, rpc_port和http_port不允许同时为空。当rpc_port为空且http_port不为空时,会自动将rpc_port设置为http_port+1
2+
rpc_port: 18011
3+
4+
#http端口, rpc_port和http_port不允许同时为空。当rpc_port可用且http_port为空时,不自动生成http_port
5+
http_port: 9999
6+
7+
#worker_num, 最大并发数。
8+
#当build_dag_each_worker=True时, 框架会创建worker_num个进程,每个进程内构建grpcSever和DAG
9+
#当build_dag_each_worker=False时,框架会设置主线程grpc线程池的max_workers=worker_num
10+
worker_num: 10
11+
12+
#build_dag_each_worker, False,框架在进程内创建一条DAG;True,框架会每个进程内创建多个独立的DAG
13+
build_dag_each_worker: false
14+
15+
dag:
16+
#op资源类型, True, 为线程模型;False,为进程模型
17+
is_thread_op: True
18+
19+
#重试次数
20+
retry: 1
21+
22+
#使用性能分析, True,生成Timeline性能数据,对性能有一定影响;False为不使用
23+
use_profile: false
24+
tracer:
25+
interval_s: 10
26+
27+
op:
28+
question_generation:
29+
#并发数,is_thread_op=True时,为线程并发;否则为进程并发
30+
concurrency: 11
31+
32+
#当op配置没有server_endpoints时,从local_service_conf读取本地服务配置
33+
local_service_conf:
34+
#client类型,包括brpc, grpc和local_predictor.local_predictor不启动Serving服务,进程内预测
35+
client_type: local_predictor
36+
37+
#模型路径
38+
model_config: ../../unimo/serving/export_checkpoint_server
39+
40+
#Fetch结果列表,以client_config中fetch_var的alias_name为准,不设置默认取全部输出变量
41+
# fetch_list: ["_generated_var_3", "slice_0.tmp_0"]
42+
43+
# device_type, 0=cpu, 1=gpu, 2=tensorRT, 3=arm cpu, 4=kunlun xpu
44+
device_type: 1
45+
46+
#计算硬件ID,当devices为""或不写时为CPU预测;当devices为"0", "0,1,2"时为GPU预测,表示使用的GPU卡
47+
devices: "0"
48+
49+
#开启MKLDNN加速
50+
use_mkldnn: False
51+
52+
#thread_num
53+
thread_num: 12
54+
55+
#ir_optim
56+
ir_optim: False
57+
58+
#开启tensorrt后,进行优化的子图包含的最少节点数
59+
#min_subgraph_size: 10
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,289 @@
1+
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
15+
import random
16+
from functools import partial
17+
18+
import numpy as np
19+
from numpy import array
20+
21+
import paddle
22+
import paddle.distributed as dist
23+
from paddle.io import DataLoader, DistributedBatchSampler, BatchSampler
24+
from paddlenlp.data import Pad
25+
26+
27+
def postprocess_response(token_ids, tokenizer):
28+
"""Post-process the decoded sequence. Truncate from the first <eos>."""
29+
eos_pos = len(token_ids)
30+
for i, tok_id in enumerate(token_ids):
31+
if tok_id == tokenizer.mask_token_id:
32+
eos_pos = i
33+
break
34+
token_ids = token_ids[:eos_pos]
35+
tokens = tokenizer.convert_ids_to_tokens(token_ids)
36+
tokens = tokenizer.merge_subword(tokens)
37+
return tokens
38+
39+
40+
def print_args(args):
41+
print('----------- Configuration Arguments -----------')
42+
for arg, value in sorted(vars(args).items()):
43+
print('%s: %s' % (arg, value))
44+
print('------------------------------------------------')
45+
46+
47+
def set_seed(seed):
48+
# Use the same data seed(for data shuffle) for all procs to guarantee data
49+
# consistency after sharding.
50+
random.seed(seed)
51+
np.random.seed(seed)
52+
# Maybe different op seeds(for dropout) for different procs is better.
53+
paddle.seed(seed + dist.get_rank())
54+
55+
56+
def convert_example(example,
57+
tokenizer,
58+
max_seq_len=512,
59+
max_target_len=128,
60+
max_title_len=256,
61+
mode='test',
62+
template=0):
63+
"""Convert all examples into necessary features."""
64+
if mode == 'pretrain' or mode == 'pretrain_test':
65+
context = example['context']
66+
answer = example['answer']
67+
target = example['target']
68+
69+
source = '答案:' + answer + tokenizer.sep_token + '上下文:' + context
70+
title = None
71+
72+
elif mode == 'train' or mode == 'test':
73+
target = None
74+
if 'source' in example and 'title' in example:
75+
source = example['source']
76+
title = None
77+
if 'title' in example.keys():
78+
title = example['title']
79+
elif 'context' in example and 'answer' in example:
80+
source = example['context']
81+
title = None
82+
if 'answer' in example.keys():
83+
title = example['answer']
84+
else:
85+
assert False, "Source and title are not in the input dictionary, nor are context and answer."
86+
if 'target' in example.keys():
87+
target = example['target']
88+
89+
if template == 1:
90+
source = '答案:' + title + tokenizer.sep_token + '上下文:' + source
91+
title = None
92+
if target:
93+
target = '问题:' + target
94+
elif template == 2:
95+
source = '答案:' + title + tokenizer.sep_token + '上下文:' + source
96+
title = None
97+
if target:
98+
target = '在已知答案的前提下,问题:' + target
99+
elif template == 3:
100+
source = '这是一个问题生成任务,根据提供的答案和上下文,来生成问题。' + title + tokenizer.sep_token + '上下文:' + source
101+
title = None
102+
if target:
103+
target = '问题:' + target
104+
105+
if mode == 'train' or mode == 'pretrain':
106+
tokenized_example = tokenizer.gen_encode(source,
107+
title=title,
108+
target=target,
109+
max_seq_len=max_seq_len,
110+
max_target_len=max_target_len,
111+
max_title_len=max_title_len,
112+
return_position_ids=True,
113+
return_length=True)
114+
target_start = tokenized_example['input_ids'].index(
115+
tokenizer.cls_token_id, 1)
116+
target_end = tokenized_example['seq_len']
117+
# Use to gather the logits corresponding to the labels during training
118+
tokenized_example['masked_positions'] = list(
119+
range(target_start, target_end - 1))
120+
tokenized_example['labels'] = tokenized_example['input_ids'][
121+
target_start + 1:target_end]
122+
123+
return tokenized_example
124+
125+
elif mode == 'test' or mode == 'pretrain_test':
126+
tokenized_example = tokenizer.gen_encode(
127+
source,
128+
title=title,
129+
max_seq_len=max_seq_len,
130+
max_title_len=max_title_len,
131+
add_start_token_for_decoding=True,
132+
return_position_ids=True,
133+
return_length=True,
134+
)
135+
136+
if 'target' in example and example['target']:
137+
tokenized_example['target'] = example['target']
138+
return tokenized_example
139+
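# Input/feature sketch (added): with template=1, convert_example rewrites a
# DuReader-QG style example
#     {'context': c, 'answer': a, 'target': q}
# into the generation format used throughout this file:
#     source = '答案:' + a + tokenizer.sep_token + '上下文:' + c
#     target = '问题:' + q            # only when a target string is present
# In 'train'/'pretrain' mode the returned dict also carries 'masked_positions' and
# 'labels' for the target span; in 'test' mode it carries the decoding inputs plus
# the raw 'target' string (if any) for later evaluation.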
140+
141+
def batchify_fn(batch_examples, pad_val, mode='test'):
142+
143+
def pad_mask(batch_attention_mask):
144+
batch_size = len(batch_attention_mask)
145+
max_len = max(map(len, batch_attention_mask))
146+
attention_mask = np.ones(
147+
(batch_size, max_len, max_len), dtype='float32') * -1e9
148+
for i, mask_data in enumerate(attention_mask):
149+
seq_len = len(batch_attention_mask[i])
150+
mask_data[-seq_len:, -seq_len:] = np.array(batch_attention_mask[i],
151+
dtype='float32')
152+
# In order to ensure the correct broadcasting mechanism, expand one
153+
# dimension to the second dimension (n_head of Transformer).
154+
attention_mask = np.expand_dims(attention_mask, axis=1)
155+
return attention_mask
156+
157+
pad_func = Pad(pad_val=pad_val, pad_right=False, dtype='int64')
158+
159+
input_ids = pad_func([example['input_ids'] for example in batch_examples])
160+
token_type_ids = pad_func(
161+
[example['token_type_ids'] for example in batch_examples])
162+
position_ids = pad_func(
163+
[example['position_ids'] for example in batch_examples])
164+
165+
attention_mask = pad_mask(
166+
[example['attention_mask'] for example in batch_examples])
167+
168+
seq_len = np.asarray([example['seq_len'] for example in batch_examples],
169+
dtype='int32')
170+
171+
if mode == 'train' or mode == 'pretrain':
172+
max_len = max([example['seq_len'] for example in batch_examples])
173+
masked_positions = np.concatenate([
174+
np.array(example['masked_positions']) +
175+
(max_len - example['seq_len']) + i * max_len
176+
for i, example in enumerate(batch_examples)
177+
])
178+
labels = np.concatenate([
179+
np.array(example['labels'], dtype='int64')
180+
for example in batch_examples
181+
])
182+
return input_ids, token_type_ids, position_ids, attention_mask, masked_positions, labels
183+
elif mode == 'test' or mode == 'pretrain_test':
184+
return input_ids, token_type_ids, position_ids, attention_mask, seq_len
185+
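# Shape sketch (added): for a batch of N examples whose longest sequence has length L,
# batchify_fn left-pads the ids to shape [N, L] (pad_right=False) and pad_mask builds an
# additive attention mask of shape [N, 1, L, L] in which padded positions carry -1e9.
# In 'train'/'pretrain' mode, masked_positions are shifted by (L - seq_len) + i * L so
# that they index the flattened, left-padded batch when gathering logits for the labels.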
186+
187+
def create_data_loader(dataset, tokenizer, args, mode='test'):
188+
trans_func = partial(convert_example,
189+
tokenizer=tokenizer,
190+
mode='test',
191+
template=1)
192+
dataset = dataset.map(trans_func, lazy=True)
193+
if mode == 'pretrain':
194+
batch_sampler = DistributedBatchSampler(dataset,
195+
batch_size=args.batch_size,
196+
shuffle=True)
197+
elif mode == 'train':
198+
batch_sampler = DistributedBatchSampler(dataset,
199+
batch_size=args.batch_size,
200+
shuffle=True)
201+
elif mode == 'test' or mode == 'pretrain_test':
202+
batch_sampler = BatchSampler(dataset,
203+
batch_size=args.batch_size // 2,
204+
shuffle=False)
205+
collate_fn = partial(batchify_fn, pad_val=tokenizer.pad_token_id, mode=mode)
206+
data_loader = DataLoader(dataset,
207+
batch_sampler=batch_sampler,
208+
collate_fn=collate_fn,
209+
return_list=True)
210+
return dataset, data_loader
211+
212+
213+
def post_process_sum(token_ids, tokenizer):
214+
"""Post-process the decoded sequence. Truncate from the first <eos>."""
215+
eos_pos = len(token_ids)
216+
for i, tok_id in enumerate(token_ids):
217+
if tok_id == tokenizer.mask_token_id:
218+
eos_pos = i
219+
break
220+
token_ids = token_ids[:eos_pos]
221+
tokens = tokenizer.convert_ids_to_tokens(token_ids)
222+
tokens = tokenizer.merge_subword(tokens)
223+
special_tokens = ['[UNK]']
224+
tokens = [token for token in tokens if token not in special_tokens]
225+
return token_ids, tokens
226+
227+
228+
def remove_template(instr):
229+
"""Remove template prefix of decoded sequence."""
230+
outstr = instr.strip('问题:')
231+
outstr = outstr.strip('在已知答案的前提下,问题:')
232+
return outstr
233+
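# Note (added): str.strip removes any of the listed characters from both ends rather
# than the literal prefix, which is sufficient for dropping the decoded '问题:' /
# '在已知答案的前提下,问题:' templates here. A stricter alternative would be to test
# outstr.startswith(prefix) and slice off len(prefix) characters.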
234+
235+
def select_sum(ids,
236+
scores,
237+
tokenizer,
238+
max_dec_len=None,
239+
num_return_sequences=1):
240+
results = []
241+
group = []
242+
tmp = []
243+
if scores is not None:
244+
ids = ids.numpy()
245+
scores = scores.numpy()
246+
247+
if len(ids) != len(scores) or (len(ids) % num_return_sequences) != 0:
248+
raise ValueError(
249+
"the length of `ids` is {}, but the `num_return_sequences` is {}"
250+
.format(len(ids), num_return_sequences))
251+
252+
for pred, score in zip(ids, scores):
253+
pred_token_ids, pred_tokens = post_process_sum(pred, tokenizer)
254+
num_token = len(pred_token_ids)
255+
256+
target = "".join(pred_tokens)
257+
target = remove_template(target)
258+
259+
# Penalize candidates that reach max_dec_len without producing an end token.
260+
if max_dec_len is not None and num_token >= max_dec_len:
261+
score -= 1e3
262+
263+
tmp.append([target, score])
264+
if len(tmp) == num_return_sequences:
265+
group.append(tmp)
266+
tmp = []
267+
268+
for preds in group:
269+
preds = sorted(preds, key=lambda x: -x[1])
270+
results.append(preds[0][0])
271+
else:
272+
ids = ids.numpy()
273+
274+
for pred in ids:
275+
pred_token_ids, pred_tokens = post_process_sum(pred, tokenizer)
276+
num_token = len(pred_token_ids)
277+
response = "".join(pred_tokens)
278+
response = remove_template(response)
279+
280+
# TODO: Support return scores in FT.
281+
tmp.append([response])
282+
if len(tmp) == num_return_sequences:
283+
group.append(tmp)
284+
tmp = []
285+
286+
for preds in group:
287+
results.append(preds[0][0])
288+
289+
return results
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,54 @@
1+
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
from paddle_serving_server.pipeline import PipelineClient
15+
from numpy import array, float32
16+
import time
17+
import numpy as np
18+
19+
20+
class Runner(object):
21+
22+
def __init__(
23+
self,
24+
server_url: str,
25+
):
26+
self.client = PipelineClient()
27+
self.client.connect([server_url])
28+
29+
def Run(self, data):
30+
inputs = data
31+
start_time = time.time()
32+
ret = self.client.predict(feed_dict={"inputs": inputs})
33+
end_time = time.time()
34+
print("time cost :{} seconds".format(end_time - start_time))
35+
if not ret.value:
36+
print('Failed to fetch the generated questions.')
return
37+
# ret is a response object rather than a plain dict; ret.value[0] holds the stringified results
38+
for d, s in zip(data, eval(ret.value[0])):
39+
print("--------------------")
40+
print("input: ", d)
41+
print("output: ", s)
42+
print("--------------------")
43+
return
44+
45+
46+
if __name__ == "__main__":
47+
server_url = "127.0.0.1:18011"
48+
runner = Runner(server_url)
49+
requests = [{
50+
"context":
51+
"奇峰黄山千米以上的山峰有77座,整座黄山就是一座花岗岩的峰林,自古有36大峰,36小峰,最高峰莲花峰、最险峰天都峰和观日出的最佳点光明顶构成黄山的三大主峰。",
52+
"answer": "莲花峰"
53+
}]
54+
runner.Run(requests)
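# Note (added): `requests` may contain several {'context': ..., 'answer': ...} dicts;
# the service handles them in a single call, and Runner.Run prints one generated
# question per input, matching the sample output shown in the deployment README.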
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,82 @@
1+
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2+
#
3+
# Licensed under the Apache License, Version 2.0 (the "License");
4+
# you may not use this file except in compliance with the License.
5+
# You may obtain a copy of the License at
6+
#
7+
# http://www.apache.org/licenses/LICENSE-2.0
8+
#
9+
# Unless required by applicable law or agreed to in writing, software
10+
# distributed under the License is distributed on an "AS IS" BASIS,
11+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12+
# See the License for the specific language governing permissions and
13+
# limitations under the License.
14+
15+
from paddle_serving_server.web_service import WebService, Op
16+
from numpy import array
17+
import logging
18+
import numpy as np
19+
from paddlenlp.transformers import AutoTokenizer
20+
from paddlenlp.ops.ext_utils import load
21+
from paddlenlp.transformers import UNIMOTokenizer
22+
from paddlenlp.data import Pad
23+
24+
from infer_utils import convert_example, batchify_fn, select_sum, postprocess_response
25+
26+
import paddle_serving_server.pipeline.operator
27+
28+
_LOGGER = logging.getLogger(__name__)
29+
30+
31+
class UnimoTextOp(Op):
32+
"""Op for unimo_text."""
33+
34+
def init_op(self):
35+
self.tokenizer = UNIMOTokenizer.from_pretrained('unimo-text-1.0')
36+
37+
def preprocess(self, input_dicts, data_id, log_id):
38+
# Convert input format
39+
(_, input_dict), = input_dicts.items()
40+
data = input_dict["inputs"]
41+
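# Note (added): the pipeline framework delivers the feed value as a string; when it
# contains "array(" it is assumed to be the repr of the client-side payload (hence the
# `from numpy import array` import at the top of this file), and eval() rebuilds the
# original list of {'context': ..., 'answer': ...} dicts before feature conversion.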
if isinstance(data, str) and "array(" in data:
42+
data = eval(data)
43+
else:
44+
_LOGGER.error("input value {}is not supported.".format(data))
45+
examples = [convert_example(i, self.tokenizer) for i in data]
46+
input_ids, token_type_ids, position_ids, attention_mask, seq_len = batchify_fn(
47+
examples, self.tokenizer.pad_token_id)
48+
new_dict = {}
49+
new_dict['input_ids'] = input_ids
50+
new_dict['token_type_ids'] = token_type_ids
51+
new_dict['attention_mask'] = attention_mask
52+
new_dict['seq_len'] = seq_len
53+
# the first return must be a dict or a list of dict, the dict corresponding to a batch of model input
54+
return new_dict, False, None, ""
55+
56+
def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
57+
# keyname refer to export_checkpoint_client/serving_client_conf.prototxt
58+
ids = fetch_dict['transpose_0.tmp_0'][:, 0, :].tolist()
59+
scores = fetch_dict['_generated_var_3'][:, 0].tolist()
60+
61+
results = [
62+
"".join(postprocess_response(sample, self.tokenizer))
63+
for sample in ids
64+
]
65+
new_dict = {}
66+
new_dict["outputs"] = str(results)
67+
# the first return must be a dict or a list of dict, the dict corresponding to a batch of model output
68+
return new_dict, None, ""
69+
70+
71+
class UnimoTextService(WebService):
72+
73+
def get_pipeline_response(self, read_op):
74+
return UnimoTextOp(name="question_generation", input_ops=[read_op])
75+
76+
77+
if __name__ == "__main__":
78+
# Load FasterTransformer lib.
79+
load("FasterTransformer", verbose=True)
80+
service = UnimoTextService(name="question_generation")
81+
service.prepare_pipeline_config("config.yml")
82+
service.run_service()
