Commit 3a826a4

Update Model card for GPT2 (#37101)
* Update Model card for gpt2
* Update link for gpt2 space
* fixes docs based on suggestions
* Add transformers-cli and quantization example for GPT-2
* Remove resources and flash attention docs and fix typos
1 parent 5e85509 commit 3a826a4

File tree: 1 file changed (+61, -161 lines)

docs/source/en/model_doc/gpt2.md (61 additions, 161 deletions)
@@ -14,197 +14,97 @@ rendered properly in your Markdown viewer.
 
 -->
 
-# OpenAI GPT2
-
-<div class="flex flex-wrap space-x-1">
-<a href="https://huggingface.co/models?filter=gpt2">
-<img alt="Models" src="https://img.shields.io/badge/All_model_pages-gpt2-blueviolet">
-</a>
-<a href="https://huggingface.co/spaces/docs-demos/gpt2">
-<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
-</a>
+<div style="float: right;">
+<div class="flex flex-wrap space-x-1">
+<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+<img alt="TensorFlow" src="https://img.shields.io/badge/TensorFlow-FF6F00?style=flat&logo=tensorflow&logoColor=white">
+<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
+<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
+</div>
 </div>
 
-## Overview
 
-OpenAI GPT-2 model was proposed in [Language Models are Unsupervised Multitask Learners](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) by Alec
-Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from [OpenAI](https://huggingface.co/openai). It's a causal (unidirectional)
-transformer pretrained using language modeling on a very large corpus of ~40 GB of text data.
+# GPT-2
 
-The abstract from the paper is the following:
+[GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) is a scaled up version of GPT, a causal transformer language model, with 10x more parameters and training data. The model was pretrained on a 40GB dataset to predict the next word in a sequence based on all the previous words. This approach enabled the model to perform many downstream tasks in a zero-shot setting.
 
-*GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million
-web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some
-text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks
-across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than
-10X the amount of data.*
+The model architecture uses a unidirectional (causal) attention mechanism where each token can only attend to previous tokens, making it particularly effective for text generation tasks.
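A minimal sketch of the causal masking described above, using plain PyTorch; the sequence length and tensor names are illustrative only and not part of the model card:

```py
import torch

seq_len = 5
# Lower-triangular mask: position i may attend only to positions <= i,
# so each token never "sees" future tokens.
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
print(causal_mask)
```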
 
-[Write With Transformer](https://transformer.huggingface.co/doc/gpt2-large) is a webapp created and hosted by
-Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five
-different sizes: small, medium, large, xl and a distilled version of the small checkpoint: *distilgpt-2*.
+You can find all the original GPT-2 checkpoints under the [OpenAI community](https://huggingface.co/openai-community?search_models=gpt) organization.
 
-This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://openai.com/blog/better-language-models/).
+> [!TIP]
+> Click on the GPT-2 models in the right sidebar for more examples of how to apply GPT-2 to different language tasks.
 
-## Usage tips
+The example below demonstrates how to generate text with [`Pipeline`] or the [`AutoModel`], and from the command line.
 
-- GPT-2 is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
-  the left.
-- GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
-  token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be
-  observed in the *run_generation.py* example script.
-- The model can take the *past_key_values* (for PyTorch) or *past* (for TF) as input, which is the previously computed
-  key/value attention pairs. Using this (*past_key_values* or *past*) value prevents the model from re-computing
-  pre-computed values in the context of text generation. For PyTorch, see *past_key_values* argument of the
-  [`GPT2Model.forward`] method, or for TF the *past* argument of the
-  [`TFGPT2Model.call`] method for more information on its usage.
-- Enabling the *scale_attn_by_inverse_layer_idx* and *reorder_and_upcast_attn* flags will apply the training stability
-  improvements from [Mistral](https://github.com/stanford-crfm/mistral/) (for PyTorch only).
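The *past_key_values* tip above is easiest to see in code. Below is a minimal sketch, assuming a plain `openai-community/gpt2` checkpoint and greedy selection of the next token; the prompt and variable names are illustrative only:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer("GPT-2 reuses its key/value cache", return_tensors="pt")

# First forward pass returns the cached key/value pairs.
outputs = model(**inputs, use_cache=True)
past_key_values = outputs.past_key_values

# The next step only feeds the newly chosen token; the cache avoids recomputing
# attention for the earlier positions.
next_token = outputs.logits[:, -1:].argmax(dim=-1)
outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)
```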
+<hfoptions id="usage">
+<hfoption id="Pipeline">
 
-## Usage example
+```py
+import torch
+from transformers import pipeline
 
-The `generate()` method can be used to generate text using GPT2 model.
-
-```python
->>> from transformers import AutoModelForCausalLM, AutoTokenizer
-
->>> model = AutoModelForCausalLM.from_pretrained("gpt2")
->>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
-
->>> prompt = "GPT2 is a model developed by OpenAI."
-
->>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids
-
->>> gen_tokens = model.generate(
-...     input_ids,
-...     do_sample=True,
-...     temperature=0.9,
-...     max_length=100,
-... )
->>> gen_text = tokenizer.batch_decode(gen_tokens)[0]
+pipeline = pipeline(task="text-generation", model="openai-community/gpt2", torch_dtype=torch.float16, device=0)
+pipeline("Hello, I'm a language model")
 ```
+</hfoption>
+<hfoption id="AutoModel">
 
-## Using Flash Attention 2
+```py
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
 
-Flash Attention 2 is a faster, optimized version of the attention scores computation which relies on `cuda` kernels.
+model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2", torch_dtype=torch.float16, device_map="auto", attn_implementation="sdpa")
+tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
 
-### Installation
+input_ids = tokenizer("Hello, I'm a language model", return_tensors="pt").to("cuda")
 
-First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [above](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer).
+output = model.generate(**input_ids, cache_implementation="static")
+print(tokenizer.decode(output[0], skip_special_tokens=True))
+```
 
-Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2:
+</hfoption>
+<hfoption id="transformers-cli">
 
 ```bash
-pip install -U flash-attn --no-build-isolation
+echo -e "Hello, I'm a language model" | transformers-cli run --task text-generation --model openai-community/gpt2 --device 0
 ```
 
-### Usage
+</hfoption>
+</hfoptions>
 
-To load a model using Flash Attention 2, we can pass the argument `attn_implementation="flash_attention_2"` to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained). We'll also load the model in half-precision (e.g. `torch.float16`), since it results in almost no degradation to audio quality but significantly lower memory usage and faster inference:
+Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
 
-```python
->>> import torch
->>> from transformers import AutoModelForCausalLM, AutoTokenizer
->>> device = "cuda" # the device to load the model onto
+The example below uses [bitsandbytes](../quantization/bitsandbytes) to only quantize the weights to 4-bits.
 
->>> model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
->>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
+```py
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
 
->>> prompt = "def hello_world():"
+quantization_config = BitsAndBytesConfig(
+    load_in_4bit=True,
+    bnb_4bit_quant_type="nf4",
+    bnb_4bit_compute_dtype="float16",
+    bnb_4bit_use_double_quant=True
+)
 
->>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
->>> model.to(device)
+model = AutoModelForCausalLM.from_pretrained(
+    "openai-community/gpt2-xl",
+    quantization_config=quantization_config,
+    device_map="auto"
+)
 
->>> generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
->>> tokenizer.batch_decode(generated_ids)[0]
+tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-xl")
+inputs = tokenizer("Once upon a time, there was a magical forest", return_tensors="pt").to("cuda")
+outputs = model.generate(**inputs, max_new_tokens=100)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 
+## Notes
 
-### Expected speedups
-
-Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using `gpt2` checkpoint and the Flash Attention 2 version of the model using a sequence length of 512.
-
-<div style="text-align: center">
-<img src="https://huggingface.co/datasets/EduardoPacheco/documentation-images/resolve/main/gpt2_flash_attention_2_speedup.jpg">
-</div>
-
-
-## Using Scaled Dot Product Attention (SDPA)
-PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
-encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
-[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
-or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
-page for more information.
-
-SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
-`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
-
-```python
-from transformers import AutoModelForCausalLM
-model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16, attn_implementation="sdpa")
-...
-```
-
-For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
-
-On a local benchmark (rtx3080ti-16GB, PyTorch 2.2.1, OS Ubuntu 22.04) using `float16` with
-[gpt2-large](https://huggingface.co/openai-community/gpt2-large), we saw the
-following speedups during training and inference.
-
-### Training
-| Batch size | Seq len | Time per batch (Eager - s) | Time per batch (SDPA - s) | Speedup (%) | Eager peak mem (MB) | SDPA peak mem (MB) | Mem saving (%) |
-|-----------:|--------:|----------------------------:|--------------------------:|------------:|--------------------:|-------------------:|------------------:|
-| 1 | 128 | 0.039 | 0.032 | 23.042 | 3482.32 | 3494.62 | -0.352 |
-| 1 | 256 | 0.073 | 0.059 | 25.15 | 3546.66 | 3552.6 | -0.167 |
-| 1 | 512 | 0.155 | 0.118 | 30.96 | 4230.1 | 3665.59 | 15.4 |
-| 1 | 1024 | 0.316 | 0.209 | 50.839 | 8682.26 | 4881.09 | 77.875 |
-| 2 | 128 | 0.07 | 0.06 | 15.324 | 3557.8 | 3545.91 | 0.335 |
-| 2 | 256 | 0.143 | 0.122 | 16.53 | 3901.5 | 3657.68 | 6.666 |
-| 2 | 512 | 0.267 | 0.213 | 25.626 | 7062.21 | 4876.47 | 44.822 |
-| 2 | 1024 | OOM | 0.404 | / | OOM | 8096.35 | SDPA does not OOM |
-| 4 | 128 | 0.134 | 0.128 | 4.412 | 3675.79 | 3648.72 | 0.742 |
-| 4 | 256 | 0.243 | 0.217 | 12.292 | 6129.76 | 4871.12 | 25.839 |
-| 4 | 512 | 0.494 | 0.406 | 21.687 | 12466.6 | 8102.64 | 53.858 |
-| 4 | 1024 | OOM | 0.795 | / | OOM | 14568.2 | SDPA does not OOM |
-
-### Inference
-| Batch size | Seq len | Per token latency Eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem Eager (MB) | Mem SDPA (MB) | Mem saved (%) |
-|-----------:|--------:|-----------------------------:|----------------------------:|------------:|---------------:|--------------:|--------------:|
-| 1 | 128 | 7.991 | 6.968 | 14.681 | 1685.2 | 1701.32 | -0.947 |
-| 1 | 256 | 8.462 | 7.199 | 17.536 | 1745.49 | 1770.78 | -1.428 |
-| 1 | 512 | 8.68 | 7.853 | 10.529 | 1907.69 | 1921.29 | -0.708 |
-| 1 | 768 | 9.101 | 8.365 | 8.791 | 2032.93 | 2068.12 | -1.701 |
-| 2 | 128 | 9.169 | 9.001 | 1.861 | 1803.84 | 1811.4 | -0.418 |
-| 2 | 256 | 9.907 | 9.78 | 1.294 | 1907.72 | 1921.44 | -0.714 |
-| 2 | 512 | 11.519 | 11.644 | -1.071 | 2176.86 | 2197.75 | -0.951 |
-| 2 | 768 | 13.022 | 13.407 | -2.873 | 2464.3 | 2491.06 | -1.074 |
-| 4 | 128 | 10.097 | 9.831 | 2.709 | 1942.25 | 1985.13 | -2.16 |
-| 4 | 256 | 11.599 | 11.398 | 1.764 | 2177.28 | 2197.86 | -0.937 |
-| 4 | 512 | 14.653 | 14.45 | 1.411 | 2753.16 | 2772.57 | -0.7 |
-| 4 | 768 | 17.846 | 17.617 | 1.299 | 3327.04 | 3343.97 | -0.506 |
-
-
-
-
-## Resources
-
-A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
-
-<PipelineTag pipeline="text-generation"/>
-
-- A blog on how to [Finetune a non-English GPT-2 Model with Hugging Face](https://www.philschmid.de/fine-tune-a-non-english-gpt-2-model-with-huggingface).
-- A blog on [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate) with GPT-2.
-- A blog on [Training CodeParrot 🦜 from Scratch](https://huggingface.co/blog/codeparrot), a large GPT-2 model.
-- A blog on [Faster Text Generation with TensorFlow and XLA](https://huggingface.co/blog/tf-xla-generate) with GPT-2.
-- A blog on [How to train a Language Model with Megatron-LM](https://huggingface.co/blog/megatron-training) with a GPT-2 model.
-- A notebook on how to [finetune GPT2 to generate lyrics in the style of your favorite artist](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb). 🌎
-- A notebook on how to [finetune GPT2 to generate tweets in the style of your favorite Twitter user](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb). 🌎
-- [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course.
-- [`GPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling), [text generation example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation), and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
-- [`TFGPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
-- [`FlaxGPT2LMHeadModel`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb).
-- [Text classification task guide](../tasks/sequence_classification)
-- [Token classification task guide](../tasks/token_classification)
-- [Causal language modeling task guide](../tasks/language_modeling)
+- Pad inputs on the right because GPT-2 uses absolute position embeddings.
+- GPT-2 can reuse previously computed key-value attention pairs. Access this feature with the [past_key_values](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Model.forward.past_key_values) parameter in [`GPT2Model.forward`].
+- Enable the [scale_attn_by_inverse_layer_idx](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Config.scale_attn_by_inverse_layer_idx) and [reorder_and_upcast_attn](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Config.reorder_and_upcast_attn) parameters to apply the training stability improvements from the [Mistral](https://github.com/stanford-crfm/mistral/) project (see the sketch below).
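A minimal sketch of these notes in code, assuming the `openai-community/gpt2` checkpoint; the prompts and variable names are illustrative only:

```py
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer

# Right padding matches GPT-2's absolute position embeddings.
tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2", padding_side="right")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Stability flags from the Stanford CRFM Mistral project.
config = GPT2Config.from_pretrained(
    "openai-community/gpt2",
    scale_attn_by_inverse_layer_idx=True,
    reorder_and_upcast_attn=True,
)
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2", config=config)

batch = tokenizer(["Hello, I'm a language model", "GPT-2"], padding=True, return_tensors="pt")
outputs = model(**batch)
```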
 
 ## GPT2Config
 