* Update Model card for gpt2
* Update link for gpt2 space
* fixes docs based on suggestions
* Add transformers-cli and quantization example for GPT-2
* Remove resources and flash attention docs and fix typos
# GPT-2

[GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) is a scaled-up version of GPT, a causal transformer language model, with 10x more parameters and training data. The model was pretrained on a 40GB dataset to predict the next word in a sequence based on all the previous words. This approach enabled the model to perform many downstream tasks in a zero-shot setting.

The model architecture uses a unidirectional (causal) attention mechanism where each token can only attend to previous tokens, making it particularly effective for text generation tasks.

You can find all the original GPT-2 checkpoints under the [OpenAI community](https://huggingface.co/openai-community?search_models=gpt) organization.

> [!TIP]
> Click on the GPT-2 models in the right sidebar for more examples of how to apply GPT-2 to different language tasks.

The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

# Text-generation pipeline with GPT-2 in half precision on GPU 0.
generator = pipeline(task="text-generation", model="openai-community/gpt2", torch_dtype=torch.float16, device=0)
generator("Hello, I'm a language model", max_new_tokens=30)
```

</hfoption>
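<hfoption id="AutoModel">

A minimal sketch of the [`AutoModel`] path (it assumes the `openai-community/gpt2` checkpoint, a CUDA device, and half precision; adjust the generation settings as needed):

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
# Half precision is optional; it lowers memory use with little quality loss.
model = AutoModelForCausalLM.from_pretrained(
    "openai-community/gpt2", torch_dtype=torch.float16
).to("cuda")

# Tokenize the prompt and move it to the same device as the model.
input_ids = tokenizer("Hello, I'm a language model", return_tensors="pt").to("cuda")

# Greedy decoding of a short continuation.
output = model.generate(**input_ids, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

</hfoption>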
<hfoption id="transformers-cli">

```bash
echo -e "Hello, I'm a language model" | transformers-cli run --task text-generation --model openai-community/gpt2 --device 0
```

</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
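As a sketch of how this might look for GPT-2 (assuming the bitsandbytes backend is installed, and using the larger `openai-community/gpt2-xl` checkpoint purely as an example):

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Example only: 4-bit weights via the bitsandbytes backend (other backends work similarly).
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-xl")
model = AutoModelForCausalLM.from_pretrained(
    "openai-community/gpt2-xl",
    quantization_config=quant_config,
    device_map="auto",
)

inputs = tokenizer("Hello, I'm a language model", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```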
## Notes

- Pad inputs on the right because GPT-2 uses absolute position embeddings.
- GPT-2 can reuse previously computed key-value attention pairs. Access this feature with the [past_key_values](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Model.forward.past_key_values) parameter in [`GPT2Model.forward`], as shown in the sketch after this list.
- Enable the [scale_attn_by_inverse_layer_idx](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Config.scale_attn_by_inverse_layer_idx) and [reorder_and_upcast_attn](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Config.reorder_and_upcast_attn) parameters to apply the training stability improvements from [Mistral](./mistral).
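
The cache reuse described in the second note can be sketched as follows (a minimal illustration assuming the `openai-community/gpt2` checkpoint; the `generate()` method manages the cache automatically, so manual handling like this is only needed in custom decoding loops):

```py
import torch
from transformers import AutoTokenizer, GPT2Config, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2").eval()

inputs = tokenizer("Hello, I'm a language model", return_tensors="pt")
with torch.no_grad():
    # The first pass caches the key/value pairs for the whole prompt.
    out = model(**inputs, use_cache=True)
    past = out.past_key_values
    next_token = out.logits[:, -1:].argmax(dim=-1)  # greedy pick of the next token

    # Later passes only feed the new token plus the cache instead of the full sequence.
    out = model(input_ids=next_token, past_key_values=past, use_cache=True)

# The training stability flags from the third note are plain config options, e.g.:
config = GPT2Config(scale_attn_by_inverse_layer_idx=True, reorder_and_upcast_attn=True)
```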