Commit 29a0893

Tmp tp transformers (#2942)
* Upgrade the version number.
* Remove modifications in Lock.
* Tmp branch to test the transformers backend with 2.5.1 and TP > 1.
* Fix the transformers backend: `inference_mode` forces the use of `aten.matmul` instead of `aten.mm`, and the former doesn't have sharding support, crashing the transformers TP support. `lm_head.forward` also crashes because it skips the hook that casts/decasts the DTensor. Torch 2.5.1 is required for sharding support.
* Put back the attention impl.
* Revert the flashinfer change (this will fail).
* Build AOT.
* Use 2.5 kernels.
* Remove the archlist; it's defined in the Docker image anyway.
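The `lm_head.forward` crash mentioned above comes from a general PyTorch behavior: calling a module as `module(x)` runs its registered pre/post hooks, while calling `module.forward(x)` directly bypasses them. In tensor-parallel transformers, those hooks cast inputs to `DTensor` and convert outputs back to local tensors, so skipping them breaks sharded execution. A minimal plain-Python sketch of that mechanism (all names here are illustrative, not actual TGI or torch code):

```python
# Sketch: why `module.forward(x)` skips hooks that `module(x)` would run.
class Module:
    def __init__(self):
        self.pre_hooks = []   # e.g. cast local tensor -> DTensor
        self.post_hooks = []  # e.g. convert DTensor output back to a local tensor

    def forward(self, x):
        return x * 2  # stand-in for the real matmul

    def __call__(self, x):
        # The __call__ path wraps forward with the hooks, as torch's
        # nn.Module.__call__ does for registered forward hooks.
        for hook in self.pre_hooks:
            x = hook(x)
        out = self.forward(x)
        for hook in self.post_hooks:
            out = hook(out)
        return out

lm_head = Module()
lm_head.pre_hooks.append(lambda x: x + 10)   # pretend "cast" step
lm_head.post_hooks.append(lambda y: y + 1)   # pretend "decast" step

print(lm_head(1))          # hooks run: (1 + 10) * 2 + 1 = 23
print(lm_head.forward(1))  # hooks skipped: 1 * 2 = 2
```

In real torch code the fix is to route calls through `module(x)` (or invoke the hooks explicitly) rather than calling `forward` directly.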
1 parent 0a89902 commit 29a0893

File tree

15 files changed, +47 -49 lines changed


Dockerfile

Lines changed: 3 additions & 3 deletions
```diff
@@ -47,7 +47,7 @@ RUN cargo build --profile release-opt --frozen
 FROM nvidia/cuda:12.4.1-devel-ubuntu22.04 AS pytorch-install
 
 # NOTE: When updating PyTorch version, beware to remove `pip install nvidia-nccl-cu12==2.22.3` below in the Dockerfile. Context: https://github.com/huggingface/text-generation-inference/pull/2099
-ARG PYTORCH_VERSION=2.4.0
+ARG PYTORCH_VERSION=2.5.1
 
 ARG PYTHON_VERSION=3.11
 # Keep in sync with `server/pyproject.toml
@@ -235,8 +235,8 @@ RUN cd server && \
     make gen-server && \
     python -c "from text_generation_server.pb import generate_pb2" && \
     pip install -U pip uv && \
-    uv pip install -e ".[attention, bnb, accelerate, compressed-tensors, marlin, moe, quantize, peft, outlines]" --no-cache-dir && \
-    uv pip install nvidia-nccl-cu12==2.22.3
+    uv pip install -e ".[attention, bnb, accelerate, compressed-tensors, marlin, moe, quantize, peft, outlines]" --no-cache-dir # && \
+    # uv pip install nvidia-nccl-cu12==2.22.3
 
 ENV LD_PRELOAD=/opt/conda/lib/python3.11/site-packages/nvidia/nccl/lib/libnccl.so.2
 # Required to find libpython within the rust binaries
```
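The Dockerfile change above pins PyTorch 2.5.1, which the commit message states is required for DTensor sharding support. A hedged sketch of the kind of version gate a backend could apply before enabling tensor parallelism (the function name and comparison logic are illustrative, not TGI code):

```python
def supports_tp_sharding(torch_version: str) -> bool:
    """Illustrative check: per this commit, DTensor sharding needs
    torch >= 2.5.1. Strips any local suffix like '+cu124' first."""
    core = torch_version.split("+")[0]
    parts = tuple(int(p) for p in core.split(".")[:3])
    # pad to three components so "2.5" compares as (2, 5, 0)
    parts = parts + (0,) * (3 - len(parts))
    return parts >= (2, 5, 1)

print(supports_tp_sharding("2.5.1+cu124"))  # True
print(supports_tp_sharding("2.4.0"))        # False
```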

README.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -84,7 +84,7 @@ model=HuggingFaceH4/zephyr-7b-beta
 volume=$PWD/data
 
 docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
-    ghcr.io/huggingface/text-generation-inference:3.0.0 --model-id $model
+    ghcr.io/huggingface/text-generation-inference:3.0.2 --model-id $model
 ```
 
 And then you can make requests like
@@ -121,7 +121,7 @@ curl localhost:8080/v1/chat/completions \
 
 **Note:** To use NVIDIA GPUs, you need to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). We also recommend using NVIDIA drivers with CUDA version 12.2 or higher. For running the Docker container on a machine with no GPUs or CUDA support, it is enough to remove the `--gpus all` flag and add `--disable-custom-kernels`, please note CPU is not the intended platform for this project, so performance might be subpar.
 
-**Note:** TGI supports AMD Instinct MI210 and MI250 GPUs. Details can be found in the [Supported Hardware documentation](https://huggingface.co/docs/text-generation-inference/installation_amd#using-tgi-with-amd-gpus). To use AMD GPUs, please use `docker run --device /dev/kfd --device /dev/dri --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.0.0-rocm --model-id $model` instead of the command above.
+**Note:** TGI supports AMD Instinct MI210 and MI250 GPUs. Details can be found in the [Supported Hardware documentation](https://huggingface.co/docs/text-generation-inference/installation_amd#using-tgi-with-amd-gpus). To use AMD GPUs, please use `docker run --device /dev/kfd --device /dev/dri --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.0.2-rocm --model-id $model` instead of the command above.
 
 To see all options to serve your models (in the [code](https://github.com/huggingface/text-generation-inference/blob/main/launcher/src/main.rs) or in the cli):
 ```
@@ -152,7 +152,7 @@ volume=$PWD/data # share a volume with the Docker container to avoid downloading
 token=<your cli READ token>
 
 docker run --gpus all --shm-size 1g -e HF_TOKEN=$token -p 8080:80 -v $volume:/data \
-    ghcr.io/huggingface/text-generation-inference:3.0.0 --model-id $model
+    ghcr.io/huggingface/text-generation-inference:3.0.2 --model-id $model
 ```
 
 ### A note on Shared Memory (shm)
````

docs/source/basic_tutorials/gated_model_access.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -19,6 +19,6 @@ docker run --gpus all \
     --shm-size 1g \
     -e HF_TOKEN=$token \
     -p 8080:80 \
-    -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.0.1 \
+    -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.0.2 \
     --model-id $model
 ```
````

docs/source/conceptual/quantization.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -19,15 +19,15 @@ bitsandbytes is a library used to apply 8-bit and 4-bit quantization to models.
 In TGI, you can use 8-bit quantization by adding `--quantize bitsandbytes` like below 👇
 
 ```bash
-docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.0.1 --model-id $model --quantize bitsandbytes
+docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.0.2 --model-id $model --quantize bitsandbytes
 ```
 
 4-bit quantization is also possible with bitsandbytes. You can choose one of the following 4-bit data types: 4-bit float (`fp4`), or 4-bit `NormalFloat` (`nf4`). These data types were introduced in the context of parameter-efficient fine-tuning, but you can apply them for inference by automatically converting the model weights on load.
 
 In TGI, you can use 4-bit quantization by adding `--quantize bitsandbytes-nf4` or `--quantize bitsandbytes-fp4` like below 👇
 
 ```bash
-docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.0.1 --model-id $model --quantize bitsandbytes-nf4
+docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.0.2 --model-id $model --quantize bitsandbytes-nf4
 ```
 
 You can get more information about 8-bit quantization by reading this [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration), and 4-bit quantization by reading [this blog post](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
@@ -48,7 +48,7 @@ $$({\hat{W}_{l}}^{*} = argmin_{\hat{W_{l}}} ||W_{l}X-\hat{W}_{l}X||^{2}_{2})$$
 TGI allows you to both run an already GPTQ quantized model (see available models [here](https://huggingface.co/models?search=gptq)) or quantize a model of your choice using quantization script. You can run a quantized model by simply passing --quantize like below 👇
 
 ```bash
-docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.0.1 --model-id $model --quantize gptq
+docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.0.2 --model-id $model --quantize gptq
 ```
 
 Note that TGI's GPTQ implementation doesn't use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) under the hood. However, models quantized using AutoGPTQ or Optimum can still be served by TGI.
````

docs/source/installation_amd.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -11,7 +11,7 @@ volume=$PWD/data # share a volume with the Docker container to avoid downloading
 docker run --rm -it --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
     --device=/dev/kfd --device=/dev/dri --group-add video \
     --ipc=host --shm-size 256g --net host -v $volume:/data \
-    ghcr.io/huggingface/text-generation-inference:3.0.1-rocm \
+    ghcr.io/huggingface/text-generation-inference:3.0.2-rocm \
     --model-id $model
 ```
````

docs/source/installation_intel.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -12,7 +12,7 @@ volume=$PWD/data # share a volume with the Docker container to avoid downloading
 docker run --rm --privileged --cap-add=sys_nice \
     --device=/dev/dri \
     --ipc=host --shm-size 1g --net host -v $volume:/data \
-    ghcr.io/huggingface/text-generation-inference:3.0.1-intel-xpu \
+    ghcr.io/huggingface/text-generation-inference:3.0.2-intel-xpu \
     --model-id $model --cuda-graphs 0
 ```
 
@@ -29,7 +29,7 @@ volume=$PWD/data # share a volume with the Docker container to avoid downloading
 docker run --rm --privileged --cap-add=sys_nice \
     --device=/dev/dri \
     --ipc=host --shm-size 1g --net host -v $volume:/data \
-    ghcr.io/huggingface/text-generation-inference:3.0.1-intel-cpu \
+    ghcr.io/huggingface/text-generation-inference:3.0.2-intel-cpu \
     --model-id $model --cuda-graphs 0
 ```
````

docs/source/installation_nvidia.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -11,7 +11,7 @@ model=teknium/OpenHermes-2.5-Mistral-7B
 volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
 
 docker run --gpus all --shm-size 64g -p 8080:80 -v $volume:/data \
-    ghcr.io/huggingface/text-generation-inference:3.0.1 \
+    ghcr.io/huggingface/text-generation-inference:3.0.2 \
     --model-id $model
 ```
````

docs/source/quicktour.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -11,7 +11,7 @@ model=teknium/OpenHermes-2.5-Mistral-7B
 volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
 
 docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
-    ghcr.io/huggingface/text-generation-inference:3.0.1 \
+    ghcr.io/huggingface/text-generation-inference:3.0.2 \
     --model-id $model
 ```
 
@@ -96,7 +96,7 @@ curl 127.0.0.1:8080/generate \
 To see all possible deploy flags and options, you can use the `--help` flag. It's possible to configure the number of shards, quantization, generation parameters, and more.
 
 ```bash
-docker run ghcr.io/huggingface/text-generation-inference:3.0.1 --help
+docker run ghcr.io/huggingface/text-generation-inference:3.0.2 --help
 ```
 
 </Tip>
````

docs/source/reference/api_reference.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -163,7 +163,7 @@ hub = {
 
 # create Hugging Face Model Class
 huggingface_model = HuggingFaceModel(
-	image_uri=get_huggingface_llm_image_uri("huggingface",version="3.0.1"),
+	image_uri=get_huggingface_llm_image_uri("huggingface",version="3.0.2"),
 	env=hub,
 	role=role,
 )
```

server/Makefile-flashinfer

Lines changed: 3 additions & 2 deletions
```diff
@@ -1,5 +1,6 @@
 install-flashinfer:
 	# We need fsspec as an additional dependency, but
 	# `pip install flashinfer` cannot resolve it.
-	pip install fsspec
-	pip install flashinfer==0.2.0.post1 -i https://flashinfer.ai/whl/cu124/torch2.4
+	pip install fsspec sympy==1.13.1 numpy
+	pip install -U setuptools
+	FLASHINFER_ENABLE_AOT=1 pip install git+https://github.com/flashinfer-ai/[email protected]#egg=flashinfer --no-build-isolation
```
