
Commit db00f5f: Update 2025-05-01 05:16:14
1 parent 7326653 commit db00f5f

File tree: 83 files changed, +8345 -7779 lines changed


README.html (+1 -1)

@@ -39,7 +39,7 @@
 <link rel="preload" as="script" href="_static/scripts/pydata-sphinx-theme.js?digest=dfe6caa3a7d634c4db9b" />
 <script src="_static/vendor/fontawesome/6.5.2/js/all.min.js?digest=dfe6caa3a7d634c4db9b"></script>
 
-<script src="_static/documentation_options.js?v=58fa0cba"></script>
+<script src="_static/documentation_options.js?v=4f8e8cdc"></script>
 <script src="_static/doctools.js?v=9a2dae69"></script>
 <script src="_static/sphinx_highlight.js?v=dc90522c"></script>
 <script src="_static/clipboard.min.js?v=a7894cd8"></script>

_sources/backend/function_calling.ipynb (+340 -174; large diff not rendered by default)
_sources/backend/lora.ipynb (+264 -255; large diff not rendered by default)
_sources/backend/native_api.ipynb (+322 -324; large diff not rendered by default)
_sources/backend/offline_engine_api.ipynb (+438 -448; large diff not rendered by default)
_sources/backend/openai_api_completions.ipynb (+218 -233; large diff not rendered by default)
_sources/backend/openai_api_embeddings.ipynb (+60 -66; large diff not rendered by default)
_sources/backend/openai_api_vision.ipynb (+92 -81; large diff not rendered by default)
_sources/backend/send_request.ipynb (+99 -79; large diff not rendered by default)
_sources/backend/separate_reasoning.ipynb (+135 -120; large diff not rendered by default)
_sources/backend/speculative_decoding.ipynb (+319 -282; large diff not rendered by default)
_sources/backend/structured_outputs.ipynb (+146 -145; large diff not rendered by default)
_sources/backend/structured_outputs_for_reasoning_models.ipynb (+553 -513; large diff not rendered by default)

_sources/developer/setup_github_runner.md (+2 -2)

@@ -11,9 +11,9 @@ docker pull nvidia/cuda:12.1.1-devel-ubuntu22.04
 # Nvidia
 docker run --shm-size 128g -it -v /tmp/huggingface:/hf_home --gpus all nvidia/cuda:12.1.1-devel-ubuntu22.04 /bin/bash
 # AMD
-docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.6.post1-rocm630 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.6.post2-rocm630 /bin/bash
 # AMD just the last 2 GPUs
-docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.6.post1-rocm630 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.6.post2-rocm630 /bin/bash
 ```
 
 ### Step 2: Configure the runner by `config.sh`
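This change only bumps the AMD ROCm runner image tag from v0.4.6.post1 to v0.4.6.post2. A minimal sketch of moving an existing AMD runner host onto the new tag, assuming it already runs the lmsysorg/sglang ROCm images exactly as in the diff above:

```bash
# Pull the newly referenced ROCm image (tag taken from the updated docs above).
docker pull lmsysorg/sglang:v0.4.6.post2-rocm630

# Once no runner containers still use it, the superseded tag can be removed.
docker image rm lmsysorg/sglang:v0.4.6.post1-rocm630 || true
```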

_sources/frontend/frontend.ipynb (+230 -242; large diff not rendered by default)

_sources/start/install.md (+6 -6)

@@ -11,7 +11,7 @@ It is recommended to use uv to install the dependencies for faster installation:
 ```bash
 pip install --upgrade pip
 pip install uv
-uv pip install "sglang[all]>=0.4.6.post1"
+uv pip install "sglang[all]>=0.4.6.post2"
 ```
 
 **Quick Fixes to Common Problems**
@@ -29,7 +29,7 @@ uv pip install "sglang[all]>=0.4.6.post1"
 
 ```bash
 # Use the last release branch
-git clone -b v0.4.6.post1 https://github.com/sgl-project/sglang.git
+git clone -b v0.4.6.post2 https://github.com/sgl-project/sglang.git
 cd sglang
 
 pip install --upgrade pip
@@ -44,7 +44,7 @@ Note: For AMD ROCm system with Instinct/MI GPUs, do following instead:
 
 ```bash
 # Use the last release branch
-git clone -b v0.4.6.post1 https://github.com/sgl-project/sglang.git
+git clone -b v0.4.6.post2 https://github.com/sgl-project/sglang.git
 cd sglang
 
 pip install --upgrade pip
@@ -73,7 +73,7 @@ docker run --gpus all \
 Note: For AMD ROCm system with Instinct/MI GPUs, it is recommended to use `docker/Dockerfile.rocm` to build images, example and usage as below:
 
 ```bash
-docker build --build-arg SGL_BRANCH=v0.4.6.post1 -t v0.4.6.post1-rocm630 -f Dockerfile.rocm .
+docker build --build-arg SGL_BRANCH=v0.4.6.post2 -t v0.4.6.post2-rocm630 -f Dockerfile.rocm .
 
 alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
 --shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
@@ -82,11 +82,11 @@ alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/d
 drun -p 30000:30000 \
 -v ~/.cache/huggingface:/root/.cache/huggingface \
 --env "HF_TOKEN=<secret>" \
-v0.4.6.post1-rocm630 \
+v0.4.6.post2-rocm630 \
 python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
 
 # Till flashinfer backend available, --attention-backend triton --sampling-backend pytorch are set by default
-drun v0.4.6.post1-rocm630 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
+drun v0.4.6.post2-rocm630 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
 ```
 
 ## Method 4: Using docker compose
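The install guide now pins 0.4.6.post2 consistently across the uv, from-source, and Docker methods. A minimal sketch for confirming that an existing environment actually resolved the new release after rerunning the documented command (the `__version__` check is an assumption; the attribute is standard for Python packages but not shown in this diff):

```bash
# Re-run the documented install, then confirm the resolved version.
uv pip install "sglang[all]>=0.4.6.post2"
uv pip show sglang
python -c "import sglang; print(sglang.__version__)"  # expected: 0.4.6.post2 or newer
```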

_static/documentation_options.js (+1 -1)

@@ -1,5 +1,5 @@
 const DOCUMENTATION_OPTIONS = {
-VERSION: '0.4.6.post1',
+VERSION: '0.4.6.post2',
 LANGUAGE: 'en',
 COLLAPSE_INDEX: false,
 BUILDER: 'html',
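The `?v=` query strings updated in the HTML diffs (from 58fa0cba to 4f8e8cdc) are Sphinx's cache-busting checksums for this file, so they change whenever its contents, including `VERSION`, change. A quick illustrative check, assuming it is run from the built docs root shown in this commit, that no page still references the old digest:

```bash
# Confirm the bumped version constant and look for any stale cache-busting references.
grep "VERSION" _static/documentation_options.js
grep -rl "documentation_options.js?v=58fa0cba" --include="*.html" . || echo "no stale digests found"
```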

backend/attention_backend.html (+1 -1)

@@ -39,7 +39,7 @@
 <link rel="preload" as="script" href="../_static/scripts/pydata-sphinx-theme.js?digest=dfe6caa3a7d634c4db9b" />
 <script src="../_static/vendor/fontawesome/6.5.2/js/all.min.js?digest=dfe6caa3a7d634c4db9b"></script>
 
-<script src="../_static/documentation_options.js?v=58fa0cba"></script>
+<script src="../_static/documentation_options.js?v=4f8e8cdc"></script>
 <script src="../_static/doctools.js?v=9a2dae69"></script>
 <script src="../_static/sphinx_highlight.js?v=dc90522c"></script>
 <script src="../_static/clipboard.min.js?v=a7894cd8"></script>

backend/custom_chat_template.html (+1 -1)

@@ -39,7 +39,7 @@
 <link rel="preload" as="script" href="../_static/scripts/pydata-sphinx-theme.js?digest=dfe6caa3a7d634c4db9b" />
 <script src="../_static/vendor/fontawesome/6.5.2/js/all.min.js?digest=dfe6caa3a7d634c4db9b"></script>
 
-<script src="../_static/documentation_options.js?v=58fa0cba"></script>
+<script src="../_static/documentation_options.js?v=4f8e8cdc"></script>
 <script src="../_static/doctools.js?v=9a2dae69"></script>
 <script src="../_static/sphinx_highlight.js?v=dc90522c"></script>
 <script src="../_static/clipboard.min.js?v=a7894cd8"></script>
