Version
1.6
[package]
name = "TEST"
version = "0.1.0"
edition = "2021"
[profile.release]
debug = true
[dependencies]
tokio = { version = "1.45", features = ["full"] }
hyper = { version = "1.6.0", features = ["server", "http1", "http2"] }
hyper-util = { version = "0.1", features = ["server", "tokio"] }
http-body-util = "0.1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
Platform
6.11.0-25-generic #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 17:20:50 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
Description
Hello :)
I am migrating a Java service to Rust, but so far the Hyper version's memory use appears to be unbounded: it reaches twice what the Java version used before we have to kill it. This service does very little CPU work, but it handles lots of concurrent requests.
After the 1st run, the service sits at 270 MB RSS.
After the 2nd run, it sits at 484 MB.
After the 3rd run: 699 MB.
After the 4th run: 868 MB.
A simple Node.js server completes the same test using about 100 MB of RAM. I am not sure what I am doing wrong :(
- I have tried with and without keepalive (see the sketch after this list).
- It only happens with concurrent requests; sequential requests do not cause the problem.
- I have tried with different allocators.
- Heaptrack shows the allocations coming from Hyper.
If it were reusing buffers I'd expect the memory to go up and then plateau, but it seems to keep increasing.
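For reference, the keepalive and allocator experiments looked roughly like the sketch below. It is only a sketch, not the exact code I ran: tikv-jemallocator stands in for the allocators I tried, the dependency version is approximate, and the handler it refers to is the handle_request from the full program further down.

use hyper::server::conn::http1;
use hyper::service::service_fn;
use hyper_util::rt::TokioIo;
use tokio::net::TcpListener;

// Allocator swap: add `tikv-jemallocator = "0.6"` to [dependencies] and
// register it as the global allocator (one of the allocators I tried).
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;

// Keepalive off: same accept loop as in the full program below, but with
// HTTP keep-alive disabled on the per-connection builder.
async fn serve_without_keepalive(listener: TcpListener) -> std::io::Result<()> {
    loop {
        let (stream, _) = listener.accept().await?;
        let io = TokioIo::new(stream);
        tokio::task::spawn(async move {
            let _ = http1::Builder::new()
                .keep_alive(false) // close the connection after each response
                .serve_connection(io, service_fn(handle_request))
                .await;
        });
    }
}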
use hyper::body::Bytes;
use hyper::server::conn::http1;
use hyper::service::service_fn;
use hyper::{Method, Request, Response, StatusCode};
use hyper_util::rt::TokioIo;
use http_body_util::Full;
use std::convert::Infallible;
use std::net::SocketAddr;
use tokio::net::TcpSocket;
use tracing::{error, info};

// GET /test returns "OK"; everything else returns 404.
async fn handle_request(req: Request<hyper::body::Incoming>) -> Result<Response<Full<Bytes>>, Infallible> {
    match (req.method(), req.uri().path()) {
        (&Method::GET, "/test") => {
            let resp = Response::new(Full::new(Bytes::from("OK")));
            Ok(resp)
        }
        _ => {
            let mut response = Response::new(Full::new(Bytes::from("Not Found")));
            *response.status_mut() = StatusCode::NOT_FOUND;
            Ok(response)
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    tracing_subscriber::fmt()
        .with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
        .init();

    let port: u16 = 3000;
    info!("Starting HTTP server on port {}", port);

    let addr = SocketAddr::from(([0, 0, 0, 0], port));
    let socket = TcpSocket::new_v4()?;
    // socket.set_keepalive(false)?; // TCP-level SO_KEEPALIVE (tried on and off)
    socket.bind(addr)?;
    let listener = socket.listen(1024)?;
    info!("HTTP server listening on {}", addr);

    loop {
        let (stream, _) = listener.accept().await?;
        let io = TokioIo::new(stream);
        // One spawned task per accepted connection.
        tokio::task::spawn(async move {
            if let Err(err) = http1::Builder::new()
                .serve_connection(io, service_fn(handle_request))
                .await
            {
                error!("Error serving connection: {:?}", err);
            }
        });
    }
}
Running it:
ulimit -n 65000
cargo run --release
To run the load test, if you have npm:
npm i -g autocannon
autocannon -c 20000 -d 10 http://localhost:3000/test
(this fires a total of around 100k requests on my machine)