
feat: generate config for upstreams with keepalive enabled#217

Open
kevin-secrist wants to merge 8 commits into lancachenet:master from kevin-secrist:generate-upstream-keepalive

Conversation

@kevin-secrist kevin-secrist commented Jan 14, 2026

Enables HTTP/1.1 keepalive connections to upstream CDNs, improving cache-miss download speeds. On my setup I was able to increase throughput for a single user/client from ~200 Mbps to ~1 Gbps for Steam (my ISP line speed). This will benefit everyone, but it is most obvious in home setups.

How it works

  • Auto-generates upstream blocks with keepalive pools from cache_domains.json at container startup
  • Maps CDN domains to their keepalive pools via $upstream_name variable
  • Enables HTTP/1.1 with Connection: "" header to reuse TCP connections
  • Unresolvable/wildcard domains fall back to direct proxy (no keepalive)
  • Periodically re-resolves DNS via a supervisord-managed refresh loop
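The proxy side follows the standard nginx upstream-keepalive pattern; the sketch below illustrates that pattern rather than quoting the PR's exact diff to 30_primary_proxy.conf:

```nginx
# illustrative sketch, not the literal PR diff
proxy_http_version 1.1;          # keepalive to upstreams requires HTTP/1.1
proxy_set_header Connection "";  # clear the default "Connection: close"
proxy_pass http://$upstream_name;
```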

As of nginx 1.27.3, the resolve parameter is available to non-commercial users, which would allow a much cleaner implementation of this. The refresh script and the scripted resolution of hosts could be removed at that point.
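For reference, a future config might look roughly like this (a sketch assuming nginx >= 1.27.3 semantics; the resolver address, zone size, and valid interval are illustrative):

```nginx
# hypothetical future form once the image ships nginx >= 1.27.3
resolver 1.1.1.1 valid=300s;            # would need to point at UPSTREAM_DNS

upstream assetcdn_101_arenanetworks_com {
    zone upstream_pool 64k;             # shared memory zone required by "resolve"
    server assetcdn.101.arenanetworks.com resolve;
    keepalive 16;
}
```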

Configuration

This feature is opt-in. Set the following environment variables:

Variable                  | Default | Description
ENABLE_UPSTREAM_KEEPALIVE | false   | Set to true to enable
UPSTREAM_REFRESH_INTERVAL | 1h      | How often to re-resolve DNS (set to 0 to disable refresh)
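For example (illustrative docker run flags; the volume paths and image tag are placeholders, not part of this PR):

```shell
docker run -d --name lancache \
  -e ENABLE_UPSTREAM_KEEPALIVE=true \
  -e UPSTREAM_REFRESH_INTERVAL=30m \
  -v /srv/lancache/cache:/data/cache \
  -v /srv/lancache/logs:/data/logs \
  lancachenet/monolithic:latest
```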

Changes

  • overlay/hooks/entrypoint-pre.d/16_generate_upstream_keepalive.sh - Entrypoint hook to generate upstream configs at startup
  • overlay/scripts/refresh_upstreams.sh - Periodic DNS refresh loop
  • overlay/etc/supervisor/conf.d/upstream_refresh.conf - Supervisord program for the refresh loop
  • overlay/etc/nginx/sites-available/upstream.conf.d/30_primary_proxy.conf - Modified to use HTTP/1.1 keepalive and route via $upstream_name

Generated at runtime:

  • /etc/nginx/conf.d/40_upstream_pools.conf - Upstream blocks with keepalive directives
  • /etc/nginx/conf.d/35_upstream_maps.conf - Domain to upstream mappings
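The underscore-delimited pool names in the generated files follow directly from the hostnames, since dots and dashes aren't valid in nginx upstream identifiers. A minimal sketch of the transformation (upstream_name_for is a hypothetical helper for illustration, not the PR's actual code):

```shell
# hypothetical helper: turn a hostname into a valid nginx upstream name
# ('.' and '-' both become '_', matching the generated examples below)
upstream_name_for() {
  echo "$1" | tr '.-' '__'
}

upstream_name_for "assetcdn.101.arenanetworks.com"  # -> assetcdn_101_arenanetworks_com
upstream_name_for "xbox-mbr.xboxlive.com"           # -> xbox_mbr_xboxlive_com
```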

Why this works

Many games/downloads fetch content as tiny chunks under 1 MB. Without keepalive, each chunk requires a new TCP connection, and the connection-setup overhead becomes massive. I personally tried adjusting other lancache settings (like slice size), but this connection overhead turned out to be the bottleneck.
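As a rough back-of-envelope illustration (all numbers here are assumptions for the sake of arithmetic, not measurements from this PR): a 30 GiB download in 1 MiB chunks means ~30,000 connections, and each new connection pays at least one round trip for the TCP handshake:

```shell
# assumed numbers: 30 GiB payload, 1 MiB chunks, 20 ms round-trip time
chunks=$((30 * 1024))   # number of 1 MiB chunks in 30 GiB
rtt_ms=20               # one handshake costs at least one RTT
echo "handshake latency if serialized: $((chunks * rtt_ms / 1000)) s"  # 614 s
```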

Closes #202

Generated config examples (removed most of the content for brevity)

/etc/nginx/conf.d/35_upstream_maps.conf

# Map hostnames to upstream pools for keepalive routing

map $http_host $upstream_name_default {
    hostnames;
    default $host;  # Fallback to direct proxy for unmapped domains
    assetcdn.101.arenanetworks.com assetcdn_101_arenanetworks_com;
    # snip
    xvcf2.xboxlive.com xvcf2_xboxlive_com;
}

# Steam user-agent detection - routes Steam client traffic to steam upstream pool
map $http_user_agent $upstream_name {
    default $upstream_name_default;
    ~Valve\/Steam\ HTTP\ Client\ 1\.0 lancache_steamcontent_com;
}

/etc/nginx/conf.d/40_upstream_pools.conf

# Auto-generated upstream pools with keepalive
# Generated from cache_domains.json at Thu 15 Jan 17:54:13 EST 2026

upstream assetcdn_101_arenanetworks_com {
    server 13.249.82.104;  # assetcdn.101.arenanetworks.com
    keepalive 16;
    keepalive_timeout 5m;
}

upstream assetcdn_102_arenanetworks_com {
    server 3.171.73.128;  # assetcdn.102.arenanetworks.com
    keepalive 16;
    keepalive_timeout 5m;
}

upstream assetcdn_103_arenanetworks_com {
    server 3.170.43.196;  # assetcdn.103.arenanetworks.com
    keepalive 16;
    keepalive_timeout 5m;
}

# ... snip

upstream xbox_mbr_xboxlive_com {
    server 104.97.85.167;  # xbox-mbr.xboxlive.com
    keepalive 16;
    keepalive_timeout 5m;
}

upstream xvcf1_xboxlive_com {
    server 23.53.11.15;  # xvcf1.xboxlive.com
    keepalive 16;
    keepalive_timeout 5m;
}

upstream xvcf2_xboxlive_com {
    server 23.3.75.133;  # xvcf2.xboxlive.com
    keepalive 16;
    keepalive_timeout 5m;
}

@kevin-secrist (Author)

I swear I did my testing with this config but I think I must have forgotten to re-check in a rush. This doesn't work for two reasons:

  1. We need a *.steamcontent.com map entry pointing to the steam upstream, which is simple enough to fix.
  2. The resolved domains for the upstreams just point back at nginx in a loop, because resolution can't be configured to use the UPSTREAM_DNS resolver.

It would be really nice if we could upgrade to a newer version of nginx (1.27.3 came out 2024-11-26), but otherwise I'm going to try to work around this. I might settle for a simpler manual mapping, though, if this gets too complicated or brittle.
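One possible direction for the workaround (a hypothetical sketch, not this PR's code): query UPSTREAM_DNS directly with dig instead of the container's own resolver, then keep only IPv4 answers so CNAME targets don't leak into the upstream blocks:

```shell
# keep only IPv4 answers; dig +short can emit CNAME targets as well as A records
is_ipv4() { grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; }

# intended usage (needs network):
#   dig +short A "$host" @"${UPSTREAM_DNS:-1.1.1.1}" | is_ipv4
printf '23.53.11.15\nedge.example.akamai.net.\n' | is_ipv4  # keeps only the IP
```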

@kevin-secrist (Author)

Added some more changes and re-verified that I get (basically) line-speed on an empty cache:

[9:57:35 PM] Starting Counter-Strike: Source
[9:58:01 PM] Finished in 23.4508 - 926.69 Mbit/s

[9:58:01 PM] Starting DOOM Eternal
[10:10:18 PM] Finished in 12:09.55 - 931.55 Mbit/s

[10:10:18 PM] Starting Dota 2
[10:15:04 PM] Finished in 04:35.30 - 929.52 Mbit/s

@VibroAxe (Member)

Hey @kevin-secrist

This looks interesting. We've thought about this for a while and have been trying to work out a decent way to implement it without the huge map generation, but haven't found an answer. I'll leave this with the team for some discussion on accepting it, but I see that the CI check is currently failing. We've had some issues with the test script on another PR, so I suggest you rebaseline cache_test.sh and run-tests.sh from master; if that doesn't fix it, then something is breaking in the new keepalive logic.

One thing I am concerned about is that this assumes all upstream caches support keepalive. What happens if an upstream doesn't? Do we fail gracefully?

echo "map \$http_host \$upstream_name {" >> "$MAPS_TMP_FILE"
echo " hostnames;" >> "$MAPS_TMP_FILE"
echo " default \$host; # Fallback to direct proxy for unmapped domains" >> "$MAPS_TMP_FILE"
echo " *.steamcontent.com lancache_steamcontent_com; # Redirect all steam traffic" >> "$MAPS_TMP_FILE"
Member

I don't like this hard-coded line, as it overrides the cache-domains logic. If you want this, it should be added in a custom fork of cache-domains, adding *.steamcontent.com to steam.txt. The original wildcard was removed and replaced with the trigger domain for good reasons, as Steam behaves differently through the lancache local trigger.

Member

Thinking about this more, it does need something, because the trigger domain is used for DNS, not for $http_host. Is the user agent passed through to the proxy? If so, something similar to the default cachemap should work:

echo "map \"\$http_user_agent£££\$http_host\" \$cacheidentifier {" >> $OUTPUTFILE
echo "    default \$http_host;" >> $OUTPUTFILE
echo "    ~Valve\\/Steam\\ HTTP\\ Client\\ 1\.0£££.* steam;" >> $OUTPUTFILE

Author

@VibroAxe thanks, that's a good idea - I've done this in cefd7bb

@kevin-secrist kevin-secrist force-pushed the generate-upstream-keepalive branch from c2251eb to cefd7bb Compare February 15, 2026 00:07
Author

kevin-secrist commented Feb 15, 2026

I haven't forgotten about this, I've just been a bit busy recently. I've made a change to use the Steam UA as you requested. I've also edited the config examples above to match.

Hey @kevin-secrist

This looks interesting. We've thought about this for a while and have been trying to work out a decent way to implement it without the huge map generation, but haven't found an answer. I'll leave this with the team for some discussion on accepting it, but I see that the CI check is currently failing. We've had some issues with the test script on another PR, so I suggest you rebaseline cache_test.sh and run-tests.sh from master; if that doesn't fix it, then something is breaking in the new keepalive logic.

One thing I am concerned about is that this assumes all upstream caches support keepalive. What happens if an upstream doesn't? Do we fail gracefully?

I'm not an expert with this, but based on my research most of this is covered in RFC 9112, Section 9.3 (Persistence), and Appendix C.2.2 covers keep-alive. These are my conclusions on what should happen, for what that's worth.

  1. In HTTP/1.1, persistence is the default behavior. A server opts out by sending Connection: close. No opt-in is needed, so any HTTP/1.1 upstream supports keepalive unless it explicitly declines. I have no source for this, but I imagine all the CDNs that lancache is generally used with support HTTP/1.1 by now; it would be much more expensive for them not to support persistent connections. Of course, I could be wrong, and there's always going to be an exception (see point 3).

  2. If an upstream understands persistence (e.g. HTTP/1.1+) but does not want to use it, it sends Connection: close back to the client. Nginx receives the response successfully, closes that connection, and opens a new one for the next request. The keepalive pool slot simply stays empty for that upstream. Behavior is identical to not having keepalive configured.

  3. If an upstream is HTTP/1.0, it doesn't support persistence by default. The Connection header we send is an HTTP/1.1 construct, but HTTP requires servers to ignore unrecognized headers, so a 1.0 server should skip that header, respond normally, and close the connection. Nginx would see the HTTP/1.0 status line, know not to pool the connection, and fall back to one connection per request. Additionally, per RFC 9110, if a server receives an HTTP/1.1 request and doesn't understand that version, it should respond as if it were an HTTP/1.0 request.

Relevant bits from the RFC

9.3. Persistence

HTTP/1.1 defaults to the use of "persistent connections", allowing multiple requests and responses to be carried over a single connection. HTTP implementations SHOULD support persistent connections.

A recipient determines whether a connection is persistent or not based on the protocol version and Connection header field (Section 7.6.1 of [HTTP]) in the most recently received message, if any:

  • If the "close" connection option is present (Section 9.6), the connection will not persist after the current response; else,
  • If the received protocol is HTTP/1.1 (or later), the connection will persist after the current response; else,
  • If the received protocol is HTTP/1.0, the "keep-alive" connection option is present, either the recipient is not a proxy or the message is a response, and the recipient wishes to honor the HTTP/1.0 "keep-alive" mechanism, the connection will persist after the current response; otherwise,
  • The connection will close after the current response.
    A client that does not support persistent connections MUST send the "close" connection option in every request message.

A server that does not support persistent connections MUST send the "close" connection option in every response message that does not have a 1xx (Informational) status code.

A client MAY send additional requests on a persistent connection until it sends or receives a "close" connection option or receives an HTTP/1.0 response without a "keep-alive" connection option.

In order to remain persistent, all messages on a connection need to have a self-defined message length (i.e., one not defined by closure of the connection), as described in Section 6. A server MUST read the entire request message body or close the connection after sending its response; otherwise, the remaining data on a persistent connection would be misinterpreted as the next request. Likewise, a client MUST read the entire response message body if it intends to reuse the same connection for a subsequent request.

A proxy server MUST NOT maintain a persistent connection with an HTTP/1.0 client (see Appendix C.2.2 for information and discussion of the problems with the Keep-Alive header field implemented by many HTTP/1.0 clients).

See Appendix C.2.2 for more information on backwards compatibility with HTTP/1.0 clients.

C.2.2. Keep-Alive Connections

In HTTP/1.0, each connection is established by the client prior to the request and closed by the server after sending the response. However, some implementations implement the explicitly negotiated ("Keep-Alive") version of persistent connections described in Section 19.7.1 of [RFC2068].

Some clients and servers might wish to be compatible with these previous approaches to persistent connections, by explicitly negotiating for them with a "Connection: keep-alive" request header field. However, some experimental implementations of HTTP/1.0 persistent connections are faulty; for example, if an HTTP/1.0 proxy server doesn't understand Connection, it will erroneously forward that header field to the next inbound server, which would result in a hung connection.

One attempted solution was the introduction of a Proxy-Connection header field, targeted specifically at proxies. In practice, this was also unworkable, because proxies are often deployed in multiple layers, bringing about the same problem discussed above.

As a result, clients are encouraged not to send the Proxy-Connection header field in any requests.

Clients are also encouraged to consider the use of "Connection: keep-alive" in requests carefully; while they can enable persistent connections with HTTP/1.0 servers, clients using them will need to monitor the connection for "hung" requests (which indicate that the client ought to stop sending the header field), and this mechanism ought not be used by clients at all when a proxy is being used.


kevin-secrist commented Feb 15, 2026

This time around I used my laptop for local testing (my server is a little incapacitated at the moment, for reasons unrelated to lancache). I created a docker-compose file with lancache-dns, monolithic, and steam-prefill, and set the prefill container's DNS to the lancache-dns container so it would use that rather than my server. The bandwidth is actually still better, but I am also counting the outbound SYN packets (i.e. new connections) leaving the monolithic container with tcpdump for comparison.

Docker Compose File
services:
  dns:
    image: lancachenet/lancache-dns:latest
    environment:
      UPSTREAM_DNS: 1.1.1.1
      LANCACHE_IP: 10.10.0.3
      USE_GENERIC_CACHE: "true"
    networks:
      lancache:
        ipv4_address: 10.10.0.2

  monolithic:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      UPSTREAM_DNS: 1.1.1.1
      ENABLE_UPSTREAM_KEEPALIVE: "true"
      DNS_BIND_IP: 10.10.0.2
    volumes:
      - cache:/data/cache
      - logs:/data/logs
    networks:
      lancache:
        ipv4_address: 10.10.0.3

  prefill:
    image: tpill90/steam-lancache-prefill:latest
    entrypoint: ["sleep", "infinity"]
    depends_on:
      - dns
    dns:
      - 10.10.0.2
    volumes:
      - prefill-config:/Config
    networks:
      lancache:
        ipv4_address: 10.10.0.4

networks:
  lancache:
    driver: bridge
    ipam:
      config:
        - subnet: 10.10.0.0/24

volumes:
  cache:
  logs:
  prefill-config:

In one shell, running prefill for 2 minutes:

docker compose exec monolithic bash -c 'rm -rf /data/cache/* && nginx -s reload' && docker compose exec prefill timeout 120 /SteamPrefill prefill --recent

In another, monitoring the monolithic container with tcpdump (note: locally I added tcpdump to the apt install list in the Dockerfile):

docker compose exec monolithic timeout 120 tcpdump -i any -n 'tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0 and not dst net 10.10.0.0/24 and not dst host 127.0.0.1'

ENABLE_UPSTREAM_KEEPALIVE: "true"

[11:59:33 PM] Starting Dota 2
[11:59:34 PM] Detected Lancache server at lancache.steamcontent.com [10.10.0.3]

Downloading.. ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   4% 00:34:12 1.2/29.8 GiB 119.6 Mbit/s

79 packets captured
79 packets received by filter
0 packets dropped by kernel

ENABLE_UPSTREAM_KEEPALIVE: "false"

[12:02:56 AM] Starting Dota 2
[12:02:56 AM] Detected Lancache server at lancache.steamcontent.com [10.10.0.3]

Downloading.. ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   2% 01:34:16 0.8/29.8 GiB 44.0 Mbit/s

3758 packets captured
3761 packets received by filter
0 packets dropped by kernel
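Comparing the two 120-second captures above, the reduction in new outbound connections works out to roughly:

```shell
# SYN counts from the two captures above
with_keepalive=79
without_keepalive=3758
echo "$((without_keepalive / with_keepalive))x fewer TCP handshakes"  # 47x
```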

I haven't done much testing of non-steam upstreams with the recent changes.

@kevin-secrist (Author)

Ubuntu 26.04 (which comes out in a few weeks) looks like it will ship nginx 1.28.2, so once the lancachenet/ubuntu image is updated we should be able to simplify much of the logic in this PR.

@VibroAxe any thoughts on the above? Do you have plans to update immediately or do you typically wait a bit?

@mastermc0

I get some errors on the xboxlive upstream when using this PR; let me know if you need more info/testing.
upstream-error.log

2026/03/27 12:38:28 [error] 1885#1885: *238036 connect() failed (101: Network is unreachable) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc HTTP/1.0", upstream: "http://199.232.214.172:80/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", host: "assets1.xboxlive.com"

2026/03/27 12:39:28 [error] 1887#1887: *238033 upstream timed out (110: Connection timed out) while reading upstream, client: 127.0.0.1, server: , request: "GET /4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc HTTP/1.0", upstream: "http://23.36.15.161:80/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", host: "assets1.xboxlive.com"

2026/03/27 12:39:28 [error] 1886#1886: *238027 upstream timed out (110: Connection timed out) while reading upstream, client: 127.0.0.1, server: , request: "GET /4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc HTTP/1.0", upstream: "http://23.36.15.151:80/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", host: "assets1.xboxlive.com"

2026/03/27 13:08:10 [error] 1886#1886: *4374 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc HTTP/1.0", upstream: "http://23.36.15.161:80/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", host: "assets1.xboxlive.com"

error.log

2026/03/27 12:39:28 [error] 1885#1885: *237253 upstream prematurely closed connection while reading upstream, client: 192.168.0.222, server: , request: "GET /4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc HTTP/1.1", upstream: "http://127.0.0.1:3128/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", host: "assets1.xboxlive.com"

2026/03/27 12:39:28 [error] 1887#1887: *237251 upstream prematurely closed connection while reading upstream, client: fdce:6f8f:beef:1:147:a3ef:6787:b6e2, server: , request: "GET /4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc HTTP/1.1", upstream: "http://127.0.0.1:3128/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", host: "assets1.xboxlive.com"

2026/03/27 13:22:19 [error] 5175#5175: *19 unexpected status code 500 in slice response while reading response header from upstream, client: 192.168.0.179, server: , request: "GET /4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc HTTP/1.1", subrequest: "/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", upstream: "http://127.0.0.1:3128/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", host: "assets1.xboxlive.com"

2026/03/27 13:22:19 [error] 5174#5174: *28 unexpected status code 500 in slice response while reading response header from upstream, client: 192.168.0.179, server: , request: "GET /4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc HTTP/1.1", subrequest: "/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", upstream: "http://127.0.0.1:3128/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", host: "assets1.xboxlive.com"

2026/03/27 13:22:19 [error] 5176#5176: *31 unexpected status code 500 in slice response while reading response header from upstream, client: 192.168.0.179, server: , request: "GET /4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc HTTP/1.1", subrequest: "/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", upstream: "http://127.0.0.1:3128/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", host: "assets1.xboxlive.com"

2026/03/27 13:23:20 [error] 5173#5173: *9 unexpected status code 500 in slice response while reading response header from upstream, client: 192.168.0.179, server: , request: "GET /4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc HTTP/1.1", subrequest: "/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", upstream: "http://127.0.0.1:3128/4/5493b477-b261-47c9-8338-848272bafe14/21b40136-2fdb-404b-be34-4656e6a88528/1.5199.5682.0.8c7090e6-c1a9-4c59-b559-33ac260725be/436609B6.FortniteClient_1.5199.5682.0_x64__9ncxwbgmmv7m8.msixvc", host: "assets1.xboxlive.com"

Development

Successfully merging this pull request may close these issues.

[Suggestion] Keepalive not being used for upstream servers
