feat: generate config for upstreams with keepalive enabled #217
kevin-secrist wants to merge 8 commits into lancachenet:master
Conversation
I swear I did my testing with this config but I think I must have forgotten to re-check in a rush. This doesn't work for two reasons:
It would be really nice if we could upgrade to a newer version of nginx (1.27.3 came out 2024-11-26), but otherwise I'm going to try to work around this. I might just settle for a simpler manual mapping if it gets too complicated or brittle.
Added some more changes and re-verified that I get (basically) line-speed on an empty cache:
Hey @kevin-secrist, this looks interesting. We've thought about this for a while and have been trying to work out a decent way to implement it without the huge map generation, but haven't found an answer. I'll leave this with the team for some discussion on accepting it, but I see that the CI check is currently failing. We've had some issues with the test script on another PR, so I suggest you rebaseline. One thing I am concerned about is that this assumes all upstream caches support keepalive: what happens if an upstream doesn't? Do we fail gracefully?
echo "map \$http_host \$upstream_name {" >> "$MAPS_TMP_FILE"
echo " hostnames;" >> "$MAPS_TMP_FILE"
echo " default \$host; # Fallback to direct proxy for unmapped domains" >> "$MAPS_TMP_FILE"
echo " *.steamcontent.com lancache_steamcontent_com; # Redirect all steam traffic" >> "$MAPS_TMP_FILE"
I don't like this hard-coded line, as it overrides the cache-domains logic. If you want this, it should be added on a custom fork of cache-domains, adding *.steamcontent.com into steam.txt. The original wildcard was removed and overridden with the trigger domain for good reasons, as Steam behaves differently through the lancache local trigger.
Thinking about this more, it does need something, because the trigger domain is used for DNS, not for $http_host. Is the user agent passed through to the proxy? If so, something similar to the default cachemap should work.
echo "map \"\$http_user_agent£££\$http_host\" \$cacheidentifier {" >> $OUTPUTFILE
echo " default \$http_host;" >> $OUTPUTFILE
echo " ~Valve\\/Steam\\ HTTP\\ Client\\ 1\.0£££.* steam;" >> $OUTPUTFILE
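Rendered, those echo lines would produce a map along these lines (my sketch; the closing brace and any further entries would come from elsewhere in the generator script):

```nginx
map "$http_user_agent£££$http_host" $cacheidentifier {
    default $http_host;
    # Steam's client user agent maps any host to the shared "steam" identifier
    ~Valve\/Steam\ HTTP\ Client\ 1\.0£££.* steam;
}
```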
c2251eb to cefd7bb
I haven't forgotten about this, I've just been a bit busy recently. I've made a change to use the Steam UA as you requested. I've also edited the config examples above to match.
I'm not an expert on this, but based on my research most of this is covered in RFC 9112: Section 9.3 covers persistence, and Appendix C.2.2 covers keep-alive. These are my conclusions on what should happen, for what that's worth.
Relevant bits from the RFC
This time around I used my laptop to do the testing locally (my server is a little incapacitated at the moment, for reasons unrelated to lancache). I created a docker-compose file with lancache-dns, monolithic, and steam-prefill, and pointed the prefill container's DNS within docker at the lancache-dns container so it would use that rather than my server. The bandwidth is actually still better, and I am also measuring the number of SYN/ACK packets coming out of the monolithic container using tcpdump for comparison.

Docker Compose File

services:
  dns:
    image: lancachenet/lancache-dns:latest
    environment:
      UPSTREAM_DNS: 1.1.1.1
      LANCACHE_IP: 10.10.0.3
      USE_GENERIC_CACHE: "true"
    networks:
      lancache:
        ipv4_address: 10.10.0.2
  monolithic:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      UPSTREAM_DNS: 1.1.1.1
      ENABLE_UPSTREAM_KEEPALIVE: "true"
      DNS_BIND_IP: 10.10.0.2
    volumes:
      - cache:/data/cache
      - logs:/data/logs
    networks:
      lancache:
        ipv4_address: 10.10.0.3
  prefill:
    image: tpill90/steam-lancache-prefill:latest
    entrypoint: ["sleep", "infinity"]
    depends_on:
      - dns
    dns:
      - 10.10.0.2
    volumes:
      - prefill-config:/Config
    networks:
      lancache:
        ipv4_address: 10.10.0.4
networks:
  lancache:
    driver: bridge
    ipam:
      config:
        - subnet: 10.10.0.0/24
volumes:
  cache:
  logs:
  prefill-config:
In one shell, running prefill for 2 minutes:

docker compose exec monolithic bash -c 'rm -rf /data/cache/* && nginx -s reload' && docker compose exec prefill timeout 120 /SteamPrefill prefill --recent

In another, monitoring the monolithic container with tcpdump (note: locally I added tcpdump into the apt install in the Dockerfile).

ENABLE_UPSTREAM_KEEPALIVE: "true"

ENABLE_UPSTREAM_KEEPALIVE: "false"

I haven't done much testing of non-Steam upstreams with the recent changes.
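The tcpdump command itself is collapsed above; a sketch of how SYN/ACK lines could be counted from tcpdump-style output (the helper name, sample lines, and capture filter are my assumptions, not the author's exact commands):

```shell
# count_synacks: count tcpdump lines showing a SYN/ACK ("Flags [S.]"),
# i.e. one completed upstream TCP handshake per matching line.
count_synacks() {
  grep -c 'Flags \[S\.\]'
}

# In practice the input would come from something like:
#   docker compose exec monolithic timeout 120 tcpdump -nl \
#     'tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack) and src port 443'
# Canned example with tcpdump-style lines:
printf '%s\n' \
  'IP 1.2.3.4.443 > 10.10.0.3.50000: Flags [S.], seq 1, ack 1' \
  'IP 10.10.0.3.50000 > 1.2.3.4.443: Flags [.], ack 2' \
  'IP 1.2.3.4.443 > 10.10.0.3.50001: Flags [S.], seq 5, ack 1' \
  | count_synacks   # prints 2
```

Fewer SYN/ACKs for the same download volume indicates connections are being reused rather than re-established per chunk.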
Ubuntu 26.04 (which comes out in a few weeks) looks like it's going to contain nginx 1.28.2, so once the lancachenet/ubuntu image is updated we should be able to simplify much of the logic in this PR. @VibroAxe any thoughts on the above? Do you have plans to update immediately or do you typically wait a bit?
I get some errors on the xboxlive upstream when using this PR; let me know if you need more info/testing.
error.log
Enables HTTP/1.1 keepalive connections to upstream CDNs, improving cache-miss download speeds. On my setup I was able to increase throughput for 1 user/client from 200Mbps to ~1Gbps for Steam (my ISP speed). This will benefit everyone, but it is most obvious in home situations.
How it works
- Parses cache_domains.json at container startup
- Routes each request to its upstream pool via the $upstream_name variable
- Sends an empty Connection: "" header to reuse TCP connections

In nginx 1.27.3, resolve is available for non-commercial users, which would be a much cleaner implementation of this. The refresh script and the scripted resolution of hosts could be removed at that point.

Configuration
This feature is opt-in. Set the following environment variables:
- ENABLE_UPSTREAM_KEEPALIVE (default false): set to true to enable
- UPSTREAM_REFRESH_INTERVAL (default 1h): DNS refresh interval (0 to disable refresh)

Changes
- overlay/hooks/entrypoint-pre.d/16_generate_upstream_keepalive.sh - Entrypoint hook to generate upstream configs at startup
- overlay/scripts/refresh_upstreams.sh - Periodic DNS refresh loop
- overlay/etc/supervisor/conf.d/upstream_refresh.conf - Supervisord program for the refresh loop
- overlay/etc/nginx/sites-available/upstream.conf.d/30_primary_proxy.conf - Modified to use HTTP/1.1 keepalive and route via $upstream_name

Generated at runtime:
- /etc/nginx/conf.d/40_upstream_pools.conf - Upstream blocks with keepalive directives
- /etc/nginx/conf.d/35_upstream_maps.conf - Domain-to-upstream mappings

Why this works
Many games/downloads fetch content as tiny chunks under 1MB. Without keepalive, each chunk requires a new TCP connection, which leads to massive overhead. Personally I tried adjusting lancache settings (like slice size), but this connection churn ended up being the bottleneck.
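On the proxy side, nginx only reuses upstream connections when the request is HTTP/1.1 and the Connection header is cleared; a sketch of the relevant directives (based on the description above, not necessarily the PR's exact lines):

```nginx
proxy_http_version 1.1;           # upstream keepalive requires HTTP/1.1
proxy_set_header Connection "";   # clear Connection so nginx can reuse the socket
proxy_pass http://$upstream_name; # route via the generated host-to-pool map
```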
Closes #202
Generated config examples (removed most of the content for brevity)
/etc/nginx/conf.d/35_upstream_maps.conf
/etc/nginx/conf.d/40_upstream_pools.conf
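Since the full examples are collapsed, a minimal sketch of what the two generated files might contain (pool name, server IPs, and the keepalive value are illustrative assumptions):

```nginx
# /etc/nginx/conf.d/35_upstream_maps.conf (sketch)
map $http_host $upstream_name {
    hostnames;
    default $host;  # fall back to direct proxying for unmapped domains
    *.steamcontent.com lancache_steamcontent_com;
}

# /etc/nginx/conf.d/40_upstream_pools.conf (sketch)
upstream lancache_steamcontent_com {
    server 203.0.113.10:80;
    server 203.0.113.11:80;
    keepalive 32;  # idle upstream connections kept open per worker for reuse
}
```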