---
myst:
  html_meta:
    description: Learn how to investigate and resolve Varnish errors on Hypernode by checking NGINX and Varnish logs, identifying header and workspace issues, and applying the correct buffer and workspace settings.
    title: Investigating Varnish errors on Hypernode
---

# Investigating Varnish errors

When Varnish is enabled, two common HTTP errors can occur: **502 Bad Gateway** and **503 Service Unavailable**. Both are related to how NGINX and Varnish handle response headers and buffers, but they have different causes and solutions. This article guides you through identifying and resolving both.

## 502 Bad Gateway

### What causes it?

One common cause of a `502 Bad Gateway` error with Varnish enabled is that NGINX receives response headers from Varnish that exceed its configured buffer sizes.

This can happen after enabling Varnish or after a change that increases the size of response headers, for example:

- large cookies
- many `Set-Cookie` headers
- additional custom response headers
### Step 1: Check the NGINX Error Log
25+
26+
Inspect `/var/log/nginx/error.log` and look for the following message:
27+
28+
```console
29+
upstream sent too big header while reading response header from upstream
30+
```
31+
32+
If this message is present, increase the NGINX buffer sizes used for upstream.
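A quick way to confirm this is to grep the error log for that message. The snippet below runs the same filter against a hypothetical sample entry standing in for the real log, so it can be tried anywhere; on a Hypernode you would grep the log file directly:

```shell
# Hypothetical sample entry standing in for /var/log/nginx/error.log
sample='2026/02/26 06:55:31 [error] 1234#0: *1 upstream sent too big header while reading response header from upstream'

# Against the real log you would run:
#   grep -c 'upstream sent too big header' /var/log/nginx/error.log
printf '%s\n' "$sample" | grep -c 'upstream sent too big header'
# prints: 1
```

A non-zero count confirms the buffer sizes are the problem and the fix below applies.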

### Step 2: Solution

Create a custom NGINX config file at `~/nginx/server.header_buffer` with the following content:

```console
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
```

This increases the buffer sizes NGINX uses when reading response headers from upstream (Varnish), which resolves the "too big header" issue in the vast majority of cases.

```{tip}
After creating the file, NGINX will be reloaded automatically.
```

## 503 Service Unavailable (Backend Fetch Failed)

### What causes it?

A `503` error occurs when Varnish itself runs out of workspace memory while processing the response from the backend (e.g. PHP-FPM). This is an internal Varnish issue, visible in the Varnish log as an `out of workspace (Bo)` error.

### Step 1: Check the NGINX Access Log

Start by checking `/var/log/nginx/access.log` for any `503` responses. Look for a line similar to the following:

```log
{"time":"2026-02-26T06:55:31+00:00", "remote_addr":"122.173.26.219", "remote_user":"", "host":"www.domain.com", "request":"GET /some/url/", "status":"503", "body_bytes_sent":"552", "referer":"", "user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36", "request_time":"0.000", "handler":"varnish", "country":"NL", "server_name":"www.domain.com", "port":"443", "ssl_cipher":"TLS_AES_128_GCM_SHA256", "ssl_protocol":"TLSv1.3"}
```
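Because the access log is written as one JSON object per line, grepping the `status` and `handler` fields is enough to isolate `503` responses handled by Varnish. The snippet below pipes a hypothetical, shortened sample line standing in for the real log through the same filter you would run against `/var/log/nginx/access.log`:

```shell
# Hypothetical, shortened sample line standing in for /var/log/nginx/access.log
line='{"time":"2026-02-26T06:55:31+00:00", "status":"503", "handler":"varnish"}'

# Against the real log you would run:
#   grep '"status":"503"' /var/log/nginx/access.log | grep '"handler":"varnish"'
printf '%s\n' "$line" | grep '"status":"503"' | grep -c '"handler":"varnish"'
# prints: 1
```

If matches show `"handler":"varnish"`, continue to the Varnish log to find the root cause.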

### Step 2: Check the Varnish Log

If you confirmed a `503` response, inspect the Varnish logs using `varnishlog` to identify the root cause. Look for lines like the following:

```log
- FetchError workspace_backend overflow
- BackendClose 24 boot.default
- Timestamp Error: 1772091843.439388 0.032699 0.000160
- BerespProtocol HTTP/1.1
- Error out of workspace (Bo)
- LostHeader 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- Error out of workspace (Bo)
```

The key indicators are:

- `FetchError: workspace_backend overflow` means Varnish could not allocate enough workspace to process the backend response.
- `Error: out of workspace (Bo)` means the backend object workspace (`Bo`) is too small for the response headers being returned.
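If you have captured `varnishlog` output to a file (for example with `varnishlog -w` and read back with `varnishlog -r`), a simple grep pulls out just these indicators. As a stand-alone illustration, the snippet below filters a shortened, hypothetical stand-in for such a transcript:

```shell
# Shortened, hypothetical stand-in for a captured varnishlog transcript
transcript='- FetchError workspace_backend overflow
- BerespProtocol HTTP/1.1
- Error out of workspace (Bo)
- LostHeader 503'

# Count only the workspace-related indicator lines
printf '%s\n' "$transcript" | grep -cE 'FetchError|out of workspace'
# prints: 2
```

The same pattern (`FetchError|out of workspace`) works against live or saved `varnishlog` output.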

### Step 3: Solution

#### Increase the Varnish backend workspace

Increase the backend workspace limit using the `hypernode-systemctl` CLI:

```console
hypernode-systemctl settings varnish_workspace_backend 256k
```

A value of `256k` is a good starting point; increase further if the error persists.

#### Increase the response header buffer sizes

If the backend is sending unusually large response headers, also raise the following settings:

```console
# Maximum size of a single response header line
hypernode-systemctl settings varnish_http_resp_hdr_len 8k

# Maximum total size of all response headers combined
hypernode-systemctl settings varnish_http_resp_size 32k
```

```{important}
After changing these settings, Varnish will restart automatically. Allow a moment for it to reload before testing.
```

### Verification

After applying the changes, monitor the Varnish log to confirm `503` errors are no longer occurring:

```console
varnishlog -q "BerespStatus == 503"
```

If errors continue, consider gradually increasing the workspace values further (e.g., `512k` for `varnish_workspace_backend`).

---
125+
## Summary
126+
127+
| Error | Logged in | Root cause | Fix |
128+
|---|---|---|---|
129+
| **502** | `/var/log/nginx/error.log` | Nginx buffer too small for response headers from Varnish | Add `~/nginx/server.header_buffer` with increased buffer settings |
130+
| **503** | `/var/log/nginx/access.log` + `varnishlog` | Varnish backend workspace too small | Increase `varnish_workspace_backend` (and optionally `varnish_http_resp_hdr_len` / `varnish_http_resp_size`) via `hypernode-systemctl` |
131+
132+
## Additional Information
133+
134+
| Setting | Description | Default | Recommended |
135+
|---|---|---|---|
136+
| `varnish_workspace_backend` | Memory allocated for processing backend responses | `64k` | `256k`+ |
137+
| `varnish_http_resp_hdr_len` | Maximum size of a single response header | `8k` | `8k``16k` |
138+
| `varnish_http_resp_size` | Maximum total size of all response headers | `32k` | `32k``64k` |
139+
140+
If the problem persists after applying these fixes, contact support for further assistance.
141+
142+
For more information about Varnish configuration and tuning, see our
143+
[documentation on improving Varnish hit rate](https://docs.hypernode.com/hypernode-platform/varnish/improving-varnish-hit-rate-on-hypernode.html)
144+
and the official
145+
[Varnish documentation](https://varnish-cache.org/docs/).
