Support retries of failed proxy requests #2414
Conversation
I think we need to edit proxyRaw so all errors are actually set as errors: c.Set("_error", fmt.Sprintf("proxy raw, hijack error=%v, url=%s", t.URL, err)) stores a string, not an error.

Good catch, I'll do this as part of this PR.
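For context, a minimal sketch of the change being discussed: store a real error value under the context key instead of a formatted string, so retry logic that checks for an error can see the failure. The helper name and the use of *echo.HTTPError here are illustrative assumptions, not the merged code.

```go
package proxyretry

// Illustrative sketch only, not the merged echo change: the point is to store
// an error value under the "_error" context key instead of a plain string,
// so code that later checks for an error (e.g. retry logic) can detect it.

import (
	"fmt"
	"net/http"
	"net/url"

	"github.com/labstack/echo/v4"
)

// setProxyRawError is a hypothetical helper mirroring the snippet quoted above.
func setProxyRawError(c echo.Context, target *url.URL, err error) {
	// Before: c.Set("_error", fmt.Sprintf("proxy raw, hijack error=%v, url=%s", ...))
	// stored a string, which is not an error.
	// After: wrap the message in an error type such as *echo.HTTPError.
	c.Set("_error", echo.NewHTTPError(
		http.StatusBadGateway,
		fmt.Sprintf("proxy raw, hijack error=%v, url=%s", err, target),
	))
}
```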
…check for previous error
Codecov Report

@@          Coverage Diff           @@
##          master    #2414  +/-   ##
=========================================
  Coverage       ?   92.84%
=========================================
  Files          ?       39
  Lines          ?     4555
  Branches       ?        0
=========================================
  Hits           ?     4229
  Misses         ?      237
  Partials       ?       89

☔ View full report in Codecov by Sentry.
LGTM, better than my ideas.
Just fix these linting errors and we are good.
P.S. The Makefile default target is helpful here.
@lammel, do you want to take a look?
@mikemherron could you please also add a PR for the docs: https://github.com/labstack/echox/blob/master/website/content/middleware/proxy.md
Yes - I will try to do this later today or, failing that, tomorrow at the latest.
Docs PR: labstack/echox#281
Nice work! Although I don't like the name RetryFilter, I failed to come up with something better, so let's stick with it. What I'd like to see is an additional test for a timeout of a proxy target (quite common due to firewall or load issues).
Yes, I know what you mean; there is nothing else I could see in the code base with
That's a good point. I won't have time to look at it this week, but will have time next week if you'd prefer to hold off merging until then.
I failed too. So let's stick with it for now.
I'd prefer to wait as we are not in a hurry. Take your time; looking forward to it.
Added a new test with a timing-out backend that sends 20 concurrent requests. The behaviour of the timeouts seems fine, but it does raise another interesting issue: when using the round robin load balancer, the index of the current target is shared amongst all requests. This means it is possible for a failing request to end up retrying against the same backend, since other concurrent requests will have incremented the current target index. This makes sense, but probably won't be what users expect. In the test, we have 2 backends configured, 1 that will always time out and another that will always succeed, and I'm not sure there is a simple fix to this.

The expected behaviour, IMO, should be for an individual request to somehow keep track of which backend it tried, and then ask the balancer for the next one relative to that, rather than the "next" one as determined by some global state. This could be done by adding another argument on to the balancer's Next method.

This limitation means the retry feature is useful for skipping over intermittent failures, but less useful in cases where an entire instance becomes unavailable. For now, I just made the test pass on 502 errors (backend unavailable). Interested to hear any thoughts on potential solutions.
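To make the scenario concrete, here is a hedged sketch of what such a test could look like. The RetryCount field name, the Transport usage, and the exact setup are assumptions based on this thread rather than the PR's actual test code.

```go
package proxyretry_test

// Hedged sketch of a test along the lines described above: one backend that
// hangs past the client timeout, one that responds immediately, and concurrent
// requests through the proxy middleware with retries enabled.

import (
	"net/http"
	"net/http/httptest"
	"net/url"
	"sync"
	"testing"
	"time"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func TestProxyRetryWithTimingOutTarget(t *testing.T) {
	// Backend that never answers in time.
	slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second)
	}))
	defer slow.Close()

	// Backend that always succeeds.
	ok := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer ok.Close()

	slowURL, _ := url.Parse(slow.URL)
	okURL, _ := url.Parse(ok.URL)

	e := echo.New()
	e.Use(middleware.ProxyWithConfig(middleware.ProxyConfig{
		Balancer: middleware.NewRoundRobinBalancer([]*middleware.ProxyTarget{
			{URL: slowURL},
			{URL: okURL},
		}),
		RetryCount: 1, // assumed field name from this PR
		// Assumes ProxyConfig exposes a Transport field; a short header timeout
		// turns the slow backend into a proxy failure quickly.
		Transport: &http.Transport{ResponseHeaderTimeout: 100 * time.Millisecond},
	}))

	srv := httptest.NewServer(e)
	defer srv.Close()

	var wg sync.WaitGroup
	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			res, err := http.Get(srv.URL)
			if err != nil {
				t.Error(err)
				return
			}
			defer res.Body.Close()
			// Because the round robin index is shared across requests, a retry
			// can land on the timing-out backend again, so 502 is tolerated.
			if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusBadGateway {
				t.Errorf("unexpected status: %d", res.StatusCode)
			}
		}()
	}
	wg.Wait()
}
```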
@mikemherron as we/you moved
Yes, that's a good point. Should we change the provided round robin and random balancer implementations to do this? Or leave it to users who want that behaviour?
It makes sense for these default implementations to avoid serving the same target for the next try. Do you feel adventurous and have time for this enhancement? If this feature solves more problems than it creates, it is probably worth implementing.
I went to do that but realised the balancers don't have access to the
It didn't seem to make as much sense to do this on the random balancer. We could keep getting random targets until we get one that is not the same, but it seems sort of wasteful. We could start iterating from the previous index on retries (similar to what's being done in the round robin balancer), but I think if you make a choice to use the random balancer that would be unexpected. Let me know what you think...
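For illustration, a hedged sketch of the retry-aware round robin idea being discussed. The "_retry_target" context key and the overall shape are assumptions made up for this example; they are not echo's implementation.

```go
package proxyretry

// Hedged sketch: a round robin balancer that, when a request is retried,
// skips the target it handed out for that request last time.

import (
	"sync"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

type retryAwareRoundRobin struct {
	mu      sync.Mutex
	i       int
	targets []*middleware.ProxyTarget
}

// Compile-time check that the sketch satisfies echo's ProxyBalancer interface.
var _ middleware.ProxyBalancer = (*retryAwareRoundRobin)(nil)

func (b *retryAwareRoundRobin) AddTarget(t *middleware.ProxyTarget) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.targets = append(b.targets, t)
	return true
}

func (b *retryAwareRoundRobin) RemoveTarget(name string) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	for i, t := range b.targets {
		if t.Name == name {
			b.targets = append(b.targets[:i], b.targets[i+1:]...)
			return true
		}
	}
	return false
}

func (b *retryAwareRoundRobin) Next(c echo.Context) *middleware.ProxyTarget {
	b.mu.Lock()
	defer b.mu.Unlock()
	if len(b.targets) == 0 {
		return nil
	}
	// Hypothetical per-request key recording the target served last time.
	prev, _ := c.Get("_retry_target").(*middleware.ProxyTarget)
	t := b.targets[b.i%len(b.targets)]
	b.i++
	// On a retry, skip the target that already failed for this request
	// (only meaningful when more than one target is configured).
	if prev != nil && t == prev && len(b.targets) > 1 {
		t = b.targets[b.i%len(b.targets)]
		b.i++
	}
	c.Set("_retry_target", t)
	return t
}
```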
Alright, I am merging this PR. The part I do not agree with is private/internal, so we can revise it if need be. @mikemherron thank you for the work and for being patient with me.
No problem at all @aldas, I totally understand you need to do what you think best for the project. Thanks for all your input! |
Implements #2372
Support for retrying proxy requests that fail due to an unavailable backend instance.
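A hedged usage sketch of the feature described above. RetryFilter follows the naming discussed in this thread; treat RetryCount and the exact signatures as assumptions and refer to the docs PR linked above for the authoritative description.

```go
package main

// Sketch of configuring the proxy middleware with retries, under the
// assumptions stated above.

import (
	"net/http"
	"net/url"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {
	e := echo.New()

	u1, _ := url.Parse("http://backend-1:8080") // example backend addresses
	u2, _ := url.Parse("http://backend-2:8080")

	e.Use(middleware.ProxyWithConfig(middleware.ProxyConfig{
		Balancer: middleware.NewRoundRobinBalancer([]*middleware.ProxyTarget{
			{URL: u1},
			{URL: u2},
		}),
		// Assumed field: retry a failed request against the next target
		// up to 2 more times.
		RetryCount: 2,
		// Optional filter: only retry when the target was unreachable
		// (502-style failures), not on ordinary HTTP error responses.
		RetryFilter: func(c echo.Context, err error) bool {
			if he, ok := err.(*echo.HTTPError); ok {
				return he.Code == http.StatusBadGateway
			}
			return false
		},
	}))

	e.Logger.Fatal(e.Start(":1323"))
}
```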