fix(runtime): compare the results from factory functions correctly #420
Conversation
🚀 Snapshot Release

| Package | Version | Info |
|---|---|---|
| @graphql-tools/executor-http | 1.2.5-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
| @graphql-tools/federation | 3.0.9-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
| @graphql-mesh/fusion-runtime | 0.10.29-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
| @graphql-hive/gateway | 1.7.9-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
| @graphql-mesh/plugin-opentelemetry | 1.3.36-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
| @graphql-mesh/plugin-prometheus | 1.3.24-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
| @graphql-hive/gateway-runtime | 1.4.8-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
| @graphql-mesh/transport-common | 0.7.27-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
| @graphql-mesh/transport-http | 0.6.31-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
| @graphql-mesh/transport-http-callback | 0.5.18-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
| @graphql-mesh/transport-ws | 0.4.16-alpha-2f4a1173f1280a85a6f87d9c1ab2db7e2a48d413 | npm ↗︎ unpkg ↗︎ |
🚀 Snapshot Release (Bun Docker Image)

The latest changes of this PR are available as an image on GitHub Container Registry.

🚀 Snapshot Release (Node Docker Image)

The latest changes of this PR are available as an image on GitHub Container Registry.
Aside from one small comment, LGTM.
Hey - I pulled the snapshot version. It still aborts when there is a schema change (this is much rarer, ~once/day). I guess it's much harder logic to allow the in-flight requests to complete on the old schema whilst validating new requests with the new schema, keeping the old context around until all requests have completed. However, as a client, when that request is aborted we don't know whether the downstream request passed or failed. We can add retry logic on the specific abort error. If the failed request is a non-idempotent mutation, we'd need to investigate via extra reads and logic to see whether the mutation succeeded. Whilst it's a tiny edge case that we will probably never see, it is still there. One way around this is recycling our k8s pods on schema update, but that defeats the point of the
@fauna5 If you try the new alphas, the errors will be clearer in case of a schema reload, with a 503 status code and
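For clients that want to act on this, here is a hedged sketch of retry logic on the schema-reload error. It assumes the response carries HTTP 503 and an error with `extensions.code === 'SCHEMA_RELOAD'` (as described in this thread); the function and its parameters are illustrative, so adjust to the actual response shape you observe.

```ts
// Client-side sketch: retry operations that were aborted by a schema reload.
// Non-idempotent mutations should NOT be blindly retried; they need
// application-level verification instead (see the comment above).
async function fetchWithSchemaReloadRetry(
  url: string,
  body: { query: string; variables?: Record<string, unknown> },
  maxRetries = 3,
): Promise<unknown> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify(body),
    });
    const result = await res.json();
    const isSchemaReload =
      res.status === 503 &&
      Array.isArray(result?.errors) &&
      result.errors.some(
        (e: { extensions?: { code?: string } }) =>
          e.extensions?.code === 'SCHEMA_RELOAD',
      );
    if (!isSchemaReload || attempt >= maxRetries) {
      return result;
    }
    // Otherwise loop and retry against the freshly loaded schema.
  }
}
```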
Co-authored-by: Denis Badurina <[email protected]>
Fixes #419
Leave the supergraph configuration handling logic to the fusion-runtime package, so it can compare the raw supergraph SDL directly inside the unified graph manager to decide whether the supergraph has changed.
Previously, the SDL was parsed by the gateway runtime, and the printed SDLs could differ on the manager side, so the manager wrongly assumed the supergraph had changed whenever the user supplied the supergraph fetcher as a factory function.
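To illustrate the idea (this is a minimal sketch, not the actual implementation; `rebuildUnifiedGraph` is a hypothetical helper), the manager can keep the last raw SDL string and only rebuild when a newly fetched SDL differs:

```ts
// Minimal sketch, assuming a fetcher that resolves to the raw supergraph SDL.
let lastFetchedSdl: string | undefined;

function rebuildUnifiedGraph(sdl: string): void {
  // ... recreate transports/executors from the new SDL (hypothetical) ...
}

async function ensureSupergraph(
  fetchSupergraphSdl: () => Promise<string>,
): Promise<void> {
  const newSdl = await fetchSupergraphSdl();
  // Compare the bare SDL strings directly; no parse/print round-trip that
  // could normalize the document differently on each side.
  if (newSdl !== lastFetchedSdl) {
    lastFetchedSdl = newSdl;
    rebuildUnifiedGraph(newSdl);
  }
}
```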
Another set of improvements:

- A SCHEMA_RELOAD error is thrown while the transports and executors are being recreated.
- A SHUTTING_DOWN error is thrown while the transports and executors are being cleaned up.

Previously, these errors were only thrown for subscriptions, not for other types of operations. The errors thrown during these two cleanup and restart processes were also cryptic; now the two errors mentioned above are thrown with clearer messages.
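For illustration, a sketch of what such errors could look like, assuming graphql-js `GraphQLError` and the HTTP-status-via-extensions convention; the exact codes, messages, and construction in the package may differ.

```ts
import { GraphQLError } from 'graphql';

// Thrown for in-flight operations while transports/executors are recreated
// after a supergraph change.
const schemaReloadError = new GraphQLError(
  'Operation aborted because the supergraph is being reloaded',
  { extensions: { code: 'SCHEMA_RELOAD', http: { status: 503 } } },
);

// Thrown for in-flight operations while transports/executors are cleaned up
// during gateway shutdown.
const shuttingDownError = new GraphQLError(
  'Operation aborted because the gateway is shutting down',
  { extensions: { code: 'SHUTTING_DOWN', http: { status: 503 } } },
);
```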
No more timers

Instead of setTimeout, on each request (just as we do on the initial request) we compare the last fetch time with the current time to decide whether the supergraph should be refetched. We use a shared promise here to prevent race conditions.

This relaxes usage in serverless, because some serverless environments complain about timers. It also relaxes the event loop, which won't stay busy when there are no requests in the background. In case of clustering, it also relieves the supergraph source: with setTimeout, all members of the cluster would send requests to the source (CDN etc.) at the same time; now they make those requests lazily.
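A rough sketch of this lazy polling pattern (the interval constant and function names are illustrative, not the package's actual API):

```ts
// Every request checks how stale the supergraph is; if a refetch is due,
// all concurrent requests await one shared promise so the source is hit once.
const POLL_INTERVAL_MS = 30_000; // illustrative polling interval

let lastFetchTime = 0;
let refetchPromise: Promise<void> | undefined;

async function ensureFreshSupergraph(
  refetchSupergraph: () => Promise<void>,
): Promise<void> {
  if (Date.now() - lastFetchTime < POLL_INTERVAL_MS) {
    return; // still fresh; nothing to do and no background timer needed
  }
  // The shared promise prevents a thundering herd of concurrent refetches.
  refetchPromise ??= refetchSupergraph()
    .then(() => {
      lastFetchTime = Date.now();
    })
    .finally(() => {
      refetchPromise = undefined;
    });
  return refetchPromise;
}
```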