Conversation

@MadLittleMods MadLittleMods commented Sep 23, 2025

Fix `run_coroutine_in_background(...)` incorrectly handling logcontext.

Regressed in #18900 (comment) (see conversation there for more context)

How is this a regression?

To give this an update with more hindsight: this logic *was* redundant with the early return, and it is safe to remove this complexity ✅

It seems like this actually has to do with completed vs incomplete deferreds...

To explain how things previously worked without the early-return shortcut:

With the normal case of an **incomplete awaitable**, we store the `calling_context` and the `f` function is called and runs until it yields to the reactor. Because `f` follows the logcontext rules, it sets the `sentinel` logcontext. Then in `run_in_background(...)`, we restore the `calling_context`, store the current `ctx` (which is `sentinel`), and return. When the deferred completes, we restore `ctx` (which is `sentinel`) before yielding to the reactor again (all good ✅)

In the other case, where we see a **completed awaitable**, we store the `calling_context` and the `f` function is called and runs to completion (no logcontext change). *This is where the shortcut would kick in, but I'm going to continue explaining as if we commented out the shortcut.* -- Then in `run_in_background(...)`, we restore the `calling_context` and store the current `ctx` (which is the same as the `calling_context`). Because the deferred is already completed, our extra callback is called immediately and we restore `ctx` (which is the same as the `calling_context`). Since we never yield to the reactor, the `calling_context` is perfect as that's what we want again (all good ✅)


But this also means that our early-return shortcut is no longer just an optimization and is *necessary* to act correctly in the **completed awaitable** case, as we want to return with the `calling_context` and not reset to the `sentinel` context. I've updated the comment in #18964 to explain the necessity, as it's currently just described as an optimization.

But because we made the same change to `run_coroutine_in_background(...)`, which didn't have the same early-return shortcut, we regressed the correct behavior ❌. This is being fixed in #18964

-- @MadLittleMods, #18900 (comment)

How did we find this problem?

This spawned from @wrjlewis seeing `Starting metrics collection 'typing.get_new_events' from sentinel context: metrics will be lost` in the logs:

More logs:

```
synapse.http.request_metrics - 222 - ERROR - sentinel - Trying to stop RequestMetrics in the sentinel context.
2025-09-23 14:43:19,712 - synapse.util.metrics - 212 - WARNING - sentinel - Starting metrics collection 'typing.get_new_events' from sentinel context: metrics will be lost
2025-09-23 14:43:19,713 - synapse.rest.client.sync - 851 - INFO - sentinel - Client has disconnected; not serializing response.
2025-09-23 14:43:19,713 - synapse.http.server - 825 - WARNING - sentinel - Not sending response to request <XForwardedForRequest at 0x7f23e8111ed0 method='POST' uri='/_matrix/client/unstable/org.matrix.simplified_msc3575/sync?pos=281963%2Fs929324_147053_10_2652457_147960_2013_25554_4709564_0_164_2&timeout=30000' clientproto='HTTP/1.1' site='8008'>, already disconnected.
2025-09-23 14:43:19,713 - synapse.access.http.8008 - 515 - INFO - sentinel - 92.40.194.87 - 8008 - {@me:wi11.co.uk} Processed request: 30.005sec/-8.041sec (0.001sec, 0.000sec) (0.000sec/0.002sec/2) 0B 200! "POST /_matrix/client/unstable/org.matrix.simplified_msc3575/
```

From the logs, we can see things relating to `typing.get_new_events` and `/_matrix/client/unstable/org.matrix.simplified_msc3575/sync`, which led me to try out Sliding Sync with the typing extension enabled and allowed me to reproduce the problem locally. Sliding Sync is a unique scenario: it's the only place we use `gather_optional_coroutines(...)` -> `run_coroutine_in_background(...)` (introduced in #17884), so it's the only code path that exhibits this behavior.

Testing strategy

1. Configure Synapse to enable MSC4186: Simplified Sliding Sync (which is actually gated under the MSC3575 experimental flag)

   ```yaml
   experimental_features:
     msc3575_enabled: true
   ```

2. Start Synapse: `poetry run synapse_homeserver --config-path homeserver.yaml`
3. Make a Sliding Sync request with one of the extensions enabled

   ```
   POST http://localhost:8008/_matrix/client/unstable/org.matrix.simplified_msc3575/sync
   {
     "lists": {},
     "room_subscriptions": {
       "!FlgJYGQKAIvAscfBhq:my.synapse.linux.server": {
         "required_state": [],
         "timeline_limit": 1
       }
     },
     "extensions": {
       "typing": {
         "enabled": true
       }
     }
   }
   ```

4. Open your homeserver logs and notice warnings about `Starting ... from sentinel context: metrics will be lost`

Dev notes


```
SYNAPSE_TEST_LOG_LEVEL=DEBUG poetry run trial tests.util.test_logcontext.LoggingContextTestCase
```

Pull Request Checklist

  • Pull request is based on the develop branch
  • Pull request includes a changelog file. The entry should:
    • Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from EventStore to EventWorkerStore.".
    • Use markdown where necessary, mostly for code blocks.
    • End with either a period (.) or an exclamation mark (!).
    • Start with a capital letter.
    • Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
  • Code style is correct (run the linters)

Comment on lines -899 to -900
do not run until called, and so calling an async function without awaiting
cannot change the log contexts.
Contributor Author

@MadLittleMods MadLittleMods Sep 23, 2025

Removed this last sentence because this isn't true on multiple levels:

  1. Calling an async function will run immediately until it yields (hits an `await` on an incomplete awaitable)
  2. Calling an async function can change the logcontext (and this happens all the time). This is exactly why we set the logcontext back to the `calling_context` before returning.
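To illustrate point 1: in plain Python, merely *creating* a coroutine runs nothing, but as soon as something drives it (which `defer.ensureDeferred` does immediately inside these helpers), the body runs up to the first incomplete `await`. A stdlib-only sketch (no Twisted; `Incomplete` is an invented stand-in for a deferred that hasn't fired):

```python
events = []

class Incomplete:
    """Toy awaitable that never becomes ready, like an unfired deferred."""
    def __await__(self):
        yield self  # suspend the coroutine here

async def f():
    events.append("started")  # runs as soon as the coroutine is driven
    await Incomplete()        # first incomplete await: control yields back
    events.append("resumed")  # only runs if something resumes us later

coro = f()
assert events == []           # creating the coroutine ran none of the body...
coro.send(None)               # ...but driving it (as ensureDeferred would)
assert events == ["started"]  # runs the body up to the first suspension
```

This is why the removed sentence was misleading here: by the time `run_in_background` returns, side effects of the function body (including logcontext changes) may already have happened.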

```python
# Wrap the coroutine in a deferred, which will have the side effect of executing the
# coroutine in the background.
d = defer.ensureDeferred(coroutine)
```

Contributor Author

@MadLittleMods MadLittleMods Sep 23, 2025

To fix the root problem, all we need to do is add the same fix that `run_in_background` already has for already-completed deferreds:

```python
# The deferred has already completed
if d.called and not d.paused:
    # The function should have maintained the logcontext, so we can
    # optimise out the messing about
    return d
```

But instead of duplicating all of this specialty logic and context into `run_coroutine_in_background(...)`, we can simplify by using `run_in_background(...)` directly, especially since `run_coroutine_in_background(...)` is just an ergonomic wrapper around `run_in_background(...)`.

See #18900 (comment) for more information on how this shortcut and the logcontext logic work for `run_in_background(...)`

Related conversation where I asked why we even have `run_coroutine_in_background(...)`: #18900 (comment)
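Shape-wise, the consolidation can be sketched like this. This is a hedged, self-contained toy: the real Synapse code uses Twisted's `defer.ensureDeferred` and the full `run_in_background` logcontext logic, whereas `ensure_deferred` and the dict-based "deferred" here are invented for illustration.

```python
def ensure_deferred(coroutine):
    """Toy stand-in for defer.ensureDeferred: eagerly drive the coroutine."""
    try:
        coroutine.send(None)
        return {"called": False}  # suspended at an incomplete await
    except StopIteration as e:
        return {"called": True, "result": e.value}  # ran to completion

def run_in_background(f):
    """Toy stand-in: the real one also does the calling-context save/restore
    and the completed-deferred early-return shortcut."""
    return f()

def run_coroutine_in_background(coroutine):
    # The whole fix, shape-wise: delegate to run_in_background instead of
    # duplicating its logcontext handling (and missing the shortcut).
    return run_in_background(lambda: ensure_deferred(coroutine))

async def already_done():
    return 42
```

With `already_done()` (no awaits), the toy deferred comes back already `called`, which is exactly the case where `run_in_background`'s shortcut must apply.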

Comment on lines 862 to 866
```python
# The function should have maintained the calling logcontext, so we can avoid
# messing with it further. Additionally, if the deferred has already completed,
# then it would be a mistake to then add a deferred callback (below) to reset
# the logcontext to the sentinel logcontext as that would run immediately
# (remember our goal is to maintain the calling logcontext when we return).
```
Contributor Author

This is no longer just an optimization since #18900 (comment). It turns out the previous behavior was there for a reason but was redundant with this shortcut. I've updated the comment here to reflect that.

And the problem was that we just didn't also have this shortcut in `run_coroutine_in_background(...)`, so we regressed things.

We've now consolidated things to re-use all of the `run_in_background(...)` logic.


```python
# before:
def _test_run_in_background(self, function: Callable[[], object]) -> defer.Deferred:
    sentinel_context = current_context()

# after:
async def _test_run_in_background(self, function: Callable[[], object]) -> None:
```
Contributor Author

The `run_in_background` test changes should be equivalent to before. This is just a small refactor to clean up these tests (use async/await) to make them more straightforward.

```python
await self._test_run_in_background(testfunc)

@logcontext_clean
async def test_run_coroutine_in_background(self) -> None:
```
Contributor Author

New run_coroutine_in_background tests are here.

We could use a similar pattern to `run_in_background(...)`, where we have a test helper that shares a bunch of the logic. Given we only have to test two variants with coroutines, I've opted not to DRY this out.

```python
self._check_test_key("sentinel")

@logcontext_clean
async def test_run_coroutine_in_background_with_nonblocking_coroutine(self) -> None:
```
Contributor Author

I'm not in love with the "nonblocking" terminology used here, but I've aligned with the prior art (`test_run_in_background_with_nonblocking_coroutine`) and expanded on what that means in the docstring ⏩

@@ -0,0 +1 @@
Fix `run_coroutine_in_background(...)` incorrectly handling logcontext.
Contributor Author

@MadLittleMods MadLittleMods Sep 23, 2025

Given this technically regressed in #18900, which is part of 1.139.0rc1, we should land another RC with this PR.

(The PR description here at the top explains the regression)

cc @anoadragon453

Comment on lines +805 to +813
- When `run_in_background` is called, the calling logcontext is stored ("original"), we kick off the background task in the current context, and we restore that original context before returning.
- For a completed deferred, that's the end of the story.
- For an incomplete deferred, when the background task finishes, we don't want to leak our context into the reactor, which would erroneously get attached to the next operation picked up by the event loop. We add a callback to the deferred which will clear the logging context after it finishes and yields control back to the reactor.
Contributor Author

Based on the explanation laid out in #18900 (comment)

@MadLittleMods MadLittleMods marked this pull request as ready for review September 23, 2025 23:58
@MadLittleMods MadLittleMods requested a review from a team as a code owner September 23, 2025 23:58
```python
    # (remember our goal is to maintain the calling logcontext when we return).
    return d

# The function may have reset the context before returning, so we need to restore it
```
Member

If we're trusting that the function maintained the calling logcontext just above, why here do we not trust that the function maintained the calling logcontext?

Contributor Author

@MadLittleMods MadLittleMods Sep 24, 2025

This plays into the Synapse logcontext rules:

> **Rules for functions returning awaitables:**
> - If the awaitable is already complete, the function returns with the same logcontext it started with.
> - If the awaitable is incomplete, the function clears the logcontext before returning; when the awaitable completes, it restores the logcontext before running any callbacks.

I've updated the comments with some further details:

```python
# The deferred has already completed
if d.called and not d.paused:
    # If the function messes with logcontexts, we can assume it follows the Synapse
    # logcontext rules (Rules for functions returning awaitables: "If the awaitable
    # is already complete, the function returns with the same logcontext it started
    # with."). If the function doesn't touch logcontexts at all, we can also assume
    # the logcontext is unchanged.
    #
    # Either way, the function should have maintained the calling logcontext, so we
    # can avoid messing with it further. Additionally, if the deferred has already
    # completed, then it would be a mistake to then add a deferred callback (below)
    # to reset the logcontext to the sentinel logcontext as that would run
    # immediately (remember our goal is to maintain the calling logcontext when we
    # return).
    return d

# Since the function we called may follow the Synapse logcontext rules (Rules for
# functions returning awaitables: "If the awaitable is incomplete, the function
# clears the logcontext before returning"), the function may have reset the
# logcontext before returning, so we need to restore the calling logcontext now
# before we return ourselves.
#
# Our goal is to have the caller logcontext unchanged after firing off the
# background task and returning.
set_current_context(calling_context)

# If the function we called is playing nice and following the Synapse logcontext
# rules, it will restore the original calling logcontext when the deferred completes;
# but there is nothing waiting for it, so it will get leaked into the reactor (which
# would then get picked up by the next thing the reactor does). We therefore need to
# reset the logcontext here (set the `sentinel` logcontext) before yielding control
# back to the reactor.
#
# (If this feels asymmetric, consider it this way: we are
# effectively forking a new thread of execution. We are
# probably currently within a ``with LoggingContext()`` block,
# which is supposed to have a single entry and exit point. But
# by spawning off another deferred, we are effectively
# adding a new exit point.)
d.addBoth(_set_context_cb, SENTINEL_CONTEXT)
```

Let me know if that makes more sense; otherwise, we can continue iterating in another PR.

Member

I see, so there's a difference in the potential current logcontext depending on whether the awaitable is complete or incomplete. That explains why we don't need to bother when it has completed.

Thanks for the clarification and the extra comments!

@MadLittleMods MadLittleMods enabled auto-merge (squash) September 24, 2025 15:15
@MadLittleMods MadLittleMods merged commit 0458f69 into develop Sep 24, 2025
76 of 78 checks passed
@MadLittleMods MadLittleMods deleted the madlittlemods/fix-run_coroutine_in_background-logcontext branch September 24, 2025 15:24
@MadLittleMods
Contributor Author

Thanks for the review @reivilibre and @anoadragon453 🦢

@wrjlewis also confirmed the problem was solved with these changes ✅ Thanks for the original report and trying things out early 🐃

anoadragon453 pushed a commit that referenced this pull request Sep 25, 2025