
[instrumentation-fetch] Use msw for fetch instrumentation tests #5282

Open · wants to merge 19 commits into base: main
Conversation

chancancode
Contributor

@chancancode chancancode commented Dec 18, 2024

Which problem is this PR solving?

Currently the test suite relies heavily on mocking: specifically, it uses sinon.stub(window, 'fetch') to replace the global with a custom function that is intended to work similarly to the real one. However, the actual behavior of fetch is complicated and very difficult to get right, with many subtle interactions around CORS, redirects, resource timing, etc.

Issues like #5122 are extreme examples of the problem (assuming the report is accurate): the current code and the "fake fetch" assume a browser behavior that doesn't actually exist, and the current tests explicitly confirm that non-existent behavior based on the bad stub.

This is a problem, considering that the point of this package (and consequently, what we are trying to test) is to:

  1. replace the globals with an instrumented version without introducing any breakages
  2. extract the relevant information from the browser into the spans

Essentially, we are testing that "if our assumptions about how the browser works are correct, then our instrumentation code behaves correctly".

The current factoring of the tests – a big setup block that needs to handle all the various edge cases under test – is also quite difficult to work with. Adding test coverage for things like redirects and opaque responses would be quite complex.

In the case of fetch, there is a better way to go about this: use as much of the real browser functionality as possible.

Service Workers allow webpages to intercept network requests (like fetch() and XHR, but also other resources), and msw is a library that makes this work nicely for testing/development workflows. Essentially, it allows mocking/intercepting any fetch() request in the tests while still invoking the actual native fetch() function in the browser.
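As a rough sketch (assuming msw's v2 browser API; the handler path and response here are illustrative, not from the PR), a test setup registers handlers and starts the worker once:

```javascript
// Illustrative sketch of an msw v2 browser setup (not the actual PR code).
// The worker script generated by `npx msw init` must be served at the root.
import { http, HttpResponse } from 'msw';
import { setupWorker } from 'msw/browser';

const worker = setupWorker(
  // Requests to this path are answered by the handler, but the page still
  // calls the real native fetch(); interception happens in the Service Worker.
  http.get('/api/resource', () => HttpResponse.json({ ok: true }))
);

await worker.start();

// Instrumented code under test can now call fetch() as usual:
const res = await fetch('/api/resource');
console.log(res.status); // a real Response, with real resource timing
```

Because the request actually leaves the page and comes back through the Service Worker, the browser produces real resource-timing entries, which is exactly what the instrumentation's PerformanceObserver consumes.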

Improvements Made

  • Completely refactored all fetch instrumentation tests
  • Use Service Worker via msw
  • No longer mocks/stubs browser functionality in these tests – all fetch() requests are actually made, and all timing information comes from the real resource-timing browser APIs
  • Broke up the setup code into smaller pieces specific to each group of tests
  • Removed incorrect/unrealistic CORS tests (CORS Preflight Requests not accounted for #5122)
  • Improved type safety (no more any)
  • General small improvements along the way

Future Improvements

Potentially these tests can be run in additional browsers (e.g. Firefox) by changing the karma config, to improve coverage. This would ensure we aren't accidentally relying on any Chrome-specific behavior in the instrumentation code.

This approach also works with XHR, so we could do the same refactor to those tests. (I'm unsure if I'll personally have time to finish that, but I think this sets up a good foundation/framework to follow if someone wants to do it.)

TODOs

  • Code review(s) – for your convenience, I broke this PR up into a series of commits, each porting one group of related tests; I'd recommend reviewing it that way. (I also mostly kept the tests in the same order, if you want to just open both files side-by-side and review that way.)
  • Determine how to proceed with fix(inst-fetch,inst-xhr) Ignore network events with zero-timing #5332 and either
  • Remove the 500ms timeout in each test – currently, these tests are a bit slow because I added an artificial 500ms timeout to each call to fetch() to ensure the resource-timing information made it to the instrumentation's PerformanceObserver (plus the instrumentation code also has its own 300ms artificial timeout)

@chancancode chancancode requested a review from a team as a code owner December 18, 2024 09:29
@chancancode
Contributor Author

Discussed this at the Dec 18 SIG meeting; the group seems generally on board with trying this out, so I'll keep pushing forward on this and report any issues with the approach.

@chancancode chancancode changed the title [instrumentation-fetch] [Research/Discussion] Use msw for fetch (and XHR) instrumentation tests [instrumentation-fetch] Use msw for fetch (and XHR) instrumentation tests Jan 8, 2025
@chancancode chancancode marked this pull request as draft January 8, 2025 18:11
@chancancode
Contributor Author

Discussed in the SIG meeting today; the group is generally in favor of the approach. I'll clean it up and get things ready for merge.


codecov bot commented Jan 13, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 94.58%. Comparing base (fc0edd8) to head (9f8149c).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #5282   +/-   ##
=======================================
  Coverage   94.58%   94.58%           
=======================================
  Files         318      318           
  Lines        8069     8069           
  Branches     1701     1701           
=======================================
  Hits         7632     7632           
  Misses        437      437           

@chancancode chancancode changed the title [instrumentation-fetch] Use msw for fetch (and XHR) instrumentation tests [instrumentation-fetch] Use msw for fetch instrumentation tests Jan 14, 2025
@chancancode chancancode force-pushed the msw-fetch-test branch 2 times, most recently from f405cbc to b4c7074 Compare January 14, 2025 09:20
Also ran `npx msw init test --save` in inst-fetch and adjusted the
karma config to make the generated worker file available at root.
These tests don't actually need the complicated setup to work,
so I separated them out into a new suite. Spiritually this should
be the same coverage as before, with a new test for `{ enabled: false }`.
Without this, the next set of tests fails intermittently.
Tightened some of the assertions in the old tests where they were
unnecessarily vague (e.g. accepting that the protocol is either "http"
or "https" when the test setup already fully dictates which one is used).

This shows the promise of the new approach – we are able to test
the instrumentation without mocking the browser APIs at all, fully
exercising the instrumentation code including `PerformanceObserver`
and the resource-timing matching code.

We did cheat and added a 500ms delay to make this work, so we do
have to come back and deal with that at some later point.
For the most part, it's a pretty straightforward port, but it also
improves clarity with a `describe` grouping and actually asserts
the headers in each test.
This explicitly tests that the trace propagation headers are NOT
added when no global propagator is set. This wasn't in the old
tests but seemed like good coverage to have.
As pointed out in open-telemetry#5122, the expectations in the existing
tests do not match real browser behavior (which was ultimately the
motivation for refactoring these tests to rely on real browser behavior
as much as possible).

So instead of porting them as-is, this adjusts the tests' behavior to
match what actually happens.
As with the CORS tests, the preflight span does not exist IRL, so
those tests were dropped.
It's unclear what these tests are trying to validate. For the most
part, the `fetch()` API and its instrumentation behave similarly for
successful and unsuccessful requests, so these seem to largely
duplicate coverage from the other tests without making any other
meaningful assertions.
@chancancode chancancode marked this pull request as ready for review January 14, 2025 09:46
@chancancode
Contributor Author

chancancode commented Jan 14, 2025

@pichlermarc and other reviewers, this is not ready for merge just yet (see TODOs) but the code should be ready for review!

For your convenience, I broke up this PR into a series of commits, each porting one group of related tests, I'd recommend reviewing it that way. I also mostly kept the tests in the same order, if you want to just open both files side-by-side and review that way.

Would appreciate it if anyone has time to do a pass, even if incomplete, so we can discuss any issues/feedback on the Wednesday SIG 🙏🏼
