CONTRIBUTING.md (+11 −11)
@@ -33,13 +33,13 @@ If proposing a new audit for Lighthouse, see the [new audit proposal guide](./do
A PR adding a new audit or changing an existing one almost always needs the following:

-1. If new, add the audit to the [default config file](lighthouse-core/config/default-config.js) (or, rarely, one of the other config files) so Lighthouse will run it.
+1. If new, add the audit to the [default config file](core/config/default-config.js) (or, rarely, one of the other config files) so Lighthouse will run it.

-1. **Unit tests**: in the matching test file (e.g. tests for `lighthouse-core/audits/my-swell-audit.js` go in `lighthouse-core/test/audits/my-swell-audit-test.js`).
+1. **Unit tests**: in the matching test file (e.g. tests for `core/audits/my-swell-audit.js` go in `core/test/audits/my-swell-audit-test.js`).

-1. **Smoke (end-to-end) tests**: search through the [existing test expectations](lighthouse-cli/test/smokehouse/test-definitions/) to see if there's a logical place to add a check for your change, or (as a last resort) add a new smoke test.
+1. **Smoke (end-to-end) tests**: search through the [existing test expectations](cli/test/smokehouse/test-definitions/) to see if there's a logical place to add a check for your change, or (as a last resort) add a new smoke test.

-1. Run `yarn update:sample-json` to update the [sample Lighthouse result JSON](lighthouse-core/test/results/sample_v2.json) kept in the repo for testing. This will also pull any strings needed for localization into the correct files.
+1. Run `yarn update:sample-json` to update the [sample Lighthouse result JSON](core/test/results/sample_v2.json) kept in the repo for testing. This will also pull any strings needed for localization into the correct files.
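
To make the first step above concrete: registering an audit means adding its module to the config's `audits` array and surfacing it through an `auditRef` in whichever category should report it. A minimal sketch, assuming the general shape of `core/config/default-config.js`; the audit id, weight, and group name here are hypothetical:

```js
// Fragments of core/config/default-config.js (abridged, shape assumed):
  audits: [
    // ...existing audits...
    'my-swell-audit', // hypothetical module at core/audits/my-swell-audit.js
  ],
  categories: {
    'best-practices': {
      auditRefs: [
        // ...existing refs...
        {id: 'my-swell-audit', weight: 1, group: 'best-practices-general'},
      ],
    },
  },
```
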
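
And for the unit-test step, the matching test file might look roughly like this, assuming the repo's `describe`/`it` test runner and an audit with a static `audit(artifacts, context)` method; the audit, artifact, and expected score are all hypothetical:

```js
// core/test/audits/my-swell-audit-test.js — a hypothetical minimal test.
import assert from 'assert/strict';

import MySwellAudit from '../../audits/my-swell-audit.js';

describe('MySwellAudit', () => {
  it('passes when the page does the swell thing', async () => {
    // Hand-built artifact standing in for real gatherer output.
    const artifacts = {MySwellThing: {present: true}};
    const result = await MySwellAudit.audit(artifacts, {computedCache: new Map()});
    assert.equal(result.score, 1);
  });
});
```
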
### Audit `description` Guidelines
@@ -68,13 +68,13 @@ It can be tempting to serialize the entire state of the world into the artifact
A PR adding or changing a gatherer almost always needs to include the following:

-1. If new, add the gatherer to the [default config file](lighthouse-core/config/default-config.js) (or, rarely, one of the other config files) so Lighthouse will run it.
+1. If new, add the gatherer to the [default config file](core/config/default-config.js) (or, rarely, one of the other config files) so Lighthouse will run it.

1. **Unit tests**: gatherer execution often takes place mostly on the browser side, either through protocol functionality or by executing JavaScript in the test page. This makes gatherers difficult to unit test without extensive mocking, and such tests end up mostly exercising the mocks instead of the actual gatherer.

-As a result, we mostly rely on smoke testing for gatherers. However, if there are parts of a gatherer that naturally lend themselves to unit testing, the new tests would go in the matching test file (e.g. tests for `lighthouse-core/gather/gatherers/reap.js` go in `lighthouse-core/test/gather/gatherers/reap-test.js`).
+As a result, we mostly rely on smoke testing for gatherers. However, if there are parts of a gatherer that naturally lend themselves to unit testing, the new tests would go in the matching test file (e.g. tests for `core/gather/gatherers/reap.js` go in `core/test/gather/gatherers/reap-test.js`).

-1. **Smoke (end-to-end) tests**: search through the [existing test expectations](lighthouse-cli/test/smokehouse/test-definitions/) to see if there's a logical place to add a check for your change, or (as a last resort) add a new smoke test if one is required.
+1. **Smoke (end-to-end) tests**: search through the [existing test expectations](cli/test/smokehouse/test-definitions/) to see if there's a logical place to add a check for your change, or (as a last resort) add a new smoke test if one is required.

It's most important to get true end-to-end coverage, so be sure that audits that consume the new gatherer output are in the expectations. Artifacts can also have expectations for those intermediate results.
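
As a sketch of what such coverage can look like, here is a hypothetical smokehouse definition in the general shape of the files in `cli/test/smokehouse/test-definitions/`; the id, URL, artifact, and audit are all invented for illustration:

```js
// A hypothetical smokehouse definition fragment.
export default {
  id: 'my-swell',
  expectations: {
    artifacts: {
      // Intermediate gatherer output can be asserted directly.
      MySwellThing: {present: true},
    },
    lhr: {
      requestedUrl: 'http://localhost:10200/my-swell-page.html',
      finalDisplayedUrl: 'http://localhost:10200/my-swell-page.html',
      audits: {
        // True end-to-end coverage: the audit consuming the new artifact.
        'my-swell-audit': {score: 1},
      },
    },
  },
};
```
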
@@ -87,7 +87,7 @@ A PR adding or changing a gatherer almost always needs to include the following:
This command also works for updating `devtoolsLogs` or `traces` (e.g. `yarn update:sample-artifacts devtoolsLogs`), but the resulting `sample_v2.json` churn may be extensive and you might be better off editing manually.

-1. Run `yarn update:sample-json` to update the [sample Lighthouse result JSON](lighthouse-core/test/results/sample_v2.json) kept in the repo for testing. This will also pull any strings needed for localization into the correct files.
+1. Run `yarn update:sample-json` to update the [sample Lighthouse result JSON](core/test/results/sample_v2.json) kept in the repo for testing. This will also pull any strings needed for localization into the correct files.

## Protobuf errors
@@ -128,13 +128,13 @@ accept your pull requests.
## Tracking Errors

-We track our errors in the wild with Sentry. In general, do not worry about wrapping your audits or gatherers in try/catch blocks and reporting every error that could possibly occur; `lighthouse-core/runner.js` and `lighthouse-core/gather/gather-runner.js` already catch and report any errors that occur while running a gatherer or audit, including errors fatal to the entire run. However, there are some situations when you might want to explicitly handle an error and report it to Sentry or wrap it to avoid reporting. Generally, you can interact with Sentry simply by requiring the `lighthouse-core/lib/sentry.js` file and calling its methods. The module exports a delegate that correctly handles error reporting based on the user's opt-in preference and simply no-ops if they haven't opted in, so you don't need to check.
+We track our errors in the wild with Sentry. In general, do not worry about wrapping your audits or gatherers in try/catch blocks and reporting every error that could possibly occur; `core/runner.js` and `core/gather/gather-runner.js` already catch and report any errors that occur while running a gatherer or audit, including errors fatal to the entire run. However, there are some situations when you might want to explicitly handle an error and report it to Sentry or wrap it to avoid reporting. Generally, you can interact with Sentry simply by requiring the `core/lib/sentry.js` file and calling its methods. The module exports a delegate that correctly handles error reporting based on the user's opt-in preference and simply no-ops if they haven't opted in, so you don't need to check.

#### If you have an expected error that is recoverable but want to track how frequently it happens, *use Sentry.captureMessage*.
NOTE: If the message you're capturing is dynamic/based on user data or you need a stack trace, then create a fake error instead and use `Sentry.captureException` so that the instances will be grouped together in Sentry.
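
For instance, a recoverable failure worth counting might be handled like this. A minimal sketch, assuming the delegate mirrors the Sentry SDK's `captureMessage(message, options)` signature as described above; the helper and message are hypothetical:

```js
// Hypothetical gatherer-side handling; only the Sentry delegate usage is the
// point. The delegate no-ops unless the user has opted in to error reporting.
const Sentry = require('../../lib/sentry.js'); // i.e. core/lib/sentry.js

// Stand-in for real protocol or page-side work.
async function doPageSideWork() {
  throw new Error('page script unavailable');
}

async function getArtifact() {
  try {
    return await doPageSideWork();
  } catch (err) {
    // Expected and recoverable, but worth tracking in the wild. The message
    // is static, so all instances stay grouped together in Sentry.
    Sentry.captureMessage('my-swell-gatherer: page script unavailable', {level: 'warning'});
    return null; // fall back to a partial artifact
  }
}
```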