# docs/comparison-analysis.md
* given 20 changes of different kinds all of low magnitude, the result is mixed unless only 2 or fewer of the changes are of one kind.
* given 5 changes of different kinds all of low magnitude, the result is always mixed.
Whether we actually _report_ an analysis or not depends on the context and how relevant we find the summary of the results overall (see below for an explanation of how the relevance of a summary is determined). For example, in pull request performance "try" runs, we report a performance change if the results are "somewhat relevant", while for the triage report, we only report if we are confident the results are "definitely relevant".
### What makes a test result significant?
A test result is significant if the relative change percentage is considered an outlier against historical data. Determining whether a value is an outlier is done through interquartile range ["fencing"](https://www.statisticshowto.com/upper-and-lower-fences/#:~:text=Upper%20and%20lower%20fences%20cordon,%E2%80%93%20(1.5%20*%20IQR)), i.e., whether a value exceeds a threshold equal to the third quartile plus 1.5 times the interquartile range.
We ignore the lower fence, because result data is bounded by 0.
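The fence computation can be sketched as follows. This is a minimal illustration using linearly interpolated quartiles, not the exact rustc-perf implementation (which lives in `site/src/comparison.rs` and may compute quartiles differently):

```rust
// Minimal sketch of the significance threshold: the upper IQR fence,
// i.e. Q3 + 1.5 * IQR, computed over historical relative changes.

fn quartile(sorted: &[f64], q: f64) -> f64 {
    // linear interpolation between the two closest ranks
    let pos = q * (sorted.len() - 1) as f64;
    let (lo, hi) = (pos.floor() as usize, pos.ceil() as usize);
    sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo as f64)
}

fn significance_threshold(deltas: &[f64]) -> f64 {
    let mut sorted = deltas.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let (q1, q3) = (quartile(&sorted, 0.25), quartile(&sorted, 0.75));
    // only the upper fence is used: result data is bounded by 0,
    // so the lower fence is ignored
    q3 + 1.5 * (q3 - q1)
}

fn main() {
    // hypothetical historical relative-change percentages for one test case
    let deltas = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 1.0, 1.2];
    println!("significance threshold: {:.4}", significance_threshold(&deltas));
}
```

Any new relative change above the returned fence would be treated as significant for that test case.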
This upper fence is called the "significance threshold".
### How is relevance of a test run summary determined?
The relevance of a test run summary is determined by the number of significant and relevant test results and their magnitude.
#### Magnitude
Magnitude is a combination of two factors:
* how large a change is regardless of the direction of the change
* how much that change went over the significance threshold
As an example, if a change that is large in absolute terms only exceeds the significance threshold by a small factor, then the overall magnitude of the change is considered small.
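One hypothetical way to combine the two factors (the glossary describes magnitude as the average of the absolute change and the factor over the threshold; the exact rustc-perf formula may differ):

```rust
// Hypothetical illustration (not the actual rustc-perf formula) of how
// magnitude could average the two factors described above.

fn magnitude_score(relative_change_pct: f64, significance_threshold: f64) -> f64 {
    let absolute_size = relative_change_pct.abs();
    // how many times over the significance threshold the change is
    let over_threshold = absolute_size / significance_threshold;
    // averaging means a large change that barely clears its threshold
    // still comes out with a modest overall magnitude
    (absolute_size + over_threshold) / 2.0
}

fn main() {
    // a large change that is barely significant...
    let a = magnitude_score(5.0, 4.8);
    // ...can score lower than a smaller change far over a tight threshold
    let b = magnitude_score(2.0, 0.4);
    println!("a = {a:.2}, b = {b:.2}");
}
```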
#### Relevance algorithm
The actual algorithm for determining relevance of a comparison summary may change, but in general the following rules apply:
* High relevance: any number of very large or large changes, a small amount of medium changes, or a large number of small or very small changes.
* Medium relevance: any number of very large or large changes, any medium change, or a smaller but still substantial number of small or very small changes.
* Low relevance: any summary that does not fit into the above two categories falls into this category.
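These rules could be encoded roughly as follows. The cutoffs for "a small amount", "a large number", and "substantial" are illustrative assumptions, not the values rustc-perf actually uses:

```rust
// Hypothetical encoding of the relevance rules above. The cutoffs
// ("a small amount" = 3+, "a large number" = 10+, "substantial" = 5+)
// are illustrative assumptions only.

#[derive(Debug, PartialEq)]
enum Relevance {
    High,
    Medium,
    Low,
}

fn summarize(large: usize, medium: usize, small: usize) -> Relevance {
    if large >= 1 || medium >= 3 || small >= 10 {
        Relevance::High
    } else if medium >= 1 || small >= 5 {
        Relevance::Medium
    } else {
        Relevance::Low
    }
}

fn main() {
    println!("{:?}", summarize(1, 0, 0)); // any large change
    println!("{:?}", summarize(0, 1, 0)); // a single medium change
    println!("{:?}", summarize(0, 0, 2)); // a couple of small changes
}
```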
# docs/glossary.md

The following is a glossary of domain specific terminology.
* **test case**: a combination of a benchmark, a profile, and a scenario.
* **test**: the act of running an artifact under a test case. Each test result is composed of many iterations.
* **test iteration**: a single iteration that makes up a test. Note: we currently run 2 test iterations for each test.
* **test result**: the result of the collection of all statistics from running a test. Currently, the minimum value of a statistic from all the test iterations is used.
* **statistic**: a single value of a metric in a test result.
* **statistic description**: the combination of a metric and a test case which describes a statistic.
* **statistic series**: statistics for the same statistic description over time.
* **run**: a collection of test results for all currently available test cases run on a given artifact.
## Analysis
* **test result comparison**: the delta between two test results for the same test case but (optionally) different artifacts. The [comparison page](https://perf.rust-lang.org/compare.html) lists all the test result comparisons as percentages between two runs.
* **significance threshold**: the threshold at which a test result comparison is considered "significant" (i.e., a real change in performance and not just noise). You can see how this is calculated [here](https://github.com/rust-lang/rustc-perf/blob/master/docs/comparison-analysis.md#what-makes-a-test-result-significant).
* **significant test result comparison**: a test result comparison above the significance threshold. Significant test result comparisons can be thought of as being "statistically significant".
* **relevant test result comparison**: a test result comparison can be significant but still not relevant (i.e., worth paying attention to). Relevance is a function of the test result comparison's significance and magnitude. Comparisons are considered relevant if they are significant and have at least a small magnitude.
* **test result comparison magnitude**: how "large" the delta is between the two test results under comparison. This is determined by the average of two factors: the absolute size of the change (i.e., a change of 5% is larger than a change of 1%) and the amount above the significance threshold (i.e., a change that is 5x the significance threshold is larger than a change 1.5x the significance threshold).
* **dodgy test case**: a test case for which the significance threshold is quite large, indicating a high amount of variability in the test and thus making it necessary to be somewhat skeptical of any results too close to the significance threshold.