Scoring an API definition is a way to understand, at a high level, how compliant the API definition is with the provided rulesets. This helps teams understand the quality of their APIs in terms of their definitions.
The scoring is produced as two metrics:
- A numeric score, computed by subtracting from 100% for every error or warning
- A letter, which groups numeric scores into letters from A (best) downwards
It also introduces a quality gate: an API scoring below the specified threshold will fail in a pipeline.
Scoring is enabled with a new parameter, `--scoring-config` (or `-s`), which points to a scoring configuration file where you can define how errors and warnings affect the score:
- scoringSubtract : an object with a key/value map for every result level you want to penalize, mapping the number of results of that type to the percentage to subtract
- scoringLetter : an object with key/value pairs of scoring letter and scoring percentage; the score must be greater than that percentage to earn the letter
- threshold : a number with the minimum percentage for the checked file to be considered valid
- warningsSubtract : a boolean that sets whether all result types accumulate to lower the score, or counting stops at the most critical result type
- uniqueErrors : a boolean that sets whether to count only unique errors or all of them
Example:
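The original scoring config file is not reproduced here, so the following JSON is only an illustrative sketch: the key names come from the list above, but the specific penalties and letter boundaries are assumptions chosen to be consistent with the worked results below.

```json
{
  "scoringSubtract": {
    "error": { "1": 55, "2": 65, "3": 75 }
  },
  "scoringLetter": { "A": 90, "B": 70, "C": 55, "D": 40, "E": 0 },
  "threshold": 40,
  "warningsSubtract": true,
  "uniqueErrors": false
}
```

With this reading, the `error` map caps at its last entry, so three or more errors all subtract 75%.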
With the previous scoring config file, if we have:
- 1 error, the scoring is 45% and D
- 2 errors, the scoring is 35% and E
- 3 errors, the scoring is 25% and E
- 4 errors, the scoring is 25% and E
- and so on
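The arithmetic behind those numbers can be sketched in a few lines of JavaScript. This is not Spectral's actual implementation; the subtract table and letter boundaries are assumptions chosen to reproduce the worked example above.

```javascript
// Hypothetical sketch of the scoring arithmetic described above.
// The penalty table and letter thresholds are assumptions, not real defaults.
const scoringSubtract = { error: { 1: 55, 2: 65, 3: 75 } };
const scoringLetter = { A: 90, B: 70, C: 55, D: 40, E: 0 };

function scoreFor(errorCount) {
  const table = scoringSubtract.error;
  const counts = Object.keys(table).map(Number);
  // Counts beyond the last entry keep the largest penalty.
  const capped = Math.min(errorCount, Math.max(...counts));
  const percentage = errorCount === 0 ? 100 : 100 - table[capped];
  // Pick the first letter whose boundary the percentage exceeds.
  const letter = Object.entries(scoringLetter)
    .find(([, min]) => percentage > min)[0];
  return { percentage, letter };
}

console.log(scoreFor(1)); // { percentage: 45, letter: 'D' }
console.log(scoreFor(2)); // { percentage: 35, letter: 'E' }
console.log(scoreFor(4)); // { percentage: 25, letter: 'E' }
```

A score is then compared against `threshold` to decide whether the quality gate passes.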
Output:
Below your output log you can see the scoring, for example:
```
✖ SCORING: A (93%)
```
## Error Results
Spectral has a few different error severities: `error`, `warn`, `info`, and `hint`, ordered from highest to lowest. By default, all results are shown regardless of severity, but since v5.0 only the presence of errors causes a failure status code of 1. Seeing results and getting a failure code for them are now two different things.