@@ -153,7 +153,7 @@ A single case is defined by the following properties:
A single set of features is expanded into one or more (usually many more) config cases.
For example, if the features support HTTP 1.1 and HTTP/2, all three protocols, all
- stream types, identity and gzip encoding, and TLS, that results in 2 * 3 * 5 * 2 * 2 = 120
+ stream types, identity and gzip encoding, and TLS, that results in 2 × 3 × 5 × 2 × 2 = 120
combinations. Some of those combinations may not be valid (such as full-duplex
bidirectional streams over HTTP 1.1, or gRPC over HTTP 1.1), so the total number of
config cases would be close to 120 but not quite.
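+
+ To make the arithmetic above concrete, here is a minimal Go sketch (illustrative only, not
+ the suite's actual implementation; the string names are just placeholders) that enumerates
+ the same feature dimensions and filters out the invalid combinations mentioned above:
+
+ ```go
+ package main
+
+ import "fmt"
+
+ func main() {
+     httpVersions := []string{"HTTP_VERSION_1", "HTTP_VERSION_2"}
+     protocols := []string{"PROTOCOL_CONNECT", "PROTOCOL_GRPC", "PROTOCOL_GRPC_WEB"}
+     streamTypes := []string{"UNARY", "CLIENT_STREAM", "SERVER_STREAM", "HALF_DUPLEX_BIDI", "FULL_DUPLEX_BIDI"}
+     compressions := []string{"IDENTITY", "GZIP"}
+     tlsModes := []bool{false, true}
+
+     total, valid := 0, 0
+     for _, version := range httpVersions {
+         for _, protocol := range protocols {
+             for _, streamType := range streamTypes {
+                 // Each (version, protocol, stream type) triple is multiplied by the
+                 // compression and TLS dimensions.
+                 perTriple := len(compressions) * len(tlsModes)
+                 total += perTriple
+                 // Full-duplex bidirectional streams and the gRPC protocol both
+                 // require HTTP/2, so those combinations are dropped.
+                 if version == "HTTP_VERSION_1" && (streamType == "FULL_DUPLEX_BIDI" || protocol == "PROTOCOL_GRPC") {
+                     continue
+                 }
+                 valid += perTriple
+             }
+         }
+     }
+     // total is 2 * 3 * 5 * 2 * 2 = 120; valid is whatever remains after filtering.
+     fmt.Printf("total combinations: %d, valid config cases: %d\n", total, valid)
+ }
+ ```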
@@ -208,9 +208,9 @@ Let's dissect this line-by-line:
anything that _looks_ like an option, is actually a positional argument.
* `./path/to/client/program --some-flag-for-client-program`: The positional arguments
represent the command to invoke in order to run the client under test. The first
- token must be the path to the executable. Any other arguments are passed to that
- executable as arguments. So in this case, `--some-flag-for-client-program` is an
- option that our client under test understands.
+ token must be the path to the executable. Any subsequent arguments are passed as
+ arguments to that executable. So in this case, `--some-flag-for-client-program` is
+ an option that our client under test understands (see the sketch below).
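+
+ As a minimal illustration (a hypothetical sketch, not part of the conformance suite), the
+ client under test can consume such pass-through arguments with ordinary flag parsing:
+
+ ```go
+ package main
+
+ import (
+     "flag"
+     "log"
+ )
+
+ func main() {
+     // The test runner passes `--some-flag-for-client-program` through verbatim,
+     // so the client under test parses it like any other command-line flag.
+     someFlag := flag.Bool("some-flag-for-client-program", false, "example flag understood only by this client")
+     flag.Parse()
+     if *someFlag {
+         log.Println("client under test starting with the example flag enabled")
+     }
+     // ... the rest of the client-under-test logic would follow here ...
+ }
+ ```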

Common reasons to pass arguments to the client or server under test are:
1. To control verbosity of log output. When troubleshooting an implementation, it
@@ -331,9 +331,9 @@ If you provide a `-v` option to the test runner, it will print some other messag
running:
```text
Computed 44 config case permutations.
- Loaded 1 known failing test cases/patterns.
- Loaded 8 test suites, 97 test case templates.
- Computed 602 test case permutations across 10 server configurations.
+ Loaded 8 test suite(s), 97 test case template(s).
+ Loaded 1 known failing test case pattern(s) that match 4 test case permutation(s).
+ Computed 602 test case permutation(s) across 10 server configuration(s).

Running 47 tests with reference server for server config {HTTP_VERSION_1, PROTOCOL_CONNECT, TLS:false}...
Running 47 tests with reference server for server config {HTTP_VERSION_1, PROTOCOL_CONNECT, TLS:true}...
Running 46 tests with reference server for server config {HTTP_VERSION_1, PROTOCOL_GRPC_WEB, TLS:false}...
@@ -349,11 +349,12 @@ Running 46 tests with reference server (grpc) for server config {HTTP_VERSION_2,
Running 46 tests with reference server (grpc) for server config {HTTP_VERSION_2, PROTOCOL_GRPC_WEB, TLS:false}...
```
This shows a summary of the config as it is loaded and processed, telling us the total number of
- [config cases](#config-cases) that apply to the current configuration (44) and the number of patterns that
- identify "known failing" cases. It shows us the total number of test suites (8) and the total number of test
- cases across those suites (97). The next line shows us that it has used the 44 relevant config cases and 97
- test case templates to compute a total of 602 [test case permutations](#test-case-permutations). This means
- that the client under test will be invoking 602 RPCs.
+ [config cases](#config-cases) that apply to the current configuration (44), the total number of test suites (8),
+ and the total number of test cases across those suites (97). It then shows the number of patterns
+ provided to identify "known failing" cases (1), and the number of test cases that matched the "known
+ failing" patterns (4). The next line shows us that it has used the 44 relevant config cases and 97
+ test case templates to compute a total of 602 [test case permutations](#test-case-permutations). This
+ means that the client under test will be invoking 602 RPCs.
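+
+ Note that 602 is far less than 44 × 97 = 4,268: in rough terms, a test case template is only
+ paired with the config cases it is relevant to, so the permutation count is a filtered cross
+ product. The Go sketch below illustrates that idea (the types and fields are illustrative,
+ not the runner's actual data model):
+
+ ```go
+ package main
+
+ import "fmt"
+
+ // configCase and testCaseTemplate are simplified stand-ins for the real
+ // conformance types; only what this illustration needs is included.
+ type configCase struct {
+     protocol   string
+     streamType string
+ }
+
+ type testCaseTemplate struct {
+     name string
+     // appliesTo reports whether the template is relevant to a given config case.
+     appliesTo func(configCase) bool
+ }
+
+ // countPermutations pairs every template with every config case it applies to,
+ // which is why the reported total is smaller than (config cases × templates).
+ func countPermutations(cases []configCase, templates []testCaseTemplate) int {
+     n := 0
+     for _, tmpl := range templates {
+         for _, cc := range cases {
+             if tmpl.appliesTo(cc) {
+                 n++
+             }
+         }
+     }
+     return n
+ }
+
+ func main() {
+     cases := []configCase{
+         {protocol: "PROTOCOL_CONNECT", streamType: "UNARY"},
+         {protocol: "PROTOCOL_GRPC", streamType: "FULL_DUPLEX_BIDI"},
+     }
+     templates := []testCaseTemplate{
+         {name: "unary success", appliesTo: func(cc configCase) bool { return cc.streamType == "UNARY" }},
+         {name: "bidi exchange", appliesTo: func(cc configCase) bool { return cc.streamType == "FULL_DUPLEX_BIDI" }},
+     }
+     fmt.Println(countPermutations(cases, templates)) // prints 2, not 2 × 2 = 4
+ }
+ ```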

The remaining lines in the example output above are printed as each test server is started. Each server config
represents a different RPC server, started with the given configuration (since we are running the tests using
@@ -447,6 +448,11 @@ flaky test cases, use `--known-flaky` (instead of `--skip`). Use of `--run` or `
configurations is discouraged. It should instead be possible to correctly filter the set of tests
to run just based on config YAML files.

+ One reason you might need to use `--skip` in a CI configuration is if a bug in the implementation
+ under test causes the client or server to crash or deadlock. Since such bugs could prevent the
+ conformance suite from ever completing successfully (even if the affected tests are marked as "known
+ failing"), it may be necessary to temporarily skip them in CI until those bugs are fixed.
+
## Configuring CI

The easiest way to run conformance tests as part of CI is to do so from a container that has the
@@ -500,6 +506,40 @@ If you have multiple test programs, such as both a client and a server, or even
different sets of arguments, you should name the relevant config YAML and known-failing files so
it is clear to which invocation they apply.

+ ## Upgrading
+
+ When a new version of the conformance suite is released, ideally you can simply update
+ the version number you are using and everything just works. We aim for
+ backwards-compatibility between releases to maximize the chances of this ideal outcome. But
+ there are a number of things that can happen in a release that make the process a little
+ more laborious:
+
+ * As a matter of hygiene/maintenance, we may rename and re-organize test suites and test
+   cases. This means that any test case patterns that are part of your configuration (like
+   known-failing files) may need to be updated. We don't expect this to happen often, but
+   when it does, we will include information in the release notes to aid in updating your
+   configuration.
+ * The new version may contain new/updated test cases that require some changes in the
+   behavior/logic of your implementations under test. This might be for testing new
+   functionality that requires new fields in the conformance protocol messages. Without
+   changes in your client or server under test, the new test cases will likely fail.
+ * The new version may contain new/updated test cases that reveal previously undetected
+   conformance failures.
+
+ To minimize disruption when upgrading, we recommend a process like the following:
+ 1. Update to the new release of the conformance suite.
+ 2. Update test case patterns (such as those in known-failing configurations) if necessary to
+    match any changes to test case names and organization.
+ 3. Update/add known-failing configurations for any new failures resulting from new/updated
+    test cases.
+ 4. **Commit/merge the upgrade.**
+ 5. File bugs for the new failures.
+ 6. As the bugs are fixed, update the known-failing configurations as you go.
+
+ Simply marking all new failures as "known failing" and filing bugs for them should allow
+ you to upgrade to a new release quickly. You can then decide on the urgency of fixing the
+ new failures and prioritize accordingly.
+
[config-proto]: https://buf.build/connectrpc/conformance/docs/main:connectrpc.conformance.v1#connectrpc.conformance.v1.Config
[configcase-proto]: https://buf.build/connectrpc/conformance/docs/main:connectrpc.conformance.v1#connectrpc.conformance.v1.ConfigCase
[connect-protocol]: https://connectrpc.com/docs/protocol/