WIP: OpenShift Tests Extension Framework Initial #1676

130 changes: 88 additions & 42 deletions enhancements/testing/openshift-tests-extension.md
---

<!-- TOC -->

* [Release Signoff Checklist](#release-signoff-checklist)
* [Summary](#summary)
* [Motivation](#motivation)
* [Goals](#goals)
* [Non-Goals](#non-goals)
* [Proposal](#proposal)
* [Concepts](#concepts)
* [Component](#component)
* [Subcomponent](#subcomponent)
* [Test ID](#test-id)
* [Test Environment](#test-environment)
* [Test Context](#test-context)
* [Test Extension Binaries](#test-extension-binaries)
* [Binary Discovery](#binary-discovery)
* [OpenShift Payload Extension Binaries](#openshift-payload-extension-binaries)
* [Non-Payload Extension Binaries](#non-payload-extension-binaries)
* [Binary Format](#binary-format)
* [Binary Extraction](#binary-extraction)
* [Extension Interface](#extension-interface)
* [Info - Extension Metadata](#info---extension-metadata)
* [List - Extension Test Listing](#list---extension-test-listing)
* [Run-Test - Running Extension Tests](#run-test---running-extension-tests)
* [Run-Suite - Running Tests in Local Suites](#run-suite---running-tests-in-local-suites)
* [Run-Monitor - Monitoring Cluster during Test Run](#run-monitor---monitoring-cluster-during-test-run)
* [Config - Component Configuration Testing](#config---component-configuration-testing)
* [Update - Metadata Validation](#update---metadata-validation)
* [Extension Implementation](#extension-implementation)
* [Test Result Aggregation](#test-result-aggregation)
* [Risks and Mitigations](#risks-and-mitigations)
* [Binary Incompatibility](#binary-incompatibility)
* [CPU Architecture](#cpu-architecture)
* [Runtime Size / Speed](#runtime-size--speed)
* [Image Size](#image-size)
* [Poor Extension Implementation](#poor-extension-implementation)
* [Version Skew Strategy](#version-skew-strategy)
* [Alternatives](#alternatives)
<!-- TOC -->

# OpenShift Tests Extensions
Running an extension binary will output the following help text for the initial version of the interface.

```
info - Output test contribution extension version and metadata.
list - Output tests supported by this extension.
run-test - Run one or more tests and output results.
run-suite - Run tests associated with suites supplied by this extension.
run-monitor - Run one or more monitors and output results.
config - Component configuration management.
update - Update git metadata for extension.
```
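
As a rough illustration (not part of the interface specification), an extension binary written in Go could dispatch these subcommands with a plain `os.Args` switch; the shared framework code proposed by this enhancement is expected to provide this scaffolding, so the sketch below is only a sketch.

```go
// Illustrative sketch only: dispatching the extension interface subcommands.
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "expected a subcommand: info|list|run-test|run-suite|run-monitor|config|update")
		os.Exit(1)
	}
	switch os.Args[1] {
	case "info":
		// Print extension metadata as JSON (see the annotated example below).
	case "list":
		// Print the tests this extension contributes.
	case "run-test":
		// Run the named tests and emit JSONL results.
	case "run-suite":
		// Run the tests belonging to a locally defined suite.
	case "run-monitor":
		// Start a monitor and run until interrupted (see Run-Monitor below).
	case "config":
		// Component configuration management.
	case "update":
		// Refresh git-tracked test metadata.
	default:
		fmt.Fprintf(os.Stderr, "unknown subcommand %q\n", os.Args[1])
		os.Exit(1)
	}
}
```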

##### Info - Extension Metadata
Expand Down Expand Up @@ -477,6 +478,27 @@ Annotated example `info` output is provided below.
"source = \"openshift:payload:hyperkube\" && test.name.contains(\"FIPS\"))"
]
}
],

# Monitors are processes that will be run by origin for the duration of
# test execution. They are similar to tests in that they can write
# artifacts and report back test results. They differ in that they
# cannot declare any conflicts or isolation requirements (they must be
# able to run during all testing).
"monitors": [
{
# The name of the monitor (will be passed to run-monitor)
"name": "fips-endpoints",
"description": "optional description",

# Monitors can use qualifiers to indicate when they should be run, to
# save resources when they would have no value. If the qualifiers select
# any of the tests origin identifies for execution, origin will run the
# monitor. If no qualifiers are specified, the monitor will always be run.
"qualifiers": [
"source = \"openshift:payload:hyperkube\" && test.name.contains(\"FIPS\"))"
]
}
]

}
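
To make the qualifier behaviour concrete, the following Go sketch shows how origin could decide which monitors to start for a run. The types and the `matchesQualifier` helper are hypothetical placeholders for illustration, not the framework's actual API.

```go
// Illustrative sketch only: selecting monitors to run based on qualifiers.
package main

import "fmt"

// Monitor and Test are illustrative shapes, not the framework's real types.
type Monitor struct {
	Name       string
	Qualifiers []string
}

type Test struct {
	Name   string
	Source string
}

// matchesQualifier is a hypothetical stand-in for the real qualifier
// evaluator; it would evaluate the expression against a single test.
func matchesQualifier(qualifier string, t Test) bool {
	// ...expression evaluation elided in this sketch...
	return false
}

// monitorsToRun applies the rules described above: a monitor with no
// qualifiers always runs; otherwise it runs only if some qualifier matches
// at least one test selected for execution.
func monitorsToRun(monitors []Monitor, selected []Test) []Monitor {
	var run []Monitor
	for _, m := range monitors {
		if len(m.Qualifiers) == 0 || anyMatch(m.Qualifiers, selected) {
			run = append(run, m)
		}
	}
	return run
}

func anyMatch(qualifiers []string, tests []Test) bool {
	for _, q := range qualifiers {
		for _, t := range tests {
			if matchesQualifier(q, t) {
				return true
			}
		}
	}
	return false
}

func main() {
	monitors := []Monitor{{Name: "fips-endpoints", Qualifiers: []string{`source = "openshift:payload:hyperkube"`}}}
	fmt.Println(monitorsToRun(monitors, nil)) // prints [] -- no tests selected
}
```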
test orchestration -- it will choreograph invocations of `run-test` consistent with
those constraints.

##### Run-Monitor - Monitoring Cluster during Test Run

The `run-monitor` command starts a monitor identified by `info`. A monitor should
stay running until it receives SIGINT from origin. After receiving SIGINT,
it is given a 30-second grace period before receiving SIGKILL.
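
A minimal sketch of that lifecycle is shown below, assuming a standard Go signal handler; the polling interval and the shutdown work are placeholders, not prescribed by this enhancement.

```go
// Illustrative sketch only: a monitor that runs until origin sends SIGINT.
package main

import (
	"context"
	"os"
	"os/signal"
	"time"
)

func main() {
	// ctx is cancelled when origin sends SIGINT.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()

	ticker := time.NewTicker(10 * time.Second) // illustrative polling interval
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			// Poll whatever is being monitored and record artifacts.
		case <-ctx.Done():
			// SIGINT received: flush artifacts and emit the final
			// "Monitor: <name>" test result well inside the 30-second
			// grace period, before origin escalates to SIGKILL.
			return
		}
	}
}
```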

```
$ ./extension-binary run-monitor
--component "default" or "<product>:<type>:<component>"
--platform The hardware or cloud platform ("aws", "gcp", "metal", ...).
...other environment arguments except config...
--name | -n Name of the monitor to run (-n can be specified multiple times).
```

`run-monitor` will receive the same environment parameters as `run-test` --
except `--config`, which varies during the course of execution -- and
will output in the same formats (e.g. JSONL test results). `run-monitor` should
always output at least one test result: for each monitor run by the invocation,
the stdout stream must contain a test result named 'Monitor: <monitor name>'
that reflects the success or failure of the monitor.

Failure to include this test result will result in `origin` creating it
synthetically and reporting it as a failure.
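
As a sketch of that shutdown path, the snippet below emits the required "Monitor: <monitor name>" result as one JSONL line on stdout. The field names are assumptions made for illustration; the authoritative schema is whatever `run-test` emits for test results.

```go
// Illustrative sketch only: emitting the final monitor test result as JSONL.
package main

import (
	"encoding/json"
	"os"
)

// monitorResult approximates a single test-result record; the exact fields
// are defined by the run-test output format, not by this sketch.
type monitorResult struct {
	Name   string `json:"name"`
	Result string `json:"result"` // e.g. "passed" or "failed"
	Output string `json:"output,omitempty"`
}

func emitMonitorResult(monitorName string, failed bool, output string) error {
	r := monitorResult{Name: "Monitor: " + monitorName, Result: "passed", Output: output}
	if failed {
		r.Result = "failed"
	}
	// Encode writes one JSON object followed by a newline (JSONL).
	return json.NewEncoder(os.Stdout).Encode(r)
}

func main() {
	_ = emitMonitorResult("fips-endpoints", false, "")
}
```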

##### Config - Component Configuration Testing

A component can advertise that it wants to be exercised in multiple different configurations.