-
Thanks Paul. We should also consider a naming convention for the TOSCA files or CSAR files themselves. Any suggestions?
-
I upgraded all of the TOSCA v1.0 test cases that were in this repository to TOSCA v2.0 and moved them into the appropriate subdirectories created by @pmjordan. Aside from stripping the v1.0 section numbers, I have not made any changes to the file names. We should continue to discuss naming conventions that make it easier to recognize whether a test is supposed to succeed or fail.
-
I understand that there have been some requests for validation certification, and we expect that in the first instance this will be by self-certification.
Clearly we will want to gather a range of TOSCA and CSAR files, both valid and deliberately invalid, chosen to represent all the features described in the specification. But how do we use those files to actually test a tool?
Here are my ideas on how that might be supported:
Outline
We make use of a test framework. Each TOSCA or CSAR file has an associated test case file which includes the expected result. The framework finds all the test case files and runs them.
Each test case file calls a single command which is consistent across all tests, passing the path to the TOSCA file to the command. The command returns a response which the test case compares to the expected result.
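To make this concrete, here is a minimal sketch of what one such test case file could look like, assuming Python unittest (see Implementation below), a wrapper executable named tosca-wrapper on the PATH, and a JSON response format. All of these names are illustrative, not agreed conventions:

```python
# Hypothetical test case file sitting next to the TOSCA file it exercises.
# Assumes a wrapper named "tosca-wrapper" that prints a JSON response such
# as {"result": "pass"} on stdout; names and format are placeholders.
import json
import subprocess
import unittest
from pathlib import Path

TOSCA_FILE = Path(__file__).parent / "service_template.yaml"
EXPECTED_RESULT = "pass"  # this particular template is expected to be valid

class ServiceTemplateTest(unittest.TestCase):
    def test_tool_result_matches_expected(self):
        completed = subprocess.run(
            ["tosca-wrapper", str(TOSCA_FILE)],
            capture_output=True, text=True, check=False,
        )
        response = json.loads(completed.stdout)
        self.assertEqual(response["result"], EXPECTED_RESULT)

if __name__ == "__main__":
    unittest.main()
```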
The single command acts as a wrapper around the particular tool under test at that time. Tool vendors will be required to supply a wrapper for their tool based on a boilerplate example that we supply.
The command will invoke the tool under test using whatever flags, options, environment variables, etc. that tool requires; it will return pass/fail and other results to the test case file in a format which is standard for all tests and all tools under test.
Only one wrapper instance can be invoked on the system at any one time.
There is a potential risk that the incorrect wrapper may be invoked. To guard against this, one of the tests will be to collect a name and version string from either the wrapper or, preferably, from the tool itself (via the wrapper). The response should be included in the test results.
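As a sketch only, a wrapper along these lines could satisfy both requirements: invoke the tool with its own flags and print a standard response, including a name and version string collected from the tool itself. The tool command and its version flag are placeholders that each vendor would replace:

```python
#!/usr/bin/env python3
# Hypothetical wrapper for one tool under test. TOOL_CMD and VERSION_CMD are
# placeholders each vendor replaces; the JSON printed on stdout is the part
# that must stay identical across all wrappers.
import json
import subprocess
import sys

TOOL_CMD = ["example-tosca-parser"]     # vendor substitutes the real command
VERSION_CMD = TOOL_CMD + ["--version"]  # vendor substitutes the real flag

def main() -> int:
    tosca_path = sys.argv[1]
    # Ask the tool to identify itself, guarding against the wrong wrapper.
    version = subprocess.run(
        VERSION_CMD, capture_output=True, text=True, check=False
    ).stdout.strip()
    # Invoke the tool with whatever flags it needs for a validation run.
    completed = subprocess.run(
        TOOL_CMD + [tosca_path], capture_output=True, text=True, check=False
    )
    print(json.dumps({
        "tool": version or "unknown",
        "result": "pass" if completed.returncode == 0 else "fail",
        "detail": completed.stderr.strip(),
    }))
    return 0

if __name__ == "__main__":
    sys.exit(main())
```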
Levels of testing
The simplest level of testing is to confirm that the tool under test can correctly distinguish between valid and invalid TOSCA. Call this level 1. If the tool under test is a parser then that may be enough. If the tool is an orchestrator then it would be better to also confirm operation of the representation model (level 2). This might be done by means of get_property, get_attribute and TOSCA_path. It would also be helpful to confirm deployment of external representations (level 3). This might be done by packaging implementation artifacts which create files, the presence of which can be confirmed by the test case file.
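For level 3, the assertion in the test case file could be as simple as the following sketch, assuming the implementation artifact writes a marker file to an agreed path (the path and file name here are purely illustrative):

```python
# Hypothetical level 3 check: after deployment, the implementation artifact
# is expected to have created a marker file. The path is a placeholder.
import tempfile
import unittest
from pathlib import Path

MARKER_FILE = Path(tempfile.gettempdir()) / "tosca_artifact_marker.txt"

class DeploymentEffectTest(unittest.TestCase):
    def test_marker_file_was_created(self):
        self.assertTrue(
            MARKER_FILE.exists(),
            f"expected the implementation artifact to create {MARKER_FILE}",
        )
```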
Environment
I suggest that the TOSCA files are held in a directory containing subdirectories whose names equal the anchor text used for the headings in the specification (not the section numbers, as those are harder to maintain). It would be possible to use a deep directory structure following the spec headings, but again that would be harder to maintain; I prefer a flat structure listing all the anchor names. Currently some of these names are terse when taken out of their hierarchy context, but the names could be improved in future versions of the spec. The test case file would live in the same directory as the TOSCA file it uses.
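For illustration only, with made-up anchor names, the layout could look like this:

```
tests/
  node-templates/
    valid_basic_node.yaml
    test_valid_basic_node.py
  capability-assignment/
    invalid_missing_capability.yaml
    test_invalid_missing_capability.py
```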
Implementation
I propose we require Python to be present on the system which runs the tests. It is widely used and should provide independence from the operating system. The wrapper can then be implemented as a Python program. I believe that Python's unittest is capable of discovering the multiple test case files throughout the directory structure, however that is arranged.
The wrapper program would then also be a Python program making an OS call to execute the tool under test. The implementation artifacts would also be Python scripts, so that Python handles the complexities of writing files on the different operating systems.
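A minimal runner built on unittest's discovery mechanism could look like this sketch (the directory name and file pattern are assumptions, matching the example layout above):

```python
# Minimal runner sketch: unittest discovery walks the tests/ tree and runs
# every test case file matching test_*.py, however the directories are nested.
import unittest

suite = unittest.defaultTestLoader.discover("tests", pattern="test_*.py")
unittest.TextTestRunner(verbosity=2).run(suite)
```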