Add suite creation and jedi config render tests #751
docs/code_tests/code_tests.md
The suite creation test attempts to construct experiments for all suites within swell in a temporary directory. If one fails, try creating the suite on its own to make sure it is configured properly. Ensure all values are valid and are not filled by the templates `defer_to_model` or `defer_to_platform`.
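The check described above could be sketched as a recursive scan over a rendered suite configuration. This is a hypothetical illustration, not swell's actual implementation; the helper name and the sample dict are my own.

```python
# Hypothetical sketch (not swell's code): recursively scan a rendered
# suite configuration for values left unfilled by the templating step.
DEFERRED = {'defer_to_model', 'defer_to_platform'}

def find_deferred(config, path=''):
    """Yield dotted paths whose value is still a deferred template."""
    if isinstance(config, dict):
        for key, value in config.items():
            yield from find_deferred(value, f'{path}.{key}' if path else key)
    elif isinstance(config, list):
        for index, value in enumerate(config):
            yield from find_deferred(value, f'{path}[{index}]')
    elif config in DEFERRED:
        yield path

# Illustrative suite dict with one unfilled value in a list and one in a mapping
suite = {'model': 'geos_marine',
         'obs': ['defer_to_model'],
         'paths': {'stage': 'defer_to_platform'}}
print(list(find_deferred(suite)))  # → ['obs[0]', 'paths.stage']
```

A suite passes the check only when the scan yields nothing.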
### JEDI Config test
The JEDI config test generates mock configs for JEDI executables in a dry-run mode, where obs will not be checked and placeholders will be used for experiment filepaths. These configs are compared against reference files located in `src/swell/test/jedi_configs/` and named `jedi_<suite>_config.yaml`. Any difference in values in these yamls will cause this test to fail, so ensure any differences created are intentional, then run `swell utility CreateMockConfigs` to automatically generate new reference files for all suites. These new files are placed in the `jedi_config` location in the source code.
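The value-by-value comparison described above could be sketched as follows. This is a hedged illustration on plain dicts (the real test loads the rendered and reference yaml files first); the function name and sample keys are my own, not swell's API.

```python
# Hypothetical sketch of comparing a freshly rendered JEDI config against
# its stored reference, reporting the dotted paths where values disagree.
def diff_configs(reference, rendered, path=''):
    """Return dotted paths where the two nested mappings disagree."""
    diffs = []
    for key in sorted(set(reference) | set(rendered)):
        here = f'{path}.{key}' if path else str(key)
        if key not in reference or key not in rendered:
            diffs.append(here)  # key present on only one side
        elif isinstance(reference[key], dict) and isinstance(rendered[key], dict):
            diffs.extend(diff_configs(reference[key], rendered[key], here))
        elif reference[key] != rendered[key]:
            diffs.append(here)  # scalar or list value changed
    return diffs

ref = {'cost function': {'window length': 'PT6H'}, 'final': {'diagnostics': 'oman'}}
new = {'cost function': {'window length': 'PT12H'}, 'final': {'diagnostics': 'oman'}}
print(diff_configs(ref, new))  # → ['cost function.window length']
```

Any non-empty result would correspond to a test failure, prompting either a fix or a regeneration of the reference files.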
I noticed that changing something like `start_cycle_point` or `final_cycle_point` in `suite_config.py` doesn't result in a failure. So perhaps it's worth mentioning that here.
Thanks, I added a note for this. I didn't realize it while writing the test, but changing the cycle times does not affect the configs, since the cycle directory is replaced with a placeholder and swell is not checking for obs.
```python
marine_default_datetime = '20210701T120000Z'
atmosphere_default_datetime = '20231010T000000Z'
compo_default_datetime = '20230805T1800Z'
```
Why are these dates hard-coded?
Normally the individual cycling points are calculated by the Cylc scheduler, which I am bypassing here. The exact dates aren't too important, because these mock configs don't change with the cycling point as long as the two cases match, but I could potentially look into re-creating this calculation in swell.
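For reference, the cycle-point arithmetic the scheduler performs could be sketched with stdlib `datetime`: successive points are the initial point plus multiples of the cycling interval. The function and format names below are illustrative assumptions, not swell's or Cylc's API.

```python
# Hypothetical sketch of Cylc-style cycle-point generation: initial point
# plus n * interval, formatted in the same compact ISO 8601 style used by
# the hard-coded defaults above.
from datetime import datetime, timedelta

CYCLE_FMT = '%Y%m%dT%H%M%SZ'

def cycle_points(initial, interval_hours, count):
    """Return the first `count` cycle points from `initial`, stepping by hours."""
    start = datetime.strptime(initial, CYCLE_FMT)
    step = timedelta(hours=interval_hours)
    return [(start + n * step).strftime(CYCLE_FMT) for n in range(count)]

print(cycle_points('20210701T120000Z', 6, 3))
# → ['20210701T120000Z', '20210701T180000Z', '20210702T000000Z']
```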
```python
defaults_dict['3dfgat_marine_cycle'] = {'datetime': marine_default_datetime,
                                        'model': 'geos_marine',
                                        'executable_type': 'fgat'}
```
Just a small detail: why are the `executable_type` values for `3dfgat_marine_cycle` and `3dfgat_atmos` different?
This matches Swell's current configuration, but I'm not sure why they run different executables.
This PR adds two code tests discussed in #736. The first simply creates all suites to ensure they are configured correctly and do not contain `defer_to_platform` or `defer_to_model`.

The second test renders the jedi config yaml for all JEDI suites and evaluates differences against a set of stored files. These are constructed in a `dry-run` mode, where observations are not fetched and all filepath prefixes are replaced by placeholders. The idea behind this test is to ensure all configs can be successfully constructed; it can also be used to evaluate changes to jedi configs, as any change will be reflected in the file diff as part of a PR. Any change to a jedi yaml is expected to trigger a failure in this test. To make it easy to account for changes, I have introduced a script, run using `swell utility CreateMockConfigs`, that will automatically regenerate all of the configs used for comparison in the source code.

I have implemented these tests as part of the regular code testing, as they add only about 20 seconds to the runtime of the code tests, but we can potentially look at implementing them in a different way, such as part of a new workflow.
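The placeholder substitution described for dry-run mode could be sketched as below. This is a hypothetical illustration of the idea, not swell's implementation; the helper name, prefix map, and sample paths are my own.

```python
# Hypothetical sketch of dry-run filepath handling: experiment-specific
# path prefixes are swapped for stable placeholders so rendered configs
# compare equal across machines and experiment directories.
def strip_prefixes(value, prefixes):
    """Replace any known experiment path prefix with its placeholder."""
    if isinstance(value, dict):
        return {k: strip_prefixes(v, prefixes) for k, v in value.items()}
    if isinstance(value, list):
        return [strip_prefixes(v, prefixes) for v in value]
    if isinstance(value, str):
        for prefix, placeholder in prefixes.items():
            if value.startswith(prefix):
                return placeholder + value[len(prefix):]
    return value

# Illustrative prefix map and config fragment
prefixes = {'/discover/nobackup/user/exp1': '<experiment_root>'}
config = {'obs file': '/discover/nobackup/user/exp1/obs/amsua.nc4'}
print(strip_prefixes(config, prefixes))
# → {'obs file': '<experiment_root>/obs/amsua.nc4'}
```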