
Add suite creation and jedi config render tests #751

Open
mranst wants to merge 44 commits into develop from feature/mranst/code_tests

Conversation

@mranst (Collaborator) commented Mar 26, 2026

This PR adds two code tests discussed in #736. The first simply creates all suites to ensure they are configured correctly and contain no `defer_to_platform` or `defer_to_model` values.

The second test renders the jedi config yaml for all JEDI suites and compares the results against a set of stored reference files. These are constructed in a dry-run mode, where observations are not fetched and all filepath prefixes are replaced by placeholders. The idea behind this test is to ensure that all configs can be constructed successfully; it can also be used to evaluate changes to jedi configs, since any change will be reflected in the file diff as part of a PR. Any change to a jedi yaml is expected to trigger a failure in this test. To make it easy to account for changes, I have introduced a script, run via `swell utility CreateMockConfigs`, that automatically regenerates all of the configs used for comparison in the source code.

I have implemented these tests as part of the regular code testing, since they add only about 20 seconds to the runtime of the code tests, but we could potentially implement them in a different way, such as part of a new workflow.

### Suite creation test
The suite creation test attempts to construct experiments for all suites within swell in a temporary directory. If one fails, try creating the suite on its own to make sure it is configured properly, and check that all values are valid and not filled by the templates `defer_to_model` or `defer_to_platform`.
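The "no deferred values" check described above could be sketched as follows. Note that the function name `assert_no_deferred_values` and the scan-every-rendered-file approach are my own illustration of the idea, not swell's actual implementation:

```python
from pathlib import Path

# Template values that must never survive suite creation (per the PR description)
FORBIDDEN = ("defer_to_model", "defer_to_platform")


def assert_no_deferred_values(suite_dir: str) -> None:
    """Fail if any rendered suite file still contains a defer_to_* template value."""
    for path in Path(suite_dir).rglob("*"):
        if path.is_file():
            text = path.read_text(errors="ignore")
            for marker in FORBIDDEN:
                assert marker not in text, f"{marker} found in {path}"
```

A check like this runs naturally inside the temporary-directory experiment creation loop, so every suite is validated right after it is rendered.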

### JEDI Config test
The JEDI config test generates mock configs for jedi executables in a dry-run mode, where obs will not be checked and placeholders will be used for experiment filepaths. These configs are compared against reference files located in `src/swell/test/jedi_configs/` and named `jedi_<suite>_config.yaml`. Any difference in values in these yamls will cause this test to fail, so ensure any differences you create are intentional, then run `swell utility CreateMockConfigs` to automatically generate new reference files for all suites. The new files are placed in the `jedi_configs` location in the source code.
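The comparison step could be sketched as a recursive diff over the parsed config mappings, reporting the dotted path of every differing key. The helper `diff_configs` below is a hypothetical illustration, not swell's actual comparison code:

```python
def diff_configs(rendered: dict, reference: dict, prefix: str = "") -> list:
    """Return dotted key paths where a rendered config disagrees with its reference."""
    diffs = []
    for key in sorted(set(rendered) | set(reference)):
        path = f"{prefix}{key}"
        if key not in rendered or key not in reference:
            # Key added or removed relative to the stored reference file
            diffs.append(path)
        elif isinstance(rendered[key], dict) and isinstance(reference[key], dict):
            # Recurse into nested mappings
            diffs.extend(diff_configs(rendered[key], reference[key], path + "."))
        elif rendered[key] != reference[key]:
            diffs.append(path)
    return diffs
```

Reporting dotted paths (e.g. `cost.iterations`) rather than a plain pass/fail makes an intentional config change easy to confirm in the test output before regenerating the reference files.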
Contributor

I noticed that changing something like start_cycle_point or final_cycle_point in suite_config.py doesn't result in a failure, so it may be worth mentioning that here.

Collaborator Author

Thanks, added a note for this. I didn't realize this while writing the test, but changing the cycle times has no effect on the configs, since the cycle directory is replaced with a placeholder and swell is not checking for obs.
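The placeholder substitution that makes the configs cycle-independent could be sketched like this. The placeholder names and the `\d{8}T\d{6}Z` cycle-directory pattern are assumptions for illustration; swell's real masking convention may differ:

```python
import re

# ISO 8601 basic-format cycle point, e.g. 20210701T120000Z (assumed pattern)
CYCLE_DIR = re.compile(r"\d{8}T\d{6}Z")


def mask_paths(text: str, experiment_root: str) -> str:
    """Replace the experiment root and cycle-time directories with placeholders,
    so rendered configs compare equal across machines and cycle points."""
    text = text.replace(experiment_root, "{{experiment_root}}")
    return CYCLE_DIR.sub("{{cycle_dir}}", text)
```

With both substitutions applied, moving the cycle points only changes text that has already been masked, which is why the reference comparison stays stable.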


marine_default_datetime = '20210701T120000Z'
atmosphere_default_datetime = '20231010T000000Z'
compo_default_datetime = '20230805T1800Z'
Contributor

Why are these dates hard coded?

Collaborator Author

Normally the individual cycling points are calculated by the Cylc scheduler, which I am bypassing here. These aren't too important here because the mock configs don't change with the cycling point as long as the two cases match, but I could potentially look into re-creating this calculation in swell.
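Re-creating the scheduler's calculation for a simple fixed-interval case could be sketched with the standard library. This is a hedged illustration only; Cylc's own datetime cycling supports far more (ISO 8601 recurrences, offsets, calendars) than this:

```python
from datetime import datetime, timedelta

CYCLE_FMT = "%Y%m%dT%H%M%SZ"  # matches datetimes like '20210701T120000Z'


def cycle_points(start: str, interval_hours: int, count: int) -> list:
    """Enumerate fixed-interval cycle points from a start point, scheduler-style."""
    t0 = datetime.strptime(start, CYCLE_FMT)
    step = timedelta(hours=interval_hours)
    return [(t0 + i * step).strftime(CYCLE_FMT) for i in range(count)]
```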


defaults_dict['3dfgat_marine_cycle'] = {'datetime': marine_default_datetime,
                                        'model': 'geos_marine',
                                        'executable_type': 'fgat'}
Contributor

Just a small detail: why is executable_type different for 3dfgat_marine_cycle and 3dfgat_atmos?

Collaborator Author

This matches swell's configuration, but I'm not sure why they run different executables.
