
claireyung:dev-MC_4km_jra_ryf+regionalpanan+isf. PR #2 (#1078)

Open
chrisb13 wants to merge 6 commits into dev-MC_4km_jra_ryf+regionalpanan+isf from
880-dev-MC_4km_jra_ryf+regionalpanan+isf_temp

Conversation

@chrisb13
Collaborator

@chrisb13 chrisb13 commented Jan 21, 2026

A new PR for the alpha release of the regional panan with ice-shelves. This PR allows us to add @claireyung's commits on top of the latest dev-MC_25km_jra_ryf.

ADDED BY DOUGIE: This PR now includes the same commits as #814, but rebased onto an updated dev-MC_4km_jra_ryf+regionalpanan+isf branch.

We plan to do a squash merge.

Related:

@chrisb13 chrisb13 changed the title 880 dev mc 4km jra ryf+regionalpanan+isf temp claireyung:dev-MC_4km_jra_ryf+regionalpanan+isf. PR #2 Jan 21, 2026
@chrisb13
Collaborator Author

@dougiesquire, I think I've now done what we discussed today. This is now for the ice-shelf version. Can you have a go at resolving the conflicts please?

If needed, I imagine we can ask for Claire's help resolving any trickier bits.

@dougiesquire
Collaborator

dougiesquire commented Feb 2, 2026

As with the non-isf configuration, I've generated repro checksums using Claire's original branch and pushed them to her branch for reference.

I'll use these to make sure, and document, that I don't change answers while doing the conflict resolution.
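The repro check here boils down to comparing checksum files between branches: identical files mean the runs are bitwise reproducible. A minimal sketch of that kind of comparison (file names and the `/tmp` path are illustrative, not the actual CI layout):

```shell
# Create two hypothetical checksum files and compare them byte-for-byte.
# In the real workflow, one comes from the reference branch and one from
# the branch under test; any difference means answers have changed.
mkdir -p /tmp/repro_demo && cd /tmp/repro_demo
printf 'ocean_temp: 0x1a2b\nocean_salt: 0x3c4d\n' > checksum-ref.json
printf 'ocean_temp: 0x1a2b\nocean_salt: 0x3c4d\n' > checksum-new.json
if cmp -s checksum-ref.json checksum-new.json; then
  echo "PASS: checksums match (bitwise reproducible)"
else
  echo "FAIL: checksums differ (answers changed)"
fi
```

Running this as written prints the PASS branch, since the two files are identical.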

@dougiesquire dougiesquire force-pushed the 880-dev-MC_4km_jra_ryf+regionalpanan+isf_temp branch from 6d4c012 to 0392a72 Compare February 2, 2026 22:04
@dougiesquire
Collaborator

dougiesquire commented Feb 2, 2026

The 880-dev-MC_4km_jra_ryf+regionalpanan+isf_temp branch Chris set up for this PR still included the final merge commit from Claire's original branch, so I've removed that (no answer-changing changes).

@dougiesquire dougiesquire force-pushed the 880-dev-MC_4km_jra_ryf+regionalpanan+isf_temp branch from 0392a72 to 7ce6a41 Compare February 5, 2026 02:18
@dougiesquire
Collaborator

!test repro commit

@github-actions

github-actions bot commented Feb 5, 2026

⚠️ The Bitwise Reproducibility Check Had Errors - Check https://github.com/ACCESS-NRI/access-om3-configs/actions/runs/21696243030 ⚠️
❌ The Bitwise Reproducibility Check Failed ❌

When comparing:

  • 880-dev-MC_4km_jra_ryf+regionalpanan+isf_temp (checksums created using commit 7ce6a41), against
  • dev-MC_4km_jra_ryf+regionalpanan+isf (checksums in commit d60cb09)
Further information

The experiment can be found on Gadi at /scratch/tm70/repro-ci/experiments/access-om3-configs/7ce6a41f20da484079d0910e3599c70567ffce2b, and the test results at https://github.com/ACCESS-NRI/access-om3-configs/runs/62567050147.

The checksums generated by this !test command are found in the testing/checksum directory of https://github.com/ACCESS-NRI/access-om3-configs/actions/runs/21696243030/artifacts/5384079720.

The checksums compared against are found here https://github.com/ACCESS-NRI/access-om3-configs/tree/d60cb09f95af27f0512ef6c6575ca8047be37b48/testing/checksum

Test summary:
🔥 test_repro_historical

@dougiesquire
Collaborator

dougiesquire commented Feb 5, 2026

Some of Claire's inputs are in a project that tm70_ci does not have access to, so I'll run the repro test locally and commit the new checksums.

Note, we'll update this config shortly to use the copies of Claire's files in /g/data/vk83 so this is just for bookkeeping.

@dougiesquire
Collaborator

dougiesquire commented Feb 5, 2026

The checksums I just committed match those I created using Claire's original branch.

UPDATE: this has now been squashed into 8c49aa5, along with the answer-changing changes to the base branch that were reverted for testing.

@dougiesquire dougiesquire force-pushed the 880-dev-MC_4km_jra_ryf+regionalpanan+isf_temp branch 3 times, most recently from 91f710b to a4c54d6 Compare February 5, 2026 10:42
@dougiesquire
Collaborator

Here's the difference between Claire's original branch and this one.

Here's the difference between the dev-MC_4km_jra_ryf+regionalpanan branch and this one.

@dougiesquire dougiesquire force-pushed the 880-dev-MC_4km_jra_ryf+regionalpanan+isf_temp branch 2 times, most recently from 7fffbb1 to 26c3d46 Compare February 6, 2026 03:04
This commit squashes 168 commits made during the original development of this configuration. See #814 for the original 168 commits. The first lines of the original 168 commit messages are as follows:

- Add regional panantarctic configuration (1/12th degree/4km setup) (#689)
- Update regional panantarctic configuration and ensure it runs
- Get rid of some old information in the README
- Add ice shelves
- 2025-08-25 11:49:52: Run 0
- 2025-08-25 11:55:18: Run 0
- Add ice shelf diag to diagtable
- Add ice shelf diag to diagtable, fix name
- 2025-08-25 13:27:37: Run 0
- 2025-08-25 14:26:11: Run 0
- 2025-08-26 08:39:13: Run 0
- 2025-08-26 08:49:57: Run 0
- 2025-08-26 08:58:46: Run 0
- 2025-08-26 14:00:06: Run 0
- 2025-08-27 09:01:37: Run 0
- 2025-08-27 09:30:33: Run 0
- Try exe with fatal error mesh/mask inconsistent turned into a warning
- 2025-08-27 17:39:24: Run 0
- Add err logs
- 2025-08-27 20:10:05: Run 0
- revert atm mesh in nuopc.runconfig to nomask
- check_for_nans = .false.
- Turn Nan checker back on but versboity on too
- 2025-08-28 08:30:37: Run 0
- Add diagnostics of mediator
- 2025-09-04 11:09:55: Run 0
- try pr142-10 and no outputs and check for nans = T
- 2025-09-05 08:54:40: Run 0
- 2025-09-05 11:40:08: Run 0
- 2025-09-05 12:15:25: Run 0
- 2025-09-07 10:45:45: Run 0
- 2025-09-07 10:55:04: Run 0
- 2025-09-08 19:59:17: Run 0
- 2025-09-08 20:12:06: Run 0
- 2025-09-09 11:52:17: Run 0
- 2025-09-09 12:08:58: Run 0
- 2025-09-09 12:24:29: Run 0
- 2025-09-09 12:33:49: Run 0
- 2025-09-09 14:02:22: Run 0
- 2025-09-09 14:29:18: Run 0
- 2025-09-09 15:22:32: Run 0
- 2025-09-09 15:40:05: Run 0
- 2025-09-09 16:17:47: Run 0
- 2025-09-09 16:48:38: Run 0
- 2025-09-09 17:24:31: Run 0
- 2025-09-10 07:44:08: Run 0
- 2025-09-10 13:51:58: Run 0
- 2025-09-10 14:02:08: Run 0
- 2025-09-10 14:04:43: Run 0
- 2025-09-10 15:35:17: Run 0
- 2025-09-10 17:11:44: Run 0
- 2025-09-10 17:22:09: Run 0
- 2025-09-10 17:45:12: Run 0
- 2025-09-10 23:12:56: Run 0
- 2025-09-11 09:00:46: Run 0
- 2025-09-11 10:32:01: Run 0
- 2025-09-11 14:45:03: Run 0
- 2025-09-11 15:38:56: Run 0
- 2025-09-11 16:14:22: Run 0
- 2025-09-11 16:40:13: Run 0
- 2025-09-11 21:49:18: Run 0
- 2025-09-11 22:41:26: Run 0
- 2025-09-12 08:51:30: Run 0
- 2025-09-12 12:36:17: Run 0
- 2025-09-13 18:35:38: Run 0
- 2025-09-15 08:44:18: Run 0
- 2025-09-15 18:25:30: Run 0
- 2025-09-15 21:35:38: Run 0
- 2025-09-15 21:46:45: Run 0
- payu archive: documentation of MOM6 run-time configuration
- Increase timestep to reduce runtime
- 2025-09-16 09:48:00: Run 1
- Drop dt again to 300
- 2025-09-16 11:25:29: Run 1
- 2025-09-16 15:19:34: Run 1
- 2025-09-16 15:39:51: Run 1
- 2025-09-16 17:26:16: Run 1
- 2025-09-16 17:45:49: Run 1
- 2025-09-16 18:02:06: Run 1
- 2025-09-16 18:06:34: Run 1
- 2025-09-16 18:09:12: Run 1
- 2025-09-16 18:59:08: Run 1
- 2025-09-16 19:14:00: Run 1
- 2025-09-16 19:34:04: Run 1
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-16 21:10:07: Run 1
- 2025-09-16 21:32:03: Run 0
- 2025-09-16 21:35:58: Run 0
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-16 22:37:48: Run 0
- 2025-09-16 23:17:44: Run 1
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-17 00:03:55: Run 2
- 2025-09-17 08:10:44: Run 2
- 2025-09-17 08:47:39: Run 2
- 2025-09-17 08:59:32: Run 2
- 2025-09-17 09:12:46: Run 2
- 2025-09-17 09:40:11: Run 2
- 2025-09-17 11:50:41: Run 0
- 2025-09-17 12:33:55: Run 0
- 2025-09-17 13:32:53: Run 0
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-17 14:20:57: Run 1
- 2025-09-17 14:56:47: Run 0
- 2025-09-17 15:49:21: Run 1
- 2025-09-17 16:05:18: Run 1
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-17 17:16:09: Run 2
- 2025-09-17 18:17:02: Run 2
- 2025-09-17 18:31:15: Run 1
- 2025-09-17 21:47:42: Run 1
- 2025-09-17 22:49:39: Run 2
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-18 07:41:42: Run 3
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-18 13:06:52: Run 4
- 2025-09-18 20:13:15: Run 5
- 2025-09-18 21:58:47: Run 5
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-19 09:02:27: Run 0
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-19 15:17:35: Run 1
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-19 21:15:14: Run 2
- payu archive: documentation of MOM6 run-time configuration
- 2025-09-20 02:31:10: Run 3
- 2025-09-20 08:20:46: Run 4
- 2025-09-20 13:56:12: Run 5
- 2025-09-20 19:47:36: Run 6
- 2025-09-21 01:21:12: Run 7
- 2025-09-21 07:06:06: Run 8
- 2025-09-21 12:55:30: Run 9
- 2025-09-21 18:38:29: Run 10
- 2025-09-22 00:34:06: Run 11
- 2025-09-22 06:27:31: Run 12
- 2025-10-01 09:00:41: Run 13
- 2025-10-01 10:44:58: Run 13
- 2025-10-01 14:45:34: Run 13
- payu archive: documentation of MOM6 run-time configuration
- 2025-10-03 12:01:58: Run 14
- 2025-10-03 17:19:01: Run 15
- Update file paths to tm70
- update diag table stuff to use make diag table functionality
- Update config.yaml with restart
- Add ice shelf instructions
- delete old stuff
- Add ice shelf to readme
- replace local exe with prerelease access-om3/pr142-17
- 2025-10-07 11:34:11: Run 0
- remove #override PARALLEL_RESTARTFILES in MOM_override
- Replace MOM_input, MOM_override and MOM_override_IS with the MOM_para…
- 2025-10-17 10:17:00: Run 0
- 2025-10-17 10:26:28: Run 0
- Update some ice shelf parameters and replace ICs, remove restart
- Set up run to start from rest
- update instructions
- Update core count and PE LAYOUT to be 2600 ocean cores https://github…
- Add Kd_interface and remove a few daily diagnostics
- 2025-10-23 13:24:26: Run 0
- payu archive: documentation of MOM6 run-time configuration
- Prepare for second part of first month run
- Testing ice shelf config
- Run spin up with Yamazaki ICs
- Tuning testing
- New control run RYF with somewhat tuned melt parameters and Yamazaki ICs
- 2025-11-27 15:46:01: Run 35
- Run RYF out for 5.5 years
- Delete ice_shelf_instructions.md

--------

Co-authored-by: minghangli-uni <24727729+minghangli-uni@users.noreply.github.com>
Co-authored-by: Dougie Squire <42455466+dougiesquire@users.noreply.github.com>
@dougiesquire dougiesquire force-pushed the 880-dev-MC_4km_jra_ryf+regionalpanan+isf_temp branch from 26c3d46 to 4cf15dc Compare February 6, 2026 03:18
@dougiesquire dougiesquire marked this pull request as ready for review February 6, 2026 03:20
@dougiesquire
Collaborator

@chrisb13, this is ready for review. Could you please take a look? I'll do a merge commit for this one, since it required additional changes requested by Claire.

I'll update input locations, add MPI flags, update the SW penetration scheme, etc. in new PRs.

@dougiesquire dougiesquire force-pushed the 880-dev-MC_4km_jra_ryf+regionalpanan+isf_temp branch 2 times, most recently from 0be5cdb to b1e1493 Compare February 6, 2026 04:31
claireyung and others added 3 commits February 6, 2026 15:34
Update to JRA v1-6 as requested by Claire Yung
Update to access-om3/pr142-36 so ICE_SHELF_USTAR_FROM_VEL_BUGFIX parameter is available
Small tidy for consistency with other configs
@chrisb13
Collaborator Author

@chrisb13, this is ready for review.

Thanks @dougiesquire, sorry for my delay. Is it now in the state that you'd like me to review? I ask because it seems there have been two force pushes after your comment, so just double checking.

@dougiesquire
Collaborator

@chrisb13 yup - ready for review

@helenmacdonald
Contributor

@chrisb13 and I are reviewing now - looks good, thanks!
A couple of questions:

  1. Looks like DT_THERM was removed from MOM_input. This means that DT_THERM will be set to DT, which is 150 - which would slow the model down a bit. Is this by design @claireyung?
  2. The non ice-shelf version is running off om3 2025.08.001, whereas the isf version is using prerelease pr142-36. Is there a plan to use a released version for the isf one?

To note:

  1. We need to update file paths in config.yaml to point to files under the prerelease folder
  2. There are some diagnostics added in the ice-shelf configuration that could be worth adding to the non ice-shelf configuration
  3. The non ice-shelf config uses an auto mask table, while the ice-shelf config explicitly defines the layout. We could consider using the explicit layout for the non ice-shelf config too

@claireyung
Collaborator

Hi @helenmacdonald

Yes, in my experience the thermodynamic timestep needs to be the same as DT, or not much bigger, in the ice shelf config, otherwise it crashes fairly often (more details here: claireyung/mom6-panAn-iceshelf-tools#49). However, note that DT=150 is only necessary for the first month, with the unstable initialisation of ice shelves. After the first month, I recommend swapping to DT=400, DT_THERM=400 and a coupling timestep of 400, which speeds things up a bit and is pretty stable. This is in the running instructions - https://access-om3-configs.access-hive.org.au/pr-previews/573/configurations/pan-Antartic/run_panan_isf/ (maybe the link to the running instructions can be made clearer somehow? It is not the only change that is needed after the first month!)
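For reference, the post-first-month timestep change described above would look roughly like this as MOM parameter overrides (a sketch only: parameter names are standard MOM6, values as quoted above; see the running instructions for the full set of changes, including the coupling timestep, which lives in nuopc.runconfig rather than here):

```
! Hypothetical MOM_override excerpt for after the first month of spin-up.
! DT is the (baroclinic) dynamics timestep and DT_THERM the thermodynamic
! timestep, both in seconds.
#override DT = 400.0
#override DT_THERM = 400.0
```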

With the executable - to run, the ice shelf version requires changes to MOM6 that aren't yet in the main ACCESS-NRI MOM6 branch. Hopefully they make it there at some point, but some of the changes are a bit hacky at the moment and might need some improvement. Some comments here ACCESS-NRI/MOM6#38

@helenmacdonald
Contributor

Thanks for your comprehensive answers @claireyung!

With the executable - to run, the ice shelf version requires changes to MOM6 that aren't yet in the main ACCESS-NRI MOM6 branch. Hopefully they make it there at some point, but some of the changes are a bit hacky at the moment and might need some improvement.

Good to know, thanks. As an aside, my understanding is that these changes still need some cleaning up before being merged and eventually ending up in a released executable. Do we have a plan for this? Is it you, or someone at ACCESS-NRI, who is responsible for this? Similarly, has someone taken on the responsibility for pushing some of these back to MOM central (not saying you have to, just wanting to make sure it is clear who should be doing it!)?

@chrisb13 - I think we are ready to merge? We will need to remember to update the executable at a later date.

helenmacdonald
helenmacdonald previously approved these changes Feb 16, 2026
@chrisb13
Collaborator Author

chrisb13 commented Feb 16, 2026

Thanks @claireyung and @helenmacdonald

note that DT=150 is only necessary for the first month with the unstable initialisation of ice shelves. After the first month, I recommend swapping to DT=400 and DT_THERM = 400 and coupling timestep 400 which speed things up a bit and is pretty stable

My memory from chats with @dougiesquire is that our plan is to release the panan+noisf from a cold start and this config from a warm start (given its expense and the challenges of initialisation). Is my memory correct @dougiesquire?

If that's the case, then I think we should move over all the defaults to DT=400 etc.

It is not the only change that is needed after the first month!

Ok, so looking at the docs, I see changing the restart and this:

Run - ensure DT is 400 (ish), input.nml has input_filename = 'n' commented out, CLOCK time in nuopc.runconfig is months

Is there anything else @claireyung?

maybe the link to the running instructions can be made clearer somehow

When we do an alpha release, we can include something in the announcement that highlights the docs, if useful? (We don't typically do a "formal" announcement for alphas, but we could do a forum post / put something in a GitHub release note.)

As an aside, my understanding is that these changes still need some cleaning up before being merged and eventually ending up in a released executable. Do we have a plan for this? Is it you, or someone at ACCESS-NRI responsible for this? Similarly, has someone taken on the responsibity for pushing some of these back to MOM central

I think @dougiesquire is aware of this. We've been chatting about which changes may eventually go upstream too (Angus is likely putting a MOM-ocean PR together -- so there is likely an upcoming opportunity, fyi @claireyung). Technically, I think we can do an alpha release off a pre-release, but I leave it to @dougiesquire and @anton-seaice as to whether they'd prefer it based off an actual released build. I note that it's not currently on @dougiesquire's alpha to-do list.

@chrisb13 - I think we are ready to merge? We will need to remember to update the executable at a later date.

Would be helpful to get clarity on the "to note" points we raised here first. Or at least add them to the mega-list.

@helenmacdonald
Contributor

Would be helpful to get clarity on the "to note" points we raised here first. Or at least add them to the mega-list.

I have added them to the mega-list

@dougiesquire
Collaborator

My memory from chats with @dougiesquire is that our plan is to release the panan+noisf from a cold start and this config from a warm start (given its expense and the challenges of initialisation). Is my memory correct @dougiesquire?

Nope. @claireyung's original branch did start from a restart so that is what I originally squashed/rebased to make sure that I could reproduce her answers. But, @claireyung requested on Zulip that the configuration be modified to start cold using her instructions here. I made this change in this commit.

  1. We need to update file paths in config.yaml to point to files under the prerelease folder

Yes, that will be done in a follow-up PR, along with a few other changes. E.g. see the follow-up PR for the non-isf configuration here: #1118

2. There are some diagnostics added in the ice-shelf configuration that could be worth adding to the non ice-shelf configuration

Other than changing a few variables to use their CMIP names, I just copied what Claire had in her diag_table, as she knows the configuration best. In a general sense, different configurations will call for different diagnostics (e.g. the ice shelf diagnostics are obviously only relevant in the ice shelf configuration). If you think there's a need to change the diagnostics, this should be done in another PR with an associated issue that describes the reason for the change.
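For illustration, renaming a field to its CMIP name in an FMS diag_table happens in the output-name column of a field entry; something like the following (module, field and file names here are hypothetical, not the actual entries in this config):

```
# "module", "field_name", "output_name", "file_name", "time_sampling", "reduction", "region", packing
"ocean_model", "temp", "thetao", "ocean_month", "all", "mean", "none", 2
"ocean_model", "SST",  "tos",    "ocean_month", "all", "mean", "none", 2
```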

3. Non ice-shelf config uses automask table and ice-shelf explicitly defines the layout. We could consider doing the explicit layout for the non ice shelf config

Maybe. But not urgent, and again not in this PR. Please feel free to open an issue.

Good to know, thanks. As an aside, my understanding is that these changes still need some cleaning up before being merged and eventually ending up in a released executable. Do we have a plan for this? Is it you, or someone at ACCESS-NRI responsible for this? Similarly, has someone taken on the responsibity for pushing some of these back to MOM central (not saying you have to, just wanting to make sure it is clear who should be doing it!)?

Yes, I am aware of this as @chrisb13 says and it's on my todo list to get these changes into one of our release branches and eventually upstream. But, as @claireyung mentions, there's a little bit of work to do before they are ready. Alpha releases can use prerelease builds so I don't think this needs to hold us up.

@dougiesquire
Collaborator

dougiesquire commented Feb 17, 2026

Hmmm, actually after the cold start and parameter updates requested by @claireyung, this now fails after a few hours with:

WARNING from PE    76: MOM_diabatic_aux.F90, applyBoundaryFluxesInOut(): Mass created. x,y,dh=      -4.938E+01     -8.196E+01      6.006E-12

WARNING from PE    76: MOM_diabatic_aux.F90, applyBoundaryFluxesInOut(): Mass created. x,y,dh=      -4.929E+01     -8.196E+01      4.487E-08

[Lots more of ^these]

FATAL from PE   491:  Could not find target coordinate   1036.52521775451      in get_polynomial_coordinate. This is caused by an inconsistent interpolant (perhaps not monotonically increasing):   1036.52521775451        1036.31691947091                          NaN

Image              PC                Routine            Line        Source
access-om3-MOM6-C  00000000024E4472  mpp_error_basic            80  mpp_util_mpi.inc
access-om3-MOM6-C  0000000001409A58  build_and_interpo         453  regrid_interp.F90
access-om3-MOM6-C  0000000001404CC5  build_rho_column          140  coord_rho.F90
access-om3-MOM6-C  00000000013F4976  diag_remap_update         361  MOM_diag_remap.F90
access-om3-MOM6-C  00000000013E758D  diag_update_remap        3632  MOM_diag_mediator.F90
access-om3-MOM6-C  00000000018816A6  diabatic_ale             1605  MOM_diabatic_driver.F90
access-om3-MOM6-C  000000000187412B  diabatic                  413  MOM_diabatic_driver.F90
access-om3-MOM6-C  00000000017C84E1  step_mom_thermo          1674  MOM.F90
access-om3-MOM6-C  00000000017A894F  step_mom                 1007  MOM.F90
access-om3-MOM6-C  00000000012C182C  update_ocean_mode         642  mom_ocean_model_nuopc.F90
access-om3-MOM6-C  00000000011EDD87  modeladvance             1979  mom_cap.F90
libesmf.so         00001479DD9C8417  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         00001479DD9C82B5  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         00001479DDB767FB  esmf_attachmethod     Unknown  Unknown
libesmf.so         00001479DE2DC9EE  nuopc_modelbase_m     Unknown  Unknown
libesmf.so         00001479DD7205A8  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         00001479DD71FFEC  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         00001479DDA7D584  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         00001479DD720C80  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         00001479DDC7F2B8  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         00001479DDEFC409  esmf_gridcompmod_     Unknown  Unknown
libesmf.so         00001479DE2966A5  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         00001479DE2982C8  nuopc_driver_mp_e     Unknown  Unknown
libesmf.so         00001479DD9C8417  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         00001479DD9C82B5  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         00001479DDB767FB  esmf_attachmethod     Unknown  Unknown
libesmf.so         00001479DE2951EB  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         00001479DD7205A8  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         00001479DD71FFEC  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         00001479DDA7D584  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         00001479DD720C80  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         00001479DDC7F2B8  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         00001479DDEFC409  esmf_gridcompmod_     Unknown  Unknown
libesmf.so         00001479DE2966A5  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         00001479DE2982C8  nuopc_driver_mp_e     Unknown  Unknown
libesmf.so         00001479DD9C8417  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         00001479DD9C82B5  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         00001479DDB767FB  esmf_attachmethod     Unknown  Unknown
libesmf.so         00001479DE2951EB  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         00001479DD7205A8  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         00001479DD71FFEC  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         00001479DDA7D584  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         00001479DD720C80  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         00001479DDC7F2B8  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         00001479DDEFC409  esmf_gridcompmod_     Unknown  Unknown
access-om3-MOM6-C  00000000011B3794  esmapp                    141  esmApp.F90
access-om3-MOM6-C  0000000000A6874D  Unknown               Unknown  Unknown
libc-2.28.so       00001479D9935865  __libc_start_main     Unknown  Unknown
access-om3-MOM6-C  0000000000A6866E  Unknown               Unknown  Unknown
--------------------------------------------------------------------------

I didn't notice this previously because it does run long enough to update the checksums.

Turning off the diagnostics on rho2 coords gets us past this error, but then it fails a bit later with:

WARNING from PE   491: MOM_tracer_diabatic.F90, applyTracerBoundaryFluxesInOut(): Tracer created. x,y,dh=      -8.062E+01     -7.271E+01            NaN

FATAL from PE   491: NaN in input field of reproducing_EFP_sum(_2d).

Image              PC                Routine            Line        Source
access-om3-MOM6-C  00000000024E4472  mpp_error_basic            80  mpp_util_mpi.inc
access-om3-MOM6-C  00000000013AD342  reproducing_efp_s         210  MOM_coms.F90
access-om3-MOM6-C  00000000013AE1A1  reproducing_sum_2         308  MOM_coms.F90
access-om3-MOM6-C  0000000001491EEC  global_volume_mea         378  MOM_spatial_means.F90
access-om3-MOM6-C  0000000001EE83C5  calculate_diagnos         462  MOM_diagnostics.F90
access-om3-MOM6-C  00000000017AD3A4  step_mom                 1052  MOM.F90
access-om3-MOM6-C  00000000012C182C  update_ocean_mode         642  mom_ocean_model_nuopc.F90
access-om3-MOM6-C  00000000011EDD87  modeladvance             1979  mom_cap.F90
libesmf.so         000014F220CAF417  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         000014F220CAF2B5  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         000014F220E5D7FB  esmf_attachmethod     Unknown  Unknown
libesmf.so         000014F2215C39EE  nuopc_modelbase_m     Unknown  Unknown
libesmf.so         000014F220A075A8  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         000014F220A06FEC  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         000014F220D64584  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         000014F220A07C80  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         000014F220F662B8  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         000014F2211E3409  esmf_gridcompmod_     Unknown  Unknown
libesmf.so         000014F22157D6A5  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         000014F22157F2C8  nuopc_driver_mp_e     Unknown  Unknown
libesmf.so         000014F220CAF417  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         000014F220CAF2B5  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         000014F220E5D7FB  esmf_attachmethod     Unknown  Unknown
libesmf.so         000014F22157C1EB  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         000014F220A075A8  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         000014F220A06FEC  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         000014F220D64584  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         000014F220A07C80  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         000014F220F662B8  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         000014F2211E3409  esmf_gridcompmod_     Unknown  Unknown
libesmf.so         000014F22157D6A5  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         000014F22157F2C8  nuopc_driver_mp_e     Unknown  Unknown
libesmf.so         000014F220CAF417  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         000014F220CAF2B5  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         000014F220E5D7FB  esmf_attachmethod     Unknown  Unknown
libesmf.so         000014F22157C1EB  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         000014F220A075A8  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         000014F220A06FEC  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         000014F220D64584  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         000014F220A07C80  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         000014F220F662B8  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         000014F2211E3409  esmf_gridcompmod_     Unknown  Unknown
access-om3-MOM6-C  00000000011B3794  esmapp                    141  esmApp.F90
access-om3-MOM6-C  0000000000A6874D  Unknown               Unknown  Unknown
libc-2.28.so       000014F21CC1C865  __libc_start_main     Unknown  Unknown
access-om3-MOM6-C  0000000000A6866E  Unknown               Unknown  Unknown
--------------------------------------------------------------------------

@dougiesquire
Collaborator

> FATAL from PE   491: NaN in input field of reproducing_EFP_sum(_2d).

This error is introduced after updating to access-om3/pr142-36 (from access-om3/pr142-26). @claireyung, have you seen this before with these configs?

@dougiesquire
Collaborator

The error occurs when trying to write global temperature diagnostics (thetaogm and tosga are configured). If I turn these off, I get:

WARNING from PE   491: MOM_tracer_diabatic.F90, applyTracerBoundaryFluxesInOut(): Tracer created. x,y,dh=      -8.062E+01     -7.271E+01            NaN

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 67 in communicator MPI COMMUNICATOR 3 CREATE FROM 0
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
forrtl: error (78): process killed (SIGTERM)
Image              PC                Routine            Line        Source
libpthread-2.28.s  00001499F0B09990  Unknown               Unknown  Unknown
libuct_ib.so.0.0.  00001499D217C701  Unknown               Unknown  Unknown
libucp.so.0.0.0    00001499EC3EF3CA  ucp_worker_progre     Unknown  Unknown
libopen-pal.so.40  00001499EC877923  opal_progress         Unknown  Unknown
libopen-pal.so.40  00001499EC877AD5  ompi_sync_wait_mt     Unknown  Unknown
libmpi.so.40.30.7  00001499F1AE1C79  ompi_request_defa     Unknown  Unknown
libmpi.so.40.30.7  00001499F1A958F2  MPI_Wait              Unknown  Unknown
libesmf.so         00001499F48BA01D  _ZN5ESMCI3VMK8com     Unknown  Unknown
libesmf.so         00001499F4407888  _ZN5ESMCI3XXE4exe     Unknown  Unknown
libesmf.so         00001499F4406FE2  _ZN5ESMCI3XXE4exe     Unknown  Unknown
libesmf.so         00001499F4406FE2  _ZN5ESMCI3XXE4exe     Unknown  Unknown
libesmf.so         00001499F43A1DB2  _ZN5ESMCI11ArrayB     Unknown  Unknown
libesmf.so         00001499F43A97BA  c_esmc_arraybundl     Unknown  Unknown
libesmf.so         00001499F493B56A  esmf_arraybundlem     Unknown  Unknown
libesmf.so         00001499F4B23319  esmf_fieldbundlem     Unknown  Unknown
libesmf.so         00001499F5087824  nuopc_connector_m     Unknown  Unknown
libesmf.so         00001499F45465A8  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         00001499F4545FEC  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         00001499F48A3584  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         00001499F4546C80  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         00001499F4AA52B8  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         00001499F4ACA6E9  esmf_cplcompmod_m     Unknown  Unknown
libesmf.so         00001499F50BCEDC  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         00001499F50BDFFE  nuopc_driver_mp_e     Unknown  Unknown
libesmf.so         00001499F47EE417  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         00001499F47EE2B5  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         00001499F499C7FB  esmf_attachmethod     Unknown  Unknown
libesmf.so         00001499F50BB1EB  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         00001499F45465A8  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         00001499F4545FEC  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         00001499F48A3584  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         00001499F4546C80  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         00001499F4AA52B8  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         00001499F4D22409  esmf_gridcompmod_     Unknown  Unknown
libesmf.so         00001499F50BC6A5  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         00001499F50BE2C8  nuopc_driver_mp_e     Unknown  Unknown
libesmf.so         00001499F47EE417  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         00001499F47EE2B5  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         00001499F499C7FB  esmf_attachmethod     Unknown  Unknown
libesmf.so         00001499F50BB1EB  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         00001499F45465A8  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         00001499F4545FEC  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         00001499F48A3584  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         00001499F4546C80  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         00001499F4AA52B8  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         00001499F4D22409  esmf_gridcompmod_     Unknown  Unknown
access-om3-MOM6-C  00000000011B3794  esmapp                    141  esmApp.F90
access-om3-MOM6-C  0000000000A6874D  Unknown               Unknown  Unknown
libc-2.28.so       00001499F075B865  __libc_start_main     Unknown  Unknown
access-om3-MOM6-C  0000000000A6866E  Unknown               Unknown  Unknown

which is probably just the same NaN issue manifesting in a different place.

@claireyung
Collaborator

Hi @dougiesquire yeah I get these...
Usually making DT/DT_THERM smaller helps. But if DT is already 150, making it smaller means it probably won't finish in the normal walltime limit for this number of cores.

You could try ICE_SHELF_USTAR_FROM_VEL_BUGFIX = False. For some reason, since I made this True it seems a lot more sensitive than it was before (in my current run I get very big error logs with all those warnings; sometimes it eventually crashes and I bring the timestep down, and sometimes it just keeps running).

Alternatively I believe GFDL people recommend RESCALE_STRONG_DRAG = True and BT_STRONG_DRAG = True NOAA-GFDL/MOM6#971 (comment) which may be relevant. (I haven't tested it yet in this config)
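Collecting the knobs mentioned above, a hypothetical MOM_override fragment might look like this (a sketch only — the DT/DT_THERM values are illustrative, not tested settings for this config):

```
! Hypothetical MOM_override fragment for the parameters discussed above.
! Timestep values are illustrative; only flip what you actually want to test.
#override DT = 120.0                              ! [s] shorter baroclinic timestep
#override DT_THERM = 120.0                        ! [s] keep thermodynamic step in lockstep
#override ICE_SHELF_USTAR_FROM_VEL_BUGFIX = False ! revert the more sensitive behaviour
#override RESCALE_STRONG_DRAG = True              ! per NOAA-GFDL/MOM6#971
#override BT_STRONG_DRAG = True
```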

@dougiesquire
Collaborator

Thanks @claireyung for the suggestions. That issue looks very relevant.

Alternatively I believe GFDL people recommend RESCALE_STRONG_DRAG = True and BT_STRONG_DRAG = True NOAA-GFDL/MOM6#971 (comment) which may be relevant. (I haven't tested it yet in this config)

What extent of testing would justify adding these? If this configuration runs with those changes is that sufficient to include them? Are there some outputs/diagnostics we should be looking at (sorry, I know nothing about ice shelves)?

@claireyung
Collaborator

I think if it runs, I'd be happy :)
Normally I'd look at maps of melt rate and a T-S plot to make sure the thermodynamics is right. If you have some output I'd be happy to make an evaluation notebook of that data with some plots and share it :)
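The sanity checks behind those plots can be sketched offline with numpy alone. This is a minimal sketch on synthetic data, assuming melt rate in m/yr and a commonly used linearised freezing point (the coefficients here are the standard approximation, not MOM6's exact formulation — all names and values are illustrative):

```python
import numpy as np

def freezing_point(salt, p_dbar=0.0):
    """Linearised seawater freezing point (deg C). Coefficients are the
    common approximation, not MOM6's exact TEOS-10 form."""
    return -0.0573 * salt + 0.0832 - 7.53e-4 * p_dbar

def melt_ts_summary(melt, temp, salt, area):
    """Area-weighted mean melt rate, plus the minimum margin of cavity
    temperature above the local surface freezing point (should be
    positive where melting occurs)."""
    w = area / area.sum()
    mean_melt = float((melt * w).sum())
    margin = temp - freezing_point(salt)
    return mean_melt, float(margin.min())

# Synthetic 2x2 cavity: uniform cell areas, melt in m/yr, T in deg C, S in psu.
melt = np.array([[1.0, 2.0], [3.0, 4.0]])
temp = np.array([[-1.8, -1.5], [-1.0, 0.0]])
salt = np.full((2, 2), 34.5)
area = np.ones((2, 2))

mean_melt, min_margin = melt_ts_summary(melt, temp, salt, area)
print(mean_melt)        # 2.5 for this uniform-area example
print(min_margin > 0)   # True: coldest cell is still above freezing
```

An evaluation notebook would do the same reductions on the real melt/T/S diagnostics and then plot the maps and T-S scatter.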

@dougiesquire
Collaborator

Thanks heaps @claireyung.

Setting ICE_SHELF_USTAR_FROM_VEL_BUGFIX = False doesn't help the issue unfortunately.

Alternatively I believe GFDL people recommend RESCALE_STRONG_DRAG = True and BT_STRONG_DRAG = True NOAA-GFDL/MOM6#971 (comment) which may be relevant. (I haven't tested it yet in this config)

For testing this, I've set up a new branch in the ACCESS-NRI MOM6 fork that is based on the latest NOAA-GFDL/MOM6:dev/gfdl and includes our ACCESS changes and your ice shelf changes. The branch is called dev/gfdl+access+isf and it is currently being deployed in an ACCESS-OM3 prerelease using this PR.

Just letting you know as that branch and prerelease could potentially replace ACCESS-NRI/MOM6#14 and ACCESS-NRI/ACCESS-OM3#142 if they suit your needs.

@dougiesquire
Collaborator

dougiesquire commented Feb 19, 2026

I finally managed to get an ACCESS-OM3 prerelease deployed using the dev/gfdl+access+isf branch. Unfortunately, even with BT_STRONG_DRAG = True and RESCALE_STRONG_DRAG = True, this still crashes within the first few hours with the same error:

FATAL from PE   801: NaN in input field of reproducing_EFP_sum(_2d).

Image              PC                Routine            Line        Source
access-om3-MOM6-C  000000000256E2B2  mpp_error_basic            80  mpp_util_mpi.inc
access-om3-MOM6-C  000000000179E652  reproducing_efp_s         211  MOM_coms.F90
access-om3-MOM6-C  000000000179F4B1  reproducing_sum_2         309  MOM_coms.F90
access-om3-MOM6-C  000000000172060C  global_volume_mea         378  MOM_spatial_means.F90
access-om3-MOM6-C  00000000016E34C9  calculate_diagnos         466  MOM_diagnostics.F90
access-om3-MOM6-C  00000000012AAFA3  step_mom                 1045  MOM.F90
access-om3-MOM6-C  000000000126B2AC  update_ocean_mode         639  mom_ocean_model_nuopc.F90
access-om3-MOM6-C  000000000121B127  modeladvance             1990  mom_cap.F90
libesmf.so         0000151A97F35363  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         0000151A97F34FFE  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         0000151A98110BEB  esmf_attachmethod     Unknown  Unknown
libesmf.so         0000151A9889D964  nuopc_modelbase_m     Unknown  Unknown
libesmf.so         0000151A97C34E0E  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         0000151A97C34C37  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         0000151A98003B5F  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         0000151A97C35ACD  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         0000151A98218C75  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         0000151A98495659  esmf_gridcompmod_     Unknown  Unknown
libesmf.so         0000151A98856A35  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         0000151A98858658  nuopc_driver_mp_e     Unknown  Unknown
libesmf.so         0000151A97F35363  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         0000151A97F34FFE  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         0000151A98110BEB  esmf_attachmethod     Unknown  Unknown
libesmf.so         0000151A98855311  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         0000151A97C34E0E  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         0000151A97C34C37  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         0000151A98003B5F  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         0000151A97C35ACD  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         0000151A98218C75  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         0000151A98495659  esmf_gridcompmod_     Unknown  Unknown
libesmf.so         0000151A98856A35  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         0000151A98858658  nuopc_driver_mp_e     Unknown  Unknown
libesmf.so         0000151A97F35363  _ZN5ESMCI11Method     Unknown  Unknown
libesmf.so         0000151A97F34FFE  c_esmc_methodtabl     Unknown  Unknown
libesmf.so         0000151A98110BEB  esmf_attachmethod     Unknown  Unknown
libesmf.so         0000151A98855311  nuopc_driver_mp_r     Unknown  Unknown
libesmf.so         0000151A97C34E0E  _ZN5ESMCI6FTable1     Unknown  Unknown
libesmf.so         0000151A97C34C37  ESMCI_FTableCallE     Unknown  Unknown
libesmf.so         0000151A98003B5F  _ZN5ESMCI2VM5ente     Unknown  Unknown
libesmf.so         0000151A97C35ACD  c_esmc_ftablecall     Unknown  Unknown
libesmf.so         0000151A98218C75  esmf_compmod_mp_e     Unknown  Unknown
libesmf.so         0000151A98495659  esmf_gridcompmod_     Unknown  Unknown
access-om3-MOM6-C  00000000011E0994  esmapp                    141  esmApp.F90
access-om3-MOM6-C  0000000000A9360D  Unknown               Unknown  Unknown
libc-2.28.so       0000151A93E24865  __libc_start_main     Unknown  Unknown
access-om3-MOM6-C  0000000000A9352E  Unknown               Unknown  Unknown
--------------------------------------------------------------------------

@claireyung
Collaborator

claireyung commented Feb 19, 2026

Bummer. I'm sorry this has been so painful @dougiesquire

It's weird because it worked fine with the very similar IAF config. My records of the first month in that IAF config say DT=150 worked. Some options are: try DT=120 (I've done this last year with an RYF when it crashed with DT=150, which I'm pretty sure was the same layout, and it just finished within 10 hours), or we could try dropping the timesteps even smaller and run the first month in two segments (which is annoying for diagnostics, but at least may not crash).

Did I muck up something with the files? I feel like I saw a notification where you said one of the hashes had changed, but I can't find it now. (Possibly I was hallucinating.)

Also just clarifying, do you mean it crashes after a few hours of model time or real time?

@dougiesquire
Collaborator

dougiesquire commented Feb 20, 2026

Bummer. I'm sorry this has been so painful @dougiesquire

No stress at all :)

Some options are: try DT=120 (I've done this last year with an RYF when it crashed with DT=150, which I'm pretty sure was the same layout, and it just finished within 10 hours), or we could try dropping the timesteps even smaller and run the first month in two segments (which is annoying for diagnostics, but at least may not crash)

I've tried reducing the time step (to both 120s and 90s) without success unfortunately.

Also just clarifying, do you mean it crashes after a few hours of model time or real time?

Model time. Using the dev/gfdl+access+isf branch (access-om3/pr189-6) with BT_STRONG_DRAG = True and RESCALE_STRONG_DRAG = True, the model crashes on the 87th time step (i.e. after only a few hours of model time).
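For reference, the step-to-model-time conversion behind these crash times is just steps × DT (assuming DT = 150 s throughout, which is an assumption on my part):

```python
# Back-of-envelope conversion from crash step count to model hours.
def model_hours(steps, dt_s):
    """Model time elapsed after `steps` baroclinic timesteps of `dt_s` seconds."""
    return steps * dt_s / 3600.0

print(model_hours(87, 150))   # 3.625 h of model time at DT = 150 s
print(model_hours(225, 150))  # 9.375 h
```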

Did I muck up something with files - I feel like I saw a notification where you said one of the hashes had changed, but I can't find it now. (Possibly I was hallucinating)

I'm still using your original files in this config (i.e. not the ones on vk83). But comparing to the runlog commit you provided for your IAF run (thanks!), that is one of the main differences. Possibly by using your original branch, I'm using an out-of-date input? I'll try updating that now.

See #880 (comment) for why the md5 hash changed for the OBC forcing
@dougiesquire
Collaborator

The crash still occurs using the input files on vk83 (answers don't change at all). Using access-om3/pr142-36, the model now crashes on the 225th time step (i.e. ~9.5 hours).

There's now very little difference between this configuration and the IAF one @claireyung successfully ran. The main remaining differences are:

  • RYF vs IAF
  • This configuration uses /g/data/vk83/prerelease/configurations/inputs/access-om3/panan.4km/2026.01.08/OBC37S_forcing_access_yr2_4km.nc for OBC forcing, whereas the IAF config uses /g/data/x77/cy8964/mom6/input/input-8km/ryf_gregorian_1980-1984_forcing_access_yr2_8km_fill.nc.

I've checked that the additional MOM parameter changes that were brought in by the rebase are not responsible for the crash.

@claireyung
Collaborator

Hmm. That is quite early to crash. Thanks for the comparison.

As for the boundary forcing file, I just copied the RYF version to make the Gregorian IAF one and changed the dates (interpolating for Feb 29 in leap years). So it shouldn't have caused the difference, and it looks like the errors were coming from ice-shelfy latitudes, not the northern boundary?

Just confirming: by checking it's not the parameter changes brought in by the rebase, you mean that the changes to USE_RIVER_HEAT_CONTENT, USE_CALVING_HEAT_CONTENT, MAX_TR_DIFFUSION_CFL, ENTHALPY_FROM_COUPLER, VELOCITY_TOLERANCE and LOTW_BBL_ANSWER_DATE are definitely not the problem?

@dougiesquire
Collaborator

Just confirming: by checking it's not the parameter changes brought in by the rebase, you mean that the changes to USE_RIVER_HEAT_CONTENT, USE_CALVING_HEAT_CONTENT, MAX_TR_DIFFUSION_CFL, ENTHALPY_FROM_COUPLER, VELOCITY_TOLERANCE and LOTW_BBL_ANSWER_DATE are definitely not the problem?

Yup. With access-om3/pr142-36, if I revert all these parameters (and CORIOLIS_SCHEME) to the values you have in the IAF config, it fails on the 215th time step with:

MOM_forcing_type, forcing_SinglePointPrint: Called from applyBoundaryFluxesInOut (grounding)
MOM_forcing_type, forcing_SinglePointPrint: lon,lat =      -4.646E+01     -8.196E+01
MOM_forcing_type, forcing_SinglePointPrint: ustar =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: tau_mag =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: buoy is not associated.
MOM_forcing_type, forcing_SinglePointPrint: sw =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: sw_vis_dir =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: sw_vis_dif =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: sw_nir_dir =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: sw_nir_dif =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: lw =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: latent =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: latent_evap_diag =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: latent_fprec_diag =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: latent_frunoff_diag =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: latent_frunoff_glc_diag =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: sens =      -3.176E+01
MOM_forcing_type, forcing_SinglePointPrint: evap =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: lprec =       8.568E-05
MOM_forcing_type, forcing_SinglePointPrint: fprec =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: vprec =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: seaice_melt =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: seaice_melt_heat =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: p_surf =       9.180E+06
MOM_forcing_type, forcing_SinglePointPrint: salt_flux =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: BBL_tidal_dis =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: ustar_tidal =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: lrunoff =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: lrunoff_glc =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: frunoff =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: frunoff_glc =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: heat_content_lrunoff =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: heat_content_lrunoff_glc =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: heat_content_frunoff =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: heat_content_frunoff_glc =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: heat_content_lprec =      -7.111E-01
MOM_forcing_type, forcing_SinglePointPrint: heat_content_fprec =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: heat_content_vprec =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: heat_content_cond =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: heat_content_massout =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: heat_content_evap is not associated.
MOM_forcing_type, forcing_SinglePointPrint: heat_content_massout =       0.000E+00
MOM_forcing_type, forcing_SinglePointPrint: heat_content_massin =      -7.056E-01

WARNING from PE    77: MOM_diabatic_aux.F90, applyBoundaryFluxesInOut(): Mass created. x,y,dh=      -4.646E+01     -8.196E+01      1.430E-11


WARNING from PE   491: MOM_tracer_diabatic.F90, applyTracerBoundaryFluxesInOut(): Tracer created. x,y,dh=      -8.062E+01     -7.271E+01            NaN


FATAL from PE   491: NaN in input field of reproducing_EFP_sum(_2d).
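Pinning down where a NaN like this first appears can also be done offline on a history or restart dump. Here is a minimal numpy-only sketch of the idea on a synthetic field (the coordinates and field name are made up for illustration); it mirrors the `x,y,dh= ... NaN` style of the warnings above:

```python
import numpy as np

def locate_nans(field, lon, lat):
    """Return (lon, lat) pairs for every NaN cell in a 2D field.
    `lon`/`lat` are 2D coordinate arrays matching `field`'s shape."""
    jj, ii = np.nonzero(np.isnan(field))
    return [(float(lon[j, i]), float(lat[j, i])) for j, i in zip(jj, ii)]

# Synthetic 3x3 tracer field with one poisoned cell (coordinates invented).
lon, lat = np.meshgrid(np.linspace(-81.0, -80.0, 3),
                       np.linspace(-73.0, -72.0, 3))
tracer = np.zeros((3, 3))
tracer[1, 2] = np.nan

print(locate_nans(tracer, lon, lat))  # [(-80.0, -72.5)]
```

On real output you would open the last written history/restart file, run this over the suspect tracer and thickness fields, and compare the reported locations against the lon/lat in the `forcing_SinglePointPrint` dump.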

@dougiesquire
Collaborator

I was able to run @claireyung's IAF config without issue (for one day). But if I update only the datm.streams.xml and drof.streams.xml files to use JRA v1.6 RYF forcing, the model crashes with NaN in input field of reproducing_EFP_sum(_2d) after 262 time steps.

With JRA v1.4 RYF forcing I can run for at least 1 day. I'm trying the full month at the moment.
