
Add zero_gradient and linear_extrap boundary conditions#1094

Merged
apcraig merged 6 commits into CICE-Consortium:main from apcraig:newbcs
Mar 25, 2026

Conversation

@apcraig
Contributor

@apcraig apcraig commented Feb 17, 2026

PR checklist

  • Short (1 sentence) summary of your PR:
    Add zero_gradient and linear_extrap boundary conditions
  • Developer(s):
    apcraig
  • Suggest PR reviewers from list in the column to the right.
  • Please copy the PR test results link or provide a summary of testing completed below.
    All tests pass and bit-for-bit as expected. https://github.com/CICE-Consortium/Test-Results/wiki/cice_by_hash_forks#0e59535ce2009a3175e67156793b3ec83f2a3c69
  • How much do the PR code changes differ from the unmodified code?
    • bit for bit
    • different at roundoff level
    • more substantial
  • Does this PR create or have dependencies on Icepack or any other models?
    • Yes
    • No
  • Does this PR update the Icepack submodule? If so, the Icepack submodule must point to a hash on Icepack's main branch.
    • Yes
    • No
  • Does this PR add any new test cases?
    • Yes
    • No
  • Is the documentation being updated? ("Documentation" includes information on the wiki or in the .rst files from doc/source/, which are used to create the online technical docs at https://readthedocs.org/projects/cice-consortium-cice/. A test build of the technical docs will be performed as part of the PR testing.)
    • Yes
    • No, does the documentation need to be updated at a later time?
      • Yes
      • No
  • Please document the changes in detail, including why the changes are made. This will become part of the PR commit log.

Add zero_gradient and linear_extrap boundary conditions

This provides options to set the boundary conditions around the edge for a regional grid. Only halo values on the outside halo are affected. This is implemented largely in the HaloUpdates in ice_boundary.F90, in both the mpi and serial implementations. The boundary conditions are computed locally for each block and don't require any explicit communications. They are implemented consistently, allowing mixed boundary conditions in the ew and ns directions. The zero_gradient boundary condition sets the halo equal to the values on the boundary of the domain, basically zero gradient into the halo. The linear_extrap boundary condition sets the halo values such that the gradient across the halo matches the gradient on the edge of the active domain, basically constant gradient.

The zero_gradient and linear_extrap boundary conditions have been implemented as an interim step toward a regional grid capability. Until restoring options are complete and the regional capability is fully tested, these boundary conditions may produce nonphysical values such as negative ice thickness.
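The two fills can be sketched compactly. The following is an illustrative NumPy model of the idea on a single block (the function name and layout are assumptions for this sketch only; the real implementation lives in the HaloUpdate routines of ice_boundary.F90 and handles blocks, field locations, and MPI):

```python
import numpy as np

def fill_outer_halo(field, nghost=1, mode="zero_gradient"):
    """Fill the outer halo of a 2D field from the active-domain edge."""
    f = field.copy()
    ilo, ihi = nghost, f.shape[0] - nghost - 1   # active-domain bounds
    jlo, jhi = nghost, f.shape[1] - nghost - 1
    for g in range(1, nghost + 1):
        if mode == "zero_gradient":
            # halo = value on the domain boundary (zero gradient into the halo)
            f[ilo - g, :] = f[ilo, :]
            f[ihi + g, :] = f[ihi, :]
            f[:, jlo - g] = f[:, jlo]
            f[:, jhi + g] = f[:, jhi]
        elif mode == "linear_extrap":
            # halo continues the gradient on the edge of the active domain
            f[ilo - g, :] = f[ilo, :] - g * (f[ilo + 1, :] - f[ilo, :])
            f[ihi + g, :] = f[ihi, :] + g * (f[ihi, :] - f[ihi - 1, :])
            f[:, jlo - g] = f[:, jlo] - g * (f[:, jlo + 1] - f[:, jlo])
            f[:, jhi + g] = f[:, jhi] + g * (f[:, jhi] - f[:, jhi - 1])
    return f
```

For a field that is linear across the edge, linear_extrap reproduces it exactly in the halo; zero_gradient simply copies the boundary values outward.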

  • Added zero_gradient and linear_extrap code to HaloUpdate subroutines.
  • Refactored the ice_boundary.F90 to leverage automated code generation and sync the mpi and serial versions of the code.
  • Created a generate_haloUpdates.sh script to generate the various haloUpdate subroutines quickly and consistently. This script lives in cicecore/cicedyn/infrastructure/comm/mpi and has a README. This reduces effort and risk if the haloUpdate needs to be updated in the future.
  • Synchronized the mpi and serial versions of ice_boundary.F90. Now the only difference in the files is a #define in the serial version to turn off the MPI code.
  • Update the halochk subroutines to validate zero_gradient and linear_extrap boundary condition options.
  • Updated some initialization to account for the new boundary condition options.
  • Added explicit checks for valid boundary_type settings in ice_domain.F90. This means checks for invalid values later in the code can be removed.
  • Added set_nml.boxgauss, which is a box test with a gaussian distribution of ice in the center. Added several tests leveraging this new configuration. Added bczerogradient and bclinearextrap set_nml options and tests.
  • Update the set_nml filenames for boundary condition settings, add "bc" to start of string.
  • Update the test suite to cover the new boundary conditions and improve halochk unit test coverage overall.
  • Update the documentation.

Improved the plabeld argument type in debug_ice so the string length is more flexible. This issue was detected while testing.

Update initialization of uvmCD in ice_grid.F90 due to uninitialized values triggering an error in some compilers.

@apcraig
Contributor Author

apcraig commented Feb 17, 2026

Several issues still to address

  • Update documentation
  • Values to use for 'dirichlet' and 'neumann' boundary_type namelist
  • Exact restart issue with dirichlet and neumann. Only restarts exactly with restart_ext=true. This needs to be debugged at some point.

@apcraig
Contributor Author

apcraig commented Feb 17, 2026

This looks like a lot of work, but most of the changes are related to cleaning up the twenty HaloUpdates, leveraging a code generation script, and syncing the mpi and serial versions of the code. None of that fundamentally changed the implementation or performance, but it means the code is now completely self consistent and should be easier to maintain and improve in the future.

Contributor

@eclare108213 eclare108213 left a comment


I think the 'dirichlet' boundary conditions here are actually zero-gradient neumann conditions. Dirichlet BCs would set the boundary to a specified value or values that are a function of space but not time. Perhaps it would be better to name these something a bit more intuitive. E.g. a true dirichlet BC could be 'fixed' and a nonzero neumann BC could be 'gradient' or 'flux'. The special case could be 'zero-gradient' or 'zero-flux'. These could alternatively be specified in terms of order of the gradient extrapolation (first-order, second-order, etc). For velocities there's the added complication of having a direction, hence 'no-slip' might be an option in that case.

@eclare108213
Contributor

Nice job cleaning all of this up!

Exact restart issue with dirichlet and neumann. Only restarts exactly with restart_ext=true. This needs to be debugged at some point.

I would guess this is because the corner halo values are not well defined. For restarts they have to be assigned values at initialization in the same order as in the main code, which is difficult and maybe not even sufficient.

@apcraig
Contributor Author

apcraig commented Feb 18, 2026

I think the 'dirichlet' boundary conditions here are actually zero-gradient neumann conditions. Dirichlet BCs would set the boundary to a specified value or values that are a function of space but not time. Perhaps it would be better to name these something a bit more intuitive. E.g. a true dirichlet BC could be 'fixed' and a nonzero neumann BC could be 'gradient' or 'flux'. The special case could be 'zero-gradient' or 'zero-flux'. These could alternatively be specified in terms of order of the gradient extrapolation (first-order, second-order, etc). For velocities there's the added complication of having a direction, hence 'no-slip' might be an option in that case.

Good points all around. I have been wondering how to handle the options/names. So dirichlet sets the values and neumann the gradients at the boundaries, but neither should vary over time. That's a subtle aspect that wasn't clear to me. I agree that my implementation of dirichlet is actually zero-gradient neumann while my neumann implementation is neither.

Are the bcs implemented here useful? Are there other options we want to implement that don't require external data?

No slip should be doable, but that would apply to velocities in particular? I assume those bc's would set the halo value as the reflection of the first active layer so the value at the boundary is zero. Is there an equivalent for scalar fields, does it make sense to have negative ice thickness in the halo? Is that a valid bc from a technical sense even if it makes no sense physically? Would that be a no-flux+no-slip bc?

@apcraig
Contributor Author

apcraig commented Feb 18, 2026

Exact restart issue with dirichlet and neumann. Only restarts exactly with restart_ext=true. This needs to be debugged at some point.

I would guess this is because the corner halo values are not well defined. For restarts they have to be assigned values at initialization in the same order as in the main code, which is difficult and maybe not even sufficient.

I'm not sure that's true. The bc's implemented now are well defined in the corners. That's one thing that makes them possible to implement. If they weren't well defined in the corners, I think they wouldn't be well defined at every timestep either. I cannot come up with an obvious reason why the code restarts exactly for cyclic but not for the new bcs, but maybe I'm missing something. I'd love to have a theory before I start to debug.
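The corner claim can be checked directly: for linear extrapolation, filling the corner via an x-then-y pass or a y-then-x pass gives the same value for any field, so the order of halo assignments doesn't matter there. A quick sketch (the values and names are illustrative, not CICE code):

```python
# a11 is the active cell adjacent to the corner; a12, a21, a22 are its
# neighbors in the two directions and on the diagonal.
a11, a12 = 1.0, 2.0
a21, a22 = 4.0, 8.0

# x pass first: extrapolate the west halo of the two rows, then
# extrapolate that halo column outward in y
w1 = 2 * a11 - a12
w2 = 2 * a21 - a22
corner_xy = 2 * w1 - w2

# y pass first: extrapolate the north halo of the two columns, then
# extrapolate that halo row outward in x
n1 = 2 * a11 - a21
n2 = 2 * a12 - a22
corner_yx = 2 * n1 - n2

print(corner_xy == corner_yx)  # True: both equal 4*a11 - 2*a12 - 2*a21 + a22
```

Expanding either expression gives 4*a11 - 2*a12 - 2*a21 + a22, so the corner value is independent of the pass order.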

@eclare108213
Contributor

neither should vary over time. That's a subtle aspect that wasn't clear to me. I agree that my implementation of dirichlet is actually zero-gradient neumann while my neumann implementation is neither.

I think your neumann conditions are fine. Maybe the way to think about the subtlety is that dirichlet BCs do not depend on the state inside the domain (so perhaps they could vary in time?). For us the dirichlet case is mostly covered by closed/land boundaries, so I wouldn't worry about it too much. Neumann conditions specify the values based on the desired gradient, which necessarily does depend on the state inside the domain and hence varies in time.

Are the bcs implemented here useful? Are there other options we want to implement that don't require external data?

I think they're useful for testing.

No slip should be doable, but that would apply to velocities in particular? I assume those bc's would set the halo value as the reflection of the first active layer so the value at the boundary is zero.

Yes, since velocity has two components, you need to have an additional boundary condition in order for the problem to be properly constrained. What's generally done on rectangular meshes is to set a normal condition and a tangent one (associated with normal and tangent velocity components), although there are other options. As long as this is just for testing, again I wouldn't try to make it overly general, but if a more general implementation is needed for regional grids then maybe @TillRasmussen or @isievers would weigh in.

Is there an equivalent for scalar fields, does it make sense to have negative ice thickness in the halo? Is that a valid bc from a technical sense even if it makes no sense physically?

I doubt that having negative values of a non-negative quantity in the halo would be desirable. Best to stick with physically realistic values unless we have a really good reason not to!

I would guess this is because the corner halo values are not well defined. For restarts they have to be assigned values at initialization in the same order as in the main code, which is difficult and maybe not even sufficient.

I'm not sure that's true. The bc's implemented now are well defined in the corners. That's one thing that makes them possible to implement. If they weren't well defined in the corners, I think they wouldn't be at every timestep. I cannot come up with an obvious reason why the code restarts exactly for cyclic but not for the new bcs, but maybe I'm missing something. I'd love to have a theory before I start to debug.

What I meant by "not well defined in the corners": If you are using neumann/gradient/flux boundary conditions and the problem is not strictly symmetric along the diagonal that goes through that corner, then do you use the neighboring cell value in the x direction or in y? I think the order of halo cell assignments needs to match the order of operations in the code. If the same halo calls are used at initialization as in the code prior to restart, then maybe it works, but for example, if some halo cells are calculated in the code rather than using the halo calls (which used to be true, not sure if it still is), then they might not match. Just a suggestion, which might not be the actual culprit.

@apcraig
Contributor Author

apcraig commented Feb 18, 2026

I was actually hoping these bc's might be useful in application. If we don't have any "internal" regional bc's, then they always have to be specified via some external data, and it seems like a huge hassle to always have to set up those datasets, read them in, etc. Is there a potential case of a regional grid, with ice at the boundary, but far enough from the area of scientific interest, that a no-gradient or constant-gradient edge bc might be reasonable and useful? Think about the NCAR example of a regional grid using cyclic boundary conditions "and looking OK in the interior". That bc is very problematic. It seems a no-gradient or constant-gradient bc would be much better and potentially useful in practice.

What about the situation where we have external data but only for a few fields (i.e. ice concentration and ice thickness). What are the bc's for the other fields? My assumption was that we'd need an internal bc to work with a restoring bc if restoring is only done on a subset of fields. The restoring bc is full of challenges like not having data for some fields, having time series data at longer timescales than the model timestep, and where/how to apply the restoring bc in the ice timestep given issues like dynamics subcycling. I think the new bc's might provide a way to "fill in the gaps". My idea is that with restoring, we still always set an ew and ns boundary_type (bc) and that the bc's will work with the restoring data to try to generate something usable on the boundary. I think the restoring implementation is a huge can of worms technically and scientifically and that some useful bc's could be a key.

Are there other internally computed or specified bc's that are actually potentially useful in application? Some other ideas

  • no-slip and/or no-flux. Isn't this the same as closed?
  • slip (normal velocity=0, grad(tangential velocity)=0). Only makes sense for velocity and I don't think it has a practical application for the CICE model boundary.
  • specify zero (or user defined constant) on the boundary. This does not seem useful in practice because it's physically meaningless.
  • reflective bc. We've decided to avoid bc's that are not physically consistent.

@eclare108213
Contributor

Excellent ideas and questions. Let's talk about this in the regional WG meeting tomorrow.

up implementation. Rename boundary condition options to "zero_gradient" and "linear_extrap".

Update bc namelist options and test suites

Update documentation
@apcraig apcraig changed the title from "Add dirichlet and neumann boundary conditions" to "Add zero_gradient and linear_extrap boundary conditions" Feb 24, 2026
@apcraig
Contributor Author

apcraig commented Feb 24, 2026

I have been debugging the exact restart issue. As a reminder, the model only restarts exactly for the new boundary conditions if restart_ext is true. I believe this is going to be a requirement for these bcs unless we fix something. The issue is that several restart fields (strocnxT, strocnyT, stress[p,m][1,2,3,4], stress12[1,2,3,4], iceumask, frzmlt, frz_onset, ffrac) have values computed on the outer halo that are unique. In other words, unique values are computed on the halo during the model timestep and that data has to be stored on the restart file for exact restart.

This is not an issue with cyclic (or tripole) because values on the outer halo are NOT computed uniquely, they are computed redundantly with something on the other side of the grid and those values are used to fill the halo on restart. This is not an issue with closed because the halo is not an active region. This is only an issue with the regional / open bcs.

This raises several questions. Why are there values on the halo for these restart fields and are they computed correctly? Does the current implementation rely on a cyclic or closed bc and does it lack some implementation required for regional bcs? Does this issue contribute to the odd behavior we're seeing with the restoring bcs (concurrent work on another branch) near the boundary? Again, my main concern is whether the current CICE implementation properly supports the regional / open bcs with respect to the solution computed internally? Do we need some extra haloUpdates during the dynamics computation, to set some fields to zero on the outer boundary explicitly, a bigger halo, or something else? Do we need to update the internal calculation in specific places to take into account the open bcs properly? Up to now, the outer halo has been trivial to compute, maybe the implementation needs some updates?

Just to give an idea, here is the stressp_1 field on restart with zero_gradient bcs and restart_ext=true. The field (including the halo) has the following properties:

  • 0s on the west and south boundary/halo, this is imposed by subroutine restartfile.
  • values in the 1000s everywhere else including the east and north boundary/halo EXCEPT
    • values of order (1.e-7) in the 4 corners of the non-zeroed grid: (2,2), (nx_global+2,2), (2,ny_global+2), (nx_global+2,ny_global+2)
  • all values in the field including on the north and east boundary/halo are unique

I am testing using a simple 12x12 grid with 1 block and 1 pe, so it's very easy to debug. If anyone has any thoughts about how to proceed (short of just accepting the restart_ext=true setting), let me know. I am concerned that CICE is not setup to solve the open bc case properly and that this could hinder the restoring work as we move forward with that.

@TillRasmussen
Contributor

TillRasmussen commented Feb 24, 2026

I think that we should look at the variables. The following are hard to get on the boundary if you don't have restarts:
strocnxT, strocnyT,
stress[p,m][1,2,3,4],
stress12[1,2,3,4],
ffrac

iceumask: a function of aice and hi at four points. It can be calculated on the halo, but is it needed? It is not used in stepu.
frzmlt, frz_onset: I don't understand why we need these. Coupling maybe?

@isievers

I have been debugging the exact restart issue. As a reminder, the model only restarts exactly for the new boundary conditions if restart_ext is true. I believe this is going to be a requirement for these bcs unless we fix something. The issue is that several restart fields (strocnxT, strocnyT, stress[p,m][1,2,3,4], stress12[1,2,3,4], iceumask, frzmlt, frz_onset, ffrac) have values computed on the outer halo that are unique. In other words, unique values are computed on the halo during the model timestep and that data has to be stored on the restart file for exact restart.

Do I understand correctly that after the restart (during the time step) the values for the listed fields are now equal to the restart values? Or are they only equal directly after reading the restart files, and change during the time step?

This is not an issue with cyclic (or tripole) because values on the outer halo are NOT computed uniquely, they are computed redundantly with something on the other side of the grid and those values are used to fill the halo on restart. This is not an issue with closed because the halo is not an active region. This is only an issue with the regional / open bcs.

This raises several questions. Why are there values on the halo for these restart fields and are they computed correctly? Does the current implementation rely on a cyclic or closed bc and does it lack some implementation required for regional bcs? Does this issue contribute to the odd behavior we're seeing with the restoring bcs (concurrent work on another branch) near the boundary? Again, my main concern is whether the current CICE implementation properly supports the regional / open bcs with respect to the solution computed internally? Do we need some extra haloUpdates during the dynamics computation, to set some fields to zero on the outer boundary explicitly, a bigger halo, or something else? Do we need to update the internal calculation in specific places to take into account the open bcs properly? Up to now, the outer halo has been trivial to compute, maybe the implementation needs some updates?

Just to give an idea of the stressp_1 field on restart with zero_gradient bcs and restart_ext=true. The field (including the halo) has the following properties

  • 0s on the west and south boundary/halo, this is imposed by subroutine restartfile.

  • values in the 1000s everywhere else including the east and north boundary/halo EXCEPT

    • values of order (1.e-7) in the 4 corners of the non-zeroed grid: (2,2), (nx_global+2,2), (2,ny_global+2), (nx_global+2,ny_global+2)
  • all values in the field including on the north and east boundary/halo are unique

Might this have to do with the read-in of the field iceUmask in cicecore/cicedyn/infrastructure/ice_restart_driver.F90, lines 547-561, together with the changes you made a while ago to get rid of the "wall" in the u and v history fields? At least, all the fields you are naming depend on the iceUmask, and if it's not read in on the halo, as is the case when restart_ext=false, they should be =0, or?

I am testing using a simple 12x12 grid with 1 block and 1 pe, so it's very easy to debug. If anyone has any thoughts about how to proceed (short of just accepting the restart_ext=true setting), let me know. I am concerned that CICE is not setup to solve the open bc case properly and that this could hinder the restoring work as we move forward with that.

@apcraig apcraig marked this pull request as ready for review March 17, 2026 14:33
@apcraig
Contributor Author

apcraig commented Mar 17, 2026

This is ready to merge.

The restart_ext issue is caused by icellT which leads to required calculations of stresses on north and east ghost cells.

file ice_dyn_shared.F90:
subroutine dyn_prep2:

      !-----------------------------------------------------------------                  
      ! Identify cells where iceTmask = .true.                                            
      ! Note: The icellT mask includes north and east ghost cells                         
      !       where stresses are needed.                                                  
      !-----------------------------------------------------------------                  

      icellT = 0
      do j = jlo, jhi+1
      do i = ilo, ihi+1
         if (iceTmask(i,j)) then
            icellT = icellT + 1
            indxTi(icellT) = i
            indxTj(icellT) = j
         endif
      enddo
      enddo

Cyclic works without restart_ext because

  • the calculations on the north and east halo are bit-for-bit with the cyclic values. I have not verified this, but there is no way the code can restart exactly if it isn't.
  • on restart, the halo cells are filled because their global indices are associated with the cyclic values and the restart scatter fills in the halo values. The halo cells of the non-cyclic cases are not filled unless the extended restart feature is used.

The other final question is whether the names for the new boundary conditions, "zero_gradient" and "linear_extrap" need to be updated before we formally adopt those names with the merge.

@apcraig
Contributor Author

apcraig commented Mar 17, 2026

I have run several tests that compare the cyclic, zero_gradient, and linear_extrap features. In the case below, a 12x12 grid with a centered gaussian initialization is forced with simple northeast winds. After a day, the gaussian "iceberg" is advecting northeast at something close to the free drift velocity. The plots below are hi from day 4 when the iceberg is centered a couple gridcells northeast from the center. The results away from the west and south boundaries are very similar for all cases. What's more interesting is what's happening on the west and south boundaries. In the cyclic case, the iceberg is starting to wrap around the boundaries and ice is increasing on those boundaries in time. For the zero_gradient case, the west and south boundaries indicate small gradients of hi. That is expected as the boundary condition should fill the west and south boundaries with neighbor values as the flow moves northeast. For the linear_extrap case, the ice is getting smaller in time on the west and south boundaries which is also consistent with the interior solution.

newbcs_test.pdf

One question is whether the new bcs could lead to unphysical results like negative ice thickness. If that's possible, we probably need to add some min or max checks after haloUpdates on fields that have physical or mathematical bounds.
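A minimal sketch of what such a check could look like, assuming a clamp applied after a haloUpdate and only to the outer halo (the function name, layout, and bounds here are illustrative, not CICE code):

```python
import numpy as np

def clamp_outer_halo(field, lo=None, hi=None, nghost=1):
    """Clamp values on the outer halo of a 2D field to [lo, hi].

    Hypothetical helper sketching the min/max check discussed above;
    it would run on fields with physical bounds, e.g. lo=0 for ice
    thickness, lo=0/hi=1 for ice concentration.
    """
    f = field.copy()
    halo = np.ones(f.shape, dtype=bool)
    halo[nghost:-nghost, nghost:-nghost] = False  # leave the interior alone
    if lo is not None:
        f[halo] = np.maximum(f[halo], lo)
    if hi is not None:
        f[halo] = np.minimum(f[halo], hi)
    return f

# linear_extrap can push a halo value negative; the clamp restores a
# physical value without touching the interior solution
hice = np.array([[-0.3, 0.1, 0.2, 0.4],
                 [ 0.0, 0.5, 0.6, 0.9],
                 [ 0.1, 0.7, 0.8, 1.1],
                 [ 0.2, 0.9, 1.0, 1.3]])
print(clamp_outer_halo(hice, lo=0.0)[0, 0])  # 0.0 (was -0.3)
```

The open design question would be which haloUpdate calls get the check and which bounds apply to which fields.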

@TillRasmussen
Contributor

This is ready to merge.

The restart_ext issue is caused by icellT which leads to required calculations of stresses on north and east ghost cells.


Very nice that you have identified the problem.

This brings in a different question. Should we ever calculate anything on the halo, or should it only be filled with halo swaps? The downside of doing halo swaps is that they are expensive, especially within the evp iterations.

@apcraig
Contributor Author

apcraig commented Mar 20, 2026

I carried out some 100 day tests. 12x12 grid, gaussian initial condition, full physics/dynamics, northeast winds; gridc and gridb; cyclic, zero_gradient, and linear_extrap bcs.

The cyclic and zero_gradient cases all run to completion, both gridb and gridc.

The linear_extrap bcs fail at day 5 with both gridb and gridc with "negative area". I don't think this is surprising; it is consistent with one of my concerns from earlier today, whether the bcs could lead to unphysical results. I'm not sure there is an easy fix. It really comes down to the problem. Ultimately, I think we'll need to figure out how to handle the open bcs so we get reasonable results, but I think that's a next step.

I think this can be merged at this point. Someone should review and approve the PR.

@eclare108213
Contributor

For the failing linear_extrap case, is the negative area in the interior of the domain or on the halo?

If it is (first) on the halo, then I suggest adding a conditional on the boundary condition so that the value remains within physical limits and hopefully the interior will too.

If the halo contains all physical values and an interior cell goes negative (due to advection or whatever), then that's a different problem altogether, and one I'd rather not introduce into the code. As you note, though, this might happen with restoring boundary conditions for the same or similar reasons, and this case will be useful for debugging.

@apcraig
Contributor Author

apcraig commented Mar 24, 2026

The unphysical value is happening at i=1, j=8 in a 12x12 domain. What I think is happening is that the halo is becoming unphysical then that is influencing the first grid cell next to the halo at some point, via advection or something else. There are no checks on the halo for valid values.

It is not easy to "test" the halo values, at least not inside the halo update. The halo update doesn't know anything about which field it is operating on. We could add some checks to the outer domain after the haloupdate call. We'd have to decide which haloupdate calls to check and so forth.

What I propose we do is add some info to the documentation to say these new bcs can generate unphysical values. I think we should just consider this PR as new features in the infrastructure (that may not be totally ready to use).

Separately, I really think we're going to have to think through how we want to implement the restoring and new bcs. We still have a bunch of work to do to come up with a few "options" that would mix zero gradient bcs, linear extrap bcs, and restoring bcs, depending on the field and on which fields we're trying to restore. I could see an argument that u and v should ALWAYS be specified on the halo in open bcs with zero gradient, never restored. And that aice and hi can take on any of the bc options but will have to be constrained to be >=0. And everything else that gets a halo update cannot be restored, so it has to have either zero gradient or linear extrapolation applied. Right now, we are testing ALL haloupdates with zero gradient, or ALL with linear extrap, or ALL with restoring. In practice, I think none of those options are physically reasonable, nor what we will ultimately offer users to do science.
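As a hedged illustration of the two fills being discussed (a hypothetical 1D Python sketch, not the actual ice_boundary.F90 implementation), zero_gradient copies the boundary value into the halo, while linear_extrap continues the edge gradient into the halo and can therefore overshoot into negative thickness:

```python
# Hypothetical 1D illustration (not CICE code) of the two new halo fills.
# Active domain cells are active[0..n-1]; nghost halo cells sit outside
# the 'high' edge of the domain.

def fill_halo(active, nghost=2, bc="zero_gradient"):
    """Return the halo values outside the high edge of a 1D active domain."""
    edge = active[-1]
    if bc == "zero_gradient":
        # Halo equals the boundary value: zero gradient into the halo.
        return [edge] * nghost
    elif bc == "linear_extrap":
        # Halo continues the gradient at the edge of the active domain.
        grad = active[-1] - active[-2]
        return [edge + grad * (k + 1) for k in range(nghost)]
    raise ValueError(bc)

thickness = [1.0, 0.8, 0.5, 0.1]   # ice thinning toward the open boundary
print(fill_halo(thickness, bc="zero_gradient"))   # [0.1, 0.1]
print(fill_halo(thickness, bc="linear_extrap"))   # roughly -0.3, -0.7: negative!
```

The sketch shows why linear_extrap alone can produce a "negative area": any field decreasing toward the boundary keeps decreasing through the halo, while zero_gradient can never leave the physical range of the boundary value itself.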

@eclare108213 (Contributor) left a comment

This is awesome, thank you. I made some suggestions for the documentation and rather than having another review round, will go ahead and approve the PR. I also suggest copying the comment about potential nonphysical values using the new BCs into the PR merge comments.


#define SERIAL_REMOVE_MPI

! to the top of the serial/ice_boundary.F90 code to turn off MPI in that version
Contributor

Not for this PR, but does it now make sense to combine the MPI and serial codes into a single, multi-purpose code?

Contributor Author

That is the long term goal, getting there slowly. Even in the interim, there are large advantages to having the codes basically be the same. As I slowly synced up the serial and mpi versions, I noticed several cases where features were added to one and not the other. In some ways, that was a large motivator to fully sync up the code.

communications among processors when MPI is in use and among blocks
whenever there is more than one block per processor.
communications between blocks in CICE whether those blocks are not the
same or different MPI tasks. Neighbor data is communicated between
Contributor
This sentence does not make sense. Maybe what was intended is "... whether or not those blocks are the same or different MPI tasks." ? Actually, "or not" is not necessary, grammatically.

Contributor Author

Fixed

@@ -426,16 +426,23 @@ Boundary Conditions

Much of the infrastructure used in CICE, including the boundary
routines, is adopted from POP. The boundary routines perform boundary
Contributor

I think you can remove this first sentence. It was adopted from POP, but that was a really long time ago and a lot has changed since then.

Contributor Author

Updated.

zero flux, or other boundary conditions. Mathematically specified boundary
conditions are currently not supported in the CICE model.
an external file.

@eclare108213 (Contributor) Mar 24, 2026

Here you could add a statement like

The zero_gradient and linear_extrap boundary conditions have been implemented as an interim step toward a regional grid capability. Until restoring options are complete and the regional capability is fully tested, these boundary conditions may produce nonphysical values such as negative ice thickness.

Contributor Author

Added

@apcraig (Contributor, Author) commented Mar 24, 2026

I have updated the documentation as suggested. Will wait for the github actions to complete and then merge. Thanks!

@apcraig apcraig merged commit ce4dd0b into CICE-Consortium:main Mar 25, 2026
2 checks passed