Conversation
```python
KMT_reidx = KMT - 1
KMT_reidx[KMT_reidx == -1] = 0
```
This deals with the `IndexError`s @klindsay28 mentioned, since the index from `KMT` is one-based rather than zero-based.
Question: would one expect the `HT`/`HU` output over land to be `np.nan` or 0? Currently it's 0.
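A minimal sketch of what the reindexing above does (the `KMT` values here are hypothetical, just to show the clamping of land cells):

```python
import numpy as np

# Hypothetical KMT: one-based deepest-level indices, with 0 marking land.
KMT = np.array([[0, 1],
                [3, 2]])

# Shift to zero-based indexing.
KMT_reidx = KMT - 1
# Land cells became -1; clamp them to 0 so later indexing stays in bounds.
KMT_reidx[KMT_reidx == -1] = 0

print(KMT_reidx)
# [[0 0]
#  [2 1]]
```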
```python
@pytest.mark.parametrize('grid', pop_tools.grid_defs.keys())
def test_HT_HU_KMU_in_grid(grid):
```
Probably unnecessary, but I like to add comprehensive testing to any PR. I know this is at odds with the fact that other variables aren't checked; let me know if we should retain this.
Here is a plot of the differences [plot omitted]. This looks like what I'd expect: the large differences are organized around smaller coastal shelves, where the cell-center and cell-edge columns might have drastically different sizes.
```python
KMU = np.zeros_like(KMT)
for i in prange(KMT.shape[0]):
    for j in prange(KMT.shape[1]):
        KMU[i, j] = min(KMT[i, j], KMT[i - 1, j], KMT[i, j - 1], KMT[i - 1, j - 1])
```
You may want to use `np.min()` here; Python's builtin `min()` doesn't appear on numba's list of array operations that can be parallelized.
See: https://numba.pydata.org/numba-doc/latest/user/parallel.html#supported-operations
Thinking more about this, `np.min()` is probably not needed, since you are operating on scalars in `min(KMT[i, j], KMT[i - 1, j], KMT[i, j - 1], KMT[i - 1, j - 1])`.
So keep it as is? My thinking is that the loops are parallelized with `prange` and numba, so `min()` is just comparing four scalars, but in parallel at the (i, j) level.
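For what it's worth, the same 2x2 stencil can be written without explicit loops. A pure-NumPy sketch (the function name `generate_KMU_numpy` is made up here), using `np.roll` to reproduce the negative-index wraparound that the loop gets from Python indexing:

```python
import numpy as np

def generate_KMU_numpy(KMT):
    """KMU[i, j] = min of KMT over the 2x2 neighborhood {i-1, i} x {j-1, j},
    with wraparound at the array edges (mirroring the loop's KMT[i - 1, j] etc.
    under Python's negative indexing)."""
    return np.minimum.reduce([
        KMT,                                          # KMT[i, j]
        np.roll(KMT, 1, axis=0),                      # KMT[i - 1, j]
        np.roll(KMT, 1, axis=1),                      # KMT[i, j - 1]
        np.roll(np.roll(KMT, 1, axis=0), 1, axis=1),  # KMT[i - 1, j - 1]
    ])

KMT = np.array([[3, 2, 1],
                [4, 3, 2],
                [5, 4, 3]])
print(generate_KMU_numpy(KMT))
```

Whether this beats the numba loop would come down to benchmarking; the rolled version allocates temporaries, while the jitted loop does not.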
pop_tools/grid.py
```python
@jit(nopython=True, parallel=True)
def _generate_KMU(KMT):
    """Computes KMU from KMT."""
    KMU = np.zeros_like(KMT)
```
How about initializing `KMU` outside of the function, to avoid re-initializing it on every call? That would involve passing `KMU` as an input and updating it inside the function. The signature would then become `def _generate_KMU(KMT, KMU)`.
This will only be called once, upon calling `get_grid()`, so I figured it was fine to initialize it there. I went ahead and switched it anyway, and I'm returning the modified `KMU` since that's generally how I write this kind of thing.
But per your pass-by-reference model, do I need to return it at all, or will it be modified automatically? I'm still blown away by this; I thought modifications inside functions applied only to the local variable, not the caller's.
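On the pass-by-reference question: Python passes object references, so an in-place write to a NumPy array inside a function is visible to the caller without any return; rebinding the parameter name is not. A quick illustration (function names here are made up):

```python
import numpy as np

def fill_inplace(out):
    out[:] = 7       # in-place write through the reference: caller sees this

def rebind(out):
    out = out + 7    # rebinds the local name to a new array: caller does NOT see this

a = np.zeros(3)
fill_inplace(a)
print(a)  # [7. 7. 7.]

b = np.zeros(3)
rebind(b)
print(b)  # [0. 0. 0.]
```

So `_generate_KMU` would work without a return as long as it only writes into `KMU` (e.g. `KMU[i, j] = ...`), though returning it anyway is a common and harmless convention.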

This PR derives and adds `HU`, `HT`, and `KMU` as default output of the `get_grid` function. It follows @matt-long's equations in #14, per the POP reference manual (sections 3.2 and 3.3). Closes #14.