Refactor & Expand Baryon Model Tests + Add Benchmarks #1266
nikosarcevic wants to merge 9 commits into master from
Conversation
Pull Request Test Coverage Report for Build 20352494647
💛 - Coveralls
I assume this is the CCL prediction you will be comparing to an external benchmark. Is this right? Then why can't this prediction be produced on the fly during the test itself?
Thanks, Elisa! Yes — for vd19 this isn’t an external benchmark.
The paper doesn’t provide public code or tables, so I generated the file once using the CCL implementation and froze it purely as a regression guard, not as a physical benchmark.
I can relabel this as a "regression test" (or something appropriate) and rename/document the generator script so it’s clear that it’s not meant to be produced during the test. I’ve also reached out to Marcel to see whether an external reference table exists — if he provides one, I’ll switch the test to use that.
Let me know if you’d prefer I convert it immediately into an on-the-fly regression check or keep the frozen file until we get external data.
ETA: I just saw your next comment; I will convert this to an on-the-fly test.
See my other comment, I wonder why we cannot produce this on the fly during the test
Thanks, Elisa! I’ve clarified that the vd19 file is a frozen regression target, not an external benchmark (vd19 doesn’t provide public code or tables). I updated the test docstring accordingly and kept the make script so the provenance is explicit.
This was done in 175ce5a
import pyccl as ccl
DATA_PATH = Path(__file__).parent / "data" / "baccoemu_baryons_fk.txt"
This file was already there, right?
Indeed.
All the baccoemu code was already there, but the file naming was obscure and there was no documentation in the tests at all. I just polished it (this should be done for everything anyway). So my thinking was: while I am at it (cleaning up the baryons, checking what is there), why not polish everything baryon-related so it is clear which file does what, and also add short docstrings for each test to follow best practices.
wa=0.0,
)

_check_baccoemu_baryons(cosmo, baryons)
What happens to sigma_8 after the boost factor is applied?
The boost factor does not modify sigma8 itself.
In CCL the baryonic model multiplies the matter power spectrum by f(k, a), which changes the amplitude on nonlinear scales but leaves the underlying linear normalization (either sigma8 or A_s in the input cosmology) untouched.
So sigma8 remains whatever was set in the cosmology -- only the shape and small-scale amplitude of P(k,a) change after applying the baryonic correction.
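The point can be illustrated with a toy numpy sketch (toy spectra and a hypothetical boost shape, not CCL's actual models): sigma8 is an integral over the linear P(k), and the boost only ever multiplies the nonlinear P(k).

```python
import numpy as np

def sigma_r(k, pk, r=8.0):
    """sigma(R) via the usual top-hat-window integral over a power
    spectrum (toy normalization, manual trapezoid rule)."""
    x = k * r
    w = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3       # top-hat window
    f = k**2 * pk * w**2 / (2.0 * np.pi**2)
    return np.sqrt(0.5 * np.sum((f[1:] + f[:-1]) * np.diff(k)))

k = np.logspace(-4, 2, 4096)                # wavenumbers (toy units)
p_lin = 1e4 * k / (1.0 + (k / 0.02) ** 3)   # toy linear spectrum

# Hypothetical baryonic boost f(k,a): ~1 on large scales, suppresses
# power on small scales.  It multiplies the NONLINEAR spectrum only.
f_ka = 1.0 - 0.2 * np.exp(-((0.5 / k) ** 2))
p_nl_bar = p_lin * f_ka                     # boosted spectrum

sigma_before = sigma_r(k, p_lin)  # normalization from the linear P(k)
sigma_after = sigma_r(k, p_lin)   # the boost never touched p_lin ...
assert sigma_before == sigma_after          # ... so sigma8 is unchanged
assert not np.allclose(p_nl_bar, p_lin)     # small scales did change
```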
Omega_c=0.27,
Omega_b=0.05,
h=0.67,
A_s=2.2194e-9,
What happens to the sigma_8 which corresponds to A_s after the boost factor is applied?
The boost factor doesn’t change sigma8 at all.
In CCL, sigma8 is fixed when the cosmology is initialized (even when using A_s), and the baryonic model only rescales P(k), not the normalization.
I’m planning to add a small CCLX notebook to document this behaviour clearly, since it’s not widely known.
bar = _NoneBaryons()
pk_out = bar.include_baryonic_effects(cosmo, pk)
assert pk_out is None
Do we want pk_out to be None or just the pk without baryons?
Here I’m only testing the base-class API, not a physical “no baryons” option.
We already have _DummyBaryons, which checks the “returns the pk unchanged” behaviour. _NoneBaryons is there to show that the base method can also legitimately return None, as allowed by the docstring (pk2d or None) and that this is passed through correctly.
In real use, “no baryons” is handled by setting baryonic_effects=None on the Cosmology, and all concrete baryon models return a pk2d. This file is just unit-testing that the base class behaves as specified; model-specific behaviour is tested in the other baryon test files.
camb = pytest.importorskip("camb")

TOL_MEAD20 = 3e-5
Why is this so high? Shouldn't we be getting much better agreement given that we should be calling camb in exactly the same way?
This tolerance is inherited from the existing test_nonlin_camb_power test, which already used rtol=3e-5 for CAMB+Mead2020 vs CCL.
In local runs the max relative difference between CAMB’s nonlinear P(k) and CCL’s matter_power_spectrum="camb" with halofit_version="mead2020_feedback" is at the ~1e-5 level once we restrict to a safe k-range, so 3× that is just a small safety margin for interpolation / platform differences.
I can try tightening it to 1e-5 and see if CI is stable; if that passes, I’m happy to lower the tolerance.
I have tested -- it passes for 1e-5
I have changed the tol level to be that in 1b1a0c9
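The comparison itself is just a relative-difference check between two arrays; a minimal sketch of the pattern with toy numbers (the real test compares CAMB's and CCL's nonlinear P(k)):

```python
import numpy as np

pk_camb = np.array([1.0e4, 2.0e3, 3.0e2])   # toy "reference" P(k)
pk_ccl = pk_camb * (1.0 + 5.0e-6)           # agrees at the ~5e-6 level

# max relative difference: the quantity the tolerance is set against
max_rel = np.max(np.abs(pk_ccl / pk_camb - 1.0))
assert max_rel < 1e-5

# passes at the tightened rtol=1e-5 (and a fortiori at 3e-5)
np.testing.assert_allclose(pk_ccl, pk_camb, rtol=1e-5, atol=0.0)
```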
pkb = bar.include_baryonic_effects(COSMO, COSMO.get_nonlin_power())
pk_wbar = pkb(k_arr, a)

np.testing.assert_allclose(pk_wbar, pk_nobar * fka, rtol=1e-5, atol=0.0)
I also recommend lowering the tolerance here. This is just 2 ways of getting the same thing from CCL, it should be better than 1e-5.
I have tested: it passes for 1e-6
I have changed it to that tolerance in 4ff0da8
def test_baryons_in_cosmology_error():
    """Test that passing invalid baryonic_effects raises ValueError."""
    with pytest.raises(ValueError):
        ccl.CosmologyVanillaLCDM(baryonic_effects=3.1416)
I have been working for 18 hours straight so I had to add this tiny easter egg.
Glad you caught it XD ahahahah
Thanks for adding this
my pleasure! you know how much I love ccl
57ba641 to a062cef
Closes #1265
This PR cleans up and expands the baryon test suite:
ETA: added Van Daalen benchmarks (benchmark files sent by Marcel to Niko directly)
All baryon tests and benchmarks now pass, and linting is healthy.
CI currently fails only for the macOS build, and this is inherited from main; see issue #1259, where I explained it.