
Conversation

@fonnesbeck (Member) commented Nov 22, 2025

Description

PSD method and test added. A simple HSGP fits successfully with the resulting RatQuad kernel.

Related Issue

  • Closes #
  • Related to #

Checklist

Type of change

  • New feature / enhancement
  • Bug fix
  • Documentation
  • Maintenance
  • Other (please specify):

@fonnesbeck fonnesbeck requested a review from bwengals November 22, 2025 03:07
codecov bot commented Nov 22, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 91.49%. Comparing base (f69e159) to head (2a3e420).

Additional details and impacted files


```
@@           Coverage Diff           @@
##             main    #7973   +/-   ##
=======================================
  Coverage   91.49%   91.49%
=======================================
  Files         116      116
  Lines       18962    18973   +11
=======================================
+ Hits        17349    17360   +11
  Misses       1613     1613
```
| Files with missing lines | Coverage Δ |
|---|---|
| pymc/gp/cov.py | 98.69% <100.00%> (+0.03%) ⬆️ |

@fonnesbeck (Member, Author) commented:

BTW, one limitation here is that pytensor.kv does not appear to support gradient calculation, so NUTS can't be fully supported. There is an alternative parameterization that may not need Bessel functions, unless that support is coming.

@jessegrabowski (Member) commented:

> BTW, one limitation here is that pytensor.kv does not appear to support gradient calculation so NUTS can't be fully supported. There is an alternative parameterization that may not need Bessel functions, unless said support is coming.

Does jax or tfp have the gradients we could use as the basis for an implementation?

@fonnesbeck (Member, Author) commented Nov 24, 2025

Apparently not.

(screenshot omitted)

Looks like a numerical approximation (via finite differences) is as close as you can get.
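For illustration, here is a minimal sketch of that finite-difference fallback for the order-derivative of the modified Bessel function K, using `scipy.special.kv` as a stand-in evaluator; the function name `kv_grad_order` and the step size `h` are hand-picked assumptions, not part of any library API.

```python
import numpy as np
from scipy.special import kv


def kv_grad_order(nu, z, h=1e-6):
    """Central finite-difference approximation to d K_nu(z) / d nu,
    the order-derivative that has no simple closed form."""
    return (kv(nu + h, z) - kv(nu - h, z)) / (2 * h)


nu, z = 1.5, 2.0
approx = kv_grad_order(nu, z)

# Self-consistency check: shrinking the step should barely move the estimate
assert np.isclose(approx, kv_grad_order(nu, z, h=5e-7), rtol=1e-4)
```

Truncation error here is O(h²), so the estimate is usable, but it costs two extra Bessel evaluations per gradient and inherits the usual step-size trade-off of finite differences.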


@ricardoV94 (Member) commented Nov 24, 2025

In the past I've implemented a couple of gradients using Stan math / Wolfram as reference. I've never had a case where TFP was ahead of the curve for these special functions.

@ricardoV94 (Member) commented Nov 24, 2025

The gradient w.r.t. the second input is the trivial one for these functions; we have that one already.
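That "trivial" argument-gradient is the standard recurrence d/dz K_ν(z) = -(K_{ν-1}(z) + K_{ν+1}(z)) / 2. A quick check against `scipy.special.kvp` (SciPy's derivative of K_ν with respect to the argument z), just as a sketch:

```python
import numpy as np
from scipy.special import kv, kvp

nu, z = 1.5, 2.0

# Closed-form recurrence for the argument-derivative of K_nu
by_recurrence = -0.5 * (kv(nu - 1, z) + kv(nu + 1, z))

# kvp(nu, z) differentiates K_nu(z) w.r.t. z
np.testing.assert_allclose(by_recurrence, kvp(nu, z), rtol=1e-12)
```

So it is only the order-derivative (needed when alpha is a free parameter) that blocks full NUTS support.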

@fonnesbeck (Member, Author) commented:

There has been some useful discussion on the JAX side: jax-ml/jax#12402

@fonnesbeck (Member, Author) commented:

FWIW, it does work without a fully differentiable function; it just uses a slice sampler for `alpha`.

Attachment: ratquad_hsgp.html

```python
npt.assert_allclose(true_1d_psd, test_1d_psd, atol=1e-5)

def test_psd_eigenvalues(self):
    # Test PSD implementation using Szegő's Theorem
```
A Contributor commented:

Will this work for a 2 or higher dimensional X?


```python
)
npt.assert_allclose(true_1d_psd, test_1d_psd, atol=1e-5)

def test_psd_eigenvalues(self):
```
A Member commented:

I was trying to cook up something more accurate, and gippity came up with Rayleigh quotients. It seems to be more accurate than using `linalg.eigh`. I also found that a smaller length scale increased accuracy, I guess because we can lay down gridpoints more densely in a small region. Cranking up N (or decreasing dx) did not seem to help.

```python
import numpy as np
import pymc as pm

alpha = 1.5
ls = 0.1
L = 50
dx = 0.05
N = int(L / dx)
X = (np.arange(N) * dx)[:, None]

cov = pm.gp.cov.RatQuad(1, alpha=alpha, ls=ls)
K = cov(X).eval()

# DFT mode angles on the integer grid, converted to frequencies on X
theta = 2 * np.pi * np.fft.fftfreq(N, d=1)
omegas = theta / dx
psd = cov.power_spectral_density(omegas[:, None]).eval()
psd_scaled = psd.ravel() / dx

# Rayleigh quotients of the complex exponential modes against K
j = np.arange(N)
modes = np.exp(1j * np.outer(theta, j))
num = modes.conj() @ (K @ modes.T)
den = np.sum(np.abs(modes) ** 2, axis=1)
rayleigh_quotient = np.real(np.diag(num) / den)

np.testing.assert_allclose(psd_scaled, rayleigh_quotient, atol=0.05)
```

A Member commented:

Actually, plotting the approximation, it seems like all of the error comes from the boundaries. So it might make more sense to do a test like:

```python
trim = 100
np.testing.assert_allclose(psd_scaled[trim:-trim], rayleigh_quotient[trim:-trim], atol=1e-2)
```

Plot:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
order = np.argsort(theta)

ax.plot(theta[order], psd_scaled[order], label="Closed-Form")
ax.plot(theta[order], rayleigh_quotient[order], "--", label="Rayleigh quotient")
ax.set(xlabel="θ", ylabel="spectral density", yscale="log")
plt.legend()
```
(plot: closed-form PSD vs. Rayleigh-quotient approximation, log scale)

@fonnesbeck (Member, Author) replied:

We can do this. Did you come across a reference for this to add to the docstring?

A Member replied:

I think we can just cite the wiki for the formula: https://en.wikipedia.org/wiki/Rayleigh_quotient (or use one of the citations therein)
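For the docstring, the cited formula is short enough to state directly: for a Hermitian matrix K and nonzero vector x (each DFT mode playing the role of x in the test above),

```latex
R(K, x) = \frac{x^{*} K x}{x^{*} x}
```

where x* denotes the conjugate transpose; the test compares these quotients against the scaled closed-form PSD values.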
