FINN provides a framework for computing analytical integrals of learned functions. The framework provides a learnable function $f$ whose definite integral over a user-specified domain is available in closed form.
In addition to this standard functionality, we provide two useful constraints that can optionally be applied:
- Integral constraint. This allows us to apply equality or inequality constraints to the integral of $f$. For example, we can parametrise the class of functions such that $\int f(x)\,dx \leq \epsilon$ (a generic sketch of one way to realise such a constraint follows this list).
- Positivity constraint. This simply ensures that $f(x) \geq 0$. While relatively simple, this constraint is required in most (but not all) applications of FINN. For example, we must use the positivity constraint when using FINN to represent probability distributions.
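To make the roles of these constraints concrete, the following is a minimal, generic sketch of how positivity and an equality-style integral constraint could be enforced by construction on top of an arbitrary network. It normalises numerically on a grid and is purely illustrative; it is not FINN's internal mechanism, which computes the integral analytically. The base network `g`, the grid size, and the domain are assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical base network; any scalar-output network would do.
g = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

def constrained_f(x, area=1.0, lower=-1.0, upper=1.0, grid=1024):
    """Positive function whose integral over [lower, upper] is approximately `area`.

    Positivity comes from softplus; the integral constraint comes from numerically
    normalising on a fixed grid (FINN instead handles the integral analytically).
    """
    positive = F.softplus(g(x))                    # f(x) >= 0 everywhere
    grid_x = torch.linspace(lower, upper, grid).unsqueeze(1)
    dx = (upper - lower) / (grid - 1)
    total = F.softplus(g(grid_x)).sum() * dx       # Riemann estimate of the current integral
    return area * positive / total                 # rescale so the integral equals `area`
```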
While FINN can be used in many applications, a few of the most prominent use cases are:
- Integrating functions without closed-form solutions
- Applying integral-based constraints to neural networks
- Representing arbitrary continuous probability distributions
Let us consider an example. Imagine we wish to learn a function over the domain $[-1, 1]$ whose integral is constrained to equal 1, i.e. a normalised probability density (the `dim` argument generalises this to vector-valued inputs, but we keep the example one-dimensional). First, install the package in editable mode:

```bash
cd finn
pip install -e .
```
We can then construct a `Finn` model with a target area of 1 over $[-1, 1]$ and check the integral numerically with a Riemann sum:

```python
import torch
from finn import Finn  # assumes the package exposes the Finn class at the top level

steps = 1000
x_lim_lower = -1.
x_lim_upper = 1.
area = 1.                        # target value of the integral over the domain
condition = lambda area: True    # condition on the area; here it trivially accepts any value

f = Finn(
    dim=1,                                       # one-dimensional input
    condition=condition,
    area=area,
    x_lim_lower=x_lim_lower * torch.ones(1),     # lower integration limit
    x_lim_upper=x_lim_upper * torch.ones(1),     # upper integration limit
)

# Evaluate f on a dense grid and approximate the integral numerically.
x = torch.linspace(x_lim_lower, x_lim_upper, steps).unsqueeze(1)
y = f(x)
dx = x[1, 0] - x[0, 0]
integral = torch.sum(y) * dx
print("integral:", integral)  # numerically validate that the integral is 1.0
```

This prints an estimate close to the target area:

```
integral: tensor(1.0008)
```
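In practice, $f$ will typically be fitted to data rather than used at initialisation. Continuing from the example above, the following is a hypothetical training sketch; it assumes that `Finn` behaves as a standard `torch.nn.Module` (so `f.parameters()` is available) and uses made-up training data, neither of which is taken from the library's documentation.

```python
import torch

# Hypothetical training data: noisy evaluations of a bump function on [-1, 1].
x_train = torch.rand(256, 1) * 2.0 - 1.0
y_train = torch.exp(-4.0 * x_train ** 2) + 0.05 * torch.randn_like(x_train)

# Assumes Finn is a torch.nn.Module whose parameters can be optimised directly.
optimizer = torch.optim.Adam(f.parameters(), lr=1e-3)

for step in range(2000):
    optimizer.zero_grad()
    loss = torch.mean((f(x_train) - y_train) ** 2)  # plain MSE fit to the data
    loss.backward()
    optimizer.step()

# If FINN enforces the area constraint by parametrisation (as the constraint
# description above suggests), the fitted function still integrates to `area`.
```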