function LogDensityProblems.dimension(model::NormalLogNormal)
    return length(model.μ_y) + 1
end

function LogDensityProblems.capabilities(::Type{<:NormalLogNormal})
    return LogDensityProblems.LogDensityOrder{1}()
end
```

Notice that the model supports first-order differentiation [capability](https://www.tamaspapp.eu/LogDensityProblems.jl/stable/#LogDensityProblems.capabilities).
The required order of differentiation capability will vary depending on the VI algorithm.
In this example, we will use `KLMinRepGradDescent`, which requires first-order capability.
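If a target only implements `LogDensityProblems.logdensity` (zeroth-order capability), one way to obtain first-order capability is to wrap it with [LogDensityProblemsAD](https://github.com/tpapp/LogDensityProblemsAD.jl). The following is only a hedged sketch of that pattern, applied to `model` for illustration; it is not part of the original example.

```julia
# Sketch: LogDensityProblemsAD.ADgradient wraps a log-density target and
# supplies `logdensity_and_gradient` through the chosen AD backend, so the
# wrapper reports first-order capability.
using LogDensityProblems, LogDensityProblemsAD, ADTypes, ForwardDiff

model_ad = ADgradient(AutoForwardDiff(), model)
LogDensityProblems.capabilities(typeof(model_ad))  # LogDensityOrder{1}()
```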
Let's now load `AdvancedVI`.
In addition to gradients of the target log-density, `KLMinRepGradDescent` internally uses automatic differentiation.
Therefore, we have to select an AD framework to be used within `KLMinRepGradDescent`.
(This does not need to be the same as the AD backend used for the first-order capability of `model`.)
The selected AD framework needs to be communicated to `AdvancedVI` using the [ADTypes](https://github.com/SciML/ADTypes.jl) interface.
Here, we will use `ReverseDiff`, which can be selected by passing `ADTypes.AutoReverseDiff()`.

```@example elboexample
using ADTypes, ReverseDiff
using AdvancedVI

alg = KLMinRepGradDescent(AutoReverseDiff());
nothing
```
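Any `ADTypes.AbstractADType` understood by `AdvancedVI` can be passed in place of `AutoReverseDiff()`. As a hedged sketch (assuming the corresponding AD packages are installed and supported by the algorithm), other backends are selected the same way:

```julia
# Sketch of alternative AD backend choices; each requires its AD package to be loaded.
using ADTypes, ForwardDiff, Zygote

alg_forwarddiff = KLMinRepGradDescent(AutoForwardDiff())  # forward-mode AD
alg_zygote      = KLMinRepGradDescent(AutoZygote())       # reverse-mode AD via Zygote
```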

Now, `KLMinRepGradDescent` requires the variational approximation and the target log-density to have the same support.
Since `y` follows a log-normal prior, its support is bounded to be the positive half-space ``\mathbb{R}_+``.
Thus, we will use [Bijectors](https://github.com/TuringLang/Bijectors.jl) to match the support of our target posterior and the variational approximation.

```@example elboexample
binv = inverse(b)
nothing
```
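As a small illustrative check, not part of the original example, the bijector `b` and its inverse `binv` should compose to the identity: `binv` maps an unconstrained vector into the support of the posterior, and `b` maps it back.

```julia
# Illustrative round trip through the bijector pair (assumes `b` and
# `binv = inverse(b)` from the block above).
using LogDensityProblems

d = LogDensityProblems.dimension(model)
η = randn(d)             # a point in the unconstrained space
θ = binv(η)              # mapped into the support of the posterior
maximum(abs, b(θ) - η)   # ≈ 0 up to floating-point error
```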
For the variational family, we will use the classic mean-field Gaussian family.
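A rough sketch of what this could look like follows; it assumes `AdvancedVI` provides a `MeanFieldGaussian(location, scale::Diagonal)` constructor and reuses `binv` from above, so the concrete construction in the full example may differ.

```julia
# Sketch (assumptions noted above): a mean-field Gaussian on the unconstrained
# space, pushed through the inverse bijector so that samples land in the
# support of the posterior.
using LinearAlgebra, Bijectors

d = LogDensityProblems.dimension(model)
q = MeanFieldGaussian(zeros(d), Diagonal(ones(d)))
q_transformed = Bijectors.TransformedDistribution(q, binv)
```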