
Added navbar and removed insert_navbar.sh
github-actions[bot] committed Dec 6, 2024
1 parent da8fdbf commit 87e61ef
Showing 15 changed files with 3,228 additions and 7 deletions.
461 changes: 460 additions & 1 deletion dev/elbo/overview/index.html

Large diffs are not rendered by default.

461 changes: 460 additions & 1 deletion dev/elbo/repgradelbo/index.html


461 changes: 460 additions & 1 deletion dev/examples/index.html


461 changes: 460 additions & 1 deletion dev/families/index.html


461 changes: 460 additions & 1 deletion dev/general/index.html


461 changes: 460 additions & 1 deletion dev/index.html


461 changes: 460 additions & 1 deletion dev/optimization/index.html


1 change: 1 addition & 0 deletions index.html
Original file line number Diff line number Diff line change
@@ -1,2 +1,3 @@
<!--This file is automatically generated by Documenter.jl-->
<meta http-equiv="refresh" content="0; url=./stable/"/>

1 change: 1 addition & 0 deletions previews/PR99/elbo/overview/index.html
@@ -457,5 +457,6 @@
});
</script>
<!-- NAVBAR END -->

<div id="documenter"><nav class="docs-sidebar"><div class="docs-package-name"><span class="docs-autofit"><a href="../../">AdvancedVI.jl</a></span></div><button class="docs-search-query input is-rounded is-small is-clickable my-2 mx-auto py-1 px-2" id="documenter-search-query">Search docs (Ctrl + /)</button><ul class="docs-menu"><li><a class="tocitem" href="../../">AdvancedVI</a></li><li><a class="tocitem" href="../../general/">General Usage</a></li><li><a class="tocitem" href="../../examples/">Examples</a></li><li><span class="tocitem">ELBO Maximization</span><ul><li class="is-active"><a class="tocitem" href>Overview</a><ul class="internal"><li><a class="tocitem" href="#Introduction"><span>Introduction</span></a></li><li><a class="tocitem" href="#Algorithms"><span>Algorithms</span></a></li></ul></li><li><a class="tocitem" href="../repgradelbo/">Reparameterization Gradient Estimator</a></li></ul></li><li><a class="tocitem" href="../../families/">Variational Families</a></li><li><a class="tocitem" href="../../optimization/">Optimization</a></li></ul><div class="docs-version-selector field has-addons"><div class="control"><span class="docs-label button is-static is-size-7">Version</span></div><div class="docs-selector control is-expanded"><div class="select is-fullwidth is-size-7"><select id="documenter-version-selector"></select></div></div></div></nav><div class="docs-main"><header class="docs-navbar"><a class="docs-sidebar-button docs-navbar-link fa-solid fa-bars is-hidden-desktop" id="documenter-sidebar-button" href="#"></a><nav class="breadcrumb"><ul class="is-hidden-mobile"><li><a class="is-disabled">ELBO Maximization</a></li><li class="is-active"><a href>Overview</a></li></ul><ul class="is-hidden-tablet"><li class="is-active"><a href>Overview</a></li></ul></nav><div class="docs-right"><a class="docs-navbar-link" href="https://github.com/TuringLang/AdvancedVI.jl/blob/master/docs/src/elbo/overview.md#" title="Edit source on GitHub"><span class="docs-icon 
fa-solid"></span></a><a class="docs-settings-button docs-navbar-link fa-solid fa-gear" id="documenter-settings-button" href="#" title="Settings"></a><a class="docs-article-toggle-button fa-solid fa-chevron-up" id="documenter-article-toggle-button" href="javascript:;" title="Collapse all docstrings"></a></div></header><article class="content" id="documenter-page"><h1 id="elbomax"><a class="docs-heading-anchor" href="#elbomax">Evidence Lower Bound Maximization</a><a id="elbomax-1"></a><a class="docs-heading-anchor-permalink" href="#elbomax" title="Permalink"></a></h1><h2 id="Introduction"><a class="docs-heading-anchor" href="#Introduction">Introduction</a><a id="Introduction-1"></a><a class="docs-heading-anchor-permalink" href="#Introduction" title="Permalink"></a></h2><p>Evidence lower bound (ELBO) maximization<sup class="footnote-reference"><a id="citeref-JGJS1999" href="#footnote-JGJS1999">[JGJS1999]</a></sup> is a general family of algorithms that minimize the exclusive (or reverse) Kullback-Leibler (KL) divergence between the target distribution <span>$\pi$</span> and a variational approximation <span>$q_{\lambda}$</span>. Formally, they aim to solve the following problem:</p><p class="math-container">\[ \mathrm{minimize}_{q \in \mathcal{Q}}\quad \mathrm{KL}\left(q, \pi\right),\]</p><p>where <span>$\mathcal{Q}$</span> is some family of distributions, often called the variational family. Since the target distribution <span>$\pi$</span> is intractable in general, the KL divergence is also intractable. Instead, the ELBO maximization strategy maximizes a surrogate objective, the <em>ELBO</em>:</p><p class="math-container">\[ \mathrm{ELBO}\left(q\right) \triangleq \mathbb{E}_{\theta \sim q} \log \pi\left(\theta\right) + \mathbb{H}\left(q\right),\]</p><p>which lower-bounds the evidence. Since the ELBO differs from the negative KL divergence only by a constant independent of <span>$q$</span>, maximizing the ELBO is equivalent to minimizing the KL. The ELBO and its gradient can be readily estimated through various strategies.
Overall, ELBO maximization algorithms aim to solve the problem:</p><p class="math-container">\[ \mathrm{maximize}_{q \in \mathcal{Q}}\quad \mathrm{ELBO}\left(q\right).\]</p><p>Multiple ways to solve this problem exist, each leading to a different variational inference algorithm.</p><h2 id="Algorithms"><a class="docs-heading-anchor" href="#Algorithms">Algorithms</a><a id="Algorithms-1"></a><a class="docs-heading-anchor-permalink" href="#Algorithms" title="Permalink"></a></h2><p>Currently, <code>AdvancedVI</code> only provides the approach known as black-box variational inference (also known as Monte Carlo VI or stochastic gradient VI), which was introduced independently by two groups<sup class="footnote-reference"><a id="citeref-RGB2014" href="#footnote-RGB2014">[RGB2014]</a></sup><sup class="footnote-reference"><a id="citeref-TL2014" href="#footnote-TL2014">[TL2014]</a></sup> in 2014. In particular, <code>AdvancedVI</code> focuses on the reparameterization gradient estimator<sup class="footnote-reference"><a id="citeref-TL2014" href="#footnote-TL2014">[TL2014]</a></sup><sup class="footnote-reference"><a id="citeref-RMW2014" href="#footnote-RMW2014">[RMW2014]</a></sup><sup class="footnote-reference"><a id="citeref-KW2014" href="#footnote-KW2014">[KW2014]</a></sup>, which is generally superior to alternative strategies<sup class="footnote-reference"><a id="citeref-XQKS2019" href="#footnote-XQKS2019">[XQKS2019]</a></sup> and is discussed in the following section:</p><ul><li><a href="../repgradelbo/#repgradelbo">RepGradELBO</a></li></ul><section class="footnotes is-size-7"><ul><li class="footnote" id="footnote-JGJS1999"><a class="tag is-link" href="#citeref-JGJS1999">JGJS1999</a>Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., &amp; Saul, L. K. (1999). An introduction to variational methods for graphical models.
<em>Machine Learning</em>, 37, 183-233.</li><li class="footnote" id="footnote-TL2014"><a class="tag is-link" href="#citeref-TL2014">TL2014</a>Titsias, M., &amp; Lázaro-Gredilla, M. (2014). Doubly stochastic variational Bayes for non-conjugate inference. In <em>International Conference on Machine Learning</em>.</li><li class="footnote" id="footnote-RMW2014"><a class="tag is-link" href="#citeref-RMW2014">RMW2014</a>Rezende, D. J., Mohamed, S., &amp; Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In <em>International Conference on Machine Learning</em>.</li><li class="footnote" id="footnote-KW2014"><a class="tag is-link" href="#citeref-KW2014">KW2014</a>Kingma, D. P., &amp; Welling, M. (2014). Auto-encoding variational Bayes. In <em>International Conference on Learning Representations</em>.</li><li class="footnote" id="footnote-XQKS2019"><a class="tag is-link" href="#citeref-XQKS2019">XQKS2019</a>Xu, M., Quiroz, M., Kohn, R., &amp; Sisson, S. A. (2019). Variance reduction properties of the reparameterization trick. In <em>International Conference on Artificial Intelligence and Statistics</em>.</li><li class="footnote" id="footnote-RGB2014"><a class="tag is-link" href="#citeref-RGB2014">RGB2014</a>Ranganath, R., Gerrish, S., &amp; Blei, D. (2014). Black box variational inference.
In <em>Artificial Intelligence and Statistics</em>.</li></ul></section></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../../examples/">« Examples</a><a class="docs-footer-nextpage" href="../repgradelbo/">Reparameterization Gradient Estimator »</a><div class="flexbox-break"></div><p class="footer-message">Powered by <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> and the <a href="https://julialang.org/">Julia Programming Language</a>.</p></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="auto">Automatic (OS)</option><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option><option value="catppuccin-latte">catppuccin-latte</option><option value="catppuccin-frappe">catppuccin-frappe</option><option value="catppuccin-macchiato">catppuccin-macchiato</option><option value="catppuccin-mocha">catppuccin-mocha</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> version 1.8.0 on <span class="colophon-date" title="Thursday 5 December 2024 19:21">Thursday 5 December 2024</span>. Using Julia version 1.10.7.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
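The ELBO estimator defined in the page above can be sketched numerically. The following is a minimal Python illustration (AdvancedVI itself is a Julia package; every name here is hypothetical, not part of its API): it draws Monte Carlo samples from a mean-field Gaussian <code>q</code> and combines the average log target density with the closed-form Gaussian entropy, matching the formula ELBO(q) = E_q[log π(θ)] + H(q).

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):
    # Normalized log density of a standard normal target
    # (an illustrative stand-in for log pi; any log density works).
    d = theta.shape[-1]
    return -0.5 * np.sum(theta**2, axis=-1) - 0.5 * d * np.log(2 * np.pi)

def elbo_estimate(mu, sigma, n_samples=1000):
    """Monte Carlo ELBO for a mean-field Gaussian q = N(mu, diag(sigma^2)):
    ELBO(q) ~= (1/M) sum_m log pi(theta_m) + H(q), with theta_m ~ q."""
    d = mu.shape[0]
    theta = mu + sigma * rng.standard_normal((n_samples, d))
    energy = np.mean(log_target(theta))
    # Closed-form entropy of a diagonal Gaussian.
    entropy = 0.5 * d * np.log(2 * np.pi * np.e) + np.sum(np.log(sigma))
    return energy + entropy

# When q equals the target, KL(q, pi) = 0, so the ELBO is ~0 up to Monte Carlo error.
print(elbo_estimate(np.zeros(2), np.ones(2)))
```

Because the target here is normalized, the ELBO equals the negative KL divergence exactly, so a worse fit (e.g. a shifted mean) yields a strictly lower estimate.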

1 change: 1 addition & 0 deletions previews/PR99/elbo/repgradelbo/index.html
@@ -457,6 +457,7 @@
});
</script>
<!-- NAVBAR END -->

<div id="documenter"><nav class="docs-sidebar"><div class="docs-package-name"><span class="docs-autofit"><a href="../../">AdvancedVI.jl</a></span></div><button class="docs-search-query input is-rounded is-small is-clickable my-2 mx-auto py-1 px-2" id="documenter-search-query">Search docs (Ctrl + /)</button><ul class="docs-menu"><li><a class="tocitem" href="../../">AdvancedVI</a></li><li><a class="tocitem" href="../../general/">General Usage</a></li><li><a class="tocitem" href="../../examples/">Examples</a></li><li><span class="tocitem">ELBO Maximization</span><ul><li><a class="tocitem" href="../overview/">Overview</a></li><li class="is-active"><a class="tocitem" href>Reparameterization Gradient Estimator</a><ul class="internal"><li><a class="tocitem" href="#Overview"><span>Overview</span></a></li><li><a class="tocitem" href="#The-RepGradELBO-Objective"><span>The <code>RepGradELBO</code> Objective</span></a></li><li><a class="tocitem" href="#bijectors"><span>Handling Constraints with <code>Bijectors</code></span></a></li><li><a class="tocitem" href="#entropygrad"><span>Entropy Estimators</span></a></li><li><a class="tocitem" href="#Advanced-Usage"><span>Advanced Usage</span></a></li></ul></li></ul></li><li><a class="tocitem" href="../../families/">Variational Families</a></li><li><a class="tocitem" href="../../optimization/">Optimization</a></li></ul><div class="docs-version-selector field has-addons"><div class="control"><span class="docs-label button is-static is-size-7">Version</span></div><div class="docs-selector control is-expanded"><div class="select is-fullwidth is-size-7"><select id="documenter-version-selector"></select></div></div></div></nav><div class="docs-main"><header class="docs-navbar"><a class="docs-sidebar-button docs-navbar-link fa-solid fa-bars is-hidden-desktop" id="documenter-sidebar-button" href="#"></a><nav class="breadcrumb"><ul class="is-hidden-mobile"><li><a class="is-disabled">ELBO Maximization</a></li><li class="is-active"><a 
href>Reparameterization Gradient Estimator</a></li></ul><ul class="is-hidden-tablet"><li class="is-active"><a href>Reparameterization Gradient Estimator</a></li></ul></nav><div class="docs-right"><a class="docs-navbar-link" href="https://github.com/TuringLang/AdvancedVI.jl/blob/master/docs/src/elbo/repgradelbo.md#" title="Edit source on GitHub"><span class="docs-icon fa-solid"></span></a><a class="docs-settings-button docs-navbar-link fa-solid fa-gear" id="documenter-settings-button" href="#" title="Settings"></a><a class="docs-article-toggle-button fa-solid fa-chevron-up" id="documenter-article-toggle-button" href="javascript:;" title="Collapse all docstrings"></a></div></header><article class="content" id="documenter-page"><h1 id="repgradelbo"><a class="docs-heading-anchor" href="#repgradelbo">Reparameterization Gradient Estimator</a><a id="repgradelbo-1"></a><a class="docs-heading-anchor-permalink" href="#repgradelbo" title="Permalink"></a></h1><h2 id="Overview"><a class="docs-heading-anchor" href="#Overview">Overview</a><a id="Overview-1"></a><a class="docs-heading-anchor-permalink" href="#Overview" title="Permalink"></a></h2><p>The reparameterization gradient<sup class="footnote-reference"><a id="citeref-TL2014" href="#footnote-TL2014">[TL2014]</a></sup><sup class="footnote-reference"><a id="citeref-RMW2014" href="#footnote-RMW2014">[RMW2014]</a></sup><sup class="footnote-reference"><a id="citeref-KW2014" href="#footnote-KW2014">[KW2014]</a></sup> is an unbiased gradient estimator of the ELBO. Consider some variational family</p><p class="math-container">\[\mathcal{Q} = \{q_{\lambda} \mid \lambda \in \Lambda \},\]</p><p>where <span>$\lambda$</span> are the <em>variational parameters</em> of <span>$q_{\lambda}$</span>.
If its sampling process can be described by some differentiable reparameterization function <span>$\mathcal{T}_{\lambda}$</span> and a <em>base distribution</em> <span>$\varphi$</span> independent of <span>$\lambda$</span> such that</p><p class="math-container">\[z \sim q_{\lambda} \qquad\Leftrightarrow\qquad
z \stackrel{d}{=} \mathcal{T}_{\lambda}\left(\epsilon\right);\quad \epsilon \sim \varphi\]</p><p>we can effectively estimate the gradient of the ELBO by directly differentiating</p><p class="math-container">\[ \widehat{\mathrm{ELBO}}\left(\lambda\right) = \frac{1}{M}\sum^M_{m=1} \log \pi\left(\mathcal{T}_{\lambda}\left(\epsilon_m\right)\right) + \mathbb{H}\left(q_{\lambda}\right),\]</p><p>where <span>$\epsilon_m \sim \varphi$</span> are Monte Carlo samples, with respect to <span>$\lambda$</span>. This estimator is called the reparameterization gradient estimator.</p><p>In addition to the reparameterization gradient, <code>AdvancedVI</code> provides the following features:</p><ol><li><strong>Posteriors with constrained supports</strong> are handled through <a href="https://github.com/TuringLang/Bijectors.jl"><code>Bijectors</code></a>, which is known as the automatic differentiation VI (ADVI; <sup class="footnote-reference"><a id="citeref-KTRGB2017" href="#footnote-KTRGB2017">[KTRGB2017]</a></sup>) formulation. (See <a href="#bijectors">this section</a>.)</li><li><strong>The gradient of the entropy</strong> can be estimated through various strategies depending on the capabilities of the variational family. 
(See <a href="#entropygrad">this section</a>.)</li></ol><h2 id="The-RepGradELBO-Objective"><a class="docs-heading-anchor" href="#The-RepGradELBO-Objective">The <code>RepGradELBO</code> Objective</a><a id="The-RepGradELBO-Objective-1"></a><a class="docs-heading-anchor-permalink" href="#The-RepGradELBO-Objective" title="Permalink"></a></h2><p>To use the reparameterization gradient, <code>AdvancedVI</code> provides the following variational objective:</p><article class="docstring"><header><a class="docstring-article-toggle-button fa-solid fa-chevron-down" href="javascript:;" title="Collapse docstring"></a><a class="docstring-binding" id="AdvancedVI.RepGradELBO" href="#AdvancedVI.RepGradELBO"><code>AdvancedVI.RepGradELBO</code></a><span class="docstring-category">Type</span><span class="is-flex-grow-1 docstring-article-toggle-button" title="Collapse docstring"></span></header><section><div><pre><code class="language-julia hljs">RepGradELBO(n_samples; kwargs...)</code></pre><p>Evidence lower-bound objective with the reparameterization gradient formulation<sup class="footnote-reference"><a id="citeref-TL2014" href="#footnote-TL2014">[TL2014]</a></sup><sup class="footnote-reference"><a id="citeref-RMW2014" href="#footnote-RMW2014">[RMW2014]</a></sup><sup class="footnote-reference"><a id="citeref-KW2014" href="#footnote-KW2014">[KW2014]</a></sup>. This computes the evidence lower-bound (ELBO) through the formulation:</p><p class="math-container">\[\begin{aligned}
\mathrm{ELBO}\left(\lambda\right)
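The reparameterization trick described in the page above (sampling via θ = T_λ(ε) = μ + σ⊙ε with ε ~ N(0, I)) can be sketched as follows. This is a hedged Python illustration for a mean-field Gaussian family and a standard normal target, with the chain rule written out by hand for clarity; AdvancedVI's actual `RepGradELBO` is Julia and relies on automatic differentiation, and the function names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_target(theta):
    # Gradient of log pi for a standard normal target (illustrative stand-in).
    return -theta

def repgrad_elbo(mu, sigma, n_samples=1000):
    """Reparameterization gradient of the ELBO for q = N(mu, diag(sigma^2)).
    Differentiates (1/M) sum_m log pi(T_lambda(eps_m)) + H(q) w.r.t. (mu, sigma),
    where T_lambda(eps) = mu + sigma * eps and H(q) is the closed-form entropy."""
    d = mu.shape[0]
    eps = rng.standard_normal((n_samples, d))
    theta = mu + sigma * eps              # T_lambda(eps)
    g = grad_log_target(theta)            # d log pi / d theta at each sample
    grad_mu = g.mean(axis=0)              # chain rule: d theta / d mu = I
    # Chain rule: d theta / d sigma = eps; the entropy term contributes 1/sigma.
    grad_sigma = (g * eps).mean(axis=0) + 1.0 / sigma
    return grad_mu, grad_sigma

g_mu, g_sigma = repgrad_elbo(np.array([0.5, -0.5]), np.ones(2))
print(g_mu, g_sigma)  # g_mu ~ [-0.5, 0.5], g_sigma ~ [0, 0] up to Monte Carlo error
```

For this target the exact gradients are ∇_μ ELBO = −μ and ∇_σ ELBO = −σ + 1/σ, so the estimator can be checked directly: both gradients vanish at the optimum (μ, σ) = (0, 1), where q equals the target.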
1 change: 1 addition & 0 deletions previews/PR99/examples/index.html
@@ -457,6 +457,7 @@
});
</script>
<!-- NAVBAR END -->

<div id="documenter"><nav class="docs-sidebar"><div class="docs-package-name"><span class="docs-autofit"><a href="../">AdvancedVI.jl</a></span></div><button class="docs-search-query input is-rounded is-small is-clickable my-2 mx-auto py-1 px-2" id="documenter-search-query">Search docs (Ctrl + /)</button><ul class="docs-menu"><li><a class="tocitem" href="../">AdvancedVI</a></li><li><a class="tocitem" href="../general/">General Usage</a></li><li class="is-active"><a class="tocitem" href>Examples</a><ul class="internal"><li><a class="tocitem" href="#examples"><span>Evidence Lower Bound Maximization</span></a></li></ul></li><li><span class="tocitem">ELBO Maximization</span><ul><li><a class="tocitem" href="../elbo/overview/">Overview</a></li><li><a class="tocitem" href="../elbo/repgradelbo/">Reparameterization Gradient Estimator</a></li></ul></li><li><a class="tocitem" href="../families/">Variational Families</a></li><li><a class="tocitem" href="../optimization/">Optimization</a></li></ul><div class="docs-version-selector field has-addons"><div class="control"><span class="docs-label button is-static is-size-7">Version</span></div><div class="docs-selector control is-expanded"><div class="select is-fullwidth is-size-7"><select id="documenter-version-selector"></select></div></div></div></nav><div class="docs-main"><header class="docs-navbar"><a class="docs-sidebar-button docs-navbar-link fa-solid fa-bars is-hidden-desktop" id="documenter-sidebar-button" href="#"></a><nav class="breadcrumb"><ul class="is-hidden-mobile"><li class="is-active"><a href>Examples</a></li></ul><ul class="is-hidden-tablet"><li class="is-active"><a href>Examples</a></li></ul></nav><div class="docs-right"><a class="docs-navbar-link" href="https://github.com/TuringLang/AdvancedVI.jl/blob/master/docs/src/examples.md#" title="Edit source on GitHub"><span class="docs-icon fa-solid"></span></a><a class="docs-settings-button docs-navbar-link fa-solid fa-gear" id="documenter-settings-button" href="#" 
title="Settings"></a><a class="docs-article-toggle-button fa-solid fa-chevron-up" id="documenter-article-toggle-button" href="javascript:;" title="Collapse all docstrings"></a></div></header><article class="content" id="documenter-page"><h2 id="examples"><a class="docs-heading-anchor" href="#examples">Evidence Lower Bound Maximization</a><a id="examples-1"></a><a class="docs-heading-anchor-permalink" href="#examples" title="Permalink"></a></h2><p>In this tutorial, we will work with a <code>normal-log-normal</code> model.</p><p class="math-container">\[\begin{aligned}
x &amp;\sim \mathrm{LogNormal}\left(\mu_x, \sigma_x^2\right) \\
y &amp;\sim \mathcal{N}\left(\mu_y, \sigma_y^2\right)
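The normal-log-normal model shown in this page (x ~ LogNormal(μ_x, σ_x²), y ~ N(μ_y, σ_y²)) can be sketched outside Julia as follows. This is a minimal Python illustration under assumed parameter values (the tutorial leaves μ_x, σ_x, μ_y, σ_y abstract); note that x has constrained support x > 0, which is the kind of constraint AdvancedVI handles via Bijectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, chosen only for illustration.
mu_x, sigma_x, mu_y, sigma_y = 0.0, 1.0, 0.0, 1.0

def sample_joint(n):
    # x ~ LogNormal(mu_x, sigma_x^2) and y ~ N(mu_y, sigma_y^2), independently.
    x = rng.lognormal(mean=mu_x, sigma=sigma_x, size=n)
    y = rng.normal(loc=mu_y, scale=sigma_y, size=n)
    return x, y

def log_joint(x, y):
    # Log density of the joint model; valid only on the support x > 0.
    lp_x = (-np.log(x) - np.log(sigma_x) - 0.5 * np.log(2 * np.pi)
            - 0.5 * ((np.log(x) - mu_x) / sigma_x) ** 2)
    lp_y = (-np.log(sigma_y) - 0.5 * np.log(2 * np.pi)
            - 0.5 * ((y - mu_y) / sigma_y) ** 2)
    return lp_x + lp_y

x, y = sample_joint(5)
print(log_joint(x, y))
```

A mean-field Gaussian approximation cannot target x directly because of the positivity constraint; transforming x through exp (a bijector) maps the problem back to an unconstrained one.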
