DOC: Updated a few URLs #799

Open
wants to merge 2 commits into base: main
487 changes: 475 additions & 12 deletions examples/gaussian_processes/GP-MaunaLoa.ipynb

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions examples/gaussian_processes/GP-MaunaLoa.myst.md
@@ -5,9 +5,9 @@ jupytext:
format_name: myst
format_version: 0.13
kernelspec:
-display_name: default
+display_name: Python [conda env:base] *
language: python
-name: python3
+name: conda-base-py
myst:
substitutions:
extra_dependencies: bokeh
@@ -41,8 +41,8 @@ Not much was known about how fossil fuel burning influences the climate in the l

The history behind these measurements and their influence on climatology today and other interesting reading:

-- http://scrippsco2.ucsd.edu/history_legacy/early_keeling_curve#
-- https://scripps.ucsd.edu/programs/keelingcurve/2016/05/23/why-has-a-drop-in-global-co2-emissions-not-caused-co2-levels-in-the-atmosphere-to-stabilize/#more-1412
+- [History & Legacy: The Early Keeling Curve](http://scrippsco2.ucsd.edu/history_legacy/early_keeling_curve)
+- [Why Has a Drop in Global CO2 Emissions Not Caused CO2 Levels in the Atmosphere to Stabilize?](https://keelingcurve.ucsd.edu/2016/05/23/why-has-a-drop-in-global-co2-emissions-not-caused-co2-levels-in-the-atmosphere-to-stabilize/)

Let's load in the data, tidy it up, and have a look. The [raw data set is located here](http://scrippsco2.ucsd.edu/data/atmospheric_co2/mlo). This notebook uses the [Bokeh package](http://bokeh.pydata.org/en/latest/) for plots that benefit from interactivity.

6 changes: 3 additions & 3 deletions examples/references.bib
@@ -39,7 +39,7 @@ @article{bali2003gev
pages = {423--427},
year = {2003},
issn = {0165-1765},
-doi = {https://doi.org/10.1016/S0165-1765(03)00035-1},
+doi = {10.1016/S0165-1765(03)00035-1},
url = {https://www.sciencedirect.com/science/article/pii/S0165176503000351},
author = {Turan G. Bali}
}
@@ -118,7 +118,7 @@ @inproceedings{caprani2009gev
editor = "Hitoshi Furuta and Frangopol, {Dan M} and Masanobu Shinozuka",
booktitle = "Proceedings of the 10th International Conference on Structural Safety and Reliability (ICOSSAR2009)",
publisher = "CRC Press",
-url = {https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.722.6789\&rep=rep1\&type=pdf}
+url = {https://www.colincaprani.com/files/papers/Conferences/ICOSSAR%2009%20-%20Caprani%20&%20OBrien.pdf}
}
@article{caprani2010gev,
title = {The use of predictive likelihood to estimate the distribution of extreme bridge traffic load effect},
@@ -128,7 +128,7 @@ @article{caprani2010gev
pages = {138--144},
year = {2010},
issn = {0167-4730},
-doi = {https://doi.org/10.1016/j.strusafe.2009.09.001},
+doi = {10.1016/j.strusafe.2009.09.001},
url = {https://www.sciencedirect.com/science/article/pii/S016747300900071X},
author = {Colin C. Caprani and Eugene J. OBrien}
}
@@ -920,7 +920,7 @@
"plt.bar(ppd_unique, ppd_counts, width=0.2, color=\"C1\", label=\"posterior predictive\")\n",
"plt.xlabel(r\"$\\hat n_W$\")\n",
"plt.ylabel(\"count\")\n",
-"plt.title(f\"number of W samples predicted $\\hat n_W$ from {N_draws_for_prediction} globe flips\")\n",
+"plt.title(rf\"number of W samples predicted $\\hat n_W$ from {N_draws_for_prediction} globe flips\")\n",
"plt.legend();"
]
},
@@ -471,7 +471,7 @@ plt.bar(
plt.bar(ppd_unique, ppd_counts, width=0.2, color="C1", label="posterior predictive")
plt.xlabel(r"$\hat n_W$")
plt.ylabel("count")
-plt.title(f"$\hat n_W$ from {N_draws_for_prediction} globe flips" — see below)
+plt.title(rf"number of W samples predicted $\hat n_W$ from {N_draws_for_prediction} globe flips")
-plt.title(f"number of W samples predicted $\hat n_W$ from {N_draws_for_prediction} globe flips")
plt.legend();
```

@@ -3119,7 +3119,7 @@
" fill_kwargs=dict(alpha=1),\n",
" )\n",
" terms = \"+\".join([f\"\\\\beta_{o} H_i^{o}\" for o in range(1, order + 1)])\n",
" plt.title(f\"$\\mu_i = \\\\alpha + {terms}$\")\n",
" plt.title(f\"$\\\\mu_i = \\\\alpha + {terms}$\")\n",
"\n",
"\n",
"for order in [2, 4, 6]:\n",
@@ -904,7 +904,7 @@ def plot_polynomial_model_posterior_predictive(model, inference, data, order):
fill_kwargs=dict(alpha=1),
)
terms = "+".join([f"\\beta_{o} H_i^{o}" for o in range(1, order + 1)])
-plt.title(f"$\mu_i = \\alpha + {terms}$")
+plt.title(f"$\\mu_i = \\alpha + {terms}$")


for order in [2, 4, 6]:
@@ -1954,7 +1954,7 @@
"plt.title(\n",
" \"Causal effect of increasing Marriage Rate\\n\"\n",
" f\"by {n_std_increase} standard deviations on Divorce Rate\\n\"\n",
-" f\"$p(\\delta < 0) = {prob_lt_zero:1.2}$\"\n",
+" rf\"$p(\\delta < 0) = {prob_lt_zero:1.2}$\"\n",
");"
]
},
@@ -674,7 +674,7 @@ plt.axvline(0, color="k", linestyle="--")
plt.title(
"Causal effect of increasing Marriage Rate\n"
f"by {n_std_increase} standard deviations on Divorce Rate\n"
-f"$p(\delta < 0) = {prob_lt_zero:1.2}$"
+rf"$p(\delta < 0) = {prob_lt_zero:1.2}$"
);
```

@@ -1305,7 +1305,7 @@
" az.plot_dist(ps, color=\"C0\")\n",
" plt.xlim([0, 1])\n",
" plt.ylabel(\"density\")\n",
-" plt.title(f\"$\\\\alpha \\sim \\mathcal{{N}}(0, {std})$\")\n",
+" plt.title(f\"$\\\\alpha \\\\sim \\\\mathcal{{N}}(0, {std})$\")\n",
" if ii == 2:\n",
" plt.xlabel(\"p(event)\")"
]
@@ -493,7 +493,7 @@ for ii, std in enumerate([10, 1.5, 1]):
az.plot_dist(ps, color="C0")
plt.xlim([0, 1])
plt.ylabel("density")
-plt.title(f"$\\alpha \sim \mathcal{{N}}(0, {std})$")
+plt.title(f"$\\alpha \\sim \\mathcal{{N}}(0, {std})$")
if ii == 2:
plt.xlabel("p(event)")
```
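Many hunks in this PR make the same fix: a single backslash before a letter (as in `\sim` or `\mu`) inside a non-raw f-string is an invalid escape sequence, which recent Python versions warn about. A minimal sketch of the two spellings the PR uses, doubled backslashes and raw f-strings (`mu` here is a hypothetical placeholder value, not from the notebooks):

```python
mu = 3

# Doubling the backslash yields the literal backslash that matplotlib's
# mathtext parser expects, while {mu} is still interpolated.
title_escaped = f"$\\mu_i = \\alpha + {mu}$"

# A raw f-string (rf"...") keeps backslashes literal and still interpolates.
title_raw = rf"$\mu_i = \alpha + {mu}$"

# Both spellings produce the identical string.
assert title_escaped == title_raw == "$\\mu_i = \\alpha + 3$"
```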
@@ -2511,7 +2511,7 @@
" # since log(lambda) = alpha, if\n",
" # alpha is Normally distributed, therefore lambda is log-normal\n",
" lambda_prior_dist = stats.lognorm(s=prior_sigma, scale=np.exp(prior_mu))\n",
-" label = f\"$\\\\alpha \\sim \\mathcal{{N}}{prior_mu, prior_sigma}$\"\n",
+" label = f\"$\\\\alpha \\\\sim \\\\mathcal{{N}}{prior_mu, prior_sigma}$\"\n",
" pdf = lambda_prior_dist.pdf(num_tools)\n",
" plt.plot(num_tools, pdf, label=label, linewidth=3)\n",
"\n",
@@ -2534,7 +2534,9 @@
" for sample_idx, lambda_ in enumerate(lambdas):\n",
" pmf = stats.poisson(lambda_).pmf(num_tools)\n",
"\n",
-" label = f\"$\\\\alpha \\sim \\mathcal{{N}}{prior_mu, prior_sigma}$\" if sample_idx == 1 else None\n",
+" label = (\n",
+" f\"$\\\\alpha \\\\sim \\\\mathcal{{N}}{prior_mu, prior_sigma}$\" if sample_idx == 1 else None\n",
+" )\n",
" color = f\"C{ii}\"\n",
" plt.plot(num_tools, pmf, color=color, label=label, alpha=0.1)\n",
"\n",
@@ -558,7 +558,7 @@ for prior_mu, prior_sigma in normal_alpha_prior_params:
# since log(lambda) = alpha, if
# alpha is Normally distributed, therefore lambda is log-normal
lambda_prior_dist = stats.lognorm(s=prior_sigma, scale=np.exp(prior_mu))
-label = f"$\\alpha \sim \mathcal{{N}}{prior_mu, prior_sigma}$"
+label = f"$\\alpha \\sim \\mathcal{{N}}{prior_mu, prior_sigma}$"
pdf = lambda_prior_dist.pdf(num_tools)
plt.plot(num_tools, pdf, label=label, linewidth=3)

@@ -581,7 +581,9 @@ for ii, (prior_mu, prior_sigma) in enumerate(normal_alpha_prior_params):
for sample_idx, lambda_ in enumerate(lambdas):
pmf = stats.poisson(lambda_).pmf(num_tools)

-label = f"$\\alpha \sim \mathcal{{N}}{prior_mu, prior_sigma}$" if sample_idx == 1 else None
+label = (
+f"$\\alpha \\sim \\mathcal{{N}}{prior_mu, prior_sigma}$" if sample_idx == 1 else None
+)
color = f"C{ii}"
plt.plot(num_tools, pmf, color=color, label=label, alpha=0.1)
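The comment in the hunk above ("since log(lambda) = alpha, if alpha is Normally distributed, therefore lambda is log-normal") can be checked numerically. A sketch with assumed illustrative values for `prior_mu` and `prior_sigma`, not taken from the notebook:

```python
import numpy as np
from scipy import stats

# Assumed illustrative prior parameters.
prior_mu, prior_sigma = 3.0, 0.5

# If alpha ~ Normal(prior_mu, prior_sigma), then lambda = exp(alpha) is
# log-normal; scipy parameterizes this as lognorm(s=sigma, scale=exp(mu)).
rng = np.random.default_rng(0)
lam = np.exp(rng.normal(prior_mu, prior_sigma, size=200_000))
lambda_prior_dist = stats.lognorm(s=prior_sigma, scale=np.exp(prior_mu))

# The analytic mean is exp(mu + sigma**2 / 2), and the sampled mean agrees.
assert np.isclose(lambda_prior_dist.mean(), np.exp(prior_mu + prior_sigma**2 / 2))
assert abs(lam.mean() - lambda_prior_dist.mean()) < 0.5
```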

@@ -1545,7 +1545,7 @@
" utils.plot_2d_function(xs, ys, prior, colors=\"gray\", ax=axs[ii])\n",
" plt.xlabel(\"x\")\n",
" plt.ylabel(\"v\")\n",
" plt.title(f\"$v \\sim Normal(0, {sigma_v})$\\n# Divergences: {n_divergences}\")"
" plt.title(f\"$v \\\\sim Normal(0, {sigma_v})$\\n# Divergences: {n_divergences}\")"
]
},
{
@@ -1722,7 +1722,7 @@
" utils.plot_2d_function(xs, ys, prior, colors=\"gray\", ax=axs[ii])\n",
" plt.xlabel(\"z\")\n",
" plt.ylabel(\"v\")\n",
" plt.title(f\"$v \\sim$ Normal(0, {sigma_v})\\n# Divergences: {n_divergences}\")"
" plt.title(f\"$v \\\\sim$ Normal(0, {sigma_v})\\n# Divergences: {n_divergences}\")"
]
},
{
@@ -708,7 +708,7 @@ for ii, sigma_v in enumerate(sigma_vs):
utils.plot_2d_function(xs, ys, prior, colors="gray", ax=axs[ii])
plt.xlabel("x")
plt.ylabel("v")
-plt.title(f"$v \sim Normal(0, {sigma_v})$\n# Divergences: {n_divergences}")
+plt.title(f"$v \\sim Normal(0, {sigma_v})$\n# Divergences: {n_divergences}")
```

## What to do?
@@ -754,7 +754,7 @@ for ii, sigma_v in enumerate(sigma_vs):
utils.plot_2d_function(xs, ys, prior, colors="gray", ax=axs[ii])
plt.xlabel("z")
plt.ylabel("v")
-plt.title(f"$v \sim$ Normal(0, {sigma_v})\n# Divergences: {n_divergences}")
+plt.title(f"$v \\sim$ Normal(0, {sigma_v})\n# Divergences: {n_divergences}")
```

- By reparameterizing, we get to sample multi-dimensional Normal distributions, which are smoother parabolas in the log space.
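The bullet above can be illustrated outside of a sampler. A hypothetical NumPy sketch of the non-centered reparameterization (the variable names are assumptions, not notebook code):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_v = 3.0  # assumed scale, analogous to one entry of sigma_vs

# Centered: draw v ~ Normal(0, sigma_v) directly.
v_centered = rng.normal(0.0, sigma_v, size=100_000)

# Non-centered: draw z ~ Normal(0, 1) on a unit scale, then rescale,
# so the sampler explores the same geometry regardless of sigma_v.
z = rng.normal(0.0, 1.0, size=100_000)
v_noncentered = sigma_v * z

# Both parameterizations target the same distribution.
assert abs(v_centered.std() - sigma_v) < 0.05
assert abs(v_noncentered.std() - sigma_v) < 0.05
```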
8 changes: 4 additions & 4 deletions examples/survival_analysis/censored_data.ipynb
@@ -47,7 +47,7 @@
"metadata": {},
"source": [
"[This example notebook on Bayesian survival\n",
-"analysis](http://docs.pymc.io/notebooks/survival_analysis.html) touches on the\n",
+"analysis](https://www.pymc.io/projects/examples/en/latest/survival_analysis/survival_analysis.html) touches on the\n",
"point of censored data. _Censoring_ is a form of missing-data problem, in which\n",
"observations greater than a certain threshold are clipped down to that\n",
"threshold, or observations less than a certain threshold are clipped up to that\n",
@@ -726,9 +726,9 @@
],
"metadata": {
"kernelspec": {
-"display_name": "pymc",
+"display_name": "Python [conda env:base] *",
"language": "python",
-"name": "python3"
+"name": "conda-base-py"
},
"language_info": {
"codemirror_mode": {
@@ -740,7 +740,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.11.0 | packaged by conda-forge | (main, Oct 25 2022, 06:24:40) [GCC 10.4.0]"
+"version": "3.11.5"
},
"vscode": {
"interpreter": {
6 changes: 3 additions & 3 deletions examples/survival_analysis/censored_data.myst.md
@@ -5,9 +5,9 @@ jupytext:
format_name: myst
format_version: 0.13
kernelspec:
-display_name: pymc
+display_name: Python [conda env:base] *
language: python
-name: python3
+name: conda-base-py
---

(censored_data)=
@@ -38,7 +38,7 @@ az.style.use("arviz-darkgrid")
```

[This example notebook on Bayesian survival
-analysis](http://docs.pymc.io/notebooks/survival_analysis.html) touches on the
+analysis](https://www.pymc.io/projects/examples/en/latest/survival_analysis/survival_analysis.html) touches on the
point of censored data. _Censoring_ is a form of missing-data problem, in which
observations greater than a certain threshold are clipped down to that
threshold, or observations less than a certain threshold are clipped up to that