
Commit ef0976f

Deploying to gh-pages from @ efce447 🚀
1 parent b2c86be · commit ef0976f

5 files changed: 51 additions & 51 deletions


versions/v0.34.1/search.json

Lines changed: 2 additions & 2 deletions
@@ -868,7 +868,7 @@
868 868
"href": "tutorials/docs-10-using-turing-autodiff/index.html#compositional-sampling-with-differing-ad-modes",
869 869
"title": "Automatic Differentiation",
870 870
"section": "Compositional Sampling with Differing AD Modes",
871-
"text": "Compositional Sampling with Differing AD Modes\nTuring supports intermixed automatic differentiation methods for different variable spaces. The snippet below shows using ForwardDiff to sample the mean (m) parameter, and using ReverseDiff for the variance (s) parameter:\n\nusing Turing\nusing ReverseDiff\n\n# Define a simple Normal model with unknown mean and variance.\n@model function gdemo(x, y)\n s² ~ InverseGamma(2, 3)\n m ~ Normal(0, sqrt(s²))\n x ~ Normal(m, sqrt(s²))\n return y ~ Normal(m, sqrt(s²))\nend\n\n# Sample using Gibbs and varying autodiff backends.\nc = sample(\n gdemo(1.5, 2),\n Gibbs(\n HMC(0.1, 5, :m; adtype=AutoForwardDiff(; chunksize=0)),\n HMC(0.1, 5, :s²; adtype=AutoReverseDiff(false)),\n ),\n 1000,\n progress=false,\n)\n\nChains MCMC chain (1000×3×1 Array{Float64, 3}):\n\nIterations = 1:1:1000\nNumber of chains = 1\nSamples per chain = 1000\nWall duration = 7.95 seconds\nCompute duration = 7.95 seconds\nparameters = s², m\ninternals = lp\n\nSummary Statistics\n parameters mean std mcse ess_bulk ess_tail rhat e ⋯\n Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯\n\n s² 2.1741 1.6261 0.1331 136.2478 264.0546 1.0009 ⋯\n m 1.2010 0.8503 0.0807 115.3836 177.0764 1.0006 ⋯\n 1 column omitted\n\nQuantiles\n parameters 2.5% 25.0% 50.0% 75.0% 97.5%\n Symbol Float64 Float64 Float64 Float64 Float64\n\n s² 0.5841 1.1432 1.6930 2.6915 6.3371\n m -0.3430 0.6991 1.1333 1.6847 3.0490\n\n\nGenerally, reverse-mode AD, for instance ReverseDiff, is faster when sampling from variables of high dimensionality (greater than 20), while forward-mode AD, for instance ForwardDiff, is more efficient for lower-dimension variables. This functionality allows those who are performance sensitive to fine tune their automatic differentiation for their specific models.\nIf the differentiation method is not specified in this way, Turing will default to using whatever the global AD backend is. Currently, this defaults to ForwardDiff.",
871+
"text": "Compositional Sampling with Differing AD Modes\nTuring supports intermixed automatic differentiation methods for different variable spaces. The snippet below shows using ForwardDiff to sample the mean (m) parameter, and using ReverseDiff for the variance (s) parameter:\n\nusing Turing\nusing ReverseDiff\n\n# Define a simple Normal model with unknown mean and variance.\n@model function gdemo(x, y)\n s² ~ InverseGamma(2, 3)\n m ~ Normal(0, sqrt(s²))\n x ~ Normal(m, sqrt(s²))\n return y ~ Normal(m, sqrt(s²))\nend\n\n# Sample using Gibbs and varying autodiff backends.\nc = sample(\n gdemo(1.5, 2),\n Gibbs(\n HMC(0.1, 5, :m; adtype=AutoForwardDiff(; chunksize=0)),\n HMC(0.1, 5, :s²; adtype=AutoReverseDiff(false)),\n ),\n 1000,\n progress=false,\n)\n\nChains MCMC chain (1000×3×1 Array{Float64, 3}):\n\nIterations = 1:1:1000\nNumber of chains = 1\nSamples per chain = 1000\nWall duration = 8.11 seconds\nCompute duration = 8.11 seconds\nparameters = s², m\ninternals = lp\n\nSummary Statistics\n parameters mean std mcse ess_bulk ess_tail rhat e ⋯\n Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯\n\n s² 1.8512 1.4006 0.1214 154.5559 204.1832 0.9992 ⋯\n m 1.1362 0.8128 0.0755 120.6305 118.7485 0.9997 ⋯\n 1 column omitted\n\nQuantiles\n parameters 2.5% 25.0% 50.0% 75.0% 97.5%\n Symbol Float64 Float64 Float64 Float64 Float64\n\n s² 0.5325 0.9711 1.4129 2.2053 5.6801\n m -0.6538 0.6757 1.1308 1.6484 2.7591\n\n\nGenerally, reverse-mode AD, for instance ReverseDiff, is faster when sampling from variables of high dimensionality (greater than 20), while forward-mode AD, for instance ForwardDiff, is more efficient for lower-dimension variables. This functionality allows those who are performance sensitive to fine tune their automatic differentiation for their specific models.\nIf the differentiation method is not specified in this way, Turing will default to using whatever the global AD backend is. Currently, this defaults to ForwardDiff.",
872 872
"crumbs": [
873 873
"Get Started",
874 874
"Users",
@@ -1522,7 +1522,7 @@
1522 1522
"href": "tutorials/docs-13-using-turing-performance-tips/index.html#ensure-that-types-in-your-model-can-be-inferred",
1523 1523
"title": "Performance Tips",
1524 1524
"section": "Ensure that types in your model can be inferred",
1525-
"text": "Ensure that types in your model can be inferred\nFor efficient gradient-based inference, e.g. using HMC, NUTS or ADVI, it is important to ensure the types in your model can be inferred.\nThe following example with abstract types\n\n@model function tmodel(x, y)\n p, n = size(x)\n params = Vector{Real}(undef, n)\n for i in 1:n\n params[i] ~ truncated(Normal(), 0, Inf)\n end\n\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 2 methods)\n\n\ncan be transformed into the following representation with concrete types:\n\n@model function tmodel(x, y, ::Type{T}=Float64) where {T}\n p, n = size(x)\n params = Vector{T}(undef, n)\n for i in 1:n\n params[i] ~ truncated(Normal(), 0, Inf)\n end\n\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 4 methods)\n\n\nAlternatively, you could use filldist in this example:\n\n@model function tmodel(x, y)\n params ~ filldist(truncated(Normal(), 0, Inf), size(x, 2))\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 4 methods)\n\n\nNote that you can use @code_warntype to find types in your model definition that the compiler cannot infer. They are marked in red in the Julia REPL.\nFor example, consider the following simple program:\n\n@model function tmodel(x)\n p = Vector{Real}(undef, 1)\n p[1] ~ Normal()\n p = p .+ 1\n return x ~ Normal(p[1])\nend\n\ntmodel (generic function with 6 methods)\n\n\nWe can use\n\nusing Random\n\nmodel = tmodel(1.0)\n\n@code_warntype model.f(\n model,\n Turing.VarInfo(model),\n Turing.SamplingContext(\n Random.default_rng(), Turing.SampleFromPrior(), Turing.DefaultContext()\n ),\n model.args...,\n)\n\nto inspect type inference in the model.",
1525+
"text": "Ensure that types in your model can be inferred\nFor efficient gradient-based inference, e.g. using HMC, NUTS or ADVI, it is important to ensure the types in your model can be inferred.\nThe following example with abstract types\n\n@model function tmodel(x, y)\n p, n = size(x)\n params = Vector{Real}(undef, n)\n for i in 1:n\n params[i] ~ truncated(Normal(); lower=0)\n end\n\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 2 methods)\n\n\ncan be transformed into the following representation with concrete types:\n\n@model function tmodel(x, y, ::Type{T}=Float64) where {T}\n p, n = size(x)\n params = Vector{T}(undef, n)\n for i in 1:n\n params[i] ~ truncated(Normal(); lower=0)\n end\n\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 4 methods)\n\n\nAlternatively, you could use filldist in this example:\n\n@model function tmodel(x, y)\n params ~ filldist(truncated(Normal(); lower=0), size(x, 2))\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 4 methods)\n\n\nNote that you can use @code_warntype to find types in your model definition that the compiler cannot infer. They are marked in red in the Julia REPL.\nFor example, consider the following simple program:\n\n@model function tmodel(x)\n p = Vector{Real}(undef, 1)\n p[1] ~ Normal()\n p = p .+ 1\n return x ~ Normal(p[1])\nend\n\ntmodel (generic function with 6 methods)\n\n\nWe can use\n\nusing Random\n\nmodel = tmodel(1.0)\n\n@code_warntype model.f(\n model,\n Turing.VarInfo(model),\n Turing.SamplingContext(\n Random.default_rng(), Turing.SampleFromPrior(), Turing.DefaultContext()\n ),\n model.args...,\n)\n\nto inspect type inference in the model.",
1526 1526
"crumbs": [
1527 1527
"Get Started",
1528 1528
"Users",

versions/v0.34.1/search_original.json

Lines changed: 2 additions & 2 deletions
@@ -868,7 +868,7 @@
868 868
"href": "tutorials/docs-10-using-turing-autodiff/index.html#compositional-sampling-with-differing-ad-modes",
869 869
"title": "Automatic Differentiation",
870 870
"section": "Compositional Sampling with Differing AD Modes",
871-
"text": "Compositional Sampling with Differing AD Modes\nTuring supports intermixed automatic differentiation methods for different variable spaces. The snippet below shows using ForwardDiff to sample the mean (m) parameter, and using ReverseDiff for the variance (s) parameter:\n\nusing Turing\nusing ReverseDiff\n\n# Define a simple Normal model with unknown mean and variance.\n@model function gdemo(x, y)\n s² ~ InverseGamma(2, 3)\n m ~ Normal(0, sqrt(s²))\n x ~ Normal(m, sqrt(s²))\n return y ~ Normal(m, sqrt(s²))\nend\n\n# Sample using Gibbs and varying autodiff backends.\nc = sample(\n gdemo(1.5, 2),\n Gibbs(\n HMC(0.1, 5, :m; adtype=AutoForwardDiff(; chunksize=0)),\n HMC(0.1, 5, :s²; adtype=AutoReverseDiff(false)),\n ),\n 1000,\n progress=false,\n)\n\nChains MCMC chain (1000×3×1 Array{Float64, 3}):\n\nIterations = 1:1:1000\nNumber of chains = 1\nSamples per chain = 1000\nWall duration = 7.95 seconds\nCompute duration = 7.95 seconds\nparameters = s², m\ninternals = lp\n\nSummary Statistics\n parameters mean std mcse ess_bulk ess_tail rhat e ⋯\n Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯\n\n s² 2.1741 1.6261 0.1331 136.2478 264.0546 1.0009 ⋯\n m 1.2010 0.8503 0.0807 115.3836 177.0764 1.0006 ⋯\n 1 column omitted\n\nQuantiles\n parameters 2.5% 25.0% 50.0% 75.0% 97.5%\n Symbol Float64 Float64 Float64 Float64 Float64\n\n s² 0.5841 1.1432 1.6930 2.6915 6.3371\n m -0.3430 0.6991 1.1333 1.6847 3.0490\n\n\nGenerally, reverse-mode AD, for instance ReverseDiff, is faster when sampling from variables of high dimensionality (greater than 20), while forward-mode AD, for instance ForwardDiff, is more efficient for lower-dimension variables. This functionality allows those who are performance sensitive to fine tune their automatic differentiation for their specific models.\nIf the differentiation method is not specified in this way, Turing will default to using whatever the global AD backend is. Currently, this defaults to ForwardDiff.",
871+
"text": "Compositional Sampling with Differing AD Modes\nTuring supports intermixed automatic differentiation methods for different variable spaces. The snippet below shows using ForwardDiff to sample the mean (m) parameter, and using ReverseDiff for the variance (s) parameter:\n\nusing Turing\nusing ReverseDiff\n\n# Define a simple Normal model with unknown mean and variance.\n@model function gdemo(x, y)\n s² ~ InverseGamma(2, 3)\n m ~ Normal(0, sqrt(s²))\n x ~ Normal(m, sqrt(s²))\n return y ~ Normal(m, sqrt(s²))\nend\n\n# Sample using Gibbs and varying autodiff backends.\nc = sample(\n gdemo(1.5, 2),\n Gibbs(\n HMC(0.1, 5, :m; adtype=AutoForwardDiff(; chunksize=0)),\n HMC(0.1, 5, :s²; adtype=AutoReverseDiff(false)),\n ),\n 1000,\n progress=false,\n)\n\nChains MCMC chain (1000×3×1 Array{Float64, 3}):\n\nIterations = 1:1:1000\nNumber of chains = 1\nSamples per chain = 1000\nWall duration = 8.11 seconds\nCompute duration = 8.11 seconds\nparameters = s², m\ninternals = lp\n\nSummary Statistics\n parameters mean std mcse ess_bulk ess_tail rhat e ⋯\n Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯\n\n s² 1.8512 1.4006 0.1214 154.5559 204.1832 0.9992 ⋯\n m 1.1362 0.8128 0.0755 120.6305 118.7485 0.9997 ⋯\n 1 column omitted\n\nQuantiles\n parameters 2.5% 25.0% 50.0% 75.0% 97.5%\n Symbol Float64 Float64 Float64 Float64 Float64\n\n s² 0.5325 0.9711 1.4129 2.2053 5.6801\n m -0.6538 0.6757 1.1308 1.6484 2.7591\n\n\nGenerally, reverse-mode AD, for instance ReverseDiff, is faster when sampling from variables of high dimensionality (greater than 20), while forward-mode AD, for instance ForwardDiff, is more efficient for lower-dimension variables. This functionality allows those who are performance sensitive to fine tune their automatic differentiation for their specific models.\nIf the differentiation method is not specified in this way, Turing will default to using whatever the global AD backend is. Currently, this defaults to ForwardDiff.",
872 872
"crumbs": [
873 873
"Get Started",
874 874
"Users",
@@ -1522,7 +1522,7 @@
1522 1522
"href": "tutorials/docs-13-using-turing-performance-tips/index.html#ensure-that-types-in-your-model-can-be-inferred",
1523 1523
"title": "Performance Tips",
1524 1524
"section": "Ensure that types in your model can be inferred",
1525-
"text": "Ensure that types in your model can be inferred\nFor efficient gradient-based inference, e.g. using HMC, NUTS or ADVI, it is important to ensure the types in your model can be inferred.\nThe following example with abstract types\n\n@model function tmodel(x, y)\n p, n = size(x)\n params = Vector{Real}(undef, n)\n for i in 1:n\n params[i] ~ truncated(Normal(), 0, Inf)\n end\n\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 2 methods)\n\n\ncan be transformed into the following representation with concrete types:\n\n@model function tmodel(x, y, ::Type{T}=Float64) where {T}\n p, n = size(x)\n params = Vector{T}(undef, n)\n for i in 1:n\n params[i] ~ truncated(Normal(), 0, Inf)\n end\n\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 4 methods)\n\n\nAlternatively, you could use filldist in this example:\n\n@model function tmodel(x, y)\n params ~ filldist(truncated(Normal(), 0, Inf), size(x, 2))\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 4 methods)\n\n\nNote that you can use @code_warntype to find types in your model definition that the compiler cannot infer. They are marked in red in the Julia REPL.\nFor example, consider the following simple program:\n\n@model function tmodel(x)\n p = Vector{Real}(undef, 1)\n p[1] ~ Normal()\n p = p .+ 1\n return x ~ Normal(p[1])\nend\n\ntmodel (generic function with 6 methods)\n\n\nWe can use\n\nusing Random\n\nmodel = tmodel(1.0)\n\n@code_warntype model.f(\n model,\n Turing.VarInfo(model),\n Turing.SamplingContext(\n Random.default_rng(), Turing.SampleFromPrior(), Turing.DefaultContext()\n ),\n model.args...,\n)\n\nto inspect type inference in the model.",
1525+
"text": "Ensure that types in your model can be inferred\nFor efficient gradient-based inference, e.g. using HMC, NUTS or ADVI, it is important to ensure the types in your model can be inferred.\nThe following example with abstract types\n\n@model function tmodel(x, y)\n p, n = size(x)\n params = Vector{Real}(undef, n)\n for i in 1:n\n params[i] ~ truncated(Normal(); lower=0)\n end\n\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 2 methods)\n\n\ncan be transformed into the following representation with concrete types:\n\n@model function tmodel(x, y, ::Type{T}=Float64) where {T}\n p, n = size(x)\n params = Vector{T}(undef, n)\n for i in 1:n\n params[i] ~ truncated(Normal(); lower=0)\n end\n\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 4 methods)\n\n\nAlternatively, you could use filldist in this example:\n\n@model function tmodel(x, y)\n params ~ filldist(truncated(Normal(); lower=0), size(x, 2))\n a = x * params\n return y ~ MvNormal(a, I)\nend\n\ntmodel (generic function with 4 methods)\n\n\nNote that you can use @code_warntype to find types in your model definition that the compiler cannot infer. They are marked in red in the Julia REPL.\nFor example, consider the following simple program:\n\n@model function tmodel(x)\n p = Vector{Real}(undef, 1)\n p[1] ~ Normal()\n p = p .+ 1\n return x ~ Normal(p[1])\nend\n\ntmodel (generic function with 6 methods)\n\n\nWe can use\n\nusing Random\n\nmodel = tmodel(1.0)\n\n@code_warntype model.f(\n model,\n Turing.VarInfo(model),\n Turing.SamplingContext(\n Random.default_rng(), Turing.SampleFromPrior(), Turing.DefaultContext()\n ),\n model.args...,\n)\n\nto inspect type inference in the model.",
1526 1526
"crumbs": [
1527 1527
"Get Started",
1528 1528
"Users",
