Replies: 2 comments
- tagging @penelopeysm for a look
- for reference: TuringLang/DynamicPPL.jl#900
- TuringLang/DynamicPPL.jl#691
The goal is to design a flexible, user-friendly interface for log density functions that can handle various model operations, especially in higher-order contexts like Gibbs sampling and Bayesian workflows.

Evaluation functions:
- `evaluate`

Query functions:
- `is_parametric(model)`
- `dimension(model)` (only defined when `is_parametric(model) == true`)
- `is_conditioned(model)`
- `is_fixed(model)`
- `logjoint(model, params)`
- `loglikelihood(model, params)`
- `logprior(model, params)`

where `params` can be a `Vector`, `NamedTuple`, `Dict`, etc.
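As a rough illustration of how the query functions might compose, here is a sketch with a toy model type; `SimpleGaussian` and its (unnormalized) densities are invented for illustration and none of this is existing DynamicPPL API:

```julia
# Design sketch only: a toy parametric model implementing the proposed
# query functions. Densities are written up to additive constants.
struct SimpleGaussian end

is_parametric(::SimpleGaussian) = true
dimension(::SimpleGaussian) = 1   # only meaningful because is_parametric is true

# Toy choices: θ ~ Normal(0, 1), y ~ Normal(θ, 1), with y = 1.0 observed.
logprior(::SimpleGaussian, params)      = -0.5 * params.θ^2
loglikelihood(::SimpleGaussian, params) = -0.5 * (1.0 - params.θ)^2
logjoint(m::SimpleGaussian, params)     = logprior(m, params) + loglikelihood(m, params)

logjoint(SimpleGaussian(), (θ = 0.5,))  # -0.25
```

The sketch takes `params` as a `NamedTuple`; `Vector` or `Dict` methods would dispatch the same way.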
Transformation functions:
- `condition(model, conditioned_vars)`
- `fix(model, fixed_vars)`
- `factorize(model, variables_in_the_factor)`

`condition` and `factorize` are similar, but `factorize` effectively generates a sub-model.
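A minimal sketch of how `condition` and `fix` might transform a model without mutation; the `ToyModel` type and its fields are assumptions made here for illustration, not part of the proposal:

```julia
# Design sketch only: transformations return a new model rather than
# mutating in place. `ToyModel` is a stand-in, not a real DynamicPPL type.
struct ToyModel
    conditioned::Dict{Symbol,Any}   # observed data
    fixed::Dict{Symbol,Any}         # variables pinned to constants
end
ToyModel() = ToyModel(Dict{Symbol,Any}(), Dict{Symbol,Any}())

is_conditioned(m::ToyModel) = !isempty(m.conditioned)
is_fixed(m::ToyModel)       = !isempty(m.fixed)
condition(m::ToyModel, vars) = ToyModel(merge(m.conditioned, Dict(pairs(vars))), m.fixed)
fix(m::ToyModel, vars)       = ToyModel(m.conditioned, merge(m.fixed, Dict(pairs(vars))))

m = condition(ToyModel(), (y = 1.0,))
is_conditioned(m), is_fixed(m)   # (true, false)
```

Under this reading, `factorize` would likewise return a new (sub-)model restricted to the given variables.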
Higher-order functions:
- `generated_quantities(model, sample[, expr])` or `generated_quantities(model, sample, f, args...)`
  - `generated_quantities` computes things from the sampling result. In `DynamicPPL`, this is the model's return value. For more flexibility, we should allow passing an expression or function. (Currently, users can rewrite the model definition to achieve this in `DynamicPPL`, but with limitations. We want to make this more generic.)
  - `rand` is a special case of `generated_quantities` (when no sample is passed).
- `predict(model, sample)`
- `simulation_based_calibration` runs the full model + inference pipeline N times to check calibration:
  - draw `θ₀` from the prior and simulate data `y` from it;
  - run `inference(condition(model, y))` to obtain posterior draws `{θᵢ}`;
  - rank `summary(θ₀)` within `summary.(θᵢ)`;
  - apply `discrepancy` (default KS) to return an `SBCResult` with ranks, histogram, p-value, etc.

`generated_quantities` can be implemented by `fix`ing the model on `sample` and calling `evaluate`. `predict` can be implemented by `uncondition`ing the model on `data`, fixing it on `sample`, and calling `evaluate`
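The two implementation notes above can be sketched directly. `evaluate`, `fix`, and `uncondition` are the proposed interface functions; the `Toy` stand-ins are invented here only so the sketch is self-contained:

```julia
# Toy stand-ins (invented): just enough structure to express the two notes.
struct Toy
    data      # conditioned observations (or nothing)
    values    # fixed variable values (or nothing)
end
uncondition(m::Toy) = Toy(nothing, m.values)
fix(m::Toy, sample) = Toy(m.data, sample)
evaluate(m::Toy)    = m.values   # toy "model return value"

# generated_quantities: fix the model on `sample`, then evaluate.
generated_quantities(model, sample) = evaluate(fix(model, sample))

# predict: drop the conditioning on `data`, fix on `sample`, then evaluate.
predict(model, sample) = evaluate(fix(uncondition(model), sample))
```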
.
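Putting the `simulation_based_calibration` steps together, the N-repetition loop might look like the skeleton below; `inference`, `summary`, and `discrepancy` are user-supplied callables, `rand(model)` is assumed to return a prior draw together with simulated data, and none of this is existing API:

```julia
# Skeleton only: relies on the proposed `condition` and on a `rand(model)`
# returning a prior draw θ₀ along with data y simulated from it.
function simulation_based_calibration(model, inference, summary, discrepancy; N = 100)
    ranks = Int[]
    for _ in 1:N
        θ₀, y = rand(model)                  # prior draw and simulated data
        θs = inference(condition(model, y))  # posterior draws {θᵢ}
        push!(ranks, count(<(summary(θ₀)), summary.(θs)))  # rank of θ₀'s summary
    end
    return discrepancy(ranks)  # default KS; yields an SBCResult (ranks, histogram, p-value, etc.)
end
```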