API
Part of the API of DynamicPPL is defined in the more lightweight interface package AbstractPPL.jl and reexported here.
Model
Macros
A core component of DynamicPPL is the @model
macro. It can be used to define probabilistic models in an intuitive way by specifying random variables and their distributions with ~
statements. These statements are rewritten by @model
as calls of internal functions for sampling the variables and computing their log densities.
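To make this concrete, here is a minimal sketch of such a definition (the coinflip model and its data are invented for this illustration, and DynamicPPL and Distributions are assumed to be loaded; the docstring below gives the precise syntax):
using DynamicPPL, Distributions

@model function coinflip(y)
    # Prior on the success probability.
    p ~ Beta(1, 1)
    # Each observation is a Bernoulli draw given `p`.
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
    return p
end

model = coinflip([1, 0, 1, 1])  # a `Model` carrying the observed data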
DynamicPPL.@model
— Macro@model(expr[, warn = false])
Macro to specify a probabilistic model.
If warn
is true
, a warning is displayed if internal variable names are used in the model definition.
Examples
Model definition:
@model function model(x, y = 42)
...
end
To generate a Model
, call model(xvalue)
or model(xvalue, yvalue)
.
sourceType
A Model
can be created by calling the model function, as defined by @model
.
DynamicPPL.Model
— Typestruct Model{F,argnames,defaultnames,missings,Targs,Tdefaults,Ctx<:AbstractContext}
f::F
args::NamedTuple{argnames,Targs}
defaults::NamedTuple{defaultnames,Tdefaults}
Model{typeof(f),(:x, :y),(:x,),(),Tuple{Float64,Float64},Tuple{Int64}}(f, (x = 1.0, y = 2.0), (x = 42,))
julia> Model{(:y,)}(f, (x = 1.0, y = 2.0), (x = 42,)) # with special definition of missings
Model{typeof(f),(:x, :y),(:x,),(:y,),Tuple{Float64,Float64},Tuple{Int64}}(f, (x = 1.0, y = 2.0), (x = 42,))
sourceModel
s are callable structs.
DynamicPPL.Model
— Method(model::Model)([rng, varinfo, sampler, context])
Sample from the model
using the sampler
with random number generator rng
and the context
, and store the sample and log joint probability in varinfo
.
The method resets the log joint probability of varinfo
and increases the evaluation number of sampler
.
sourceBasic properties of a model can be accessed with getargnames
, getmissings
, and nameof
.
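As a rough sketch of how these accessors behave (the demo model below is made up for illustration; passing missing for an argument marks it as a random variable to be sampled):
@model function demo(x, y)
    x ~ Normal()
    y ~ Normal(x, 1)
end

m = demo(missing, 1.0)

getargnames(m)  # expected: (:x, :y)
getmissings(m)  # expected: (:x,), since `x` was passed as `missing`
nameof(m)       # expected: :demo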
Base.nameof
— Methodnameof(model::Model)
Get the name of the model
as Symbol
.
sourceDynamicPPL.getargnames
— Functiongetargnames(model::Model)
Get a tuple of the argument names of the model
.
sourceDynamicPPL.getmissings
— Functiongetmissings(model::Model)
Get a tuple of the names of the missing arguments of the model
.
sourceEvaluation
With rand
one can draw samples from the prior distribution of a Model
.
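For example, one might draw prior samples as follows (a sketch; the demo model is made up, and Random is only needed for the explicit RNG):
using Random

@model function demo()
    m ~ Normal()
    x ~ Normal(m, 1)
end

rand(demo())               # e.g. (m = 0.37, x = -0.12); values differ on every call
rand(Xoshiro(42), demo())  # pass an RNG explicitly for reproducible prior draws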
Base.rand
— Functionrand([rng=Random.default_rng()], [T=NamedTuple], model::Model)
Generate a sample of type T
from the prior distribution of the model
.
sourceOne can also evaluate the log prior, log likelihood, and log joint probability.
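As a quick sketch of how the three quantities relate when the parameters are given as a NamedTuple (the demo model and the values 0.5 and 0.1 are made up for illustration):
@model function demo(x)
    m ~ Normal()
    x ~ Normal(m, 1)
end

model = demo(0.5)   # `x = 0.5` is observed
θ = (m = 0.1,)

logprior(model, θ)       # log density of the prior at m = 0.1
loglikelihood(model, θ)  # log density of x = 0.5 given m = 0.1
logjoint(model, θ)       # equals logprior(model, θ) + loglikelihood(model, θ)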
DynamicPPL.logprior
— Functionlogprior(model::Model, varinfo::AbstractVarInfo)
Return the log prior probability of variables varinfo
for the probabilistic model
.
See also logjoint
and loglikelihood
.
sourcelogprior(model::Model, chain::AbstractMCMC.AbstractChains)
Return an array of log prior probabilities evaluated at each sample in an MCMC chain
.
Examples
julia> using MCMCChains, Distributions
julia> @model function demo_model(x)
s ~ InverseGamma(2, 3)
julia> # construct a chain of samples using MCMCChains
chain = Chains(rand(10, 2, 3), [:s, :m]);
julia> logprior(demo_model([1., 2.]), chain);
sourcelogprior(model::Model, θ)
Return the log prior probability of variables θ
for the probabilistic model
.
See also logjoint
and loglikelihood
.
Examples
julia> @model function demo(x)
m ~ Normal()
for i in eachindex(x)
x[i] ~ Normal(m, 1.0)
julia> # Truth.
logpdf(Normal(), 100.0)
-5000.918938533205
sourceStatsAPI.loglikelihood
— Functionloglikelihood(model::Model, varinfo::AbstractVarInfo)
Return the log likelihood of variables varinfo
for the probabilistic model
.
sourceloglikelihood(model::Model, chain::AbstractMCMC.AbstractChains)
Return an array of log likelihoods evaluated at each sample in an MCMC chain
.
Examples
julia> using MCMCChains, Distributions
julia> @model function demo_model(x)
s ~ InverseGamma(2, 3)
julia> # construct a chain of samples using MCMCChains
chain = Chains(rand(10, 2, 3), [:s, :m]);
julia> loglikelihood(demo_model([1., 2.]), chain);
sourceloglikelihood(model::Model, θ)
Return the log likelihood of variables θ
for the probabilistic model
.
See also logjoint
and logprior
.
Examples
julia> @model function demo(x)
m ~ Normal()
for i in eachindex(x)
x[i] ~ Normal(m, 1.0)
julia> # Truth.
logpdf(Normal(100.0, 1.0), 1.0)
-4901.418938533205
sourceDynamicPPL.logjoint
— Functionlogjoint(model::Model, varinfo::AbstractVarInfo)
Return the log joint probability of variables varinfo
for the probabilistic model
.
See logprior
and loglikelihood
.
sourcelogjoint(model::Model, chain::AbstractMCMC.AbstractChains)
Return an array of log joint probabilities evaluated at each sample in an MCMC chain
.
Examples
julia> using MCMCChains, Distributions
julia> @model function demo_model(x)
s ~ InverseGamma(2, 3)
julia> # construct a chain of samples using MCMCChains
chain = Chains(rand(10, 2, 3), [:s, :m]);
julia> logjoint(demo_model([1., 2.]), chain);
sourcelogjoint(model::Model, θ)
Return the log joint probability of variables θ
for the probabilistic model
.
See logprior
and loglikelihood
.
Examples
julia> @model function demo(x)
m ~ Normal()
for i in eachindex(x)
x[i] ~ Normal(m, 1.0)
julia> # Truth.
logpdf(Normal(100.0, 1.0), 1.0) + logpdf(Normal(), 100.0)
-9902.33787706641
sourceLogDensityProblems.jl interface
The LogDensityProblems.jl interface is also supported by simply wrapping a Model
in a DynamicPPL.LogDensityFunction
:
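A rough sketch of the wrapping (the single-variable demo_ldf model is made up; the single-argument constructor uses a default varinfo and context as described in the Fields list below):
using LogDensityProblems

@model demo_ldf() = m ~ Normal()

f = LogDensityFunction(demo_ldf())

LogDensityProblems.dimension(f)          # number of parameters, here 1
LogDensityProblems.logdensity(f, [0.0])  # should equal logpdf(Normal(), 0.0)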
DynamicPPL.LogDensityFunction
— TypeLogDensityFunction
A callable representing a log density function of a model
.
Fields
varinfo
: varinfo used for evaluation
model
: model used for evaluation
context
: context used for evaluation; if nothing
, leafcontext(model.context)
will be used when applicable
Examples
julia> using Distributions
julia> using DynamicPPL: LogDensityFunction, contextualize
f_prior = LogDensityFunction(contextualize(model, DynamicPPL.PriorContext()), VarInfo(model));
julia> LogDensityProblems.logdensity(f_prior, [0.0]) == logpdf(Normal(), 0.0)
true
sourceCondition and decondition
A Model
can be conditioned on a set of observations with AbstractPPL.condition
or its alias |
.
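In its simplest form this looks as follows (a sketch; demo_cond is a made-up model, and conditioned, documented further below, is used to inspect the result):
@model function demo_cond()
    m ~ Normal()
    x ~ Normal(m, 1)
end

model = demo_cond()

conditioned_model = model | (x = 1.0,)  # equivalently: condition(model; x = 1.0)
conditioned(conditioned_model)          # expected: (x = 1.0,) — `x` is now an observation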
Base.:|
— Methodmodel | (x = 1.0, ...)
Return a Model
which now treats variables on the right-hand side as observations.
See condition
for more information and examples.
sourceAbstractPPL.condition
— Functioncondition(model::Model; values...)
condition(model::Model, values::NamedTuple)
Return a Model
which now treats the variables in values
as observations.
See also: decondition
, conditioned
Limitations
This currently does not work with variables that are provided to the model as arguments, e.g. @model function demo(x) ... end
means that condition
will not affect the variable x
.
Therefore if one wants to make use of condition
and decondition
one should not be specifying any random variables as arguments.
This is done for the sake of backwards compatibility.
Examples
Simple univariate model
julia> using Distributions
julia> @model function demo()
# - `condition(model, Dict(@varname(m[2]) => 1.0))`
# (✓) `m[2]` is set to 1.0.
m = condition(model, @varname(m[2]) => 1.0)(); (m[1] ≠ 1.0 && m[2] == 1.0)
true
Nested models
condition
of course also supports the use of nested models through the use of to_submodel
.
julia> @model demo_inner() = m ~ Normal()
demo_inner (generic function with 2 methods)
julia> @model function demo_outer()
    # By default, `to_submodel` prefixes the variables using the left-hand side of `~`.
    inner ~ to_submodel(demo_inner())
    return inner
end
demo_outer (generic function with 2 methods)
julia> model = demo_outer();
julia> model() ≠ 1.0
true
julia> # To condition the variable inside `demo_inner` we need to refer to it as `inner.m`.
       conditioned_model = model | (var"inner.m" = 1.0, );
julia> conditioned_model()
1.0
julia> # However, it's not possible to condition `inner` directly.
       conditioned_model_fail = model | (inner = 1.0, );
julia> conditioned_model_fail()
ERROR: ArgumentError: `~` with a model on the right-hand side of an observe statement is not supported
[...]
And similarly when using Dict
:
julia> conditioned_model_dict = model | (@varname(var"inner.m") => 1.0);
julia> conditioned_model_dict()
1.0
sourcecondition([context::AbstractContext,] values::NamedTuple)
condition([context::AbstractContext]; values...)
Return ConditionContext
with values
and context
if values
is non-empty, otherwise return context
which is DefaultContext
by default.
See also: decondition
sourceDynamicPPL.conditioned
— Functionconditioned(model::Model)
Return the conditioned values in model
.
Examples
julia> using Distributions
julia> using DynamicPPL: conditioned, contextualize
1.0
julia> keys(VarInfo(cm)) # <= no variables are sampled
VarName[]
sourceconditioned(context::AbstractContext)
Return NamedTuple
of values that are conditioned on under context.
Note that this will recursively traverse the context stack and return a merged version of the condition values.
sourceSimilarly, one can specify with AbstractPPL.decondition
that certain, or all, random variables are not observed.
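A small sketch of typical usage (the one-variable model demo_decond is made up, and the sketch assumes decondition accepts the same variable specifications, e.g. @varname, as condition):
@model demo_decond() = x ~ Normal()

cond_model = demo_decond() | (x = 1.0,)

decondition(cond_model, @varname(x))  # only `x` becomes random again
decondition(cond_model)               # no arguments: drop all conditioned values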
AbstractPPL.decondition
— Functiondecondition(model::Model)
decondition(model::Model, variables...)
Return a Model
for which variables...
are not considered observations. If no variables
are provided, then all variables currently considered observations will no longer be.
This is essentially the inverse of condition
. This also means that it suffers from the same limitations.
Note that currently we only support variables
to take on explicit values provided to condition
.
Examples
julia> using Distributions
julia> @model function demo()
deconditioned_model_2 = deconditioned_model | (@varname(m[1]) => missing);
julia> m = deconditioned_model_2(); (m[1] ≠ 1.0 && m[2] == 2.0)
true
sourcedecondition(context::AbstractContext, syms...)
Return context
but with syms
no longer conditioned on.
Note that this recursively traverses contexts, deconditioning all along the way.
See also: condition
sourceFixing and unfixing
We can also fix a collection of variables in a Model
to certain values using fix
.
This might seem quite similar to the aforementioned condition
and its siblings, but they are indeed different operations:
condition
ed variables are considered to be observations, and are thus included in the computation logjoint
and loglikelihood
, but not in logprior
.fix
ed variables are considered to be constant, and are thus not included in any log-probability computations.
The differences are more clearly spelled out in the docstring of fix
below.
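As a compact sketch of that difference (the demo_diff model and the values are made up, Distributions is assumed to be loaded, and the identity mirrors the example at the end of the docstring below):
@model function demo_diff()
    m ~ Normal()
    x ~ Normal(m, 1)
end

model_cond = condition(demo_diff(); m = 1.0)  # `m` is an observation
model_fix  = fix(demo_diff(); m = 1.0)        # `m` is a constant

# The conditioned model still accumulates the log probability of `m`,
# the fixed model does not:
logjoint(model_cond, (x = 0.5,)) ≈ logjoint(model_fix, (x = 0.5,)) + logpdf(Normal(), 1.0)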
DynamicPPL.fix
— Functionfix(model::Model; values...)
fix(model::Model, values::NamedTuple)
Return a Model
which now treats the variables in values
as fixed.
Examples
Simple univariate model
julia> using Distributions
julia> @model function demo()
false
But you can do this if you use a Dict
as the underlying storage instead:
julia> # Alternative: `fix(model, Dict(@varname(m[2]) => 1.0))`
# (✓) `m[2]` is set to 1.0.
m = fix(model, @varname(m[2]) => 1.0)(); (m[1] ≠ 1.0 && m[2] == 1.0)
true
Nested models
fix
of course also supports the use of nested models through the use of to_submodel
, similar to condition
.
julia> @model demo_inner() = m ~ Normal()
demo_inner (generic function with 2 methods)
julia> @model function demo_outer()
    inner ~ to_submodel(demo_inner())
    return inner
end
demo_outer (generic function with 2 methods)
julia> model = demo_outer();
julia> model() ≠ 1.0
true
julia> fixed_model = fix(model, var"inner.m" = 1.0, );
julia> fixed_model()
1.0
However, unlike condition
, fix
can also be used to fix the return-value of the submodel:
julia> fixed_model = fix(model, inner = 2.0,);
julia> fixed_model()
2.0
And similarly when using Dict
:
julia> fixed_model_dict = fix(model, @varname(var"inner.m") => 1.0);

julia> fixed_model_dict()
1.0
julia> fixed_model_dict = fix(model, @varname(inner) => 2.0);
julia> fixed_model_dict()
2.0
Difference from condition
A very similar functionality is also provided by condition
which, not surprisingly, conditions variables instead of fixing them. The only difference between fixing and conditioning is as follows:
condition
ed variables are considered to be observations, and are thus included in the computation logjoint
and loglikelihood
, but not in logprior
.fix
ed variables are considered to be constant, and are thus not included in any log-probability computations.
julia> @model function demo()
m ~ Normal()
x ~ Normal(m, 1)
return (; m=m, x=x)
julia> # And the difference is the missing log-probability of `m`:
logjoint(model_fixed, (x=1.0,)) + logpdf(Normal(), 1.0) == logjoint(model_conditioned, (x=1.0,))
true
sourcefix([context::AbstractContext,] values::NamedTuple)
fix([context::AbstractContext]; values...)
Return FixedContext
with values
and context
if values
is non-empty, otherwise return context
which is DefaultContext
by default.
See also: unfix
sourceDynamicPPL.fixed
— Functionfixed(model::Model)
Return the fixed values in model
.
Examples
julia> using Distributions
julia> using DynamicPPL: fixed, contextualize
1.0
julia> keys(VarInfo(cm)) # <= no variables are sampled
VarName[]
sourcefixed(context::AbstractContext)
Return the values that are fixed under context
.
Note that this will recursively traverse the context stack and return a merged version of the fix values.
sourceThe difference between fix
and condition
is described in the docstring of fix
above.
Similarly, we can unfix
variables, i.e. return them to their original meaning:
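A small sketch (demo_unfix is a made-up one-variable model; as with decondition, variables can be released individually or all at once):
@model demo_unfix() = m ~ Normal()

fixed_model = fix(demo_unfix(); m = 1.0)

unfix(fixed_model, @varname(m))  # `m` is sampled again
unfix(fixed_model)               # no arguments: remove all fixed values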
DynamicPPL.unfix
— Functionunfix(model::Model)
unfix(model::Model, variables...)
Return a Model
for which variables...
are not considered fixed. If no variables
are provided, then all variables currently considered fixed will no longer be.
This is essentially the inverse of fix
. This also means that it suffers from the same limitations.
Note that currently we only support variables
to take on explicit values provided to fix
.
Examples
julia> using Distributions
julia> @model function demo()
unfixed_model_2 = fix(unfixed_model, @varname(m[1]) => missing);
julia> m = unfixed_model_2(); (m[1] ≠ 1.0 && m[2] == 2.0)
true
sourceunfix(context::AbstractContext, syms...)
Return context
but with syms
no longer fixed.
Note that this recursively traverses contexts, unfixing all along the way.
See also: fix
sourceModels within models
One can include models and call another model inside the model function with left ~ to_submodel(model)
.
DynamicPPL.to_submodel
— Functionto_submodel(model::Model[, auto_prefix::Bool])
Return a model wrapper indicating that it is a sampleable model over the return-values.
This is mainly meant to be used on the right-hand side of a ~
operator to indicate that the model can be sampled from but not necessarily evaluated for its log density.
Warning Note that some other operations that one typically associates with expressions of the form left ~ right
such as condition
, will also not work with to_submodel
.
Warning To avoid variable names clashing between models, it is recommended to leave the argument auto_prefix
equal to true
. If one does not use automatic prefixing, then it's recommended to use prefix(::Model, input)
explicitly.
Arguments
model::Model
: the model to wrap.auto_prefix::Bool
: whether to automatically prefix the variables in the model using the left-hand side of the ~
statement. Default: true
.
Examples
Simple example
julia> @model function demo1(x)
    x ~ Normal()
    return 1 + abs(x)
end;

julia> @model function demo2(x, y)
    a ~ to_submodel(demo1(x))
    return y ~ Uniform(0, a)
end;
When we sample from the model demo2(missing, 0.4)
random variable x
will be sampled:
julia> vi = VarInfo(demo2(missing, 0.4));

julia> @varname(var"a.x") in keys(vi)
true
The variable a
is not tracked. However, it will be assigned the return value of demo1
, and can be used in subsequent lines of the model, as shown above.
julia> @varname(a) in keys(vi)
false
We can check that the log joint probability of the model accumulated in vi
is correct:
julia> x = vi[@varname(var"a.x")];

julia> getlogp(vi) ≈ logpdf(Normal(), x) + logpdf(Uniform(0, 1 + abs(x)), 0.4)
true
Without automatic prefixing
As mentioned earlier, by default, the auto_prefix
argument specifies whether to automatically prefix the variables in the submodel. If auto_prefix=false
, then the variables in the submodel will not be prefixed.
julia> @model function demo1(x)
    x ~ Normal()
    return 1 + abs(x)
end;

julia> @model function demo2_no_prefix(x, z)
    a ~ to_submodel(demo1(x), false)
    return z ~ Uniform(-a, 1)
end;

julia> vi = VarInfo(demo2_no_prefix(missing, 0.4));

julia> @varname(x) in keys(vi) # here we just use `x` instead of `a.x`
true
However, not using prefixing is generally not recommended, as it can lead to variable name clashes unless one is careful. For example, re-using the same model twice within a model without prefixing will lead to variable name clashes. In that case one can prefix manually using prefix(::Model, input):
julia> @model function demo2(x, y, z)
    a ~ to_submodel(prefix(demo1(x), :sub1), false)
    b ~ to_submodel(prefix(demo1(y), :sub2), false)
    return z ~ Uniform(-a, b)
end;

julia> vi = VarInfo(demo2(missing, missing, 0.4));

julia> @varname(var"sub1.x") in keys(vi)
true

julia> @varname(var"sub2.x") in keys(vi)
true
Variables a
and b
are not tracked, but are assigned the return values of the respective calls to demo1
:
julia> @varname(a) in keys(vi)
false

julia> @varname(b) in keys(vi)
false
We can check that the log joint probability of the model accumulated in vi
is correct:
julia> sub1_x = vi[@varname(var"sub1.x")];

julia> sub2_x = vi[@varname(var"sub2.x")];

julia> logprior = logpdf(Normal(), sub1_x) + logpdf(Normal(), sub2_x);

julia> loglikelihood = logpdf(Uniform(-1 - abs(sub1_x), 1 + abs(sub2_x)), 0.4);

julia> getlogp(vi) ≈ logprior + loglikelihood
true
Usage as likelihood is illegal
Note that it is illegal to use a to_submodel
model as a likelihood in another model:
julia> @model inner() = x ~ Normal()
inner (generic function with 2 methods)

julia> @model illegal_likelihood() = a ~ to_submodel(inner())
illegal_likelihood (generic function with 2 methods)

julia> model = illegal_likelihood() | (a = 1.0,);

julia> model()
ERROR: ArgumentError: `~` with a model on the right-hand side of an observe statement is not supported
[...]
sourceNote that a to_submodel
is only sampleable; one cannot compute logpdf
for its realizations.
In the past, one would instead embed sub-models using @submodel
, which has been deprecated since the introduction of to_submodel(model)
DynamicPPL.@submodel
— Macro@submodel model
+@submodel ... = model
Run a Turing model
nested inside of a Turing model.
Warning This is deprecated and will be removed in a future release. Use left ~ to_submodel(model)
instead (see to_submodel
).
Examples
julia> @model function demo1(x)
+ x ~ Normal()
+ return 1 + abs(x)
+ end;
+
+julia> @model function demo2(x, y)
+ @submodel a = demo1(x)
+ return y ~ Uniform(0, a)
+ end;
When we sample from the model demo2(missing, 0.4)
random variable x
will be sampled:
julia> vi = VarInfo(demo2(missing, 0.4));
+┌ Warning: `@submodel model` and `@submodel prefix=... model` are deprecated; see `to_submodel` for the up-to-date syntax.
+│ caller = ip:0x0
+└ @ Core :-1
+
+julia> @varname(x) in keys(vi)
+true
Variable a
is not tracked since it can be computed from the random variable x
that was tracked when running demo1
:
julia> @varname(a) in keys(vi)
+false
We can check that the log joint probability of the model accumulated in vi
is correct:
julia> x = vi[@varname(x)];
+
+julia> getlogp(vi) ≈ logpdf(Normal(), x) + logpdf(Uniform(0, 1 + abs(x)), 0.4)
+true
source@submodel prefix=... model
+@submodel prefix=... ... = model
Run a Turing model
nested inside of a Turing model and add "prefix
." as a prefix to all random variables inside of the model
.
Valid expressions for prefix=...
are:
prefix=false
: no prefix is used.prefix=true
: attempt to automatically determine the prefix from the left-hand side ... = model
by first converting into a VarName
, and then calling Symbol
on this.prefix=expression
: results in the prefix Symbol(expression)
.
The prefix makes it possible to run the same Turing model multiple times while keeping track of all random variables correctly.
Warning This is deprecated and will be removed in a future release. Use left ~ to_submodel(model)
instead (see to_submodel(model)
).
Examples
Example models
julia> @model function demo1(x)
+ x ~ Normal()
+ return 1 + abs(x)
+ end;
+
+julia> @model function demo2(x, y, z)
+ @submodel prefix="sub1" a = demo1(x)
+ @submodel prefix="sub2" b = demo1(y)
+ return z ~ Uniform(-a, b)
+ end;
When we sample from the model demo2(missing, missing, 0.4)
random variables sub1.x
and sub2.x
will be sampled:
julia> vi = VarInfo(demo2(missing, missing, 0.4));
+┌ Warning: `@submodel model` and `@submodel prefix=... model` are deprecated; see `to_submodel` for the up-to-date syntax.
+│ caller = ip:0x0
+└ @ Core :-1
+
+julia> @varname(var"sub1.x") in keys(vi)
+true
+
+julia> @varname(var"sub2.x") in keys(vi)
+true
Variables a
and b
are not tracked since they can be computed from the random variables sub1.x
and sub2.x
that were tracked when running demo1
:
julia> @varname(a) in keys(vi)
+false
+
+julia> @varname(b) in keys(vi)
+false
We can check that the log joint probability of the model accumulated in vi
is correct:
julia> sub1_x = vi[@varname(var"sub1.x")];
+
+julia> sub2_x = vi[@varname(var"sub2.x")];
+
+julia> logprior = logpdf(Normal(), sub1_x) + logpdf(Normal(), sub2_x);
+
+julia> loglikelihood = logpdf(Uniform(-1 - abs(sub1_x), 1 + abs(sub2_x)), 0.4);
+
+julia> getlogp(vi) ≈ logprior + loglikelihood
+true
Different ways of setting the prefix
julia> @model inner() = x ~ Normal()
+inner (generic function with 2 methods)
+
+julia> # When `prefix` is unspecified, no prefix is used.
+ @model submodel_noprefix() = @submodel a = inner()
+submodel_noprefix (generic function with 2 methods)
+
+julia> @varname(x) in keys(VarInfo(submodel_noprefix()))
+┌ Warning: `@submodel model` and `@submodel prefix=... model` are deprecated; see `to_submodel` for the up-to-date syntax.
+│ caller = ip:0x0
+└ @ Core :-1
+true
+
+julia> # Explicitely don't use any prefix.
+ @model submodel_prefix_false() = @submodel prefix=false a = inner()
+submodel_prefix_false (generic function with 2 methods)
+
+julia> @varname(x) in keys(VarInfo(submodel_prefix_false()))
+┌ Warning: `@submodel model` and `@submodel prefix=... model` are deprecated; see `to_submodel` for the up-to-date syntax.
+│ caller = ip:0x0
+└ @ Core :-1
+true
+
+julia> # Automatically determined from `a`.
+ @model submodel_prefix_true() = @submodel prefix=true a = inner()
+submodel_prefix_true (generic function with 2 methods)
+
+julia> @varname(var"a.x") in keys(VarInfo(submodel_prefix_true()))
+┌ Warning: `@submodel model` and `@submodel prefix=... model` are deprecated; see `to_submodel` for the up-to-date syntax.
+│ caller = ip:0x0
+└ @ Core :-1
+true
+
+julia> # Using a static string.
+ @model submodel_prefix_string() = @submodel prefix="my prefix" a = inner()
+submodel_prefix_string (generic function with 2 methods)
+
+julia> @varname(var"my prefix.x") in keys(VarInfo(submodel_prefix_string()))
+┌ Warning: `@submodel model` and `@submodel prefix=... model` are deprecated; see `to_submodel` for the up-to-date syntax.
+│ caller = ip:0x0
+└ @ Core :-1
+true
+
+julia> # Using string interpolation.
+ @model submodel_prefix_interpolation() = @submodel prefix="$(nameof(inner()))" a = inner()
+submodel_prefix_interpolation (generic function with 2 methods)
+
+julia> @varname(var"inner.x") in keys(VarInfo(submodel_prefix_interpolation()))
+┌ Warning: `@submodel model` and `@submodel prefix=... model` are deprecated; see `to_submodel` for the up-to-date syntax.
+│ caller = ip:0x0
+└ @ Core :-1
+true
+
+julia> # Or using some arbitrary expression.
+ @model submodel_prefix_expr() = @submodel prefix=1 + 2 a = inner()
+submodel_prefix_expr (generic function with 2 methods)
+
+julia> @varname(var"3.x") in keys(VarInfo(submodel_prefix_expr()))
+┌ Warning: `@submodel model` and `@submodel prefix=... model` are deprecated; see `to_submodel` for the up-to-date syntax.
+│ caller = ip:0x0
+└ @ Core :-1
+true
+
+julia> # (×) Automatic prefixing without a left-hand side expression does not work!
+ @model submodel_prefix_error() = @submodel prefix=true inner()
+ERROR: LoadError: cannot automatically prefix with no left-hand side
+[...]
Notes
- The choice
prefix=expression
means that the prefixing will incur a runtime cost. This is also the case for prefix=true
, depending on whether the expression on the the right-hand side of ... = model
requires runtime-information or not, e.g. x = model
will result in the static prefix x
, while x[i] = model
will be resolved at runtime.
sourceIn the context of including models within models, it's also useful to prefix the variables in sub-models to avoid variable names clashing:
DynamicPPL.prefix
— Functionprefix(model::Model, x)
Return model
but with all random variables prefixed by x
.
If x
is known at compile-time, use Val{x}()
to avoid runtime overheads for prefixing.
Examples
julia> using DynamicPPL: prefix

julia> @model demo() = x ~ Dirac(1)
demo (generic function with 2 methods)

julia> rand(prefix(demo(), :my_prefix))
(var"my_prefix.x" = 1,)

julia> # One can also use `Val` to avoid runtime overheads.
       rand(prefix(demo(), Val(:my_prefix)))
(var"my_prefix.x" = 1,)
sourceUnder the hood, to_submodel
makes use of the following method to indicate that the model it's wrapping is a model over its return-values rather than something else
DynamicPPL.returned
— Methodreturned(model)
Return a model
wrapper indicating that it is a model over its return-values.
sourceUtilities
It is possible to manually increase (or decrease) the accumulated log density from within a model function.
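For instance, one might add a hand-computed likelihood term directly (a sketch; demo_addlogprob and the Normal likelihood are made up, and the docstring below spells out the full behaviour, including the interaction with logprior and loglikelihood):
@model function demo_addlogprob(x)
    μ ~ Normal()
    # Add a custom likelihood contribution instead of writing `x ~ ...`.
    @addlogprob! loglikelihood(Normal(μ, 1), x)
end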
DynamicPPL.@addlogprob!
— Macro@addlogprob!(ex)
Add the result of the evaluation of ex
to the joint log probability.
Examples
This macro allows you to include arbitrary terms in the likelihood
julia> myloglikelihood(x, μ) = loglikelihood(Normal(μ, 1), x);
julia> @model function demo(x)
μ ~ Normal()
true
julia> loglikelihood(demo(x), (μ=0.2,)) ≈ myloglikelihood(x, 0.2)
true
sourceReturn values of the model function for a collection of samples can be obtained with generated_quantities
.
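For a single set of parameter values this looks roughly as follows (demo_gq and its return value are made up for illustration; the docstring below also covers the values/keys form):
@model function demo_gq(x)
    m ~ Normal()
    x ~ Normal(m, 1)
    return (m_plus_one = m + 1,)
end

model = demo_gq(1.0)

generated_quantities(model, (m = 0.3,))  # expected: (m_plus_one = 1.3,)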
DynamicPPL.generated_quantities
— Functiongenerated_quantities(model::Model, parameters::NamedTuple)
generated_quantities(model::Model, values, keys)
Execute model
with variables keys
set to values
and return the values returned by the model
.
If a NamedTuple
is given, keys=keys(parameters)
and values=values(parameters)
.
Example
julia> using DynamicPPL, Distributions