API
Part of the API of DynamicPPL is defined in the more lightweight interface package AbstractPPL.jl and reexported here.
Model
Macros
A core component of DynamicPPL is the @model macro. It can be used to define probabilistic models in an intuitive way by specifying random variables and their distributions with ~ statements. These statements are rewritten by @model as calls to internal functions that sample the variables and compute their log densities.
DynamicPPL.@model — Macro
@model(expr[, warn = false])
Macro to specify a probabilistic model.
If warn is true, a warning is displayed if internal variable names are used in the model definition.
Examples
Model definition:
@model function model(x, y = 42)
+ ...
+end
To generate a Model, call model(xvalue) or model(xvalue, yvalue).
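For illustration, here is a minimal, self-contained sketch of defining and instantiating a model (the model name gdemo and its body are ours, not part of DynamicPPL):
julia> using DynamicPPL, Distributions

julia> @model function gdemo(x, y = 42)
           m ~ Normal()
           x ~ Normal(m, 1)
           y ~ Normal(m, 1)
       end;

julia> model = gdemo(1.0);        # `y` falls back to its default of 42

julia> model isa Model
true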
One can nest models and call another model inside the model function with @submodel.
DynamicPPL.@submodel — Macro
@submodel model
+@submodel ... = model
Run a Turing model nested inside of a Turing model.
Examples
julia> @model function demo1(x)
+ x ~ Normal()
+ return 1 + abs(x)
+ end;
+
+julia> @model function demo2(x, y)
+ @submodel a = demo1(x)
+ return y ~ Uniform(0, a)
+ end;
When we sample from the model demo2(missing, 0.4), the random variable x will be sampled:
julia> vi = VarInfo(demo2(missing, 0.4));
+
+julia> @varname(x) in keys(vi)
+true
Variable a is not tracked since it can be computed from the random variable x that was tracked when running demo1:
julia> @varname(a) in keys(vi)
+false
We can check that the log joint probability of the model accumulated in vi is correct:
julia> x = vi[@varname(x)];
+
+julia> getlogp(vi) ≈ logpdf(Normal(), x) + logpdf(Uniform(0, 1 + abs(x)), 0.4)
+true
@submodel prefix=... model
+@submodel prefix=... ... = model
Run a Turing model nested inside of a Turing model and add "prefix." as a prefix to all random variables inside of the model.
Valid expressions for prefix=... are:
- prefix=false: no prefix is used.
- prefix=true: attempt to automatically determine the prefix from the left-hand side ... = model by first converting it into a VarName, and then calling Symbol on this.
- prefix=expression: results in the prefix Symbol(expression).
The prefix makes it possible to run the same Turing model multiple times while keeping track of all random variables correctly.
Examples
Example models
julia> @model function demo1(x)
+ x ~ Normal()
+ return 1 + abs(x)
+ end;
+
+julia> @model function demo2(x, y, z)
+ @submodel prefix="sub1" a = demo1(x)
+ @submodel prefix="sub2" b = demo1(y)
+ return z ~ Uniform(-a, b)
+ end;
When we sample from the model demo2(missing, missing, 0.4), the random variables sub1.x and sub2.x will be sampled:
julia> vi = VarInfo(demo2(missing, missing, 0.4));
+
+julia> @varname(var"sub1.x") in keys(vi)
+true
+
+julia> @varname(var"sub2.x") in keys(vi)
+true
Variables a and b are not tracked since they can be computed from the random variables sub1.x and sub2.x that were tracked when running demo1:
julia> @varname(a) in keys(vi)
+false
+
+julia> @varname(b) in keys(vi)
+false
We can check that the log joint probability of the model accumulated in vi is correct:
julia> sub1_x = vi[@varname(var"sub1.x")];
+
+julia> sub2_x = vi[@varname(var"sub2.x")];
+
+julia> logprior = logpdf(Normal(), sub1_x) + logpdf(Normal(), sub2_x);
+
+julia> loglikelihood = logpdf(Uniform(-1 - abs(sub1_x), 1 + abs(sub2_x)), 0.4);
+
+julia> getlogp(vi) ≈ logprior + loglikelihood
+true
Different ways of setting the prefix
julia> @model inner() = x ~ Normal()
+inner (generic function with 2 methods)
+
+julia> # When `prefix` is unspecified, no prefix is used.
+ @model submodel_noprefix() = @submodel a = inner()
+submodel_noprefix (generic function with 2 methods)
+
+julia> @varname(x) in keys(VarInfo(submodel_noprefix()))
+true
+
+julia> # Explicitly don't use any prefix.
+ @model submodel_prefix_false() = @submodel prefix=false a = inner()
+submodel_prefix_false (generic function with 2 methods)
+
+julia> @varname(x) in keys(VarInfo(submodel_prefix_false()))
+true
+
+julia> # Automatically determined from `a`.
+ @model submodel_prefix_true() = @submodel prefix=true a = inner()
+submodel_prefix_true (generic function with 2 methods)
+
+julia> @varname(var"a.x") in keys(VarInfo(submodel_prefix_true()))
+true
+
+julia> # Using a static string.
+ @model submodel_prefix_string() = @submodel prefix="my prefix" a = inner()
+submodel_prefix_string (generic function with 2 methods)
+
+julia> @varname(var"my prefix.x") in keys(VarInfo(submodel_prefix_string()))
+true
+
+julia> # Using string interpolation.
+ @model submodel_prefix_interpolation() = @submodel prefix="$(nameof(inner()))" a = inner()
+submodel_prefix_interpolation (generic function with 2 methods)
+
+julia> @varname(var"inner.x") in keys(VarInfo(submodel_prefix_interpolation()))
+true
+
+julia> # Or using some arbitrary expression.
+ @model submodel_prefix_expr() = @submodel prefix=1 + 2 a = inner()
+submodel_prefix_expr (generic function with 2 methods)
+
+julia> @varname(var"3.x") in keys(VarInfo(submodel_prefix_expr()))
+true
+
+julia> # (×) Automatic prefixing without a left-hand side expression does not work!
+ @model submodel_prefix_error() = @submodel prefix=true inner()
+ERROR: LoadError: cannot automatically prefix with no left-hand side
+[...]
Notes
- The choice prefix=expression means that the prefixing will incur a runtime cost. This is also the case for prefix=true, depending on whether the expression on the right-hand side of ... = model requires runtime information or not; e.g. x = model will result in the static prefix x, while x[i] = model will be resolved at runtime.
Type
A Model can be created by calling the model function, as defined by @model.
DynamicPPL.Model — Type
struct Model{F,argnames,defaultnames,missings,Targs,Tdefaults}
+ f::F
+ args::NamedTuple{argnames,Targs}
+ defaults::NamedTuple{defaultnames,Tdefaults}
+end
A Model struct with model evaluation function of type F, arguments with names argnames and types Targs, default arguments with names defaultnames and types Tdefaults, and missing arguments missings.
Here argnames, defaultnames, and missings are tuples of symbols, e.g. (:a, :b).
An argument with a type of Missing will be in missings by default. However, in non-traditional use-cases missings can be defined differently. All variables in missings are treated as random variables rather than observations.
The default arguments are used internally when constructing instances of the same model with different arguments.
Examples
julia> Model(f, (x = 1.0, y = 2.0))
+Model{typeof(f),(:x, :y),(),(),Tuple{Float64,Float64},Tuple{}}(f, (x = 1.0, y = 2.0), NamedTuple())
+
+julia> Model(f, (x = 1.0, y = 2.0), (x = 42,))
+Model{typeof(f),(:x, :y),(:x,),(),Tuple{Float64,Float64},Tuple{Int64}}(f, (x = 1.0, y = 2.0), (x = 42,))
+
+julia> Model{(:y,)}(f, (x = 1.0, y = 2.0), (x = 42,)) # with special definition of missings
+Model{typeof(f),(:x, :y),(:x,),(:y,),Tuple{Float64,Float64},Tuple{Int64}}(f, (x = 1.0, y = 2.0), (x = 42,))
Models are callable structs.
DynamicPPL.Model — Method
(model::Model)([rng, varinfo, sampler, context])
Sample from the model using the sampler with random number generator rng and the context, and store the sample and log joint probability in varinfo.
The method resets the log joint probability of varinfo and increases the evaluation number of sampler.
Basic properties of a model can be accessed with getargnames, getmissings, and nameof.
Base.nameof — Method
nameof(model::Model)
Get the name of the model as a Symbol.
DynamicPPL.getargnames — Function
getargnames(model::Model)
Get a tuple of the argument names of the model.
DynamicPPL.getmissings — Function
getmissings(model::Model)
Get a tuple of the names of the missing arguments of the model.
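A small sketch tying these accessors together (the model demo_props is ours for illustration):
julia> @model function demo_props(x)
           m ~ Normal()
           x ~ Normal(m, 1)
       end;

julia> model = demo_props(missing);

julia> nameof(model)
:demo_props

julia> getargnames(model)
(:x,)

julia> getmissings(model)   # `x` is `missing`, so it is treated as random
(:x,)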
Evaluation
With rand one can draw samples from the prior distribution of a Model.
Base.rand — Function
rand([rng=Random.default_rng()], [T=NamedTuple], model::Model)
Generate a sample of type T from the prior distribution of the model.
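For example, drawing a NamedTuple from the prior (a minimal sketch with a toy model of ours):
julia> using Distributions

julia> @model function demo()
           m ~ Normal()
           x ~ Normal(m, 1)
       end;

julia> nt = rand(demo());   # one joint draw from the prior

julia> keys(nt)
(:m, :x)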
One can also evaluate the log prior, log likelihood, and log joint probability.
DynamicPPL.logprior — Function
logprior(model::Model, varinfo::AbstractVarInfo)
Return the log prior probability of variables varinfo for the probabilistic model.
See also logjoint and loglikelihood.
logprior(model::Model, chain::AbstractMCMC.AbstractChains)
Return an array of log prior probabilities evaluated at each sample in an MCMC chain.
Examples
julia> using MCMCChains, Distributions
+
+julia> @model function demo_model(x)
+ s ~ InverseGamma(2, 3)
+ m ~ Normal(0, sqrt(s))
+ for i in eachindex(x)
+ x[i] ~ Normal(m, sqrt(s))
+ end
+ end;
+
+julia> # construct a chain of samples using MCMCChains
+ chain = Chains(rand(10, 2, 3), [:s, :m]);
+
+julia> logprior(demo_model([1., 2.]), chain);
logprior(model::Model, θ)
Return the log prior probability of variables θ for the probabilistic model.
See also logjoint and loglikelihood.
Examples
julia> @model function demo(x)
+ m ~ Normal()
+ for i in eachindex(x)
+ x[i] ~ Normal(m, 1.0)
+ end
+ end
+demo (generic function with 2 methods)
+
+julia> # Using a `NamedTuple`.
+ logprior(demo([1.0]), (m = 100.0, ))
+-5000.918938533205
+
+julia> # Using a `OrderedDict`.
+ logprior(demo([1.0]), OrderedDict(@varname(m) => 100.0))
+-5000.918938533205
+
+julia> # Truth.
+ logpdf(Normal(), 100.0)
+-5000.918938533205
StatsAPI.loglikelihood — Function
loglikelihood(model::Model, varinfo::AbstractVarInfo)
Return the log likelihood of variables varinfo for the probabilistic model.
loglikelihood(model::Model, chain::AbstractMCMC.AbstractChains)
Return an array of log likelihoods evaluated at each sample in an MCMC chain.
Examples
julia> using MCMCChains, Distributions
+
+julia> @model function demo_model(x)
+ s ~ InverseGamma(2, 3)
+ m ~ Normal(0, sqrt(s))
+ for i in eachindex(x)
+ x[i] ~ Normal(m, sqrt(s))
+ end
+ end;
+
+julia> # construct a chain of samples using MCMCChains
+ chain = Chains(rand(10, 2, 3), [:s, :m]);
+
+julia> loglikelihood(demo_model([1., 2.]), chain);
loglikelihood(model::Model, θ)
Return the log likelihood of variables θ for the probabilistic model.
See also logjoint and logprior.
Examples
julia> @model function demo(x)
+ m ~ Normal()
+ for i in eachindex(x)
+ x[i] ~ Normal(m, 1.0)
+ end
+ end
+demo (generic function with 2 methods)
+
+julia> # Using a `NamedTuple`.
+ loglikelihood(demo([1.0]), (m = 100.0, ))
+-4901.418938533205
+
+julia> # Using a `OrderedDict`.
+ loglikelihood(demo([1.0]), OrderedDict(@varname(m) => 100.0))
+-4901.418938533205
+
+julia> # Truth.
+ logpdf(Normal(100.0, 1.0), 1.0)
+-4901.418938533205
DynamicPPL.logjoint — Function
logjoint(model::Model, varinfo::AbstractVarInfo)
Return the log joint probability of variables varinfo for the probabilistic model.
See logprior and loglikelihood.
logjoint(model::Model, chain::AbstractMCMC.AbstractChains)
Return an array of log joint probabilities evaluated at each sample in an MCMC chain.
Examples
julia> using MCMCChains, Distributions
+
+julia> @model function demo_model(x)
+ s ~ InverseGamma(2, 3)
+ m ~ Normal(0, sqrt(s))
+ for i in eachindex(x)
+ x[i] ~ Normal(m, sqrt(s))
+ end
+ end;
+
+julia> # construct a chain of samples using MCMCChains
+ chain = Chains(rand(10, 2, 3), [:s, :m]);
+
+julia> logjoint(demo_model([1., 2.]), chain);
logjoint(model::Model, θ)
Return the log joint probability of variables θ for the probabilistic model.
See logprior and loglikelihood.
Examples
julia> @model function demo(x)
+ m ~ Normal()
+ for i in eachindex(x)
+ x[i] ~ Normal(m, 1.0)
+ end
+ end
+demo (generic function with 2 methods)
+
+julia> # Using a `NamedTuple`.
+ logjoint(demo([1.0]), (m = 100.0, ))
+-9902.33787706641
+
+julia> # Using a `OrderedDict`.
+ logjoint(demo([1.0]), OrderedDict(@varname(m) => 100.0))
+-9902.33787706641
+
+julia> # Truth.
+ logpdf(Normal(100.0, 1.0), 1.0) + logpdf(Normal(), 100.0)
+-9902.33787706641
LogDensityProblems.jl interface
The LogDensityProblems.jl interface is also supported by simply wrapping a Model in a DynamicPPL.LogDensityFunction:
DynamicPPL.LogDensityFunction — Type
LogDensityFunction
A callable representing a log density function of a model.
Fields
- varinfo: varinfo used for evaluation
- model: model used for evaluation
- context: context used for evaluation; if nothing, leafcontext(model.context) will be used when applicable
Examples
julia> using Distributions
+
+julia> using DynamicPPL: LogDensityFunction, contextualize
+
+julia> @model function demo(x)
+ m ~ Normal()
+ x ~ Normal(m, 1)
+ end
+demo (generic function with 2 methods)
+
+julia> model = demo(1.0);
+
+julia> f = LogDensityFunction(model);
+
+julia> # It implements the interface of LogDensityProblems.jl.
+ using LogDensityProblems
+
+julia> LogDensityProblems.logdensity(f, [0.0])
+-2.3378770664093453
+
+julia> LogDensityProblems.dimension(f)
+1
+
+julia> # By default it uses `VarInfo` under the hood, but this is not necessary.
+ f = LogDensityFunction(model, SimpleVarInfo(model));
+
+julia> LogDensityProblems.logdensity(f, [0.0])
+-2.3378770664093453
+
+julia> # This also respects the context in `model`.
+ f_prior = LogDensityFunction(contextualize(model, DynamicPPL.PriorContext()), VarInfo(model));
+
+julia> LogDensityProblems.logdensity(f_prior, [0.0]) == logpdf(Normal(), 0.0)
+true
Condition and decondition
A Model can be conditioned on a set of observations with AbstractPPL.condition or its alias |.
Base.:| — Method
model | (x = 1.0, ...)
Return a Model which now treats variables on the right-hand side as observations.
See condition for more information and examples.
AbstractPPL.condition — Function
condition(model::Model; values...)
+condition(model::Model, values::NamedTuple)
Return a Model which now treats the variables in values as observations.
See also: decondition, conditioned
Limitations
This currently does not work with variables that are provided to the model as arguments; e.g. @model function demo(x) ... end means that condition will not affect the variable x.
Therefore, if one wants to make use of condition and decondition, one should not specify any random variables as arguments.
This is done for the sake of backwards compatibility.
Examples
Simple univariate model
julia> using Distributions
+
+julia> @model function demo()
+ m ~ Normal()
+ x ~ Normal(m, 1)
+ return (; m=m, x=x)
+ end
+demo (generic function with 2 methods)
+
+julia> model = demo();
+
+julia> m, x = model(); (m ≠ 1.0 && x ≠ 100.0)
+true
+
+julia> # Create a new instance which treats `x` as observed
+ # with value `100.0`, and similarly for `m=1.0`.
+ conditioned_model = condition(model, x=100.0, m=1.0);
+
+julia> m, x = conditioned_model(); (m == 1.0 && x == 100.0)
+true
+
+julia> # Let's only condition on `x = 100.0`.
+ conditioned_model = condition(model, x = 100.0);
+
+julia> m, x = conditioned_model(); (m ≠ 1.0 && x == 100.0)
+true
+
+julia> # We can also use the nicer `|` syntax.
+ conditioned_model = model | (x = 100.0, );
+
+julia> m, x = conditioned_model(); (m ≠ 1.0 && x == 100.0)
+true
The above uses a NamedTuple to hold the conditioning variables, which allows us to perform some additional optimizations; in many cases, the above has zero runtime overhead. But we can also use a Dict, which offers more flexibility in the conditioning (see examples further below) but generally has worse performance than the NamedTuple approach:
julia> conditioned_model_dict = condition(model, Dict(@varname(x) => 100.0));
+
+julia> m, x = conditioned_model_dict(); (m ≠ 1.0 && x == 100.0)
+true
+
+julia> # There's also an option using `|` by letting the right-hand side be a tuple
+ # with elements of type `Pair{<:VarName}`, i.e. `vn => value` with `vn isa VarName`.
+ conditioned_model_dict = model | (@varname(x) => 100.0, );
+
+julia> m, x = conditioned_model_dict(); (m ≠ 1.0 && x == 100.0)
+true
Condition only a part of a multivariate variable
Not only can we condition on multivariate random variables, but we can also use the standard mechanism of setting something to missing in the call to condition to condition on only a part of the variable.
julia> @model function demo_mv(::Type{TV}=Float64) where {TV}
+ m = Vector{TV}(undef, 2)
+ m[1] ~ Normal()
+ m[2] ~ Normal()
+ return m
+ end
+demo_mv (generic function with 4 methods)
+
+julia> model = demo_mv();
+
+julia> conditioned_model = condition(model, m = [missing, 1.0]);
+
+julia> # (✓) `m[1]` sampled while `m[2]` is fixed
+ m = conditioned_model(); (m[1] ≠ 1.0 && m[2] == 1.0)
+true
Intuitively one might also expect to be able to write model | (m[1] = 1.0, ). Unfortunately this is not supported, as it has the potential of increasing compilation times without offering any benefit with respect to runtime:
julia> # (×) `m[2]` is not set to 1.0.
+ m = condition(model, var"m[2]" = 1.0)(); m[2] == 1.0
+false
But you can do this if you use a Dict as the underlying storage instead:
julia> # Alternatives:
+ # - `model | (@varname(m[2]) => 1.0,)`
+ # - `condition(model, Dict(@varname(m[2]) => 1.0))`
+ # (✓) `m[2]` is set to 1.0.
+ m = condition(model, @varname(m[2]) => 1.0)(); (m[1] ≠ 1.0 && m[2] == 1.0)
+true
Nested models
condition of course also supports nested models through @submodel.
julia> @model demo_inner() = m ~ Normal()
+demo_inner (generic function with 2 methods)
+
+julia> @model function demo_outer()
+ @submodel m = demo_inner()
+ return m
+ end
+demo_outer (generic function with 2 methods)
+
+julia> model = demo_outer();
+
+julia> model() ≠ 1.0
+true
+
+julia> conditioned_model = model | (m = 1.0, );
+
+julia> conditioned_model()
+1.0
But one needs to be careful when prefixing variables in the nested models:
julia> @model function demo_outer_prefix()
+ @submodel prefix="inner" m = demo_inner()
+ return m
+ end
+demo_outer_prefix (generic function with 2 methods)
+
+julia> # (×) This doesn't work now!
+ conditioned_model = demo_outer_prefix() | (m = 1.0, );
+
+julia> conditioned_model() == 1.0
+false
+
+julia> # (✓) `m` in `demo_inner` is referred to as `inner.m` internally, so we do:
+ conditioned_model = demo_outer_prefix() | (var"inner.m" = 1.0, );
+
+julia> conditioned_model()
+1.0
+
+julia> # Note that the above `var"..."` is just standard Julia syntax:
+ keys((var"inner.m" = 1.0, ))
+(Symbol("inner.m"),)
And similarly when using Dict:
julia> conditioned_model_dict = demo_outer_prefix() | (@varname(var"inner.m") => 1.0);
+
+julia> conditioned_model_dict()
+1.0
The difference is maybe more obvious once we look at how these differ in their trace/VarInfo:
julia> keys(VarInfo(demo_outer()))
+1-element Vector{VarName{:m, typeof(identity)}}:
+ m
+
+julia> keys(VarInfo(demo_outer_prefix()))
+1-element Vector{VarName{Symbol("inner.m"), typeof(identity)}}:
+ inner.m
From this we can tell the correct way to condition m within demo_inner in the two different models.
condition([context::AbstractContext,] values::NamedTuple)
+condition([context::AbstractContext]; values...)
Return ConditionContext with values and context if values is non-empty, otherwise return context, which is DefaultContext by default.
See also: decondition
DynamicPPL.conditioned — Function
conditioned(model::Model)
Return the conditioned values in model.
Examples
julia> using Distributions
+
+julia> using DynamicPPL: conditioned, contextualize
+
+julia> @model function demo()
+ m ~ Normal()
+ x ~ Normal(m, 1)
+ end
+demo (generic function with 2 methods)
+
+julia> m = demo();
+
+julia> # Returns all the variables we have conditioned on + their values.
+ conditioned(condition(m, x=100.0, m=1.0))
+(x = 100.0, m = 1.0)
+
+julia> # Nested ones also work (note that `PrefixContext` does nothing to the result).
+ cm = condition(contextualize(m, PrefixContext{:a}(condition(m=1.0))), x=100.0);
+
+julia> conditioned(cm)
+(x = 100.0, m = 1.0)
+
+julia> # Since we conditioned on `m`, not `a.m` as it will appear after prefixed,
+ # `a.m` is treated as a random variable.
+ keys(VarInfo(cm))
+1-element Vector{VarName{Symbol("a.m"), typeof(identity)}}:
+ a.m
+
+julia> # If we instead condition on `a.m`, `m` in the model will be considered an observation.
+ cm = condition(contextualize(m, PrefixContext{:a}(condition(var"a.m"=1.0))), x=100.0);
+
+julia> conditioned(cm).x
+100.0
+
+julia> conditioned(cm).var"a.m"
+1.0
+
+julia> keys(VarInfo(cm)) # <= no variables are sampled
+VarName[]
conditioned(context::AbstractContext)
Return a NamedTuple of values that are conditioned on under context.
Note that this will recursively traverse the context stack and return a merged version of the conditioned values.
Similarly, one can specify with AbstractPPL.decondition that certain, or all, random variables are not observed.
AbstractPPL.decondition — Function
decondition(model::Model)
+decondition(model::Model, variables...)
Return a Model for which variables... are not considered observations. If no variables are provided, then all variables currently considered observations will no longer be.
This is essentially the inverse of condition. This also means that it suffers from the same limitations.
Note that currently we only support variables to take on explicit values provided to condition.
Examples
julia> using Distributions
+
+julia> @model function demo()
+ m ~ Normal()
+ x ~ Normal(m, 1)
+ return (; m=m, x=x)
+ end
+demo (generic function with 2 methods)
+
+julia> conditioned_model = condition(demo(), m = 1.0, x = 10.0);
+
+julia> conditioned_model()
+(m = 1.0, x = 10.0)
+
+julia> # By specifying the `VarName` to `decondition`.
+ model = decondition(conditioned_model, @varname(m));
+
+julia> (m, x) = model(); (m ≠ 1.0 && x == 10.0)
+true
+
+julia> # When `NamedTuple` is used as the underlying, you can also provide
+ # the symbol directly (though the `@varname` approach is preferable if
+ # if the variable is known at compile-time).
+ model = decondition(conditioned_model, :m);
+
+julia> (m, x) = model(); (m ≠ 1.0 && x == 10.0)
+true
+
+julia> # `decondition` multiple at once:
+ (m, x) = decondition(model, :m, :x)(); (m ≠ 1.0 && x ≠ 10.0)
+true
+
+julia> # `decondition` without any symbols will `decondition` all variables.
+ (m, x) = decondition(model)(); (m ≠ 1.0 && x ≠ 10.0)
+true
+
+julia> # Usage of `Val` to perform `decondition` at compile-time if possible
+ # is also supported.
+ model = decondition(conditioned_model, Val{:m}());
+
+julia> (m, x) = model(); (m ≠ 1.0 && x == 10.0)
+true
Similarly when using a Dict:
julia> conditioned_model_dict = condition(demo(), @varname(m) => 1.0, @varname(x) => 10.0);
+
+julia> conditioned_model_dict()
+(m = 1.0, x = 10.0)
+
+julia> deconditioned_model_dict = decondition(conditioned_model_dict, @varname(m));
+
+julia> (m, x) = deconditioned_model_dict(); m ≠ 1.0 && x == 10.0
+true
But, as mentioned, decondition is only supported for variables explicitly provided to condition earlier:
julia> @model function demo_mv(::Type{TV}=Float64) where {TV}
+ m = Vector{TV}(undef, 2)
+ m[1] ~ Normal()
+ m[2] ~ Normal()
+ return m
+ end
+demo_mv (generic function with 4 methods)
+
+julia> model = demo_mv();
+
+julia> conditioned_model = condition(model, @varname(m) => [1.0, 2.0]);
+
+julia> conditioned_model()
+2-element Vector{Float64}:
+ 1.0
+ 2.0
+
+julia> deconditioned_model = decondition(conditioned_model, @varname(m[1]));
+
+julia> deconditioned_model() # (×) `m[1]` is still conditioned
+2-element Vector{Float64}:
+ 1.0
+ 2.0
+
+julia> # (✓) this works though
+ deconditioned_model_2 = deconditioned_model | (@varname(m[1]) => missing);
+
+julia> m = deconditioned_model_2(); (m[1] ≠ 1.0 && m[2] == 2.0)
+true
decondition(context::AbstractContext, syms...)
Return context but with syms no longer conditioned on.
Note that this recursively traverses contexts, deconditioning all along the way.
See also: condition
Fixing and unfixing
We can also fix a collection of variables in a Model to certain values using fix.
This might seem quite similar to the aforementioned condition and its siblings, but they are indeed different operations:
- conditioned variables are considered to be observations, and are thus included in the computation of logjoint and loglikelihood, but not in logprior.
- fixed variables are considered to be constant, and are thus not included in any log-probability computations.
The differences are more clearly spelled out in the docstring of fix below.
DynamicPPL.fix — Function
fix(model::Model; values...)
+fix(model::Model, values::NamedTuple)
Return a Model which now treats the variables in values as fixed.
Examples
Simple univariate model
julia> using Distributions
+
+julia> @model function demo()
+ m ~ Normal()
+ x ~ Normal(m, 1)
+ return (; m=m, x=x)
+ end
+demo (generic function with 2 methods)
+
+julia> model = demo();
+
+julia> m, x = model(); (m ≠ 1.0 && x ≠ 100.0)
+true
+
+julia> # Create a new instance which treats `x` as fixed
+ # with value `100.0`, and similarly for `m=1.0`.
+ fixed_model = fix(model, x=100.0, m=1.0);
+
+julia> m, x = fixed_model(); (m == 1.0 && x == 100.0)
+true
+
+julia> # Let's only fix on `x = 100.0`.
+ fixed_model = fix(model, x = 100.0);
+
+julia> m, x = fixed_model(); (m ≠ 1.0 && x == 100.0)
+true
The above uses a NamedTuple to hold the fixed variables, which allows us to perform some additional optimizations; in many cases, the above has zero runtime overhead. But we can also use a Dict, which offers more flexibility in the fixing (see examples further below) but generally has worse performance than the NamedTuple approach:
julia> fixed_model_dict = fix(model, Dict(@varname(x) => 100.0));
+
+julia> m, x = fixed_model_dict(); (m ≠ 1.0 && x == 100.0)
+true
+
+julia> # Alternative: pass `Pair{<:VarName}` as positional argument.
+ fixed_model_dict = fix(model, @varname(x) => 100.0, );
+
+julia> m, x = fixed_model_dict(); (m ≠ 1.0 && x == 100.0)
+true
Fix only a part of a multivariate variable
We can not only fix multivariate random variables, but we can also use the standard mechanism of setting something to missing in the call to fix to fix only a part of the variable.
julia> @model function demo_mv(::Type{TV}=Float64) where {TV}
+ m = Vector{TV}(undef, 2)
+ m[1] ~ Normal()
+ m[2] ~ Normal()
+ return m
+ end
+demo_mv (generic function with 4 methods)
+
+julia> model = demo_mv();
+
+julia> fixed_model = fix(model, m = [missing, 1.0]);
+
+julia> # (✓) `m[1]` sampled while `m[2]` is fixed
+ m = fixed_model(); (m[1] ≠ 1.0 && m[2] == 1.0)
+true
Intuitively one might also expect to be able to write something like fix(model, var"m[1]" = 1.0, ). Unfortunately this is not supported, as it has the potential of increasing compilation times without offering any benefit with respect to runtime:
julia> # (×) `m[2]` is not set to 1.0.
+ m = fix(model, var"m[2]" = 1.0)(); m[2] == 1.0
+false
But you can do this if you use a Dict as the underlying storage instead:
+julia> # Alternative: `fix(model, Dict(@varname(m[2]) => 1.0))`
+ # (✓) `m[2]` is set to 1.0.
+ m = fix(model, @varname(m[2]) => 1.0)(); (m[1] ≠ 1.0 && m[2] == 1.0)
+true
Nested models
fix of course also supports nested models through @submodel.
julia> @model demo_inner() = m ~ Normal()
+demo_inner (generic function with 2 methods)
+
+julia> @model function demo_outer()
+ @submodel m = demo_inner()
+ return m
+ end
+demo_outer (generic function with 2 methods)
+
+julia> model = demo_outer();
+
+julia> model() ≠ 1.0
+true
+
+julia> fixed_model = fix(model, m = 1.0);
+
+julia> fixed_model()
+1.0
But one needs to be careful when prefixing variables in the nested models:
julia> @model function demo_outer_prefix()
+ @submodel prefix="inner" m = demo_inner()
+ return m
+ end
+demo_outer_prefix (generic function with 2 methods)
+
+julia> # (×) This doesn't work now!
+ fixed_model = fix(demo_outer_prefix(), m = 1.0);
+
+julia> fixed_model() == 1.0
+false
+
+julia> # (✓) `m` in `demo_inner` is referred to as `inner.m` internally, so we do:
+ fixed_model = fix(demo_outer_prefix(), var"inner.m" = 1.0);
+
+julia> fixed_model()
+1.0
+
+julia> # Note that the above `var"..."` is just standard Julia syntax:
+ keys((var"inner.m" = 1.0, ))
+(Symbol("inner.m"),)
And similarly when using Dict:
+julia> fixed_model_dict = fix(demo_outer_prefix(), @varname(var"inner.m") => 1.0);
+
+julia> fixed_model_dict()
+1.0
The difference is maybe more obvious once we look at how these differ in their trace/VarInfo:
julia> keys(VarInfo(demo_outer()))
+1-element Vector{VarName{:m, typeof(identity)}}:
+ m
+
+julia> keys(VarInfo(demo_outer_prefix()))
+1-element Vector{VarName{Symbol("inner.m"), typeof(identity)}}:
+ inner.m
From this we can tell the correct way to fix m within demo_inner in the two different models.
Difference from condition
A very similar functionality is also provided by condition which, not surprisingly, conditions variables instead of fixing them. The only difference between fixing and conditioning is as follows:
- conditioned variables are considered to be observations, and are thus included in the computation of logjoint and loglikelihood, but not in logprior.
- fixed variables are considered to be constant, and are thus not included in any log-probability computations.
julia> @model function demo()
+ m ~ Normal()
+ x ~ Normal(m, 1)
+ return (; m=m, x=x)
+ end
+demo (generic function with 2 methods)
+
+julia> model = demo();
+
+julia> model_fixed = fix(model, m = 1.0);
+
+julia> model_conditioned = condition(model, m = 1.0);
+
+julia> logjoint(model_fixed, (x=1.0,))
+-0.9189385332046728
+
+julia> # Different!
+ logjoint(model_conditioned, (x=1.0,))
+-2.3378770664093453
+
+julia> # And the difference is the missing log-probability of `m`:
+ logjoint(model_fixed, (x=1.0,)) + logpdf(Normal(), 1.0) == logjoint(model_conditioned, (x=1.0,))
+true
fix([context::AbstractContext,] values::NamedTuple)
+fix([context::AbstractContext]; values...)
Return FixedContext with values and context if values is non-empty, otherwise return context, which is DefaultContext by default.
See also: unfix
DynamicPPL.fixed — Function
fixed(model::Model)
Return the fixed values in model.
Examples
julia> using Distributions
+
+julia> using DynamicPPL: fixed, contextualize
+
+julia> @model function demo()
+ m ~ Normal()
+ x ~ Normal(m, 1)
+ end
+demo (generic function with 2 methods)
+
+julia> m = demo();
+
+julia> # Returns all the variables we have fixed on + their values.
+ fixed(fix(m, x=100.0, m=1.0))
+(x = 100.0, m = 1.0)
+
+julia> # Nested ones also work (note that `PrefixContext` does nothing to the result).
+ cm = fix(contextualize(m, PrefixContext{:a}(fix(m=1.0))), x=100.0);
+
+julia> fixed(cm)
+(x = 100.0, m = 1.0)
+
+julia> # Since we fixed on `m`, not `a.m` as it will appear after prefixed,
+ # `a.m` is treated as a random variable.
+ keys(VarInfo(cm))
+1-element Vector{VarName{Symbol("a.m"), typeof(identity)}}:
+ a.m
+
+julia> # If we instead fix `a.m`, `m` in the model will be considered fixed.
+ cm = fix(contextualize(m, PrefixContext{:a}(fix(var"a.m"=1.0))), x=100.0);
+
+julia> fixed(cm).x
+100.0
+
+julia> fixed(cm).var"a.m"
+1.0
+
+julia> keys(VarInfo(cm)) # <= no variables are sampled
+VarName[]
fixed(context::AbstractContext)
Return the values that are fixed under context.
Note that this will recursively traverse the context stack and return a merged version of the fixed values.
The difference between fix and condition is described in the docstring of fix above.
Similarly, we can unfix variables, i.e. return them to their original meaning:
DynamicPPL.unfix — Function
unfix(model::Model)
+unfix(model::Model, variables...)
Return a Model for which variables... are not considered fixed. If no variables are provided, then all variables currently considered fixed will no longer be.
This is essentially the inverse of fix. This also means that it suffers from the same limitations.
Note that currently we only support variables to take on explicit values provided to fix.
Examples
julia> using Distributions
+
+julia> @model function demo()
+ m ~ Normal()
+ x ~ Normal(m, 1)
+ return (; m=m, x=x)
+ end
+demo (generic function with 2 methods)
+
+julia> fixed_model = fix(demo(), m = 1.0, x = 10.0);
+
+julia> fixed_model()
+(m = 1.0, x = 10.0)
+
+julia> # By specifying the `VarName` to `unfix`.
+ model = unfix(fixed_model, @varname(m));
+
+julia> (m, x) = model(); (m ≠ 1.0 && x == 10.0)
+true
+
+julia> # When `NamedTuple` is used as the underlying, you can also provide
+ # the symbol directly (though the `@varname` approach is preferable if
+ # if the variable is known at compile-time).
+ model = unfix(fixed_model, :m);
+
+julia> (m, x) = model(); (m ≠ 1.0 && x == 10.0)
+true
+
+julia> # `unfix` multiple at once:
+ (m, x) = unfix(model, :m, :x)(); (m ≠ 1.0 && x ≠ 10.0)
+true
+
+julia> # `unfix` without any symbols will `unfix` all variables.
+ (m, x) = unfix(model)(); (m ≠ 1.0 && x ≠ 10.0)
+true
+
+julia> # Usage of `Val` to perform `unfix` at compile-time if possible
+ # is also supported.
+ model = unfix(fixed_model, Val{:m}());
+
+julia> (m, x) = model(); (m ≠ 1.0 && x == 10.0)
+true
Similarly when using a Dict:
julia> fixed_model_dict = fix(demo(), @varname(m) => 1.0, @varname(x) => 10.0);
+
+julia> fixed_model_dict()
+(m = 1.0, x = 10.0)
+
+julia> unfixed_model_dict = unfix(fixed_model_dict, @varname(m));
+
+julia> (m, x) = unfixed_model_dict(); m ≠ 1.0 && x == 10.0
+true
But, as mentioned, unfix is only supported for variables explicitly provided to fix earlier:
julia> @model function demo_mv(::Type{TV}=Float64) where {TV}
+ m = Vector{TV}(undef, 2)
+ m[1] ~ Normal()
+ m[2] ~ Normal()
+ return m
+ end
+demo_mv (generic function with 4 methods)
+
+julia> model = demo_mv();
+
+julia> fixed_model = fix(model, @varname(m) => [1.0, 2.0]);
+
+julia> fixed_model()
+2-element Vector{Float64}:
+ 1.0
+ 2.0
+
+julia> unfixed_model = unfix(fixed_model, @varname(m[1]));
+
+julia> unfixed_model() # (×) `m[1]` is still fixed
+2-element Vector{Float64}:
+ 1.0
+ 2.0
+
+julia> # (✓) this works though
+ unfixed_model_2 = fix(unfixed_model, @varname(m[1]) => missing);
+
+julia> m = unfixed_model_2(); (m[1] ≠ 1.0 && m[2] == 2.0)
+true
unfix(context::AbstractContext, syms...)
Return context but with syms no longer fixed.
Note that this recursively traverses contexts, unfixing all along the way.
See also: fix
Utilities
It is possible to manually increase (or decrease) the accumulated log density from within a model function.
DynamicPPL.@addlogprob! — Macro
@addlogprob!(ex)
Add the result of the evaluation of ex to the joint log probability.
Examples
This macro allows you to include arbitrary terms in the likelihood
julia> myloglikelihood(x, μ) = loglikelihood(Normal(μ, 1), x);
+
+julia> @model function demo(x)
+ μ ~ Normal()
+ @addlogprob! myloglikelihood(x, μ)
+ end;
+
+julia> x = [1.3, -2.1];
+
+julia> loglikelihood(demo(x), (μ=0.2,)) ≈ myloglikelihood(x, 0.2)
+true
and to reject samples:
julia> @model function demo(x)
+ m ~ MvNormal(zero(x), I)
+ if dot(m, x) < 0
+ @addlogprob! -Inf
+ # Exit the model evaluation early
+ return
+ end
+ x ~ MvNormal(m, I)
+ return
+ end;
+
+julia> logjoint(demo([-2.1]), (m=[0.2],)) == -Inf
+true
The @addlogprob! macro increases the accumulated log probability regardless of the evaluation context, i.e., regardless of whether you evaluate the log prior, the log likelihood, or the log joint density. If you would like to avoid this behaviour you should check the evaluation context. It can be accessed with the internal variable __context__. For instance, in the following example the log density is not accumulated when only the log prior is computed:
julia> myloglikelihood(x, μ) = loglikelihood(Normal(μ, 1), x);
+
+julia> @model function demo(x)
+ μ ~ Normal()
+ if DynamicPPL.leafcontext(__context__) !== PriorContext()
+ @addlogprob! myloglikelihood(x, μ)
+ end
+ end;
+
+julia> x = [1.3, -2.1];
+
+julia> logprior(demo(x), (μ=0.2,)) ≈ logpdf(Normal(), 0.2)
+true
+
+julia> loglikelihood(demo(x), (μ=0.2,)) ≈ myloglikelihood(x, 0.2)
+true
Return values of the model function for a collection of samples can be obtained with generated_quantities.
DynamicPPL.generated_quantities — Function
generated_quantities(model::Model, chain::AbstractChains)
Execute model for each of the samples in chain and return an array of the values returned by the model for each sample.
Examples
General
Often you might have additional quantities computed inside the model that you want to inspect, e.g.
@model function demo(x)
+ # sample and observe
+ θ ~ Prior()
+ x ~ Likelihood()
+ return interesting_quantity(θ, x)
+end
+m = demo(data)
+chain = sample(m, alg, n)
+# To inspect the `interesting_quantity(θ, x)` where `θ` is replaced by samples
+# from the posterior/`chain`:
+generated_quantities(m, chain) # <= results in a `Vector` of returned values
+ # from `interesting_quantity(θ, x)`
Concrete (and simple)
julia> using DynamicPPL, Turing
+
+julia> @model function demo(xs)
+ s ~ InverseGamma(2, 3)
+ m_shifted ~ Normal(10, √s)
+ m = m_shifted - 10
+
+ for i in eachindex(xs)
+ xs[i] ~ Normal(m, √s)
+ end
+
+ return (m, )
+ end
+demo (generic function with 1 method)
+
+julia> model = demo(randn(10));
+
+julia> chain = sample(model, MH(), 10);
+
+julia> generated_quantities(model, chain)
+10×1 Array{Tuple{Float64},2}:
+ (2.1964758025119338,)
+ (2.1964758025119338,)
+ (0.09270081916291417,)
+ (0.09270081916291417,)
+ (0.09270081916291417,)
+ (0.09270081916291417,)
+ (0.09270081916291417,)
+ (0.043088571494005024,)
+ (-0.16489786710222099,)
+ (-0.16489786710222099,)
generated_quantities(model::Model, parameters::NamedTuple)
+generated_quantities(model::Model, values, keys)
Execute model with variables keys set to values and return the values returned by the model.
If a NamedTuple is given, keys=keys(parameters) and values=values(parameters).
Example
julia> using DynamicPPL, Distributions
+
+julia> @model function demo(xs)
+ s ~ InverseGamma(2, 3)
+ m_shifted ~ Normal(10, √s)
+ m = m_shifted - 10
+ for i in eachindex(xs)
+ xs[i] ~ Normal(m, √s)
+ end
+ return (m, )
+ end
+demo (generic function with 2 methods)
+
+julia> model = demo(randn(10));
+
+julia> parameters = (; s = 1.0, m_shifted=10);
+
+julia> generated_quantities(model, parameters)
+(0.0,)
+
+julia> generated_quantities(model, values(parameters), keys(parameters))
+(0.0,)
For a chain of samples, one can compute the pointwise log-likelihoods of each observed random variable with pointwise_loglikelihoods.
DynamicPPL.pointwise_loglikelihoods — Function
pointwise_loglikelihoods(model::Model, chain::Chains, keytype = String)
Runs model on each sample in chain, returning an OrderedDict{String, Matrix{Float64}} with keys corresponding to symbols of the observations and values being matrices of shape (num_chains, num_samples).
keytype specifies the type of the keys used in the returned OrderedDict. Currently, only String and VarName are supported.
Notes
Say y is a Vector of n i.i.d. Normal(μ, σ) variables, with μ and σ both being <:Real. Then the observe (i.e. when the left-hand side is an observation) statements can be implemented in three ways:
1. using a for loop:
for i in eachindex(y)
+ y[i] ~ Normal(μ, σ)
+end
2. using .~:
y .~ Normal(μ, σ)
3. using MvNormal:
y ~ MvNormal(fill(μ, n), σ^2 * I)
In (1) and (2), y will be treated as a collection of n i.i.d. 1-dimensional variables, while in (3) y will be treated as a single n-dimensional observation.
This is important to keep in mind, in particular if the computation is used for downstream computations.
Examples
From chain
julia> using DynamicPPL, Turing
+
+julia> @model function demo(xs, y)
+ s ~ InverseGamma(2, 3)
+ m ~ Normal(0, √s)
+ for i in eachindex(xs)
+ xs[i] ~ Normal(m, √s)
+ end
+
+ y ~ Normal(m, √s)
+ end
+demo (generic function with 1 method)
+
+julia> model = demo(randn(3), randn());
+
+julia> chain = sample(model, MH(), 10);
+
+julia> pointwise_loglikelihoods(model, chain)
+OrderedDict{String,Array{Float64,2}} with 4 entries:
+ "xs[1]" => [-1.42932; -2.68123; … ; -1.66333; -1.66333]
+ "xs[2]" => [-1.6724; -0.861339; … ; -1.62359; -1.62359]
+ "xs[3]" => [-1.42862; -2.67573; … ; -1.66251; -1.66251]
+ "y" => [-1.51265; -0.914129; … ; -1.5499; -1.5499]
+
+julia> pointwise_loglikelihoods(model, chain, String)
+OrderedDict{String,Array{Float64,2}} with 4 entries:
+ "xs[1]" => [-1.42932; -2.68123; … ; -1.66333; -1.66333]
+ "xs[2]" => [-1.6724; -0.861339; … ; -1.62359; -1.62359]
+ "xs[3]" => [-1.42862; -2.67573; … ; -1.66251; -1.66251]
+ "y" => [-1.51265; -0.914129; … ; -1.5499; -1.5499]
+
+julia> pointwise_loglikelihoods(model, chain, VarName)
+OrderedDict{VarName,Array{Float64,2}} with 4 entries:
+ xs[1] => [-1.42932; -2.68123; … ; -1.66333; -1.66333]
+ xs[2] => [-1.6724; -0.861339; … ; -1.62359; -1.62359]
+ xs[3] => [-1.42862; -2.67573; … ; -1.66251; -1.66251]
+ y => [-1.51265; -0.914129; … ; -1.5499; -1.5499]
Broadcasting
Note that x .~ Dist() will treat x as a collection of independent observations rather than as a single observation.
julia> @model function demo(x)
+ x .~ Normal()
+ end;
+
+julia> m = demo([1.0, ]);
+
+julia> ℓ = pointwise_loglikelihoods(m, VarInfo(m)); first(ℓ[@varname(x[1])])
+-1.4189385332046727
+
+julia> m = demo([1.0; 1.0]);
+
+julia> ℓ = pointwise_loglikelihoods(m, VarInfo(m)); first.((ℓ[@varname(x[1])], ℓ[@varname(x[2])]))
+(-1.4189385332046727, -1.4189385332046727)
For converting a chain into a format that can more easily be fed into a Model again, for example using condition, you can use value_iterator_from_chain.
DynamicPPL.value_iterator_from_chain — Function
value_iterator_from_chain(model::Model, chain)
+value_iterator_from_chain(varinfo::AbstractVarInfo, chain)
Return an iterator over the values in chain for each variable in model/varinfo.
Example
julia> using MCMCChains, DynamicPPL, Distributions, StableRNGs
+
+julia> rng = StableRNG(42);
+
+julia> @model function demo_model(x)
+ s ~ InverseGamma(2, 3)
+ m ~ Normal(0, sqrt(s))
+ for i in eachindex(x)
+ x[i] ~ Normal(m, sqrt(s))
+ end
+
+ return s, m
+ end
+demo_model (generic function with 2 methods)
+
+julia> model = demo_model([1.0, 2.0]);
+
+julia> chain = Chains(rand(rng, 10, 2, 3), [:s, :m]);
+
+julia> iter = value_iterator_from_chain(model, chain);
+
+julia> first(iter)
+OrderedDict{VarName, Any} with 2 entries:
+ s => 0.580515
+ m => 0.739328
+
+julia> collect(iter)
+10×3 Matrix{OrderedDict{VarName, Any}}:
+ OrderedDict(s=>0.580515, m=>0.739328) … OrderedDict(s=>0.186047, m=>0.402423)
+ OrderedDict(s=>0.191241, m=>0.627342) OrderedDict(s=>0.776277, m=>0.166342)
+ OrderedDict(s=>0.971133, m=>0.637584) OrderedDict(s=>0.651655, m=>0.712044)
+ OrderedDict(s=>0.74345, m=>0.110359) OrderedDict(s=>0.469214, m=>0.104502)
+ OrderedDict(s=>0.170969, m=>0.598514) OrderedDict(s=>0.853546, m=>0.185399)
+ OrderedDict(s=>0.704776, m=>0.322111) … OrderedDict(s=>0.638301, m=>0.853802)
+ OrderedDict(s=>0.441044, m=>0.162285) OrderedDict(s=>0.852959, m=>0.0956922)
+ OrderedDict(s=>0.803972, m=>0.643369) OrderedDict(s=>0.245049, m=>0.871985)
+ OrderedDict(s=>0.772384, m=>0.646323) OrderedDict(s=>0.906603, m=>0.385502)
+ OrderedDict(s=>0.70882, m=>0.253105) OrderedDict(s=>0.413222, m=>0.953288)
+
+julia> # This can be used to `condition` a `Model`.
+ conditioned_model = model | first(iter);
+
+julia> conditioned_model() # <= results in same values as the `first(iter)` above
+(0.5805148626851955, 0.7393275279160691)
Sometimes it can be useful to extract the priors of a model. This is possible using extract_priors.
DynamicPPL.extract_priors — Function
extract_priors([rng::Random.AbstractRNG, ]model::Model)
Extract the priors from a model.
This is done by sampling from the model and recording the distributions that are used to generate the samples.
Because the extraction is done by execution of the model, there are several caveats:
- If one variable, say, y ~ Normal(0, x), where x ~ Normal() is also a random variable, then the extracted prior will have different parameters in every extraction!
- If the model does not have static support, say, n ~ Categorical(1:10); x ~ MvNormal(zeros(n), I), then the extracted priors themselves will be different between extractions, not just their parameters.
Both of these caveats are demonstrated below.
Examples
Changing parameters
julia> using Distributions, StableRNGs
+
+julia> rng = StableRNG(42);
+
+julia> @model function model_dynamic_parameters()
+ x ~ Normal(0, 1)
+ y ~ Normal(x, 1)
+ end;
+
+julia> model = model_dynamic_parameters();
+
+julia> extract_priors(rng, model)[@varname(y)]
+Normal{Float64}(μ=-0.6702516921145671, σ=1.0)
+
+julia> extract_priors(rng, model)[@varname(y)]
+Normal{Float64}(μ=1.3736306979834252, σ=1.0)
Changing support
julia> using LinearAlgebra, Distributions, StableRNGs
+
+julia> rng = StableRNG(42);
+
+julia> @model function model_dynamic_support()
+ n ~ Categorical(ones(10) ./ 10)
+ x ~ MvNormal(zeros(n), I)
+ end;
+
+julia> model = model_dynamic_support();
+
+julia> length(extract_priors(rng, model)[@varname(x)])
+6
+
+julia> length(extract_priors(rng, model)[@varname(x)])
+9
Safe extraction of values from a given AbstractVarInfo as they are seen in the model can be done using values_as_in_model.
DynamicPPL.values_as_in_model — Function
values_as_in_model(model::Model[, varinfo::AbstractVarInfo, context::AbstractContext])
+values_as_in_model(rng::Random.AbstractRNG, model::Model[, varinfo::AbstractVarInfo, context::AbstractContext])
Get the values of varinfo as they would be seen in the model.
If no varinfo is provided, then this is effectively the same as Base.rand(rng::Random.AbstractRNG, model::Model).
More specifically, this method attempts to extract the realization as seen in the model. For example, x[1] ~ truncated(Normal(); lower=0) will result in a realization compatible with truncated(Normal(); lower=0) regardless of whether varinfo is working in unconstrained space.
Hence this method is a "safe" way of obtaining realizations in constrained space at the cost of additional model evaluations.
Arguments
- model::Model: model to extract realizations from.
- varinfo::AbstractVarInfo: variable information to use for the extraction.
- context::AbstractContext: context to use for the extraction. If rng is specified, then context will be wrapped in a SamplingContext with the provided rng.
Examples
When VarInfo fails
The following demonstrates a common pitfall when working with VarInfo and constrained variables.
julia> using Distributions, StableRNGs
+
+julia> rng = StableRNG(42);
+
+julia> @model function model_changing_support()
+ x ~ Bernoulli(0.5)
+ y ~ x == 1 ? Uniform(0, 1) : Uniform(11, 12)
+ end;
+
+julia> model = model_changing_support();
+
+julia> # Construct initial type-stable `VarInfo`.
+ varinfo = VarInfo(rng, model);
+
+julia> # Link it so it works in unconstrained space.
+ varinfo_linked = DynamicPPL.link(varinfo, model);
+
+julia> # Perform computations in unconstrained space, e.g. changing the values of `θ`.
+ # Flip `x` so we hit the other support of `y`.
+ θ = [!varinfo[@varname(x)], rand(rng)];
+
+julia> # Update the `VarInfo` with the new values.
+ varinfo_linked = DynamicPPL.unflatten(varinfo_linked, θ);
+
+julia> # Determine the expected support of `y`.
+ lb, ub = θ[1] == 1 ? (0, 1) : (11, 12)
+(0, 1)
+
+julia> # Approach 1: Convert back to constrained space using `invlink` and extract.
+ varinfo_invlinked = DynamicPPL.invlink(varinfo_linked, model);
+
+julia> # (×) Fails! Because `VarInfo` _saves_ the original distributions
+ # used in the very first model evaluation, hence the support of `y`
+ # is not updated even though `x` has changed.
+ lb ≤ varinfo_invlinked[@varname(y)] ≤ ub
+false
+
+julia> # Approach 2: Extract realizations using `values_as_in_model`.
+ # (✓) `values_as_in_model` will re-run the model and extract
+ # the correct realization of `y` given the new values of `x`.
+ lb ≤ values_as_in_model(model, varinfo_linked)[@varname(y)] ≤ ub
+true
DynamicPPL.NamedDist — Type
A named distribution that carries the name of the random variable with it.
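A minimal sketch of how this might be used, assuming the constructor NamedDist(dist, name) where name is a VarName; the value on the left-hand side is then stored under the carried name:
julia> @model function demo()
           # Assumption: the sampled value is stored under `y` rather than `x`.
           x ~ NamedDist(Normal(), @varname(y))
       end;

julia> @varname(y) in keys(VarInfo(demo()))
true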
Testing Utilities
DynamicPPL provides several demo models and helpers for testing samplers in the DynamicPPL.TestUtils submodule.
DynamicPPL.TestUtils.test_sampler — Function
test_sampler(models, sampler, args...; kwargs...)
Test that sampler produces correct marginal posterior means on each model in models.
In short, this method iterates through models, calls AbstractMCMC.sample on the model and sampler to produce a chain, and then checks marginal_mean_of_samples(chain, vn) for every (leaf) varname vn against the corresponding value returned by posterior_mean for each model.
To change how comparison is done for a particular chain type, one can overload marginal_mean_of_samples for the corresponding type.
Arguments
- models: A collection of instances of DynamicPPL.Model to test on.
- sampler: The AbstractMCMC.AbstractSampler to test.
- args...: Arguments forwarded to sample.
Keyword arguments
- varnames_filter: A filter to apply to varnames(model), allowing comparison for only a subset of the varnames.
- atol=1e-1: Absolute tolerance used in @test.
- rtol=1e-3: Relative tolerance used in @test.
- kwargs...: Keyword arguments forwarded to sample.
DynamicPPL.TestUtils.test_sampler_on_demo_models — Function
test_sampler_on_demo_models(meanfunction, sampler, args...; kwargs...)
Test sampler on every model in DEMO_MODELS.
This is just a proxy for test_sampler(meanfunction, DEMO_MODELS, sampler, args...; kwargs...).
DynamicPPL.TestUtils.test_sampler_continuous — Function
test_sampler_continuous(sampler, args...; kwargs...)
Test that sampler produces the correct marginal posterior means on all models in demo_models.
As of right now, this is just an alias for test_sampler_on_demo_models.
DynamicPPL.TestUtils.marginal_mean_of_samples — Function
marginal_mean_of_samples(chain, varname)
Return the mean of the variable represented by varname in chain.
DynamicPPL.TestUtils.DEMO_MODELS — Constant
A collection of models corresponding to the posterior distribution defined by the generative process
s ~ InverseGamma(2, 3)
+m ~ Normal(0, √s)
+1.5 ~ Normal(m, √s)
+2.0 ~ Normal(m, √s)
or by
s[1] ~ InverseGamma(2, 3)
+s[2] ~ InverseGamma(2, 3)
+m[1] ~ Normal(0, √s)
+m[2] ~ Normal(0, √s)
+1.5 ~ Normal(m[1], √s[1])
+2.0 ~ Normal(m[2], √s[2])
These are examples of a Normal-InverseGamma conjugate prior with Normal likelihood, for which the posterior is known in closed form.
In particular, for the univariate model (the former one):
mean(s) == 49 / 24
+mean(m) == 7 / 6
And for the multivariate one (the latter one):
mean(s[1]) == 19 / 8
+mean(m[1]) == 3 / 4
+mean(s[2]) == 8 / 3
+mean(m[2]) == 1
For every demo model, one can define the true log prior, log likelihood, and log joint probabilities.
DynamicPPL.TestUtils.logprior_true — Function
logprior_true(model, args...)
Return the logprior of model for args.
This should generally be implemented by hand for every specific model.
See also: logjoint_true, loglikelihood_true.
DynamicPPL.TestUtils.loglikelihood_true — Function
loglikelihood_true(model, args...)
Return the loglikelihood of model for args.
This should generally be implemented by hand for every specific model.
See also: logjoint_true, logprior_true.
DynamicPPL.TestUtils.logjoint_true — Function
logjoint_true(model, args...)
Return the logjoint of model for args.
Defaults to logprior_true(model, args...) + loglikelihood_true(model, args...).
This should generally be implemented by hand for every specific model so that the returned value can be used as a ground-truth for testing things like:
- Validity of evaluation of model using a particular implementation of AbstractVarInfo.
- Validity of a sampler when combined with DynamicPPL by running the sampler twice: once targeting ground-truth functions, e.g. logjoint_true, and once targeting model.
And more.
See also: logprior_true, loglikelihood_true.
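For instance, the additive relationship can be checked directly (a sketch; we assume the univariate demo models take the realizations (s, m) as positional arguments):
julia> model = DynamicPPL.TestUtils.DEMO_MODELS[1];

julia> s, m = 1.0, 0.5;  # hypothetical realizations

julia> DynamicPPL.TestUtils.logjoint_true(model, s, m) ≈
           DynamicPPL.TestUtils.logprior_true(model, s, m) +
           DynamicPPL.TestUtils.loglikelihood_true(model, s, m)
true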
And in the case where the model includes constrained variables, it can also be useful to define
DynamicPPL.TestUtils.logprior_true_with_logabsdet_jacobian — Function
logprior_true_with_logabsdet_jacobian(model::Model, args...)
Return a tuple (args_unconstrained, logprior_unconstrained) of model for args....
Unlike logprior_true, the returned logprior computation includes the log-absdet-jacobian adjustment, thus computing logprior for the unconstrained variables.
Note that args are assumed to be in the support of model, while args_unconstrained will be unconstrained.
See also: logprior_true.
DynamicPPL.TestUtils.logjoint_true_with_logabsdet_jacobian — Function
logjoint_true_with_logabsdet_jacobian(model::Model, args...)
Return a tuple (args_unconstrained, logjoint) of model for args.
Unlike logjoint_true, the returned logjoint computation includes the log-absdet-jacobian adjustment, thus computing logjoint for the unconstrained variables.
Note that args are assumed to be in the support of model, while args_unconstrained will be unconstrained.
This should generally not be implemented directly; instead one should implement logprior_true_with_logabsdet_jacobian for a given model.
See also: logjoint_true, logprior_true_with_logabsdet_jacobian.
Finally, the following methods can also be of use:
DynamicPPL.TestUtils.varnames — Function
varnames(model::Model)
Return a collection of VarName as they are expected to appear in the model.
Even though it is recommended to implement this by hand for a particular Model, a default implementation using SimpleVarInfo{<:Dict} is provided.
DynamicPPL.TestUtils.posterior_mean — Function
posterior_mean(model::Model)
Return a NamedTuple compatible with varnames(model) where the values represent the posterior mean under model.
"Compatible" means that a varname
from varnames(model)
can be used to extract the corresponding value using get
, e.g. get(posterior_mean(model), varname)
.
DynamicPPL.TestUtils.setup_varinfos — Function
setup_varinfos(model::Model, example_values::NamedTuple, varnames; include_threadsafe::Bool=false)
Return a tuple of instances for different implementations of AbstractVarInfo with each vi, supposedly, satisfying vi[vn] == get(example_values, vn) for vn in varnames.
If include_threadsafe is true, then the returned tuple will also include thread-safe versions of the varinfo instances.
DynamicPPL.update_values!! — Function
update_values!!(vi::AbstractVarInfo, vals::NamedTuple, vns)
Return an instance similar to vi but with vns set to values from vals.
DynamicPPL.TestUtils.test_values — Function
test_values(vi::AbstractVarInfo, vals::NamedTuple, vns)
Test that vi[vn] corresponds to the correct value in vals for every vn in vns.
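For example, varnames and posterior_mean compose with get as described above (a sketch over the first demo model):
julia> model = DynamicPPL.TestUtils.DEMO_MODELS[1];

julia> means = DynamicPPL.TestUtils.posterior_mean(model);

julia> for vn in DynamicPPL.TestUtils.varnames(model)
           # Extract the posterior mean corresponding to each varname.
           println(vn, " => ", get(means, vn))
       end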
Debugging Utilities
DynamicPPL provides a few methods for checking the validity of a model definition.
DynamicPPL.DebugUtils.check_model — Function
check_model([rng, ]model::Model; kwargs...)
Check that model is valid, warning about any potential issues.
See check_model_and_trace for more details on supported keyword arguments and details of which types of checks are performed.
Returns
- issuccess::Bool: Whether the model check succeeded.
DynamicPPL.DebugUtils.check_model_and_trace — Function
check_model_and_trace([rng, ]model::Model; kwargs...)
Check that model is valid, warning about any potential issues.
This will check the model for the following issues:
- Repeated usage of the same varname in a model.
- Incorrectly treating a variable as random rather than fixed, and vice versa.
Arguments
- rng::Random.AbstractRNG: The random number generator to use when evaluating the model.
- model::Model: The model to check.
Keyword Arguments
- varinfo::VarInfo: The varinfo to use when evaluating the model. Default: VarInfo(model).
- context::AbstractContext: The context to use when evaluating the model. Default: DefaultContext.
- error_on_failure::Bool: Whether to throw an error if the model check fails. Default: false.
Returns
- issuccess::Bool: Whether the model check succeeded.
- trace::Vector{Stmt}: The trace of statements executed during the model check.
Examples
Correct model
julia> using StableRNGs
+
+julia> rng = StableRNG(42);
+
+julia> @model demo_correct() = x ~ Normal()
+demo_correct (generic function with 2 methods)
+
+julia> issuccess, trace = check_model_and_trace(rng, demo_correct());
+
+julia> issuccess
+true
+
+julia> print(trace)
+ assume: x ~ Normal{Float64}(μ=0.0, σ=1.0) ⟼ -0.670252 (logprob = -1.14356)
+
+julia> issuccess, trace = check_model_and_trace(rng, demo_correct() | (x = 1.0,));
+
+julia> issuccess
+true
+
+julia> print(trace)
+observe: 1.0 ~ Normal{Float64}(μ=0.0, σ=1.0) (logprob = -1.41894)
Incorrect model
julia> @model function demo_incorrect()
+ # (×) Sampling `x` twice will lead to incorrect log-probabilities!
+ x ~ Normal()
+ x ~ Exponential()
+ end
+demo_incorrect (generic function with 2 methods)
+
+julia> issuccess, trace = check_model_and_trace(rng, demo_incorrect(); error_on_failure=true);
+ERROR: varname x used multiple times in model
And some methods which might be useful for determining certain properties of the model based on the debug trace:
DynamicPPL.DebugUtils.has_static_constraints
— Functionhas_static_constraints([rng, ]model::Model; num_evals=5, kwargs...)
Return true
if the model has static constraints, false
otherwise.
Note that this is a heuristic check based on evaluating the model multiple times and checking whether the constraints encountered are consistent across runs.
Arguments
- rng::Random.AbstractRNG: The random number generator to use when evaluating the model.
- model::Model: The model to check.
Keyword Arguments
- num_evals::Int: The number of evaluations to perform. Default: 5.
- kwargs...: Additional keyword arguments to pass to check_model_and_trace.
Advanced
Variable names
Names and possibly nested indices of variables are described with AbstractPPL.VarName
. They can be defined with AbstractPPL.@varname
. Please see the documentation of AbstractPPL.jl for further information.
Data Structures of Variables
DynamicPPL provides different data structures for samples from the model and their log density. All of them are subtypes of AbstractVarInfo
.
DynamicPPL.AbstractVarInfo
— TypeAbstractVarInfo
Abstract supertype for data structures that capture random variables when executing a probabilistic model and accumulate log densities such as the log likelihood or the log joint probability of the model.
See also: VarInfo
, SimpleVarInfo
.
Common API
Accumulation of log-probabilities
DynamicPPL.getlogp
— Functiongetlogp(vi::AbstractVarInfo)
Return the log of the joint probability of the observed data and parameters sampled in vi
.
DynamicPPL.setlogp!!
— Functionsetlogp!!(vi::AbstractVarInfo, logp)
Set the log of the joint probability of the observed data and parameters sampled in vi
to logp
, mutating if it makes sense.
DynamicPPL.acclogp!!
— Functionacclogp!!([context::AbstractContext, ]vi::AbstractVarInfo, logp)
Add logp
to the value of the log of the joint probability of the observed data and parameters sampled in vi
, mutating if it makes sense.
DynamicPPL.resetlogp!!
— Functionresetlogp!!(vi::AbstractVarInfo)
Reset the value of the log of the joint probability of the observed data and parameters sampled in vi
to 0, mutating if it makes sense.
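As a brief sketch of how these compose (using a SimpleVarInfo, whose accumulated log-probability starts at zero):
julia> vi = SimpleVarInfo((x = 1.0,));
+
+julia> getlogp(vi)
+0.0
+
+julia> vi = acclogp!!(vi, -1.5);  # accumulate a log-probability contribution
+
+julia> getlogp(vi)
+-1.5
+
+julia> vi = setlogp!!(vi, -0.5);  # overwrite the accumulated value
+
+julia> getlogp(vi)
+-0.5
+
+julia> getlogp(resetlogp!!(vi))
+0.0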
Variables and their realizations
Base.keys
— Functionkeys(vi::AbstractVarInfo)
Return an iterator over all vns
in vi
.
Base.getindex
— Functiongetindex(vi::AbstractVarInfo, vn::VarName[, dist::Distribution])
+getindex(vi::AbstractVarInfo, vns::Vector{<:VarName}[, dist::Distribution])
Return the current value(s) of vn
(vns
) in vi
in the support of its (their) distribution(s).
If dist
is specified, the value(s) will be reshaped accordingly.
See also: getindex_raw(vi::AbstractVarInfo, vn::VarName, dist::Distribution)
DynamicPPL.getindex_raw
— Functiongetindex_raw(vi::AbstractVarInfo, vn::VarName[, dist::Distribution])
+getindex_raw(vi::AbstractVarInfo, vns::Vector{<:VarName}[, dist::Distribution])
Return the current value(s) of vn
(vns
) in vi
.
If dist
is specified, the value(s) will be reshaped accordingly.
See also: getindex(vi::AbstractVarInfo, vn::VarName, dist::Distribution)
The difference between getindex(vi, vn, dist)
and getindex_raw
is that getindex
will also transform the value(s) to the support of the distribution(s). This is not the case for getindex_raw
.
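A rough sketch of the difference, assuming a positively-constrained variable and the standard log-bijector for Exponential:
julia> @model demo_positive() = x ~ Exponential()
+demo_positive (generic function with 2 methods)
+
+julia> model = demo_positive();
+
+julia> vi = link!!(VarInfo(model), model);  # store `x` in unconstrained (log) space
+
+julia> # `getindex` transforms the stored value back to the support of the
+       # distribution, so the result is guaranteed to be positive...
+       vi[@varname(x), Exponential()] > 0
+true
+
+julia> # ...whereas `getindex_raw` returns the stored (linked) value as-is.
+       DynamicPPL.getindex_raw(vi, @varname(x), Exponential()) ≈ log(vi[@varname(x), Exponential()])
+true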
BangBang.push!!
— Functionpush!!(vi::AbstractVarInfo, vn::VarName, r, dist::Distribution)
Push a new random variable vn
with a sampled value r
from a distribution dist
to the VarInfo
vi
, mutating if it makes sense.
push!!(vi::AbstractVarInfo, vn::VarName, r, dist::Distribution, spl::AbstractSampler)
Push a new random variable vn
with a sampled value r
sampled with a sampler spl
from a distribution dist
to VarInfo
vi
, if it makes sense.
The sampler is passed here to invalidate its cache where defined.
This method is considered legacy, and is likely to be deprecated in the future.
push!!(vi::AbstractVarInfo, vn::VarName, r, dist::Distribution, gid::Selector)
Push a new random variable vn
with a sampled value r
sampled with a sampler of selector gid
from a distribution dist
to VarInfo
vi
.
This method is considered legacy, and is likely to be deprecated in the future.
BangBang.empty!!
— Functionempty!!(vi::AbstractVarInfo)
Empty the fields of vi.metadata
and reset vi.logp[]
and vi.num_produce[]
to zeros.
This is useful when using a sampling algorithm that assumes an empty vi
, e.g. SMC
.
Base.isempty
— Functionisempty(vi::AbstractVarInfo)
Return true if vi
is empty and false otherwise.
DynamicPPL.values_as
— Functionvalues_as(varinfo[, Type])
Return the values/realizations in varinfo
as Type
, if implemented.
If no Type
is provided, return values as stored in varinfo
.
Examples
SimpleVarInfo
with NamedTuple
:
julia> data = (x = 1.0, m = [2.0]);
+
+julia> values_as(SimpleVarInfo(data))
+(x = 1.0, m = [2.0])
+
+julia> values_as(SimpleVarInfo(data), NamedTuple)
+(x = 1.0, m = [2.0])
+
+julia> values_as(SimpleVarInfo(data), OrderedDict)
+OrderedDict{VarName{sym, typeof(identity)} where sym, Any} with 2 entries:
+ x => 1.0
+ m => [2.0]
+
+julia> values_as(SimpleVarInfo(data), Vector)
+2-element Vector{Float64}:
+ 1.0
+ 2.0
SimpleVarInfo
with OrderedDict
:
julia> data = OrderedDict{Any,Any}(@varname(x) => 1.0, @varname(m) => [2.0]);
+
+julia> values_as(SimpleVarInfo(data))
+OrderedDict{Any, Any} with 2 entries:
+ x => 1.0
+ m => [2.0]
+
+julia> values_as(SimpleVarInfo(data), NamedTuple)
+(x = 1.0, m = [2.0])
+
+julia> values_as(SimpleVarInfo(data), OrderedDict)
+OrderedDict{Any, Any} with 2 entries:
+ x => 1.0
+ m => [2.0]
+
+julia> values_as(SimpleVarInfo(data), Vector)
+2-element Vector{Float64}:
+ 1.0
+ 2.0
TypedVarInfo
:
julia> # Just use an example model to construct the `VarInfo` because we're lazy.
+ vi = VarInfo(DynamicPPL.TestUtils.demo_assume_dot_observe());
+
+julia> vi[@varname(s)] = 1.0; vi[@varname(m)] = 2.0;
+
+julia> # For the sake of brevity, let's just check the type.
+ md = values_as(vi); md.s isa DynamicPPL.Metadata
+true
+
+julia> values_as(vi, NamedTuple)
+(s = 1.0, m = 2.0)
+
+julia> values_as(vi, OrderedDict)
+OrderedDict{VarName{sym, typeof(identity)} where sym, Float64} with 2 entries:
+ s => 1.0
+ m => 2.0
+
+julia> values_as(vi, Vector)
+2-element Vector{Float64}:
+ 1.0
+ 2.0
UntypedVarInfo
:
julia> # Just use an example model to construct the `VarInfo` because we're lazy.
+ vi = VarInfo(); DynamicPPL.TestUtils.demo_assume_dot_observe()(vi);
+
+julia> vi[@varname(s)] = 1.0; vi[@varname(m)] = 2.0;
+
+julia> # For the sake of brevity, let's just check the type.
+ values_as(vi) isa DynamicPPL.Metadata
+true
+
+julia> values_as(vi, NamedTuple)
+(s = 1.0, m = 2.0)
+
+julia> values_as(vi, OrderedDict)
+OrderedDict{VarName{sym, typeof(identity)} where sym, Float64} with 2 entries:
+ s => 1.0
+ m => 2.0
+
+julia> values_as(vi, Vector)
+2-element Vector{Real}:
+ 1.0
+ 2.0
Transformations
DynamicPPL.AbstractTransformation
— Typeabstract type AbstractTransformation
Represents a transformation to be used in link!!
and invlink!!
, amongst others.
A concrete implementation of this should implement the following methods:
- link!!: transforms the AbstractVarInfo to the unconstrained space.
- invlink!!: transforms the AbstractVarInfo to the constrained space.
And potentially:
- maybe_invlink_before_eval!!: hook to decide whether to transform before evaluating the model.
See also: link!!
, invlink!!
, maybe_invlink_before_eval!!
.
DynamicPPL.NoTransformation
— Typestruct NoTransformation <: DynamicPPL.AbstractTransformation
Transformation which applies the identity function.
DynamicPPL.DynamicTransformation
— Typestruct DynamicTransformation <: DynamicPPL.AbstractTransformation
Transformation which transforms the variables on an as-needed basis during the execution of a given Model.
This is in contrast to StaticTransformation, which transforms all variables before the execution of a given Model.
See also: StaticTransformation
.
DynamicPPL.StaticTransformation
— Typestruct StaticTransformation{F} <: DynamicPPL.AbstractTransformation
Transformation which transforms all variables before the execution of a given Model
.
This is done through the maybe_invlink_before_eval!!
method.
See also: DynamicTransformation
, maybe_invlink_before_eval!!
.
Fields
- bijector::Any: The function, assumed to implement the Bijectors interface, to be applied to the variables.
DynamicPPL.istrans
— Functionistrans(vi::AbstractVarInfo[, vns::Union{VarName, AbstractVector{<:VarName}}])
Return true
if vi
is working in unconstrained space, and false
if vi
is assuming realizations to be in support of the corresponding distributions.
If vns
is provided, then only check if this/these varname(s) are transformed.
Not all implementations of AbstractVarInfo
support transforming only a subset of the variables.
DynamicPPL.settrans!!
— Functionsettrans!!(vi::AbstractVarInfo, trans::Bool[, vn::VarName])
Return vi with istrans(vi, vn) evaluating to trans.
If vn is not specified, then istrans(vi) evaluates to trans for all variables.
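A minimal sketch:
julia> vi = SimpleVarInfo((x = 1.0,));
+
+julia> DynamicPPL.istrans(vi)
+false
+
+julia> vi = DynamicPPL.settrans!!(vi, true);
+
+julia> DynamicPPL.istrans(vi)
+true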
DynamicPPL.transformation
— Functiontransformation(vi::AbstractVarInfo)
Return the AbstractTransformation
related to vi
.
Bijectors.link
— Functionlink([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
+link([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)
Transform the variables in vi
to their linked space without mutating vi
, using the transformation t
.
If t
is not provided, default_transformation(model, vi)
will be used.
See also: default_transformation
, invlink
.
Bijectors.invlink
— Functioninvlink([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
+invlink([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)
Transform the variables in vi
to their constrained space without mutating vi
, using the (inverse of) transformation t
.
If t
is not provided, default_transformation(model, vi)
will be used.
See also: default_transformation
, link
.
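A round-trip sketch with a constrained variable (both functions are non-mutating, so the original vi is left untouched):
julia> @model demo_invgamma() = s ~ InverseGamma(2, 3)
+demo_invgamma (generic function with 2 methods)
+
+julia> model = demo_invgamma();
+
+julia> vi = VarInfo(model);
+
+julia> vi_linked = link(vi, model);               # to unconstrained space
+
+julia> vi_roundtrip = invlink(vi_linked, model);  # and back again
+
+julia> vi_roundtrip[@varname(s)] ≈ vi[@varname(s)]
+true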
DynamicPPL.link!!
— Functionlink!!([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
+link!!([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)
Transform the variables in vi
to their linked space, using the transformation t
, mutating vi
if possible.
If t
is not provided, default_transformation(model, vi)
will be used.
See also: default_transformation
, invlink!!
.
DynamicPPL.invlink!!
— Functioninvlink!!([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
+invlink!!([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)
Transform the variables in vi
to their constrained space, using the (inverse of) transformation t
, mutating vi
if possible.
If t
is not provided, default_transformation(model, vi)
will be used.
See also: default_transformation
, link!!
.
DynamicPPL.default_transformation
— Functiondefault_transformation(model::Model[, vi::AbstractVarInfo])
Return the AbstractTransformation
currently related to model
and, potentially, vi
.
DynamicPPL.maybe_invlink_before_eval!!
— Functionmaybe_invlink_before_eval!!([t::Transformation,] vi, context, model)
Return a possibly invlinked version of vi
.
This will be called prior to model
evaluation, allowing one to perform a single invlink!!
before evaluation rather than lazily evaluating the transforms on an as-needed basis, as is done with DynamicTransformation.
See also: StaticTransformation
, DynamicTransformation
.
Examples
julia> using DynamicPPL, Distributions, Bijectors
+
+julia> @model demo() = x ~ Normal()
+demo (generic function with 2 methods)
+
+julia> # By subtyping `Transform`, we inherit the `(inv)link!!`.
+ struct MyBijector <: Bijectors.Transform end
+
+julia> # Define some dummy `inverse` which will be used in the `link!!` call.
+ Bijectors.inverse(f::MyBijector) = identity
+
+julia> # We need to define `with_logabsdet_jacobian` for `MyBijector`
+ # (`identity` already has `with_logabsdet_jacobian` defined)
+ function Bijectors.with_logabsdet_jacobian(::MyBijector, x)
+ # Just use a large number for the logabsdet-jacobian term
+ # for demonstration purposes.
+ return (x, 1000)
+ end
+
+julia> # Change the `default_transformation` for our model to be a
+ # `StaticTransformation` using `MyBijector`.
+ function DynamicPPL.default_transformation(::Model{typeof(demo)})
+ return DynamicPPL.StaticTransformation(MyBijector())
+ end
+
+julia> model = demo();
+
+julia> vi = SimpleVarInfo(x=1.0)
+SimpleVarInfo((x = 1.0,), 0.0)
+
+julia> # Uses the `inverse` of `MyBijector`, which we have defined as `identity`
+ vi_linked = link!!(vi, model)
+Transformed SimpleVarInfo((x = 1.0,), 0.0)
+
+julia> # Now performs a single `invlink!!` before model evaluation.
+ logjoint(model, vi_linked)
+-1001.4189385332047
DynamicPPL.reconstruct
— Functionreconstruct([f, ]dist, val)
Reconstruct val
so that it's compatible with dist
.
If f
is also provided, the reconstruct value will be such that f(reconstruct_val)
is compatible with dist
.
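For instance (a hedged sketch, assuming the usual behaviour of returning vectors as-is for multivariate distributions and a column-major reshape for matrix-variate ones):
julia> using Distributions, LinearAlgebra
+
+julia> # Multivariate: the flattened value is simply (a copy of) the vector.
+       DynamicPPL.reconstruct(MvNormal(zeros(2), I), [1.0, 2.0])
+2-element Vector{Float64}:
+ 1.0
+ 2.0
+
+julia> # Matrix-variate: the flattened value is reshaped to `size(dist)`.
+       DynamicPPL.reconstruct(Wishart(2, [1.0 0.0; 0.0 1.0]), [1.0, 2.0, 3.0, 4.0])
+2×2 Matrix{Float64}:
+ 1.0  3.0
+ 2.0  4.0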
Utils
Base.merge
— Methodmerge(varinfo, other_varinfos...)
Merge varinfos into one, giving precedence to the right-most varinfo when sensible.
This is particularly useful when combined with subset(varinfo, vns)
.
See docstring of subset(varinfo, vns)
for examples.
DynamicPPL.subset
— Functionsubset(varinfo::AbstractVarInfo, vns::AbstractVector{<:VarName})
Subset a varinfo
to only contain the variables vns
.
The ordering of the variables in the resulting varinfo
is not guaranteed to follow the ordering of the variables in varinfo
. Hence care must be taken, in particular when used in conjunction with other methods which use the vector representation of the varinfo
, e.g. getindex(varinfo, sampler)
.
Examples
julia> @model function demo()
+ s ~ InverseGamma(2, 3)
+ m ~ Normal(0, sqrt(s))
+ x = Vector{Float64}(undef, 2)
+ x[1] ~ Normal(m, sqrt(s))
+ x[2] ~ Normal(m, sqrt(s))
+ end
+demo (generic function with 2 methods)
+
+julia> model = demo();
+
+julia> varinfo = VarInfo(model);
+
+julia> keys(varinfo)
+4-element Vector{VarName}:
+ s
+ m
+ x[1]
+ x[2]
+
+julia> for (i, vn) in enumerate(keys(varinfo))
+ varinfo[vn] = i
+ end
+
+julia> varinfo[[@varname(s), @varname(m), @varname(x[1]), @varname(x[2])]]
+4-element Vector{Float64}:
+ 1.0
+ 2.0
+ 3.0
+ 4.0
+
+julia> # Extract one with only `m`.
+ varinfo_subset1 = subset(varinfo, [@varname(m),]);
+
+
+julia> keys(varinfo_subset1)
+1-element Vector{VarName{:m, typeof(identity)}}:
+ m
+
+julia> varinfo_subset1[@varname(m)]
+2.0
+
+julia> # Extract one with both `s` and `x[2]`.
+ varinfo_subset2 = subset(varinfo, [@varname(s), @varname(x[2])]);
+
+julia> keys(varinfo_subset2)
+2-element Vector{VarName}:
+ s
+ x[2]
+
+julia> varinfo_subset2[[@varname(s), @varname(x[2])]]
+2-element Vector{Float64}:
+ 1.0
+ 4.0
subset
is particularly useful when combined with merge(varinfo::AbstractVarInfo)
julia> # Merge the two.
+ varinfo_subset_merged = merge(varinfo_subset1, varinfo_subset2);
+
+julia> keys(varinfo_subset_merged)
+3-element Vector{VarName}:
+ m
+ s
+ x[2]
+
+julia> varinfo_subset_merged[[@varname(s), @varname(m), @varname(x[2])]]
+3-element Vector{Float64}:
+ 1.0
+ 2.0
+ 4.0
+
+julia> # Merge the two with the original.
+ varinfo_merged = merge(varinfo, varinfo_subset_merged);
+
+julia> keys(varinfo_merged)
+4-element Vector{VarName}:
+ s
+ m
+ x[1]
+ x[2]
+
+julia> varinfo_merged[[@varname(s), @varname(m), @varname(x[1]), @varname(x[2])]]
+4-element Vector{Float64}:
+ 1.0
+ 2.0
+ 3.0
+ 4.0
Notes
Type-stability
This function is only type-stable when vns
contains only varnames with the same symbol. For example, [@varname(m[1]), @varname(m[2])]
will be type-stable, but [@varname(m[1]), @varname(x)]
will not be.
DynamicPPL.unflatten
— Functionunflatten(original, x::AbstractVector)
Return instance of original
constructed from x
.
unflatten(vi::AbstractVarInfo[, context::AbstractContext], x::AbstractVector)
Return a new instance of vi
with the values of x
assigned to the variables.
If context
is provided, x
is assumed to contain realizations only for the variables not filtered out by context
.
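A small sketch of the varinfo method, using values_as to obtain the flattened representation:
julia> @model demo_unflatten() = m ~ Normal()
+demo_unflatten (generic function with 2 methods)
+
+julia> vi = VarInfo(demo_unflatten());
+
+julia> x = values_as(vi, Vector);  # flattened values
+
+julia> vi_new = DynamicPPL.unflatten(vi, x .+ 1);
+
+julia> vi_new[@varname(m)] ≈ vi[@varname(m)] + 1
+true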
DynamicPPL.varname_leaves
— Functionvarname_leaves(vn::VarName, val)
Return an iterator over all varnames that are represented by vn
on val
.
Examples
julia> using DynamicPPL: varname_leaves
+
+julia> foreach(println, varname_leaves(@varname(x), rand(2)))
+x[1]
+x[2]
+
+julia> foreach(println, varname_leaves(@varname(x[1:2]), rand(2)))
+x[1:2][1]
+x[1:2][2]
+
+julia> x = (y = 1, z = [[2.0], [3.0]]);
+
+julia> foreach(println, varname_leaves(@varname(x), x))
+x.y
+x.z[1][1]
+x.z[2][1]
DynamicPPL.varname_and_value_leaves
— Functionvarname_and_value_leaves(vn::VarName, val)
Return an iterator over all varname-value pairs that are represented by vn
on val
.
Examples
julia> using DynamicPPL: varname_and_value_leaves
+
+julia> foreach(println, varname_and_value_leaves(@varname(x), 1:2))
+(x[1], 1)
+(x[2], 2)
+
+julia> foreach(println, varname_and_value_leaves(@varname(x[1:2]), 1:2))
+(x[1:2][1], 1)
+(x[1:2][2], 2)
+
+julia> x = (y = 1, z = [[2.0], [3.0]]);
+
+julia> foreach(println, varname_and_value_leaves(@varname(x), x))
+(x.y, 1)
+(x.z[1][1], 2.0)
+(x.z[2][1], 3.0)
There are also some special handling for certain types:
julia> using LinearAlgebra
+
+julia> x = reshape(1:4, 2, 2);
+
+julia> # `LowerTriangular`
+ foreach(println, varname_and_value_leaves(@varname(x), LowerTriangular(x)))
+(x[1, 1], 1)
+(x[2, 1], 2)
+(x[2, 2], 4)
+
+julia> # `UpperTriangular`
+ foreach(println, varname_and_value_leaves(@varname(x), UpperTriangular(x)))
+(x[1, 1], 1)
+(x[1, 2], 3)
+(x[2, 2], 4)
+
+julia> # `Cholesky` with lower-triangular
+ foreach(println, varname_and_value_leaves(@varname(x), Cholesky([1.0 0.0; 0.0 1.0], 'L', 0)))
+(x.L[1, 1], 1.0)
+(x.L[2, 1], 0.0)
+(x.L[2, 2], 1.0)
+
+julia> # `Cholesky` with upper-triangular
+ foreach(println, varname_and_value_leaves(@varname(x), Cholesky([1.0 0.0; 0.0 1.0], 'U', 0)))
+(x.U[1, 1], 1.0)
+(x.U[1, 2], 0.0)
+(x.U[2, 2], 1.0)
SimpleVarInfo
DynamicPPL.SimpleVarInfo
— Typestruct SimpleVarInfo{NT, T, C<:DynamicPPL.AbstractTransformation} <: AbstractVarInfo
A simple wrapper of the parameters with a logp
field for accumulation of the logdensity.
Currently only implemented for NT<:NamedTuple
and NT<:AbstractDict
.
Fields
- values: underlying representation of the realization
- logp: holds the accumulated log-probability
- transformation: represents whether it assumes variables to be transformed
Notes
The major differences between this and TypedVarInfo
are:
- SimpleVarInfo does not require linearization.
- SimpleVarInfo can use more efficient bijectors.
- SimpleVarInfo is only type-stable if NT<:NamedTuple and either a) no indexing is used in tilde-statements, or b) the values have been specified with the correct shapes.
Examples
General usage
julia> using StableRNGs
+
+julia> @model function demo()
+ m ~ Normal()
+ x = Vector{Float64}(undef, 2)
+ for i in eachindex(x)
+ x[i] ~ Normal()
+ end
+ return x
+ end
+demo (generic function with 2 methods)
+
+julia> m = demo();
+
+julia> rng = StableRNG(42);
+
+julia> ### Sampling ###
+ ctx = SamplingContext(rng, SampleFromPrior(), DefaultContext());
+
+julia> # In the `NamedTuple` version we need to provide the place-holder values for
+ # the variables which are using "containers", e.g. `Array`.
+ # In this case, this means that we need to specify `x` but not `m`.
+ _, vi = DynamicPPL.evaluate!!(m, SimpleVarInfo((x = ones(2), )), ctx);
+
+julia> # (✓) Vroom, vroom! FAST!!!
+ vi[@varname(x[1])]
+0.4471218424633827
+
+julia> # We can also access arbitrary varnames pointing to `x`, e.g.
+ vi[@varname(x)]
+2-element Vector{Float64}:
+ 0.4471218424633827
+ 1.3736306979834252
+
+julia> vi[@varname(x[1:2])]
+2-element Vector{Float64}:
+ 0.4471218424633827
+ 1.3736306979834252
+
+julia> # (×) If we don't provide the container...
+ _, vi = DynamicPPL.evaluate!!(m, SimpleVarInfo(), ctx); vi
+ERROR: type NamedTuple has no field x
+[...]
+
+julia> # If one does not know the varnames, we can use a `OrderedDict` instead.
+ _, vi = DynamicPPL.evaluate!!(m, SimpleVarInfo{Float64}(OrderedDict()), ctx);
+
+julia> # (✓) Sort of fast, but only possible at runtime.
+ vi[@varname(x[1])]
+-1.019202452456547
+
+julia> # In addition, we can only access varnames as they appear in the model!
+ vi[@varname(x)]
+ERROR: KeyError: key x not found
+[...]
+
+julia> vi[@varname(x[1:2])]
+ERROR: KeyError: key x[1:2] not found
+[...]
Technically, it's possible to use any implementation of AbstractDict
in place of OrderedDict
, but OrderedDict
ensures that certain operations, e.g. linearization/flattening of the values in the varinfo, are consistent between evaluations. Hence OrderedDict
is the preferred implementation of AbstractDict
to use here.
You can also sample in transformed space:
julia> @model demo_constrained() = x ~ Exponential()
+demo_constrained (generic function with 2 methods)
+
+julia> m = demo_constrained();
+
+julia> _, vi = DynamicPPL.evaluate!!(m, SimpleVarInfo(), ctx);
+
+julia> vi[@varname(x)] # (✓) 0 ≤ x < ∞
+1.8632965762164932
+
+julia> _, vi = DynamicPPL.evaluate!!(m, DynamicPPL.settrans!!(SimpleVarInfo(), true), ctx);
+
+julia> vi[@varname(x)] # (✓) -∞ < x < ∞
+-0.21080155351918753
+
+julia> xs = [last(DynamicPPL.evaluate!!(m, DynamicPPL.settrans!!(SimpleVarInfo(), true), ctx))[@varname(x)] for i = 1:10];
+
+julia> any(xs .< 0) # (✓) Positive probability mass on negative numbers!
+true
+
+julia> # And with `OrderedDict` of course!
+ _, vi = DynamicPPL.evaluate!!(m, DynamicPPL.settrans!!(SimpleVarInfo(OrderedDict()), true), ctx);
+
+julia> vi[@varname(x)] # (✓) -∞ < x < ∞
+0.6225185067787314
+
+julia> xs = [last(DynamicPPL.evaluate!!(m, DynamicPPL.settrans!!(SimpleVarInfo(), true), ctx))[@varname(x)] for i = 1:10];
+
+julia> any(xs .< 0) # (✓) Positive probability mass on negative numbers!
+true
Evaluation in transformed space of course also works:
julia> vi = DynamicPPL.settrans!!(SimpleVarInfo((x = -1.0,)), true)
+Transformed SimpleVarInfo((x = -1.0,), 0.0)
+
+julia> # (✓) Positive probability mass on negative numbers!
+ getlogp(last(DynamicPPL.evaluate!!(m, vi, DynamicPPL.DefaultContext())))
+-1.3678794411714423
+
+julia> # While if we forget to indicate that it's transformed:
+ vi = DynamicPPL.settrans!!(SimpleVarInfo((x = -1.0,)), false)
+SimpleVarInfo((x = -1.0,), 0.0)
+
+julia> # (✓) No probability mass on negative numbers!
+ getlogp(last(DynamicPPL.evaluate!!(m, vi, DynamicPPL.DefaultContext())))
+-Inf
Indexing
Using NamedTuple
as underlying storage.
julia> svi_nt = SimpleVarInfo((m = (a = [1.0], ), ));
+
+julia> svi_nt[@varname(m)]
+(a = [1.0],)
+
+julia> svi_nt[@varname(m.a)]
+1-element Vector{Float64}:
+ 1.0
+
+julia> svi_nt[@varname(m.a[1])]
+1.0
+
+julia> svi_nt[@varname(m.a[2])]
+ERROR: BoundsError: attempt to access 1-element Vector{Float64} at index [2]
+[...]
+
+julia> svi_nt[@varname(m.b)]
+ERROR: type NamedTuple has no field b
+[...]
Using OrderedDict
as underlying storage.
julia> svi_dict = SimpleVarInfo(OrderedDict(@varname(m) => (a = [1.0], )));
+
+julia> svi_dict[@varname(m)]
+(a = [1.0],)
+
+julia> svi_dict[@varname(m.a)]
+1-element Vector{Float64}:
+ 1.0
+
+julia> svi_dict[@varname(m.a[1])]
+1.0
+
+julia> svi_dict[@varname(m.a[2])]
+ERROR: BoundsError: attempt to access 1-element Vector{Float64} at index [2]
+[...]
+
+julia> svi_dict[@varname(m.b)]
+ERROR: type NamedTuple has no field b
+[...]
VarInfo
Another data structure is VarInfo
.
DynamicPPL.VarInfo
— Typestruct VarInfo{Tmeta, Tlogp} <: AbstractVarInfo
+ metadata::Tmeta
+ logp::Base.RefValue{Tlogp}
+ num_produce::Base.RefValue{Int}
+end
A light wrapper over one or more instances of Metadata
. Let vi
be an instance of VarInfo
. If vi isa VarInfo{<:Metadata}
, then only one Metadata
instance is used for all the symbols. VarInfo{<:Metadata} is aliased to UntypedVarInfo
. If vi isa VarInfo{<:NamedTuple}
, then vi.metadata
is a NamedTuple
that maps each symbol used on the LHS of ~
in the model to its Metadata
instance. The latter allows for the type specialization of vi
after the first sampling iteration when all the symbols have been observed. VarInfo{<:NamedTuple}
is aliased to TypedVarInfo
.
Note: It is the user's responsibility to ensure that each "symbol" is visited at least once whenever the model is called, regardless of any stochastic branching. Each symbol refers to a Julia variable and can be a hierarchical array of many random variables, e.g. x[1] ~ ...
and x[2] ~ ...
both have the same symbol x
.
DynamicPPL.TypedVarInfo
— TypeTypedVarInfo(vi::UntypedVarInfo)
This function finds all the unique sym
s from the instances of VarName{sym}
found in vi.metadata.vns
. It then extracts the metadata associated with each symbol from the global vi.metadata
field. Finally, a new VarInfo
is created with a new metadata
as a NamedTuple
mapping from symbols to type-stable Metadata
instances, one for each symbol.
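A small sketch of the conversion (assuming model(vi) is used to populate the untyped varinfo, as elsewhere in these docs):
julia> @model demo_typed() = x ~ Normal()
+demo_typed (generic function with 2 methods)
+
+julia> vi_untyped = VarInfo();  # `UntypedVarInfo`
+
+julia> demo_typed()(vi_untyped);
+
+julia> vi_typed = DynamicPPL.TypedVarInfo(vi_untyped);
+
+julia> vi_typed.metadata.x isa DynamicPPL.Metadata
+true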
One main characteristic of VarInfo
is that samples are stored in a linearized form.
DynamicPPL.link!
— Functionlink!(vi::VarInfo, spl::Sampler)
Transform the values of the random variables sampled by spl
in vi
from the support of their distributions to the Euclidean space and set their corresponding "trans"
flag values to true
.
DynamicPPL.invlink!
— Functioninvlink!(vi::VarInfo, spl::AbstractSampler)
Transform the values of the random variables sampled by spl
in vi
from the Euclidean space back to the support of their distributions and set their corresponding "trans"
flag values to false
.
DynamicPPL.set_flag!
— Functionset_flag!(vi::VarInfo, vn::VarName, flag::String)
Set vn
's value for flag
to true
in vi
.
DynamicPPL.unset_flag!
— Functionunset_flag!(vi::VarInfo, vn::VarName, flag::String)
Set vn
's value for flag
to false
in vi
.
DynamicPPL.is_flagged
— Functionis_flagged(vi::VarInfo, vn::VarName, flag::String)
Check whether vn
has a true value for flag
in vi
.
For Gibbs sampling the following functions were added.
DynamicPPL.setgid!
— Functionsetgid!(vi::VarInfo, gid::Selector, vn::VarName)
Add gid
to the set of sampler selectors associated with vn
in vi
.
DynamicPPL.updategid!
— Functionupdategid!(vi::VarInfo, vn::VarName, spl::Sampler)
Set vn
's gid
to Set([spl.selector])
, if vn
does not have a sampler selector linked and vn
's symbol is in the space of spl
.
The following functions were used for sequential Monte Carlo methods.
DynamicPPL.get_num_produce
— Functionget_num_produce(vi::VarInfo)
Return the num_produce
of vi
.
DynamicPPL.set_num_produce!
— Functionset_num_produce!(vi::VarInfo, n::Int)
Set the num_produce
field of vi
to n
.
DynamicPPL.increment_num_produce!
— Functionincrement_num_produce!(vi::VarInfo)
Add 1 to num_produce
in vi
.
DynamicPPL.reset_num_produce!
— Functionreset_num_produce!(vi::VarInfo)
Reset the value of num_produce in vi to 0.
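A minimal sketch of the counter in action:
julia> vi = VarInfo();
+
+julia> DynamicPPL.get_num_produce(vi)
+0
+
+julia> DynamicPPL.increment_num_produce!(vi);
+
+julia> DynamicPPL.get_num_produce(vi)
+1
+
+julia> DynamicPPL.reset_num_produce!(vi);
+
+julia> DynamicPPL.get_num_produce(vi)
+0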
DynamicPPL.setorder!
— Functionsetorder!(vi::VarInfo, vn::VarName, index::Int)
Set the order
of vn
in vi
to index
, where order
is the number of observe statements run before sampling vn.
DynamicPPL.set_retained_vns_del_by_spl!
— Functionset_retained_vns_del_by_spl!(vi::VarInfo, spl::Sampler)
Set the "del"
flag of variables in vi
with order > vi.num_produce[]
to true
.
Base.empty!
— Functionempty!(meta::Metadata)
Empty the fields of meta
.
This is useful when using a sampling algorithm that assumes an empty meta
, e.g. SMC
.
Evaluation Contexts
Internally, both sampling and evaluation of log densities are performed with AbstractPPL.evaluate!!
.
AbstractPPL.evaluate!!
— Functionevaluate!!(model::Model[, rng, varinfo, sampler, context])
Sample from the model
using the sampler
with random number generator rng
and the context
, and store the sample and log joint probability in varinfo
.
Returns both the return-value of the original model, and the resulting varinfo.
The method resets the log joint probability of varinfo
and increases the evaluation number of sampler
.
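A brief sketch of a plain evaluation, mirroring usage elsewhere in these docs:
julia> @model demo_eval() = x ~ Normal()
+demo_eval (generic function with 2 methods)
+
+julia> model = demo_eval();
+
+julia> vi = VarInfo(model);  # holds a value for `x` sampled from the prior
+
+julia> retval, vi = DynamicPPL.evaluate!!(model, vi, DefaultContext());
+
+julia> getlogp(vi) ≈ logpdf(Normal(), vi[@varname(x)])
+true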
The behaviour of a model execution can be changed with evaluation contexts that are passed as an additional argument to the model function. Contexts are subtypes of AbstractPPL.AbstractContext
.
DynamicPPL.SamplingContext
— TypeSamplingContext(
+ [rng::Random.AbstractRNG=Random.default_rng()],
+ [sampler::AbstractSampler=SampleFromPrior()],
+ [context::AbstractContext=DefaultContext()],
+)
Create a context that allows you to sample parameters with the sampler
when running the model. The context
determines how the returned log density is computed when running the model.
See also: DefaultContext
, LikelihoodContext
, PriorContext
DynamicPPL.DefaultContext
— Typestruct DefaultContext <: AbstractContext end
The DefaultContext
is used by default to compute the log joint probability of the data and parameters when running the model.
DynamicPPL.LikelihoodContext
— Typestruct LikelihoodContext{Tvars} <: AbstractContext
+ vars::Tvars
+end
The LikelihoodContext
enables the computation of the log likelihood of the parameters when running the model. vars
can be used to evaluate the log likelihood for specific values of the model's parameters. If vars
is nothing
, the parameter values inside the VarInfo
will be used by default.
DynamicPPL.PriorContext
— Typestruct PriorContext{Tvars} <: AbstractContext
+ vars::Tvars
+end
The PriorContext
enables the computation of the log prior of the parameters vars
when running the model.
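These contexts underpin, for instance, the decomposition of the log joint into log prior plus log likelihood. A brief sketch (assuming the logprior, loglikelihood and logjoint helpers provided by DynamicPPL):
julia> @model function demo_decompose(x)
+           m ~ Normal()
+           x ~ Normal(m, 1)
+       end
+demo_decompose (generic function with 2 methods)
+
+julia> model = demo_decompose(0.5);
+
+julia> vi = VarInfo(model);
+
+julia> logprior(model, vi) + loglikelihood(model, vi) ≈ logjoint(model, vi)
+true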
DynamicPPL.MiniBatchContext
— Typestruct MiniBatchContext{Tctx, T} <: AbstractContext
+ context::Tctx
+ loglike_scalar::T
+end
The MiniBatchContext
enables the computation of log(prior) + s * log(likelihood of a batch)
when running the model, where s
is the loglike_scalar
field, typically equal to the number of data points / batch size
. This is useful in batch-based stochastic gradient descent algorithms, where it allows optimizing log(prior) + log(likelihood of all the data points) in expectation.
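As a minimal sketch, constructing the context directly from the two fields shown above (here for batches of 100 points from a dataset of 1000 points, so the batch likelihood is scaled by 10):
julia> ctx = MiniBatchContext(DefaultContext(), 1000 / 100);
+
+julia> ctx.loglike_scalar
+10.0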
DynamicPPL.PrefixContext
— TypePrefixContext{Prefix}(context)
Create a context that allows you to use the wrapped context
when running the model and adds the Prefix
to all parameters.
This context is useful in nested models to ensure that the names of the parameters are unique.
See also: @submodel
Samplers
In DynamicPPL two samplers are defined that are used to initialize unobserved random variables: SampleFromPrior
which samples from the prior distribution, and SampleFromUniform
which samples from a uniform distribution.
DynamicPPL.SampleFromPrior
— TypeSampleFromPrior
Sampling algorithm that samples unobserved random variables from their prior distribution.
DynamicPPL.SampleFromUniform
— TypeSampleFromUniform
Sampling algorithm that samples unobserved random variables from a uniform distribution.
Additionally, a generic sampler for inference is implemented.
DynamicPPL.Sampler
— TypeSampler{T}
Generic sampler type for inference algorithms of type T
in DynamicPPL.
Sampler
should implement the AbstractMCMC interface, and in particular AbstractMCMC.step
. A default implementation of the initial sampling step is provided that supports resuming sampling from a previous state and setting initial parameter values. It requires overloading loadstate
and initialstep
for loading previous states and actually performing the initial sampling step, respectively. Additionally, sometimes one might want to implement initialsampler
that specifies how the initial parameter values are sampled if they are not provided. By default, values are sampled from the prior.
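A minimal sketch of hooking a hypothetical algorithm into this machinery (the type MyAlg and the trivial method bodies are purely illustrative, not part of DynamicPPL):
using DynamicPPL, Random
+
+# Hypothetical inference algorithm; wrap it as `Sampler(MyAlg())` for use
+# with the AbstractMCMC interface.
+struct MyAlg end
+
+# Optional: draw initial parameter values from a uniform distribution
+# instead of the prior (the default).
+DynamicPPL.initialsampler(::DynamicPPL.Sampler{<:MyAlg}) = SampleFromUniform()
+
+# Required: perform the first sampling step. As a placeholder, we return the
+# initialized varinfo both as the transition and as the sampler state.
+function DynamicPPL.initialstep(
+    rng::Random.AbstractRNG,
+    model::DynamicPPL.Model,
+    spl::DynamicPPL.Sampler{<:MyAlg},
+    vi::DynamicPPL.AbstractVarInfo;
+    kwargs...,
+)
+    return vi, vi
+end
+
+# Subsequent steps would then be implemented by overloading `AbstractMCMC.step`.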
The default implementation of Sampler
uses the following unexported functions.
DynamicPPL.initialstep
— Functioninitialstep(rng, model, sampler, varinfo; kwargs...)
Perform the initial sampling step of the sampler
for the model
.
The varinfo
contains the initial samples, which can be provided by the user or sampled randomly.
DynamicPPL.loadstate
— Functionloadstate(data)
Load sampler state from data
.
By default, data
is returned.
DynamicPPL.initialsampler
— Functioninitialsampler(sampler::Sampler)
Return the sampler that is used for generating the initial parameters when sampling with sampler
.
By default, it returns an instance of SampleFromPrior
.
Model-Internal Functions
DynamicPPL.tilde_assume
— Functiontilde_assume(context::SamplingContext, right, vn, vi)
Handle assumed variables, e.g., x ~ Normal()
(where x
does not occur in the model inputs), accumulate the log probability, and return the sampled value with a context associated with a sampler.
Falls back to
tilde_assume(context.rng, context.context, context.sampler, right, vn, vi)
DynamicPPL.dot_tilde_assume
— Functiondot_tilde_assume(context::SamplingContext, right, left, vn, vi)
Handle broadcasted assumed variables, e.g., x .~ MvNormal()
(where x
does not occur in the model inputs), accumulate the log probability, and return the sampled value for a context associated with a sampler.
Falls back to
dot_tilde_assume(context.rng, context.context, context.sampler, right, left, vn, vi)
DynamicPPL.tilde_observe
— Functiontilde_observe(context::SamplingContext, right, left, vi)
Handle observed constants with a context
associated with a sampler.
Falls back to tilde_observe(context.context, context.sampler, right, left, vi)
.
DynamicPPL.dot_tilde_observe
— Functiondot_tilde_observe(context::SamplingContext, right, left, vi)
Handle broadcasted observed constants, e.g., [1.0] .~ MvNormal()
, accumulate the log probability, and return the observed value for a context associated with a sampler.
Falls back to dot_tilde_observe(context.context, context.sampler, right, left, vi)
.