
API

Part of the API of DynamicPPL is defined in the more lightweight interface package AbstractPPL.jl and reexported here.

Model

Macros

A core component of DynamicPPL is the @model macro. It can be used to define probabilistic models in an intuitive way by specifying random variables and their distributions with ~ statements. These statements are rewritten by @model as calls of internal functions for sampling the variables and computing their log densities.

DynamicPPL.@model (Macro)
@model(expr[, warn = false])

Macro to specify a probabilistic model.

If warn is true, a warning is displayed if internal variable names are used in the model definition.

Examples

Model definition:

@model function model(x, y = 42)
     ...
end

To generate a Model, call model(xvalue) or model(xvalue, yvalue).

source
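
For instance, generating and calling a Model might look as follows (a minimal sketch; the model body and argument values are illustrative assumptions, since the docstring elides them):

julia> using DynamicPPL, Distributions

julia> @model function model(x, y = 42)
           x ~ Normal(y, 1)
       end;

julia> m = model(1.0);       # uses the default y = 42

julia> m = model(1.0, 0.0);  # specifies both xvalue and yvalue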

One can nest models and call another model inside the model function with @submodel.

DynamicPPL.@submodel (Macro)
@submodel model
 @submodel ... = model

Run a Turing model nested inside of a Turing model.

Examples

julia> @model function demo1(x)
            x ~ Normal()
            return 1 + abs(x)
[...]
 false

We can check that the log joint probability of the model accumulated in vi is correct:

julia> x = vi[@varname(x)];
 
 julia> getlogp(vi) ≈ logpdf(Normal(), x) + logpdf(Uniform(0, 1 + abs(x)), 0.4)
true
source
@submodel prefix=... model
 @submodel prefix=... ... = model

Run a Turing model nested inside of a Turing model and add "prefix." as a prefix to all random variables inside of the model.

Valid expressions for prefix=... are:

  • prefix=false: no prefix is used.
  • prefix=true: attempt to automatically determine the prefix from the left-hand side ... = model by first converting into a VarName, and then calling Symbol on this.
  • prefix=expression: results in the prefix Symbol(expression).

The prefix makes it possible to run the same Turing model multiple times while keeping track of all random variables correctly.

Examples

Example models

julia> @model function demo1(x)
            x ~ Normal()
            return 1 + abs(x)
[...]
 julia> # (×) Automatic prefixing without a left-hand side expression does not work!
        @model submodel_prefix_error() = @submodel prefix=true inner()
 ERROR: LoadError: cannot automatically prefix with no left-hand side
[...]

Notes

  • The choice prefix=expression means that the prefixing will incur a runtime cost. This is also the case for prefix=true, depending on whether the expression on the right-hand side of ... = model requires runtime information or not, e.g. x = model will result in the static prefix x, while x[i] = model will be resolved at runtime.
source

Type

A Model can be created by calling the model function, as defined by @model.

DynamicPPL.Model (Type)
struct Model{F,argnames,defaultnames,missings,Targs,Tdefaults,Ctx<:AbstractContext}
     f::F
     args::NamedTuple{argnames,Targs}
     defaults::NamedTuple{defaultnames,Tdefaults}
[...]
 Model{typeof(f),(:x, :y),(:x,),(),Tuple{Float64,Float64},Tuple{Int64}}(f, (x = 1.0, y = 2.0), (x = 42,))
 
 julia> Model{(:y,)}(f, (x = 1.0, y = 2.0), (x = 42,)) # with special definition of missings
Model{typeof(f),(:x, :y),(:x,),(:y,),Tuple{Float64,Float64},Tuple{Int64}}(f, (x = 1.0, y = 2.0), (x = 42,))
source

Models are callable structs.

DynamicPPL.Model (Method)
(model::Model)([rng, varinfo, sampler, context])

Sample from the model using the sampler with random number generator rng and the context, and store the sample and log joint probability in varinfo.

The method resets the log joint probability of varinfo and increases the evaluation number of sampler.

source

Basic properties of a model can be accessed with getargnames, getmissings, and nameof.
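
For instance (a small sketch; the one-line model is an illustrative assumption):

julia> using DynamicPPL, Distributions

julia> @model demo(x) = x ~ Normal();

julia> m = demo(1.0);

julia> getargnames(m)
(:x,)

julia> getmissings(m)
()

julia> nameof(m)
:demo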

Evaluation

With rand one can draw samples from the prior distribution of a Model.

Base.rand (Function)
rand([rng=Random.default_rng()], [T=NamedTuple], model::Model)

Generate a sample of type T from the prior distribution of the model.

source
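
For example (a brief sketch; the model is an illustrative assumption):

julia> using DynamicPPL, Distributions

julia> @model function demo()
           m ~ Normal()
           x ~ Normal(m, 1)
       end;

julia> draw = rand(demo());  # a NamedTuple draw from the prior

julia> keys(draw)
(:m, :x)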

One can also evaluate the log prior, log likelihood, and log joint probability.
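
As a quick sketch of how the three relate (the model and values are illustrative assumptions):

julia> using DynamicPPL, Distributions

julia> @model function demo(x)
           m ~ Normal()
           x ~ Normal(m, 1)
       end;

julia> model = demo(0.5);

julia> logprior(model, (m = 0.0,)) ≈ logpdf(Normal(), 0.0)
true

julia> logjoint(model, (m = 0.0,)) ≈ logprior(model, (m = 0.0,)) + loglikelihood(model, (m = 0.0,))
true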

DynamicPPL.logprior (Function)
logprior(model::Model, varinfo::AbstractVarInfo)

Return the log prior probability of variables varinfo for the probabilistic model.

See also logjoint and loglikelihood.

source
logprior(model::Model, chain::AbstractMCMC.AbstractChains)

Return an array of log prior probabilities evaluated at each sample in an MCMC chain.

Examples

julia> using MCMCChains, Distributions
 
 julia> @model function demo_model(x)
            s ~ InverseGamma(2, 3)
[...]
 julia> # construct a chain of samples using MCMCChains
        chain = Chains(rand(10, 2, 3), [:s, :m]);
 
julia> logprior(demo_model([1., 2.]), chain);
source
logprior(model::Model, θ)

Return the log prior probability of variables θ for the probabilistic model.

See also logjoint and loglikelihood.

Examples

julia> @model function demo(x)
            m ~ Normal()
            for i in eachindex(x)
                x[i] ~ Normal(m, 1.0)
[...]
 
 julia> # Truth.
        logpdf(Normal(), 100.0)
-5000.918938533205
source
StatsAPI.loglikelihood (Function)
loglikelihood(model::Model, varinfo::AbstractVarInfo)

Return the log likelihood of variables varinfo for the probabilistic model.

See also logjoint and logprior.

source
loglikelihood(model::Model, chain::AbstractMCMC.AbstractChains)

Return an array of log likelihoods evaluated at each sample in an MCMC chain.

Examples

julia> using MCMCChains, Distributions
 
 julia> @model function demo_model(x)
            s ~ InverseGamma(2, 3)
[...]
 julia> # construct a chain of samples using MCMCChains
        chain = Chains(rand(10, 2, 3), [:s, :m]);
 
julia> loglikelihood(demo_model([1., 2.]), chain);
source
loglikelihood(model::Model, θ)

Return the log likelihood of variables θ for the probabilistic model.

See also logjoint and logprior.

Examples

julia> @model function demo(x)
            m ~ Normal()
            for i in eachindex(x)
                x[i] ~ Normal(m, 1.0)
[...]
 
 julia> # Truth.
        logpdf(Normal(100.0, 1.0), 1.0)
-4901.418938533205
source
DynamicPPL.logjoint (Function)
logjoint(model::Model, varinfo::AbstractVarInfo)

Return the log joint probability of variables varinfo for the probabilistic model.

See logprior and loglikelihood.

source
logjoint(model::Model, chain::AbstractMCMC.AbstractChains)

Return an array of log joint probabilities evaluated at each sample in an MCMC chain.

Examples

julia> using MCMCChains, Distributions
 
 julia> @model function demo_model(x)
            s ~ InverseGamma(2, 3)
[...]
 julia> # construct a chain of samples using MCMCChains
        chain = Chains(rand(10, 2, 3), [:s, :m]);
 
julia> logjoint(demo_model([1., 2.]), chain);
source
logjoint(model::Model, θ)

Return the log joint probability of variables θ for the probabilistic model.

See logprior and loglikelihood.

Examples

julia> @model function demo(x)
            m ~ Normal()
            for i in eachindex(x)
                x[i] ~ Normal(m, 1.0)
[...]
 
 julia> # Truth.
        logpdf(Normal(100.0, 1.0), 1.0) + logpdf(Normal(), 100.0)
-9902.33787706641
source

LogDensityProblems.jl interface

The LogDensityProblems.jl interface is also supported by simply wrapping a Model in a DynamicPPL.LogDensityFunction:

DynamicPPL.LogDensityFunction (Type)
LogDensityFunction

A callable representing a log density function of a model.

Fields

  • varinfo: varinfo used for evaluation

  • model: model used for evaluation

  • context: context used for evaluation; if nothing, leafcontext(model.context) will be used when applicable

Examples

julia> using Distributions
 
 julia> using DynamicPPL: LogDensityFunction, contextualize
 
[...]
        f_prior = LogDensityFunction(contextualize(model, DynamicPPL.PriorContext()), VarInfo(model));
 
 julia> LogDensityProblems.logdensity(f_prior, [0.0]) == logpdf(Normal(), 0.0)
true
source
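
A short usage sketch of the wrapped model (the one-line model is an illustrative assumption, and the varinfo argument is left at its default):

julia> using DynamicPPL, Distributions, LogDensityProblems

julia> @model demo() = x ~ Normal();

julia> f = DynamicPPL.LogDensityFunction(demo());

julia> LogDensityProblems.dimension(f)
1

julia> LogDensityProblems.logdensity(f, [0.25]) ≈ logpdf(Normal(), 0.25)
true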

Condition and decondition

A Model can be conditioned on a set of observations with AbstractPPL.condition or its alias |.

Base.:| (Method)
model | (x = 1.0, ...)

Return a Model which now treats variables on the right-hand side as observations.

See condition for more information and examples.

source
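
For example (a minimal sketch; the model is an illustrative assumption):

julia> using DynamicPPL, Distributions

julia> @model function demo()
           m ~ Normal()
           x ~ Normal(m, 1)
       end;

julia> conditioned_model = demo() | (x = 1.0,);

julia> keys(VarInfo(conditioned_model))  # only `m` remains random
1-element Vector{VarName{:m, typeof(identity)}}:
 m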

AbstractPPL.condition (Function)
condition(model::Model; values...)
 condition(model::Model, values::NamedTuple)

Return a Model which now treats the variables in values as observations.

See also: decondition, conditioned

Limitations

This currently does not work with variables that are provided to the model as arguments, e.g. for @model function demo(x) ... end, condition will not affect the variable x.

Therefore if one wants to make use of condition and decondition one should not be specifying any random variables as arguments.

This is done for the sake of backwards compatibility.

Examples

Simple univariate model

julia> using Distributions
 
 julia> @model function demo()
[...]
 
 julia> keys(VarInfo(demo_outer_prefix()))
 1-element Vector{VarName{Symbol("inner.m"), typeof(identity)}}:
 inner.m

From this we can tell what the correct way to condition m within demo_inner is in the two different models.

source
condition([context::AbstractContext,] values::NamedTuple)
condition([context::AbstractContext]; values...)

Return ConditionContext with values and context if values is non-empty, otherwise return context which is DefaultContext by default.

See also: decondition

source
DynamicPPL.conditioned (Function)
conditioned(model::Model)

Return the conditioned values in model.

Examples

julia> using Distributions
 
 julia> using DynamicPPL: conditioned, contextualize
 
[...]
 1.0
 
 julia> keys(VarInfo(cm)) # <= no variables are sampled
VarName[]
source
conditioned(context::AbstractContext)

Return a NamedTuple of values that are conditioned on under context.

Note that this will recursively traverse the context stack and return a merged version of the condition values.

source

Similarly, one can specify with AbstractPPL.decondition that certain, or all, random variables are not observed.

AbstractPPL.decondition (Function)
decondition(model::Model)
 decondition(model::Model, variables...)

Return a Model for which variables... are not considered observations. If no variables are provided, then all variables currently considered observations will no longer be.

This is essentially the inverse of condition. This also means that it suffers from the same limitations.

Note that currently we only support variables to take on explicit values provided to condition.

Examples

julia> using Distributions
 
 julia> @model function demo()
[...]
        deconditioned_model_2 = deconditioned_model | (@varname(m[1]) => missing);
 
 julia> m = deconditioned_model_2(); (m[1] ≠ 1.0 && m[2] == 2.0)
true
source
decondition(context::AbstractContext, syms...)

Return context but with syms no longer conditioned on.

Note that this recursively traverses contexts, deconditioning all along the way.

See also: condition

source

Fixing and unfixing

We can also fix a collection of variables in a Model to certain values using fix.

This might seem quite similar to the aforementioned condition and its siblings, but they are indeed different operations:

  • conditioned variables are considered to be observations, and are thus included in the computation of logjoint and loglikelihood, but not in logprior.
  • fixed variables are considered to be constant, and are thus not included in any log-probability computations.

The differences are more clearly spelled out in the docstring of fix below.
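
As a compact sketch of that difference (the model and values are illustrative assumptions; the relation mirrors the example in the fix docstring below):

julia> using DynamicPPL, Distributions

julia> @model function demo()
           m ~ Normal()
           x ~ Normal(m, 1)
       end;

julia> model_conditioned = condition(demo(), m = 1.0);

julia> model_fixed = fix(demo(), m = 1.0);

julia> # conditioning keeps logpdf(Normal(), 1.0) for `m` in the joint; fixing drops it
       logjoint(model_conditioned, (x = 0.5,)) ≈ logjoint(model_fixed, (x = 0.5,)) + logpdf(Normal(), 1.0)
true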

DynamicPPL.fix (Function)
fix(model::Model; values...)
 fix(model::Model, values::NamedTuple)

Return a Model which now treats the variables in values as fixed.

See also: unfix, fixed

Examples

Simple univariate model

julia> using Distributions
 
 julia> @model function demo()
[...]
 
 julia> # And the difference is the missing log-probability of `m`:
        logjoint(model_fixed, (x=1.0,)) + logpdf(Normal(), 1.0) == logjoint(model_conditioned, (x=1.0,))
true
source
fix([context::AbstractContext,] values::NamedTuple)
fix([context::AbstractContext]; values...)

Return FixedContext with values and context if values is non-empty, otherwise return context which is DefaultContext by default.

See also: unfix

source
DynamicPPL.fixed (Function)
fixed(model::Model)

Return the fixed values in model.

Examples

julia> using Distributions
 
 julia> using DynamicPPL: fixed, contextualize
 
[...]
 1.0
 
 julia> keys(VarInfo(cm)) # <= no variables are sampled
VarName[]
source
fixed(context::AbstractContext)

Return the values that are fixed under context.

Note that this will recursively traverse the context stack and return a merged version of the fix values.

source

The difference between fix and condition is described in the docstring of fix above.

Similarly, we can unfix variables, i.e. return them to their original meaning:

DynamicPPL.unfix (Function)
unfix(model::Model)
 unfix(model::Model, variables...)

Return a Model for which variables... are not considered fixed. If no variables are provided, then all variables currently considered fixed will no longer be.

This is essentially the inverse of fix. This also means that it suffers from the same limitations.

Note that currently we only support variables to take on explicit values provided to fix.

Examples

julia> using Distributions
 
 julia> @model function demo()
[...]
        unfixed_model_2 = fix(unfixed_model, @varname(m[1]) => missing);
 
 julia> m = unfixed_model_2(); (m[1] ≠ 1.0 && m[2] == 2.0)
true
source
unfix(context::AbstractContext, syms...)

Return context but with syms no longer fixed.

Note that this recursively traverses contexts, unfixing all along the way.

See also: fix

source

Utilities

It is possible to manually increase (or decrease) the accumulated log density from within a model function.

DynamicPPL.@addlogprob! (Macro)
@addlogprob!(ex)

Add the result of the evaluation of ex to the joint log probability.

Examples

This macro allows you to include arbitrary terms in the likelihood

julia> myloglikelihood(x, μ) = loglikelihood(Normal(μ, 1), x);
 
 julia> @model function demo(x)
            μ ~ Normal()
[...]
 true
 
 julia> loglikelihood(demo(x), (μ=0.2,)) ≈ myloglikelihood(x, 0.2)
true
source

Return values of the model function for a collection of samples can be obtained with generated_quantities.

DynamicPPL.generated_quantities (Function)
generated_quantities(model::Model, parameters::NamedTuple)
 generated_quantities(model::Model, values, keys)

Execute model with variables keys set to values and return the values returned by the model.

If a NamedTuple is given, keys=keys(parameters) and values=values(parameters).

Example

julia> using DynamicPPL, Distributions
 
[...]
 (0.0,)
 
 julia> generated_quantities(model, values(parameters), keys(parameters))
(0.0,)
source

For a chain of samples, one can compute the pointwise log-likelihoods of each observed random variable with pointwise_loglikelihoods, the log-densities of the priors with pointwise_prior_logdensities, or both, i.e. all variables, with pointwise_logdensities.

DynamicPPL.pointwise_logdensities (Function)
pointwise_logdensities(model::Model, chain::Chains, keytype = String)

Runs model on each sample in chain, returning an OrderedDict{String, Matrix{Float64}} with keys corresponding to symbols of the variables and values being matrices of shape (num_chains, num_samples).

keytype specifies the type of the keys used in the returned OrderedDict. Currently, only String and VarName are supported.

Notes

Say y is a Vector of n i.i.d. Normal(μ, σ) variables, with μ and σ both being <:Real. Then the observe (i.e. when the left-hand side is an observation) statements can be implemented in three ways:

  1. using a for loop:
for i in eachindex(y)
     y[i] ~ Normal(μ, σ)
 end
  2. using .~:
y .~ Normal(μ, σ)
  3. using MvNormal:
y ~ MvNormal(fill(μ, n), σ^2 * I)

In (1) and (2), y will be treated as a collection of n i.i.d. 1-dimensional variables, while in (3) y will be treated as a single n-dimensional observation.

This is important to keep in mind, in particular if the computation is used for downstream computations.

Examples

From chain

julia> using MCMCChains
 
[...]
 julia> m = demo([1.0; 1.0]);
 
 julia> ℓ = pointwise_logdensities(m, VarInfo(m)); first.((ℓ[@varname(x[1])], ℓ[@varname(x[2])]))
(-1.4189385332046727, -1.4189385332046727)
source
DynamicPPL.pointwise_loglikelihoods (Function)
pointwise_loglikelihoods(model, chain[, keytype, context])

Compute the pointwise log-likelihoods of the model given the chain. This is the same as pointwise_logdensities(model, chain, context), but only including the likelihood terms. See also: pointwise_logdensities.

source
DynamicPPL.pointwise_prior_logdensities (Function)
pointwise_prior_logdensities(model, chain[, keytype, context])

Compute the pointwise log-prior-densities of the model given the chain. This is the same as pointwise_logdensities(model, chain, context), but only including the prior terms. See also: pointwise_logdensities.

source

For converting a chain into a format that can more easily be fed into a Model again, for example using condition, you can use value_iterator_from_chain.

DynamicPPL.value_iterator_from_chain (Function)
value_iterator_from_chain(model::Model, chain)
 value_iterator_from_chain(varinfo::AbstractVarInfo, chain)

Return an iterator over the values in chain for each variable in model/varinfo.

Example

julia> using MCMCChains, DynamicPPL, Distributions, StableRNGs
 
 julia> rng = StableRNG(42);
[...]
        conditioned_model = model | first(iter);
 
 julia> conditioned_model()  # <= results in same values as the `first(iter)` above
(0.5805148626851955, 0.7393275279160691)
source

Sometimes it can be useful to extract the priors of a model. This is possible using extract_priors.

DynamicPPL.extract_priors (Function)
extract_priors([rng::Random.AbstractRNG, ]model::Model)

Extract the priors from a model.

This is done by sampling from the model and recording the distributions that are used to generate the samples.

Warning

Because the extraction is done by execution of the model, there are several caveats:

  1. If one variable, say, y ~ Normal(0, x), where x ~ Normal() is also a random variable, then the extracted prior will have different parameters in every extraction!
  2. If the model does not have static support, say, n ~ Categorical(1:10); x ~ MvNormal(zeros(n), I), then the extracted priors themselves will be different between extractions, not just their parameters.

Both of these caveats are demonstrated below.

Examples

Changing parameters

julia> using Distributions, StableRNGs
 
 julia> rng = StableRNG(42);
 
[...]
 6
 
 julia> length(extract_priors(rng, model)[@varname(x)])
9
source
extract_priors(model::Model, varinfo::AbstractVarInfo)

Extract the priors from a model.

This is done by evaluating the model at the values present in varinfo and recording the distributions that are present at each tilde statement.

source

Safe extraction of values from a given AbstractVarInfo as they are seen in the model can be done using values_as_in_model.

DynamicPPL.values_as_in_model (Function)
values_as_in_model(model::Model[, varinfo::AbstractVarInfo, context::AbstractContext])
 values_as_in_model(rng::Random.AbstractRNG, model::Model[, varinfo::AbstractVarInfo, context::AbstractContext])

Get the values of varinfo as they would be seen in the model.

If no varinfo is provided, then this is effectively the same as Base.rand(rng::Random.AbstractRNG, model::Model).

More specifically, this method attempts to extract the realization as seen in the model. For example, x[1] ~ truncated(Normal(); lower=0) will result in a realization compatible with truncated(Normal(); lower=0) regardless of whether varinfo is working in unconstrained space.

Hence this method is a "safe" way of obtaining realizations in constrained space at the cost of additional model evaluations.

Arguments

  • model::Model: model to extract realizations from.
  • varinfo::AbstractVarInfo: variable information to use for the extraction.
  • context::AbstractContext: context to use for the extraction. If rng is specified, then context will be wrapped in a SamplingContext with the provided rng.

Examples

When VarInfo fails

The following demonstrates a common pitfall when working with VarInfo and constrained variables.

julia> using Distributions, StableRNGs
 
 julia> rng = StableRNG(42);
[...]
        # (✓) `values_as_in_model` will re-run the model and extract
        # the correct realization of `y` given the new values of `x`.
        lb ≤ values_as_in_model(model, varinfo_linked)[@varname(y)] ≤ ub
true
source
DynamicPPL.NamedDist (Type)

A named distribution that carries the name of the random variable with it.

source

Testing Utilities

DynamicPPL provides several demo models and helpers for testing samplers in the DynamicPPL.TestUtils submodule.

DynamicPPL.TestUtils.test_sampler (Function)
test_sampler(models, sampler, args...; kwargs...)

Test that sampler produces correct marginal posterior means on each model in models.

In short, this method iterates through models, calls AbstractMCMC.sample on the model and sampler to produce a chain, and then checks marginal_mean_of_samples(chain, vn) for every (leaf) varname vn against the corresponding value returned by posterior_mean for each model.

To change how comparison is done for a particular chain type, one can overload marginal_mean_of_samples for the corresponding type.

Arguments

  • models: A collection of instances of DynamicPPL.Model to test on.
  • sampler: The AbstractMCMC.AbstractSampler to test.
  • args...: Arguments forwarded to sample.

Keyword arguments

  • varnames_filter: A filter to apply to varnames(model), allowing comparison for only a subset of the varnames.
  • atol=1e-1: Absolute tolerance used in @test.
  • rtol=1e-3: Relative tolerance used in @test.
  • kwargs...: Keyword arguments forwarded to sample.
source
DynamicPPL.TestUtils.test_sampler_on_demo_models (Function)
test_sampler_on_demo_models(meanfunction, sampler, args...; kwargs...)

Test sampler on every model in DEMO_MODELS.

This is just a proxy for test_sampler(meanfunction, DEMO_MODELS, sampler, args...; kwargs...).

source
DynamicPPL.TestUtils.test_sampler_continuous (Function)
test_sampler_continuous(sampler, args...; kwargs...)

Test that sampler produces the correct marginal posterior means on all models in demo_models.

As of right now, this is just an alias for test_sampler_on_demo_models.

source
DynamicPPL.TestUtils.marginal_mean_of_samples (Function)
marginal_mean_of_samples(chain, varname)

Return the mean of variable represented by varname in chain.

source
DynamicPPL.TestUtils.DEMO_MODELS (Constant)

A collection of models corresponding to the posterior distribution defined by the generative process

s ~ InverseGamma(2, 3)
 m ~ Normal(0, √s)
 1.5 ~ Normal(m, √s)
 2.0 ~ Normal(m, √s)

or by

s[1] ~ InverseGamma(2, 3)
[...]
 mean(m) == 7 / 6

And for the multivariate one (the latter one):

mean(s[1]) == 19 / 8
 mean(m[1]) == 3 / 4
 mean(s[2]) == 8 / 3
mean(m[2]) == 1
source

For every demo model, one can define the true log prior, log likelihood, and log joint probabilities.
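
For instance, for one of the demo models one would expect the following relation to hold (a sketch assuming all demo models use the parameters s and m, as in the generative processes above):

julia> using DynamicPPL

julia> model = DynamicPPL.TestUtils.DEMO_MODELS[1];

julia> (; s, m) = rand(model);  # draw values compatible with the model's parameters

julia> DynamicPPL.TestUtils.logjoint_true(model, s, m) ≈
           DynamicPPL.TestUtils.logprior_true(model, s, m) + DynamicPPL.TestUtils.loglikelihood_true(model, s, m)
true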

DynamicPPL.TestUtils.logprior_true (Function)
logprior_true(model, args...)

Return the logprior of model for args.

This should generally be implemented by hand for every specific model.

See also: logjoint_true, loglikelihood_true.

source
DynamicPPL.TestUtils.loglikelihood_true (Function)
loglikelihood_true(model, args...)

Return the loglikelihood of model for args.

This should generally be implemented by hand for every specific model.

See also: logjoint_true, logprior_true.

source
DynamicPPL.TestUtils.logjoint_true (Function)
logjoint_true(model, args...)

Return the logjoint of model for args.

Defaults to logprior_true(model, args...) + loglikelihood_true(model, args...).

This should generally be implemented by hand for every specific model so that the returned value can be used as a ground-truth for testing things like:

  1. Validity of evaluation of model using a particular implementation of AbstractVarInfo.
  2. Validity of a sampler when combined with DynamicPPL by running the sampler twice: once targeting ground-truth functions, e.g. logjoint_true, and once targeting model.

And more.

See also: logprior_true, loglikelihood_true.

source

And in the case where the model includes constrained variables, it can also be useful to define

DynamicPPL.TestUtils.logprior_true_with_logabsdet_jacobian (Function)
logprior_true_with_logabsdet_jacobian(model::Model, args...)

Return a tuple (args_unconstrained, logprior_unconstrained) of model for args....

Unlike logprior_true, the returned logprior computation includes the log-absdet-jacobian adjustment, thus computing logprior for the unconstrained variables.

Note that args are assumed to be in the support of model, while args_unconstrained will be unconstrained.

See also: logprior_true.

source
DynamicPPL.TestUtils.logjoint_true_with_logabsdet_jacobian (Function)
logjoint_true_with_logabsdet_jacobian(model::Model, args...)

Return a tuple (args_unconstrained, logjoint) of model for args.

Unlike logjoint_true, the returned logjoint computation includes the log-absdet-jacobian adjustment, thus computing logjoint for the unconstrained variables.

Note that args are assumed to be in the support of model, while args_unconstrained will be unconstrained.

This should generally not be implemented directly, instead one should implement logprior_true_with_logabsdet_jacobian for a given model.

See also: logjoint_true, logprior_true_with_logabsdet_jacobian.

source

Finally, the following methods can also be of use:

DynamicPPL.TestUtils.varnames (Function)
varnames(model::Model)

Return a collection of VarName as they are expected to appear in the model.

Even though it is recommended to implement this by hand for a particular Model, a default implementation using SimpleVarInfo{<:Dict} is provided.

source
DynamicPPL.TestUtils.posterior_mean (Function)
posterior_mean(model::Model)

Return a NamedTuple compatible with varnames(model) where the values represent the posterior mean under model.

"Compatible" means that a varname from varnames(model) can be used to extract the corresponding value using get, e.g. get(posterior_mean(model), varname).

source
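
A small sketch combining varnames and posterior_mean, following the get usage from the docstring above:

julia> using DynamicPPL

julia> model = DynamicPPL.TestUtils.DEMO_MODELS[1];

julia> vns = DynamicPPL.TestUtils.varnames(model);

julia> means = DynamicPPL.TestUtils.posterior_mean(model);

julia> get(means, first(vns));  # posterior mean of the first varname
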
DynamicPPL.TestUtils.setup_varinfos (Function)
setup_varinfos(model::Model, example_values::NamedTuple, varnames; include_threadsafe::Bool=false)

Return a tuple of instances for different implementations of AbstractVarInfo with each vi, supposedly, satisfying vi[vn] == get(example_values, vn) for vn in varnames.

If include_threadsafe is true, then the returned tuple will also include thread-safe versions of the varinfo instances.

source
DynamicPPL.update_values!! (Function)
update_values!!(vi::AbstractVarInfo, vals::NamedTuple, vns)

Return instance similar to vi but with vns set to values from vals.

source
DynamicPPL.TestUtils.test_values (Function)
test_values(vi::AbstractVarInfo, vals::NamedTuple, vns)

Test that vi[vn] corresponds to the correct value in vals for every vn in vns.

source

Debugging Utilities

DynamicPPL provides a few methods for checking validity of a model-definition.

DynamicPPL.DebugUtils.check_model (Function)
check_model([rng, ]model::Model; kwargs...)

Check that model is valid, warning about any potential issues.

See check_model_and_trace for more details on supported keyword arguments and on which types of checks are performed.

Returns

  • issuccess::Bool: Whether the model check succeeded.
source
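
For example (a minimal sketch; the model is an illustrative assumption):

julia> using DynamicPPL, Distributions

julia> @model demo() = x ~ Normal();

julia> issuccess = DynamicPPL.DebugUtils.check_model(demo());

julia> issuccess
true
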
DynamicPPL.DebugUtils.check_model_and_trace (Function)
check_model_and_trace([rng, ]model::Model; kwargs...)

Check that model is valid, warning about any potential issues.

This will check the model for the following issues:

  1. Repeated usage of the same varname in a model.
  2. Incorrectly treating a variable as random rather than fixed, and vice versa.

Arguments

  • rng::Random.AbstractRNG: The random number generator to use when evaluating the model.
  • model::Model: The model to check.

Keyword Arguments

  • varinfo::VarInfo: The varinfo to use when evaluating the model. Default: VarInfo(model).
  • context::AbstractContext: The context to use when evaluating the model. Default: DefaultContext.
  • error_on_failure::Bool: Whether to throw an error if the model check fails. Default: false.

Returns

  • issuccess::Bool: Whether the model check succeeded.
  • trace::Vector{Stmt}: The trace of statements executed during the model check.

Examples

Correct model

julia> using StableRNGs
 
 julia> rng = StableRNG(42);
 
[...]
 demo_incorrect (generic function with 2 methods)
 
 julia> issuccess, trace = check_model_and_trace(rng, demo_incorrect(); error_on_failure=true);
ERROR: varname x used multiple times in model
source

And some which might be useful to determine certain properties of the model based on the debug trace.

DynamicPPL.DebugUtils.has_static_constraints (Function)
has_static_constraints([rng, ]model::Model; num_evals=5, kwargs...)

Return true if the model has static constraints, false otherwise.

Note that this is a heuristic check based on sampling from the model multiple times and checking if the model is consistent across runs.

Arguments

  • rng::Random.AbstractRNG: The random number generator to use when evaluating the model.
  • model::Model: The model to check.

Keyword Arguments

  • num_evals::Int: The number of evaluations to perform. Default: 5.
  • kwargs...: Additional keyword arguments to pass to check_model_and_trace.
source

For determining whether one might have type instabilities in the model, the following can be useful

DynamicPPL.DebugUtils.model_warntype (Function)
model_warntype(model[, varinfo, context]; optimize=true)

Check the type stability of the model's evaluator, warning about any potential issues.

This simply calls @code_warntype on the model's evaluator, filling in internal arguments where needed.

Arguments

  • model::Model: The model to check.
  • varinfo::AbstractVarInfo: The varinfo to use when evaluating the model. Default: VarInfo(model).
  • context::AbstractContext: The context to use when evaluating the model. Default: DefaultContext.

Keyword Arguments

  • optimize::Bool: Whether to generate optimized code. Default: false.
source
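
A short usage sketch (the model is an illustrative assumption; the @code_warntype-style output is elided):

julia> using DynamicPPL, Distributions

julia> @model demo() = x ~ Normal();

julia> DynamicPPL.DebugUtils.model_warntype(demo())
[...]
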
DynamicPPL.DebugUtils.model_typed (Function)
model_typed(model[, varinfo, context]; optimize=true)

Return the type inference for the model's evaluator.

This simply calls @code_typed on the model's evaluator, filling in internal arguments where needed.

Arguments

  • model::Model: The model to check.
  • varinfo::AbstractVarInfo: The varinfo to use when evaluating the model. Default: VarInfo(model).
  • context::AbstractContext: The context to use when evaluating the model. Default: DefaultContext.

Keyword Arguments

  • optimize::Bool: Whether to generate optimized code. Default: true.
source

Internally, the type-checking methods make use of the following method for construction of the call with the argument types:

DynamicPPL.DebugUtils.gen_evaluator_call_with_types (Function)
gen_evaluator_call_with_types(model[, varinfo, context])

Generate the evaluator call and the types of the arguments.

Arguments

  • model::Model: The model whose evaluator is of interest.
  • varinfo::AbstractVarInfo: The varinfo to use when evaluating the model. Default: VarInfo(model).
  • context::AbstractContext: The context to use when evaluating the model. Default: DefaultContext.

Returns

A 2-tuple with the following elements:

  • f: This is either model.f or Core.kwcall, depending on whether the model has keyword arguments.
  • argtypes::Type{<:Tuple}: The types of the arguments for the evaluator.
source

Advanced

Variable names

Names and possibly nested indices of variables are described with AbstractPPL.VarName. They can be defined with AbstractPPL.@varname. Please see the documentation of AbstractPPL.jl for further information.
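
For example (a brief sketch):

julia> using DynamicPPL

julia> vn = @varname(x[1].a)
x[1].a

julia> vn isa VarName{:x}
true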

Data Structures of Variables

DynamicPPL provides different data structures for storing samples and accumulating the log-probabilities, all of which are subtypes of AbstractVarInfo.

DynamicPPL.AbstractVarInfo (Type)
AbstractVarInfo

Abstract supertype for data structures that capture random variables when executing a probabilistic model and accumulate log densities such as the log likelihood or the log joint probability of the model.

See also: VarInfo, SimpleVarInfo.

source

But exactly how a AbstractVarInfo stores this information can vary.

VarInfo

DynamicPPL.VarInfo (Type)
struct VarInfo{Tmeta, Tlogp} <: AbstractVarInfo
+ERROR: varname x used multiple times in model
source

And some which might be useful to determine certain properties of the model based on the debug trace.

DynamicPPL.DebugUtils.has_static_constraintsFunction
has_static_constraints([rng, ]model::Model; num_evals=5, kwargs...)

Return true if the model has static constraints, false otherwise.

Note that this is a heuristic check based on sampling from the model multiple times and checking if the model is consistent across runs.

Arguments

  • rng::Random.AbstractRNG: The random number generator to use when evaluating the model.
  • model::Model: The model to check.

Keyword Arguments

  • num_evals::Int: The number of evaluations to perform. Default: 5.
  • kwargs...: Additional keyword arguments to pass to check_model_and_trace.
source

For determining whether one might have type instabilities in the model, the following can be useful

DynamicPPL.DebugUtils.model_warntypeFunction
model_warntype(model[, varinfo, context]; optimize=true)

Check the type stability of the model's evaluator, warning about any potential issues.

This simply calls @code_warntype on the model's evaluator, filling in internal arguments where needed.

Arguments

  • model::Model: The model to check.
  • varinfo::AbstractVarInfo: The varinfo to use when evaluating the model. Default: VarInfo(model).
  • context::AbstractContext: The context to use when evaluating the model. Default: DefaultContext.

Keyword Arguments

  • optimize::Bool: Whether to generate optimized code. Default: false.
source
DynamicPPL.DebugUtils.model_typedFunction
model_typed(model[, varinfo, context]; optimize=true)

Return the type inference for the model's evaluator.

This simply calls @code_typed on the model's evaluator, filling in internal arguments where needed.

Arguments

  • model::Model: The model to check.
  • varinfo::AbstractVarInfo: The varinfo to use when evaluating the model. Default: VarInfo(model).
  • context::AbstractContext: The context to use when evaluating the model. Default: DefaultContext.

Keyword Arguments

  • optimize::Bool: Whether to generate optimized code. Default: true.
source

Interally, the type-checking methods make use of the following method for construction of the call with the argument types:

DynamicPPL.DebugUtils.gen_evaluator_call_with_typesFunction
gen_evaluator_call_with_types(model[, varinfo, context])

Generate the evaluator call and the types of the arguments.

Arguments

  • model::Model: The model whose evaluator is of interest.
  • varinfo::AbstractVarInfo: The varinfo to use when evaluating the model. Default: VarInfo(model).
  • context::AbstractContext: The context to use when evaluating the model. Default: DefaultContext.

Returns

A 2-tuple with the following elements:

  • f: This is either model.f or Core.kwcall, depending on whether the model has keyword arguments.
  • argtypes::Type{<:Tuple}: The types of the arguments for the evaluator.
source

Advanced

Variable names

Names and possibly nested indices of variables are described with AbstractPPL.VarName. They can be defined with AbstractPPL.@varname. Please see the documentation of AbstractPPL.jl for further information.

Data Structures of Variables

DynamicPPL provides different data structures used in for storing samples and accumulation of the log-probabilities, all of which are subtypes of AbstractVarInfo.

DynamicPPL.AbstractVarInfoType
AbstractVarInfo

Abstract supertype for data structures that capture random variables when executing a probabilistic model and accumulate log densities such as the log likelihood or the log joint probability of the model.

See also: VarInfo, SimpleVarInfo.

source

But exactly how an AbstractVarInfo stores this information can vary.

VarInfo

DynamicPPL.VarInfoType
struct VarInfo{Tmeta, Tlogp} <: AbstractVarInfo
     metadata::Tmeta
     logp::Base.RefValue{Tlogp}
     num_produce::Base.RefValue{Int}
end

A light wrapper over one or more instances of Metadata. Let vi be an instance of VarInfo. If vi isa VarInfo{<:Metadata}, then only one Metadata instance is used for all the symbols. VarInfo{<:Metadata} is aliased UntypedVarInfo. If vi isa VarInfo{<:NamedTuple}, then vi.metadata is a NamedTuple that maps each symbol used on the LHS of ~ in the model to its Metadata instance. The latter allows for the type specialization of vi after the first sampling iteration when all the symbols have been observed. VarInfo{<:NamedTuple} is aliased TypedVarInfo.

Note: It is the user's responsibility to ensure that each "symbol" is visited at least once whenever the model is called, regardless of any stochastic branching. Each symbol refers to a Julia variable and can be a hierarchical array of many random variables, e.g. x[1] ~ ... and x[2] ~ ... both have the same symbol x.

source
DynamicPPL.TypedVarInfoType
TypedVarInfo(vi::UntypedVarInfo)

This function finds all the unique syms from the instances of VarName{sym} found in vi.metadata.vns. It then extracts the metadata associated with each symbol from the global vi.metadata field. Finally, a new VarInfo is created with a new metadata as a NamedTuple mapping from symbols to type-stable Metadata instances, one for each symbol.

source
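As a small illustrative sketch (model name is ours), constructing a VarInfo directly from a model yields the typed variant, with one Metadata instance per symbol:

using DynamicPPL, Distributions

@model function demo_vi()
    s ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s))
end

vi = VarInfo(demo_vi())
# The metadata is a NamedTuple mapping each symbol to its own Metadata:
vi isa DynamicPPL.TypedVarInfo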

One main characteristic of VarInfo is that samples are stored in a linearized form.

DynamicPPL.link!Function
link!(vi::VarInfo, spl::Sampler)

Transform the values of the random variables sampled by spl in vi from the support of their distributions to the Euclidean space and set their corresponding "trans" flag values to true.

source
DynamicPPL.invlink!Function
invlink!(vi::VarInfo, spl::AbstractSampler)

Transform the values of the random variables sampled by spl in vi from the Euclidean space back to the support of their distributions and sets their corresponding "trans" flag values to false.

source
DynamicPPL.set_flag!Function
set_flag!(vi::VarInfo, vn::VarName, flag::String)

Set vn's value for flag to true in vi.

source
DynamicPPL.unset_flag!Function
unset_flag!(vi::VarInfo, vn::VarName, flag::String, ignorable::Bool=false)

Set vn's value for flag to false in vi.

Setting some flags for some VarInfo types is not possible, and by default attempting to do so will error. If ignorable is set to true then this will silently be ignored instead.

source
DynamicPPL.is_flaggedFunction
is_flagged(vi::VarInfo, vn::VarName, flag::String)

Check whether vn has a true value for flag in vi.

source
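A minimal sketch of the flag API (the "del" flag is the one used internally by particle samplers; the model is illustrative):

using DynamicPPL, Distributions

@model demo_flags() = x ~ Normal()

vi = VarInfo(demo_flags())
vn = @varname(x)

DynamicPPL.set_flag!(vi, vn, "del")
DynamicPPL.is_flagged(vi, vn, "del")    # true
DynamicPPL.unset_flag!(vi, vn, "del")
DynamicPPL.is_flagged(vi, vn, "del")    # false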

For Gibbs sampling the following functions were added.

DynamicPPL.setgid!Function
setgid!(vi::VarInfo, gid::Selector, vn::VarName)

Add gid to the set of sampler selectors associated with vn in vi.

source
DynamicPPL.updategid!Function
updategid!(vi::VarInfo, vn::VarName, spl::Sampler)

Set vn's gid to Set([spl.selector]), if vn does not have a sampler selector linked and vn's symbol is in the space of spl.

source

The following functions were used for sequential Monte Carlo methods.

DynamicPPL.get_num_produceFunction
get_num_produce(vi::VarInfo)

Return the num_produce of vi.

source
DynamicPPL.set_num_produce!Function
set_num_produce!(vi::VarInfo, n::Int)

Set the num_produce field of vi to n.

source
DynamicPPL.increment_num_produce!Function
increment_num_produce!(vi::VarInfo)

Add 1 to num_produce in vi.

source
DynamicPPL.reset_num_produce!Function
reset_num_produce!(vi::VarInfo)

Reset the value of num_produce in vi to 0.

source
DynamicPPL.setorder!Function
setorder!(vi::VarInfo, vn::VarName, index::Int)

Set the order of vn in vi to index, where order is the number of observe statements run before sampling vn.

source
DynamicPPL.set_retained_vns_del_by_spl!Function
set_retained_vns_del_by_spl!(vi::VarInfo, spl::Sampler)

Set the "del" flag of variables in vi with order > vi.num_produce[] to true.

source
Base.empty!Function
empty!(meta::Metadata)

Empty the fields of meta.

This is useful when using a sampling algorithm that assumes an empty meta, e.g. SMC.

source
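For example, a sketch of how the observation counter moves (model name is ours):

using DynamicPPL, Distributions

@model demo_smc() = x ~ Normal()

vi = VarInfo(demo_smc())
DynamicPPL.reset_num_produce!(vi)
DynamicPPL.increment_num_produce!(vi)
DynamicPPL.increment_num_produce!(vi)
DynamicPPL.get_num_produce(vi)    # 2
DynamicPPL.set_num_produce!(vi, 0)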

SimpleVarInfo

DynamicPPL.SimpleVarInfoType
struct SimpleVarInfo{NT, T, C<:DynamicPPL.AbstractTransformation} <: AbstractVarInfo

A simple wrapper of the parameters with a logp field for accumulation of the logdensity.

Currently only implemented for NT<:NamedTuple and NT<:AbstractDict.

Fields

  • values: underlying representation of the realization represented

  • logp: holds the accumulated log-probability

  • transformation: represents whether it assumes variables to be transformed

Notes

The major differences between this and TypedVarInfo are:

  1. SimpleVarInfo does not require linearization.
  2. SimpleVarInfo can use more efficient bijectors.
  3. SimpleVarInfo is only type-stable if NT<:NamedTuple and either a) no indexing is used in tilde-statements, or b) the values have been specified with the correct shapes.

Examples

General usage

julia> using StableRNGs
 
 julia> @model function demo()
            m ~ Normal()
 
 julia> svi_dict[@varname(m.b)]
 ERROR: type NamedTuple has no field b
[...]
source

Common API

Accumulation of log-probabilities

DynamicPPL.getlogpFunction
getlogp(vi::AbstractVarInfo)

Return the log of the joint probability of the observed data and parameters sampled in vi.

source
DynamicPPL.setlogp!!Function
setlogp!!(vi::AbstractVarInfo, logp)

Set the log of the joint probability of the observed data and parameters sampled in vi to logp, mutating if it makes sense.

source
DynamicPPL.acclogp!!Function
acclogp!!([context::AbstractContext, ]vi::AbstractVarInfo, logp)

Add logp to the value of the log of the joint probability of the observed data and parameters sampled in vi, mutating if it makes sense.

source
DynamicPPL.resetlogp!!Function
resetlogp!!(vi::AbstractVarInfo)

Reset the value of the log of the joint probability of the observed data and parameters sampled in vi to 0, mutating if it makes sense.

source
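A short sketch tying these together (the model name is illustrative; note the !! convention means the varinfo is rebound to the returned value):

using DynamicPPL, Distributions

@model demo_logp() = x ~ Normal()

vi = VarInfo(demo_logp())
getlogp(vi)               # log joint accumulated during the initial evaluation
vi = setlogp!!(vi, 0.0)
vi = acclogp!!(vi, -1.5)
getlogp(vi)               # -1.5
vi = resetlogp!!(vi)
getlogp(vi)               # 0.0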

Variables and their realizations

Base.keysFunction
keys(vi::AbstractVarInfo)

Return an iterator over all vns in vi.

source
Base.getindexFunction
getindex(vi::AbstractVarInfo, vn::VarName[, dist::Distribution])
getindex(vi::AbstractVarInfo, vns::Vector{<:VarName}[, dist::Distribution])

Return the current value(s) of vn (vns) in vi in the support of its (their) distribution(s).

If dist is specified, the value(s) will be massaged into the representation expected by dist.

source
BangBang.push!!Function
push!!(vi::AbstractVarInfo, vn::VarName, r, dist::Distribution)

Push a new random variable vn with a sampled value r from a distribution dist to the VarInfo vi, mutating if it makes sense.

source
push!!(vi::AbstractVarInfo, vn::VarName, r, dist::Distribution, spl::AbstractSampler)

Push a new random variable vn with a sampled value r sampled with a sampler spl from a distribution dist to VarInfo vi, if it makes sense.

The sampler is passed here to invalidate its cache where defined.

Warning

This method is considered legacy, and is likely to be deprecated in the future.

source
push!!(vi::AbstractVarInfo, vn::VarName, r, dist::Distribution, gid::Selector)

Push a new random variable vn with a sampled value r sampled with a sampler of selector gid from a distribution dist to VarInfo vi.

Warning

This method is considered legacy, and is likely to be deprecated in the future.

source
BangBang.empty!!Function
empty!!(vi::AbstractVarInfo)

Empty the fields of vi.metadata and reset vi.logp[] and vi.num_produce[] to zeros.

This is useful when using a sampling algorithm that assumes an empty vi, e.g. SMC.

source
Base.isemptyFunction
isempty(vi::AbstractVarInfo)

Return true if vi is empty and false otherwise.

source
DynamicPPL.getindex_internalFunction
getindex_internal(vi::AbstractVarInfo, vn::VarName)
getindex_internal(vi::AbstractVarInfo, vns::Vector{<:VarName})

Return the current value(s) of vn (vns) in vi as represented internally in vi.

See also: getindex(vi::AbstractVarInfo, vn::VarName, dist::Distribution)

source
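For instance, after linking, the two representations differ; a sketch (the model name is ours, and the exact numbers depend on the sampled value):

using DynamicPPL, Distributions

@model demo_internal() = s ~ InverseGamma(2, 3)

model = demo_internal()
vi = DynamicPPL.link!!(VarInfo(model), model)

vi[@varname(s)]                                # model-space value, in the support
DynamicPPL.getindex_internal(vi, @varname(s))  # linked, vectorized representation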
DynamicPPL.setindex_internal!Function
setindex_internal!(vnv::VarNamedVector, val, i::Int)

Sets the ith element of the internal storage vector, ignoring inactive entries.

source
setindex_internal!(vnv::VarNamedVector, val, vn::VarName[, transform])

Like setindex!, but sets the values as they are stored internally in vnv.

Optionally, the transformation can be set, such that transform(val) is the original value of the variable. By default, the transform is the identity if creating a new entry in vnv, or the existing transform if updating an existing entry.

source
DynamicPPL.update_internal!Function
update_internal!(vnv::VarNamedVector, vn::VarName, val::AbstractVector[, transform])

Update an existing entry for vn in vnv with the value val.

Like setindex_internal!, but errors if the key vn doesn't exist.

transform should be a function that converts val to the original representation. By default it's the same as the old transform for vn.

source
DynamicPPL.insert_internal!Function
insert_internal!(vnv::VarNamedVector, val::AbstractVector, vn::VarName[, transform])

Add a variable with given value to vnv.

Like setindex_internal!, but errors if the key vn already exists.

transform should be a function that converts val to the original representation. By default it's identity.

source
DynamicPPL.length_internalFunction
length_internal(vnv::VarNamedVector)

Return the length of the internal storage vector of vnv, ignoring inactive entries.

source
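Putting these together, a small sketch of the internal-storage API (assuming the documented signatures above; getindex_internal is also assumed to accept a VarNamedVector):

using DynamicPPL: VarNamedVector, insert_internal!, update_internal!,
    length_internal, getindex_internal, @varname

vnv = VarNamedVector()
insert_internal!(vnv, [1.0, 2.0], @varname(x))  # errors if `x` already exists
update_internal!(vnv, @varname(x), [3.0, 4.0])  # errors if `x` does not exist
length_internal(vnv)                            # 2
getindex_internal(vnv, @varname(x))             # [3.0, 4.0]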
DynamicPPL.reset!Function
reset!(vnv::VarNamedVector, val, vn::VarName)

Reset the value of vn in vnv to val.

This differs from setindex! in that it will always change the transform of the variable to be the default vectorisation transform. This undoes any possible linking.

Examples

julia> using DynamicPPL: VarNamedVector, @varname, reset!
 
 julia> vnv = VarNamedVector();
 
 julia> reset!(vnv, 2.0, @varname(x));
 
 julia> vnv[@varname(x)]
2.0
source
DynamicPPL.update!Function
update!(vnv::VarNamedVector, val, vn::VarName)

Update the value of vn in vnv to val.

Like setindex!, but errors if the key vn doesn't exist.

source
Base.insert!Function
insert!(vnv::VarNamedVector, val, vn::VarName)

Add a variable with given value to vnv.

Like setindex!, but errors if the key vn already exists.

source
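A sketch contrasting the two (both would error where the other succeeds):

using DynamicPPL: VarNamedVector, update!, @varname

vnv = VarNamedVector()
insert!(vnv, 1.0, @varname(x))   # ok: `x` is new; would error if it existed
update!(vnv, 2.0, @varname(x))   # ok: `x` exists; would error otherwise
vnv[@varname(x)]                 # 2.0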
DynamicPPL.loosen_types!!Function
loosen_types!!(vnv::VarNamedVector{K,V,TVN,TVal,TTrans}, ::Type{KNew}, ::Type{TransNew})

Loosen the types of vnv to allow varname type KNew and transformation type TransNew.

If KNew is a subtype of K and TransNew is a subtype of the element type of the TTrans then this is a no-op and vnv is returned as is. Otherwise a new VarNamedVector is returned with the same data but more abstract types, so that variables of type KNew and transformations of type TransNew can be pushed to it. Some of the underlying storage is shared between vnv and the return value, and thus mutating one may affect the other.

See also

tighten_types

Examples

julia> using DynamicPPL: VarNamedVector, @varname, loosen_types!!, setindex_internal!
 
 julia> vnv = VarNamedVector(@varname(x) => [1.0]);
 
 julia> vnv_loose[@varname(y)]
 2×2 Matrix{Float64}:
  1.0  3.0
 2.0  4.0
source
DynamicPPL.tighten_typesFunction
tighten_types(vnv::VarNamedVector)

Return a copy of vnv with the most concrete types possible.

For instance, if the vector of transforms in vnv has eltype Any, but all the transforms are actually identity transformations, this function will return a new VarNamedVector with the transforms vector having eltype typeof(identity).

This is a lot like the reverse of loosen_types!!, but with two notable differences: Unlike loosen_types!!, this function does not mutate vnv; it also changes not only the key and transform eltypes, but also the values eltype.

See also

loosen_types!!

Examples

julia> using DynamicPPL: VarNamedVector, @varname, loosen_types!!, setindex_internal!
 
 julia> vnv = VarNamedVector();
 
 
 julia> vnv_tight.transforms
 1-element Vector{typeof(identity)}:
 identity (generic function with 1 method)
source
DynamicPPL.values_asFunction
values_as(varinfo[, Type])

Return the values/realizations in varinfo as Type, if implemented.

If no Type is provided, return values as stored in varinfo.

Examples

SimpleVarInfo with NamedTuple:

julia> data = (x = 1.0, m = [2.0]);
 
 julia> values_as(SimpleVarInfo(data))
 (x = 1.0, m = [2.0])
 julia> values_as(vi, Vector)
 2-element Vector{Real}:
  1.0
 2.0
source

Transformations

DynamicPPL.AbstractTransformationType
abstract type AbstractTransformation

Represents a transformation to be used in link!! and invlink!!, amongst others.

A concrete implementation of this should implement the following methods:

And potentially:

See also: link!!, invlink!!, maybe_invlink_before_eval!!.

source
DynamicPPL.NoTransformationType
struct NoTransformation <: DynamicPPL.AbstractTransformation

Transformation which applies the identity function.

source
DynamicPPL.DynamicTransformationType
struct DynamicTransformation <: DynamicPPL.AbstractTransformation

Transformation which transforms the variables on a per-need-basis in the execution of a given Model.

This is in constrast to StaticTransformation which transforms all variables before the execution of a given Model.

See also: StaticTransformation.

source
DynamicPPL.StaticTransformationType
struct StaticTransformation{F} <: DynamicPPL.AbstractTransformation

Transformation which transforms all variables before the execution of a given Model.

This is done through the maybe_invlink_before_eval!! method.

See also: DynamicTransformation, maybe_invlink_before_eval!!.

Fields

  • bijector::Any: The function, assumed to implement the Bijectors interface, to be applied to the variables
source
DynamicPPL.istransFunction
istrans(vnv::VarNamedVector, vn::VarName)

Return a boolean for whether vn is guaranteed to have been transformed so that its domain is all of Euclidean space.

source
istrans(vi::AbstractVarInfo[, vns::Union{VarName, AbstractVector{<:VarName}}])

Return true if vi is working in unconstrained space, and false if vi is assuming realizations to be in support of the corresponding distributions.

If vns is provided, then only check if this/these varname(s) are transformed.

Warning

Not all implementations of AbstractVarInfo support transforming only a subset of the variables.

source
DynamicPPL.settrans!!Function
settrans!!(vi::AbstractVarInfo, trans::Bool[, vn::VarName])

Return vi with istrans(vi, vn) evaluating to trans.

If vn is not specified, then istrans(vi) evaluates to trans for all variables.

source
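A minimal sketch of these two together, using a SimpleVarInfo built from a NamedTuple:

using DynamicPPL, Distributions

vi = SimpleVarInfo((s = 1.0,))
DynamicPPL.istrans(vi)            # false: values assumed to be in the support

vi_t = DynamicPPL.settrans!!(vi, true)
DynamicPPL.istrans(vi_t)          # true: values treated as unconstrained

Note that settrans!! only changes the bookkeeping; it does not transform the stored values themselves (that is what link!! and invlink!! below are for).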
DynamicPPL.transformationFunction
transformation(vi::AbstractVarInfo)

Return the AbstractTransformation related to vi.

source
Bijectors.linkFunction
link([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
link([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)

Transform the variables in vi to their linked space without mutating vi, using the transformation t.

If t is not provided, default_transformation(model, vi) will be used.

See also: default_transformation, invlink.

source
Bijectors.invlinkFunction
invlink([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
invlink([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)

Transform the variables in vi to their constrained space without mutating vi, using the (inverse of) transformation t.

If t is not provided, default_transformation(model, vi) will be used.

See also: default_transformation, link.

source
DynamicPPL.link!!Function
link!!([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
link!!([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)

Transform the variables in vi to their linked space, using the transformation t, mutating vi if possible.

If t is not provided, default_transformation(model, vi) will be used.

See also: default_transformation, invlink!!.

source
DynamicPPL.invlink!!Function
invlink!!([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
invlink!!([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)

Transform the variables in vi to their constrained space, using the (inverse of) transformation t, mutating vi if possible.

If t is not provided, default_transformation(model, vi) will be used.

See also: default_transformation, link!!.

source
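A round-trip sketch (the model name is illustrative):

using DynamicPPL, Distributions

@model demo_link() = s ~ InverseGamma(2, 3)

model = demo_link()
vi = VarInfo(model)

vi_linked = DynamicPPL.link!!(vi, model)                  # to unconstrained space
DynamicPPL.istrans(vi_linked)                             # true
vi_constrained = DynamicPPL.invlink!!(vi_linked, model)   # back to the support
DynamicPPL.istrans(vi_constrained)                        # false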
DynamicPPL.default_transformationFunction
default_transformation(model::Model[, vi::AbstractVarInfo])

Return the AbstractTransformation currently related to model and, potentially, vi.

source
DynamicPPL.link_transformFunction
link_transform(dist)

Return the constrained-to-unconstrained bijector for distribution dist.

By default, this is just Bijectors.bijector(dist).

Warning

Note that currently this is not used by Bijectors.logpdf_with_trans, hence that needs to be overloaded separately if the intention is to change behavior of an existing distribution.

source
DynamicPPL.invlink_transformFunction
invlink_transform(dist)

Return the unconstrained-to-constrained bijector for distribution dist.

By default, this is just inverse(link_transform(dist)).

Warning

Note that currently this is not used by Bijectors.logpdf_with_trans, hence that needs to be overloaded separately if the intention is to change behavior of an existing distribution.

source
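For a distribution with positive support, these should compose to a round trip; a sketch (for InverseGamma the linking bijector is a log transform, per Bijectors.jl):

using DynamicPPL, Distributions

dist = InverseGamma(2, 3)
b = DynamicPPL.link_transform(dist)        # constrained → unconstrained
binv = DynamicPPL.invlink_transform(dist)  # unconstrained → constrained

binv(b(1.5)) ≈ 1.5    # round-trip recovers the original value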
DynamicPPL.maybe_invlink_before_eval!!Function
maybe_invlink_before_eval!!([t::Transformation,] vi, context, model)

Return a possibly invlinked version of vi.

This will be called prior to model evaluation, allowing one to perform a single invlink!! before evaluation rather than lazily evaluating the transforms on an as-needed basis as is done with DynamicTransformation.

See also: StaticTransformation, DynamicTransformation.

Examples

julia> using DynamicPPL, Distributions, Bijectors
 
 julia> @model demo() = x ~ Normal()
 demo (generic function with 2 methods)
 
 julia> # Now performs a single `invlink!!` before model evaluation.
        logjoint(model, vi_linked)
-1001.4189385332047
source

Utils

Base.mergeMethod
merge(varinfo, other_varinfos...)

Merge varinfos into one, giving precedence to the right-most varinfo when sensible.

This is particularly useful when combined with subset(varinfo, vns).

See docstring of subset(varinfo, vns) for examples.

source
DynamicPPL.subsetFunction
subset(varinfo::AbstractVarInfo, vns::AbstractVector{<:VarName})

Subset a varinfo to only contain the variables vns.

Warning

The ordering of the variables in the resulting varinfo is not guaranteed to follow the ordering of the variables in varinfo. Hence care must be taken, in particular when used in conjunction with other methods which use the vector representation of the varinfo, e.g. getindex(varinfo, sampler).

Examples

julia> @model function demo()
            s ~ InverseGamma(2, 3)
            m ~ Normal(0, sqrt(s))
            x = Vector{Float64}(undef, 2)
 julia> # Extract one with only `m`.
        varinfo_subset1 = subset(varinfo, [@varname(m),]);
 
 julia> keys(varinfo_subset1)
 1-element Vector{VarName{:m, typeof(identity)}}:
  m
  1.0
  2.0
  3.0
 4.0

Notes

Type-stability

Warning

This function is only type-stable when vns contains only varnames with the same symbol. For example, [@varname(m[1]), @varname(m[2])] will be type-stable, but [@varname(m[1]), @varname(x)] will not be.

source
DynamicPPL.unflattenFunction
unflatten(original, x::AbstractVector)

Return instance of original constructed from x.

source
unflatten(vnv::VarNamedVector, vals::AbstractVector)

Return a new instance of vnv with the values of vals assigned to the variables.

This assumes that vals have been transformed by the same transformations that the values in vnv have been transformed by. However, unlike replace_raw_storage, unflatten does account for inactive entries in vnv, so that the user does not have to care about them.

This is in a sense the reverse operation of vnv[:].

Unflatten recontiguifies the internal storage, getting rid of any inactive entries.

Examples

julia> using DynamicPPL: VarNamedVector, unflatten

julia> vnv = VarNamedVector(@varname(x) => [1.0, 2.0], @varname(y) => [3.0]);

julia> unflatten(vnv, vnv[:]) == vnv
true
source
unflatten(vi::AbstractVarInfo[, context::AbstractContext], x::AbstractVector)

Return a new instance of vi with the values of x assigned to the variables.

If context is provided, x is assumed to be realizations only for variables not filtered out by context.

source
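A sketch of the corresponding round trip for an AbstractVarInfo (the model name is illustrative):

using DynamicPPL, Distributions

@model demo_flat() = m ~ Normal()

vi = VarInfo(demo_flat())
x = vi[:]                              # flattened vector of realizations
vi2 = DynamicPPL.unflatten(vi, x)      # rebuild a varinfo from the vector
vi2[@varname(m)] == vi[@varname(m)]    # true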
DynamicPPL.varname_leavesFunction
varname_leaves(vn::VarName, val)

Return an iterator over all varnames that are represented by vn on val.

Examples

julia> using DynamicPPL: varname_leaves
 
 julia> foreach(println, varname_leaves(@varname(x), rand(2)))
 x[1]
 julia> foreach(println, varname_leaves(@varname(x), x))
 x.y
 x.z[1][1]
x.z[2][1]
source
DynamicPPL.varname_and_value_leavesFunction
varname_and_value_leaves(vn::VarName, val)

Return an iterator over all varname-value pairs that are represented by vn on val.

Examples

julia> using DynamicPPL: varname_and_value_leaves
 
 julia> foreach(println, varname_and_value_leaves(@varname(x), 1:2))
 (x[1], 1)
        foreach(println, varname_and_value_leaves(@varname(x), Cholesky([1.0 0.0; 0.0 1.0], 'U', 0)))
 (x.U[1, 1], 1.0)
 (x.U[1, 2], 0.0)
(x.U[2, 2], 1.0)
source
varname_and_value_leaves(container)

Return an iterator over all varname-value pairs that are represented by container.

This is the same as varname_and_value_leaves(vn::VarName, x) but over a container containing multiple varnames.

See also: varname_and_value_leaves(vn::VarName, x).

Examples

julia> using DynamicPPL: varname_and_value_leaves
 
 julia> # With an `OrderedDict`
        dict = OrderedDict(@varname(y) => 1, @varname(z) => [[2.0], [3.0]]);
 julia> foreach(println, varname_and_value_leaves(nt))
 (y, 1)
 (z[1][1], 2.0)
(z[2][1], 3.0)
source

Evaluation Contexts

Internally, both sampling and evaluation of log densities are performed with AbstractPPL.evaluate!!.

AbstractPPL.evaluate!!Function
evaluate!!(model::Model[, rng, varinfo, sampler, context])

Sample from the model using the sampler with random number generator rng and the context, and store the sample and log joint probability in varinfo.

Returns both the return-value of the original model, and the resulting varinfo.

The method resets the log joint probability of varinfo and increases the evaluation number of sampler.

source
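A sketch of a direct call (the model name is ours; evaluate!! returns both the model's return value and the updated varinfo):

using DynamicPPL, Distributions

@model demo_eval() = x ~ Normal()

model = demo_eval()
# Sample into a fresh varinfo, drawing from the prior via a sampling context:
retval, vi = DynamicPPL.evaluate!!(model, VarInfo(), SamplingContext())
getlogp(vi)    # log joint of the sampled realization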

The behaviour of a model execution can be changed with evaluation contexts that are passed as an additional argument to the model function. Contexts are subtypes of AbstractPPL.AbstractContext.

DynamicPPL.SamplingContextType
SamplingContext(
         [rng::Random.AbstractRNG=Random.default_rng()],
         [sampler::AbstractSampler=SampleFromPrior()],
         [context::AbstractContext=DefaultContext()],
)

Create a context that allows you to sample parameters with the sampler when running the model. The context determines how the returned log density is computed when running the model.

See also: DefaultContext, LikelihoodContext, PriorContext

source
DynamicPPL.DefaultContextType
struct DefaultContext <: AbstractContext end

The DefaultContext is used by default to compute the log joint probability of the data and parameters when running the model.

source
DynamicPPL.LikelihoodContextType
LikelihoodContext <: AbstractContext

A leaf context resulting in the exclusion of prior terms when running the model.

source
DynamicPPL.PriorContextType
PriorContext <: AbstractContext

A leaf context resulting in the exclusion of likelihood terms when running the model.

source
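For example, evaluating under these two leaf contexts is what underlies logprior and loglikelihood, which decompose the log joint; a sketch (the model name is illustrative):

using DynamicPPL, Distributions

@model function demo_ctx(x)
    m ~ Normal()
    x ~ Normal(m)
end

model = demo_ctx(0.5)
vi = VarInfo(model)

logprior(model, vi) + loglikelihood(model, vi) ≈ logjoint(model, vi)    # true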
DynamicPPL.MiniBatchContextType
struct MiniBatchContext{Tctx, T} <: AbstractContext
     context::Tctx
     loglike_scalar::T
end

The MiniBatchContext enables the computation of log(prior) + s * log(likelihood of a batch) when running the model, where s is the loglike_scalar field, typically equal to the number of data points / batch size. This is useful in batch-based stochastic gradient descent algorithms, where it makes the optimization target equal, in expectation, to log(prior) + log(likelihood of all the data points).

source
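A sketch, assuming the keyword constructor MiniBatchContext(context; batch_size, npoints) which sets loglike_scalar = npoints / batch_size (this constructor form is an assumption, not quoted from the docstring above):

using DynamicPPL

# Scale the likelihood of a batch of 10 points as if it stood in for 1000:
ctx = MiniBatchContext(DefaultContext(); batch_size=10, npoints=1000)
ctx.loglike_scalar    # 100.0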
DynamicPPL.PrefixContextType
PrefixContext{Prefix}(context)

Create a context that allows you to use the wrapped context when running the model and adds the Prefix to all parameters.

This context is useful in nested models to ensure that the names of the parameters are unique.

See also: @submodel

source

Samplers

In DynamicPPL two samplers are defined that are used to initialize unobserved random variables: SampleFromPrior which samples from the prior distribution, and SampleFromUniform which samples from a uniform distribution.

DynamicPPL.SampleFromPriorType
SampleFromPrior

Sampling algorithm that samples unobserved random variables from their prior distribution.

source
DynamicPPL.SampleFromUniformType
SampleFromUniform

Sampling algorithm that samples unobserved random variables from a uniform distribution.

References

Stan reference manual

source

Additionally, a generic sampler for inference is implemented.

DynamicPPL.SamplerType
Sampler{T}

Generic sampler type for inference algorithms of type T in DynamicPPL.

Sampler should implement the AbstractMCMC interface, and in particular AbstractMCMC.step. A default implementation of the initial sampling step is provided that supports resuming sampling from a previous state and setting initial parameter values. It requires overloading loadstate and initialstep for loading previous states and actually performing the initial sampling step, respectively. Additionally, sometimes one might want to implement initialsampler, which specifies how the initial parameter values are sampled if they are not provided. By default, values are sampled from the prior.

source

The default implementation of Sampler uses the following unexported functions.

DynamicPPL.initialstepFunction
initialstep(rng, model, sampler, varinfo; kwargs...)

Perform the initial sampling step of the sampler for the model.

The varinfo contains the initial samples, which can be provided by the user or sampled randomly.

source
DynamicPPL.loadstateFunction
loadstate(data)

Load sampler state from data.

By default, data is returned.

source
DynamicPPL.initialsamplerFunction
initialsampler(sampler::Sampler)

Return the sampler that is used for generating the initial parameters when sampling with sampler.

By default, it returns an instance of SampleFromPrior.

source

Model-Internal Functions

DynamicPPL.tilde_assumeFunction
tilde_assume(context::SamplingContext, right, vn, vi)

Handle assumed variables, e.g., x ~ Normal() (where x does not occur in the model inputs), accumulate the log probability, and return the sampled value with a context associated with a sampler.

Falls back to

tilde_assume(context.rng, context.context, context.sampler, right, vn, vi)
source
DynamicPPL.dot_tilde_assumeFunction
dot_tilde_assume(context::SamplingContext, right, left, vn, vi)

Handle broadcasted assumed variables, e.g., x .~ MvNormal() (where x does not occur in the model inputs), accumulate the log probability, and return the sampled value for a context associated with a sampler.

Falls back to

dot_tilde_assume(context.rng, context.context, context.sampler, right, left, vn, vi)
source
DynamicPPL.tilde_observeFunction
tilde_observe(context::SamplingContext, right, left, vi)

Handle observed constants with a context associated with a sampler.

Falls back to tilde_observe(context.context, context.sampler, right, left, vi).

source
DynamicPPL.dot_tilde_observeFunction
dot_tilde_observe(context::SamplingContext, right, left, vi)

Handle broadcasted observed constants, e.g., [1.0] .~ MvNormal(), accumulate the log probability, and return the observed value for a context associated with a sampler.

Falls back to dot_tilde_observe(context.context, context.sampler, right, left, vi).

source
diff --git a/dev/index.html b/dev/index.html

DynamicPPL.jl

A domain-specific language and backend for probabilistic programming languages, used by Turing.jl.

diff --git a/dev/internals/transformations/index.html b/dev/internals/transformations/index.html

Transforming variables

Motivation

In a probabilistic programming language (PPL) such as DynamicPPL.jl, one crucial functionality for enabling a large number of inference algorithms to be implemented, in particular gradient-based ones, is the ability to work with "unconstrained" variables.

For example, consider the following model:

@model function demo()
     s ~ InverseGamma(2, 3)
     return m ~ Normal(0, √s)
 end

Here we have two variables s and m, where s is constrained to be positive, while m can be any real number.

For certain inference methods, it's necessary / much more convenient to work with an equivalent model to demo but where all the variables can take any real values (they're "unconstrained").

Note

We write "unconstrained" with quotes because there are many ways to transform a constrained variable to an unconstrained one, and DynamicPPL can work with a much broader class of bijective transformations of variables, not just ones that go to the entire real line. But for MCMC, unconstraining is the most common transformation so we'll stick with that terminology.

For a large family of constraints encountered in practice, it is indeed possible to transform a (partially) constrained model to a completely unconstrained one in such a way that sampling in the unconstrained space is equivalent to sampling in the constrained space.

In DynamicPPL.jl, this is often referred to as linking (a term originating in the statistics literature) and is done using transformations from Bijectors.jl.

For example, the above model could be transformed into (the following pseudo-code; it's not working code):

@model function demo()
    log_s ~ log(InverseGamma(2, 3))
    s = exp(log_s)
    return m ~ Normal(0, √s)
end

Below we'll see how this is done.

What do we need?

There are two aspects to transforming from the internal representation of a variable in a varinfo to the representation wanted in the model:

  1. Different implementations of AbstractVarInfo represent realizations of a model in different ways internally, so we need to transform from this internal representation to the desired representation in the model. For example,

    • VarInfo represents a realization of a model as a "flattened" / vector representation, regardless of the form of the variable in the model.
    • SimpleVarInfo represents a realization of a model exactly as in the model (unless it has been transformed; we'll get to that later).
  2. We need the ability to transform from "constrained space" to "unconstrained space", as we saw in the previous section.

Working example

A good and non-trivial example to keep in mind throughout is the following model:

using DynamicPPL, Distributions
@model demo_lkj() = x ~ LKJCholesky(2, 1.0)
demo_lkj (generic function with 2 methods)

LKJCholesky(2, 1.0) is an LKJ(2, 1.0) distribution, a distribution over correlation matrices (covariance matrices but with unit diagonal), but works directly with the Cholesky factorization of the correlation matrix rather than the correlation matrix itself (this is more numerically stable and computationally efficient).

Note

This is a particularly "annoying" case because the return-value is not a simple Real or AbstractArray{<:Real}, but rather a LinearAlgebra.Cholesky object which wraps a triangular matrix (whether it's upper- or lower-triangular depends on the instance).

As mentioned, some implementations of AbstractVarInfo, e.g. VarInfo, work with a "flattened" / vector representation of a variable, and so in this case we need two transformations:

  1. From the Cholesky object to a vector representation.
  2. From the Cholesky object to an "unconstrained" / linked vector representation.

And similarly, we'll need the inverses of these transformations.

From internal representation to model representation

To go from the internal variable representation of an AbstractVarInfo to the variable representation wanted in the model, e.g. from a Vector{Float64} to Cholesky in the case of VarInfo in demo_lkj, we have the following methods:

DynamicPPL.to_internal_transformFunction
to_internal_transform(varinfo::AbstractVarInfo, vn::VarName[, dist])

Return a transformation that transforms from a representation compatible with dist to the internal representation of vn with dist in varinfo.

If dist is not present, then it is assumed that varinfo knows the correct output for vn.

source
DynamicPPL.from_internal_transformFunction
from_internal_transform(varinfo::AbstractVarInfo, vn::VarName[, dist])

Return a transformation that transforms from the internal representation of vn with dist in varinfo to a representation compatible with dist.

If dist is not present, then it is assumed that varinfo knows the correct output for vn.

source

These methods allow us to extract the internal-to-model transformation function depending on the varinfo, the variable, and the distribution of the variable:

  • varinfo + vn defines the internal representation of the variable.
  • dist defines the representation expected within the model scope.
Note

If vn is not present in varinfo, then the internal representation is fully determined by varinfo alone. This is used when we're about to add a new variable to the varinfo and need to know how to represent it internally.

Continuing from the example above, we can inspect the internal representation of x in demo_lkj with VarInfo using DynamicPPL.getindex_internal:

model = demo_lkj()
 varinfo = VarInfo(model)
 x_internal = DynamicPPL.getindex_internal(varinfo, @varname(x))
4-element Vector{Float64}:
 1.0
 0.3490904956012313
 0.0
 0.9370890170527487
f_from_internal = DynamicPPL.from_internal_transform(
     varinfo, @varname(x), LKJCholesky(2, 1.0)
 )
 f_from_internal(x_internal)
LinearAlgebra.Cholesky{Float64, Matrix{Float64}}
 L factor:
 2×2 LinearAlgebra.LowerTriangular{Float64, Matrix{Float64}}:
 1.0       ⋅ 
 0.34909  0.937089

Let's confirm that this is the same as varinfo[@varname(x)]:

x_model = varinfo[@varname(x)]
LinearAlgebra.Cholesky{Float64, Matrix{Float64}}
 L factor:
 2×2 LinearAlgebra.LowerTriangular{Float64, Matrix{Float64}}:
 1.0       ⋅ 
 0.34909  0.937089

Similarly, we can go from the model representation to the internal representation:

f_to_internal = DynamicPPL.to_internal_transform(varinfo, @varname(x), LKJCholesky(2, 1.0))
 
 f_to_internal(x_model)
4-element reshape(::LinearAlgebra.LowerTriangular{Float64, Matrix{Float64}}, 4) with eltype Float64:
 1.0
 0.3490904956012313
 0.0
 0.9370890170527487

It's also useful to see how this is done in SimpleVarInfo:

simple_varinfo = SimpleVarInfo(varinfo)
 DynamicPPL.getindex_internal(simple_varinfo, @varname(x))
LinearAlgebra.Cholesky{Float64, Matrix{Float64}}
 L factor:
 2×2 LinearAlgebra.LowerTriangular{Float64, Matrix{Float64}}:
 1.0       ⋅ 
 0.34909  0.937089

Here we see that the internal representation is exactly the same as the model representation, and so we'd expect from_internal_transform to be the identity function:

DynamicPPL.from_internal_transform(simple_varinfo, @varname(x), LKJCholesky(2, 1.0))
identity (generic function with 1 method)

Great!

From unconstrained internal representation to model representation

In addition to going from the internal representation to the model representation of a variable, we also need to be able to go from the unconstrained internal representation to the model representation.

For this, we have the following methods:

DynamicPPL.to_linked_internal_transformFunction
to_linked_internal_transform(varinfo::AbstractVarInfo, vn::VarName[, dist])

Return a transformation that transforms from a representation compatible with dist to the linked internal representation of vn with dist in varinfo.

If dist is not present, then it is assumed that varinfo knows the correct output for vn.

source
DynamicPPL.from_linked_internal_transformFunction
from_linked_internal_transform(varinfo::AbstractVarInfo, vn::VarName[, dist])

Return a transformation that transforms from the linked internal representation of vn with dist in varinfo to a representation compatible with dist.

If dist is not present, then it is assumed that varinfo knows the correct output for vn.

source

These are very similar to DynamicPPL.to_internal_transform and DynamicPPL.from_internal_transform, but here the internal representation is also linked / "unconstrained".

Continuing from the example above:

f_to_linked_internal = DynamicPPL.to_linked_internal_transform(
     varinfo, @varname(x), LKJCholesky(2, 1.0)
 )
 
 x_linked_internal = f_to_linked_internal(x_model)
1-element Vector{Float64}:
 0.3644076575111429
f_from_linked_internal = DynamicPPL.from_linked_internal_transform(
     varinfo, @varname(x), LKJCholesky(2, 1.0)
 )
 
 f_from_linked_internal(x_linked_internal)
LinearAlgebra.Cholesky{Float64, Matrix{Float64}}
 L factor:
 2×2 LinearAlgebra.LowerTriangular{Float64, Matrix{Float64}}:
 1.0       ⋅ 
 0.34909  0.937089

Here we see a significant difference between the linked representation and the non-linked representation: the linked representation is only of length 1, whereas the non-linked representation is of length 4. This is because we only need a single element to represent a 2x2 correlation matrix: the diagonal entries are always 1 and the matrix is symmetric, so a d-by-d correlation matrix has only d(d - 1) / 2 free parameters, which for d = 2 is exactly 1.

We can also inspect the transforms themselves:

f_from_internal
DynamicPPL.ToChol('L') ∘ DynamicPPL.ReshapeTransform{Tuple{Int64}, Tuple{Int64, Int64}}((4,), (2, 2))

vs.

f_from_linked_internal
Bijectors.Inverse{Bijectors.VecCholeskyBijector}(Bijectors.VecCholeskyBijector(:L))

Here we see that f_from_linked_internal is a single function taking us directly from the linked representation to the model representation, whereas f_from_internal is a composition of a few functions: one reshaping the underlying length-4 array into a 2x2 matrix, and the other converting this matrix into a Cholesky, as required to be compatible with LKJCholesky(2, 1.0).
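
As a sanity check, we can reproduce this round-trip directly with Bijectors. This is a small sketch (not part of the original walkthrough); it assumes only that bijector(dist) returns the VecCholeskyBijector shown above:

using Distributions, Bijectors

dist = LKJCholesky(2, 1.0)
x = rand(dist)               # model representation: a `Cholesky`

b = Bijectors.bijector(dist) # VecCholeskyBijector(:L), as shown above
y = b(x)                     # linked representation: a length-1 vector

x_again = inverse(b)(y)      # back to a `Cholesky`
x_again.L ≈ x.L              # true, up to floating-point error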

Why do we need both to_internal_transform and to_linked_internal_transform?

One might wonder why we need both to_internal_transform and to_linked_internal_transform instead of just a single to_internal_transform which returns the "standard" internal representation if the variable is not linked / "unconstrained" and the linked / "unconstrained" internal representation if it is.

That is, why can't we just do

[flowchart elided: a hypothetical assume step with a single from_internal_transform(varinfo, vn, dist) taking the internal representation directly to the model representation]

Consider the following model, where the support of one variable depends on the realization of another:

@model function demo_dynamic_constraint()
    m ~ Normal()
    x ~ truncated(Normal(); lower=m)
    return (m=m, x=x)
end
demo_dynamic_constraint (generic function with 2 methods)

Here the variable x is constrained to be in the domain (m, Inf), where m is sampled according to a Normal.

model = demo_dynamic_constraint()
 varinfo = VarInfo(model)
varinfo[@varname(m)], varinfo[@varname(x)]
(0.0011110130824132165, 1.2452745147028423)

We see that the realization of x is indeed greater than m, as expected.

But what if we link this varinfo so that we end up working on an "unconstrained" space, i.e. both m and x can take on any values in (-Inf, Inf):

varinfo_linked = link(varinfo, model)
varinfo_linked[@varname(m)], varinfo_linked[@varname(x)]
(0.0011110130824132165, 1.2452745147028423)

We still get the same values, as expected, since internally varinfo transforms from the linked internal representation to the model representation.

But what if we change the value of m, to, say, a bit larger than x?

# Update realization for `m` in `varinfo_linked`.
 varinfo_linked[@varname(m)] = varinfo_linked[@varname(x)] + 1
varinfo_linked[@varname(m)], varinfo_linked[@varname(x)]
(2.2452745147028423, 1.2452745147028423)

Now we see that the constraint m < x is no longer satisfied!

Hence one might expect that if we try to compute, say, the logjoint using varinfo_linked with this "invalid" realization, we'll get an error:

logjoint(model, varinfo_linked)
-5.836075146480411

But we don't! In fact, if we look at the actual value used within the model

first(DynamicPPL.evaluate!!(model, varinfo_linked, DefaultContext()))
(m = 2.2452745147028423, x = 3.4894380163232714)

we see that we indeed satisfy the constraint m < x, as desired.

Warning

One shouldn't set variables in a linked varinfo directly like this unless one knows that the value will be compatible with the constraints of the model.

The reason for this is that internally in a model evaluation, we construct the transformation from the internal to the model representation based on the current realizations in the model! That is, we take the dist in an x ~ dist expression at model evaluation time and use that to construct the transformation, thus allowing it to change between model evaluations without invalidating the transformation.

But to be able to do this, we need to know whether the variable is linked / "unconstrained" or not, since the transformation is different in the two cases. Hence we need to be able to determine this at model evaluation time. Hence the internals end up looking something like this:

if istrans(varinfo, varname)
     from_linked_internal_transform(varinfo, varname, dist)
 else
     from_internal_transform(varinfo, varname, dist)
end

[flowcharts elided: the same assume and getindex diagrams as above, but with the transformations constructed without dist]

Notice that dist is not present here, but otherwise the diagrams are the same.

Warning

This does mean that getindex(varinfo, varname) might not return the same value as the getindex(varinfo, varname, dist) that occurs within a model evaluation! This can be confusing, but as outlined above, we do want to allow the dist in an x ~ dist expression to "override" whatever transformation varinfo might have.
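
As a hypothetical illustration (continuing the demo_lkj example from above, and assuming the three-argument getindex referred to in the warning), the two lookups can be compared directly:

varinfo_linked = link(varinfo, model)

# Two-argument `getindex`: uses the transformation stored in `varinfo_linked`.
varinfo_linked[@varname(x)]

# Three-argument `getindex`, as used inside model evaluation: the transformation
# is (re)constructed from the supplied `dist` at lookup time.
varinfo_linked[@varname(x), LKJCholesky(2, 1.0)]

For LKJCholesky(2, 1.0) both lookups agree; the difference only shows up when the dist passed at evaluation time no longer matches the one varinfo was linked with.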

Other functionalities

There are also some additional methods for transforming between representations that are all automatically implemented from DynamicPPL.from_internal_transform, DynamicPPL.from_linked_internal_transform and their siblings, and thus don't need to be implemented manually.

Convenience methods for constructing transformations:

DynamicPPL.from_maybe_linked_internal_transformFunction
from_maybe_linked_internal_transform(varinfo::AbstractVarInfo, vn::VarName[, dist])

Return a transformation that transforms from the possibly linked internal representation of vn with dist in varinfo to a representation compatible with dist.

If dist is not present, then it is assumed that varinfo knows the correct output for vn.

source
DynamicPPL.to_maybe_linked_internal_transformFunction
to_maybe_linked_internal_transform(varinfo::AbstractVarInfo, vn::VarName[, dist])

Return a transformation that transforms from a representation compatible with dist to a possibly linked internal representation of vn with dist in varinfo.

If dist is not present, then it is assumed that varinfo knows the correct output for vn.

source
DynamicPPL.internal_to_linked_internal_transformFunction
internal_to_linked_internal_transform(varinfo::AbstractVarInfo, vn::VarName, dist)

Return a transformation that transforms from the internal representation of vn with dist in varinfo to a linked internal representation of vn with dist in varinfo.

If dist is not present, then it is assumed that varinfo knows the correct output for vn.

source
DynamicPPL.linked_internal_to_internal_transformFunction
linked_internal_to_internal_transform(varinfo::AbstractVarInfo, vn::VarName[, dist])

Return a transformation that transforms from a linked internal representation of vn with dist in varinfo to the internal representation of vn with dist in varinfo.

If dist is not present, then it is assumed that varinfo knows the correct output for vn.

source

Convenience methods for transforming between representations without having to explicitly construct the transformation:

Supporting a new distribution

To support a new distribution, one needs to implement the following methods for the desired AbstractVarInfo:

At the time of writing, VarInfo is the one that is most commonly used, whose internal representation is always a Vector. In this scenario, one can just implement the following methods instead:

DynamicPPL.from_vec_transformMethod
from_vec_transform(dist::Distribution)

Return the transformation from the vector representation of a realization from distribution dist to the original representation compatible with dist.

source

These are used internally by VarInfo.
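
For instance, continuing the demo_lkj example, the following sketch (assuming the signature documented above) recovers the Cholesky from the vector representation:

f = DynamicPPL.from_vec_transform(LKJCholesky(2, 1.0))
f(x_internal)  # the same `Cholesky` as `varinfo[@varname(x)]`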

Optionally, if the inverse of the above is expensive to compute, one can also implement:

Similarly, there are corresponding to-methods for the from_*_vec_transform variants.

Warning

Whatever the resulting transformation is, it should be invertible, i.e. implement InverseFunctions.inverse, and have a well-defined log-abs-det Jacobian, i.e. implement ChangesOfVariables.with_logabsdet_jacobian.
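
As a sketch of what this entails, consider a hypothetical elementwise exp-transform (the types here are illustrative, not part of DynamicPPL):

using InverseFunctions, ChangesOfVariables

struct ExpTransform end
struct LogTransform end

(::ExpTransform)(x) = exp.(x)
(::LogTransform)(y) = log.(y)

# Invertibility: `inverse` must return the exact inverse transform.
InverseFunctions.inverse(::ExpTransform) = LogTransform()
InverseFunctions.inverse(::LogTransform) = ExpTransform()

# Well-defined log-abs-det Jacobian: for y = exp.(x), log|det J| = sum(x),
# and for the inverse direction it is -sum(log.(y)).
ChangesOfVariables.with_logabsdet_jacobian(t::ExpTransform, x) = (t(x), sum(x))
ChangesOfVariables.with_logabsdet_jacobian(t::LogTransform, y) = (t(y), -sum(log.(y)))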

TL;DR

  • DynamicPPL.jl has three representations of a variable: the model representation, the internal representation, and the linked internal representation.

    • The model representation is the representation of the variable as it appears in the model code / is expected by the dist on the right-hand side of the ~ in the model code.
    • The internal representation is the representation of the variable as it appears in the varinfo, which varies between implementations of AbstractVarInfo, e.g. a Vector in VarInfo. This can be converted to the model representation by DynamicPPL.from_internal_transform.
    • The linked internal representation is the representation of the variable as it appears in the varinfo after linking. This can be converted to the model representation by DynamicPPL.from_linked_internal_transform.
  • Having separation between internal and linked internal is necessary because transformations might be constructed at the time of model evaluation, and thus we need to know whether to construct the transformation from the internal representation or the linked internal representation.

diff --git a/dev/internals/varinfo/index.html b/dev/internals/varinfo/index.html
index 357e205c6..f89e8f161 100644
--- a/dev/internals/varinfo/index.html
+++ b/dev/internals/varinfo/index.html

Design of VarInfo

VarInfo is a fairly simple structure.

DynamicPPL.VarInfoType
struct VarInfo{Tmeta, Tlogp} <: AbstractVarInfo
     metadata::Tmeta
     logp::Base.RefValue{Tlogp}
     num_produce::Base.RefValue{Int}
end

A light wrapper over one or more instances of Metadata. Let vi be an instance of VarInfo. If vi isa VarInfo{<:Metadata}, then only one Metadata instance is used for all the symbols. VarInfo{<:Metadata} is aliased to UntypedVarInfo. If vi isa VarInfo{<:NamedTuple}, then vi.metadata is a NamedTuple that maps each symbol used on the LHS of ~ in the model to its Metadata instance. The latter allows for the type specialization of vi after the first sampling iteration when all the symbols have been observed. VarInfo{<:NamedTuple} is aliased to TypedVarInfo.

Note: It is the user's responsibility to ensure that each "symbol" is visited at least once whenever the model is called, regardless of any stochastic branching. Each symbol refers to a Julia variable and can be a hierarchical array of many random variables, e.g. x[1] ~ ... and x[2] ~ ... both have the same symbol x.

source

It contains

  • a logp field for accumulation of the log-density evaluation, and
  • a metadata field for storing information about the realizations of the different variables.

Representing logp is fairly straightforward: we'll just use a Real or an array of Real, depending on the context.

Representing metadata is a bit trickier. This is supposed to contain all the necessary information for each VarName to enable the different executions of the model + extraction of different properties of interest after execution, e.g. the realization / value corresponding to a variable @varname(x).

Note

We want to work with VarName rather than something like Symbol or String as VarName contains additional structural information, e.g. a Symbol("x[1]") can be a result of either var"x[1]" ~ Normal() or x[1] ~ Normal(); these scenarios are disambiguated by VarName.
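
For example (a small sketch; @varname comes from AbstractPPL and is reexported by DynamicPPL):

vn1 = @varname(x[1])       # symbol `x` plus an indexing optic
vn2 = @varname(var"x[1]")  # the literal symbol `Symbol("x[1]")`, no optic

vn1 == vn2                 # false: they print the same but have different structure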

To ensure that VarInfo is simple and intuitive to work with, we want VarInfo, and hence the underlying metadata, to replicate the following functionality of Dict:

  • keys(::Dict): return all the VarNames present in metadata.
  • haskey(::Dict): check if a particular VarName is present in metadata.
  • getindex(::Dict, ::VarName): return the realization corresponding to a particular VarName.
  • setindex!(::Dict, val, ::VarName): set the realization corresponding to a particular VarName.
  • push!(::Dict, ::Pair): add a new key-value pair to the container.
  • delete!(::Dict, ::VarName): delete the realization corresponding to a particular VarName.
  • empty!(::Dict): delete all realizations in metadata.
  • merge(::Dict, ::Dict): merge two metadata structures according to similar rules as Dict.

But for general-purpose samplers, we often want to work with a simple flattened structure, typically a Vector{<:Real}. One can access a vectorised version of a variable's value with the following vector-like functions:

  • getindex_internal(::VarInfo, ::VarName): get the flattened value of a single variable.
  • getindex_internal(::VarInfo, ::Colon): get the flattened values of all variables.
  • getindex_internal(::VarInfo, i::Int): get the ith value of the flattened vector of all values.
  • setindex_internal!(::VarInfo, ::AbstractVector, ::VarName): set the flattened value of a variable.
  • setindex_internal!(::VarInfo, val, i::Int): set the ith value of the flattened vector of all values.
  • length_internal(::VarInfo): return the length of the flat representation of metadata.

The functions have _internal in their name because internally VarInfo always stores values as vectorised.

Moreover, a link transformation can be applied to a VarInfo with link!! (and reversed with invlink!!), which applies a reversible transformation to the internal storage format of a variable that makes the range of the random variable cover all of Euclidean space. getindex_internal and setindex_internal! give direct access to the vectorised value after such a transformation, which is what samplers often need to be able to sample in unconstrained space. One can also manually set a transformation by giving setindex_internal! a fourth, optional argument: a function that maps the internally stored value to the actual value of the variable.
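
A compact sketch of the two interfaces side by side (the model and its name here are illustrative, not from the original docs):

using DynamicPPL, Distributions

@model demo_interface() = s ~ Exponential()

model = demo_interface()
varinfo = VarInfo(model)
vn = @varname(s)

# Dict-like interface:
haskey(varinfo, vn)   # true
varinfo[vn] = 1.0     # set the realization (model representation)
varinfo[vn]           # 1.0

# Vector-like interface:
DynamicPPL.getindex_internal(varinfo, vn) # [1.0]
DynamicPPL.length_internal(varinfo)       # 1

# After linking, the internal representation is the unconstrained (log) value:
varinfo_linked = DynamicPPL.link!!(varinfo, model)
DynamicPPL.getindex_internal(varinfo_linked, vn) # [0.0] == [log(1.0)]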

Finally, we want the underlying representation used in metadata to have a few performance-related properties:

  1. Type-stable when possible, but functional when not.
  2. Efficient storage and iteration when possible, but functional when not.

The "but functional when not" is important as we want to support arbitrary models, which means that we can't always have these performance properties.

In the following sections, we'll outline how we achieve this in VarInfo.

Type-stability

Ensuring type-stability is somewhat non-trivial to address since we want this to be the case even when models mix continuous (typically Float64) and discrete (typically Int) variables.

Suppose we have an implementation of metadata which implements the functionality outlined in the previous section. The way we approach this in VarInfo is to use a NamedTuple with a separate metadata for each distinct Symbol used. For example, if we have a model of the form

using DynamicPPL, Distributions, FillArrays
 
 @model function demo()
     x ~ product_distribution(Fill(Bernoulli(0.5), 2))
     y ~ Normal(0, 1)
     return nothing
 end
 
 # Type-unstable `VarInfo`
 varinfo_untyped = DynamicPPL.untyped_varinfo(demo())
 typeof(varinfo_untyped.metadata)
DynamicPPL.Metadata{Dict{VarName, Int64}, Vector{Distribution}, Vector{VarName}, Vector{Real}, Vector{Set{DynamicPPL.Selector}}}
# Type-stable `VarInfo`
 varinfo_typed = DynamicPPL.typed_varinfo(demo())
typeof(varinfo_typed.metadata)
@NamedTuple{x::DynamicPPL.Metadata{Dict{VarName{:x, typeof(identity)}, Int64}, Vector{Product{Discrete, Bernoulli{Float64}, FillArrays.Fill{Bernoulli{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{VarName{:x, typeof(identity)}}, BitVector, Vector{Set{DynamicPPL.Selector}}}, y::DynamicPPL.Metadata{Dict{VarName{:y, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{VarName{:y, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}

They both work as expected but one results in concrete typing and the other does not:

varinfo_untyped[@varname(x)], varinfo_untyped[@varname(y)]
(Real[true, false], 0.47916069568504627)
varinfo_typed[@varname(x)], varinfo_typed[@varname(y)]
(Bool[1, 0], -1.084451729365911)

Notice that the untyped VarInfo uses Vector{Real} to store the boolean entries while the typed uses Vector{Bool}. This is because the untyped version needs the underlying container to be able to handle both the Bool for x and the Float64 for y, while the typed version can use a Vector{Bool} for x and a Vector{Float64} for y due to its usage of NamedTuple.
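
A quick way to see this (assuming the varinfos from above):

eltype(varinfo_untyped[@varname(x)]) # Real -- abstract, hence not concretely typed
eltype(varinfo_typed[@varname(x)])   # Bool -- concrete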

Warning

Of course, this NamedTuple approach is not necessarily going to help us in scenarios where the Symbol does not correspond to a unique type, e.g.

x[1] ~ Bernoulli(0.5)
 x[2] ~ Normal(0, 1)

In this case we'll end up with a NamedTuple((:x,), Tuple{Vx}) where Vx is a container with eltype Union{Bool, Float64} or something worse. This is not type-stable but will still be functional.

In practice, we rarely observe such mixing of types, therefore in DynamicPPL, and more widely in Turing.jl, we use a NamedTuple approach for type-stability with great success.

Warning

Another downside with such a NamedTuple approach is that if we have a model with lots of tilde-statements, e.g. a ~ Normal(), b ~ Normal(), ..., z ~ Normal(), we will end up with a NamedTuple with 26 entries, potentially leading to long compilation times.

For these scenarios it can be useful to fall back to "untyped" representations.

Hence we obtain a "type-stable when possible"-representation by wrapping it in a NamedTuple and partially resolving the getindex, setindex!, etc. methods at compile-time. When type-stability is not desired, we can simply use a single metadata for all VarNames instead of a NamedTuple wrapping a collection of metadatas.

Efficient storage and iteration

We achieve efficient storage and iteration through the implementation of the metadata, in particular with DynamicPPL.VarNamedVector:

DynamicPPL.VarNamedVectorType
VarNamedVector

A container that stores values in a vectorised form, but indexable by variable names.

A VarNamedVector can be thought of as an ordered mapping from VarNames to pairs of (internal_value, transform). Here internal_value is a vectorised value for the variable and transform is a function such that transform(internal_value) is the "original" value of the variable, the one that the user sees. For instance, if the variable has a matrix value, internal_value could be a flattened Vector of its elements, and transform would be a reshape call.

transform may simply implement vectorisation, but it may also do more. Most importantly, it may implement linking, where the internal storage of a random variable is in a form where all values in Euclidean space are valid. This is useful for sampling, because the sampler can make changes to internal_value without worrying about constraints on the space of the random variable.

The way to access this storage format directly is through the functions getindex_internal and setindex_internal. The transform argument for setindex_internal is optional, by default it is either the identity, or the existing transform if a value already exists for this VarName.

VarNamedVector also provides a Dict-like interface that hides away the internal vectorisation. This can be accessed with getindex and setindex!. setindex! only takes the value, the transform is automatically set to be a simple vectorisation. The only notable deviation from the behavior of a Dict is that setindex! will throw an error if one tries to set a new value for a variable that lives in a different "space" than the old one (e.g. is of a different type or size). This is because setindex! does not change the transform of a variable, e.g. preserve linking, and thus the new value must be compatible with the old transform.

For now, a third value is in fact stored for each VarName: a boolean indicating whether the variable has been transformed to unconstrained Euclidean space or not. This is only in place temporarily due to the needs of our old Gibbs sampler.

Internally, VarNamedVector stores the values of all variables in a single contiguous vector. This makes some operations more efficient, and means that one can access the entire contents of the internal storage quickly with getindex_internal(vnv, :). The other fields of VarNamedVector are mostly used to keep track of which part of the internal storage belongs to which VarName.

Fields

  • varname_to_index: mapping from a VarName to its integer index in varnames, ranges and transforms
  • varnames: vector of VarNames for the variables, where varnames[varname_to_index[vn]] == vn
  • ranges: vector of index ranges in vals corresponding to varnames; each VarName vn has a single index or a set of contiguous indices, such that the values of vn can be found at vals[ranges[varname_to_index[vn]]]
  • vals: vector of values of all variables; the value(s) of vn is/are vals[ranges[varname_to_index[vn]]]
  • transforms: vector of transformations, so that transforms[varname_to_index[vn]] is a callable that transforms the value of vn back to its original space, undoing any linking and vectorisation
  • is_unconstrained: vector of booleans indicating whether a variable has been transformed to unconstrained Euclidean space or not, i.e. whether its domain is all of ℝⁿ. Having is_unconstrained[varname_to_index[vn]] == false does not necessarily mean that a variable is constrained, but rather that it's not guaranteed to not be.
  • num_inactive: mapping from a variable index to the number of inactive entries for that variable. Inactive entries are elements in vals that are not part of the value of any variable. They arise when a variable is set to a new value with a different dimension, in-place. Inactive entries always come after the last active entry for the given variable. See the extended help with ??VarNamedVector for more details.

Extended help

The values for different variables are internally all stored in a single vector. For instance,

julia> using DynamicPPL: ReshapeTransform, VarNamedVector, @varname, setindex!, update!, getindex_internal
 
 julia> vnv = VarNamedVector();
[...]
   3
   4
   5
  6
source

In a DynamicPPL.VarNamedVector{<:VarName,T}, we achieve the desiderata by storing the values for different VarNames contiguously in a Vector{T} and keeping track of which ranges correspond to which VarNames.

This does require a bit of book-keeping, in particular when it comes to insertions and deletions. Internally, this is handled by assigning each VarName a unique Int index in the varname_to_index field, which is then used to index into the following fields:

  • varnames::Vector{<:VarName}: the VarNames in the order they appear in the Vector{T}.
  • ranges::Vector{UnitRange{Int}}: the ranges of indices in the Vector{T} that correspond to each VarName.
  • transforms::Vector: the transforms associated with each VarName.

Mutating functions, e.g. setindex_internal!(vnv::VarNamedVector, val, vn::VarName), are then treated according to the following rules:

  1. If vn is not already present: add it to the end of vnv.varnames, add the val to the underlying vnv.vals, etc.

  2. If vn is already present in vnv:

    1. If val has the same length as the existing value for vn: replace existing value.
    2. If val has a smaller length than the existing value for vn: replace existing value and mark the remaining indices as "inactive" by increasing the entry in vnv.num_inactive field.
    3. If val has a larger length than the existing value for vn: expand the underlying vnv.vals to accommodate the new value, update all VarNames occurring after vn, and update the vnv.ranges to point to the new range for vn.

This means that VarNamedVector is allowed to grow as needed, while "shrinking" (i.e. insertion of smaller elements) is handled by simply marking the redundant indices as "inactive". This turns out to be efficient for use-cases that we are generally interested in.
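
A short sketch of these rules in action; the value-then-name argument order for update! is assumed here by analogy with setindex_internal!:

using DynamicPPL: VarNamedVector, @varname, update!, has_inactive, num_allocated

vnv = VarNamedVector(@varname(x) => [1.0, 2.0, 3.0])

# Rule 2.2: a *smaller* value marks the leftover index as inactive.
update!(vnv, [4.0, 5.0], @varname(x))  # (argument order assumed)
has_inactive(vnv)  # true
num_allocated(vnv) # 3: one inactive entry is still allocated

# Rule 2.3: a *larger* value grows the underlying vector.
update!(vnv, [6.0, 7.0, 8.0, 9.0], @varname(x))
num_allocated(vnv) # 4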

For example, we want to optimize code-paths which effectively boil down to inner-loop in the following example:

# Construct a `VarInfo` with types inferred from `model`.
 varinfo = VarInfo(model)
 
 # Repeatedly sample from `model`.
 for _ in 1:num_samples
     # Sample from `model`, mutating `varinfo` in place (loop body elided in the source).
 
     # Do something with `varinfo`.
     # ...
 end

There are typically a few scenarios where we encounter changing representation sizes of a random variable x:

  1. We're working with a transformed version of x which is represented in a lower-dimensional space, e.g. transforming an x ~ LKJ(2, 1) to an unconstrained y = f(x) takes us from a 2-by-2 Matrix{Float64} to a length-1 Vector{Float64}.
  2. x has a random size, e.g. in a mixture model with a prior on the number of components. Here the size of x can vary wildly between realizations of the Model.

In scenario (1), we're usually shrinking the representation of x, and so we end up not making any allocations for the underlying Vector{T} but instead just marking the redundant part as "inactive".

In scenario (2), we end up increasing the allocated memory for the randomly sized x, eventually leading to a vector that is large enough to hold realizations without needing to reallocate. But this can still lead to unnecessary memory usage, which might be undesirable. Hence one has to make a decision regarding the trade-off between memory usage and performance for the use-case at hand.

To help with this, we have the following functions:

DynamicPPL.num_allocatedFunction
num_allocated(vnv::VarNamedVector)
 num_allocated(vnv::VarNamedVector[, vn::VarName])
num_allocated(vnv::VarNamedVector[, idx::Int])

Return the number of allocated entries in vnv, both active and inactive.

If either a VarName or an Int index is specified, only count entries allocated for that variable.

Allocated entries take up memory in vnv.vals, but, if inactive, may not currently hold any meaningful data. One can remove them with contiguify!, but doing so may cause more memory allocations in the future if variables change dimension.

source
DynamicPPL.contiguify!Function
contiguify!(vnv::VarNamedVector)

Re-contiguify the underlying vector and shrink if possible.

Examples

julia> using DynamicPPL: VarNamedVector, @varname, contiguify!, update!, has_inactive
 
 julia> vnv = VarNamedVector(@varname(x) => [1.0, 2.0, 3.0], @varname(y) => [3.0]);
 
[...]
 julia> vnv[@varname(x)]  # All the values are still there.
 2-element Vector{Float64}:
  23.0
 24.0
source

For example, one might encounter the following scenario:

vnv = DynamicPPL.VarNamedVector(@varname(x) => [true])
 println("Before insertion: number of allocated entries  $(DynamicPPL.num_allocated(vnv))")
 
 for i in 1:5
     x = fill(true, rand(1:100))
     DynamicPPL.update!(vnv, x, @varname(x))
     println(
         "After insertion #$(i) of length $(length(x)): number of allocated entries  $(DynamicPPL.num_allocated(vnv))",
     )
 end
Before insertion: number of allocated entries  1
After insertion #1 of length 9: number of allocated entries  9
After insertion #2 of length 34: number of allocated entries  34
After insertion #3 of length 88: number of allocated entries  88
After insertion #4 of length 77: number of allocated entries  88
After insertion #5 of length 7: number of allocated entries  88

We can then insert a call to DynamicPPL.contiguify! after every insertion whenever the allocation grows too large to reduce overall memory usage:

vnv = DynamicPPL.VarNamedVector(@varname(x) => [true])
 println("Before insertion: number of allocated entries  $(DynamicPPL.num_allocated(vnv))")
 
 for i in 1:5
     x = fill(true, rand(1:100))
     DynamicPPL.update!(vnv, x, @varname(x))
     if DynamicPPL.num_allocated(vnv) > 10
         DynamicPPL.contiguify!(vnv)
     end
     println(
         "After insertion #$(i) of length $(length(x)): number of allocated entries  $(DynamicPPL.num_allocated(vnv))",
     )
 end
Before insertion: number of allocated entries  1
After insertion #1 of length 64: number of allocated entries  64
After insertion #2 of length 71: number of allocated entries  71
After insertion #3 of length 8: number of allocated entries  8
After insertion #4 of length 24: number of allocated entries  24
After insertion #5 of length 36: number of allocated entries  36

This does incur a runtime cost as it requires re-allocation of the ranges in addition to a resize! of the underlying Vector{T}. However, this also ensures that the underlying Vector{T} is contiguous, which is important for performance. Hence, if we're about to do a lot of work with the VarNamedVector without insertions, etc., it can be worth it to do a sweep to ensure that the underlying Vector{T} is contiguous.

Note

Higher-dimensional arrays, e.g. Matrix, are handled by simply vectorizing them before storing them in the Vector{T}, and composing the VarName's transformation with a DynamicPPL.ReshapeTransform.

Continuing from the example from the previous section, we can use a VarInfo with a VarNamedVector as the metadata field:

# Type-unstable
 varinfo_untyped_vnv = DynamicPPL.VectorVarInfo(varinfo_untyped)
varinfo_untyped_vnv[@varname(x)], varinfo_untyped_vnv[@varname(y)]
(Real[true, false], 0.47916069568504627)
# Type-stable
 varinfo_typed_vnv = DynamicPPL.VectorVarInfo(varinfo_typed)
varinfo_typed_vnv[@varname(x)], varinfo_typed_vnv[@varname(y)]
(Bool[1, 0], -1.084451729365911)

If we now try to delete! @varname(x)

haskey(varinfo_untyped_vnv, @varname(x))
true
DynamicPPL.has_inactive(varinfo_untyped_vnv.metadata)
false
# `delete!`
 DynamicPPL.delete!(varinfo_untyped_vnv.metadata, @varname(x))
 DynamicPPL.has_inactive(varinfo_untyped_vnv.metadata)
false
haskey(varinfo_untyped_vnv, @varname(x))
false

Or insert a differently-sized value for @varname(x)

DynamicPPL.insert!(varinfo_untyped_vnv.metadata, fill(true, 1), @varname(x))
 varinfo_untyped_vnv[@varname(x)]
1-element Vector{Real}:
[...]
 
 julia> ForwardDiff.gradient(f, [1.0])
 1-element Vector{Float64}:
 2.0
source
DynamicPPL.values_asMethod
values_as(vnv::VarNamedVector[, T])

Return the values/realizations in vnv as type T, if implemented.

If no type T is provided, return values as stored in vnv.

Examples

julia> using DynamicPPL: VarNamedVector
 
 julia> vnv = VarNamedVector(@varname(x) => 1, @varname(y) => [2.0]);
 
[...]
 true
 
 julia> values_as(vnv, NamedTuple) == (x = 1.0, y = [2.0])
true
source
diff --git a/dev/objects.inv b/dev/objects.inv
index 37c766573..5bb96d513 100644
Binary files a/dev/objects.inv and b/dev/objects.inv differ
diff --git a/index.html b/index.html
index 3ac259691..6a5afc301 100644
--- a/index.html
+++ b/index.html