diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 9f9b678e..695a8467 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-10-06T01:25:10","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-10-07T01:26:04","documenter_version":"1.7.0"}} \ No newline at end of file diff --git a/dev/api/data/index.html b/dev/api/data/index.html index 22a5ff1c..a112077c 100644 --- a/dev/api/data/index.html +++ b/dev/api/data/index.html @@ -48,4 +48,4 @@ > posterior julia> to_netcdf(idata, "data.nc") -"data.nc"source +"data.nc"source diff --git a/dev/api/dataset/index.html b/dev/api/dataset/index.html index 4e77c46d..dab523a9 100644 --- a/dev/api/dataset/index.html +++ b/dev/api/dataset/index.html @@ -10,4 +10,4 @@ data::NamedTuple, dims::Tuple{Vararg{DimensionalData.Dimension}}; metadata=DimensionalData.NoMetadata(), -)

In most cases, use convert_to_dataset to create a Dataset instead of directly using a constructor.

source

General conversion

InferenceObjects.convert_to_datasetFunction
convert_to_dataset(obj; group = :posterior, kwargs...) -> Dataset

Convert a supported object to a Dataset.

In most cases, this function calls convert_to_inference_data and returns the corresponding group.

source
InferenceObjects.namedtuple_to_datasetFunction
namedtuple_to_dataset(data; kwargs...) -> Dataset

Convert a NamedTuple mapping variable names to arrays to a Dataset.

Any non-array values will be converted to 0-dimensional arrays.

Keywords

  • attrs::AbstractDict{<:AbstractString}: a collection of metadata to attach to the dataset, in addition to defaults. Values should be JSON serializable.
  • library::Union{String,Module}: library used for performing inference. Will be attached to the attrs metadata.
  • dims: a collection mapping variable names to collections of objects containing dimension names. Acceptable such objects are:
    • Symbol: dimension name
    • Type{<:DimensionalData.Dimension}: dimension type
    • DimensionalData.Dimension: dimension, potentially with indices
    • Nothing: no dimension name provided, dimension name is automatically generated
  • coords: a collection indexable by dimension name specifying the indices of the given dimension. If indices for a dimension in dims are provided, they are used even if the dimension contains its own indices. If a dimension is missing, its indices are automatically generated.
source

DimensionalData

As a DimensionalData.AbstractDimStack, Dataset also implements the AbstractDimStack API and can be used like a DimStack. See DimensionalData's documentation for example usage.

Tables interface

Dataset implements the Tables interface. This allows Datasets to be used as sources for any function that can accept a table. For example, it's straightforward to:

+)

In most cases, use convert_to_dataset to create a Dataset instead of directly using a constructor.

source

General conversion

InferenceObjects.convert_to_datasetFunction
convert_to_dataset(obj; group = :posterior, kwargs...) -> Dataset

Convert a supported object to a Dataset.

In most cases, this function calls convert_to_inference_data and returns the corresponding group.

source
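For instance, a minimal sketch (assuming InferenceObjects is loaded and draws are stored as (draw, chain, ...) arrays):

using InferenceObjects

posterior = (; μ=randn(1000, 4), θ=randn(1000, 4, 8))
ds = convert_to_dataset(posterior)  # converts via convert_to_inference_data and returns the :posterior group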
InferenceObjects.namedtuple_to_datasetFunction
namedtuple_to_dataset(data; kwargs...) -> Dataset

Convert a NamedTuple mapping variable names to arrays to a Dataset.

Any non-array values will be converted to 0-dimensional arrays.

Keywords

  • attrs::AbstractDict{<:AbstractString}: a collection of metadata to attach to the dataset, in addition to defaults. Values should be JSON serializable.
  • library::Union{String,Module}: library used for performing inference. Will be attached to the attrs metadata.
  • dims: a collection mapping variable names to collections of objects containing dimension names. Acceptable such objects are:
    • Symbol: dimension name
    • Type{<:DimensionalData.Dimension}: dimension type
    • DimensionalData.Dimension: dimension, potentially with indices
    • Nothing: no dimension name provided, dimension name is automatically generated
  • coords: a collection indexable by dimension name specifying the indices of the given dimension. If indices for a dimension in dims are provided, they are used even if the dimension contains its own indices. If a dimension is missing, its indices are automatically generated.
source
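A minimal sketch of these keywords (the library name and dimension labels here are illustrative assumptions):

using InferenceObjects

data = (; θ=randn(1000, 4, 8), σ=2.5)  # the scalar σ becomes a 0-dimensional array
ds = namedtuple_to_dataset(
    data;
    library="MySampler",    # hypothetical inference library; stored in the attrs metadata
    dims=(; θ=[:school]),   # name θ's trailing dimension beyond the default draw/chain dims
    coords=(; school=1:8),  # indices for the :school dimension
)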

DimensionalData

As a DimensionalData.AbstractDimStack, Dataset also implements the AbstractDimStack API and can be used like a DimStack. See DimensionalData's documentation for example usage.
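Continuing with the ds sketched above, a few DimStack-style operations (a sketch; see DimensionalData's documentation for the full API):

using DimensionalData

keys(ds)          # layer (variable) names, as for any DimStack
ds.θ              # access a single variable as a DimArray
ds[school=At(1)]  # select along a named dimension with a selector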

Tables interface

Dataset implements the Tables interface. This allows Datasets to be used as sources for any function that can accept a table. For example, it's straightforward to:
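For instance, a sketch of one such use with the ds from above, assuming DataFrames.jl is available:

using DataFrames

df = DataFrame(ds)  # any Tables.jl sink works; dimensions and variables become columns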

diff --git a/dev/api/diagnostics/index.html b/dev/api/diagnostics/index.html index 1e44d15e..77630959 100644 --- a/dev/api/diagnostics/index.html +++ b/dev/api/diagnostics/index.html @@ -64,4 +64,4 @@ julia> value = rstar(evotree_deterministic, samples); julia> round(value; digits=2) -1.0

References

Lambert, B., & Vehtari, A. (2020). $R^*$: A robust MCMC convergence diagnostic with uncertainty using decision tree classifiers.

source
+1.0

References

Lambert, B., & Vehtari, A. (2020). $R^*$: A robust MCMC convergence diagnostic with uncertainty using decision tree classifiers.

source
diff --git a/dev/api/index.html b/dev/api/index.html index 0cf02d77..d1bf3a5a 100644 --- a/dev/api/index.html +++ b/dev/api/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-W1G68W77YV', {'page_path': location.pathname + location.search + location.hash}); -
+
diff --git a/dev/api/inference_data/index.html b/dev/api/inference_data/index.html index baa0b75e..e24b5018 100644 --- a/dev/api/inference_data/index.html +++ b/dev/api/inference_data/index.html @@ -172,4 +172,4 @@ julia> idata_merged = merge(idata1, idata2) InferenceData with groups: > posterior - > priorsource + > priorsource diff --git a/dev/api/stats/index.html b/dev/api/stats/index.html index 36849c5e..3c168dbd 100644 --- a/dev/api/stats/index.html +++ b/dev/api/stats/index.html @@ -257,4 +257,4 @@ "Hotchkiss" 0.295321 "Lawrenceville" 0.403318 "St. Paul's" 0.902508 - "Mt. Hermon" 0.655275source

Utilities

PosteriorStats.smooth_dataFunction
smooth_data(y; dims=:, interp_method=CubicSpline, offset_frac=0.01)

Smooth y along dims using interp_method.

interp_method is a 2-argument callable that takes the arguments y and x and returns a DataInterpolations.jl interpolation method, defaulting to a cubic spline interpolator.

offset_frac is the fraction of the length of y to use as an offset when interpolating.

source
+ "Mt. Hermon" 0.655275source

Utilities

PosteriorStats.smooth_dataFunction
smooth_data(y; dims=:, interp_method=CubicSpline, offset_frac=0.01)

Smooth y along dims using interp_method.

interp_method is a 2-argument callable that takes the arguments y and x and returns a DataInterpolations.jl interpolation method, defaulting to a cubic spline interpolator.

offset_frac is the fraction of the length of y to use as an offset when interpolating.

source
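A minimal sketch of a call (qualified here, in case smooth_data is not exported):

using PosteriorStats

y_smooth = PosteriorStats.smooth_data(randn(100))           # smooth a single series
y_cols = PosteriorStats.smooth_data(randn(50, 8); dims=1)   # smooth each column along dimension 1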
diff --git a/dev/creating_custom_plots/index.html b/dev/creating_custom_plots/index.html index 07116243..00e49dab 100644 --- a/dev/creating_custom_plots/index.html +++ b/dev/creating_custom_plots/index.html @@ -364,4 +364,4 @@

Environment JULIA_REVISE_WORKER_ONLY = 1 - + diff --git a/dev/index.html b/dev/index.html index 827ecb89..84b6a866 100644 --- a/dev/index.html +++ b/dev/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-W1G68W77YV', {'page_path': location.pathname + location.search + location.hash}); -

ArviZ.jl: Exploratory analysis of Bayesian models in Julia

ArviZ.jl is a Julia meta-package for exploratory analysis of Bayesian models. It is part of the ArviZ project, which also includes a related Python package.

ArviZ consists of and re-exports the following subpackages, along with extensions integrating them with InferenceObjects:

Additional functionality can be loaded with the following packages:

See the navigation bar for more useful packages.

Installation

From the Julia REPL, type ] to enter the Pkg REPL mode and run

pkg> add ArviZ

Usage

See the Quickstart for example usage and the API Overview for descriptions of the functions.

Extending ArviZ.jl

To use a custom data type with ArviZ.jl, simply overload InferenceObjects.convert_to_inference_data to convert your input(s) to an InferenceObjects.InferenceData.

+

ArviZ.jl: Exploratory analysis of Bayesian models in Julia

ArviZ.jl is a Julia meta-package for exploratory analysis of Bayesian models. It is part of the ArviZ project, which also includes a related Python package.

ArviZ consists of and re-exports the following subpackages, along with extensions integrating them with InferenceObjects:

Additional functionality can be loaded with the following packages:

See the navigation bar for more useful packages.

Installation

From the Julia REPL, type ] to enter the Pkg REPL mode and run

pkg> add ArviZ

Usage

See the Quickstart for example usage and the API Overview for descriptions of the functions.

Extending ArviZ.jl

To use a custom data type with ArviZ.jl, simply overload InferenceObjects.convert_to_inference_data to convert your input(s) to an InferenceObjects.InferenceData.
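A sketch of such an overload for a hypothetical container of posterior draws:

using InferenceObjects

struct MyFitResult  # hypothetical: draws stored as name => (ndraws, nchains, ...) arrays
    draws::Dict{Symbol,Array{Float64}}
end

function InferenceObjects.convert_to_inference_data(result::MyFitResult; kwargs...)
    # delegate to the existing AbstractDict method
    return convert_to_inference_data(result.draws; kwargs...)
end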

diff --git a/dev/quickstart/index.html b/dev/quickstart/index.html index 4c4f5401..c55fe575 100644 --- a/dev/quickstart/index.html +++ b/dev/quickstart/index.html @@ -132,7 +132,7 @@ :θ eltype: Float64 dims: draw, chain, school size: 1000×4×8 ├──────────────────────────────────────────────────────────────────── metadata ┤ Dict{String, Any} with 2 entries: - "created_at" => "2024-10-06T01:21:12.19" + "created_at" => "2024-10-07T01:21:45.317" "inference_library" => "Turing"
sample_stats
╭────────────────╮
 │ 1000×4 Dataset │
@@ -153,7 +153,7 @@
   :step_size_nom    eltype: Float64 dims: draw, chain size: 1000×4
 ├───────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 2 entries:
-  "created_at" => "2024-10-06T01:21:12.134"
+  "created_at" => "2024-10-07T01:21:45.258"
   "inference_library" => "Turing"
 
@@ -173,7 +173,7 @@ :θ eltype: Float64 dims: draw, chain, school size: 1000×4×8 ├──────────────────────────────────────────────────────────────────────────── metadata ┤ Dict{String, Any} with 2 entries: - "created_at" => "2024-10-06T01:21:12.19" + "created_at" => "2024-10-07T01:21:45.317" "inference_library" => "Turing" @@ -259,7 +259,7 @@ :θ eltype: Float64 dims: draw, chain, school size: 1000×4×8 ├──────────────────────────────────────────────────────────────────── metadata ┤ Dict{String, Any} with 3 entries: - "created_at" => "2024-10-06T01:21:40.45" + "created_at" => "2024-10-07T01:22:16.652" "inference_library_version" => "0.34.1" "inference_library" => "Turing"
posterior_predictive
╭──────────────────╮
@@ -272,7 +272,7 @@
   :y eltype: Float64 dims: draw, chain, school size: 1000×4×8
 ├──────────────────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 3 entries:
-  "created_at" => "2024-10-06T01:21:40.177"
+  "created_at" => "2024-10-07T01:22:16.334"
   "inference_library_version" => "0.34.1"
   "inference_library" => "Turing"
 
log_likelihood
╭──────────────────╮
@@ -285,7 +285,7 @@
   :y eltype: Float64 dims: draw, chain, school size: 1000×4×8
 ├──────────────────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 3 entries:
-  "created_at" => "2024-10-06T01:21:40.321"
+  "created_at" => "2024-10-07T01:22:16.516"
   "inference_library_version" => "0.34.1"
   "inference_library" => "Turing"
 
sample_stats
╭────────────────╮
@@ -307,7 +307,7 @@
   :step_size_nom    eltype: Float64 dims: draw, chain size: 1000×4
 ├───────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 3 entries:
-  "created_at" => "2024-10-06T01:21:40.45"
+  "created_at" => "2024-10-07T01:22:16.652"
   "inference_library_version" => "0.34.1"
   "inference_library" => "Turing"
 
prior
╭──────────────────╮
@@ -322,7 +322,7 @@
   :θ eltype: Float64 dims: draw, chain, school size: 1000×1×8
 ├──────────────────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 3 entries:
-  "created_at" => "2024-10-06T01:21:40.914"
+  "created_at" => "2024-10-07T01:22:17.161"
   "inference_library_version" => "0.34.1"
   "inference_library" => "Turing"
 
prior_predictive
╭──────────────────╮
@@ -335,7 +335,7 @@
   :y eltype: Float64 dims: draw, chain, school size: 1000×1×8
 ├──────────────────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 3 entries:
-  "created_at" => "2024-10-06T01:21:40.786"
+  "created_at" => "2024-10-07T01:22:17.027"
   "inference_library_version" => "0.34.1"
   "inference_library" => "Turing"
 
sample_stats_prior
╭────────────────╮
@@ -346,7 +346,7 @@
   :lp eltype: Float64 dims: draw, chain size: 1000×1
 ├─────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 3 entries:
-  "created_at" => "2024-10-06T01:21:40.874"
+  "created_at" => "2024-10-07T01:22:17.118"
   "inference_library_version" => "0.34.1"
   "inference_library" => "Turing"
 
observed_data
╭───────────────────╮
@@ -357,7 +357,7 @@
   :y eltype: Float64 dims: school size: 8
 ├──────────────────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 3 entries:
-  "created_at" => "2024-10-06T01:21:41.122"
+  "created_at" => "2024-10-07T01:22:17.343"
   "inference_library_version" => "0.34.1"
   "inference_library" => "Turing"
 
@@ -455,7 +455,7 @@ :theta eltype: Float64 dims: draw, chain, school size: 1000×4×8 ├──────────────────────────────────────────────────────────────────── metadata ┤ Dict{String, Any} with 1 entry: - "created_at" => "2024-10-06T01:22:21.392" + "created_at" => "2024-10-07T01:23:00.581"
posterior_predictive
╭──────────────────╮
 │ 1000×4×8 Dataset │
 ├──────────────────┴───────────────────────────────────────────────────── dims ┐
@@ -466,7 +466,7 @@
   :y_hat eltype: Float64 dims: draw, chain, school size: 1000×4×8
 ├──────────────────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 1 entry:
-  "created_at" => "2024-10-06T01:22:20.986"
+  "created_at" => "2024-10-07T01:23:00.124"
 
log_likelihood
╭──────────────────╮
 │ 1000×4×8 Dataset │
 ├──────────────────┴───────────────────────────────────────────────────── dims ┐
@@ -477,7 +477,7 @@
   :log_lik eltype: Float64 dims: draw, chain, school size: 1000×4×8
 ├──────────────────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 1 entry:
-  "created_at" => "2024-10-06T01:22:21.309"
+  "created_at" => "2024-10-07T01:23:00.487"
 
sample_stats
╭────────────────╮
 │ 1000×4 Dataset │
 ├────────────────┴ dims ┐
@@ -492,7 +492,7 @@
   :step_size       eltype: Float64 dims: draw, chain size: 1000×4
 ├──────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 1 entry:
-  "created_at" => "2024-10-06T01:22:21.084"
+  "created_at" => "2024-10-07T01:23:00.233"
 
observed_data
╭───────────────────╮
 │ 8-element Dataset │
 ├───────────────────┴──────────────────────────────────────────────────── dims ┐
@@ -501,7 +501,7 @@
   :y eltype: Float64 dims: school size: 8
 ├──────────────────────────────────────────────────────────────────── metadata ┤
   Dict{String, Any} with 1 entry:
-  "created_at" => "2024-10-06T01:22:21.439"
+  "created_at" => "2024-10-07T01:23:00.635"
 
begin
@@ -581,4 +581,4 @@
   JULIA_PYTHONCALL_EXE = /home/runner/work/ArviZ.jl/ArviZ.jl/docs/.CondaPkg/env/bin/python
 
- + diff --git a/dev/search_index.js b/dev/search_index.js index 1fa8ca8e..8028b47c 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"api/inference_data/#inferencedata-api","page":"InferenceData","title":"InferenceData","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"Pages = [\"inference_data.md\"]","category":"page"},{"location":"api/inference_data/#Type-definition","page":"InferenceData","title":"Type definition","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"InferenceData","category":"page"},{"location":"api/inference_data/#InferenceObjects.InferenceData","page":"InferenceData","title":"InferenceObjects.InferenceData","text":"InferenceData{group_names,group_types}\n\nContainer for inference data storage using DimensionalData.\n\nThis object implements the InferenceData schema.\n\nInternally, groups are stored in a NamedTuple, which can be accessed using parent(::InferenceData).\n\nConstructors\n\nInferenceData(groups::NamedTuple)\nInferenceData(; groups...)\n\nConstruct an inference data from either a NamedTuple or keyword arguments of groups.\n\nGroups must be Dataset objects.\n\nInstead of directly creating an InferenceData, use the exported from_xyz functions or convert_to_inference_data.\n\n\n\n\n\n","category":"type"},{"location":"api/inference_data/#Property-interface","page":"InferenceData","title":"Property interface","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"getproperty\npropertynames","category":"page"},{"location":"api/inference_data/#Base.getproperty","page":"InferenceData","title":"Base.getproperty","text":"getproperty(data::InferenceData, name::Symbol) -> Dataset\n\nGet group with the specified name.\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#Base.propertynames","page":"InferenceData","title":"Base.propertynames","text":"propertynames(data::InferenceData) -> Tuple{Symbol}\n\nGet names of groups\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#Indexing-interface","page":"InferenceData","title":"Indexing interface","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"getindex\nBase.setindex","category":"page"},{"location":"api/inference_data/#Base.getindex","page":"InferenceData","title":"Base.getindex","text":"Base.getindex(data::InferenceData, groups::Symbol; coords...) -> Dataset\nBase.getindex(data::InferenceData, groups; coords...) -> InferenceData\n\nReturn a new InferenceData containing the specified groups sliced to the specified coords.\n\ncoords specifies a dimension name mapping to an index, a DimensionalData.Selector, or an IntervalSets.AbstractInterval.\n\nIf one or more groups lack the specified dimension, a warning is raised but can be ignored. 
All groups that contain the dimension must also contain the specified indices, or an exception will be raised.\n\nExamples\n\nSelect data from all groups for just the specified id values.\n\njulia> using InferenceObjects, DimensionalData\n\njulia> idata = from_namedtuple(\n (θ=randn(4, 100, 4), τ=randn(4, 100));\n prior=(θ=randn(4, 100, 4), τ=randn(4, 100)),\n observed_data=(y=randn(4),),\n dims=(θ=[:id], y=[:id]),\n coords=(id=[\"a\", \"b\", \"c\", \"d\"],),\n )\nInferenceData with groups:\n > posterior\n > prior\n > observed_data\n\njulia> idata.posterior\nDataset with dimensions:\n Dim{:chain} Sampled 1:4 ForwardOrdered Regular Points,\n Dim{:draw} Sampled 1:100 ForwardOrdered Regular Points,\n Dim{:id} Categorical String[a, b, c, d] ForwardOrdered\nand 2 layers:\n :θ Float64 dims: Dim{:chain}, Dim{:draw}, Dim{:id} (4×100×4)\n :τ Float64 dims: Dim{:chain}, Dim{:draw} (4×100)\n\nwith metadata Dict{String, Any} with 1 entry:\n \"created_at\" => \"2022-08-11T11:15:21.4\"\n\njulia> idata_sel = idata[id=At([\"a\", \"b\"])]\nInferenceData with groups:\n > posterior\n > prior\n > observed_data\n\njulia> idata_sel.posterior\nDataset with dimensions:\n Dim{:chain} Sampled 1:4 ForwardOrdered Regular Points,\n Dim{:draw} Sampled 1:100 ForwardOrdered Regular Points,\n Dim{:id} Categorical String[a, b] ForwardOrdered\nand 2 layers:\n :θ Float64 dims: Dim{:chain}, Dim{:draw}, Dim{:id} (4×100×2)\n :τ Float64 dims: Dim{:chain}, Dim{:draw} (4×100)\n\nwith metadata Dict{String, Any} with 1 entry:\n \"created_at\" => \"2022-08-11T11:15:21.4\"\n\nSelect data from just the posterior, returning a Dataset if the indices index more than one element from any of the variables:\n\njulia> idata[:observed_data, id=At([\"a\"])]\nDataset with dimensions:\n Dim{:id} Categorical String[a] ForwardOrdered\nand 1 layer:\n :y Float64 dims: Dim{:id} (1)\n\nwith metadata Dict{String, Any} with 1 entry:\n \"created_at\" => \"2022-08-11T11:19:25.982\"\n\nNote that if a single index is provided, the behavior is still to slice so that the dimension is preserved.\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#Base.setindex","page":"InferenceData","title":"Base.setindex","text":"Base.setindex(data::InferenceData, group::Dataset, name::Symbol) -> InferenceData\n\nCreate a new InferenceData containing the group with the specified name.\n\nIf a group with name is already in data, it is replaced.\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#Iteration-interface","page":"InferenceData","title":"Iteration interface","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"InferenceData also implements the same iteration interface as its underlying NamedTuple. That is, iterating over an InferenceData iterates over its groups.","category":"page"},{"location":"api/inference_data/#General-conversion","page":"InferenceData","title":"General conversion","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"convert_to_inference_data\nfrom_dict\nfrom_namedtuple","category":"page"},{"location":"api/inference_data/#InferenceObjects.convert_to_inference_data","page":"InferenceData","title":"InferenceObjects.convert_to_inference_data","text":"convert_to_inference_data(obj; group, kwargs...) 
-> InferenceData\n\nConvert a supported object to an InferenceData object.\n\nIf obj converts to a single dataset, group specifies which dataset in the resulting InferenceData that is.\n\nSee convert_to_dataset\n\nArguments\n\nobj can be many objects. Basic supported types are:\nInferenceData: return unchanged\nDataset/DimensionalData.AbstractDimStack: add to InferenceData as the only group\nNamedTuple/AbstractDict: create a Dataset as the only group\nAbstractArray{<:Real}: create a Dataset as the only group, given an arbitrary name, if the name is not set\n\nMore specific types may be documented separately.\n\nKeywords\n\ngroup::Symbol = :posterior: If obj converts to a single dataset, assign the resulting dataset to this group.\ndims: a collection mapping variable names to collections of objects containing dimension names. Acceptable such objects are:\nSymbol: dimension name\nType{<:DimensionsionalData.Dimension}: dimension type\nDimensionsionalData.Dimension: dimension, potentially with indices\nNothing: no dimension name provided, dimension name is automatically generated\ncoords: a collection indexable by dimension name specifying the indices of the given dimension. If indices for a dimension in dims are provided, they are used even if the dimension contains its own indices. If a dimension is missing, its indices are automatically generated.\nkwargs: remaining keywords forwarded to converter functions\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#InferenceObjects.from_dict","page":"InferenceData","title":"InferenceObjects.from_dict","text":"from_dict(posterior::AbstractDict; kwargs...) -> InferenceData\n\nConvert a Dict to an InferenceData.\n\nArguments\n\nposterior: The data to be converted. Its strings must be Symbol or AbstractString, and its values must be arrays.\n\nKeywords\n\nposterior_predictive::Any=nothing: Draws from the posterior predictive distribution\nsample_stats::Any=nothing: Statistics of the posterior sampling process\npredictions::Any=nothing: Out-of-sample predictions for the posterior.\nprior::Dict=nothing: Draws from the prior\nprior_predictive::Any=nothing: Draws from the prior predictive distribution\nsample_stats_prior::Any=nothing: Statistics of the prior sampling process\nobserved_data::NamedTuple: Observed data on which the posterior is conditional. It should only contain data which is modeled as a random variable. Keys are parameter names and values.\nconstant_data::NamedTuple: Model constants, data included in the model which is not modeled as a random variable. Keys are parameter names and values.\npredictions_constant_data::NamedTuple: Constants relevant to the model predictions (i.e. new x values in a linear regression).\nlog_likelihood: Pointwise log-likelihood for the data. 
It is recommended to use this argument as a NamedTuple whose keys are observed variable names and whose values are log likelihood arrays.\nlibrary: Name of library that generated the draws\ncoords: Map from named dimension to named indices\ndims: Map from variable name to names of its dimensions\n\nReturns\n\nInferenceData: The data with groups corresponding to the provided data\n\nExamples\n\nusing InferenceObjects\nnchains = 2\nndraws = 100\n\ndata = Dict(\n :x => rand(ndraws, nchains),\n :y => randn(2, ndraws, nchains),\n :z => randn(3, 2, ndraws, nchains),\n)\nidata = from_dict(data)\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#InferenceObjects.from_namedtuple","page":"InferenceData","title":"InferenceObjects.from_namedtuple","text":"from_namedtuple(posterior::NamedTuple; kwargs...) -> InferenceData\nfrom_namedtuple(posterior::Vector{Vector{<:NamedTuple}}; kwargs...) -> InferenceData\nfrom_namedtuple(\n posterior::NamedTuple,\n sample_stats::Any,\n posterior_predictive::Any,\n predictions::Any,\n log_likelihood::Any;\n kwargs...\n) -> InferenceData\n\nConvert a NamedTuple or container of NamedTuples to an InferenceData.\n\nIf containers are passed, they are flattened into a single NamedTuple with array elements whose first dimensions correspond to the dimensions of the containers.\n\nArguments\n\nposterior: The data to be converted. It may be of the following types:\n::NamedTuple: The keys are the variable names and the values are arrays with dimensions (ndraws, nchains[, sizes...]).\n::Vector{Vector{<:NamedTuple}}: A vector of length nchains whose elements have length ndraws.\n\nKeywords\n\nposterior_predictive::Any=nothing: Draws from the posterior predictive distribution\nsample_stats::Any=nothing: Statistics of the posterior sampling process\npredictions::Any=nothing: Out-of-sample predictions for the posterior.\nprior=nothing: Draws from the prior. Accepts the same types as posterior.\nprior_predictive::Any=nothing: Draws from the prior predictive distribution\nsample_stats_prior::Any=nothing: Statistics of the prior sampling process\nobserved_data::NamedTuple: Observed data on which the posterior is conditional. It should only contain data which is modeled as a random variable. Keys are parameter names and values.\nconstant_data::NamedTuple: Model constants, data included in the model which is not modeled as a random variable. Keys are parameter names and values.\npredictions_constant_data::NamedTuple: Constants relevant to the model predictions (i.e. new x values in a linear regression).\nlog_likelihood: Pointwise log-likelihood for the data. It is recommended to use this argument as a NamedTuple whose keys are observed variable names and whose values are log likelihood arrays.\nlibrary: Name of library that generated the draws\ncoords: Map from named dimension to named indices\ndims: Map from variable name to names of its dimensions\n\nReturns\n\nInferenceData: The data with groups corresponding to the provided data\n\nnote: Note\nIf a NamedTuple is provided for observed_data, constant_data, or predictionsconstantdata`, any non-array values (e.g. 
integers) are converted to 0-dimensional arrays.\n\nExamples\n\nusing InferenceObjects\nnchains = 2\nndraws = 100\n\ndata1 = (\n x=rand(ndraws, nchains), y=randn(ndraws, nchains, 2), z=randn(ndraws, nchains, 3, 2)\n)\nidata1 = from_namedtuple(data1)\n\ndata2 = [[(x=rand(), y=randn(2), z=randn(3, 2)) for _ in 1:ndraws] for _ in 1:nchains];\nidata2 = from_namedtuple(data2)\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#General-functions","page":"InferenceData","title":"General functions","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"cat\nmerge","category":"page"},{"location":"api/inference_data/#Base.cat","page":"InferenceData","title":"Base.cat","text":"cat(data::InferenceData...; [groups=keys(data[1]),] dims) -> InferenceData\n\nConcatenate InferenceData objects along the specified dimension dims.\n\nOnly the groups in groups are concatenated. Remaining groups are merged into the new InferenceData object.\n\nExamples\n\nHere is how we can concatenate all groups of two InferenceData objects along the existing chain dimension:\n\njulia> coords = (; a_dim=[\"x\", \"y\", \"z\"]);\n\njulia> dims = dims=(; a=[:a_dim]);\n\njulia> data = Dict(:a => randn(100, 4, 3), :b => randn(100, 4));\n\njulia> idata = from_dict(data; coords=coords, dims=dims)\nInferenceData with groups:\n > posterior\n\njulia> idata_cat1 = cat(idata, idata; dims=:chain)\nInferenceData with groups:\n > posterior\n\njulia> idata_cat1.posterior\n╭─────────────────╮\n│ 100×8×3 Dataset │\n├─────────────────┴──────────────────────────────────── dims ┐\n ↓ draw ,\n → chain,\n ↗ a_dim Categorical{String} [\"x\", \"y\", \"z\"] ForwardOrdered\n├──────────────────────────────────────────────────── layers ┤\n :a eltype: Float64 dims: draw, chain, a_dim size: 100×8×3\n :b eltype: Float64 dims: draw, chain size: 100×8\n├────────────────────────────────────────────────── metadata ┤\n Dict{String, Any} with 1 entry:\n \"created_at\" => \"2024-03-11T14:10:48.434\"\n\nAlternatively, we can concatenate along a new run dimension, which will be created.\n\njulia> idata_cat2 = cat(idata, idata; dims=:run)\nInferenceData with groups:\n > posterior\n\njulia> idata_cat2.posterior\n╭───────────────────╮\n│ 100×4×3×2 Dataset │\n├───────────────────┴─────────────────────────────────── dims ┐\n ↓ draw ,\n → chain,\n ↗ a_dim Categorical{String} [\"x\", \"y\", \"z\"] ForwardOrdered,\n ⬔ run\n├─────────────────────────────────────────────────────────────┴ layers ┐\n :a eltype: Float64 dims: draw, chain, a_dim, run size: 100×4×3×2\n :b eltype: Float64 dims: draw, chain, run size: 100×4×2\n├──────────────────────────────────────────────────────────── metadata ┤\n Dict{String, Any} with 1 entry:\n \"created_at\" => \"2024-03-11T14:10:48.434\"\n\nWe can also concatenate only a subset of groups and merge the rest, which is useful when some groups are present only in some of the InferenceData objects or will be identical in all of them:\n\njulia> observed_data = Dict(:y => randn(10));\n\njulia> idata2 = from_dict(data; observed_data=observed_data, coords=coords, dims=dims)\nInferenceData with groups:\n > posterior\n > observed_data\n\njulia> idata_cat3 = cat(idata, idata2; groups=(:posterior,), dims=:run)\nInferenceData with groups:\n > posterior\n > observed_data\n\njulia> idata_cat3.posterior\n╭───────────────────╮\n│ 100×4×3×2 Dataset │\n├───────────────────┴─────────────────────────────────── dims ┐\n ↓ draw ,\n → chain,\n ↗ a_dim Categorical{String} [\"x\", \"y\", 
\"z\"] ForwardOrdered,\n ⬔ run\n├─────────────────────────────────────────────────────────────┴ layers ┐\n :a eltype: Float64 dims: draw, chain, a_dim, run size: 100×4×3×2\n :b eltype: Float64 dims: draw, chain, run size: 100×4×2\n├──────────────────────────────────────────────────────────── metadata ┤\n Dict{String, Any} with 1 entry:\n \"created_at\" => \"2024-03-11T14:10:48.434\"\n\njulia> idata_cat3.observed_data\n╭────────────────────╮\n│ 10-element Dataset │\n├────────────── dims ┤\n ↓ y_dim_1\n├────────────────────┴─────────────── layers ┐\n :y eltype: Float64 dims: y_dim_1 size: 10\n├────────────────────────────────────────────┴ metadata ┐\n Dict{String, Any} with 1 entry:\n \"created_at\" => \"2024-03-11T14:10:53.539\"\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#Base.merge","page":"InferenceData","title":"Base.merge","text":"merge(data::InferenceData...) -> InferenceData\n\nMerge InferenceData objects.\n\nThe result contains all groups in data and others. If a group appears more than once, the one that occurs last is kept.\n\nSee also: cat\n\nExamples\n\nHere we merge an InferenceData containing only a posterior group with one containing only a prior group to create a new one containing both groups.\n\njulia> idata1 = from_dict(Dict(:a => randn(100, 4, 3), :b => randn(100, 4)))\nInferenceData with groups:\n > posterior\n\njulia> idata2 = from_dict(; prior=Dict(:a => randn(100, 1, 3), :c => randn(100, 1)))\nInferenceData with groups:\n > prior\n\njulia> idata_merged = merge(idata1, idata2)\nInferenceData with groups:\n > posterior\n > prior\n\n\n\n\n\n","category":"function"},{"location":"quickstart/","page":"Quickstart","title":"Quickstart","text":"\n\n\n

ArviZ Quickstart

Note

This tutorial is adapted from ArviZ's quickstart.

\n\n\n

Setup

Here we add the necessary packages for this notebook and load a few we will use throughout.

\n\n\n\n\n
using ArviZ, ArviZPythonPlots, Distributions, LinearAlgebra, Random, StanSample, Turing
\n\n\n
# ArviZPythonPlots ships with style sheets!\nuse_style(\"arviz-darkgrid\")
\n\n\n\n

Get started with plotting

To plot with ArviZ, we need to load the ArviZPythonPlots package. ArviZ is designed to be used with libraries like Stan, Turing.jl, and Soss.jl but works fine with raw arrays.

\n\n
rng1 = Random.MersenneTwister(37772);
\n\n\n
begin\n    plot_posterior(randn(rng1, 100_000))\n    gcf()\nend
\n\n\n\n

When plotting a dictionary of arrays, ArviZ interprets each key as the name of a different random variable. Each row of an array is treated as an independent series of draws from the variable, called a chain. Below, we have 10 chains of 50 draws each for four different distributions.

\n\n
let\n    s = (50, 10)\n    plot_forest((\n        normal=randn(rng1, s),\n        gumbel=rand(rng1, Gumbel(), s),\n        student_t=rand(rng1, TDist(6), s),\n        exponential=rand(rng1, Exponential(), s),\n    ),)\n    gcf()\nend
\n\n\n\n

Plotting with MCMCChains.jl's Chains objects produced by Turing.jl

ArviZ is designed to work well with high-dimensional, labelled data. Consider the eight schools model, which roughly tries to measure the effectiveness of SAT classes at eight different schools. To show off ArviZ's labelling, I give the schools the names of a different set of eight schools.

This model is small enough to write down, is hierarchical, and uses labelling. Additionally, a centered parameterization causes divergences (which are interesting for illustration).

First we create our data and set some sampling parameters.

\n\n
begin\n    J = 8\n    y = [28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0]\n    σ = [15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0]\n    schools = [\n        \"Choate\",\n        \"Deerfield\",\n        \"Phillips Andover\",\n        \"Phillips Exeter\",\n        \"Hotchkiss\",\n        \"Lawrenceville\",\n        \"St. Paul's\",\n        \"Mt. Hermon\",\n    ]\n    ndraws = 1_000\n    ndraws_warmup = 1_000\n    nchains = 4\nend;
\n\n\n\n

Now we write and run the model using Turing:

\n\n
Turing.@model function model_turing(y, σ, J=length(y))\n    μ ~ Normal(0, 5)\n    τ ~ truncated(Cauchy(0, 5), 0, Inf)\n    θ ~ filldist(Normal(μ, τ), J)\n    for i in 1:J\n        y[i] ~ Normal(θ[i], σ[i])\n    end\nend
\n
model_turing (generic function with 4 methods)
\n\n
rng2 = Random.MersenneTwister(16653);
\n\n\n
begin\n    param_mod_turing = model_turing(y, σ)\n    sampler = NUTS(ndraws_warmup, 0.8)\n\n    turing_chns = Turing.sample(\n        rng2, model_turing(y, σ), sampler, MCMCThreads(), ndraws, nchains\n    )\nend;
\n\n\n\n

Most ArviZ functions work fine with Chains objects from Turing:

\n\n
begin\n    plot_autocorr(turing_chns; var_names=(:μ, :τ))\n    gcf()\nend
\n\n\n\n

Convert to InferenceData

For much more powerful querying, analysis and plotting, we can use built-in ArviZ utilities to convert Chains objects to multidimensional data structures with named dimensions and indices. Note that for such dimensions, the information is not contained in Chains, so we need to provide it.

ArviZ is built to work with InferenceData, and the more groups it has access to, the more powerful analyses it can perform.

\n\n
idata_turing_post = from_mcmcchains(\n    turing_chns;\n    coords=(; school=schools),\n    dims=NamedTuple(k => (:school,) for k in (:y, :σ, :θ)),\n    library=\"Turing\",\n)
\n
InferenceData
posterior
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :μ eltype: Float64 dims: draw, chain size: 1000×4\n  :τ eltype: Float64 dims: draw, chain size: 1000×4\n  :θ eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 2 entries:\n  \"created_at\" => \"2024-10-06T01:21:12.19\"\n  \"inference_library\" => \"Turing\"\n
sample_stats
╭────────────────╮\n│ 1000×4 Dataset │\n├────────────────┴ dims ┐\n  ↓ draw, → chain\n├─────────────────┴───────────────────────────────────────── layers ┐\n  :energy           eltype: Float64 dims: draw, chain size: 1000×4\n  :n_steps          eltype: Int64 dims: draw, chain size: 1000×4\n  :diverging        eltype: Bool dims: draw, chain size: 1000×4\n  :max_energy_error eltype: Float64 dims: draw, chain size: 1000×4\n  :energy_error     eltype: Float64 dims: draw, chain size: 1000×4\n  :is_accept        eltype: Bool dims: draw, chain size: 1000×4\n  :log_density      eltype: Float64 dims: draw, chain size: 1000×4\n  :tree_depth       eltype: Int64 dims: draw, chain size: 1000×4\n  :step_size        eltype: Float64 dims: draw, chain size: 1000×4\n  :acceptance_rate  eltype: Float64 dims: draw, chain size: 1000×4\n  :lp               eltype: Float64 dims: draw, chain size: 1000×4\n  :step_size_nom    eltype: Float64 dims: draw, chain size: 1000×4\n├───────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 2 entries:\n  \"created_at\" => \"2024-10-06T01:21:12.134\"\n  \"inference_library\" => \"Turing\"\n
\n\n\n

Each group is an ArviZ.Dataset, a DimensionalData.AbstractDimStack that can be used identically to a DimensionalData.DimStack. We can view a summary of the dataset.

\n\n
idata_turing_post.posterior
\n
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────────────── layers ┤\n  :μ eltype: Float64 dims: draw, chain size: 1000×4\n  :τ eltype: Float64 dims: draw, chain size: 1000×4\n  :θ eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 2 entries:\n  \"created_at\"        => \"2024-10-06T01:21:12.19\"\n  \"inference_library\" => \"Turing\"\n
\n\n\n

Here is a plot of the trace. Note the intelligent labels.

\n\n
begin\n    plot_trace(idata_turing_post)\n    gcf()\nend
\n\n\n\n

We can also generate summary stats...

\n\n
summarystats(idata_turing_post)
\n
SummaryStats
meanstdhdi_3%hdi_97%mcse_meanmcse_stdess_tailess_bulkrhat
μ4.33.3-1.8110.50.110.06211928451.01
τ4.43.30.67310.40.200.121091151.05
θ[Choate]6.66.1-4.0118.00.210.1916277501.01
θ[Deerfield]5.05.0-4.8114.20.140.14195212771.01
θ[Phillips Andover]3.75.7-7.0714.60.140.16197914291.01
θ[Phillips Exeter]4.85.1-4.6014.40.140.14206411781.00
θ[Hotchkiss]3.34.9-6.0812.60.150.11180410981.01
θ[Lawrenceville]3.85.2-6.1213.30.130.14196513311.00
θ[St. Paul's]6.65.4-2.6817.40.180.1418428531.01
θ[Mt. Hermon]4.95.6-5.7114.80.140.19179413931.00
\n\n\n

...and examine the energy distribution of the Hamiltonian sampler.

\n\n
begin\n    plot_energy(idata_turing_post)\n    gcf()\nend
\n\n\n\n

Additional information in Turing.jl

With a few more steps, we can use Turing to compute additional useful groups to add to the InferenceData.

To sample from the prior, one simply calls sample but with the Prior sampler:

\n\n
prior = Turing.sample(rng2, param_mod_turing, Prior(), ndraws);
\n\n\n\n

To draw from the prior and posterior predictive distributions, we can instantiate a \"predictive model\", i.e. a Turing model but with the observations set to missing, and then call predict on the predictive model and the previously drawn samples:

\n\n
begin\n    # Instantiate the predictive model\n    param_mod_predict = model_turing(similar(y, Missing), σ)\n    # and then sample!\n    prior_predictive = Turing.predict(rng2, param_mod_predict, prior)\n    posterior_predictive = Turing.predict(rng2, param_mod_predict, turing_chns)\nend;
\n\n\n\n

And to extract the pointwise log-likelihoods, which is useful if you want to compute metrics such as loo,

\n\n
log_likelihood = let\n    log_likelihood = Turing.pointwise_loglikelihoods(\n        param_mod_turing, MCMCChains.get_sections(turing_chns, :parameters)\n    )\n    # Ensure the ordering of the loglikelihoods matches the ordering of `posterior_predictive`\n    ynames = string.(keys(posterior_predictive))\n    log_likelihood_y = getindex.(Ref(log_likelihood), ynames)\n    (; y=cat(log_likelihood_y...; dims=3))\nend;
\n\n\n\n

This can then be included in the from_mcmcchains call from above:

\n\n
idata_turing = from_mcmcchains(\n    turing_chns;\n    posterior_predictive,\n    log_likelihood,\n    prior,\n    prior_predictive,\n    observed_data=(; y),\n    coords=(; school=schools),\n    dims=NamedTuple(k => (:school,) for k in (:y, :σ, :θ)),\n    library=Turing,\n)
\n
InferenceData
posterior
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :μ eltype: Float64 dims: draw, chain size: 1000×4\n  :τ eltype: Float64 dims: draw, chain size: 1000×4\n  :θ eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-06T01:21:40.45\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
posterior_predictive
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-06T01:21:40.177\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
log_likelihood
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-06T01:21:40.321\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
sample_stats
╭────────────────╮\n│ 1000×4 Dataset │\n├────────────────┴ dims ┐\n  ↓ draw, → chain\n├─────────────────┴───────────────────────────────────────── layers ┐\n  :energy           eltype: Float64 dims: draw, chain size: 1000×4\n  :n_steps          eltype: Int64 dims: draw, chain size: 1000×4\n  :diverging        eltype: Bool dims: draw, chain size: 1000×4\n  :max_energy_error eltype: Float64 dims: draw, chain size: 1000×4\n  :energy_error     eltype: Float64 dims: draw, chain size: 1000×4\n  :is_accept        eltype: Bool dims: draw, chain size: 1000×4\n  :log_density      eltype: Float64 dims: draw, chain size: 1000×4\n  :tree_depth       eltype: Int64 dims: draw, chain size: 1000×4\n  :step_size        eltype: Float64 dims: draw, chain size: 1000×4\n  :acceptance_rate  eltype: Float64 dims: draw, chain size: 1000×4\n  :lp               eltype: Float64 dims: draw, chain size: 1000×4\n  :step_size_nom    eltype: Float64 dims: draw, chain size: 1000×4\n├───────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-06T01:21:40.45\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
prior
╭──────────────────╮\n│ 1000×1×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :μ eltype: Float64 dims: draw, chain size: 1000×1\n  :τ eltype: Float64 dims: draw, chain size: 1000×1\n  :θ eltype: Float64 dims: draw, chain, school size: 1000×1×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-06T01:21:40.914\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
prior_predictive
╭──────────────────╮\n│ 1000×1×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y eltype: Float64 dims: draw, chain, school size: 1000×1×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-06T01:21:40.786\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
sample_stats_prior
╭────────────────╮\n│ 1000×1 Dataset │\n├────────────────┴ dims ┐\n  ↓ draw, → chain\n├─────────────────┴─────────────────────────── layers ┐\n  :lp eltype: Float64 dims: draw, chain size: 1000×1\n├─────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-06T01:21:40.874\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
observed_data
╭───────────────────╮\n│ 8-element Dataset │\n├───────────────────┴──────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y eltype: Float64 dims: school size: 8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-06T01:21:41.122\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
\n\n\n

Then we can, for example, compute the expected leave-one-out (LOO) predictive density, which is an estimate of the out-of-sample predictive fit of the model:

\n\n
loo(idata_turing) # higher ELPD is better
\n
PSISLOOResult with estimates\n elpd  elpd_mcse    p  p_mcse\n  -31        1.4  1.0    0.33\n\nand PSISResult with 1000 draws, 4 chains, and 8 parameters\nPareto shape (k) diagnostic values:\n                    Count      Min. ESS\n (-Inf, 0.5]  good  5 (62.5%)  404\n  (0.5, 0.7]  okay  3 (37.5%)  788
\n\n\n

If the model is well-calibrated, i.e. it replicates the true generative process well, the CDF of the pointwise LOO values should be similar to that of a uniform distribution. This can be inspected visually:

\n\n
begin\n    plot_loo_pit(idata_turing; y=:y, ecdf=true)\n    gcf()\nend
\n\n\n\n

Plotting with Stan.jl outputs

StanSample.jl comes with built-in support for producing InferenceData outputs.

Here is the same centered eight schools model in Stan:

\n\n
begin\n    schools_code = \"\"\"\n    data {\n      int<lower=0> J;\n      array[J] real y;\n      array[J] real<lower=0> sigma;\n    }\n\n    parameters {\n      real mu;\n      real<lower=0> tau;\n      array[J] real theta;\n    }\n\n    model {\n      mu ~ normal(0, 5);\n      tau ~ cauchy(0, 5);\n      theta ~ normal(mu, tau);\n      y ~ normal(theta, sigma);\n    }\n\n    generated quantities {\n        vector[J] log_lik;\n        vector[J] y_hat;\n        for (j in 1:J) {\n            log_lik[j] = normal_lpdf(y[j] | theta[j], sigma[j]);\n            y_hat[j] = normal_rng(theta[j], sigma[j]);\n        }\n    }\n    \"\"\"\n\n    schools_data = Dict(\"J\" => J, \"y\" => y, \"sigma\" => σ)\n    idata_stan = mktempdir() do path\n        stan_model = SampleModel(\"schools\", schools_code, path)\n        _ = stan_sample(\n            stan_model;\n            data=schools_data,\n            num_chains=nchains,\n            num_warmups=ndraws_warmup,\n            num_samples=ndraws,\n            seed=28983,\n            summary=false,\n        )\n        return StanSample.inferencedata(\n            stan_model;\n            posterior_predictive_var=:y_hat,\n            observed_data=(; y),\n            log_likelihood_var=:log_lik,\n            coords=(; school=schools),\n            dims=NamedTuple(\n                k => (:school,) for k in (:y, :sigma, :theta, :log_lik, :y_hat)\n            ),\n        )\n    end\nend
\n
InferenceData
posterior
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :mu    eltype: Float64 dims: draw, chain size: 1000×4\n  :tau   eltype: Float64 dims: draw, chain size: 1000×4\n  :theta eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 1 entry:\n  \"created_at\" => \"2024-10-06T01:22:21.392\"\n
posterior_predictive
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y_hat eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 1 entry:\n  \"created_at\" => \"2024-10-06T01:22:20.986\"\n
log_likelihood
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :log_lik eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 1 entry:\n  \"created_at\" => \"2024-10-06T01:22:21.309\"\n
sample_stats
╭────────────────╮\n│ 1000×4 Dataset │\n├────────────────┴ dims ┐\n  ↓ draw, → chain\n├─────────────────┴──────────────────────────────────────── layers ┐\n  :tree_depth      eltype: Int64 dims: draw, chain size: 1000×4\n  :energy          eltype: Float64 dims: draw, chain size: 1000×4\n  :diverging       eltype: Bool dims: draw, chain size: 1000×4\n  :acceptance_rate eltype: Float64 dims: draw, chain size: 1000×4\n  :n_steps         eltype: Int64 dims: draw, chain size: 1000×4\n  :lp              eltype: Float64 dims: draw, chain size: 1000×4\n  :step_size       eltype: Float64 dims: draw, chain size: 1000×4\n├──────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 1 entry:\n  \"created_at\" => \"2024-10-06T01:22:21.084\"\n
observed_data
╭───────────────────╮\n│ 8-element Dataset │\n├───────────────────┴──────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y eltype: Float64 dims: school size: 8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 1 entry:\n  \"created_at\" => \"2024-10-06T01:22:21.439\"\n
\n\n
begin\n    plot_density(idata_stan; var_names=(:mu, :tau))\n    gcf()\nend
\n\n\n\n

Here is a plot showing where the Hamiltonian sampler had divergences:

\n\n
begin\n    plot_pair(\n        idata_stan;\n        coords=Dict(:school => [\"Choate\", \"Deerfield\", \"Phillips Andover\"]),\n        divergences=true,\n    )\n    gcf()\nend
\n\n\n\n\n\n
using PlutoUI
\n\n\n
using Pkg, InteractiveUtils
\n\n\n
with_terminal(Pkg.status; color=false)
\n
Status `~/work/ArviZ.jl/ArviZ.jl/docs/Project.toml`\n  [cbdf2221] AlgebraOfGraphics v0.8.11\n  [131c737c] ArviZ v0.12.1 `~/work/ArviZ.jl/ArviZ.jl`\n  [2f96bb34] ArviZExampleData v0.1.11\n  [4a6e88f0] ArviZPythonPlots v0.1.7\n  [13f3f980] CairoMakie v0.12.12\n  [a93c6f00] DataFrames v1.7.0\n⌅ [0703355e] DimensionalData v0.27.9\n  [31c24e10] Distributions v0.25.112\n  [e30172f5] Documenter v1.7.0\n  [f6006082] EvoTrees v0.16.7\n  [b5cf5a8d] InferenceObjects v0.4.3\n  [be115224] MCMCDiagnosticTools v0.3.10\n  [a7f614a8] MLJBase v1.7.0\n  [614be32b] MLJIteration v0.6.3\n  [ce719bf2] PSIS v0.9.6\n  [359b1769] PlutoStaticHTML v6.0.28\n  [7f904dfe] PlutoUI v0.7.60\n  [7f36be82] PosteriorStats v0.2.5\n  [c1514b29] StanSample v7.10.1\n  [a19d573c] StatisticalMeasures v0.1.7\n  [2913bbd2] StatsBase v0.34.3\n  [fce5fe82] Turing v0.34.1\n  [f43a241f] Downloads v1.6.0\n  [37e2e46d] LinearAlgebra\n  [10745b16] Statistics v1.10.0\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`\n
\n\n
with_terminal(versioninfo)
\n
Julia Version 1.10.5\nCommit 6f3fdf7b362 (2024-08-27 14:19 UTC)\nBuild Info:\n  Official https://julialang.org/ release\nPlatform Info:\n  OS: Linux (x86_64-linux-gnu)\n  CPU: 4 × AMD EPYC 7763 64-Core Processor\n  WORD_SIZE: 64\n  LIBM: libopenlibm\n  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)\nThreads: 2 default, 0 interactive, 1 GC (on 4 virtual cores)\nEnvironment:\n  JULIA_PKG_SERVER_REGISTRY_PREFERENCE = eager\n  JULIA_NUM_THREADS = 2\n  JULIA_REVISE_WORKER_ONLY = 1\n  JULIA_PYTHONCALL_EXE = /home/runner/work/ArviZ.jl/ArviZ.jl/docs/.CondaPkg/env/bin/python\n
\n\n","category":"page"},{"location":"quickstart/","page":"Quickstart","title":"Quickstart","text":"EditURL = \"https://github.com/arviz-devs/ArviZ.jl/blob/main/docs/src/quickstart.jl\"","category":"page"},{"location":"api/data/#data-api","page":"Data","title":"Data","text":"","category":"section"},{"location":"api/data/","page":"Data","title":"Data","text":"Pages = [\"data.md\"]","category":"page"},{"location":"api/data/#Inference-library-converters","page":"Data","title":"Inference library converters","text":"","category":"section"},{"location":"api/data/","page":"Data","title":"Data","text":"from_mcmcchains\nfrom_samplechains","category":"page"},{"location":"api/data/#ArviZ.from_mcmcchains","page":"Data","title":"ArviZ.from_mcmcchains","text":"from_mcmcchains(posterior::MCMCChains.Chains; kwargs...) -> InferenceData\nfrom_mcmcchains(; kwargs...) -> InferenceData\nfrom_mcmcchains(\n posterior::MCMCChains.Chains,\n posterior_predictive,\n predictions,\n log_likelihood;\n kwargs...\n) -> InferenceData\n\nConvert data in an MCMCChains.Chains format into an InferenceData.\n\nAny keyword argument below without an an explicitly annotated type above is allowed, so long as it can be passed to convert_to_inference_data.\n\nArguments\n\nposterior::MCMCChains.Chains: Draws from the posterior\n\nKeywords\n\nposterior_predictive::Any=nothing: Draws from the posterior predictive distribution or name(s) of predictive variables in posterior\npredictions: Out-of-sample predictions for the posterior.\nprior: Draws from the prior\nprior_predictive: Draws from the prior predictive distribution or name(s) of predictive variables in prior\nobserved_data: Observed data on which the posterior is conditional. It should only contain data which is modeled as a random variable. Keys are parameter names and values.\nconstant_data: Model constants, data included in the model that are not modeled as random variables. Keys are parameter names.\npredictions_constant_data: Constants relevant to the model predictions (i.e. new x values in a linear regression).\nlog_likelihood: Pointwise log-likelihood for the data. It is recommended to use this argument as a named tuple whose keys are observed variable names and whose values are log likelihood arrays. Alternatively, provide the name of variable in posterior containing log likelihoods.\nlibrary=MCMCChains: Name of library that generated the chains\ncoords: Map from named dimension to named indices\ndims: Map from variable name to names of its dimensions\neltypes: Map from variable names to eltypes. 
This is primarily used to assign discrete eltypes to discrete variables that were stored in Chains as floats.\n\nReturns\n\nInferenceData: The data with groups corresponding to the provided data\n\n\n\n\n\n","category":"function"},{"location":"api/data/#ArviZ.from_samplechains","page":"Data","title":"ArviZ.from_samplechains","text":"from_samplechains(\n posterior=nothing;\n prior=nothing,\n library=SampleChains,\n kwargs...,\n) -> InferenceData\n\nConvert SampleChains samples to an InferenceData.\n\nEither posterior or prior may be a SampleChains.AbstractChain or SampleChains.MultiChain object.\n\nFor descriptions of remaining kwargs, see from_namedtuple.\n\n\n\n\n\n","category":"function"},{"location":"api/data/#IO-/-Conversion","page":"Data","title":"IO / Conversion","text":"","category":"section"},{"location":"api/data/","page":"Data","title":"Data","text":"from_netcdf\nto_netcdf","category":"page"},{"location":"api/data/#InferenceObjects.from_netcdf","page":"Data","title":"InferenceObjects.from_netcdf","text":"from_netcdf(path::AbstractString; kwargs...) -> InferenceData\n\nLoad an InferenceData from an unopened NetCDF file.\n\nRemaining kwargs are passed to NCDatasets.NCDataset. This method loads data eagerly. To instead load data lazily, pass an opened NCDataset to from_netcdf.\n\nnote: Note\nThis method requires that NCDatasets is loaded before it can be used.\n\nExamples\n\njulia> using InferenceObjects, NCDatasets\n\njulia> idata = from_netcdf(\"centered_eight.nc\")\nInferenceData with groups:\n > posterior\n > posterior_predictive\n > sample_stats\n > prior\n > observed_data\n\nfrom_netcdf(ds::NCDatasets.NCDataset; load_mode) -> InferenceData\n\nLoad an InferenceData from an opened NetCDF file.\n\nload_mode defaults to :lazy, which avoids reading variables into memory. Operations on these arrays will be slow. load_mode can also be :eager, which copies all variables into memory. It is then safe to close ds. If load_mode is :lazy and ds is closed after constructing InferenceData, using the variable arrays will have undefined behavior.\n\nExamples\n\nHere is how we might lazily load an InferenceData from a web-hosted NetCDF file.\n\njulia> using HTTP, InferenceObjects, NCDatasets\n\njulia> resp = HTTP.get(\"https://github.com/arviz-devs/arviz_example_data/blob/main/data/centered_eight.nc?raw=true\");\n\njulia> ds = NCDataset(\"centered_eight\", \"r\"; memory = resp.body);\n\njulia> idata = from_netcdf(ds)\nInferenceData with groups:\n > posterior\n > posterior_predictive\n > sample_stats\n > prior\n > observed_data\n\njulia> idata_copy = copy(idata); # disconnect from the loaded dataset\n\njulia> close(ds);\n\n\n\n\n\n","category":"function"},{"location":"api/data/#InferenceObjects.to_netcdf","page":"Data","title":"InferenceObjects.to_netcdf","text":"to_netcdf(data, dest::AbstractString; group::Symbol=:posterior, kwargs...)\nto_netcdf(data, dest::NCDatasets.NCDataset; group::Symbol=:posterior)\n\nWrite data to a NetCDF file.\n\ndata is any type that can be converted to an InferenceData using convert_to_inference_data. If not an InferenceData, then group specifies which group the data represents.\n\ndest specifies either the path to the NetCDF file or an opened NetCDF file. 
If dest is a path, remaining kwargs are passed to NCDatasets.NCDataset.\n\nnote: Note\nThis method requires that NCDatasets is loaded before it can be used.\n\nExamples\n\njulia> using InferenceObjects, NCDatasets\n\njulia> idata = from_namedtuple((; x = randn(4, 100, 3), z = randn(4, 100)))\nInferenceData with groups:\n > posterior\n\njulia> to_netcdf(idata, \"data.nc\")\n\"data.nc\"\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/#diagnostics-api","page":"Diagnostics","title":"Diagnostics","text":"","category":"section"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"Pages = [\"diagnostics.md\"]","category":"page"},{"location":"api/diagnostics/#bfmi","page":"Diagnostics","title":"Bayesian fraction of missing information","text":"","category":"section"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"MCMCDiagnosticTools.bfmi","category":"page"},{"location":"api/diagnostics/#MCMCDiagnosticTools.bfmi","page":"Diagnostics","title":"MCMCDiagnosticTools.bfmi","text":"bfmi(energy::AbstractVector{<:Real}) -> Real\nbfmi(energy::AbstractMatrix{<:Real}; dims::Int=1) -> AbstractVector{<:Real}\n\nCalculate the estimated Bayesian fraction of missing information (BFMI).\n\nWhen sampling with Hamiltonian Monte Carlo (HMC), BFMI quantifies how well momentum resampling matches the marginal energy distribution.\n\nThe current advice is that values smaller than 0.3 indicate poor sampling. However, this threshold is provisional and may change. A BFMI value below the threshold often indicates poor adaptation of sampling parameters or that the target distribution has heavy tails that were not well explored by the Markov chain.\n\nFor more information, see Section 6.1 of [Betancourt2018], or see [Betancourt2016] for a complete account.\n\nenergy is either a vector of Hamiltonian energies of draws or a matrix of energies of draws for multiple chains. dims indicates the dimension in energy that contains the draws. The default dims=1 assumes energy has the shape (draws,) or (draws, chains). If a different shape is provided, dims must be set accordingly.\n\nIf energy is a vector, a single BFMI value is returned. Otherwise, a vector of BFMI values for each chain is returned.\n\n[Betancourt2018]: Betancourt M. (2018). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v2 [stat.ME]\n\n[Betancourt2016]: Betancourt M. (2016). Diagnosing Suboptimal Cotangent Disintegrations in Hamiltonian Monte Carlo. arXiv:1604.00695v1 [stat.ME]\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/#ess_rhat","page":"Diagnostics","title":"Effective sample size and widehatR diagnostic","text":"","category":"section"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"MCMCDiagnosticTools.ess\nMCMCDiagnosticTools.rhat\nMCMCDiagnosticTools.ess_rhat","category":"page"},{"location":"api/diagnostics/#MCMCDiagnosticTools.ess","page":"Diagnostics","title":"MCMCDiagnosticTools.ess","text":"ess(data::InferenceData; kwargs...) -> Dataset\ness(data::Dataset; kwargs...) 
-> Dataset\n\nCalculate the effective sample size (ESS) for each parameter in the data.\n\n\n\n\n\ness(\n samples::AbstractArray{<:Union{Missing,Real}};\n kind=:bulk,\n relative::Bool=false,\n autocov_method=AutocovMethod(),\n split_chains::Int=2,\n maxlag::Int=250,\n kwargs...\n)\n\nEstimate the effective sample size (ESS) of the samples of shape (draws, [chains[, parameters...]]) with the autocov_method.\n\nOptionally, the kind of ESS estimate to be computed can be specified (see below). Some kinds accept additional kwargs.\n\nIf relative is true, the relative ESS is returned, i.e. ess / (draws * chains).\n\nsplit_chains indicates the number of chains each chain is split into. When split_chains > 1, then the diagnostics check for within-chain convergence. When d = mod(draws, split_chains) > 0, i.e. the chains cannot be evenly split, then 1 draw is discarded after each of the first d splits within each chain. There must be at least 3 draws in each chain after splitting.\n\nmaxlag indicates the maximum lag for which autocovariance is computed and must be greater than 0.\n\nFor a given estimand, it is recommended that the ESS is at least 100 * chains and that widehatR < 1.01.[VehtariGelman2021]\n\nSee also: AutocovMethod, FFTAutocovMethod, BDAAutocovMethod, rhat, ess_rhat, mcse\n\nKinds of ESS estimates\n\nIf kind is a Symbol, it may take one of the following values:\n\n:bulk: basic ESS computed on rank-normalized draws. This kind diagnoses poor convergence in the bulk of the distribution due to trends or different locations of the chains.\n:tail: minimum of the quantile-ESS for the symmetric quantiles where tail_prob=0.1 is the probability in the tails. This kind diagnoses poor convergence in the tails of the distribution. If this kind is chosen, kwargs may contain a tail_prob keyword.\n:basic: basic ESS, equivalent to specifying kind=Statistics.mean.\n\nnote: Note\nWhile Bulk-ESS is conceptually related to basic ESS, it is well-defined even if the chains do not have finite variance.[VehtariGelman2021] For each parameter, rank-normalization proceeds by first ranking the inputs using \"tied ranking\" and then transforming the ranks to normal quantiles so that the result is standard normally distributed. This transform is monotonic.\n\nOtherwise, kind specifies one of the following estimators, whose ESS is to be estimated:\n\nStatistics.mean\nStatistics.median\nStatistics.std\nStatsBase.mad\nBase.Fix2(Statistics.quantile, p::Real)\n\n[VehtariGelman2021]: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved widehat R for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/#MCMCDiagnosticTools.rhat","page":"Diagnostics","title":"MCMCDiagnosticTools.rhat","text":"rhat(data::InferenceData; kwargs...) -> Dataset\nrhat(data::Dataset; kwargs...) -> Dataset\n\nCalculate the widehatR diagnostic for each parameter in the data.\n\n\n\n\n\nrhat(samples::AbstractArray{Union{Real,Missing}}; kind::Symbol=:rank, split_chains=2)\n\nCompute the widehatR diagnostics for each parameter in samples of shape (draws, [chains[, parameters...]]).[VehtariGelman2021]\n\nkind indicates the kind of widehatR to compute (see below).\n\nsplit_chains indicates the number of chains each chain is split into. When split_chains > 1, then the diagnostics check for within-chain convergence. When d = mod(draws, split_chains) > 0, i.e. 
the chains cannot be evenly split, then 1 draw is discarded after each of the first d splits within each chain.\n\nSee also ess, ess_rhat, rstar\n\nKinds of widehatR\n\nThe following kinds are supported:\n\n:rank: maximum of widehatR with kind=:bulk and kind=:tail.\n:bulk: basic widehatR computed on rank-normalized draws. This kind diagnoses poor convergence in the bulk of the distribution due to trends or different locations of the chains.\n:tail: widehatR computed on draws folded around the median and then rank-normalized. This kind diagnoses poor convergence in the tails of the distribution due to different scales of the chains.\n:basic: Classic widehatR.\n\n[VehtariGelman2021]: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved widehat R for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/#MCMCDiagnosticTools.ess_rhat","page":"Diagnostics","title":"MCMCDiagnosticTools.ess_rhat","text":"ess_rhat(data::InferenceData; kwargs...) -> Dataset\ness_rhat(data::Dataset; kwargs...) -> Dataset\n\nCalculate the effective sample size (ESS) and widehatR diagnostic for each parameter in the data.\n\n\n\n\n\ness_rhat(\n samples::AbstractArray{<:Union{Missing,Real}};\n kind::Symbol=:rank,\n kwargs...,\n) -> NamedTuple{(:ess, :rhat)}\n\nEstimate the effective sample size and widehatR of the samples of shape (draws, [chains[, parameters...]]).\n\nWhen both ESS and widehatR are needed, this method is often more efficient than calling ess and rhat separately.\n\nSee rhat for a description of supported kinds and ess for a description of kwargs.\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"The following autocovariance methods are supported:","category":"page"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"MCMCDiagnosticTools.AutocovMethod\nMCMCDiagnosticTools.FFTAutocovMethod\nMCMCDiagnosticTools.BDAAutocovMethod","category":"page"},{"location":"api/diagnostics/#MCMCDiagnosticTools.AutocovMethod","page":"Diagnostics","title":"MCMCDiagnosticTools.AutocovMethod","text":"AutocovMethod <: AbstractAutocovMethod\n\nThe AutocovMethod uses a standard algorithm for estimating the mean autocovariance of MCMC chains.\n\nIt is based on the discussion by [VehtariGelman2021] and uses the biased estimator of the autocovariance, as discussed by [Geyer1992].\n\n[VehtariGelman2021]: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved widehat R for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008\n\n[Geyer1992]: Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. 
Statistical Science, 473-483.\n\n\n\n\n\n","category":"type"},{"location":"api/diagnostics/#MCMCDiagnosticTools.FFTAutocovMethod","page":"Diagnostics","title":"MCMCDiagnosticTools.FFTAutocovMethod","text":"FFTAutocovMethod <: AbstractAutocovMethod\n\nThe FFTAutocovMethod uses a standard algorithm for estimating the mean autocovariance of MCMC chains.\n\nThe algorithm is the same as that of AutocovMethod, but this method uses fast Fourier transforms (FFTs) for estimating the autocorrelation.\n\ninfo: Info\nTo be able to use this method, you have to load a package that implements the AbstractFFTs.jl interface such as FFTW.jl or FastTransforms.jl.\n\n\n\n\n\n","category":"type"},{"location":"api/diagnostics/#MCMCDiagnosticTools.BDAAutocovMethod","page":"Diagnostics","title":"MCMCDiagnosticTools.BDAAutocovMethod","text":"BDAAutocovMethod <: AbstractAutocovMethod\n\nThe BDAAutocovMethod uses a standard algorithm for estimating the mean autocovariance of MCMC chains.\n\nIt is based on the discussion by [VehtariGelman2021] and uses the variogram estimator of the autocorrelation function discussed by [BDA3].\n\n[VehtariGelman2021]: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved widehat R for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008\n\n[BDA3]: Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.\n\n\n\n\n\n","category":"type"},{"location":"api/diagnostics/#mcse","page":"Diagnostics","title":"Monte Carlo standard error","text":"","category":"section"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"MCMCDiagnosticTools.mcse","category":"page"},{"location":"api/diagnostics/#MCMCDiagnosticTools.mcse","page":"Diagnostics","title":"MCMCDiagnosticTools.mcse","text":"mcse(data::InferenceData; kwargs...) -> Dataset\nmcse(data::Dataset; kwargs...) -> Dataset\n\nCalculate the Monte Carlo standard error (MCSE) for each parameter in the data.\n\n\n\n\n\nmcse(samples::AbstractArray{<:Union{Missing,Real}}; kind=Statistics.mean, kwargs...)\n\nEstimate the Monte Carlo standard errors (MCSE) of the estimator kind applied to samples of shape (draws, [chains[, parameters...]]).\n\nSee also: ess\n\nKinds of MCSE estimates\n\nThe estimator whose MCSE should be estimated is specified with kind. kind must accept a vector of the same eltype as samples and return a real estimate.\n\nFor the following estimators, the effective sample size ess and an estimate of the asymptotic variance are used to compute the MCSE, and kwargs are forwarded to ess:\n\nStatistics.mean\nStatistics.median\nStatistics.std\nBase.Fix2(Statistics.quantile, p::Real)\n\nFor other estimators, the subsampling bootstrap method (SBM)[FlegalJones2011][Flegal2012] is used as a fallback, and the only accepted kwarg is batch_size, which indicates the size of the overlapping batches used to estimate the MCSE, defaulting to floor(Int, sqrt(draws * chains)). Note that SBM tends to underestimate the MCSE, especially for highly autocorrelated chains. One should verify that autocorrelation is low by checking the bulk- and tail-ESS values.\n\n[FlegalJones2011]: Flegal JM, Jones GL. (2011) Implementing MCMC: estimating with confidence. Handbook of Markov Chain Monte Carlo. pp. 175-97. pdf\n\n[Flegal2012]: Flegal JM. (2012) Applicability of subsampling bootstrap methods in Markov chain Monte Carlo. 
Monte Carlo and Quasi-Monte Carlo Methods 2010. pp. 363-72. doi: 10.1007/978-3-642-27440-4_18\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/#rstar","page":"Diagnostics","title":"R^* diagnostic","text":"","category":"section"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"MCMCDiagnosticTools.rstar","category":"page"},{"location":"api/diagnostics/#MCMCDiagnosticTools.rstar","page":"Diagnostics","title":"MCMCDiagnosticTools.rstar","text":"rstar(\n rng::Random.AbstractRNG=Random.default_rng(),\n classifier,\n data::Union{InferenceData,Dataset};\n kwargs...,\n)\n\nCalculate the R^* diagnostic for the data.\n\n\n\n\n\nrstar(\n rng::Random.AbstractRNG=Random.default_rng(),\n classifier,\n samples,\n chain_indices::AbstractVector{Int};\n subset::Real=0.7,\n split_chains::Int=2,\n verbosity::Int=0,\n)\n\nCompute the R^* convergence statistic of the table samples with the classifier.\n\nsamples must be either an AbstractMatrix, an AbstractVector, or a table (i.e. it implements the Tables.jl interface) whose rows are draws and whose columns are parameters.\n\nchain_indices indicates the chain ids of each row of samples.\n\nThis method supports ragged chains, i.e. chains of nonequal lengths.\n\n\n\n\n\nrstar(\n rng::Random.AbstractRNG=Random.default_rng(),\n classifier,\n samples::AbstractArray{<:Real};\n subset::Real=0.7,\n split_chains::Int=2,\n verbosity::Int=0,\n)\n\nCompute the R^* convergence statistic of the samples with the classifier.\n\nsamples is an array of draws with the shape (draws, [chains[, parameters...]]).\n\nThis implementation is an adaptation of algorithms 1 and 2 described by Lambert and Vehtari.\n\nThe classifier has to be a supervised classifier of the MLJ framework (see the MLJ documentation for a list of supported models). It is trained with a subset of the samples from each chain. Each chain is split into split_chains separate chains to additionally check for within-chain convergence. The training of the classifier can be inspected by adjusting the verbosity level.\n\nIf the classifier is deterministic, i.e., if it predicts a class, the value of the R^* statistic is returned (algorithm 1). If the classifier is probabilistic, i.e., if it outputs probabilities of classes, the scaled Poisson-binomial distribution of the R^* statistic is returned (algorithm 2).\n\nnote: Note\nThe correctness of the statistic depends on the convergence of the classifier used internally in the statistic.\n\nExamples\n\njulia> using MLJBase, MLJIteration, EvoTrees, Statistics, StatisticalMeasures\n\njulia> samples = fill(4.0, 100, 3, 2);\n\nOne can compute the distribution of the R^* statistic (algorithm 2) with a probabilistic classifier. For instance, we can use a gradient-boosted trees model with nrounds = 100 sequentially stacked trees and learning rate eta = 0.05:\n\njulia> model = EvoTreeClassifier(; nrounds=100, eta=0.05);\n\njulia> distribution = rstar(model, samples);\n\njulia> round(mean(distribution); digits=2)\n1.0f0\n\nNote, however, that it is recommended to determine nrounds based on early stopping. 
With the MLJ framework, this can be achieved in the following way (see the MLJ documentation for additional explanations):\n\njulia> model = IteratedModel(;\n model=EvoTreeClassifier(; eta=0.05),\n iteration_parameter=:nrounds,\n resampling=Holdout(),\n measures=log_loss,\n controls=[Step(5), Patience(2), NumberLimit(100)],\n retrain=true,\n );\n\njulia> distribution = rstar(model, samples);\n\njulia> round(mean(distribution); digits=2)\n1.0f0\n\nFor deterministic classifiers, a single R^* statistic (algorithm 1) is returned. Deterministic classifiers can also be derived from probabilistic classifiers by e.g. predicting the mode. In MLJ this corresponds to a pipeline of models.\n\njulia> evotree_deterministic = Pipeline(model; operation=predict_mode);\n\njulia> value = rstar(evotree_deterministic, samples);\n\njulia> round(value; digits=2)\n1.0\n\nReferences\n\nLambert, B., & Vehtari, A. (2020). R^*: A robust MCMC convergence diagnostic with uncertainty using decision tree classifiers.\n\n\n\n\n\n","category":"function"},{"location":"api/#api","page":"API Overview","title":"API Overview","text":"","category":"section"},{"location":"api/","page":"API Overview","title":"API Overview","text":"Pages = [\"data.md\", \"dataset.md\", \"diagnostics.md\", \"inference_data.md\", \"stats.md\"]\nDepth = 1","category":"page"},{"location":"creating_custom_plots/","page":"Creating custom plots","title":"Creating custom plots","text":"\n\n\n

Creating custom plots

\n\n\n\n\n\n

While ArviZ includes many plotting functions for visualizing the data stored in InferenceData objects, you will often need to construct custom plots, or you may want to tweak some of our plots in your favorite plotting package.

In this tutorial, we will show you a few useful techniques you can use to construct these plots using Julia's plotting packages. For demonstration purposes, we'll use Makie.jl and AlgebraOfGraphics.jl, which can consume Dataset objects since Datasets implement the Tables interface. However, we could just as easily have used StatsPlots.jl.

\n\n
begin\n    using ArviZ, ArviZExampleData, DimensionalData, DataFrames, Statistics\n    using AlgebraOfGraphics, CairoMakie\n    using AlgebraOfGraphics: density\n    set_aog_theme!()\nend;
\n\n\n\n

We'll start by loading some draws from an implementation of the centered parameterization of the 8 schools model. In this parameterization, the model has some sampling issues.

\n\n
idata = load_example_data(\"centered_eight\")
\n
InferenceData
posterior
╭─────────────────╮\n│ 500×4×8 Dataset │\n├─────────────────┴────────────────────────────────────────────────────── dims ┐\n  ↓ draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  → chain  Sampled{Int64} [0, 1, 2, 3] ForwardOrdered Irregular Points,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :mu    eltype: Float64 dims: draw, chain size: 500×4\n  :theta eltype: Float64 dims: school, draw, chain size: 8×500×4\n  :tau   eltype: Float64 dims: draw, chain size: 500×4\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 6 entries:\n  \"created_at\" => \"2022-10-13T14:37:37.315398\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"sampling_time\" => 7.48011\n  \"tuning_steps\" => 1000\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
posterior_predictive
╭─────────────────╮\n│ 8×500×4 Dataset │\n├─────────────────┴────────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered,\n  → draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  ↗ chain  Sampled{Int64} [0, 1, 2, 3] ForwardOrdered Irregular Points\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :obs eltype: Float64 dims: school, draw, chain size: 8×500×4\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:41.460544\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
log_likelihood
╭─────────────────╮\n│ 8×500×4 Dataset │\n├─────────────────┴────────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered,\n  → draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  ↗ chain  Sampled{Int64} [0, 1, 2, 3] ForwardOrdered Irregular Points\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :obs eltype: Float64 dims: school, draw, chain size: 8×500×4\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:37.487399\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
sample_stats
╭───────────────╮\n│ 500×4 Dataset │\n├───────────────┴─────────────────────────────────────────────────────── dims ┐\n  ↓ draw  Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  → chain Sampled{Int64} [0, 1, 2, 3] ForwardOrdered Irregular Points\n├─────────────────────────────────────────────────────────────────────────────┴ layers ┐\n  :max_energy_error    eltype: Float64 dims: draw, chain size: 500×4\n  :energy_error        eltype: Float64 dims: draw, chain size: 500×4\n  :lp                  eltype: Float64 dims: draw, chain size: 500×4\n  :index_in_trajectory eltype: Int64 dims: draw, chain size: 500×4\n  :acceptance_rate     eltype: Float64 dims: draw, chain size: 500×4\n  :diverging           eltype: Bool dims: draw, chain size: 500×4\n  :process_time_diff   eltype: Float64 dims: draw, chain size: 500×4\n  :n_steps             eltype: Float64 dims: draw, chain size: 500×4\n  :perf_counter_start  eltype: Float64 dims: draw, chain size: 500×4\n  :largest_eigval      eltype: Union{Missing, Float64} dims: draw, chain size: 500×4\n  :smallest_eigval     eltype: Union{Missing, Float64} dims: draw, chain size: 500×4\n  :step_size_bar       eltype: Float64 dims: draw, chain size: 500×4\n  :step_size           eltype: Float64 dims: draw, chain size: 500×4\n  :energy              eltype: Float64 dims: draw, chain size: 500×4\n  :tree_depth          eltype: Int64 dims: draw, chain size: 500×4\n  :perf_counter_diff   eltype: Float64 dims: draw, chain size: 500×4\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 6 entries:\n  \"created_at\" => \"2022-10-13T14:37:37.324929\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"sampling_time\" => 7.48011\n  \"tuning_steps\" => 1000\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
prior
╭─────────────────╮\n│ 500×1×8 Dataset │\n├─────────────────┴────────────────────────────────────────────────────── dims ┐\n  ↓ draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  → chain  Sampled{Int64} [0] ForwardOrdered Irregular Points,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :tau   eltype: Float64 dims: draw, chain size: 500×1\n  :theta eltype: Float64 dims: school, draw, chain size: 8×500×1\n  :mu    eltype: Float64 dims: draw, chain size: 500×1\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:26.602116\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
prior_predictive
╭─────────────────╮\n│ 8×500×1 Dataset │\n├─────────────────┴────────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered,\n  → draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  ↗ chain  Sampled{Int64} [0] ForwardOrdered Irregular Points\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :obs eltype: Float64 dims: school, draw, chain size: 8×500×1\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:26.604969\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
observed_data
╭───────────────────╮\n│ 8-element Dataset │\n├───────────────────┴──────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :obs eltype: Float64 dims: school size: 8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:26.606375\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
constant_data
╭───────────────────╮\n│ 8-element Dataset │\n├───────────────────┴──────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :scores eltype: Float64 dims: school size: 8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:26.607471\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
\n\n
idata.posterior
\n
╭─────────────────╮\n│ 500×4×8 Dataset │\n├─────────────────┴────────────────────────────────────────────────────────────── dims ┐\n  ↓ draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  → chain  Sampled{Int64} [0, 1, 2, 3] ForwardOrdered Irregular Points,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────────────── layers ┤\n  :mu    eltype: Float64 dims: draw, chain size: 500×4\n  :theta eltype: Float64 dims: school, draw, chain size: 8×500×4\n  :tau   eltype: Float64 dims: draw, chain size: 500×4\n├──────────────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 6 entries:\n  \"created_at\"                => \"2022-10-13T14:37:37.315398\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"sampling_time\"             => 7.48011\n  \"tuning_steps\"              => 1000\n  \"arviz_version\"             => \"0.13.0.dev0\"\n  \"inference_library\"         => \"pymc\"\n
\n\n\n

The plotting functions we'll be using interact with a tabular view of a Dataset. Let's see what that view looks like:

\n\n
df = DataFrame(idata.posterior)
\n
   row  draw  chain  school          mu        theta     tau
     1     0      0  "Choate"        7.8718    12.3207   4.72574
     2     1      0  "Choate"        3.38455   11.2856   3.90899
     3     2      0  "Choate"        9.10048    5.70851  4.84403
     4     3      0  "Choate"        7.30429   10.0373   1.8567
     5     4      0  "Choate"        9.87968    9.14915  4.74841
     6     5      0  "Choate"        7.04203   14.7359   3.51387
     7     6      0  "Choate"       10.3785    14.304    4.20898
     8     7      0  "Choate"       10.06      13.3298   2.6834
     9     8      0  "Choate"       10.4253    10.4498   1.16889
    10     9      0  "Choate"       10.8108    11.4731   1.21052
   ...
 16000   499      3  "Mt. Hermon"    3.40446    1.29505  4.46125
\n\n\n

The tabular view includes dimensions and variables as columns.

When variables with different dimensions are flattened into a tabular form, there's always some duplication of values. As a simple case, note that chain, draw, and school all have repeated values in the above table.

In this case, theta has the school dimension, but tau doesn't, so the values of tau will be repeated in the table for each value of school.

\n\n
df[df.school .== Ref(\"Choate\"), :].tau == df[df.school .== Ref(\"Deerfield\"), :].tau
\n
true
\n\n\n

In our first example, this will be important.

Here, let's construct a trace plot. Besides idata, all functions and types in the following cell are defined in AlgebraOfGraphics or Makie:

  • data(...) indicates that the wrapped object implements the Tables interface

  • mapping indicates how the data should be used. The symbols are all column names in the table, which for us are our variable names and dimensions.

  • visual specifies how the data should be converted to a plot.

  • Lines is a plot type defined in Makie.

  • draw takes this combination and plots it.

\n\n
draw(\n    data(idata.posterior.mu) *\n    mapping(:draw, :mu; color=:chain => nonnumeric) *\n    visual(Lines; alpha=0.8),\n)
\n\n\n\n

Note the line idata.posterior.mu. If we had just used idata.posterior, the plot would have looked more-or-less the same, but there would be artifacts due to mu being copied many times. By selecting mu directly, all other dimensions are discarded, so each value of mu appears in the plot exactly once.
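To make the difference concrete, we can compare the sizes of the two tabular views (a quick sketch; it assumes the 500-draw, 4-chain, 8-school posterior above, and that DimensionalData's Tables support also applies to a single variable like idata.posterior.mu):

nrow(DataFrame(idata.posterior))     # 16000 rows: 500 draws × 4 chains × 8 schools
nrow(DataFrame(idata.posterior.mu))  # 2000 rows: 500 draws × 4 chains, no school duplication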

When examining an MCMC trace plot, we want to see a \"fuzzy caterpillar\". Instead we see a few places where the Markov chains froze. We can do the same for theta as well, but it's more useful here to separate these draws by school.

\n\n
draw(\n    data(idata.posterior) *\n    mapping(:draw, :theta; layout=:school, color=:chain => nonnumeric) *\n    visual(Lines; alpha=0.8),\n)
\n\n\n\n

Suppose we want to compare tau with theta for two different schools. To do so, we use InferenceData's indexing syntax to subset the data.

\n\n
draw(\n    data(idata[:posterior, school=At([\"Choate\", \"Deerfield\"])]) *\n    mapping(:theta, :tau; color=:school) *\n    density() *\n    visual(Contour; levels=10),\n)
\n\n\n\n

We can also compare the density plots constructed from each chain for different schools.

\n\n
draw(\n    data(idata.posterior) *\n    mapping(:theta; layout=:school, color=:chain => nonnumeric) *\n    density(),\n)
\n\n\n\n

If we want to compare many schools in a single plot, an ECDF plot is more convenient.

\n\n
draw(\n    data(idata.posterior) * mapping(:theta; color=:school => nonnumeric) * visual(ECDFPlot);\n    axis=(; ylabel=\"probability\"),\n)
\n\n\n\n

So far we've just plotted data from one group, but we often want to combine data from multiple groups in one plot. The simplest way to do this is to create the plot out of multiple layers. Here we use this approach to plot the observations over the posterior predictive distribution.

\n\n
draw(\n    (data(idata.posterior_predictive) * mapping(:obs; layout=:school) * density()) +\n    (data(idata.observed_data) * mapping(:obs, :obs => zero => \"\"; layout=:school)),\n)
\n\n\n\n

Another option is to combine the groups into a single dataset.

Here we compare the prior and posterior. Since the prior has 1 chain and the posterior has 4 chains, if we were to combine them into a table, the structure would need to be ragged. This is not currently supported.

We can then either plot the two distributions separately as we did before, or we can compare a single chain from each group. This is what we'll do here. To concatenate the two groups, we introduce a new named dimension using DimensionalData.Dim.

\n\n
draw(\n    data(\n        cat(\n            idata.posterior[chain=[1]], idata.prior; dims=Dim{:group}([:posterior, :prior])\n        )[:mu],\n    ) *\n    mapping(:mu; color=:group) *\n    histogram(; bins=20) *\n    visual(; alpha=0.8);\n    axis=(; ylabel=\"probability\"),\n)
\n\n\n\n

From the trace plots, we suspected the geometry of this posterior was bad. Let's highlight divergent transitions. To do so, we merge posterior and sample_stats, which we can do with merge since they share no common variable names.
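As a quick sanity check (a sketch using the AbstractDimStack API), we can confirm that the two groups have disjoint variable names before merging:

isempty(intersect(keys(idata.posterior), keys(idata.sample_stats)))  # true, so merge is lossless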

\n\n
draw(\n    data(merge(idata.posterior, idata.sample_stats)) * mapping(\n        :theta,\n        :tau;\n        layout=:school,\n        color=:diverging,\n        markersize=:diverging => (d -> d ? 5 : 2),\n    ),\n)
\n\n\n\n

When we try building more complex plots, we may need to build new Datasets from our existing ones.

One example of this is the corner plot. To build this plot, we need a copy of theta whose school dimension has been renamed, so that pairs of schools can be mapped to the rows and columns of the plot grid.

\n\n
let\n    theta = idata.posterior.theta[school=1:4]\n    theta2 = rebuild(set(theta; school=:school2); name=:theta2)\n    plot_data = Dataset(theta, theta2, idata.sample_stats.diverging)\n    draw(\n        data(plot_data) * mapping(\n            :theta,\n            :theta2 => \"theta\";\n            col=:school,\n            row=:school2,\n            color=:diverging,\n            markersize=:diverging => (d -> d ? 3 : 1),\n        );\n        figure=(; figsize=(5, 5)),\n        axis=(; aspect=1),\n    )\nend
\n\n\n","category":"page"},{"location":"creating_custom_plots/#Environment","page":"Creating custom plots","title":"Environment","text":"","category":"section"},{"location":"creating_custom_plots/","page":"Creating custom plots","title":"Creating custom plots","text":"
\n
\n\n
using Pkg, InteractiveUtils
\n\n\n
using PlutoUI
\n\n\n
with_terminal(Pkg.status; color=false)
\n
Status `~/work/ArviZ.jl/ArviZ.jl/docs/Project.toml`\n  [cbdf2221] AlgebraOfGraphics v0.8.11\n  [131c737c] ArviZ v0.12.1 `~/work/ArviZ.jl/ArviZ.jl`\n  [2f96bb34] ArviZExampleData v0.1.11\n  [4a6e88f0] ArviZPythonPlots v0.1.7\n  [13f3f980] CairoMakie v0.12.12\n  [a93c6f00] DataFrames v1.7.0\n⌅ [0703355e] DimensionalData v0.27.9\n  [31c24e10] Distributions v0.25.112\n  [e30172f5] Documenter v1.7.0\n  [f6006082] EvoTrees v0.16.7\n  [b5cf5a8d] InferenceObjects v0.4.3\n  [be115224] MCMCDiagnosticTools v0.3.10\n  [a7f614a8] MLJBase v1.7.0\n  [614be32b] MLJIteration v0.6.3\n  [ce719bf2] PSIS v0.9.6\n  [359b1769] PlutoStaticHTML v6.0.28\n  [7f904dfe] PlutoUI v0.7.60\n  [7f36be82] PosteriorStats v0.2.5\n  [c1514b29] StanSample v7.10.1\n  [a19d573c] StatisticalMeasures v0.1.7\n  [2913bbd2] StatsBase v0.34.3\n  [fce5fe82] Turing v0.34.1\n  [f43a241f] Downloads v1.6.0\n  [37e2e46d] LinearAlgebra\n  [10745b16] Statistics v1.10.0\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`\n
\n\n
with_terminal(versioninfo)
\n
Julia Version 1.10.5\nCommit 6f3fdf7b362 (2024-08-27 14:19 UTC)\nBuild Info:\n  Official https://julialang.org/ release\nPlatform Info:\n  OS: Linux (x86_64-linux-gnu)\n  CPU: 4 × AMD EPYC 7763 64-Core Processor\n  WORD_SIZE: 64\n  LIBM: libopenlibm\n  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)\nThreads: 2 default, 0 interactive, 1 GC (on 4 virtual cores)\nEnvironment:\n  JULIA_PKG_SERVER_REGISTRY_PREFERENCE = eager\n  JULIA_NUM_THREADS = 2\n  JULIA_REVISE_WORKER_ONLY = 1\n
\n\n","category":"page"},{"location":"creating_custom_plots/","page":"Creating custom plots","title":"Creating custom plots","text":"EditURL = \"https://github.com/arviz-devs/ArviZ.jl/blob/main/docs/src/creating_custom_plots.jl\"","category":"page"},{"location":"api/stats/#stats-api","page":"Stats","title":"Stats","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"Pages = [\"stats.md\"]","category":"page"},{"location":"api/stats/#Summary-statistics","page":"Stats","title":"Summary statistics","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"SummaryStats\ndefault_summary_stats\ndefault_stats\ndefault_diagnostics\nsummarize\nsummarystats","category":"page"},{"location":"api/stats/#PosteriorStats.SummaryStats","page":"Stats","title":"PosteriorStats.SummaryStats","text":"struct SummaryStats{D, V<:(AbstractVector)}\n\nA container for a column table of values computed by summarize.\n\nThis object implements the Tables and TableTraits column table interfaces. It has a custom show method.\n\nSummaryStats behaves like an OrderedDict of columns, where the columns can be accessed using either Symbols or a 1-based integer index.\n\nname::String: The name of the collection of summary statistics, used as the table title in display.\ndata::Any: The summary statistics for each parameter. It must implement the Tables interface.\nparameter_names::AbstractVector: Names of the parameters\n\nSummaryStats([name::String,] data[, parameter_names])\nSummaryStats(data[, parameter_names]; name::String=\"SummaryStats\")\n\nConstruct a SummaryStats from tabular data with optional stats name and param_names.\n\ndata must not contain a column :parameter, as this is reserved for the parameter names, which are always in the first column.\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.default_summary_stats","page":"Stats","title":"PosteriorStats.default_summary_stats","text":"default_summary_stats(focus=Statistics.mean; kwargs...)\n\nCombinatiton of default_stats and default_diagnostics to be used with summarize.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.default_stats","page":"Stats","title":"PosteriorStats.default_stats","text":"default_stats(focus=Statistics.mean; prob_interval=0.94, kwargs...)\n\nDefault statistics to be computed with summarize.\n\nThe value of focus determines the statistics to be returned:\n\nStatistics.mean: mean, std, hdi_3%, hdi_97%\nStatistics.median: median, mad, eti_3%, eti_97%\n\nIf prob_interval is set to a different value than the default, then different HDI and ETI statistics are computed accordingly. hdi refers to the highest-density interval, while eti refers to the equal-tailed interval (i.e. 
the credible interval computed from symmetric quantiles).\n\nSee also: hdi\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.default_diagnostics","page":"Stats","title":"PosteriorStats.default_diagnostics","text":"default_diagnostics(focus=Statistics.mean; kwargs...)\n\nDefault diagnostics to be computed with summarize.\n\nThe value of focus determines the diagnostics to be returned:\n\nStatistics.mean: mcse_mean, mcse_std, ess_tail, ess_bulk, rhat\nStatistics.median: mcse_median, ess_tail, ess_bulk, rhat\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.summarize","page":"Stats","title":"PosteriorStats.summarize","text":"summarize(data, stats_funs...; name=\"SummaryStats\", [var_names]) -> SummaryStats\n\nCompute the summary statistics in stats_funs on each parameter in data.\n\nstats_funs is a collection of functions that reduce a matrix with shape (draws, chains) to a scalar or a collection of scalars. Alternatively, an item in stats_funs may be a Pair of the form name => fun specifying the name to be used for the statistic or of the form (name1, ...) => fun when the function returns a collection. When the function returns a collection, the names in this latter format must be provided.\n\nIf no stats functions are provided, then those specified in default_summary_stats are computed.\n\nvar_names specifies the names of the parameters in data. If not provided, the names are inferred from data.\n\nTo support computing summary statistics from a custom object, overload this method specifying the type of data.\n\nSee also SummaryStats, default_summary_stats, default_stats, default_diagnostics.\n\nExamples\n\nCompute mean, std and the Monte Carlo standard error (MCSE) of the mean estimate:\n\njulia> using Statistics, StatsBase\n\njulia> x = randn(1000, 4, 3) .+ reshape(0:10:20, 1, 1, :);\n\njulia> summarize(x, mean, std, :mcse_mean => sem; name=\"Mean/Std\")\nMean/Std\n mean std mcse_mean\n 1 0.0003 0.990 0.016\n 2 10.02 0.988 0.016\n 3 19.98 0.988 0.016\n\nAvoid recomputing the mean by using mean_and_std, and provide parameter names:\n\njulia> summarize(x, (:mean, :std) => mean_and_std, mad; var_names=[:a, :b, :c])\nSummaryStats\n mean std mad\n a 0.000305 0.990 0.978\n b 10.0 0.988 0.995\n c 20.0 0.988 0.979\n\nNote that when an estimator and its MCSE are both computed, the MCSE is used to determine the number of significant digits that will be displayed.\n\njulia> summarize(x; var_names=[:a, :b, :c])\nSummaryStats\n mean std hdi_3% hdi_97% mcse_mean mcse_std ess_tail ess_bulk r ⋯\n a 0.0003 0.99 -1.92 1.78 0.016 0.012 3567 3663 1 ⋯\n b 10.02 0.99 8.17 11.9 0.016 0.011 3841 3906 1 ⋯\n c 19.98 0.99 18.1 21.9 0.016 0.012 3892 3749 1 ⋯\n 1 column omitted\n\nCompute just the statistics with an 89% HDI on all parameters, and provide the parameter names:\n\njulia> summarize(x, default_stats(; prob_interval=0.89)...; var_names=[:a, :b, :c])\nSummaryStats\n mean std hdi_5.5% hdi_94.5%\n a 0.000305 0.990 -1.63 1.52\n b 10.0 0.988 8.53 11.6\n c 20.0 0.988 18.5 21.6\n\nCompute the summary stats focusing on Statistics.median:\n\njulia> summarize(x, default_summary_stats(median)...; var_names=[:a, :b, :c])\nSummaryStats\n median mad eti_3% eti_97% mcse_median ess_tail ess_median rhat\n a 0.004 0.978 -1.83 1.89 0.020 3567 3336 1.00\n b 10.02 0.995 8.17 11.9 0.023 3841 3787 1.00\n c 19.99 0.979 18.1 21.9 0.020 3892 3829 
1.00\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#StatsBase.summarystats","page":"Stats","title":"StatsBase.summarystats","text":"summarystats(data::InferenceData; group=:posterior, kwargs...) -> SummaryStats\nsummarystats(data::Dataset; kwargs...) -> SummaryStats\n\nCompute default summary statistics for the data using summarize.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#General-statistics","page":"Stats","title":"General statistics","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"hdi\nhdi!\nr2_score","category":"page"},{"location":"api/stats/#PosteriorStats.hdi","page":"Stats","title":"PosteriorStats.hdi","text":"hdi(samples::AbstractArray{<:Real}; prob=0.94) -> (; lower, upper)\n\nEstimate the unimodal highest density interval (HDI) of samples for the probability prob.\n\nThe HDI is the minimum width Bayesian credible interval (BCI). That is, it is the smallest possible interval containing (100*prob)% of the probability mass.[Hyndman1996]\n\nsamples is an array of shape (draws[, chains[, params...]]). If multiple parameters are present, then lower and upper are arrays with the shape (params...,), computed separately for each marginal.\n\nThis implementation uses the algorithm of [ChenShao1999].\n\nnote: Note\nAny default value of prob is arbitrary. The default value of prob=0.94 instead of a more common default like prob=0.95 is chosen to remind the user of this arbitrariness.\n\n[Hyndman1996]: Rob J. Hyndman (1996) Computing and Graphing Highest Density Regions, Amer. Stat., 50(2): 120-6. DOI: 10.1080/00031305.1996.10474359 jstor.\n\n[ChenShao1999]: Ming-Hui Chen & Qi-Man Shao (1999) Monte Carlo Estimation of Bayesian Credible and HPD Intervals, J Comput. Graph. Stat., 8:1, 69-92. DOI: 10.1080/10618600.1999.10474802 jstor.\n\nExamples\n\nHere we calculate the 83% HDI for a normal random variable:\n\njulia> x = randn(2_000);\n\njulia> hdi(x; prob=0.83) |> pairs\npairs(::NamedTuple) with 2 entries:\n :lower => -1.38266\n :upper => 1.25982\n\nWe can also calculate the HDI for a 3-dimensional array of samples:\n\njulia> x = randn(1_000, 1, 1) .+ reshape(0:5:10, 1, 1, :);\n\njulia> hdi(x) |> pairs\npairs(::NamedTuple) with 2 entries:\n :lower => [-1.9674, 3.0326, 8.0326]\n :upper => [1.90028, 6.90028, 11.9003]\n\n\n\n\n\nhdi(data::InferenceData; kwargs...) -> Dataset\nhdi(data::Dataset; kwargs...) 
-> Dataset\n\nCalculate the highest density interval (HDI) for each parameter in the data.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.hdi!","page":"Stats","title":"PosteriorStats.hdi!","text":"hdi!(samples::AbstractArray{<:Real}; prob=0.94) -> (; lower, upper)\n\nA version of hdi that sorts samples in-place while computing the HDI.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.r2_score","page":"Stats","title":"PosteriorStats.r2_score","text":"r2_score(y_true::AbstractVector, y_pred::AbstractArray) -> (; r2, r2_std)\n\nR² for linear Bayesian regression models.[GelmanGoodrich2019]\n\nArguments\n\ny_true: Observed data of length noutputs\ny_pred: Predicted data with size (ndraws[, nchains], noutputs)\n\n[GelmanGoodrich2019]: Andrew Gelman, Ben Goodrich, Jonah Gabry & Aki Vehtari (2019) R-squared for Bayesian Regression Models, The American Statistician, 73:3, 307-9, DOI: 10.1080/00031305.2018.1549100.\n\nExamples\n\njulia> using ArviZExampleData\n\njulia> idata = load_example_data(\"regression1d\");\n\njulia> y_true = idata.observed_data.y;\n\njulia> y_pred = PermutedDimsArray(idata.posterior_predictive.y, (:draw, :chain, :y_dim_0));\n\njulia> r2_score(y_true, y_pred) |> pairs\npairs(::NamedTuple) with 2 entries:\n :r2 => 0.683197\n :r2_std => 0.0368838\n\n\n\n\n\nr2_score(idata::InferenceData; y_name, y_pred_name) -> (; r2, r2_std)\n\nCompute R² from idata, automatically formatting the predictions to the correct shape.\n\nKeywords\n\ny_name: Name of observed data variable in idata.observed_data. If not provided, then the only observed data variable is used.\ny_pred_name: Name of posterior predictive variable in idata.posterior_predictive. If not provided, then y_name is used.\n\nExamples\n\njulia> using ArviZExampleData, PosteriorStats\n\njulia> idata = load_example_data(\"regression10d\");\n\njulia> r2_score(idata) |> pairs\npairs(::NamedTuple) with 2 entries:\n :r2 => 0.998385\n :r2_std => 0.000100621\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#Pareto-smoothed-importance-sampling","page":"Stats","title":"Pareto-smoothed importance sampling","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"PSISResult\ness_is\nPSISPlots.paretoshapeplot\npsis\npsis!","category":"page"},{"location":"api/stats/#PSIS.PSISResult","page":"Stats","title":"PSIS.PSISResult","text":"PSISResult\n\nResult of Pareto-smoothed importance sampling (PSIS) using psis.\n\nProperties\n\nlog_weights: un-normalized Pareto-smoothed log weights\nweights: normalized Pareto-smoothed weights (allocates a copy)\npareto_shape: Pareto k=ξ shape parameter\nnparams: number of parameters in log_weights\nndraws: number of draws in log_weights\nnchains: number of chains in log_weights\nreff: the ratio of the effective sample size of the unsmoothed importance ratios and the actual sample size.\ness: estimated effective sample size of estimate of mean using smoothed importance samples (see ess_is)\ntail_length: length of the upper tail of log_weights that was smoothed\ntail_dist: the generalized Pareto distribution that was fit to the tail of log_weights. 
Note that the tail weights are scaled to have a maximum of 1, so tail_dist * exp(maximum(log_ratios)) is the corresponding fit directly to the tail of log_ratios.\nnormalized::Bool: indicates whether log_weights are log-normalized along the sample dimensions.\n\nDiagnostic\n\nThe pareto_shape parameter k=ξ of the generalized Pareto distribution tail_dist can be used to diagnose reliability and convergence of estimates using the importance weights [VehtariSimpson2021].\n\nif k ≤ 1/3, importance sampling is stable, and importance sampling (IS) and PSIS both are reliable.\nif k ≤ 1/2, then the importance ratio distribution has finite variance, and the central limit theorem holds. As k approaches the upper bound, IS becomes less reliable, while PSIS still works well but with a higher RMSE.\nif 1/2 < k ≤ 0.7, then the variance is infinite, and IS can behave quite poorly. However, PSIS works well in this regime.\nif 0.7 < k ≤ 1, then it quickly becomes impractical to collect enough importance weights to reliably compute estimates, and importance sampling is not recommended.\nif k > 1, then neither the variance nor the mean of the raw importance ratios exists. The convergence rate is close to zero, and bias can be large with practical sample sizes.\n\nSee PSISPlots.paretoshapeplot for a diagnostic plot.\n\n[VehtariSimpson2021]: Vehtari A, Simpson D, Gelman A, Yao Y, Gabry J. (2021). Pareto smoothed importance sampling. arXiv:1507.02646v7 [stat.CO]\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PSIS.ess_is","page":"Stats","title":"PSIS.ess_is","text":"ess_is(weights; reff=1)\n\nEstimate effective sample size (ESS) for importance sampling over the sample dimensions.\n\nGiven normalized weights w_{1:n}, the ESS is estimated using the L2-norm of the weights:\n\nESS(w_{1:n}) = r_eff / ∑_{i=1}^n w_i^2\n\nwhere r_eff is the relative efficiency of the log_weights.\n\ness_is(result::PSISResult; bad_shape_nan=true)\n\nEstimate ESS for Pareto-smoothed importance sampling.\n\nnote: Note\nESS estimates for Pareto shape values k > 0.7, which are unreliable and misleadingly high, are set to NaN. To avoid this, set bad_shape_nan=false.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PSIS.PSISPlots.paretoshapeplot","page":"Stats","title":"PSIS.PSISPlots.paretoshapeplot","text":"paretoshapeplot(values; showlines=false, ...)\nparetoshapeplot!(values; showlines=false, kwargs...)\n\nPlot shape parameters of fitted Pareto tail distributions for diagnosing convergence.\n\nvalues may be either a vector of Pareto shape parameters or a PSIS.PSISResult.\n\nIf showlines==true, horizontal lines indicating relevant Pareto shape thresholds are drawn. See PSIS.PSISResult for an explanation of the thresholds.\n\nAll remaining kwargs are forwarded to the plotting function.\n\nSee psis, PSISResult.\n\nExamples\n\nusing PSIS, Distributions, Plots\nproposal = Normal()\ntarget = TDist(7)\nx = rand(proposal, 1_000, 100)\nlog_ratios = logpdf.(target, x) .- logpdf.(proposal, x)\nresult = psis(log_ratios)\nparetoshapeplot(result)\n\nWe can also plot the Pareto shape parameters directly:\n\nparetoshapeplot(result.pareto_shape)\n\nWe can also use plot directly:\n\nplot(result.pareto_shape; showlines=true)\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PSIS.psis","page":"Stats","title":"PSIS.psis","text":"psis(log_ratios, reff = 1.0; kwargs...) -> PSISResult\npsis!(log_ratios, reff = 1.0; kwargs...) 
-> PSISResult\n\nCompute Pareto smoothed importance sampling (PSIS) log weights [VehtariSimpson2021].\n\nWhile psis computes smoothed log weights out-of-place, psis! smooths them in-place.\n\nArguments\n\nlog_ratios: an array of logarithms of importance ratios, with size (draws, [chains, [parameters...]]), where chains>1 would be used when chains are generated using Markov chain Monte Carlo.\nreff::Union{Real,AbstractArray}: the ratio(s) of effective sample size of log_ratios and the actual sample size reff = ess/(draws * chains), used to account for autocorrelation, e.g. due to Markov chain Monte Carlo. If an array, it must have the size (parameters...,) to match log_ratios.\n\nKeywords\n\nwarn=true: If true, warning messages are delivered\nnormalize=true: If true, the log-weights will be log-normalized so that exp.(log_weights) sums to 1 along the sample dimensions.\n\nReturns\n\nresult: a PSISResult object containing the results of the Pareto-smoothing.\n\nA warning is raised if the Pareto shape parameter k > 0.7. See PSISResult for details and PSISPlots.paretoshapeplot for a diagnostic plot.\n\n[VehtariSimpson2021]: Vehtari A, Simpson D, Gelman A, Yao Y, Gabry J. (2021). Pareto smoothed importance sampling. arXiv:1507.02646v7 [stat.CO]\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PSIS.psis!","page":"Stats","title":"PSIS.psis!","text":"psis(log_ratios, reff = 1.0; kwargs...) -> PSISResult\npsis!(log_ratios, reff = 1.0; kwargs...) -> PSISResult\n\nCompute Pareto smoothed importance sampling (PSIS) log weights [VehtariSimpson2021].\n\nWhile psis computes smoothed log weights out-of-place, psis! smooths them in-place.\n\nArguments\n\nlog_ratios: an array of logarithms of importance ratios, with size (draws, [chains, [parameters...]]), where chains>1 would be used when chains are generated using Markov chain Monte Carlo.\nreff::Union{Real,AbstractArray}: the ratio(s) of effective sample size of log_ratios and the actual sample size reff = ess/(draws * chains), used to account for autocorrelation, e.g. due to Markov chain Monte Carlo. If an array, it must have the size (parameters...,) to match log_ratios.\n\nKeywords\n\nwarn=true: If true, warning messages are delivered\nnormalize=true: If true, the log-weights will be log-normalized so that exp.(log_weights) sums to 1 along the sample dimensions.\n\nReturns\n\nresult: a PSISResult object containing the results of the Pareto-smoothing.\n\nA warning is raised if the Pareto shape parameter k > 0.7. See PSISResult for details and PSISPlots.paretoshapeplot for a diagnostic plot.\n\n[VehtariSimpson2021]: Vehtari A, Simpson D, Gelman A, Yao Y, Gabry J. (2021). Pareto smoothed importance sampling. 
arXiv:1507.02646v7 [stat.CO]\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#LOO-and-WAIC","page":"Stats","title":"LOO and WAIC","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"AbstractELPDResult\nPSISLOOResult\nWAICResult\nelpd_estimates\ninformation_criterion\nloo\nwaic","category":"page"},{"location":"api/stats/#PosteriorStats.AbstractELPDResult","page":"Stats","title":"PosteriorStats.AbstractELPDResult","text":"abstract type AbstractELPDResult\n\nAn abstract type representing the result of an ELPD computation.\n\nEvery subtype stores estimates of both the expected log predictive density (elpd) and the effective number of parameters p, as well as standard errors and pointwise estimates of each, from which other relevant estimates can be computed.\n\nSubtypes implement the following functions:\n\nelpd_estimates\ninformation_criterion\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.PSISLOOResult","page":"Stats","title":"PosteriorStats.PSISLOOResult","text":"Results of Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO).\n\nSee also: loo, AbstractELPDResult\n\nestimates: Estimates of the expected log pointwise predictive density (ELPD) and effective number of parameters (p)\npointwise: Pointwise estimates\npsis_result: Pareto-smoothed importance sampling (PSIS) results\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.WAICResult","page":"Stats","title":"PosteriorStats.WAICResult","text":"Results of computing the widely applicable information criterion (WAIC).\n\nSee also: waic, AbstractELPDResult\n\nestimates: Estimates of the expected log pointwise predictive density (ELPD) and effective number of parameters (p)\npointwise: Pointwise estimates\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.elpd_estimates","page":"Stats","title":"PosteriorStats.elpd_estimates","text":"elpd_estimates(result::AbstractELPDResult; pointwise=false) -> (; elpd, elpd_mcse, lpd)\n\nReturn the (E)LPD estimates from the result.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.information_criterion","page":"Stats","title":"PosteriorStats.information_criterion","text":"information_criterion(elpd, scale::Symbol)\n\nCompute the information criterion for the given scale from the elpd estimate.\n\nscale must be one of (:deviance, :log, :negative_log).\n\nSee also: loo, waic\n\n\n\n\n\ninformation_criterion(result::AbstractELPDResult, scale::Symbol; pointwise=false)\n\nCompute information criterion for the given scale from the existing ELPD result.\n\nscale must be one of (:deviance, :log, :negative_log).\n\nIf pointwise=true, then pointwise estimates are returned.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.loo","page":"Stats","title":"PosteriorStats.loo","text":"loo(log_likelihood; reff=nothing, kwargs...) -> PSISLOOResult{<:NamedTuple,<:NamedTuple}\n\nCompute the Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO). [Vehtari2017][LOOFAQ]\n\nlog_likelihood must be an array of log-likelihood values with shape (draws, chains[, params...]).\n\nKeywords\n\nreff::Union{Real,AbstractArray{<:Real}}: The relative effective sample size(s) of the likelihood values. If an array, it must have the same data dimensions as the corresponding log-likelihood variable. 
If not provided, then this is estimated using MCMCDiagnosticTools.ess.\nkwargs: Remaining keywords are forwarded to PSIS.psis.\n\nSee also: PSISLOOResult, waic\n\n[Vehtari2017]: Vehtari, A., Gelman, A. & Gabry, J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat Comput 27, 1413–1432 (2017). doi: 10.1007/s11222-016-9696-4 arXiv: 1507.04544\n\n[LOOFAQ]: Aki Vehtari. Cross-validation FAQ. https://mc-stan.org/loo/articles/online-only/faq.html\n\nExamples\n\nManually compute reff and calculate PSIS-LOO of a model:\n\njulia> using ArviZExampleData, MCMCDiagnosticTools\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> log_like = PermutedDimsArray(idata.log_likelihood.obs, (:draw, :chain, :school));\n\njulia> reff = ess(log_like; kind=:basic, split_chains=1, relative=true);\n\njulia> loo(log_like; reff)\nPSISLOOResult with estimates\n elpd elpd_mcse p p_mcse\n -31 1.4 0.9 0.34\n\nand PSISResult with 500 draws, 4 chains, and 8 parameters\nPareto shape (k) diagnostic values:\n Count Min. ESS\n (-Inf, 0.5] good 7 (87.5%) 151\n (0.5, 0.7] okay 1 (12.5%) 446\n\n\n\n\n\nloo(data::Dataset; [var_name::Symbol,] kwargs...) -> PSISLOOResult{<:NamedTuple,<:Dataset}\nloo(data::InferenceData; [var_name::Symbol,] kwargs...) -> PSISLOOResult{<:NamedTuple,<:Dataset}\n\nCompute PSIS-LOO from log-likelihood values in data.\n\nIf more than one log-likelihood variable is present, then var_name must be provided.\n\nExamples\n\nCalculate PSIS-LOO of a model:\n\njulia> using ArviZExampleData, PosteriorStats\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> loo(idata)\nPSISLOOResult with estimates\n elpd elpd_mcse p p_mcse\n -31 1.4 0.9 0.34\n\nand PSISResult with 500 draws, 4 chains, and 8 parameters\nPareto shape (k) diagnostic values:\n Count Min. ESS\n (-Inf, 0.5] good 6 (75.0%) 135\n (0.5, 0.7] okay 2 (25.0%) 421\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.waic","page":"Stats","title":"PosteriorStats.waic","text":"waic(log_likelihood::AbstractArray) -> WAICResult{<:NamedTuple,<:NamedTuple}\n\nCompute the widely applicable information criterion (WAIC).[Watanabe2010][Vehtari2017][LOOFAQ]\n\nlog_likelihood must be an array of log-likelihood values with shape (draws, chains[, params...]).\n\nSee also: WAICResult, loo\n\n[Watanabe2010]: Watanabe, S. Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory. Journal of Machine Learning Research 11(116):3571−3594, 2010. https://jmlr.csail.mit.edu/papers/v11/watanabe10a.html\n\n[Vehtari2017]: Vehtari, A., Gelman, A. & Gabry, J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat Comput 27, 1413–1432 (2017). doi: 10.1007/s11222-016-9696-4 arXiv: 1507.04544\n\n[LOOFAQ]: Aki Vehtari. Cross-validation FAQ. 
https://mc-stan.org/loo/articles/online-only/faq.html\n\nExamples\n\nCalculate WAIC of a model:\n\njulia> using ArviZExampleData\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> log_like = PermutedDimsArray(idata.log_likelihood.obs, (:draw, :chain, :school));\n\njulia> waic(log_like)\nWAICResult with estimates\n elpd elpd_mcse p p_mcse\n -31 1.4 0.9 0.33\n\n\n\n\n\nwaic(data::Dataset; [var_name::Symbol]) -> WAICResult{<:NamedTuple,<:Dataset}\nwaic(data::InferenceData; [var_name::Symbol]) -> WAICResult{<:NamedTuple,<:Dataset}\n\nCompute WAIC from log-likelihood values in data.\n\nIf more than one log-likelihood variable is present, then var_name must be provided.\n\nExamples\n\nCalculate WAIC of a model:\n\njulia> using ArviZExampleData, PosteriorStats\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> waic(idata)\nWAICResult with estimates\n elpd elpd_mcse p p_mcse\n -31 1.4 0.9 0.33\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#Model-comparison","page":"Stats","title":"Model comparison","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"ModelComparisonResult\ncompare\nmodel_weights","category":"page"},{"location":"api/stats/#PosteriorStats.ModelComparisonResult","page":"Stats","title":"PosteriorStats.ModelComparisonResult","text":"ModelComparisonResult\n\nResult of model comparison using ELPD.\n\nThis struct implements the Tables and TableTraits interfaces.\n\nEach field returns a collection of the corresponding entry for each model:\n\nname: Names of the models, if provided.\nrank: Ranks of the models (ordered by decreasing ELPD)\nelpd_diff: ELPD of a model subtracted from the largest ELPD of any model\nelpd_diff_mcse: Monte Carlo standard error of the ELPD difference\nweight: Model weights computed with weights_method\nelpd_result: AbstractELPDResults for each model, which can be used to access useful stats like ELPD estimates, pointwise estimates, and Pareto shape values for PSIS-LOO\nweights_method: Method used to compute model weights with model_weights\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.compare","page":"Stats","title":"PosteriorStats.compare","text":"compare(models; kwargs...) -> ModelComparisonResult\n\nCompare models based on their expected log pointwise predictive density (ELPD).\n\nThe ELPD is estimated either by Pareto smoothed importance sampling leave-one-out cross-validation (LOO) or using the widely applicable information criterion (WAIC). We recommend loo. For more on the theory of model comparison, see Spiegelhalter et al. (2002): dx.doi.org/10.1111/1467-9868.00353\n\nArguments\n\nmodels: a Tuple, NamedTuple, or AbstractVector whose values are either AbstractELPDResult entries or any argument to elpd_method.\n\nKeywords\n\nweights_method::AbstractModelWeightsMethod=Stacking(): the method to be used to weight the models. See model_weights for details\nelpd_method=loo: a method that computes an AbstractELPDResult from an argument in models.\nsort::Bool=true: Whether to sort models by decreasing ELPD.\n\nReturns\n\nModelComparisonResult: A container for the model comparison results. The fields contain a similar collection to models.\n\nExamples\n\nCompare the centered and non-centered models of the eight schools problem using the defaults: loo and Stacking weights. 
A custom myloo method formats the inputs as expected by loo.\n\njulia> using ArviZExampleData\n\njulia> models = (\n centered=load_example_data(\"centered_eight\"),\n non_centered=load_example_data(\"non_centered_eight\"),\n );\n\njulia> function myloo(idata)\n log_like = PermutedDimsArray(idata.log_likelihood.obs, (2, 3, 1))\n return loo(log_like)\n end;\n\njulia> mc = compare(models; elpd_method=myloo)\n┌ Warning: 1 parameters had Pareto shape values 0.7 < k ≤ 1. Resulting importance sampling estimates are likely to be unstable.\n└ @ PSIS ~/.julia/packages/PSIS/...\nModelComparisonResult with Stacking weights\n rank elpd elpd_mcse elpd_diff elpd_diff_mcse weight p ⋯\n non_centered 1 -31 1.4 0 0.0 1.0 0.9 ⋯\n centered 2 -31 1.4 0.06 0.067 0.0 0.9 ⋯\n 1 column omitted\njulia> mc.weight |> pairs\npairs(::NamedTuple) with 2 entries:\n :non_centered => 1.0\n :centered => 5.34175e-19\n\nCompare the same models from pre-computed PSIS-LOO results, this time computing BootstrappedPseudoBMA weights:\n\njulia> elpd_results = mc.elpd_result;\n\njulia> compare(elpd_results; weights_method=BootstrappedPseudoBMA())\nModelComparisonResult with BootstrappedPseudoBMA weights\n rank elpd elpd_mcse elpd_diff elpd_diff_mcse weight p ⋯\n non_centered 1 -31 1.4 0 0.0 0.52 0.9 ⋯\n centered 2 -31 1.4 0.06 0.067 0.48 0.9 ⋯\n 1 column omitted\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.model_weights","page":"Stats","title":"PosteriorStats.model_weights","text":"model_weights(elpd_results; method=Stacking())\nmodel_weights(method::AbstractModelWeightsMethod, elpd_results)\n\nCompute weights for each model in elpd_results using method.\n\nelpd_results is a Tuple, NamedTuple, or AbstractVector with AbstractELPDResult entries. The weights are returned in the same type of collection.\n\nStacking is the recommended approach, as it performs well even when the true data generating process is not included among the candidate models. See [YaoVehtari2018] for details.\n\nSee also: AbstractModelWeightsMethod, compare\n\n[YaoVehtari2018]: Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman. Using Stacking to Average Bayesian Predictive Distributions. 2018. Bayesian Analysis. 13, 3, 917–1007. doi: 10.1214/17-BA1091 arXiv: 1704.02030\n\nExamples\n\nCompute Stacking weights for two models:\n\njulia> using ArviZExampleData\n\njulia> models = (\n centered=load_example_data(\"centered_eight\"),\n non_centered=load_example_data(\"non_centered_eight\"),\n );\n\njulia> elpd_results = map(models) do idata\n log_like = PermutedDimsArray(idata.log_likelihood.obs, (2, 3, 1))\n return loo(log_like)\n end;\n┌ Warning: 1 parameters had Pareto shape values 0.7 < k ≤ 1. 
Resulting importance sampling estimates are likely to be unstable.\n└ @ PSIS ~/.julia/packages/PSIS/...\n\njulia> model_weights(elpd_results; method=Stacking()) |> pairs\npairs(::NamedTuple) with 2 entries:\n :centered => 5.34175e-19\n :non_centered => 1.0\n\nNow we compute BootstrappedPseudoBMA weights for the same models:\n\njulia> model_weights(elpd_results; method=BootstrappedPseudoBMA()) |> pairs\npairs(::NamedTuple) with 2 entries:\n :centered => 0.483723\n :non_centered => 0.516277\n\n\n\n\n\n","category":"function"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"The following model weighting methods are available.","category":"page"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"AbstractModelWeightsMethod\nBootstrappedPseudoBMA\nPseudoBMA\nStacking","category":"page"},{"location":"api/stats/#PosteriorStats.AbstractModelWeightsMethod","page":"Stats","title":"PosteriorStats.AbstractModelWeightsMethod","text":"abstract type AbstractModelWeightsMethod\n\nAn abstract type representing methods for computing model weights.\n\nSubtypes implement model_weights(method, elpd_results).\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.BootstrappedPseudoBMA","page":"Stats","title":"PosteriorStats.BootstrappedPseudoBMA","text":"struct BootstrappedPseudoBMA{R<:Random.AbstractRNG, T<:Real} <: AbstractModelWeightsMethod\n\nModel weighting method using pseudo Bayesian Model Averaging with Akaike-type weighting and the Bayesian bootstrap (pseudo-BMA+)[YaoVehtari2018].\n\nThe Bayesian bootstrap stabilizes the model weights.\n\nBootstrappedPseudoBMA(; rng=Random.default_rng(), samples=1_000, alpha=1)\nBootstrappedPseudoBMA(rng, samples, alpha)\n\nConstruct the method.\n\nrng::Random.AbstractRNG: The random number generator to use for the Bayesian bootstrap\nsamples::Int64: The number of samples to draw for bootstrapping\nalpha::Real: The shape parameter in the Dirichlet distribution used for the Bayesian bootstrap. The default (1) corresponds to a uniform distribution on the simplex.\n\nSee also: Stacking\n\n[YaoVehtari2018]: Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman. Using Stacking to Average Bayesian Predictive Distributions. 2018. Bayesian Analysis. 13, 3, 917–1007. doi: 10.1214/17-BA1091 arXiv: 1704.02030\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.PseudoBMA","page":"Stats","title":"PosteriorStats.PseudoBMA","text":"struct PseudoBMA <: AbstractModelWeightsMethod\n\nModel weighting method using pseudo Bayesian Model Averaging (pseudo-BMA) and Akaike-type weighting.\n\nPseudoBMA(; regularize=false)\nPseudoBMA(regularize)\n\nConstruct the method with optional regularization of the weights using the standard error of the ELPD estimate.\n\nnote: Note\nThis approach is not recommended, as it produces unstable weight estimates. It is recommended to instead use BootstrappedPseudoBMA to stabilize the weights or Stacking. For details, see [YaoVehtari2018].\n\n[YaoVehtari2018]: Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman. Using Stacking to Average Bayesian Predictive Distributions. 2018. Bayesian Analysis. 13, 3, 917–1007. 
doi: 10.1214/17-BA1091 arXiv: 1704.02030\n\nSee also: Stacking\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.Stacking","page":"Stats","title":"PosteriorStats.Stacking","text":"struct Stacking{O<:Optim.AbstractOptimizer} <: AbstractModelWeightsMethod\n\nModel weighting using stacking of predictive distributions[YaoVehtari2018].\n\nStacking(; optimizer=Optim.LBFGS(), options=Optim.Options())\nStacking(optimizer[, options])\n\nConstruct the method, optionally customizing the optimization.\n\noptimizer::Optim.AbstractOptimizer: The optimizer to use for the optimization of the weights. The optimizer must support projected gradient optimization via a manifold field.\noptions::Optim.Options: The Optim options to use for the optimization of the weights.\n\nSee also: BootstrappedPseudoBMA\n\n[YaoVehtari2018]: Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman. Using Stacking to Average Bayesian Predictive Distributions. 2018. Bayesian Analysis. 13, 3, 917–1007. doi: 10.1214/17-BA1091 arXiv: 1704.02030
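\n\nExamples\n\nA hypothetical construction (not part of the original docstring), customizing the optimization; the keywords shown are standard Optim settings:\n\njulia> using Optim, PosteriorStats\n\njulia> method = Stacking(; optimizer=Optim.LBFGS(), options=Optim.Options(; iterations=10_000));\n\njulia> method isa AbstractModelWeightsMethod\ntrue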
\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#Predictive-checks","page":"Stats","title":"Predictive checks","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"loo_pit","category":"page"},{"location":"api/stats/#PosteriorStats.loo_pit","page":"Stats","title":"PosteriorStats.loo_pit","text":"loo_pit(y, y_pred, log_weights; kwargs...) -> Union{Real,AbstractArray}\n\nCompute leave-one-out probability integral transform (LOO-PIT) checks.\n\nArguments\n\ny: array of observations with shape (params...,)\ny_pred: array of posterior predictive samples with shape (draws, chains, params...).\nlog_weights: array of normalized log LOO importance weights with shape (draws, chains, params...).\n\nKeywords\n\nis_discrete: If not provided, then it is set to true iff elements of y and y_pred are all integer-valued. If true, then data are smoothed using smooth_data to make them non-discrete before estimating LOO-PIT values.\nkwargs: Remaining keywords are forwarded to smooth_data if data is discrete.\n\nReturns\n\npitvals: LOO-PIT values with same size as y. If y is a scalar, then pitvals is a scalar.\n\nLOO-PIT is a marginal posterior predictive check. If y_{-i} is the array y of observations with the ith observation left out, and y_i^* is a posterior prediction of the ith observation, then the LOO-PIT value for the ith observation is defined as\n\nP(y_i^* \\le y_i \\mid y_{-i}) = \\int_{-\\infty}^{y_i} p(y_i^* \\mid y_{-i}) \\mathrm{d} y_i^*\n\nThe LOO posterior predictions and the corresponding observations should have similar distributions, so if conditional predictive distributions are well-calibrated, then all LOO-PIT values should be approximately uniformly distributed on [0, 1].[Gabry2019]\n\n[Gabry2019]: Gabry, J., Simpson, D., Vehtari, A., Betancourt, M. & Gelman, A. Visualization in Bayesian Workflow. J. R. Stat. Soc. Ser. A Stat. Soc. 182, 389–402 (2019). doi: 10.1111/rssa.12378 arXiv: 1709.01449\n\nExamples\n\nCalculate LOO-PIT values using the observed values themselves as the test quantity.\n\njulia> using ArviZExampleData\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> y = idata.observed_data.obs;\n\njulia> y_pred = PermutedDimsArray(idata.posterior_predictive.obs, (:draw, :chain, :school));\n\njulia> log_like = PermutedDimsArray(idata.log_likelihood.obs, (:draw, :chain, :school));\n\njulia> log_weights = loo(log_like).psis_result.log_weights;\n\njulia> loo_pit(y, y_pred, log_weights)\n╭───────────────────────────────╮\n│ 8-element DimArray{Float64,1} │\n├───────────────────────────────┴──────────────────────────────────────── dims ┐\n ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n└──────────────────────────────────────────────────────────────────────────────┘\n \"Choate\" 0.943511\n \"Deerfield\" 0.63797\n \"Phillips Andover\" 0.316697\n \"Phillips Exeter\" 0.582252\n \"Hotchkiss\" 0.295321\n \"Lawrenceville\" 0.403318\n \"St. Paul's\" 0.902508\n \"Mt. Hermon\" 0.655275\n\nCalculate LOO-PIT values using the square of the difference between each observation and mu as the test quantity.\n\njulia> using Statistics\n\njulia> mu = idata.posterior.mu;\n\njulia> T = y .- median(mu);\n\njulia> T_pred = y_pred .- mu;\n\njulia> loo_pit(T .^ 2, T_pred .^ 2, log_weights)\n╭───────────────────────────────╮\n│ 8-element DimArray{Float64,1} │\n├───────────────────────────────┴──────────────────────────────────────── dims ┐\n ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n└──────────────────────────────────────────────────────────────────────────────┘\n \"Choate\" 0.873577\n \"Deerfield\" 0.243686\n \"Phillips Andover\" 0.357563\n \"Phillips Exeter\" 0.149908\n \"Hotchkiss\" 0.435094\n \"Lawrenceville\" 0.220627\n \"St. Paul's\" 0.775086\n \"Mt. Hermon\" 0.296706\n\n\n\n\n\nloo_pit(idata::InferenceData, log_weights; kwargs...) -> DimArray\n\nCompute LOO-PIT values using existing normalized log LOO importance weights.\n\nKeywords\n\ny_name: Name of observed data variable in idata.observed_data. If not provided, then the only observed data variable is used.\ny_pred_name: Name of posterior predictive variable in idata.posterior_predictive. If not provided, then y_name is used.\nkwargs: Remaining keywords are forwarded to the base method of loo_pit.\n\nExamples\n\nCalculate LOO-PIT values using already computed log weights.\n\njulia> using ArviZExampleData, PosteriorStats\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> loo_result = loo(idata; var_name=:obs);\n\njulia> loo_pit(idata, loo_result.psis_result.log_weights; y_name=:obs)\n╭───────────────────────────────────────────╮\n│ 8-element DimArray{Float64,1} loo_pit_obs │\n├───────────────────────────────────────────┴──────────────────────────── dims ┐\n ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n└──────────────────────────────────────────────────────────────────────────────┘\n \"Choate\" 0.943511\n \"Deerfield\" 0.63797\n \"Phillips Andover\" 0.316697\n \"Phillips Exeter\" 0.582252\n \"Hotchkiss\" 0.295321\n \"Lawrenceville\" 0.403318\n \"St. Paul's\" 0.902508\n \"Mt. Hermon\" 0.655275\n\n\n\n\n\nloo_pit(idata::InferenceData; kwargs...) -> DimArray\n\nCompute LOO-PIT from groups in idata using PSIS-LOO.\n\nKeywords\n\ny_name: Name of observed data variable in idata.observed_data. 
If not provided, then the only observed data variable is used.\ny_pred_name: Name of posterior predictive variable in idata.posterior_predictive. If not provided, then y_name is used.\nlog_likelihood_name: Name of log-likelihood variable in idata.log_likelihood. If not provided, then y_name is used if idata has a log_likelihood group, otherwise the only variable is used.\nreff::Union{Real,AbstractArray{<:Real}}: The relative effective sample size(s) of the likelihood values. If an array, it must have the same data dimensions as the corresponding log-likelihood variable. If not provided, then this is estimated using ess.\nkwargs: Remaining keywords are forwarded to the base method of loo_pit.\n\nExamples\n\nCalculate LOO-PIT values using the observed values themselves as the test quantity.\n\njulia> using ArviZExampleData, PosteriorStats\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> loo_pit(idata; y_name=:obs)\n╭───────────────────────────────────────────╮\n│ 8-element DimArray{Float64,1} loo_pit_obs │\n├───────────────────────────────────────────┴──────────────────────────── dims ┐\n ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n└──────────────────────────────────────────────────────────────────────────────┘\n \"Choate\" 0.943511\n \"Deerfield\" 0.63797\n \"Phillips Andover\" 0.316697\n \"Phillips Exeter\" 0.582252\n \"Hotchkiss\" 0.295321\n \"Lawrenceville\" 0.403318\n \"St. Paul's\" 0.902508\n \"Mt. Hermon\" 0.655275\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#Utilities","page":"Stats","title":"Utilities","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"PosteriorStats.smooth_data","category":"page"},{"location":"api/stats/#PosteriorStats.smooth_data","page":"Stats","title":"PosteriorStats.smooth_data","text":"smooth_data(y; dims=:, interp_method=CubicSpline, offset_frac=0.01)\n\nSmooth y along dims using interp_method.\n\ninterp_method is a 2-argument callable that takes the arguments y and x and returns a DataInterpolations.jl interpolation method, defaulting to a cubic spline interpolator.\n\noffset_frac is the fraction of the length of y to use as an offset when interpolating.
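\n\nExamples\n\nA hypothetical sketch (not part of the original docstring), smoothing a short series with the default cubic spline interpolator:\n\njulia> using PosteriorStats\n\njulia> y = [1.0, 2.0, 2.0, 3.0, 5.0, 8.0];\n\njulia> y_smooth = PosteriorStats.smooth_data(y);\n\njulia> size(y_smooth) == size(y)  # smoothing preserves the shape of y\ntrue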
\n\n\n\n\n\n","category":"function"},{"location":"api/dataset/#dataset-api","page":"Dataset","title":"Dataset","text":"","category":"section"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"Pages = [\"dataset.md\"]","category":"page"},{"location":"api/dataset/#Type-definition","page":"Dataset","title":"Type definition","text":"","category":"section"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"Dataset","category":"page"},{"location":"api/dataset/#InferenceObjects.Dataset","page":"Dataset","title":"InferenceObjects.Dataset","text":"Dataset{K,T,N,L} <: DimensionalData.AbstractDimStack{K,T,N,L}\n\nContainer of dimensional arrays sharing some dimensions.\n\nThis type is a DimensionalData.AbstractDimStack that implements the same interface as DimensionalData.DimStack and has identical usage.\n\nWhen a Dataset is passed to Python, it is converted to an xarray.Dataset without copying the data. That is, the Python object shares the same memory as the Julia object. However, if an xarray.Dataset is passed to Julia, its data must be copied.\n\nConstructors\n\nDataset(data::DimensionalData.AbstractDimArray...)\nDataset(data::Tuple{Vararg{<:DimensionalData.AbstractDimArray}})\nDataset(data::NamedTuple{Keys,Vararg{<:DimensionalData.AbstractDimArray}})\nDataset(\n data::NamedTuple,\n dims::Tuple{Vararg{DimensionalData.Dimension}};\n metadata=DimensionalData.NoMetadata(),\n)\n\nIn most cases, use convert_to_dataset to create a Dataset instead of directly using a constructor.\n\n\n\n\n\n","category":"type"},{"location":"api/dataset/#General-conversion","page":"Dataset","title":"General conversion","text":"","category":"section"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"convert_to_dataset\nnamedtuple_to_dataset","category":"page"},{"location":"api/dataset/#InferenceObjects.convert_to_dataset","page":"Dataset","title":"InferenceObjects.convert_to_dataset","text":"convert_to_dataset(obj; group = :posterior, kwargs...) -> Dataset\n\nConvert a supported object to a Dataset.\n\nIn most cases, this function calls convert_to_inference_data and returns the corresponding group.\n\n\n\n\n\n","category":"function"},{"location":"api/dataset/#InferenceObjects.namedtuple_to_dataset","page":"Dataset","title":"InferenceObjects.namedtuple_to_dataset","text":"namedtuple_to_dataset(data; kwargs...) -> Dataset\n\nConvert NamedTuple mapping variable names to arrays to a Dataset.\n\nAny non-array values will be converted to a 0-dimensional array.\n\nKeywords\n\nattrs::AbstractDict{<:AbstractString}: a collection of metadata to attach to the dataset, in addition to defaults. Values should be JSON serializable.\nlibrary::Union{String,Module}: library used for performing inference. Will be attached to the attrs metadata.\ndims: a collection mapping variable names to collections of objects containing dimension names. Acceptable such objects are:\nSymbol: dimension name\nType{<:DimensionalData.Dimension}: dimension type\nDimensionalData.Dimension: dimension, potentially with indices\nNothing: no dimension name provided, dimension name is automatically generated\ncoords: a collection indexable by dimension name specifying the indices of the given dimension. If indices for a dimension in dims are provided, they are used even if the dimension contains its own indices. If a dimension is missing, its indices are automatically generated.\n\n\n\n\n\n","category":"function"},{"location":"api/dataset/#DimensionalData","page":"Dataset","title":"DimensionalData","text":"","category":"section"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"As a DimensionalData.AbstractDimStack, Dataset also implements the AbstractDimStack API and can be used like a DimStack. See DimensionalData's documentation for example usage.","category":"page"},{"location":"api/dataset/#Tables-interface","page":"Dataset","title":"Tables interface","text":"","category":"section"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"Dataset implements the Tables interface. This allows Datasets to be used as sources for any function that can accept a table. 
For example, it's straightforward to:","category":"page"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"write to CSV with CSV.jl\nflatten to a DataFrame with DataFrames.jl\nplot with StatsPlots.jl\nplot with AlgebraOfGraphics.jl","category":"page"},
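{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"For instance, a hypothetical sketch (not part of the original docs) of flattening a Dataset to a DataFrame via the Tables interface:\n\njulia> using DataFrames, InferenceObjects\n\njulia> ds = namedtuple_to_dataset((; x=randn(100, 4)));\n\njulia> df = DataFrame(ds);  # any Tables.jl sink works the same way\n\njulia> df isa DataFrame\ntrue","category":"page"},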
{"location":"#arvizjl","page":"Home","title":"ArviZ.jl: Exploratory analysis of Bayesian models in Julia","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"ArviZ.jl is a Julia meta-package for exploratory analysis of Bayesian models. It is part of the ArviZ project, which also includes a related Python package.","category":"page"},{"location":"","page":"Home","title":"Home","text":"ArviZ consists of and re-exports the following subpackages, along with extensions integrating them with InferenceObjects:","category":"page"},{"location":"","page":"Home","title":"Home","text":"InferenceObjects.jl: a base package implementing the InferenceData type with utilities for building, saving, and working with it\nMCMCDiagnosticTools.jl: diagnostics for Markov Chain Monte Carlo methods\nPSIS.jl: Pareto-smoothed importance sampling\nPosteriorStats.jl: common statistical analyses for the Bayesian workflow","category":"page"},{"location":"","page":"Home","title":"Home","text":"Additional functionality can be loaded with the following packages:","category":"page"},{"location":"","page":"Home","title":"Home","text":"ArviZExampleData.jl: example InferenceData objects, useful for demonstration and testing\nArviZPythonPlots.jl: Python ArviZ's library of plotting functions for Julia types","category":"page"},{"location":"","page":"Home","title":"Home","text":"See the navigation bar for more useful packages.","category":"page"},{"location":"#installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"From the Julia REPL, type ] to enter the Pkg REPL mode and run","category":"page"},{"location":"","page":"Home","title":"Home","text":"pkg> add ArviZ","category":"page"},{"location":"#usage","page":"Home","title":"Usage","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"See the Quickstart for example usage and the API Overview for a description of functions.","category":"page"},{"location":"#extendingarviz","page":"Home","title":"Extending ArviZ.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"To use a custom data type with ArviZ.jl, simply overload InferenceObjects.convert_to_inference_data to convert your input(s) to an InferenceObjects.InferenceData.","category":"page"},{"location":"working_with_inference_data/#working-with-inference-data","page":"Working with InferenceData","title":"Working with InferenceData","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"using ArviZ, ArviZExampleData, DimensionalData, Statistics","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Here we present a collection of common manipulations you can use while working with InferenceData.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Let's load one of ArviZ's example datasets. posterior, posterior_predictive, etc. are the groups stored in idata, and they are stored as Datasets. In this HTML view, you can click a group name to expand a summary of the group.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata = load_example_data(\"centered_eight\")","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"info: Info\nDatasets are DimensionalData.AbstractDimStacks and can be used identically. The variables a Dataset contains are called \"layers\", and dimensions of the same name that appear in more than one layer within a Dataset must have the same indices.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"InferenceData behaves like a NamedTuple and can be used similarly. Note that unlike a NamedTuple, the groups always appear in a specific order.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"length(idata) # number of groups","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"keys(idata) # group names","category":"page"},{"location":"working_with_inference_data/#Get-the-dataset-corresponding-to-a-single-group","page":"Working with InferenceData","title":"Get the dataset corresponding to a single group","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Group datasets can be accessed either as properties or as indexed items.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"post = idata.posterior","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"post is the dataset itself, so this is a non-allocating operation.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata[:posterior] === post","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"InferenceData supports a more advanced indexing syntax, which we'll see later.","category":"page"},{"location":"working_with_inference_data/#Getting-a-new-InferenceData-with-a-subset-of-groups","page":"Working with InferenceData","title":"Getting a new InferenceData with a subset of groups","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"We can index by a collection of group names to get a new InferenceData with just those groups. 
This is also non-allocating.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata_sub = idata[(:posterior, :posterior_predictive)]","category":"page"},{"location":"working_with_inference_data/#Adding-groups-to-an-InferenceData","page":"Working with InferenceData","title":"Adding groups to an InferenceData","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"InferenceData is immutable, so to add or replace groups we use merge to create a new object.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"merge(idata_sub, idata[(:observed_data, :prior)])","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"We can also use Base.setindex to add or replace a single group out-of-place.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Base.setindex(idata_sub, idata.prior, :prior)","category":"page"},{"location":"working_with_inference_data/#Add-a-new-variable","page":"Working with InferenceData","title":"Add a new variable","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Dataset is also immutable. So while the values within the underlying data arrays can be mutated, layers cannot be added or removed from Datasets, and groups cannot be added/removed from InferenceData.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Instead, we also do this out-of-place using merge.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"merge(post, (log_tau=log.(post[:tau]),))","category":"page"},{"location":"working_with_inference_data/#Obtain-an-array-for-a-given-parameter","page":"Working with InferenceData","title":"Obtain an array for a given parameter","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Let’s say we want to get the values for tau as an array. 
Parameters can be accessed with either property or index syntax.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"post.tau","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"post[:tau] === post.tau","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"To remove the dimensions, just use parent to retrieve the underlying array.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"parent(post.tau)","category":"page"},{"location":"working_with_inference_data/#Get-the-dimension-lengths","page":"Working with InferenceData","title":"Get the dimension lengths","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Let’s check how many schools (the groups in our hierarchical model) there are.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"size(idata.observed_data, :school)","category":"page"},{"location":"working_with_inference_data/#Get-coordinate/index-values","page":"Working with InferenceData","title":"Get coordinate/index values","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"What are the names of the schools in our hierarchical model? You can access them from the coordinate name school in this case.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"DimensionalData.index(idata.observed_data, :school)","category":"page"},{"location":"working_with_inference_data/#Get-a-subset-of-chains","page":"Working with InferenceData","title":"Get a subset of chains","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Let’s keep only chain 0 here. For the subset to take effect on all relevant InferenceData groups – posterior, sample_stats, log_likelihood, and posterior_predictive – we will index InferenceData instead of Dataset.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Here we use DimensionalData's At selector. Its other selectors are also supported.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata[chain=At(0)]","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Note that in this case, prior only has a chain of 0. 
If it also had the other chains, we could have passed chain=At([0, 2]) to subset by chains 0 and 2.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"warning: Warning\nIf we used idata[chain=[0, 2]] without the At selector, this is equivalent to idata[chain=DimensionalData.index(idata.posterior, :chain)[0, 2]], that is, [0, 2] indexes an array of dimension indices, which here would error. But if we had requested idata[chain=[1, 2]] we would not hit an error, but we would index the wrong chains. So it's important to always use a selector to index by values of dimension indices.","category":"page"},{"location":"working_with_inference_data/#Remove-the-first-n-draws-(burn-in)","page":"Working with InferenceData","title":"Remove the first n draws (burn-in)","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Let’s say we want to remove the first 100 draws from all the chains and all InferenceData groups with draws. To do this we use the .. syntax from IntervalSets.jl, which is exported by DimensionalData.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata[draw=100 .. Inf]","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"If you check the object you will see that the groups posterior, posterior_predictive, prior, and sample_stats have 400 draws compared to idata, which has 500. The group observed_data has not been affected because it does not have the draw dimension.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Alternatively, you can change a subset of groups by combining indexing styles with merge. Here we use this to build a new InferenceData where we have discarded the first 100 draws only from posterior.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"merge(idata, idata[(:posterior,), draw=100 .. Inf])","category":"page"},{"location":"working_with_inference_data/#Compute-posterior-mean-values-along-draw-and-chain-dimensions","page":"Working with InferenceData","title":"Compute posterior mean values along draw and chain dimensions","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"To compute the mean value of the posterior samples, do the following:","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"mean(post)","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"This computes the mean along all dimensions, discarding all dimensions and returning the result as a NamedTuple. 
This may be what you wanted for mu and tau, which have only two dimensions (chain and draw), but maybe not what you expected for theta, which has one more dimension, school.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"You can specify along which dimension you want to compute the mean (or other functions), which instead returns a Dataset.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"mean(post; dims=(:chain, :draw))","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"The singleton dimensions of chain and draw now contain meaningless indices, so you may want to discard them, which you can do with dropdims.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"dropdims(mean(post; dims=(:chain, :draw)); dims=(:chain, :draw))","category":"page"},{"location":"working_with_inference_data/#Renaming-a-dimension","page":"Working with InferenceData","title":"Renaming a dimension","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"We can rename a dimension in a Dataset using DimensionalData's set method:","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"theta_bis = set(post.theta; school=:school_bis)","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"We can use this, for example, to broadcast functions across multiple arrays, automatically matching up shared dimensions, using DimensionalData.broadcast_dims.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"theta_school_diff = broadcast_dims(-, post.theta, theta_bis)","category":"page"},{"location":"working_with_inference_data/#Compute-and-store-posterior-pushforward-quantities","page":"Working with InferenceData","title":"Compute and store posterior pushforward quantities","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"We use “posterior pushforward quantities” to refer to quantities that are not variables in the posterior but deterministic computations using posterior variables.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"You can compute these pushforward operations and store them as a new variable in a copy of the posterior group.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Here we'll create a new InferenceData with theta_school_diff in the posterior:","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata_new = Base.setindex(idata, merge(post, (; theta_school_diff)), :posterior)","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with 
InferenceData","text":"Once you have these pushforward quantities in an InferenceData, you’ll then be able to plot them with ArviZ functions, calculate stats and diagnostics on them, or save and share the InferenceData object with the pushforward quantities included.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Here we compute the mcse of theta_school_diff:","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"mcse(idata_new.posterior).theta_school_diff","category":"page"},{"location":"working_with_inference_data/#Advanced-subsetting","page":"Working with InferenceData","title":"Advanced subsetting","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"To select the value corresponding to the difference between the Choate and Deerfield schools do:","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"school_idx = [\"Choate\", \"Hotchkiss\", \"Mt. Hermon\"]\nschool_bis_idx = [\"Deerfield\", \"Choate\", \"Lawrenceville\"]\ntheta_school_diff[school=At(school_idx), school_bis=At(school_bis_idx)]","category":"page"},{"location":"working_with_inference_data/#Add-new-chains-using-cat","page":"Working with InferenceData","title":"Add new chains using cat","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Suppose after checking the mcse and realizing you need more samples, you rerun the model with two chains and obtain an idata_rerun object.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata_rerun = InferenceData(; posterior=set(post[chain=At([0, 1])]; chain=[4, 5]))","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"You can combine the two using cat.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"cat(idata[[:posterior]], idata_rerun; dims=:chain)","category":"page"}] +[{"location":"api/inference_data/#inferencedata-api","page":"InferenceData","title":"InferenceData","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"Pages = [\"inference_data.md\"]","category":"page"},{"location":"api/inference_data/#Type-definition","page":"InferenceData","title":"Type definition","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"InferenceData","category":"page"},{"location":"api/inference_data/#InferenceObjects.InferenceData","page":"InferenceData","title":"InferenceObjects.InferenceData","text":"InferenceData{group_names,group_types}\n\nContainer for inference data storage using DimensionalData.\n\nThis object implements the InferenceData schema.\n\nInternally, groups are stored in a NamedTuple, which can be accessed using parent(::InferenceData).\n\nConstructors\n\nInferenceData(groups::NamedTuple)\nInferenceData(; groups...)\n\nConstruct an inference data from either a NamedTuple or keyword arguments of 
\n\n\n\n\n\n","category":"type"},{"location":"api/inference_data/#Property-interface","page":"InferenceData","title":"Property interface","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"getproperty\npropertynames","category":"page"},{"location":"api/inference_data/#Base.getproperty","page":"InferenceData","title":"Base.getproperty","text":"getproperty(data::InferenceData, name::Symbol) -> Dataset\n\nGet group with the specified name.\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#Base.propertynames","page":"InferenceData","title":"Base.propertynames","text":"propertynames(data::InferenceData) -> Tuple{Symbol}\n\nGet the names of the groups.\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#Indexing-interface","page":"InferenceData","title":"Indexing interface","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"getindex\nBase.setindex","category":"page"},{"location":"api/inference_data/#Base.getindex","page":"InferenceData","title":"Base.getindex","text":"Base.getindex(data::InferenceData, groups::Symbol; coords...) -> Dataset\nBase.getindex(data::InferenceData, groups; coords...) -> InferenceData\n\nReturn a new InferenceData containing the specified groups sliced to the specified coords.\n\ncoords specifies a dimension name mapping to an index, a DimensionalData.Selector, or an IntervalSets.AbstractInterval.\n\nIf one or more groups lack the specified dimension, a warning is raised but can be ignored. All groups that contain the dimension must also contain the specified indices, or an exception will be raised.\n\nExamples\n\nSelect data from all groups for just the specified id values.\n\njulia> using InferenceObjects, DimensionalData\n\njulia> idata = from_namedtuple(\n (θ=randn(4, 100, 4), τ=randn(4, 100));\n prior=(θ=randn(4, 100, 4), τ=randn(4, 100)),\n observed_data=(y=randn(4),),\n dims=(θ=[:id], y=[:id]),\n coords=(id=[\"a\", \"b\", \"c\", \"d\"],),\n )\nInferenceData with groups:\n > posterior\n > prior\n > observed_data\n\njulia> idata.posterior\nDataset with dimensions:\n Dim{:chain} Sampled 1:4 ForwardOrdered Regular Points,\n Dim{:draw} Sampled 1:100 ForwardOrdered Regular Points,\n Dim{:id} Categorical String[a, b, c, d] ForwardOrdered\nand 2 layers:\n :θ Float64 dims: Dim{:chain}, Dim{:draw}, Dim{:id} (4×100×4)\n :τ Float64 dims: Dim{:chain}, Dim{:draw} (4×100)\n\nwith metadata Dict{String, Any} with 1 entry:\n \"created_at\" => \"2022-08-11T11:15:21.4\"\n\njulia> idata_sel = idata[id=At([\"a\", \"b\"])]\nInferenceData with groups:\n > posterior\n > prior\n > observed_data\n\njulia> idata_sel.posterior\nDataset with dimensions:\n Dim{:chain} Sampled 1:4 ForwardOrdered Regular Points,\n Dim{:draw} Sampled 1:100 ForwardOrdered Regular Points,\n Dim{:id} Categorical String[a, b] ForwardOrdered\nand 2 layers:\n :θ Float64 dims: Dim{:chain}, Dim{:draw}, Dim{:id} (4×100×2)\n :τ Float64 dims: Dim{:chain}, Dim{:draw} (4×100)\n\nwith metadata Dict{String, Any} with 1 entry:\n \"created_at\" => \"2022-08-11T11:15:21.4\"\n\nSelect data from just the observed_data group, returning a Dataset if the indices index more than one element from any of the variables:\n\njulia> idata[:observed_data, id=At([\"a\"])]\nDataset with dimensions:\n Dim{:id} Categorical String[a] 
ForwardOrdered\nand 1 layer:\n :y Float64 dims: Dim{:id} (1)\n\nwith metadata Dict{String, Any} with 1 entry:\n \"created_at\" => \"2022-08-11T11:19:25.982\"\n\nNote that if a single index is provided, the behavior is still to slice so that the dimension is preserved.\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#Base.setindex","page":"InferenceData","title":"Base.setindex","text":"Base.setindex(data::InferenceData, group::Dataset, name::Symbol) -> InferenceData\n\nCreate a new InferenceData containing the group with the specified name.\n\nIf a group with name is already in data, it is replaced.\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#Iteration-interface","page":"InferenceData","title":"Iteration interface","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"InferenceData also implements the same iteration interface as its underlying NamedTuple. That is, iterating over an InferenceData iterates over its groups.","category":"page"},{"location":"api/inference_data/#General-conversion","page":"InferenceData","title":"General conversion","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"convert_to_inference_data\nfrom_dict\nfrom_namedtuple","category":"page"},{"location":"api/inference_data/#InferenceObjects.convert_to_inference_data","page":"InferenceData","title":"InferenceObjects.convert_to_inference_data","text":"convert_to_inference_data(obj; group, kwargs...) -> InferenceData\n\nConvert a supported object to an InferenceData object.\n\nIf obj converts to a single dataset, group specifies which dataset in the resulting InferenceData that is.\n\nSee convert_to_dataset\n\nArguments\n\nobj can be many objects. Basic supported types are:\nInferenceData: return unchanged\nDataset/DimensionalData.AbstractDimStack: add to InferenceData as the only group\nNamedTuple/AbstractDict: create a Dataset as the only group\nAbstractArray{<:Real}: create a Dataset as the only group, given an arbitrary name, if the name is not set\n\nMore specific types may be documented separately.\n\nKeywords\n\ngroup::Symbol = :posterior: If obj converts to a single dataset, assign the resulting dataset to this group.\ndims: a collection mapping variable names to collections of objects containing dimension names. Acceptable such objects are:\nSymbol: dimension name\nType{<:DimensionalData.Dimension}: dimension type\nDimensionalData.Dimension: dimension, potentially with indices\nNothing: no dimension name provided, dimension name is automatically generated\ncoords: a collection indexable by dimension name specifying the indices of the given dimension. If indices for a dimension in dims are provided, they are used even if the dimension contains its own indices. If a dimension is missing, its indices are automatically generated.\nkwargs: remaining keywords forwarded to converter functions
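\n\nExamples\n\nA hypothetical example (not part of the original docstring), converting a NamedTuple of draws with shape (ndraws, nchains) into the posterior group:\n\njulia> using InferenceObjects\n\njulia> convert_to_inference_data((; tau=randn(100, 4)))\nInferenceData with groups:\n > posterior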
\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#InferenceObjects.from_dict","page":"InferenceData","title":"InferenceObjects.from_dict","text":"from_dict(posterior::AbstractDict; kwargs...) -> InferenceData\n\nConvert a Dict to an InferenceData.\n\nArguments\n\nposterior: The data to be converted. Its keys must be Symbol or AbstractString, and its values must be arrays.\n\nKeywords\n\nposterior_predictive::Any=nothing: Draws from the posterior predictive distribution\nsample_stats::Any=nothing: Statistics of the posterior sampling process\npredictions::Any=nothing: Out-of-sample predictions for the posterior.\nprior::Dict=nothing: Draws from the prior\nprior_predictive::Any=nothing: Draws from the prior predictive distribution\nsample_stats_prior::Any=nothing: Statistics of the prior sampling process\nobserved_data::NamedTuple: Observed data on which the posterior is conditional. It should only contain data which is modeled as a random variable. Keys are parameter names, and values are the corresponding data.\nconstant_data::NamedTuple: Model constants, data included in the model which is not modeled as a random variable. Keys are parameter names, and values are the corresponding data.\npredictions_constant_data::NamedTuple: Constants relevant to the model predictions (i.e. new x values in a linear regression).\nlog_likelihood: Pointwise log-likelihood for the data. It is recommended to use this argument as a NamedTuple whose keys are observed variable names and whose values are log likelihood arrays.\nlibrary: Name of library that generated the draws\ncoords: Map from named dimension to named indices\ndims: Map from variable name to names of its dimensions\n\nReturns\n\nInferenceData: The data with groups corresponding to the provided data\n\nExamples\n\nusing InferenceObjects\nnchains = 2\nndraws = 100\n\ndata = Dict(\n :x => rand(ndraws, nchains),\n :y => randn(2, ndraws, nchains),\n :z => randn(3, 2, ndraws, nchains),\n)\nidata = from_dict(data)\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#InferenceObjects.from_namedtuple","page":"InferenceData","title":"InferenceObjects.from_namedtuple","text":"from_namedtuple(posterior::NamedTuple; kwargs...) -> InferenceData\nfrom_namedtuple(posterior::Vector{Vector{<:NamedTuple}}; kwargs...) -> InferenceData\nfrom_namedtuple(\n posterior::NamedTuple,\n sample_stats::Any,\n posterior_predictive::Any,\n predictions::Any,\n log_likelihood::Any;\n kwargs...\n) -> InferenceData\n\nConvert a NamedTuple or container of NamedTuples to an InferenceData.\n\nIf containers are passed, they are flattened into a single NamedTuple with array elements whose first dimensions correspond to the dimensions of the containers.\n\nArguments\n\nposterior: The data to be converted. It may be of the following types:\n::NamedTuple: The keys are the variable names and the values are arrays with dimensions (ndraws, nchains[, sizes...]).\n::Vector{Vector{<:NamedTuple}}: A vector of length nchains whose elements have length ndraws.\n\nKeywords\n\nposterior_predictive::Any=nothing: Draws from the posterior predictive distribution\nsample_stats::Any=nothing: Statistics of the posterior sampling process\npredictions::Any=nothing: Out-of-sample predictions for the posterior.\nprior=nothing: Draws from the prior. Accepts the same types as posterior.\nprior_predictive::Any=nothing: Draws from the prior predictive distribution\nsample_stats_prior::Any=nothing: Statistics of the prior sampling process\nobserved_data::NamedTuple: Observed data on which the posterior is conditional. It should only contain data which is modeled as a random variable. Keys are parameter names, and values are the corresponding data.\nconstant_data::NamedTuple: Model constants, data included in the model which is not modeled as a random variable. 
Keys are parameter names, and values are the corresponding data.\npredictions_constant_data::NamedTuple: Constants relevant to the model predictions (i.e. new x values in a linear regression).\nlog_likelihood: Pointwise log-likelihood for the data. It is recommended to use this argument as a NamedTuple whose keys are observed variable names and whose values are log likelihood arrays.\nlibrary: Name of library that generated the draws\ncoords: Map from named dimension to named indices\ndims: Map from variable name to names of its dimensions\n\nReturns\n\nInferenceData: The data with groups corresponding to the provided data\n\nnote: Note\nIf a NamedTuple is provided for observed_data, constant_data, or predictions_constant_data, any non-array values (e.g. integers) are converted to 0-dimensional arrays.\n\nExamples\n\nusing InferenceObjects\nnchains = 2\nndraws = 100\n\ndata1 = (\n x=rand(ndraws, nchains), y=randn(ndraws, nchains, 2), z=randn(ndraws, nchains, 3, 2)\n)\nidata1 = from_namedtuple(data1)\n\ndata2 = [[(x=rand(), y=randn(2), z=randn(3, 2)) for _ in 1:ndraws] for _ in 1:nchains];\nidata2 = from_namedtuple(data2)\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#General-functions","page":"InferenceData","title":"General functions","text":"","category":"section"},{"location":"api/inference_data/","page":"InferenceData","title":"InferenceData","text":"cat\nmerge","category":"page"},{"location":"api/inference_data/#Base.cat","page":"InferenceData","title":"Base.cat","text":"cat(data::InferenceData...; [groups=keys(data[1]),] dims) -> InferenceData\n\nConcatenate InferenceData objects along the specified dimension dims.\n\nOnly the groups in groups are concatenated. Remaining groups are merged into the new InferenceData object.\n\nExamples\n\nHere is how we can concatenate all groups of two InferenceData objects along the existing chain dimension:\n\njulia> coords = (; a_dim=[\"x\", \"y\", \"z\"]);\n\njulia> dims = (; a=[:a_dim]);\n\njulia> data = Dict(:a => randn(100, 4, 3), :b => randn(100, 4));\n\njulia> idata = from_dict(data; coords=coords, dims=dims)\nInferenceData with groups:\n > posterior\n\njulia> idata_cat1 = cat(idata, idata; dims=:chain)\nInferenceData with groups:\n > posterior\n\njulia> idata_cat1.posterior\n╭─────────────────╮\n│ 100×8×3 Dataset │\n├─────────────────┴──────────────────────────────────── dims ┐\n ↓ draw ,\n → chain,\n ↗ a_dim Categorical{String} [\"x\", \"y\", \"z\"] ForwardOrdered\n├──────────────────────────────────────────────────── layers ┤\n :a eltype: Float64 dims: draw, chain, a_dim size: 100×8×3\n :b eltype: Float64 dims: draw, chain size: 100×8\n├────────────────────────────────────────────────── metadata ┤\n Dict{String, Any} with 1 entry:\n \"created_at\" => \"2024-03-11T14:10:48.434\"\n\nAlternatively, we can concatenate along a new run dimension, which will be created.\n\njulia> idata_cat2 = cat(idata, idata; dims=:run)\nInferenceData with groups:\n > posterior\n\njulia> idata_cat2.posterior\n╭───────────────────╮\n│ 100×4×3×2 Dataset │\n├───────────────────┴─────────────────────────────────── dims ┐\n ↓ draw ,\n → chain,\n ↗ a_dim Categorical{String} [\"x\", \"y\", \"z\"] ForwardOrdered,\n ⬔ run\n├─────────────────────────────────────────────────────────────┴ layers ┐\n :a eltype: Float64 dims: draw, chain, a_dim, run size: 100×4×3×2\n :b eltype: Float64 dims: draw, chain, run size: 100×4×2\n├──────────────────────────────────────────────────────────── metadata ┤\n Dict{String, Any} with 1 entry:\n \"created_at\" => 
\"2024-03-11T14:10:48.434\"\n\nWe can also concatenate only a subset of groups and merge the rest, which is useful when some groups are present only in some of the InferenceData objects or will be identical in all of them:\n\njulia> observed_data = Dict(:y => randn(10));\n\njulia> idata2 = from_dict(data; observed_data=observed_data, coords=coords, dims=dims)\nInferenceData with groups:\n > posterior\n > observed_data\n\njulia> idata_cat3 = cat(idata, idata2; groups=(:posterior,), dims=:run)\nInferenceData with groups:\n > posterior\n > observed_data\n\njulia> idata_cat3.posterior\n╭───────────────────╮\n│ 100×4×3×2 Dataset │\n├───────────────────┴─────────────────────────────────── dims ┐\n ↓ draw ,\n → chain,\n ↗ a_dim Categorical{String} [\"x\", \"y\", \"z\"] ForwardOrdered,\n ⬔ run\n├─────────────────────────────────────────────────────────────┴ layers ┐\n :a eltype: Float64 dims: draw, chain, a_dim, run size: 100×4×3×2\n :b eltype: Float64 dims: draw, chain, run size: 100×4×2\n├──────────────────────────────────────────────────────────── metadata ┤\n Dict{String, Any} with 1 entry:\n \"created_at\" => \"2024-03-11T14:10:48.434\"\n\njulia> idata_cat3.observed_data\n╭────────────────────╮\n│ 10-element Dataset │\n├────────────── dims ┤\n ↓ y_dim_1\n├────────────────────┴─────────────── layers ┐\n :y eltype: Float64 dims: y_dim_1 size: 10\n├────────────────────────────────────────────┴ metadata ┐\n Dict{String, Any} with 1 entry:\n \"created_at\" => \"2024-03-11T14:10:53.539\"\n\n\n\n\n\n","category":"function"},{"location":"api/inference_data/#Base.merge","page":"InferenceData","title":"Base.merge","text":"merge(data::InferenceData...) -> InferenceData\n\nMerge InferenceData objects.\n\nThe result contains all groups in data and others. If a group appears more than once, the one that occurs last is kept.\n\nSee also: cat\n\nExamples\n\nHere we merge an InferenceData containing only a posterior group with one containing only a prior group to create a new one containing both groups.\n\njulia> idata1 = from_dict(Dict(:a => randn(100, 4, 3), :b => randn(100, 4)))\nInferenceData with groups:\n > posterior\n\njulia> idata2 = from_dict(; prior=Dict(:a => randn(100, 1, 3), :c => randn(100, 1)))\nInferenceData with groups:\n > prior\n\njulia> idata_merged = merge(idata1, idata2)\nInferenceData with groups:\n > posterior\n > prior\n\n\n\n\n\n","category":"function"},{"location":"quickstart/","page":"Quickstart","title":"Quickstart","text":"\n\n\n

ArviZ Quickstart

Note

This tutorial is adapted from ArviZ's quickstart.

\n\n\n

Setup

Here we add the necessary packages for this notebook and load a few we will use throughout.

\n\n\n\n\n
using ArviZ, ArviZPythonPlots, Distributions, LinearAlgebra, Random, StanSample, Turing
\n\n\n
# ArviZPythonPlots ships with style sheets!\nuse_style(\"arviz-darkgrid\")
\n\n\n\n

Get started with plotting

To plot with ArviZ, we need to load the ArviZPythonPlots package. ArviZ is designed to be used with libraries like Stan, Turing.jl, and Soss.jl but works fine with raw arrays.

\n\n
rng1 = Random.MersenneTwister(37772);
\n\n\n
begin\n    plot_posterior(randn(rng1, 100_000))\n    gcf()\nend
\n\n\n\n

When plotting a collection of arrays, such as the named tuple below, ArviZ will interpret each key as the name of a different random variable. Arrays are expected to have the shape (ndraws, nchains), so each column is treated as an independent series of draws from the variable, called a chain. Below, we have 10 chains of 50 draws each for four different distributions.

\n\n
let\n    s = (50, 10)\n    plot_forest((\n        normal=randn(rng1, s),\n        gumbel=rand(rng1, Gumbel(), s),\n        student_t=rand(rng1, TDist(6), s),\n        exponential=rand(rng1, Exponential(), s),\n    ),)\n    gcf()\nend
\n\n\n\n

Plotting with MCMCChains.jl's Chains objects produced by Turing.jl

ArviZ is designed to work well with high-dimensional, labelled data. Consider the eight schools model, which roughly tries to measure the effectiveness of SAT classes at eight different schools. To show off ArviZ's labelling, I give the schools the names of a different set of eight schools.

This model is small enough to write down, is hierarchical, and uses labelling. Additionally, a centered parameterization causes divergences (which are interesting for illustration).

First we create our data and set some sampling parameters.

\n\n
begin\n    J = 8\n    y = [28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0]\n    σ = [15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0]\n    schools = [\n        \"Choate\",\n        \"Deerfield\",\n        \"Phillips Andover\",\n        \"Phillips Exeter\",\n        \"Hotchkiss\",\n        \"Lawrenceville\",\n        \"St. Paul's\",\n        \"Mt. Hermon\",\n    ]\n    ndraws = 1_000\n    ndraws_warmup = 1_000\n    nchains = 4\nend;
\n\n\n\n

Now we write and run the model using Turing:

\n\n
Turing.@model function model_turing(y, σ, J=length(y))\n    μ ~ Normal(0, 5)\n    τ ~ truncated(Cauchy(0, 5), 0, Inf)\n    θ ~ filldist(Normal(μ, τ), J)\n    for i in 1:J\n        y[i] ~ Normal(θ[i], σ[i])\n    end\nend
\n
model_turing (generic function with 4 methods)
\n\n
rng2 = Random.MersenneTwister(16653);
\n\n\n
begin\n    param_mod_turing = model_turing(y, σ)\n    sampler = NUTS(ndraws_warmup, 0.8)\n\n    turing_chns = Turing.sample(\n        rng2, model_turing(y, σ), sampler, MCMCThreads(), ndraws, nchains\n    )\nend;
\n\n\n\n

Most ArviZ functions work fine with Chains objects from Turing:

\n\n
begin\n    plot_autocorr(turing_chns; var_names=(:μ, :τ))\n    gcf()\nend
\n\n\n\n

Convert to InferenceData

For much more powerful querying, analysis, and plotting, we can use built-in ArviZ utilities to convert Chains objects to multidimensional data structures with named dimensions and indices. Note that Chains itself does not store this dimension information, so we need to provide it ourselves.

ArviZ is built to work with InferenceData, and the more groups it has access to, the more powerful analyses it can perform.

\n\n
idata_turing_post = from_mcmcchains(\n    turing_chns;\n    coords=(; school=schools),\n    dims=NamedTuple(k => (:school,) for k in (:y, :σ, :θ)),\n    library=\"Turing\",\n)
\n
InferenceData
posterior
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :μ eltype: Float64 dims: draw, chain size: 1000×4\n  :τ eltype: Float64 dims: draw, chain size: 1000×4\n  :θ eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 2 entries:\n  \"created_at\" => \"2024-10-07T01:21:45.317\"\n  \"inference_library\" => \"Turing\"\n
sample_stats
╭────────────────╮\n│ 1000×4 Dataset │\n├────────────────┴ dims ┐\n  ↓ draw, → chain\n├─────────────────┴───────────────────────────────────────── layers ┐\n  :energy           eltype: Float64 dims: draw, chain size: 1000×4\n  :n_steps          eltype: Int64 dims: draw, chain size: 1000×4\n  :diverging        eltype: Bool dims: draw, chain size: 1000×4\n  :max_energy_error eltype: Float64 dims: draw, chain size: 1000×4\n  :energy_error     eltype: Float64 dims: draw, chain size: 1000×4\n  :is_accept        eltype: Bool dims: draw, chain size: 1000×4\n  :log_density      eltype: Float64 dims: draw, chain size: 1000×4\n  :tree_depth       eltype: Int64 dims: draw, chain size: 1000×4\n  :step_size        eltype: Float64 dims: draw, chain size: 1000×4\n  :acceptance_rate  eltype: Float64 dims: draw, chain size: 1000×4\n  :lp               eltype: Float64 dims: draw, chain size: 1000×4\n  :step_size_nom    eltype: Float64 dims: draw, chain size: 1000×4\n├───────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 2 entries:\n  \"created_at\" => \"2024-10-07T01:21:45.258\"\n  \"inference_library\" => \"Turing\"\n
\n\n\n

Each group is an ArviZ.Dataset, a DimensionalData.AbstractDimStack that can be used identically to a DimensionalData.DimStack. We can view a summary of the dataset.

\n\n
idata_turing_post.posterior
\n
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────────────── layers ┤\n  :μ eltype: Float64 dims: draw, chain size: 1000×4\n  :τ eltype: Float64 dims: draw, chain size: 1000×4\n  :θ eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 2 entries:\n  \"created_at\"        => \"2024-10-07T01:21:45.317\"\n  \"inference_library\" => \"Turing\"\n
\n\n\n
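Because each group is a DimStack, we can also subset it with DimensionalData's selectors. The snippet below is a minimal sketch, assuming DimensionalData is loaded so that the At selector is available; the name choate_post is ours, chosen for illustration:

using DimensionalData: At

# Label-based selection: keep only the draws for a single school.
# Indexing a Dataset by one of its dimensions returns a new Dataset.
choate_post = idata_turing_post.posterior[school=At("Choate")]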

Here is a plot of the trace. Note the intelligent labels.

\n\n
begin\n    plot_trace(idata_turing_post)\n    gcf()\nend
\n\n\n\n

We can also generate summary stats...

\n\n
summarystats(idata_turing_post)
\n
SummaryStats
                      mean  std  hdi_3%  hdi_97%  mcse_mean  mcse_std  ess_tail  ess_bulk  rhat
 μ                     4.3  3.3   -1.81     10.5       0.11     0.062      1192       845  1.01
 τ                     4.4  3.3    0.673    10.4       0.20     0.12        109       115  1.05
 θ[Choate]             6.6  6.1   -4.01     18.0       0.21     0.19       1627       750  1.01
 θ[Deerfield]          5.0  5.0   -4.81     14.2       0.14     0.14       1952      1277  1.01
 θ[Phillips Andover]   3.7  5.7   -7.07     14.6       0.14     0.16       1979      1429  1.01
 θ[Phillips Exeter]    4.8  5.1   -4.60     14.4       0.14     0.14       2064      1178  1.00
 θ[Hotchkiss]          3.3  4.9   -6.08     12.6       0.15     0.11       1804      1098  1.01
 θ[Lawrenceville]      3.8  5.2   -6.12     13.3       0.13     0.14       1965      1331  1.00
 θ[St. Paul's]         6.6  5.4   -2.68     17.4       0.18     0.14       1842       853  1.01
 θ[Mt. Hermon]         4.9  5.6   -5.71     14.8       0.14     0.19       1794      1393  1.00
\n\n\n

...and examine the energy distribution of the Hamiltonian sampler.

\n\n
begin\n    plot_energy(idata_turing_post)\n    gcf()\nend
\n\n\n\n
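The object returned by summarystats also implements the Tables interface, so it can be materialized as any table type. Here is a quick sketch, assuming DataFrames is available (it is not among the packages loaded at the top of this notebook):

using DataFrames

# Collect the summary statistics into a DataFrame for filtering, sorting, or export.
stats_df = DataFrame(summarystats(idata_turing_post))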

Additional information in Turing.jl

With a few more steps, we can use Turing to compute additional useful groups to add to the InferenceData.

To sample from the prior, one simply calls sample but with the Prior sampler:

\n\n
prior = Turing.sample(rng2, param_mod_turing, Prior(), ndraws);
\n\n\n\n

To draw from the prior and posterior predictive distributions, we can instantiate a "predictive model", i.e. a Turing model with the observations set to missing, and then call predict on the predictive model and the previously drawn samples:

\n\n
begin\n    # Instantiate the predictive model\n    param_mod_predict = model_turing(similar(y, Missing), σ)\n    # and then sample!\n    prior_predictive = Turing.predict(rng2, param_mod_predict, prior)\n    posterior_predictive = Turing.predict(rng2, param_mod_predict, turing_chns)\nend;
\n\n\n\n

And to extract the pointwise log-likelihoods, which are useful if you want to compute metrics such as loo:

\n\n
log_likelihood = let\n    log_likelihood = Turing.pointwise_loglikelihoods(\n        param_mod_turing, MCMCChains.get_sections(turing_chns, :parameters)\n    )\n    # Ensure the ordering of the loglikelihoods matches the ordering of `posterior_predictive`\n    ynames = string.(keys(posterior_predictive))\n    log_likelihood_y = getindex.(Ref(log_likelihood), ynames)\n    (; y=cat(log_likelihood_y...; dims=3))\nend;
\n\n\n\n

This can then be included in the from_mcmcchains call from above:

\n\n
idata_turing = from_mcmcchains(\n    turing_chns;\n    posterior_predictive,\n    log_likelihood,\n    prior,\n    prior_predictive,\n    observed_data=(; y),\n    coords=(; school=schools),\n    dims=NamedTuple(k => (:school,) for k in (:y, :σ, :θ)),\n    library=Turing,\n)
\n
InferenceData
posterior
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :μ eltype: Float64 dims: draw, chain size: 1000×4\n  :τ eltype: Float64 dims: draw, chain size: 1000×4\n  :θ eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-07T01:22:16.652\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
posterior_predictive
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-07T01:22:16.334\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
log_likelihood
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-07T01:22:16.516\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
sample_stats
╭────────────────╮\n│ 1000×4 Dataset │\n├────────────────┴ dims ┐\n  ↓ draw, → chain\n├─────────────────┴───────────────────────────────────────── layers ┐\n  :energy           eltype: Float64 dims: draw, chain size: 1000×4\n  :n_steps          eltype: Int64 dims: draw, chain size: 1000×4\n  :diverging        eltype: Bool dims: draw, chain size: 1000×4\n  :max_energy_error eltype: Float64 dims: draw, chain size: 1000×4\n  :energy_error     eltype: Float64 dims: draw, chain size: 1000×4\n  :is_accept        eltype: Bool dims: draw, chain size: 1000×4\n  :log_density      eltype: Float64 dims: draw, chain size: 1000×4\n  :tree_depth       eltype: Int64 dims: draw, chain size: 1000×4\n  :step_size        eltype: Float64 dims: draw, chain size: 1000×4\n  :acceptance_rate  eltype: Float64 dims: draw, chain size: 1000×4\n  :lp               eltype: Float64 dims: draw, chain size: 1000×4\n  :step_size_nom    eltype: Float64 dims: draw, chain size: 1000×4\n├───────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-07T01:22:16.652\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
prior
╭──────────────────╮\n│ 1000×1×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :μ eltype: Float64 dims: draw, chain size: 1000×1\n  :τ eltype: Float64 dims: draw, chain size: 1000×1\n  :θ eltype: Float64 dims: draw, chain, school size: 1000×1×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-07T01:22:17.161\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
prior_predictive
╭──────────────────╮\n│ 1000×1×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y eltype: Float64 dims: draw, chain, school size: 1000×1×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-07T01:22:17.027\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
sample_stats_prior
╭────────────────╮\n│ 1000×1 Dataset │\n├────────────────┴ dims ┐\n  ↓ draw, → chain\n├─────────────────┴─────────────────────────── layers ┐\n  :lp eltype: Float64 dims: draw, chain size: 1000×1\n├─────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-07T01:22:17.118\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
observed_data
╭───────────────────╮\n│ 8-element Dataset │\n├───────────────────┴──────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y eltype: Float64 dims: school size: 8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 3 entries:\n  \"created_at\" => \"2024-10-07T01:22:17.343\"\n  \"inference_library_version\" => \"0.34.1\"\n  \"inference_library\" => \"Turing\"\n
\n\n\n

Then we can, for example, compute the leave-one-out (LOO) estimate of the expected log pointwise predictive density (ELPD), which estimates the out-of-sample predictive fit of the model:

\n\n
loo(idata_turing) # higher ELPD is better
\n
PSISLOOResult with estimates\n elpd  elpd_mcse    p  p_mcse\n  -31        1.4  1.0    0.33\n\nand PSISResult with 1000 draws, 4 chains, and 8 parameters\nPareto shape (k) diagnostic values:\n                    Count      Min. ESS\n (-Inf, 0.5]  good  5 (62.5%)  404\n  (0.5, 0.7]  okay  3 (37.5%)  788
\n\n\n

If the model is well-calibrated, i.e. it replicates the true generative process well, the pointwise LOO probability integral transform (PIT) values should be approximately uniformly distributed. This can be inspected visually:

\n\n
begin\n    plot_loo_pit(idata_turing; y=:y, ecdf=true)\n    gcf()\nend
\n\n\n\n

Plotting with Stan.jl outputs

StanSample.jl comes with built-in support for producing InferenceData outputs.

Here is the same centered eight schools model in Stan:

\n\n
begin\n    schools_code = \"\"\"\n    data {\n      int<lower=0> J;\n      array[J] real y;\n      array[J] real<lower=0> sigma;\n    }\n\n    parameters {\n      real mu;\n      real<lower=0> tau;\n      array[J] real theta;\n    }\n\n    model {\n      mu ~ normal(0, 5);\n      tau ~ cauchy(0, 5);\n      theta ~ normal(mu, tau);\n      y ~ normal(theta, sigma);\n    }\n\n    generated quantities {\n        vector[J] log_lik;\n        vector[J] y_hat;\n        for (j in 1:J) {\n            log_lik[j] = normal_lpdf(y[j] | theta[j], sigma[j]);\n            y_hat[j] = normal_rng(theta[j], sigma[j]);\n        }\n    }\n    \"\"\"\n\n    schools_data = Dict(\"J\" => J, \"y\" => y, \"sigma\" => σ)\n    idata_stan = mktempdir() do path\n        stan_model = SampleModel(\"schools\", schools_code, path)\n        _ = stan_sample(\n            stan_model;\n            data=schools_data,\n            num_chains=nchains,\n            num_warmups=ndraws_warmup,\n            num_samples=ndraws,\n            seed=28983,\n            summary=false,\n        )\n        return StanSample.inferencedata(\n            stan_model;\n            posterior_predictive_var=:y_hat,\n            observed_data=(; y),\n            log_likelihood_var=:log_lik,\n            coords=(; school=schools),\n            dims=NamedTuple(\n                k => (:school,) for k in (:y, :sigma, :theta, :log_lik, :y_hat)\n            ),\n        )\n    end\nend
\n
InferenceData
posterior
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :mu    eltype: Float64 dims: draw, chain size: 1000×4\n  :tau   eltype: Float64 dims: draw, chain size: 1000×4\n  :theta eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 1 entry:\n  \"created_at\" => \"2024-10-07T01:23:00.581\"\n
posterior_predictive
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y_hat eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 1 entry:\n  \"created_at\" => \"2024-10-07T01:23:00.124\"\n
log_likelihood
╭──────────────────╮\n│ 1000×4×8 Dataset │\n├──────────────────┴───────────────────────────────────────────────────── dims ┐\n  ↓ draw  ,\n  → chain ,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :log_lik eltype: Float64 dims: draw, chain, school size: 1000×4×8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 1 entry:\n  \"created_at\" => \"2024-10-07T01:23:00.487\"\n
sample_stats
╭────────────────╮\n│ 1000×4 Dataset │\n├────────────────┴ dims ┐\n  ↓ draw, → chain\n├─────────────────┴──────────────────────────────────────── layers ┐\n  :tree_depth      eltype: Int64 dims: draw, chain size: 1000×4\n  :energy          eltype: Float64 dims: draw, chain size: 1000×4\n  :diverging       eltype: Bool dims: draw, chain size: 1000×4\n  :acceptance_rate eltype: Float64 dims: draw, chain size: 1000×4\n  :n_steps         eltype: Int64 dims: draw, chain size: 1000×4\n  :lp              eltype: Float64 dims: draw, chain size: 1000×4\n  :step_size       eltype: Float64 dims: draw, chain size: 1000×4\n├──────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 1 entry:\n  \"created_at\" => \"2024-10-07T01:23:00.233\"\n
observed_data
╭───────────────────╮\n│ 8-element Dataset │\n├───────────────────┴──────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :y eltype: Float64 dims: school size: 8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 1 entry:\n  \"created_at\" => \"2024-10-07T01:23:00.635\"\n
\n\n
begin\n    plot_density(idata_stan; var_names=(:mu, :tau))\n    gcf()\nend
\n\n\n\n

Here is a plot showing where the Hamiltonian sampler had divergences:

\n\n
begin\n    plot_pair(\n        idata_stan;\n        coords=Dict(:school => [\"Choate\", \"Deerfield\", \"Phillips Andover\"]),\n        divergences=true,\n    )\n    gcf()\nend
\n\n\n\n\n\n
using PlutoUI
\n\n\n
using Pkg, InteractiveUtils
\n\n\n
with_terminal(Pkg.status; color=false)
\n
Status `~/work/ArviZ.jl/ArviZ.jl/docs/Project.toml`\n  [cbdf2221] AlgebraOfGraphics v0.8.11\n  [131c737c] ArviZ v0.12.1 `~/work/ArviZ.jl/ArviZ.jl`\n  [2f96bb34] ArviZExampleData v0.1.11\n  [4a6e88f0] ArviZPythonPlots v0.1.7\n  [13f3f980] CairoMakie v0.12.12\n  [a93c6f00] DataFrames v1.7.0\n⌅ [0703355e] DimensionalData v0.27.9\n  [31c24e10] Distributions v0.25.112\n  [e30172f5] Documenter v1.7.0\n  [f6006082] EvoTrees v0.16.7\n  [b5cf5a8d] InferenceObjects v0.4.3\n  [be115224] MCMCDiagnosticTools v0.3.10\n  [a7f614a8] MLJBase v1.7.0\n  [614be32b] MLJIteration v0.6.3\n  [ce719bf2] PSIS v0.9.6\n  [359b1769] PlutoStaticHTML v6.0.28\n  [7f904dfe] PlutoUI v0.7.60\n  [7f36be82] PosteriorStats v0.2.5\n  [c1514b29] StanSample v7.10.1\n  [a19d573c] StatisticalMeasures v0.1.7\n  [2913bbd2] StatsBase v0.34.3\n  [fce5fe82] Turing v0.34.1\n  [f43a241f] Downloads v1.6.0\n  [37e2e46d] LinearAlgebra\n  [10745b16] Statistics v1.10.0\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`\n
\n\n
with_terminal(versioninfo)
\n
Julia Version 1.10.5\nCommit 6f3fdf7b362 (2024-08-27 14:19 UTC)\nBuild Info:\n  Official https://julialang.org/ release\nPlatform Info:\n  OS: Linux (x86_64-linux-gnu)\n  CPU: 4 × AMD EPYC 7763 64-Core Processor\n  WORD_SIZE: 64\n  LIBM: libopenlibm\n  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)\nThreads: 2 default, 0 interactive, 1 GC (on 4 virtual cores)\nEnvironment:\n  JULIA_PKG_SERVER_REGISTRY_PREFERENCE = eager\n  JULIA_NUM_THREADS = 2\n  JULIA_REVISE_WORKER_ONLY = 1\n  JULIA_PYTHONCALL_EXE = /home/runner/work/ArviZ.jl/ArviZ.jl/docs/.CondaPkg/env/bin/python\n
\n\n","category":"page"},{"location":"quickstart/","page":"Quickstart","title":"Quickstart","text":"EditURL = \"https://github.com/arviz-devs/ArviZ.jl/blob/main/docs/src/quickstart.jl\"","category":"page"},{"location":"api/data/#data-api","page":"Data","title":"Data","text":"","category":"section"},{"location":"api/data/","page":"Data","title":"Data","text":"Pages = [\"data.md\"]","category":"page"},{"location":"api/data/#Inference-library-converters","page":"Data","title":"Inference library converters","text":"","category":"section"},{"location":"api/data/","page":"Data","title":"Data","text":"from_mcmcchains\nfrom_samplechains","category":"page"},{"location":"api/data/#ArviZ.from_mcmcchains","page":"Data","title":"ArviZ.from_mcmcchains","text":"from_mcmcchains(posterior::MCMCChains.Chains; kwargs...) -> InferenceData\nfrom_mcmcchains(; kwargs...) -> InferenceData\nfrom_mcmcchains(\n posterior::MCMCChains.Chains,\n posterior_predictive,\n predictions,\n log_likelihood;\n kwargs...\n) -> InferenceData\n\nConvert data in an MCMCChains.Chains format into an InferenceData.\n\nAny keyword argument below without an an explicitly annotated type above is allowed, so long as it can be passed to convert_to_inference_data.\n\nArguments\n\nposterior::MCMCChains.Chains: Draws from the posterior\n\nKeywords\n\nposterior_predictive::Any=nothing: Draws from the posterior predictive distribution or name(s) of predictive variables in posterior\npredictions: Out-of-sample predictions for the posterior.\nprior: Draws from the prior\nprior_predictive: Draws from the prior predictive distribution or name(s) of predictive variables in prior\nobserved_data: Observed data on which the posterior is conditional. It should only contain data which is modeled as a random variable. Keys are parameter names and values.\nconstant_data: Model constants, data included in the model that are not modeled as random variables. Keys are parameter names.\npredictions_constant_data: Constants relevant to the model predictions (i.e. new x values in a linear regression).\nlog_likelihood: Pointwise log-likelihood for the data. It is recommended to use this argument as a named tuple whose keys are observed variable names and whose values are log likelihood arrays. Alternatively, provide the name of variable in posterior containing log likelihoods.\nlibrary=MCMCChains: Name of library that generated the chains\ncoords: Map from named dimension to named indices\ndims: Map from variable name to names of its dimensions\neltypes: Map from variable names to eltypes. 
This is primarily used to assign discrete eltypes to discrete variables that were stored in Chains as floats.\n\nReturns\n\nInferenceData: The data with groups corresponding to the provided data\n\n\n\n\n\n","category":"function"},{"location":"api/data/#ArviZ.from_samplechains","page":"Data","title":"ArviZ.from_samplechains","text":"from_samplechains(\n posterior=nothing;\n prior=nothing,\n library=SampleChains,\n kwargs...,\n) -> InferenceData\n\nConvert SampleChains samples to an InferenceData.\n\nEither posterior or prior may be a SampleChains.AbstractChain or SampleChains.MultiChain object.\n\nFor descriptions of remaining kwargs, see from_namedtuple.\n\n\n\n\n\n","category":"function"},{"location":"api/data/#IO-/-Conversion","page":"Data","title":"IO / Conversion","text":"","category":"section"},{"location":"api/data/","page":"Data","title":"Data","text":"from_netcdf\nto_netcdf","category":"page"},{"location":"api/data/#InferenceObjects.from_netcdf","page":"Data","title":"InferenceObjects.from_netcdf","text":"from_netcdf(path::AbstractString; kwargs...) -> InferenceData\n\nLoad an InferenceData from an unopened NetCDF file.\n\nRemaining kwargs are passed to NCDatasets.NCDataset. This method loads data eagerly. To instead load data lazily, pass an opened NCDataset to from_netcdf.\n\nnote: Note\nThis method requires that NCDatasets is loaded before it can be used.\n\nExamples\n\njulia> using InferenceObjects, NCDatasets\n\njulia> idata = from_netcdf(\"centered_eight.nc\")\nInferenceData with groups:\n > posterior\n > posterior_predictive\n > sample_stats\n > prior\n > observed_data\n\nfrom_netcdf(ds::NCDatasets.NCDataset; load_mode) -> InferenceData\n\nLoad an InferenceData from an opened NetCDF file.\n\nload_mode defaults to :lazy, which avoids reading variables into memory. Operations on these arrays will be slow. load_mode can also be :eager, which copies all variables into memory. It is then safe to close ds. If load_mode is :lazy and ds is closed after constructing InferenceData, using the variable arrays will have undefined behavior.\n\nExamples\n\nHere is how we might lazily load an InferenceData from a web-hosted NetCDF file.\n\njulia> using HTTP, InferenceObjects, NCDatasets\n\njulia> resp = HTTP.get(\"https://github.com/arviz-devs/arviz_example_data/blob/main/data/centered_eight.nc?raw=true\");\n\njulia> ds = NCDataset(\"centered_eight\", \"r\"; memory = resp.body);\n\njulia> idata = from_netcdf(ds)\nInferenceData with groups:\n > posterior\n > posterior_predictive\n > sample_stats\n > prior\n > observed_data\n\njulia> idata_copy = copy(idata); # disconnect from the loaded dataset\n\njulia> close(ds);\n\n\n\n\n\n","category":"function"},{"location":"api/data/#InferenceObjects.to_netcdf","page":"Data","title":"InferenceObjects.to_netcdf","text":"to_netcdf(data, dest::AbstractString; group::Symbol=:posterior, kwargs...)\nto_netcdf(data, dest::NCDatasets.NCDataset; group::Symbol=:posterior)\n\nWrite data to a NetCDF file.\n\ndata is any type that can be converted to an InferenceData using convert_to_inference_data. If not an InferenceData, then group specifies which group the data represents.\n\ndest specifies either the path to the NetCDF file or an opened NetCDF file. 
If dest is a path, remaining kwargs are passed to NCDatasets.NCDataset.\n\nnote: Note\nThis method requires that NCDatasets is loaded before it can be used.\n\nExamples\n\njulia> using InferenceObjects, NCDatasets\n\njulia> idata = from_namedtuple((; x = randn(4, 100, 3), z = randn(4, 100)))\nInferenceData with groups:\n > posterior\n\njulia> to_netcdf(idata, \"data.nc\")\n\"data.nc\"\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/#diagnostics-api","page":"Diagnostics","title":"Diagnostics","text":"","category":"section"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"Pages = [\"diagnostics.md\"]","category":"page"},{"location":"api/diagnostics/#bfmi","page":"Diagnostics","title":"Bayesian fraction of missing information","text":"","category":"section"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"MCMCDiagnosticTools.bfmi","category":"page"},{"location":"api/diagnostics/#MCMCDiagnosticTools.bfmi","page":"Diagnostics","title":"MCMCDiagnosticTools.bfmi","text":"bfmi(energy::AbstractVector{<:Real}) -> Real\nbfmi(energy::AbstractMatrix{<:Real}; dims::Int=1) -> AbstractVector{<:Real}\n\nCalculate the estimated Bayesian fraction of missing information (BFMI).\n\nWhen sampling with Hamiltonian Monte Carlo (HMC), BFMI quantifies how well momentum resampling matches the marginal energy distribution.\n\nThe current advice is that values smaller than 0.3 indicate poor sampling. However, this threshold is provisional and may change. A BFMI value below the threshold often indicates poor adaptation of sampling parameters or that the target distribution has heavy tails that were not well explored by the Markov chain.\n\nFor more information, see Section 6.1 of [Betancourt2018] or [Betancourt2016] for a complete account.\n\nenergy is either a vector of Hamiltonian energies of draws or a matrix of energies of draws for multiple chains. dims indicates the dimension in energy that contains the draws. The default dims=1 assumes energy has the shape draws or (draws, chains). If a different shape is provided, dims must be set accordingly.\n\nIf energy is a vector, a single BFMI value is returned. Otherwise, a vector of BFMI values for each chain is returned.\n\n[Betancourt2018]: Betancourt M. (2018). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v2 [stat.ME]\n\n[Betancourt2016]: Betancourt M. (2016). Diagnosing Suboptimal Cotangent Disintegrations in Hamiltonian Monte Carlo. arXiv:1604.00695v1 [stat.ME]\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/#ess_rhat","page":"Diagnostics","title":"Effective sample size and widehatR diagnostic","text":"","category":"section"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"MCMCDiagnosticTools.ess\nMCMCDiagnosticTools.rhat\nMCMCDiagnosticTools.ess_rhat","category":"page"},{"location":"api/diagnostics/#MCMCDiagnosticTools.ess","page":"Diagnostics","title":"MCMCDiagnosticTools.ess","text":"ess(data::InferenceData; kwargs...) -> Dataset\ness(data::Dataset; kwargs...) 
-> Dataset\n\nCalculate the effective sample size (ESS) for each parameter in the data.\n\n\n\n\n\ness(\n samples::AbstractArray{<:Union{Missing,Real}};\n kind=:bulk,\n relative::Bool=false,\n autocov_method=AutocovMethod(),\n split_chains::Int=2,\n maxlag::Int=250,\n kwargs...\n)\n\nEstimate the effective sample size (ESS) of the samples of shape (draws, [chains[, parameters...]]) with the autocov_method.\n\nOptionally, the kind of ESS estimate to be computed can be specified (see below). Some kinds accept additional kwargs.\n\nIf relative is true, the relative ESS is returned, i.e. ess / (draws * chains).\n\nsplit_chains indicates the number of chains each chain is split into. When split_chains > 1, then the diagnostics check for within-chain convergence. When d = mod(draws, split_chains) > 0, i.e. the chains cannot be evenly split, then 1 draw is discarded after each of the first d splits within each chain. There must be at least 3 draws in each chain after splitting.\n\nmaxlag indicates the maximum lag for which autocovariance is computed and must be greater than 0.\n\nFor a given estimand, it is recommended that the ESS is at least 100 * chains and that widehatR < 1.01.[VehtariGelman2021]\n\nSee also: AutocovMethod, FFTAutocovMethod, BDAAutocovMethod, rhat, ess_rhat, mcse\n\nKinds of ESS estimates\n\nIf kind is a Symbol, it may take one of the following values:\n\n:bulk: basic ESS computed on rank-normalized draws. This kind diagnoses poor convergence in the bulk of the distribution due to trends or different locations of the chains.\n:tail: minimum of the quantile-ESS for the symmetric quantiles where tail_prob=0.1 is the probability in the tails. This kind diagnoses poor convergence in the tails of the distribution. If this kind is chosen, kwargs may contain a tail_prob keyword.\n:basic: basic ESS, equivalent to specifying kind=Statistics.mean.\n\nnote: Note\nWhile Bulk-ESS is conceptually related to basic ESS, it is well-defined even if the chains do not have finite variance.[VehtariGelman2021] For each parameter, rank-normalization proceeds by first ranking the inputs using \"tied ranking\" and then transforming the ranks to normal quantiles so that the result is standard normally distributed. This transform is monotonic.\n\nOtherwise, kind specifies one of the following estimators, whose ESS is to be estimated:\n\nStatistics.mean\nStatistics.median\nStatistics.std\nStatsBase.mad\nBase.Fix2(Statistics.quantile, p::Real)\n\n[VehtariGelman2021]: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved widehat R for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/#MCMCDiagnosticTools.rhat","page":"Diagnostics","title":"MCMCDiagnosticTools.rhat","text":"rhat(data::InferenceData; kwargs...) -> Dataset\nrhat(data::Dataset; kwargs...) -> Dataset\n\nCalculate the widehatR diagnostic for each parameter in the data.\n\n\n\n\n\nrhat(samples::AbstractArray{Union{Real,Missing}}; kind::Symbol=:rank, split_chains=2)\n\nCompute the widehatR diagnostics for each parameter in samples of shape (draws, [chains[, parameters...]]).[VehtariGelman2021]\n\nkind indicates the kind of widehatR to compute (see below).\n\nsplit_chains indicates the number of chains each chain is split into. When split_chains > 1, then the diagnostics check for within-chain convergence. When d = mod(draws, split_chains) > 0, i.e. 
the chains cannot be evenly split, then 1 draw is discarded after each of the first d splits within each chain.\n\nSee also ess, ess_rhat, rstar\n\nKinds of widehatR\n\nThe following kinds are supported:\n\n:rank: maximum of widehatR with kind=:bulk and kind=:tail.\n:bulk: basic widehatR computed on rank-normalized draws. This kind diagnoses poor convergence in the bulk of the distribution due to trends or different locations of the chains.\n:tail: widehatR computed on draws folded around the median and then rank-normalized. This kind diagnoses poor convergence in the tails of the distribution due to different scales of the chains.\n:basic: Classic widehatR.\n\n[VehtariGelman2021]: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved widehat R for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/#MCMCDiagnosticTools.ess_rhat","page":"Diagnostics","title":"MCMCDiagnosticTools.ess_rhat","text":"ess_rhat(data::InferenceData; kwargs...) -> Dataset\ness_rhat(data::Dataset; kwargs...) -> Dataset\n\nCalculate the effective sample size (ESS) and widehatR diagnostic for each parameter in the data.\n\n\n\n\n\ness_rhat(\n samples::AbstractArray{<:Union{Missing,Real}};\n kind::Symbol=:rank,\n kwargs...,\n) -> NamedTuple{(:ess, :rhat)}\n\nEstimate the effective sample size and widehatR of the samples of shape (draws, [chains[, parameters...]]).\n\nWhen both ESS and widehatR are needed, this method is often more efficient than calling ess and rhat separately.\n\nSee rhat for a description of supported kinds and ess for a description of kwargs.\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"The following autocovariance methods are supported:","category":"page"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"MCMCDiagnosticTools.AutocovMethod\nMCMCDiagnosticTools.FFTAutocovMethod\nMCMCDiagnosticTools.BDAAutocovMethod","category":"page"},{"location":"api/diagnostics/#MCMCDiagnosticTools.AutocovMethod","page":"Diagnostics","title":"MCMCDiagnosticTools.AutocovMethod","text":"AutocovMethod <: AbstractAutocovMethod\n\nThe AutocovMethod uses a standard algorithm for estimating the mean autocovariance of MCMC chains.\n\nIt is based on the discussion by [VehtariGelman2021] and uses the biased estimator of the autocovariance, as discussed by [Geyer1992].\n\n[VehtariGelman2021]: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved widehat R for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008\n\n[Geyer1992]: Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. 
Statistical Science, 473-483.\n\n\n\n\n\n","category":"type"},{"location":"api/diagnostics/#MCMCDiagnosticTools.FFTAutocovMethod","page":"Diagnostics","title":"MCMCDiagnosticTools.FFTAutocovMethod","text":"FFTAutocovMethod <: AbstractAutocovMethod\n\nThe FFTAutocovMethod uses a standard algorithm for estimating the mean autocovariance of MCMC chains.\n\nThe algorithm is the same as the one of AutocovMethod but this method uses fast Fourier transforms (FFTs) for estimating the autocorrelation.\n\ninfo: Info\nTo be able to use this method, you have to load a package that implements the AbstractFFTs.jl interface such as FFTW.jl or FastTransforms.jl.\n\n\n\n\n\n","category":"type"},{"location":"api/diagnostics/#MCMCDiagnosticTools.BDAAutocovMethod","page":"Diagnostics","title":"MCMCDiagnosticTools.BDAAutocovMethod","text":"BDAAutocovMethod <: AbstractAutocovMethod\n\nThe BDAAutocovMethod uses a standard algorithm for estimating the mean autocovariance of MCMC chains.\n\nIt is based on the discussion by [VehtariGelman2021] and uses the variogram estimator of the autocorrelation function discussed by [BDA3].\n\n[VehtariGelman2021]: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved widehat R for assessing convergence of MCMC. Bayesian Analysis. doi: 10.1214/20-BA1221 arXiv: 1903.08008\n\n[BDA3]: Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.\n\n\n\n\n\n","category":"type"},{"location":"api/diagnostics/#mcse","page":"Diagnostics","title":"Monte Carlo standard error","text":"","category":"section"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"MCMCDiagnosticTools.mcse","category":"page"},{"location":"api/diagnostics/#MCMCDiagnosticTools.mcse","page":"Diagnostics","title":"MCMCDiagnosticTools.mcse","text":"mcse(data::InferenceData; kwargs...) -> Dataset\nmcse(data::Dataset; kwargs...) -> Dataset\n\nCalculate the Monte Carlo standard error (MCSE) for each parameter in the data.\n\n\n\n\n\nmcse(samples::AbstractArray{<:Union{Missing,Real}}; kind=Statistics.mean, kwargs...)\n\nEstimate the Monte Carlo standard errors (MCSE) of the estimator kind applied to samples of shape (draws, [chains[, parameters...]]).\n\nSee also: ess\n\nKinds of MCSE estimates\n\nThe estimator whose MCSE should be estimated is specified with kind. kind must accept a vector of the same eltype as samples and return a real estimate.\n\nFor the following estimators, the effective sample size ess and an estimate of the asymptotic variance are used to compute the MCSE, and kwargs are forwarded to ess:\n\nStatistics.mean\nStatistics.median\nStatistics.std\nBase.Fix2(Statistics.quantile, p::Real)\n\nFor other estimators, the subsampling bootstrap method (SBM)[FlegalJones2011][Flegal2012] is used as a fallback, and the only accepted kwargs are batch_size, which indicates the size of the overlapping batches used to estimate the MCSE, defaulting to floor(Int, sqrt(draws * chains)). Note that SBM tends to underestimate the MCSE, especially for highly autocorrelated chains. One should verify that autocorrelation is low by checking the bulk- and tail-ESS values.\n\n[FlegalJones2011]: Flegal JM, Jones GL. (2011) Implementing MCMC: estimating with confidence. Handbook of Markov Chain Monte Carlo. pp. 175-97. pdf\n\n[Flegal2012]: Flegal JM. (2012) Applicability of subsampling bootstrap methods in Markov chain Monte Carlo. 
Monte Carlo and Quasi-Monte Carlo Methods 2010. pp. 363-72. doi: 10.1007/978-3-642-27440-4_18\n\n\n\n\n\n","category":"function"},{"location":"api/diagnostics/#rstar","page":"Diagnostics","title":"R^* diagnostic","text":"","category":"section"},{"location":"api/diagnostics/","page":"Diagnostics","title":"Diagnostics","text":"MCMCDiagnosticTools.rstar","category":"page"},{"location":"api/diagnostics/#MCMCDiagnosticTools.rstar","page":"Diagnostics","title":"MCMCDiagnosticTools.rstar","text":"rstar(\n rng::Random.AbstractRNG=Random.default_rng(),\n classifier,\n data::Union{InferenceData,Dataset};\n kwargs...,\n)\n\nCalculate the R^* diagnostic for the data.\n\n\n\n\n\nrstar(\n rng::Random.AbstractRNG=Random.default_rng(),\n classifier,\n samples,\n chain_indices::AbstractVector{Int};\n subset::Real=0.7,\n split_chains::Int=2,\n verbosity::Int=0,\n)\n\nCompute the R^* convergence statistic of the table samples with the classifier.\n\nsamples must be either an AbstractMatrix, an AbstractVector, or a table (i.e. implements the Tables.jl interface) whose rows are draws and whose columns are parameters.\n\nchain_indices indicates the chain ids of each row of samples.\n\nThis method supports ragged chains, i.e. chains of nonequal lengths.\n\n\n\n\n\nrstar(\n rng::Random.AbstractRNG=Random.default_rng(),\n classifier,\n samples::AbstractArray{<:Real};\n subset::Real=0.7,\n split_chains::Int=2,\n verbosity::Int=0,\n)\n\nCompute the R^* convergence statistic of the samples with the classifier.\n\nsamples is an array of draws with the shape (draws, [chains[, parameters...]]).\n\nThis implementation is an adaptation of algorithms 1 and 2 described by Lambert and Vehtari.\n\nThe classifier has to be a supervised classifier of the MLJ framework (see the MLJ documentation for a list of supported models). It is trained with a subset of the samples from each chain. Each chain is split into split_chains separate chains to additionally check for within-chain convergence. The training of the classifier can be inspected by adjusting the verbosity level.\n\nIf the classifier is deterministic, i.e., if it predicts a class, the value of the R^* statistic is returned (algorithm 1). If the classifier is probabilistic, i.e., if it outputs probabilities of classes, the scaled Poisson-binomial distribution of the R^* statistic is returned (algorithm 2).\n\nnote: Note\nThe correctness of the statistic depends on the convergence of the classifier used internally in the statistic.\n\nExamples\n\njulia> using MLJBase, MLJIteration, EvoTrees, Statistics, StatisticalMeasures\n\njulia> samples = fill(4.0, 100, 3, 2);\n\nOne can compute the distribution of the R^* statistic (algorithm 2) with a probabilistic classifier. For instance, we can use a gradient-boosted trees model with nrounds = 100 sequentially stacked trees and learning rate eta = 0.05:\n\njulia> model = EvoTreeClassifier(; nrounds=100, eta=0.05);\n\njulia> distribution = rstar(model, samples);\n\njulia> round(mean(distribution); digits=2)\n1.0f0\n\nNote, however, that it is recommended to determine nrounds based on early stopping. 
With the MLJ framework, this can be achieved in the following way (see the MLJ documentation for additional explanations):\n\njulia> model = IteratedModel(;\n model=EvoTreeClassifier(; eta=0.05),\n iteration_parameter=:nrounds,\n resampling=Holdout(),\n measures=log_loss,\n controls=[Step(5), Patience(2), NumberLimit(100)],\n retrain=true,\n );\n\njulia> distribution = rstar(model, samples);\n\njulia> round(mean(distribution); digits=2)\n1.0f0\n\nFor deterministic classifiers, a single R^* statistic (algorithm 1) is returned. Deterministic classifiers can also be derived from probabilistic classifiers by e.g. predicting the mode. In MLJ this corresponds to a pipeline of models.\n\njulia> evotree_deterministic = Pipeline(model; operation=predict_mode);\n\njulia> value = rstar(evotree_deterministic, samples);\n\njulia> round(value; digits=2)\n1.0\n\nReferences\n\nLambert, B., & Vehtari, A. (2020). R^*: A robust MCMC convergence diagnostic with uncertainty using decision tree classifiers.\n\n\n\n\n\n","category":"function"},{"location":"api/#api","page":"API Overview","title":"API Overview","text":"","category":"section"},{"location":"api/","page":"API Overview","title":"API Overview","text":"Pages = [\"data.md\", \"dataset.md\", \"diagnostics.md\", \"inference_data.md\", \"stats.md\"]\nDepth = 1","category":"page"},{"location":"creating_custom_plots/","page":"Creating custom plots","title":"Creating custom plots","text":"\n\n\n

Creating custom plots

\n\n\n\n\n\n

While ArviZ includes many plotting functions for visualizing the data stored in InferenceData objects, you will often need to construct custom plots, or you may want to tweak some of our plots in your favorite plotting package.

In this tutorial, we will show you a few useful techniques you can use to construct these plots using Julia's plotting packages. For demonstration purposes, we'll use Makie.jl and AlgebraOfGraphics.jl, which can consume Dataset objects, since Dataset implements the Tables interface. However, we could just as easily have used StatsPlots.jl.

\n\n
begin\n    using ArviZ, ArviZExampleData, DimensionalData, DataFrames, Statistics\n    using AlgebraOfGraphics, CairoMakie\n    using AlgebraOfGraphics: density\n    set_aog_theme!()\nend;
\n\n\n\n

We'll start by loading some draws from an implementation of the centered parameterization of the 8 schools model. In this parameterization, the model has some sampling issues.

\n\n
idata = load_example_data(\"centered_eight\")
\n
InferenceData
posterior
╭─────────────────╮\n│ 500×4×8 Dataset │\n├─────────────────┴────────────────────────────────────────────────────── dims ┐\n  ↓ draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  → chain  Sampled{Int64} [0, 1, 2, 3] ForwardOrdered Irregular Points,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :mu    eltype: Float64 dims: draw, chain size: 500×4\n  :theta eltype: Float64 dims: school, draw, chain size: 8×500×4\n  :tau   eltype: Float64 dims: draw, chain size: 500×4\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 6 entries:\n  \"created_at\" => \"2022-10-13T14:37:37.315398\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"sampling_time\" => 7.48011\n  \"tuning_steps\" => 1000\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
posterior_predictive
╭─────────────────╮\n│ 8×500×4 Dataset │\n├─────────────────┴────────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered,\n  → draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  ↗ chain  Sampled{Int64} [0, 1, 2, 3] ForwardOrdered Irregular Points\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :obs eltype: Float64 dims: school, draw, chain size: 8×500×4\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:41.460544\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
log_likelihood
╭─────────────────╮\n│ 8×500×4 Dataset │\n├─────────────────┴────────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered,\n  → draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  ↗ chain  Sampled{Int64} [0, 1, 2, 3] ForwardOrdered Irregular Points\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :obs eltype: Float64 dims: school, draw, chain size: 8×500×4\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:37.487399\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
sample_stats
╭───────────────╮\n│ 500×4 Dataset │\n├───────────────┴─────────────────────────────────────────────────────── dims ┐\n  ↓ draw  Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  → chain Sampled{Int64} [0, 1, 2, 3] ForwardOrdered Irregular Points\n├─────────────────────────────────────────────────────────────────────────────┴ layers ┐\n  :max_energy_error    eltype: Float64 dims: draw, chain size: 500×4\n  :energy_error        eltype: Float64 dims: draw, chain size: 500×4\n  :lp                  eltype: Float64 dims: draw, chain size: 500×4\n  :index_in_trajectory eltype: Int64 dims: draw, chain size: 500×4\n  :acceptance_rate     eltype: Float64 dims: draw, chain size: 500×4\n  :diverging           eltype: Bool dims: draw, chain size: 500×4\n  :process_time_diff   eltype: Float64 dims: draw, chain size: 500×4\n  :n_steps             eltype: Float64 dims: draw, chain size: 500×4\n  :perf_counter_start  eltype: Float64 dims: draw, chain size: 500×4\n  :largest_eigval      eltype: Union{Missing, Float64} dims: draw, chain size: 500×4\n  :smallest_eigval     eltype: Union{Missing, Float64} dims: draw, chain size: 500×4\n  :step_size_bar       eltype: Float64 dims: draw, chain size: 500×4\n  :step_size           eltype: Float64 dims: draw, chain size: 500×4\n  :energy              eltype: Float64 dims: draw, chain size: 500×4\n  :tree_depth          eltype: Int64 dims: draw, chain size: 500×4\n  :perf_counter_diff   eltype: Float64 dims: draw, chain size: 500×4\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 6 entries:\n  \"created_at\" => \"2022-10-13T14:37:37.324929\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"sampling_time\" => 7.48011\n  \"tuning_steps\" => 1000\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
prior
╭─────────────────╮\n│ 500×1×8 Dataset │\n├─────────────────┴────────────────────────────────────────────────────── dims ┐\n  ↓ draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  → chain  Sampled{Int64} [0] ForwardOrdered Irregular Points,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :tau   eltype: Float64 dims: draw, chain size: 500×1\n  :theta eltype: Float64 dims: school, draw, chain size: 8×500×1\n  :mu    eltype: Float64 dims: draw, chain size: 500×1\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:26.602116\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
prior_predictive
╭─────────────────╮\n│ 8×500×1 Dataset │\n├─────────────────┴────────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered,\n  → draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  ↗ chain  Sampled{Int64} [0] ForwardOrdered Irregular Points\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :obs eltype: Float64 dims: school, draw, chain size: 8×500×1\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:26.604969\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
observed_data
╭───────────────────╮\n│ 8-element Dataset │\n├───────────────────┴──────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :obs eltype: Float64 dims: school size: 8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:26.606375\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
constant_data
╭───────────────────╮\n│ 8-element Dataset │\n├───────────────────┴──────────────────────────────────────────────────── dims ┐\n  ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────── layers ┤\n  :scores eltype: Float64 dims: school size: 8\n├──────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 4 entries:\n  \"created_at\" => \"2022-10-13T14:37:26.607471\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"arviz_version\" => \"0.13.0.dev0\"\n  \"inference_library\" => \"pymc\"\n
\n\n
idata.posterior
\n
╭─────────────────╮\n│ 500×4×8 Dataset │\n├─────────────────┴────────────────────────────────────────────────────────────── dims ┐\n  ↓ draw   Sampled{Int64} [0, 1, …, 498, 499] ForwardOrdered Irregular Points,\n  → chain  Sampled{Int64} [0, 1, 2, 3] ForwardOrdered Irregular Points,\n  ↗ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n├────────────────────────────────────────────────────────────────────────────── layers ┤\n  :mu    eltype: Float64 dims: draw, chain size: 500×4\n  :theta eltype: Float64 dims: school, draw, chain size: 8×500×4\n  :tau   eltype: Float64 dims: draw, chain size: 500×4\n├──────────────────────────────────────────────────────────────────────────── metadata ┤\n  Dict{String, Any} with 6 entries:\n  \"created_at\"                => \"2022-10-13T14:37:37.315398\"\n  \"inference_library_version\" => \"4.2.2\"\n  \"sampling_time\"             => 7.48011\n  \"tuning_steps\"              => 1000\n  \"arviz_version\"             => \"0.13.0.dev0\"\n  \"inference_library\"         => \"pymc\"\n
\n\n\n

The plotting functions we'll be using interact with a tabular view of a Dataset. Let's see what that view looks like for a Dataset:

\n\n
df = DataFrame(idata.posterior)
\n
        draw  chain  school         mu        theta     tau
1       0     0      "Choate"       7.8718    12.3207   4.72574
2       1     0      "Choate"       3.38455   11.2856   3.90899
3       2     0      "Choate"       9.10048   5.70851   4.84403
4       3     0      "Choate"       7.30429   10.0373   1.8567
5       4     0      "Choate"       9.87968   9.14915   4.74841
6       5     0      "Choate"       7.04203   14.7359   3.51387
7       6     0      "Choate"       10.3785   14.304    4.20898
8       7     0      "Choate"       10.06     13.3298   2.6834
9       8     0      "Choate"       10.4253   10.4498   1.16889
10      9     0      "Choate"       10.8108   11.4731   1.21052
...
16000   499   3      "Mt. Hermon"   3.40446   1.29505   4.46125
\n\n\n

The tabular view includes dimensions and variables as columns.

When variables with different dimensions are flattened into a tabular form, there's always some duplication of values. As a simple case, note that chain, draw, and school all have repeated values in the above table.

In this case, theta has the school dimension, but tau doesn't, so the values of tau will be repeated in the table for each value of school.

\n\n
df[df.school .== Ref(\"Choate\"), :].tau == df[df.school .== Ref(\"Deerfield\"), :].tau
\n
true
\n\n\n

In our first example, this will be important.

Here, let's construct a trace plot. Besides idata, all functions and types in the following cell are defined in AlgebraOfGraphics or Makie:

  • data(...) indicates that the wrapped object implements the Tables interface

  • mapping indicates how the data should be used. The symbols are all column names in the table, which for us are our variable names and dimensions.

  • visual specifies how the data should be converted to a plot.

  • Lines is a plot type defined in Makie.

  • draw takes this combination and plots it.

\n\n
draw(\n    data(idata.posterior.mu) *\n    mapping(:draw, :mu; color=:chain => nonnumeric) *\n    visual(Lines; alpha=0.8),\n)
\n\n\n\n

Note the line idata.posterior.mu. If we had just used idata.posterior, the plot would have looked more-or-less the same, but there would be artifacts due to mu being copied many times. By selecting mu directly, all other dimensions are discarded, so each value of mu appears in the plot exactly once.
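To see the difference concretely, we can compare the sizes of the flattened tables. This is a minimal sketch; it reuses the DataFrame constructor from above and relies on the same Tables support for a single layer like idata.posterior.mu that data(...) uses:

nrow(DataFrame(idata.posterior))     # 16000 rows: 500 draws × 4 chains × 8 schools, with mu duplicated per school
nrow(DataFrame(idata.posterior.mu))  # 2000 rows: one per (draw, chain) pair, no duplication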

When examining an MCMC trace plot, we want to see a \"fuzzy caterpillar\". Instead we see a few places where the Markov chains froze. We can do the same for theta as well, but it's more useful here to separate these draws by school.

\n\n
draw(\n    data(idata.posterior) *\n    mapping(:draw, :theta; layout=:school, color=:chain => nonnumeric) *\n    visual(Lines; alpha=0.8),\n)
\n\n\n\n
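The frozen stretches we see in the trace plots should also show up in numerical diagnostics. As a quick check, here is a sketch using summarystats (documented in the Stats API below), which computes default diagnostics such as ess_bulk and rhat for the posterior group:

summarystats(idata)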

Suppose we want to compare tau with theta for two different schools. To do so, we use InferenceData's indexing syntax to subset the data.

\n\n
draw(\n    data(idata[:posterior, school=At([\"Choate\", \"Deerfield\"])]) *\n    mapping(:theta, :tau; color=:school) *\n    density() *\n    visual(Contour; levels=10),\n)
\n\n\n\n
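Here At is a DimensionalData selector that matches coordinate values exactly. The same indexing is also useful on its own when we just want the subset without plotting; a minimal sketch:

idata[:posterior, school=At([\"Choate\", \"Deerfield\"])]  # posterior Dataset restricted to the two schools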

We can also compare the density plots constructed from each chain for different schools.

\n\n
draw(\n    data(idata.posterior) *\n    mapping(:theta; layout=:school, color=:chain => nonnumeric) *\n    density(),\n)
\n\n\n\n

If we want to compare many schools in a single plot, an ECDF plot is more convenient.

\n\n
draw(\n    data(idata.posterior) * mapping(:theta; color=:school => nonnumeric) * visual(ECDFPlot);\n    axis=(; ylabel=\"probability\"),\n)
\n\n\n\n

So far we've just plotted data from one group, but we often want to combine data from multiple groups in one plot. The simplest way to do this is to create the plot out of multiple layers. Here we use this approach to plot the observations over the posterior predictive distribution.

\n\n
draw(\n    (data(idata.posterior_predictive) * mapping(:obs; layout=:school) * density()) +\n    (data(idata.observed_data) * mapping(:obs, :obs => zero => \"\"; layout=:school)),\n)
\n\n\n\n

Another option is to combine the groups into a single dataset.

Here we compare the prior and posterior. Since the prior has 1 chain and the posterior has 4 chains, combining them into a single table would require a ragged structure (different chain counts per group), which is not currently supported.

We can then either plot the two distributions separately as we did before, or we can compare a single chain from each group. This is what we'll do here. To concatenate the two groups, we introduce a new named dimension using DimensionalData.Dim.

\n\n
draw(\n    data(\n        cat(\n            idata.posterior[chain=[1]], idata.prior; dims=Dim{:group}([:posterior, :prior])\n        )[:mu],\n    ) *\n    mapping(:mu; color=:group) *\n    histogram(; bins=20) *\n    visual(; alpha=0.8);\n    axis=(; ylabel=\"probability\"),\n)
\n\n\n\n

From the trace plots, we suspected the geometry of this posterior was bad. Let's highlight divergent transitions. To do so, we merge the posterior and sample_stats groups, which we can do with merge since they share no common variable names.

\n\n
draw(\n    data(merge(idata.posterior, idata.sample_stats)) * mapping(\n        :theta,\n        :tau;\n        layout=:school,\n        color=:diverging,\n        markersize=:diverging => (d -> d ? 5 : 2),\n    ),\n)
\n\n\n\n

When building more complex plots, we may need to construct new Datasets from our existing ones.

One example of this is the corner plot. To build this plot, we need a second copy of theta with a renamed school dimension (school2 in the cell below), so that the two copies can vary independently across the plot grid.

\n\n
let\n    theta = idata.posterior.theta[school=1:4]\n    theta2 = rebuild(set(theta; school=:school2); name=:theta2)\n    plot_data = Dataset(theta, theta2, idata.sample_stats.diverging)\n    draw(\n        data(plot_data) * mapping(\n            :theta,\n            :theta2 => \"theta\";\n            col=:school,\n            row=:school2,\n            color=:diverging,\n            markersize=:diverging => (d -> d ? 3 : 1),\n        );\n        figure=(; figsize=(5, 5)),\n        axis=(; aspect=1),\n    )\nend
\n\n\n","category":"page"},{"location":"creating_custom_plots/#Environment","page":"Creating custom plots","title":"Environment","text":"","category":"section"},{"location":"creating_custom_plots/","page":"Creating custom plots","title":"Creating custom plots","text":"
\n
\n\n
using Pkg, InteractiveUtils
\n\n\n
using PlutoUI
\n\n\n
with_terminal(Pkg.status; color=false)
\n
Status `~/work/ArviZ.jl/ArviZ.jl/docs/Project.toml`\n  [cbdf2221] AlgebraOfGraphics v0.8.11\n  [131c737c] ArviZ v0.12.1 `~/work/ArviZ.jl/ArviZ.jl`\n  [2f96bb34] ArviZExampleData v0.1.11\n  [4a6e88f0] ArviZPythonPlots v0.1.7\n  [13f3f980] CairoMakie v0.12.12\n  [a93c6f00] DataFrames v1.7.0\n⌅ [0703355e] DimensionalData v0.27.9\n  [31c24e10] Distributions v0.25.112\n  [e30172f5] Documenter v1.7.0\n  [f6006082] EvoTrees v0.16.7\n  [b5cf5a8d] InferenceObjects v0.4.3\n  [be115224] MCMCDiagnosticTools v0.3.10\n  [a7f614a8] MLJBase v1.7.0\n  [614be32b] MLJIteration v0.6.3\n  [ce719bf2] PSIS v0.9.6\n  [359b1769] PlutoStaticHTML v6.0.28\n  [7f904dfe] PlutoUI v0.7.60\n  [7f36be82] PosteriorStats v0.2.5\n  [c1514b29] StanSample v7.10.1\n  [a19d573c] StatisticalMeasures v0.1.7\n  [2913bbd2] StatsBase v0.34.3\n  [fce5fe82] Turing v0.34.1\n  [f43a241f] Downloads v1.6.0\n  [37e2e46d] LinearAlgebra\n  [10745b16] Statistics v1.10.0\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`\n
\n\n
with_terminal(versioninfo)
\n
Julia Version 1.10.5\nCommit 6f3fdf7b362 (2024-08-27 14:19 UTC)\nBuild Info:\n  Official https://julialang.org/ release\nPlatform Info:\n  OS: Linux (x86_64-linux-gnu)\n  CPU: 4 × AMD EPYC 7763 64-Core Processor\n  WORD_SIZE: 64\n  LIBM: libopenlibm\n  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)\nThreads: 2 default, 0 interactive, 1 GC (on 4 virtual cores)\nEnvironment:\n  JULIA_PKG_SERVER_REGISTRY_PREFERENCE = eager\n  JULIA_NUM_THREADS = 2\n  JULIA_REVISE_WORKER_ONLY = 1\n
\n\n","category":"page"},{"location":"creating_custom_plots/","page":"Creating custom plots","title":"Creating custom plots","text":"EditURL = \"https://github.com/arviz-devs/ArviZ.jl/blob/main/docs/src/creating_custom_plots.jl\"","category":"page"},{"location":"api/stats/#stats-api","page":"Stats","title":"Stats","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"Pages = [\"stats.md\"]","category":"page"},{"location":"api/stats/#Summary-statistics","page":"Stats","title":"Summary statistics","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"SummaryStats\ndefault_summary_stats\ndefault_stats\ndefault_diagnostics\nsummarize\nsummarystats","category":"page"},{"location":"api/stats/#PosteriorStats.SummaryStats","page":"Stats","title":"PosteriorStats.SummaryStats","text":"struct SummaryStats{D, V<:(AbstractVector)}\n\nA container for a column table of values computed by summarize.\n\nThis object implements the Tables and TableTraits column table interfaces. It has a custom show method.\n\nSummaryStats behaves like an OrderedDict of columns, where the columns can be accessed using either Symbols or a 1-based integer index.\n\nname::String: The name of the collection of summary statistics, used as the table title in display.\ndata::Any: The summary statistics for each parameter. It must implement the Tables interface.\nparameter_names::AbstractVector: Names of the parameters\n\nSummaryStats([name::String,] data[, parameter_names])\nSummaryStats(data[, parameter_names]; name::String=\"SummaryStats\")\n\nConstruct a SummaryStats from tabular data with optional stats name and param_names.\n\ndata must not contain a column :parameter, as this is reserved for the parameter names, which are always in the first column.\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.default_summary_stats","page":"Stats","title":"PosteriorStats.default_summary_stats","text":"default_summary_stats(focus=Statistics.mean; kwargs...)\n\nCombinatiton of default_stats and default_diagnostics to be used with summarize.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.default_stats","page":"Stats","title":"PosteriorStats.default_stats","text":"default_stats(focus=Statistics.mean; prob_interval=0.94, kwargs...)\n\nDefault statistics to be computed with summarize.\n\nThe value of focus determines the statistics to be returned:\n\nStatistics.mean: mean, std, hdi_3%, hdi_97%\nStatistics.median: median, mad, eti_3%, eti_97%\n\nIf prob_interval is set to a different value than the default, then different HDI and ETI statistics are computed accordingly. hdi refers to the highest-density interval, while eti refers to the equal-tailed interval (i.e. 
the credible interval computed from symmetric quantiles).\n\nSee also: hdi\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.default_diagnostics","page":"Stats","title":"PosteriorStats.default_diagnostics","text":"default_diagnostics(focus=Statistics.mean; kwargs...)\n\nDefault diagnostics to be computed with summarize.\n\nThe value of focus determines the diagnostics to be returned:\n\nStatistics.mean: mcse_mean, mcse_std, ess_tail, ess_bulk, rhat\nStatistics.median: mcse_median, ess_tail, ess_bulk, rhat\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.summarize","page":"Stats","title":"PosteriorStats.summarize","text":"summarize(data, stats_funs...; name=\"SummaryStats\", [var_names]) -> SummaryStats\n\nCompute the summary statistics in stats_funs on each param in data.\n\nstats_funs is a collection of functions that reduces a matrix with shape (draws, chains) to a scalar or a collection of scalars. Alternatively, an item in stats_funs may be a Pair of the form name => fun specifying the name to be used for the statistic or of the form (name1, ...) => fun when the function returns a collection. When the function returns a collection, the names in this latter format must be provided.\n\nIf no stats functions are provided, then those specified in default_summary_stats are computed.\n\nvar_names specifies the names of the parameters in data. If not provided, the names are inferred from data.\n\nTo support computing summary statistics from a custom object, overload this method specifying the type of data.\n\nSee also SummaryStats, default_summary_stats, default_stats, default_diagnostics.\n\nExamples\n\nCompute mean, std and the Monte Carlo standard error (MCSE) of the mean estimate:\n\njulia> using Statistics, StatsBase\n\njulia> x = randn(1000, 4, 3) .+ reshape(0:10:20, 1, 1, :);\n\njulia> summarize(x, mean, std, :mcse_mean => sem; name=\"Mean/Std\")\nMean/Std\n mean std mcse_mean\n 1 0.0003 0.990 0.016\n 2 10.02 0.988 0.016\n 3 19.98 0.988 0.016\n\nAvoid recomputing the mean by using mean_and_std, and provide parameter names:\n\njulia> summarize(x, (:mean, :std) => mean_and_std, mad; var_names=[:a, :b, :c])\nSummaryStats\n mean std mad\n a 0.000305 0.990 0.978\n b 10.0 0.988 0.995\n c 20.0 0.988 0.979\n\nNote that when an estimator and its MCSE are both computed, the MCSE is used to determine the number of significant digits that will be displayed.\n\njulia> summarize(x; var_names=[:a, :b, :c])\nSummaryStats\n mean std hdi_3% hdi_97% mcse_mean mcse_std ess_tail ess_bulk r ⋯\n a 0.0003 0.99 -1.92 1.78 0.016 0.012 3567 3663 1 ⋯\n b 10.02 0.99 8.17 11.9 0.016 0.011 3841 3906 1 ⋯\n c 19.98 0.99 18.1 21.9 0.016 0.012 3892 3749 1 ⋯\n 1 column omitted\n\nCompute just the statistics with an 89% HDI on all parameters, and provide the parameter names:\n\njulia> summarize(x, default_stats(; prob_interval=0.89)...; var_names=[:a, :b, :c])\nSummaryStats\n mean std hdi_5.5% hdi_94.5%\n a 0.000305 0.990 -1.63 1.52\n b 10.0 0.988 8.53 11.6\n c 20.0 0.988 18.5 21.6\n\nCompute the summary stats focusing on Statistics.median:\n\njulia> summarize(x, default_summary_stats(median)...; var_names=[:a, :b, :c])\nSummaryStats\n median mad eti_3% eti_97% mcse_median ess_tail ess_median rhat\n a 0.004 0.978 -1.83 1.89 0.020 3567 3336 1.00\n b 10.02 0.995 8.17 11.9 0.023 3841 3787 1.00\n c 19.99 0.979 18.1 21.9 0.020 3892 3829 
1.00\n\n\n\n\n","category":"function"},{"location":"api/stats/#StatsBase.summarystats","page":"Stats","title":"StatsBase.summarystats","text":"summarystats(data::InferenceData; group=:posterior, kwargs...) -> SummaryStats\nsummarystats(data::Dataset; kwargs...) -> SummaryStats\n\nCompute default summary statistics for the data using summarize.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#General-statistics","page":"Stats","title":"General statistics","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"hdi\nhdi!\nr2_score","category":"page"},{"location":"api/stats/#PosteriorStats.hdi","page":"Stats","title":"PosteriorStats.hdi","text":"hdi(samples::AbstractArray{<:Real}; prob=0.94) -> (; lower, upper)\n\nEstimate the unimodal highest density interval (HDI) of samples for the probability prob.\n\nThe HDI is the minimum width Bayesian credible interval (BCI). That is, it is the smallest possible interval containing (100*prob)% of the probability mass.[Hyndman1996]\n\nsamples is an array of shape (draws[, chains[, params...]]). If multiple parameters are present, then lower and upper are arrays with the shape (params...,), computed separately for each marginal.\n\nThis implementation uses the algorithm of [ChenShao1999].\n\nnote: Note\nAny default value of prob is arbitrary. The default value of prob=0.94 instead of a more common default like prob=0.95 is chosen to remind the user of this arbitrariness.\n\n[Hyndman1996]: Rob J. Hyndman (1996) Computing and Graphing Highest Density Regions, Amer. Stat., 50(2): 120-6. DOI: 10.1080/00031305.1996.10474359 jstor.\n\n[ChenShao1999]: Ming-Hui Chen & Qi-Man Shao (1999) Monte Carlo Estimation of Bayesian Credible and HPD Intervals, J Comput. Graph. Stat., 8:1, 69-92. DOI: 10.1080/10618600.1999.10474802 jstor.\n\nExamples\n\nHere we calculate the 83% HDI for a normal random variable:\n\njulia> x = randn(2_000);\n\njulia> hdi(x; prob=0.83) |> pairs\npairs(::NamedTuple) with 2 entries:\n :lower => -1.38266\n :upper => 1.25982\n\nWe can also calculate the HDI for a 3-dimensional array of samples:\n\njulia> x = randn(1_000, 1, 1) .+ reshape(0:5:10, 1, 1, :);\n\njulia> hdi(x) |> pairs\npairs(::NamedTuple) with 2 entries:\n :lower => [-1.9674, 3.0326, 8.0326]\n :upper => [1.90028, 6.90028, 11.9003]\n\n\n\n\n\nhdi(data::InferenceData; kwargs...) -> Dataset\nhdi(data::Dataset; kwargs...) 
-> Dataset\n\nCalculate the highest density interval (HDI) for each parameter in the data.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.hdi!","page":"Stats","title":"PosteriorStats.hdi!","text":"hdi!(samples::AbstractArray{<:Real}; prob=0.94) -> (; lower, upper)\n\nA version of hdi that sorts samples in-place while computing the HDI.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.r2_score","page":"Stats","title":"PosteriorStats.r2_score","text":"r2_score(y_true::AbstractVector, y_pred::AbstractArray) -> (; r2, r2_std)\n\nR² for linear Bayesian regression models.[GelmanGoodrich2019]\n\nArguments\n\ny_true: Observed data of length noutputs\ny_pred: Predicted data with size (ndraws[, nchains], noutputs)\n\n[GelmanGoodrich2019]: Andrew Gelman, Ben Goodrich, Jonah Gabry & Aki Vehtari (2019) R-squared for Bayesian Regression Models, The American Statistician, 73:3, 307-9, DOI: 10.1080/00031305.2018.1549100.\n\nExamples\n\njulia> using ArviZExampleData\n\njulia> idata = load_example_data(\"regression1d\");\n\njulia> y_true = idata.observed_data.y;\n\njulia> y_pred = PermutedDimsArray(idata.posterior_predictive.y, (:draw, :chain, :y_dim_0));\n\njulia> r2_score(y_true, y_pred) |> pairs\npairs(::NamedTuple) with 2 entries:\n :r2 => 0.683197\n :r2_std => 0.0368838\n\n\n\n\n\nr2_score(idata::InferenceData; y_name, y_pred_name) -> (; r2, r2_std)\n\nCompute R² from idata, automatically formatting the predictions to the correct shape.\n\nKeywords\n\ny_name: Name of observed data variable in idata.observed_data. If not provided, then the only observed data variable is used.\ny_pred_name: Name of posterior predictive variable in idata.posterior_predictive. If not provided, then y_name is used.\n\nExamples\n\njulia> using ArviZExampleData, PosteriorStats\n\njulia> idata = load_example_data(\"regression10d\");\n\njulia> r2_score(idata) |> pairs\npairs(::NamedTuple) with 2 entries:\n :r2 => 0.998385\n :r2_std => 0.000100621\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#Pareto-smoothed-importance-sampling","page":"Stats","title":"Pareto-smoothed importance sampling","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"PSISResult\ness_is\nPSISPlots.paretoshapeplot\npsis\npsis!","category":"page"},{"location":"api/stats/#PSIS.PSISResult","page":"Stats","title":"PSIS.PSISResult","text":"PSISResult\n\nResult of Pareto-smoothed importance sampling (PSIS) using psis.\n\nProperties\n\nlog_weights: un-normalized Pareto-smoothed log weights\nweights: normalized Pareto-smoothed weights (allocates a copy)\npareto_shape: Pareto k=ξ shape parameter\nnparams: number of parameters in log_weights\nndraws: number of draws in log_weights\nnchains: number of chains in log_weights\nreff: the ratio of the effective sample size of the unsmoothed importance ratios and the actual sample size.\ness: estimated effective sample size of estimate of mean using smoothed importance samples (see ess_is)\ntail_length: length of the upper tail of log_weights that was smoothed\ntail_dist: the generalized Pareto distribution that was fit to the tail of log_weights. 
Note that the tail weights are scaled to have a maximum of 1, so tail_dist * exp(maximum(log_ratios)) is the corresponding fit directly to the tail of log_ratios.\nnormalized::Bool: indicates whether log_weights are log-normalized along the sample dimensions.\n\nDiagnostic\n\nThe pareto_shape parameter k=ξ of the generalized Pareto distribution tail_dist can be used to diagnose reliability and convergence of estimates using the importance weights [VehtariSimpson2021].\n\nif k < 1/3, importance sampling is stable, and importance sampling (IS) and PSIS both are reliable.\nif k ≤ 1/2, then the importance ratio distribution has finite variance, and the central limit theorem holds. As k approaches the upper bound, IS becomes less reliable, while PSIS still works well but with a higher RMSE.\nif 1/2 < k ≤ 0.7, then the variance is infinite, and IS can behave quite poorly. However, PSIS works well in this regime.\nif 0.7 < k ≤ 1, then it quickly becomes impractical to collect enough importance weights to reliably compute estimates, and importance sampling is not recommended.\nif k > 1, then neither the variance nor the mean of the raw importance ratios exists. The convergence rate is close to zero, and bias can be large with practical sample sizes.\n\nSee PSISPlots.paretoshapeplot for a diagnostic plot.\n\n[VehtariSimpson2021]: Vehtari A, Simpson D, Gelman A, Yao Y, Gabry J. (2021). Pareto smoothed importance sampling. arXiv:1507.02646v7 [stat.CO]\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PSIS.ess_is","page":"Stats","title":"PSIS.ess_is","text":"ess_is(weights; reff=1)\n\nEstimate effective sample size (ESS) for importance sampling over the sample dimensions.\n\nGiven normalized weights w_{1:n}, the ESS is estimated using the L2-norm of the weights:\n\nESS(w_{1:n}) = r_eff / ∑_{i=1}^n w_i^2\n\nwhere r_eff is the relative efficiency of the log_weights.\n\ness_is(result::PSISResult; bad_shape_nan=true)\n\nEstimate ESS for Pareto-smoothed importance sampling.\n\nnote: Note\nESS estimates for Pareto shape values k > 0.7, which are unreliable and misleadingly high, are set to NaN. To avoid this, set bad_shape_nan=false.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PSIS.PSISPlots.paretoshapeplot","page":"Stats","title":"PSIS.PSISPlots.paretoshapeplot","text":"paretoshapeplot(values; showlines=false, ...)\nparetoshapeplot!(values; showlines=false, kwargs...)\n\nPlot shape parameters of fitted Pareto tail distributions for diagnosing convergence.\n\nvalues may be either a vector of Pareto shape parameters or a PSIS.PSISResult.\n\nIf showlines==true, horizontal lines indicating relevant Pareto shape thresholds are drawn. See PSIS.PSISResult for an explanation of the thresholds.\n\nAll remaining kwargs are forwarded to the plotting function.\n\nSee psis, PSISResult.\n\nExamples\n\nusing PSIS, Distributions, Plots\nproposal = Normal()\ntarget = TDist(7)\nx = rand(proposal, 1_000, 100)\nlog_ratios = logpdf.(target, x) .- logpdf.(proposal, x)\nresult = psis(log_ratios)\nparetoshapeplot(result)\n\nWe can also plot the Pareto shape parameters directly:\n\nparetoshapeplot(result.pareto_shape)\n\nWe can also use plot directly:\n\nplot(result.pareto_shape; showlines=true)\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PSIS.psis","page":"Stats","title":"PSIS.psis","text":"psis(log_ratios, reff = 1.0; kwargs...) -> PSISResult\npsis!(log_ratios, reff = 1.0; kwargs...) 
-> PSISResult\n\nCompute Pareto smoothed importance sampling (PSIS) log weights [VehtariSimpson2021].\n\nWhile psis computes smoothed log weights out-of-place, psis! smooths them in-place.\n\nArguments\n\nlog_ratios: an array of logarithms of importance ratios, with size (draws, [chains, [parameters...]]), where chains>1 would be used when chains are generated using Markov chain Monte Carlo.\nreff::Union{Real,AbstractArray}: the ratio(s) of effective sample size of log_ratios and the actual sample size reff = ess/(draws * chains), used to account for autocorrelation, e.g. due to Markov chain Monte Carlo. If an array, it must have the size (parameters...,) to match log_ratios.\n\nKeywords\n\nwarn=true: If true, warning messages are delivered\nnormalize=true: If true, the log-weights will be log-normalized so that exp.(log_weights) sums to 1 along the sample dimensions.\n\nReturns\n\nresult: a PSISResult object containing the results of the Pareto-smoothing.\n\nA warning is raised if the Pareto shape parameter k > 0.7. See PSISResult for details and PSISPlots.paretoshapeplot for a diagnostic plot.\n\n[VehtariSimpson2021]: Vehtari A, Simpson D, Gelman A, Yao Y, Gabry J. (2021). Pareto smoothed importance sampling. arXiv:1507.02646v7 [stat.CO]\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PSIS.psis!","page":"Stats","title":"PSIS.psis!","text":"psis(log_ratios, reff = 1.0; kwargs...) -> PSISResult\npsis!(log_ratios, reff = 1.0; kwargs...) -> PSISResult\n\nCompute Pareto smoothed importance sampling (PSIS) log weights [VehtariSimpson2021].\n\nWhile psis computes smoothed log weights out-of-place, psis! smooths them in-place.\n\nArguments\n\nlog_ratios: an array of logarithms of importance ratios, with size (draws, [chains, [parameters...]]), where chains>1 would be used when chains are generated using Markov chain Monte Carlo.\nreff::Union{Real,AbstractArray}: the ratio(s) of effective sample size of log_ratios and the actual sample size reff = ess/(draws * chains), used to account for autocorrelation, e.g. due to Markov chain Monte Carlo. If an array, it must have the size (parameters...,) to match log_ratios.\n\nKeywords\n\nwarn=true: If true, warning messages are delivered\nnormalize=true: If true, the log-weights will be log-normalized so that exp.(log_weights) sums to 1 along the sample dimensions.\n\nReturns\n\nresult: a PSISResult object containing the results of the Pareto-smoothing.\n\nA warning is raised if the Pareto shape parameter k > 0.7. See PSISResult for details and PSISPlots.paretoshapeplot for a diagnostic plot.\n\n[VehtariSimpson2021]: Vehtari A, Simpson D, Gelman A, Yao Y, Gabry J. (2021). Pareto smoothed importance sampling. 
arXiv:1507.02646v7 [stat.CO]\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#LOO-and-WAIC","page":"Stats","title":"LOO and WAIC","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"AbstractELPDResult\nPSISLOOResult\nWAICResult\nelpd_estimates\ninformation_criterion\nloo\nwaic","category":"page"},{"location":"api/stats/#PosteriorStats.AbstractELPDResult","page":"Stats","title":"PosteriorStats.AbstractELPDResult","text":"abstract type AbstractELPDResult\n\nAn abstract type representing the result of an ELPD computation.\n\nEvery subtype stores estimates of both the expected log predictive density (elpd) and the effective number of parameters p, as well as standard errors and pointwise estimates of each, from which other relevant estimates can be computed.\n\nSubtypes implement the following functions:\n\nelpd_estimates\ninformation_criterion\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.PSISLOOResult","page":"Stats","title":"PosteriorStats.PSISLOOResult","text":"Results of Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO).\n\nSee also: loo, AbstractELPDResult\n\nestimates: Estimates of the expected log pointwise predictive density (ELPD) and effective number of parameters (p)\npointwise: Pointwise estimates\npsis_result: Pareto-smoothed importance sampling (PSIS) results\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.WAICResult","page":"Stats","title":"PosteriorStats.WAICResult","text":"Results of computing the widely applicable information criterion (WAIC).\n\nSee also: waic, AbstractELPDResult\n\nestimates: Estimates of the expected log pointwise predictive density (ELPD) and effective number of parameters (p)\npointwise: Pointwise estimates\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.elpd_estimates","page":"Stats","title":"PosteriorStats.elpd_estimates","text":"elpd_estimates(result::AbstractELPDResult; pointwise=false) -> (; elpd, elpd_mcse, lpd)\n\nReturn the (E)LPD estimates from the result.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.information_criterion","page":"Stats","title":"PosteriorStats.information_criterion","text":"information_criterion(elpd, scale::Symbol)\n\nCompute the information criterion for the given scale from the elpd estimate.\n\nscale must be one of (:deviance, :log, :negative_log).\n\nSee also: loo, waic\n\n\n\n\n\ninformation_criterion(result::AbstractELPDResult, scale::Symbol; pointwise=false)\n\nCompute information criterion for the given scale from the existing ELPD result.\n\nscale must be one of (:deviance, :log, :negative_log).\n\nIf pointwise=true, then pointwise estimates are returned.\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.loo","page":"Stats","title":"PosteriorStats.loo","text":"loo(log_likelihood; reff=nothing, kwargs...) -> PSISLOOResult{<:NamedTuple,<:NamedTuple}\n\nCompute the Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO). [Vehtari2017][LOOFAQ]\n\nlog_likelihood must be an array of log-likelihood values with shape (draws, chains[, params...]).\n\nKeywords\n\nreff::Union{Real,AbstractArray{<:Real}}: The relative effective sample size(s) of the likelihood values. If an array, it must have the same data dimensions as the corresponding log-likelihood variable. 
If not provided, then this is estimated using MCMCDiagnosticTools.ess.\nkwargs: Remaining keywords are forwarded to PSIS.psis.\n\nSee also: PSISLOOResult, waic\n\n[Vehtari2017]: Vehtari, A., Gelman, A. & Gabry, J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat Comput 27, 1413–1432 (2017). doi: 10.1007/s11222-016-9696-4 arXiv: 1507.04544\n\n[LOOFAQ]: Aki Vehtari. Cross-validation FAQ. https://mc-stan.org/loo/articles/online-only/faq.html\n\nExamples\n\nManually compute R_eff and calculate PSIS-LOO of a model:\n\njulia> using ArviZExampleData, MCMCDiagnosticTools\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> log_like = PermutedDimsArray(idata.log_likelihood.obs, (:draw, :chain, :school));\n\njulia> reff = ess(log_like; kind=:basic, split_chains=1, relative=true);\n\njulia> loo(log_like; reff)\nPSISLOOResult with estimates\n elpd elpd_mcse p p_mcse\n -31 1.4 0.9 0.34\n\nand PSISResult with 500 draws, 4 chains, and 8 parameters\nPareto shape (k) diagnostic values:\n Count Min. ESS\n (-Inf, 0.5] good 7 (87.5%) 151\n (0.5, 0.7] okay 1 (12.5%) 446\n\n\n\n\n\nloo(data::Dataset; [var_name::Symbol,] kwargs...) -> PSISLOOResult{<:NamedTuple,<:Dataset}\nloo(data::InferenceData; [var_name::Symbol,] kwargs...) -> PSISLOOResult{<:NamedTuple,<:Dataset}\n\nCompute PSIS-LOO from log-likelihood values in data.\n\nIf more than one log-likelihood variable is present, then var_name must be provided.\n\nExamples\n\nCalculate PSIS-LOO of a model:\n\njulia> using ArviZExampleData, PosteriorStats\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> loo(idata)\nPSISLOOResult with estimates\n elpd elpd_mcse p p_mcse\n -31 1.4 0.9 0.34\n\nand PSISResult with 500 draws, 4 chains, and 8 parameters\nPareto shape (k) diagnostic values:\n Count Min. ESS\n (-Inf, 0.5] good 6 (75.0%) 135\n (0.5, 0.7] okay 2 (25.0%) 421\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.waic","page":"Stats","title":"PosteriorStats.waic","text":"waic(log_likelihood::AbstractArray) -> WAICResult{<:NamedTuple,<:NamedTuple}\n\nCompute the widely applicable information criterion (WAIC).[Watanabe2010][Vehtari2017][LOOFAQ]\n\nlog_likelihood must be an array of log-likelihood values with shape (draws, chains[, params...]).\n\nSee also: WAICResult, loo\n\n[Watanabe2010]: Watanabe, S. Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory. 11(116):3571−3594, 2010. https://jmlr.csail.mit.edu/papers/v11/watanabe10a.html\n\n[Vehtari2017]: Vehtari, A., Gelman, A. & Gabry, J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat Comput 27, 1413–1432 (2017). doi: 10.1007/s11222-016-9696-4 arXiv: 1507.04544\n\n[LOOFAQ]: Aki Vehtari. Cross-validation FAQ. 
https://mc-stan.org/loo/articles/online-only/faq.html\n\nExamples\n\nCalculate WAIC of a model:\n\njulia> using ArviZExampleData\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> log_like = PermutedDimsArray(idata.log_likelihood.obs, (:draw, :chain, :school));\n\njulia> waic(log_like)\nWAICResult with estimates\n elpd elpd_mcse p p_mcse\n -31 1.4 0.9 0.33\n\n\n\n\n\nwaic(data::Dataset; [var_name::Symbol]) -> WAICResult{<:NamedTuple,<:Dataset}\nwaic(data::InferenceData; [var_name::Symbol]) -> WAICResult{<:NamedTuple,<:Dataset}\n\nCompute WAIC from log-likelihood values in data.\n\nIf more than one log-likelihood variable is present, then var_name must be provided.\n\nExamples\n\nCalculate WAIC of a model:\n\njulia> using ArviZExampleData, PosteriorStats\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> waic(idata)\nWAICResult with estimates\n elpd elpd_mcse p p_mcse\n -31 1.4 0.9 0.33\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#Model-comparison","page":"Stats","title":"Model comparison","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"ModelComparisonResult\ncompare\nmodel_weights","category":"page"},{"location":"api/stats/#PosteriorStats.ModelComparisonResult","page":"Stats","title":"PosteriorStats.ModelComparisonResult","text":"ModelComparisonResult\n\nResult of model comparison using ELPD.\n\nThis struct implements the Tables and TableTraits interfaces.\n\nEach field returns a collection of the corresponding entry for each model:\n\nname: Names of the models, if provided.\nrank: Ranks of the models (ordered by decreasing ELPD)\nelpd_diff: ELPD of a model subtracted from the largest ELPD of any model\nelpd_diff_mcse: Monte Carlo standard error of the ELPD difference\nweight: Model weights computed with weights_method\nelpd_result: AbstractELPDResults for each model, which can be used to access useful stats like ELPD estimates, pointwise estimates, and Pareto shape values for PSIS-LOO\nweights_method: Method used to compute model weights with model_weights\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.compare","page":"Stats","title":"PosteriorStats.compare","text":"compare(models; kwargs...) -> ModelComparisonResult\n\nCompare models based on their expected log pointwise predictive density (ELPD).\n\nThe ELPD is estimated either by Pareto smoothed importance sampling leave-one-out cross-validation (LOO) or using the widely applicable information criterion (WAIC). We recommend loo. For more theory, see this paper by some of the leading authorities on model comparison: dx.doi.org/10.1111/1467-9868.00353\n\nArguments\n\nmodels: a Tuple, NamedTuple, or AbstractVector whose values are either AbstractELPDResult entries or any argument to elpd_method.\n\nKeywords\n\nweights_method::AbstractModelWeightsMethod=Stacking(): the method to be used to weight the models. See model_weights for details\nelpd_method=loo: a method that computes an AbstractELPDResult from an argument in models.\nsort::Bool=true: Whether to sort models by decreasing ELPD.\n\nReturns\n\nModelComparisonResult: A container for the model comparison results. The fields contain a similar collection to models.\n\nExamples\n\nCompare the centered and non-centered models of the eight school problem using the defaults: loo and Stacking weights. 
A custom myloo method formats the inputs as expected by loo.\n\njulia> using ArviZExampleData\n\njulia> models = (\n centered=load_example_data(\"centered_eight\"),\n non_centered=load_example_data(\"non_centered_eight\"),\n );\n\njulia> function myloo(idata)\n log_like = PermutedDimsArray(idata.log_likelihood.obs, (2, 3, 1))\n return loo(log_like)\n end;\n\njulia> mc = compare(models; elpd_method=myloo)\n┌ Warning: 1 parameters had Pareto shape values 0.7 < k ≤ 1. Resulting importance sampling estimates are likely to be unstable.\n└ @ PSIS ~/.julia/packages/PSIS/...\nModelComparisonResult with Stacking weights\n rank elpd elpd_mcse elpd_diff elpd_diff_mcse weight p ⋯\n non_centered 1 -31 1.4 0 0.0 1.0 0.9 ⋯\n centered 2 -31 1.4 0.06 0.067 0.0 0.9 ⋯\n 1 column omitted\njulia> mc.weight |> pairs\npairs(::NamedTuple) with 2 entries:\n :non_centered => 1.0\n :centered => 5.34175e-19\n\nCompare the same models from pre-computed PSIS-LOO results, this time computing BootstrappedPseudoBMA weights:\n\njulia> elpd_results = mc.elpd_result;\n\njulia> compare(elpd_results; weights_method=BootstrappedPseudoBMA())\nModelComparisonResult with BootstrappedPseudoBMA weights\n rank elpd elpd_mcse elpd_diff elpd_diff_mcse weight p ⋯\n non_centered 1 -31 1.4 0 0.0 0.52 0.9 ⋯\n centered 2 -31 1.4 0.06 0.067 0.48 0.9 ⋯\n 1 column omitted\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#PosteriorStats.model_weights","page":"Stats","title":"PosteriorStats.model_weights","text":"model_weights(elpd_results; method=Stacking())\nmodel_weights(method::AbstractModelWeightsMethod, elpd_results)\n\nCompute weights for each model in elpd_results using method.\n\nelpd_results is a Tuple, NamedTuple, or AbstractVector with AbstractELPDResult entries. The weights are returned in the same type of collection.\n\nStacking is the recommended approach, as it performs well even when the true data generating process is not included among the candidate models. See [YaoVehtari2018] for details.\n\nSee also: AbstractModelWeightsMethod, compare\n\n[YaoVehtari2018]: Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman. Using Stacking to Average Bayesian Predictive Distributions. 2018. Bayesian Analysis. 13, 3, 917–1007. doi: 10.1214/17-BA1091 arXiv: 1704.02030\n\nExamples\n\nCompute Stacking weights for two models:\n\njulia> using ArviZExampleData\n\njulia> models = (\n centered=load_example_data(\"centered_eight\"),\n non_centered=load_example_data(\"non_centered_eight\"),\n );\n\njulia> elpd_results = map(models) do idata\n log_like = PermutedDimsArray(idata.log_likelihood.obs, (2, 3, 1))\n return loo(log_like)\n end;\n┌ Warning: 1 parameters had Pareto shape values 0.7 < k ≤ 1. 
Resulting importance sampling estimates are likely to be unstable.\n└ @ PSIS ~/.julia/packages/PSIS/...\n\njulia> model_weights(elpd_results; method=Stacking()) |> pairs\npairs(::NamedTuple) with 2 entries:\n :centered => 5.34175e-19\n :non_centered => 1.0\n\nNow we compute BootstrappedPseudoBMA weights for the same models:\n\njulia> model_weights(elpd_results; method=BootstrappedPseudoBMA()) |> pairs\npairs(::NamedTuple) with 2 entries:\n :centered => 0.483723\n :non_centered => 0.516277\n\n\n\n\n\n","category":"function"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"The following model weighting methods are available","category":"page"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"AbstractModelWeightsMethod\nBootstrappedPseudoBMA\nPseudoBMA\nStacking","category":"page"},{"location":"api/stats/#PosteriorStats.AbstractModelWeightsMethod","page":"Stats","title":"PosteriorStats.AbstractModelWeightsMethod","text":"abstract type AbstractModelWeightsMethod\n\nAn abstract type representing methods for computing model weights.\n\nSubtypes implement model_weights(method, elpd_results).\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.BootstrappedPseudoBMA","page":"Stats","title":"PosteriorStats.BootstrappedPseudoBMA","text":"struct BootstrappedPseudoBMA{R<:Random.AbstractRNG, T<:Real} <: AbstractModelWeightsMethod\n\nModel weighting method using pseudo Bayesian Model Averaging using Akaike-type weighting with the Bayesian bootstrap (pseudo-BMA+)[YaoVehtari2018].\n\nThe Bayesian bootstrap stabilizes the model weights.\n\nBootstrappedPseudoBMA(; rng=Random.default_rng(), samples=1_000, alpha=1)\nBootstrappedPseudoBMA(rng, samples, alpha)\n\nConstruct the method.\n\nrng::Random.AbstractRNG: The random number generator to use for the Bayesian bootstrap\nsamples::Int64: The number of samples to draw for bootstrapping\nalpha::Real: The shape parameter in the Dirichlet distribution used for the Bayesian bootstrap. The default (1) corresponds to a uniform distribution on the simplex.\n\nSee also: Stacking\n\n[YaoVehtari2018]: Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman. Using Stacking to Average Bayesian Predictive Distributions. 2018. Bayesian Analysis. 13, 3, 917–1007. doi: 10.1214/17-BA1091 arXiv: 1704.02030\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.PseudoBMA","page":"Stats","title":"PosteriorStats.PseudoBMA","text":"struct PseudoBMA <: AbstractModelWeightsMethod\n\nModel weighting method using pseudo Bayesian Model Averaging (pseudo-BMA) and Akaike-type weighting.\n\nPseudoBMA(; regularize=false)\nPseudoBMA(regularize)\n\nConstruct the method with optional regularization of the weights using the standard error of the ELPD estimate.\n\nnote: Note\nThis approach is not recommended, as it produces unstable weight estimates. It is recommended to instead use BootstrappedPseudoBMA to stabilize the weights or Stacking. For details, see [YaoVehtari2018].\n\n[YaoVehtari2018]: Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman. Using Stacking to Average Bayesian Predictive Distributions. 2018. Bayesian Analysis. 13, 3, 917–1007. 
doi: 10.1214/17-BA1091 arXiv: 1704.02030\n\nSee also: Stacking\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#PosteriorStats.Stacking","page":"Stats","title":"PosteriorStats.Stacking","text":"struct Stacking{O<:Optim.AbstractOptimizer} <: AbstractModelWeightsMethod\n\nModel weighting using stacking of predictive distributions[YaoVehtari2018].\n\nStacking(; optimizer=Optim.LBFGS(), options=Optim.Options())\nStacking(optimizer[, options])\n\nConstruct the method, optionally customizing the optimization.\n\noptimizer::Optim.AbstractOptimizer: The optimizer to use for the optimization of the weights. The optimizer must support projected gradient optimization via a manifold field.\noptions::Optim.Options: The Optim options to use for the optimization of the weights.\n\nSee also: BootstrappedPseudoBMA\n\n[YaoVehtari2018]: Yuling Yao, Aki Vehtari, Daniel Simpson, and Andrew Gelman. Using Stacking to Average Bayesian Predictive Distributions. 2018. Bayesian Analysis. 13, 3, 917–1007. doi: 10.1214/17-BA1091 arXiv: 1704.02030\n\n\n\n\n\n","category":"type"},{"location":"api/stats/#Predictive-checks","page":"Stats","title":"Predictive checks","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"loo_pit","category":"page"},{"location":"api/stats/#PosteriorStats.loo_pit","page":"Stats","title":"PosteriorStats.loo_pit","text":"loo_pit(y, y_pred, log_weights; kwargs...) -> Union{Real,AbstractArray}\n\nCompute leave-one-out probability integral transform (LOO-PIT) checks.\n\nArguments\n\ny: array of observations with shape (params...,)\ny_pred: array of posterior predictive samples with shape (draws, chains, params...).\nlog_weights: array of normalized log LOO importance weights with shape (draws, chains, params...).\n\nKeywords\n\nis_discrete: If not provided, then it is set to true iff elements of y and y_pred are all integer-valued. If true, then data are smoothed using smooth_data to make them non-discrete before estimating LOO-PIT values.\nkwargs: Remaining keywords are forwarded to smooth_data if data is discrete.\n\nReturns\n\npitvals: LOO-PIT values with same size as y. If y is a scalar, then pitvals is a scalar.\n\nLOO-PIT is a marginal posterior predictive check. If y_{-i} is the array y of observations with the ith observation left out, and y_i^* is a posterior prediction of the ith observation, then the LOO-PIT value for the ith observation is defined as\n\nP(y_i^* ≤ y_i | y_{-i}) = ∫_{-∞}^{y_i} p(y_i^* | y_{-i}) dy_i^*\n\nThe LOO posterior predictions and the corresponding observations should have similar distributions, so if conditional predictive distributions are well-calibrated, then all LOO-PIT values should be approximately uniformly distributed on [0, 1].[Gabry2019]\n\n[Gabry2019]: Gabry, J., Simpson, D., Vehtari, A., Betancourt, M. & Gelman, A. Visualization in Bayesian Workflow. J. R. Stat. Soc. Ser. A Stat. Soc. 182, 389–402 (2019). 
doi: 10.1111/rssa.12378 arXiv: 1709.01449\n\nExamples\n\nCalculate LOO-PIT values using as test quantity the observed values themselves.\n\njulia> using ArviZExampleData\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> y = idata.observed_data.obs;\n\njulia> y_pred = PermutedDimsArray(idata.posterior_predictive.obs, (:draw, :chain, :school));\n\njulia> log_like = PermutedDimsArray(idata.log_likelihood.obs, (:draw, :chain, :school));\n\njulia> log_weights = loo(log_like).psis_result.log_weights;\n\njulia> loo_pit(y, y_pred, log_weights)\n╭───────────────────────────────╮\n│ 8-element DimArray{Float64,1} │\n├───────────────────────────────┴──────────────────────────────────────── dims ┐\n ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n└──────────────────────────────────────────────────────────────────────────────┘\n \"Choate\" 0.943511\n \"Deerfield\" 0.63797\n \"Phillips Andover\" 0.316697\n \"Phillips Exeter\" 0.582252\n \"Hotchkiss\" 0.295321\n \"Lawrenceville\" 0.403318\n \"St. Paul's\" 0.902508\n \"Mt. Hermon\" 0.655275\n\nCalculate LOO-PIT values using as test quantity the square of the difference between each observation and mu.\n\njulia> using Statistics\n\njulia> mu = idata.posterior.mu;\n\njulia> T = y .- median(mu);\n\njulia> T_pred = y_pred .- mu;\n\njulia> loo_pit(T .^ 2, T_pred .^ 2, log_weights)\n╭───────────────────────────────╮\n│ 8-element DimArray{Float64,1} │\n├───────────────────────────────┴──────────────────────────────────────── dims ┐\n ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n└──────────────────────────────────────────────────────────────────────────────┘\n \"Choate\" 0.873577\n \"Deerfield\" 0.243686\n \"Phillips Andover\" 0.357563\n \"Phillips Exeter\" 0.149908\n \"Hotchkiss\" 0.435094\n \"Lawrenceville\" 0.220627\n \"St. Paul's\" 0.775086\n \"Mt. Hermon\" 0.296706\n\n\n\n\n\nloo_pit(idata::InferenceData, log_weights; kwargs...) -> DimArray\n\nCompute LOO-PIT values using existing normalized log LOO importance weights.\n\nKeywords\n\ny_name: Name of observed data variable in idata.observed_data. If not provided, then the only observed data variable is used.\ny_pred_name: Name of posterior predictive variable in idata.posterior_predictive. If not provided, then y_name is used.\nkwargs: Remaining keywords are forwarded to the base method of loo_pit.\n\nExamples\n\nCalculate LOO-PIT values using already computed log weights.\n\njulia> using ArviZExampleData, PosteriorStats\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> loo_result = loo(idata; var_name=:obs);\n\njulia> loo_pit(idata, loo_result.psis_result.log_weights; y_name=:obs)\n╭───────────────────────────────────────────╮\n│ 8-element DimArray{Float64,1} loo_pit_obs │\n├───────────────────────────────────────────┴──────────────────────────── dims ┐\n ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n└──────────────────────────────────────────────────────────────────────────────┘\n \"Choate\" 0.943511\n \"Deerfield\" 0.63797\n \"Phillips Andover\" 0.316697\n \"Phillips Exeter\" 0.582252\n \"Hotchkiss\" 0.295321\n \"Lawrenceville\" 0.403318\n \"St. Paul's\" 0.902508\n \"Mt. Hermon\" 0.655275\n\n\n\n\n\nloo_pit(idata::InferenceData; kwargs...) -> DimArray\n\nCompute LOO-PIT from groups in idata using PSIS-LOO.\n\nKeywords\n\ny_name: Name of observed data variable in idata.observed_data. 
If not provided, then the only observed data variable is used.\ny_pred_name: Name of posterior predictive variable in idata.posterior_predictive. If not provided, then y_name is used.\nlog_likelihood_name: Name of log-likelihood variable in idata.log_likelihood. If not provided, then y_name is used if idata has a log_likelihood group, otherwise the only variable is used.\nreff::Union{Real,AbstractArray{<:Real}}: The relative effective sample size(s) of the likelihood values. If an array, it must have the same data dimensions as the corresponding log-likelihood variable. If not provided, then this is estimated using ess.\nkwargs: Remaining keywords are forwarded to the base method of loo_pit.\n\nExamples\n\nCalculate LOO-PIT values using as test quantity the observed values themselves.\n\njulia> using ArviZExampleData, PosteriorStats\n\njulia> idata = load_example_data(\"centered_eight\");\n\njulia> loo_pit(idata; y_name=:obs)\n╭───────────────────────────────────────────╮\n│ 8-element DimArray{Float64,1} loo_pit_obs │\n├───────────────────────────────────────────┴──────────────────────────── dims ┐\n ↓ school Categorical{String} [Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered\n└──────────────────────────────────────────────────────────────────────────────┘\n \"Choate\" 0.943511\n \"Deerfield\" 0.63797\n \"Phillips Andover\" 0.316697\n \"Phillips Exeter\" 0.582252\n \"Hotchkiss\" 0.295321\n \"Lawrenceville\" 0.403318\n \"St. Paul's\" 0.902508\n \"Mt. Hermon\" 0.655275\n\n\n\n\n\n","category":"function"},{"location":"api/stats/#Utilities","page":"Stats","title":"Utilities","text":"","category":"section"},{"location":"api/stats/","page":"Stats","title":"Stats","text":"PosteriorStats.smooth_data","category":"page"},{"location":"api/stats/#PosteriorStats.smooth_data","page":"Stats","title":"PosteriorStats.smooth_data","text":"smooth_data(y; dims=:, interp_method=CubicSpline, offset_frac=0.01)\n\nSmooth y along dims using interp_method.\n\ninterp_method is a 2-argument callabale that takes the arguments y and x and returns a DataInterpolations.jl interpolation method, defaulting to a cubic spline interpolator.\n\noffset_frac is the fraction of the length of y to use as an offset when interpolating.\n\n\n\n\n\n","category":"function"},{"location":"api/dataset/#dataset-api","page":"Dataset","title":"Dataset","text":"","category":"section"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"Pages = [\"dataset.md\"]","category":"page"},{"location":"api/dataset/#Type-definition","page":"Dataset","title":"Type definition","text":"","category":"section"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"Dataset","category":"page"},{"location":"api/dataset/#InferenceObjects.Dataset","page":"Dataset","title":"InferenceObjects.Dataset","text":"Dataset{K,T,N,L} <: DimensionalData.AbstractDimStack{K,T,N,L}\n\nContainer of dimensional arrays sharing some dimensions.\n\nThis type is an DimensionalData.AbstractDimStack that implements the same interface as DimensionalData.DimStack and has identical usage.\n\nWhen a Dataset is passed to Python, it is converted to an xarray.Dataset without copying the data. That is, the Python object shares the same memory as the Julia object. 
However, if an xarray.Dataset is passed to Julia, its data must be copied.\n\nConstructors\n\nDataset(data::DimensionalData.AbstractDimArray...)\nDataset(data::Tuple{Vararg{<:DimensionalData.AbstractDimArray}})\nDataset(data::NamedTuple{Keys,Vararg{<:DimensionalData.AbstractDimArray}})\nDataset(\n data::NamedTuple,\n dims::Tuple{Vararg{DimensionalData.Dimension}};\n metadata=DimensionalData.NoMetadata(),\n)\n\nIn most cases, use convert_to_dataset to create a Dataset instead of directly using a constructor.\n\n\n\n\n\n","category":"type"},{"location":"api/dataset/#General-conversion","page":"Dataset","title":"General conversion","text":"","category":"section"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"convert_to_dataset\nnamedtuple_to_dataset","category":"page"},{"location":"api/dataset/#InferenceObjects.convert_to_dataset","page":"Dataset","title":"InferenceObjects.convert_to_dataset","text":"convert_to_dataset(obj; group = :posterior, kwargs...) -> Dataset\n\nConvert a supported object to a Dataset.\n\nIn most cases, this function calls convert_to_inference_data and returns the corresponding group.\n\n\n\n\n\n","category":"function"},{"location":"api/dataset/#InferenceObjects.namedtuple_to_dataset","page":"Dataset","title":"InferenceObjects.namedtuple_to_dataset","text":"namedtuple_to_dataset(data; kwargs...) -> Dataset\n\nConvert NamedTuple mapping variable names to arrays to a Dataset.\n\nAny non-array values will be converted to a 0-dimensional array.\n\nKeywords\n\nattrs::AbstractDict{<:AbstractString}: a collection of metadata to attach to the dataset, in addition to defaults. Values should be JSON serializable.\nlibrary::Union{String,Module}: library used for performing inference. Will be attached to the attrs metadata.\ndims: a collection mapping variable names to collections of objects containing dimension names. Acceptable such objects are:\nSymbol: dimension name\nType{<:DimensionalData.Dimension}: dimension type\nDimensionalData.Dimension: dimension, potentially with indices\nNothing: no dimension name provided, dimension name is automatically generated\ncoords: a collection indexable by dimension name specifying the indices of the given dimension. If indices for a dimension in dims are provided, they are used even if the dimension contains its own indices. If a dimension is missing, its indices are automatically generated.\n\n\n\n\n\n","category":"function"},{"location":"api/dataset/#DimensionalData","page":"Dataset","title":"DimensionalData","text":"","category":"section"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"As a DimensionalData.AbstractDimStack, Dataset also implements the AbstractDimStack API and can be used like a DimStack. See DimensionalData's documentation for example usage.","category":"page"},{"location":"api/dataset/#Tables-inteface","page":"Dataset","title":"Tables interface","text":"","category":"section"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"Dataset implements the Tables interface. This allows Datasets to be used as sources for any function that can accept a table. 
More generally, it's straightforward to:","category":"page"},{"location":"api/dataset/","page":"Dataset","title":"Dataset","text":"write to CSV with CSV.jl\nflatten to a DataFrame with DataFrames.jl\nplot with StatsPlots.jl\nplot with AlgebraOfGraphics.jl","category":"page"},{"location":"#arvizjl","page":"Home","title":"ArviZ.jl: Exploratory analysis of Bayesian models in Julia","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"ArviZ.jl is a Julia meta-package for exploratory analysis of Bayesian models. It is part of the ArviZ project, which also includes a related Python package.","category":"page"},{"location":"","page":"Home","title":"Home","text":"ArviZ consists of and re-exports the following subpackages, along with extensions integrating them with InferenceObjects:","category":"page"},{"location":"","page":"Home","title":"Home","text":"InferenceObjects.jl: a base package implementing the InferenceData type with utilities for building, saving, and working with it\nMCMCDiagnosticTools.jl: diagnostics for Markov Chain Monte Carlo methods\nPSIS.jl: Pareto-smoothed importance sampling\nPosteriorStats.jl: common statistical analyses for the Bayesian workflow","category":"page"},{"location":"","page":"Home","title":"Home","text":"Additional functionality can be loaded with the following packages:","category":"page"},{"location":"","page":"Home","title":"Home","text":"ArviZExampleData.jl: example InferenceData objects, useful for demonstration and testing\nArviZPythonPlots.jl: Python ArviZ's library of plotting functions for Julia types","category":"page"},{"location":"","page":"Home","title":"Home","text":"See the navigation bar for more useful packages.","category":"page"},{"location":"#installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"From the Julia REPL, type ] to enter the Pkg REPL mode and run","category":"page"},{"location":"","page":"Home","title":"Home","text":"pkg> add ArviZ","category":"page"},{"location":"#usage","page":"Home","title":"Usage","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"See the Quickstart for example usage and the API Overview for a description of functions.","category":"page"},{"location":"#extendingarviz","page":"Home","title":"Extending ArviZ.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"To use a custom data type with ArviZ.jl, simply overload InferenceObjects.convert_to_inference_data to convert your input(s) to an InferenceObjects.InferenceData, as sketched below.","category":"page"},
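A minimal sketch of such an overload; MyPosterior and its draws field are hypothetical, invented for illustration, and the sketch assumes convert_to_inference_data already accepts a NamedTuple of draw arrays (only the overload pattern itself is the documented extension point):

```julia
using InferenceObjects

# Hypothetical container of posterior draws, stored as a NamedTuple of
# arrays (assumed to have draw and chain as their leading dimensions).
struct MyPosterior
    draws::NamedTuple
end

# Forward to the existing NamedTuple method so that ArviZ.jl functions
# accept MyPosterior wherever they call convert_to_inference_data.
function InferenceObjects.convert_to_inference_data(obj::MyPosterior; kwargs...)
    return convert_to_inference_data(obj.draws; kwargs...)
end
```

With this in place, a call like convert_to_inference_data(MyPosterior((; mu=randn(500, 4)))) would produce an InferenceData with a posterior group.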
{"location":"working_with_inference_data/#working-with-inference-data","page":"Working with InferenceData","title":"Working with InferenceData","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"using ArviZ, ArviZExampleData, DimensionalData, Statistics","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Here we present a collection of common manipulations you can use while working with InferenceData.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Let's load one of ArviZ's example datasets. posterior, posterior_predictive, etc. are the groups stored in idata, and they are stored as Datasets. In this HTML view, you can click a group name to expand a summary of the group.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata = load_example_data(\"centered_eight\")","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"info: Info\nDatasets are DimensionalData.AbstractDimStacks and can be used identically. The variables a Dataset contains are called \"layers\", and dimensions of the same name that appear in more than one layer within a Dataset must have the same indices.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"InferenceData behaves like a NamedTuple and can be used similarly. Note that unlike a NamedTuple, the groups always appear in a specific order.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"length(idata) # number of groups","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"keys(idata) # group names","category":"page"},{"location":"working_with_inference_data/#Get-the-dataset-corresponding-to-a-single-group","page":"Working with InferenceData","title":"Get the dataset corresponding to a single group","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Group datasets can be accessed either as properties or as indexed items.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"post = idata.posterior","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"post is the dataset itself, so this is a non-allocating operation.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata[:posterior] === post","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"InferenceData supports a more advanced indexing syntax, which we'll see later.","category":"page"},{"location":"working_with_inference_data/#Getting-a-new-InferenceData-with-a-subset-of-groups","page":"Working with InferenceData","title":"Getting a new InferenceData with a subset of groups","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"We can index by a collection of group names to get a new InferenceData with just those groups. 
This is also non-allocating.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata_sub = idata[(:posterior, :posterior_predictive)]","category":"page"},{"location":"working_with_inference_data/#Adding-groups-to-an-InferenceData","page":"Working with InferenceData","title":"Adding groups to an InferenceData","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"InferenceData is immutable, so to add or replace groups we use merge to create a new object.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"merge(idata_sub, idata[(:observed_data, :prior)])","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"We can also use Base.setindex to add or replace a single group out-of-place.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Base.setindex(idata_sub, idata.prior, :prior)","category":"page"},{"location":"working_with_inference_data/#Add-a-new-variable","page":"Working with InferenceData","title":"Add a new variable","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Dataset is also immutable. So while the values within the underlying data arrays can be mutated, layers cannot be added to or removed from Datasets, and groups cannot be added to or removed from InferenceData.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Instead, we also do this out-of-place using merge.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"merge(post, (log_tau=log.(post[:tau]),))","category":"page"},{"location":"working_with_inference_data/#Obtain-an-array-for-a-given-parameter","page":"Working with InferenceData","title":"Obtain an array for a given parameter","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Let’s say we want to get the values for tau as an array. 
Parameters can be accessed with either property or index syntax.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"post.tau","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"post[:tau] === post.tau","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"To remove the dimensions, just use parent to retrieve the underlying array.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"parent(post.tau)","category":"page"},{"location":"working_with_inference_data/#Get-the-dimension-lengths","page":"Working with InferenceData","title":"Get the dimension lengths","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Let’s check how many schools (the groups in our hierarchical model) there are.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"size(idata.observed_data, :school)","category":"page"},{"location":"working_with_inference_data/#Get-coordinate/index-values","page":"Working with InferenceData","title":"Get coordinate/index values","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"What are the names of the schools in our hierarchical model? You can access them from the coordinate name school in this case.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"DimensionalData.index(idata.observed_data, :school)","category":"page"},{"location":"working_with_inference_data/#Get-a-subset-of-chains","page":"Working with InferenceData","title":"Get a subset of chains","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Let’s keep only chain 0 here. For the subset to take effect on all relevant InferenceData groups – posterior, sample_stats, log_likelihood, and posterior_predictive – we will index InferenceData instead of Dataset.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Here we use DimensionalData's At selector. Its other selectors are also supported.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata[chain=At(0)]","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Note that in this case, prior only contains chain 0. If it also had the other chains, we could have passed chain=At([0, 2]) to subset by chains 0 and 2.","category":"page"},
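To make the distinction between value-based and positional indexing concrete, here is a minimal sketch reusing idata from above; the behavior noted in the comments is spelled out in the warning that follows:

```julia
# Value-based: At matches the chain whose index *value* is 0.
idata[chain=At(0)]

# Positional: a plain integer vector indexes positions along the chain
# dimension, so this selects the 1st and 2nd chains (the chains labeled
# 0 and 1), not the chains labeled 1 and 2.
idata[chain=[1, 2]]
```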
{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"warning: Warning\nIf we used idata[chain=[0, 2]] without the At selector, this would be equivalent to idata[chain=DimensionalData.index(idata.posterior, :chain)[0, 2]]; that is, [0, 2] indexes an array of dimension indices, which here would error. If instead we had requested idata[chain=[1, 2]], we would not hit an error, but we would index the wrong chains. So it's important to always use a selector when indexing by the values of dimension indices.","category":"page"},{"location":"working_with_inference_data/#Remove-the-first-n-draws-(burn-in)","page":"Working with InferenceData","title":"Remove the first n draws (burn-in)","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Let’s say we want to remove the first 100 draws from all the chains and all InferenceData groups with draws. To do this we use the .. syntax from IntervalSets.jl, which is exported by DimensionalData.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata[draw=100 .. Inf]","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"If you check the object, you will see that the groups posterior, posterior_predictive, prior, and sample_stats have 400 draws, compared to idata, which has 500. The group observed_data has not been affected because it does not have the draw dimension.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Alternatively, you can change a subset of groups by combining indexing styles with merge. Here we use this to build a new InferenceData where we have discarded the first 100 draws only from posterior.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"merge(idata, idata[(:posterior,), draw=100 .. Inf])","category":"page"},{"location":"working_with_inference_data/#Compute-posterior-mean-values-along-draw-and-chain-dimensions","page":"Working with InferenceData","title":"Compute posterior mean values along draw and chain dimensions","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"To compute the mean value of the posterior samples, do the following:","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"mean(post)","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"This computes the mean over all dimensions, discarding them and returning the result as a NamedTuple. 
This may be what you wanted for mu and tau, which have only two dimensions (chain and draw), but maybe not what you expected for theta, which has one more dimension, school.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"You can instead specify the dimensions along which to compute the mean (or other functions), which returns a Dataset.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"mean(post; dims=(:chain, :draw))","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"The singleton dimensions of chain and draw now contain meaningless indices, so you may want to discard them, which you can do with dropdims.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"dropdims(mean(post; dims=(:chain, :draw)); dims=(:chain, :draw))","category":"page"},{"location":"working_with_inference_data/#Renaming-a-dimension","page":"Working with InferenceData","title":"Renaming a dimension","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"We can rename a dimension in a Dataset using DimensionalData's set method:","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"theta_bis = set(post.theta; school=:school_bis)","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"We can use this, for example, to broadcast functions across multiple arrays, automatically matching up shared dimensions, using DimensionalData.broadcast_dims.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"theta_school_diff = broadcast_dims(-, post.theta, theta_bis)","category":"page"},{"location":"working_with_inference_data/#Compute-and-store-posterior-pushforward-quantities","page":"Working with InferenceData","title":"Compute and store posterior pushforward quantities","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"We use “posterior pushforward quantities” to refer to quantities that are not variables in the posterior but deterministic computations using posterior variables.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"You can compute these pushforward operations and store them as a new variable in a copy of the posterior group.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Here we'll create a new InferenceData with theta_school_diff in the posterior:","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata_new = Base.setindex(idata, merge(post, (; theta_school_diff)), :posterior)","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with 
InferenceData","text":"Once you have these pushforward quantities in an InferenceData, you’ll be able to plot them with ArviZ functions, calculate stats and diagnostics on them, or save and share the InferenceData object with the pushforward quantities included.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Here we compute the mcse of theta_school_diff:","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"mcse(idata_new.posterior).theta_school_diff","category":"page"},{"location":"working_with_inference_data/#Advanced-subsetting","page":"Working with InferenceData","title":"Advanced subsetting","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"To select the values corresponding to differences between particular schools, such as Choate and Deerfield, index each dimension with a vector of school names (this selects the full cross of the two subsets):","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"school_idx = [\"Choate\", \"Hotchkiss\", \"Mt. Hermon\"]\nschool_bis_idx = [\"Deerfield\", \"Choate\", \"Lawrenceville\"]\ntheta_school_diff[school=At(school_idx), school_bis=At(school_bis_idx)]","category":"page"},{"location":"working_with_inference_data/#Add-new-chains-using-cat","page":"Working with InferenceData","title":"Add new chains using cat","text":"","category":"section"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"Suppose that after checking the mcse you realize you need more samples, so you rerun the model with two chains and obtain an idata_rerun object.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"idata_rerun = InferenceData(; posterior=set(post[chain=At([0, 1])]; chain=[4, 5]))","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"You can combine the two using cat.","category":"page"},{"location":"working_with_inference_data/","page":"Working with InferenceData","title":"Working with InferenceData","text":"cat(idata[[:posterior]], idata_rerun; dims=:chain)","category":"page"}] } diff --git a/dev/working_with_inference_data/index.html b/dev/working_with_inference_data/index.html index fc6e1d0e..ddffe0b9 100644 --- a/dev/working_with_inference_data/index.html +++ b/dev/working_with_inference_data/index.html @@ -1213,4 +1213,4 @@ "inference_library" => "pymc"
- +