API: Turing.Inference
Turing.Inference.CSMC — Type
CSMC(...)
Equivalent to PG.
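Since CSMC is equivalent to PG, it is constructed the same way: the first argument is the number of particles. A minimal sketch, assuming the gdemo(x, y) model defined in the MH examples further below:
# Conditional SMC / Particle Gibbs with 10 particles
chain = sample(gdemo(1.5, 2.0), CSMC(10), 1_000)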
Turing.Inference.ESS — Type
ESS
Elliptical slice sampling algorithm.
Examples
julia> @model function gdemo(x)
           m ~ Normal()
           x ~ Normal(m, 0.5)
       end
gdemo (generic function with 2 methods)

julia> sample(gdemo(1.0), ESS(), 1_000) |> mean
Mean

│ Row │ parameters │ mean     │
│     │ Symbol     │ Float64  │
├─────┼────────────┼──────────┤
│ 1   │ m          │ 0.824853 │
Turing.Inference.Emcee — Type
Emcee(n_walkers::Int, stretch_length=2.0)
Affine-invariant ensemble sampling algorithm.
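A minimal usage sketch, assuming the gdemo(x) model from the ESS example above; the walker count and stretch length shown here are illustrative, and the number of walkers should comfortably exceed the number of parameters:
# Affine-invariant ensemble sampling with 20 walkers and the default stretch length
chain = sample(gdemo(1.0), Emcee(20, 2.0), 1_000)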
Reference
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. (2013). emcee: The MCMC Hammer. Publications of the Astronomical Society of the Pacific, 125 (925), 306. https://doi.org/10.1086/670067
Turing.Inference.ExternalSampler — Type
ExternalSampler{S<:AbstractSampler,AD<:ADTypes.AbstractADType,Unconstrained}
Represents a sampler that is not an implementation of InferenceAlgorithm.
The Unconstrained type parameter indicates whether the sampler requires unconstrained space.
Fields
sampler::AbstractMCMC.AbstractSampler: the sampler to wrap
adtype::ADTypes.AbstractADType: the automatic differentiation (AD) backend to use
Turing.Inference.Gibbs — Type
Gibbs
A type representing a Gibbs sampler.
Constructors
Gibbs needs to be given a set of pairs of variable names and samplers. Instead of a single variable name per sampler, one can also give an iterable of variables, all of which are sampled by the same component sampler.
Each variable name can be given as either a Symbol or a VarName.
Some examples of valid constructors are:
Gibbs(:x => NUTS(), :y => MH())
Gibbs(@varname(x) => NUTS(), @varname(y) => MH())
Gibbs((@varname(x), :y) => NUTS(), :z => MH())
Currently only variable names without indexing are supported, so for instance Gibbs(@varname(x[1]) => NUTS())
does not work. This will hopefully change in the future.
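A minimal end-to-end sketch, assuming the gdemo(x, y) model defined in the MH examples below, with NUTS updating s² and MH updating m:
chain = sample(
    gdemo(1.5, 2.0),
    Gibbs(:s² => NUTS(), :m => MH()),
    1_000
)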
Fields
varnames::NTuple{N, AbstractVector{<:AbstractPPL.VarName}} where N: varnames representing variables for each sampler
samplers::NTuple{N, Any} where N: samplers for each entry in varnames
Turing.Inference.GibbsContext — Type
GibbsContext{VNs}(global_varinfo, context)
A context used in the implementation of the Turing.jl Gibbs sampler.
There will be one GibbsContext
for each iteration of a component sampler.
VNs is a tuple of symbols for the VarNames that the current component sampler is sampling. For those VarNames, GibbsContext will just pass tilde_assume calls to its child context. For other variables, their values will be fixed to the values they have in global_varinfo.
The naive implementation of GibbsContext would simply have a field target_varnames that would be a collection of VarNames that the current component sampler is sampling. The reason we instead have a Tuple type parameter listing Symbols is to allow is_target_varname to benefit from compile-time constant propagation. This is important for type stability of tilde_assume.
Fields
global_varinfo: a Ref to the global AbstractVarInfo object that holds values for all variables, both those fixed and those being sampled. We use a Ref because this field may need to be updated if new variables are introduced.
context: the child context that tilde calls will eventually be passed onto.
Turing.Inference.HMC — Type
HMC(ϵ::Float64, n_leapfrog::Int; adtype::ADTypes.AbstractADType = AutoForwardDiff())
Hamiltonian Monte Carlo sampler with static trajectory.
Arguments
ϵ: The leapfrog step size to use.
n_leapfrog: The number of leapfrog steps to use.
adtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
Usage
HMC(0.05, 10)
Tips
If you are receiving gradient errors when using HMC, try reducing the leapfrog step size ϵ, e.g.
# Original step size
sample(gdemo([1.5, 2]), HMC(0.1, 10), 1000)

# Reduced step size
sample(gdemo([1.5, 2]), HMC(0.01, 10), 1000)
Turing.Inference.HMCDA — Type
HMCDA(
    n_adapts::Int, δ::Float64, λ::Float64;
    ϵ::Float64 = 0.0,
    adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Hamiltonian Monte Carlo sampler with Dual Averaging algorithm.
Usage
HMCDA(200, 0.65, 0.3)
Arguments
n_adapts: Number of samples to use for adaptation.
δ: Target acceptance rate. 65% is often recommended.
λ: Target leapfrog length.
ϵ: Initial step size; 0 means it is automatically searched for by Turing.
adtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
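A minimal sampling sketch, assuming the gdemo model used in the HMC tips above:
# 200 adaptation steps, target acceptance rate 0.65, target leapfrog length 0.3
sample(gdemo([1.5, 2]), HMCDA(200, 0.65, 0.3), 1_000)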
Reference
For more information, please view the following paper (arXiv link):
Hoffman, Matthew D., and Andrew Gelman. "The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo." Journal of Machine Learning Research 15, no. 1 (2014): 1593-1623.
Turing.Inference.IS — Type
IS()
Importance sampling algorithm.
Usage:
IS()
Example:
# Define a simple Normal model with unknown mean and variance.
@model function gdemo(x)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x[1] ~ Normal(m, sqrt(s²))
    x[2] ~ Normal(m, sqrt(s²))
    return s², m
end

sample(gdemo([1.5, 2]), IS(), 1000)
Turing.Inference.MH — Method
MH(space...)
Construct a Metropolis-Hastings algorithm.
The arguments space can be:
- Blank (i.e. MH()), in which case MH defaults to using the prior for each parameter as the proposal distribution.
- An iterable of pairs or tuples mapping a Symbol to an AdvancedMH.Proposal, Distribution, or Function that returns a conditional proposal distribution.
- A covariance matrix to use for mean-zero multivariate normal proposals.
Examples
The default MH will draw proposal samples from the prior distribution using AdvancedMH.StaticProposal.
@model function gdemo(x, y)
    s² ~ InverseGamma(2,3)
    m ~ Normal(0, sqrt(s²))
    x ~ Normal(m, sqrt(s²))
    y ~ Normal(m, sqrt(s²))
end

chain = sample(gdemo(1.5, 2.0), MH(), 1_000)
mean(chain)
Specifying a single distribution implies the use of static MH:
# Use a static proposal for s² (which happens to be the same
# as the prior) and a static proposal for m (note that this
# isn't a random walk proposal).
chain = sample(
    gdemo(1.5, 2.0),
    MH(
        :s² => InverseGamma(2, 3),
        :m => Normal(0, 1)
    ),
    1_000
)
mean(chain)
Specifying explicit proposals using the AdvancedMH interface:
# Use a static proposal for s² and random walk with proposal
# standard deviation of 0.25 for m.
chain = sample(
    gdemo(1.5, 2.0),
    MH(
        :s² => AdvancedMH.StaticProposal(InverseGamma(2,3)),
        :m => AdvancedMH.RandomWalkProposal(Normal(0, 0.25))
    ),
    1_000
)
mean(chain)
Using a custom function to specify a conditional distribution:
# Use a static proposal for s² and a conditional proposal for m,
# where the proposal is centered around the current sample.
chain = sample(
    gdemo(1.5, 2.0),
    MH(
        :s² => InverseGamma(2, 3),
        :m => x -> Normal(x, 1)
    ),
    1_000
)
mean(chain)
Providing a covariance matrix will cause MH to perform random-walk sampling in the transformed space with proposals drawn from a multivariate normal distribution. The provided matrix must be positive semi-definite and square:
# Providing a custom variance-covariance matrix
chain = sample(
    gdemo(1.5, 2.0),
    MH(
        [0.25 0.05;
         0.05 0.50]
    ),
    1_000
)
mean(chain)
Turing.Inference.MHLogDensityFunction — Type
MHLogDensityFunction
A log density function for the MH sampler.
This variant uses the set_namedtuple! function to update the VarInfo.
Turing.Inference.NUTS — Type
NUTS(n_adapts::Int, δ::Float64; max_depth::Int=10, Δ_max::Float64=1000.0, init_ϵ::Float64=0.0, adtype::ADTypes.AbstractADType=AutoForwardDiff())
No-U-Turn Sampler (NUTS) sampler.
Usage:
NUTS() # Use default NUTS configuration.
NUTS(1000, 0.65) # Use 1000 adaption steps, and target accept ratio 0.65.
Arguments:
n_adapts::Int: The number of samples to use with adaptation.
δ::Float64: Target acceptance rate for dual averaging.
max_depth::Int: Maximum doubling tree depth.
Δ_max::Float64: Maximum divergence during doubling tree.
init_ϵ::Float64: Initial step size; 0 means automatically searching using a heuristic procedure.
adtype::ADTypes.AbstractADType: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
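A minimal sampling sketch, assuming the gdemo(x, y) model from the MH examples above:
# 1000 adaptation steps, target acceptance rate 0.65
chain = sample(gdemo(1.5, 2.0), NUTS(1_000, 0.65), 2_000)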
Turing.Inference.PG — Type
PG(n, space...)
PG(n, [resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])
PG(n, [resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])
Create a Particle Gibbs sampler of type PG with n particles for the variables in space.
If the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.
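A minimal usage sketch, assuming the gdemo(x, y) model from the MH examples above:
# Particle Gibbs with 20 particles
chain = sample(gdemo(1.5, 2.0), PG(20), 1_000)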
Turing.Inference.PG — Type
struct PG{space, R} <: Turing.Inference.ParticleInference
Particle Gibbs sampler.
Fields
nparticles::Int64: Number of particles.
resampler::Any: Resampling algorithm.
Turing.Inference.PolynomialStepsize — Method
PolynomialStepsize(a[, b=0, γ=0.55])
Create a polynomially decaying stepsize function.
At iteration t, the step size is
\[a (b + t)^{-γ}.\]
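A short sketch of how such a stepsize is typically constructed and handed to SGLD (see the SGLD entry below); the values are illustrative:
# a = 0.01, b = 0, γ = 0.55, i.e. the step size at iteration t is 0.01 * (0 + t)^(-0.55)
stepsize = PolynomialStepsize(0.01)
sample(gdemo(1.5, 2.0), SGLD(; stepsize=stepsize), 10_000)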
Turing.Inference.Prior — Type
Prior()
Algorithm for sampling from the prior.
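A minimal sketch, assuming the gdemo(x, y) model from the MH examples above; the resulting chain contains draws from the prior rather than the posterior:
chain = sample(gdemo(1.5, 2.0), Prior(), 1_000)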
Turing.Inference.RepeatSampler — Type
RepeatSampler <: AbstractMCMC.AbstractSampler
A RepeatSampler is a container for a sampler and a number of times to repeat it.
Fields
sampler: The sampler to repeat
num_repeat: The number of times to repeat the sampler
Examples
repeated_sampler = RepeatSampler(sampler, 10)
AbstractMCMC.step(rng, model, repeated_sampler) # take 10 steps of `sampler`
Turing.Inference.SGHMC — Type
SGHMC{AD,space}
Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.
Fields
learning_rate::Real
momentum_decay::Real
adtype::Any
Reference
Tianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).
Turing.Inference.SGHMC — Method
SGHMC(
    space::Symbol...;
    learning_rate::Real,
    momentum_decay::Real,
    adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Create a Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.
If the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.
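A minimal sketch, assuming the gdemo(x, y) model from the MH examples above; the hyperparameter values are illustrative and usually need tuning:
sghmc = SGHMC(; learning_rate=1e-4, momentum_decay=0.1)
chain = sample(gdemo(1.5, 2.0), sghmc, 10_000)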
Reference
Tianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).
Turing.Inference.SGLD — Type
SGLD
Stochastic gradient Langevin dynamics (SGLD) sampler.
Fields
stepsize::Any: Step size function.
adtype::Any
Reference
Max Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).
Turing.Inference.SGLD — Method
SGLD(
    space::Symbol...;
    stepsize = PolynomialStepsize(0.01),
    adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Stochastic gradient Langevin dynamics (SGLD) sampler.
By default, a polynomially decaying stepsize is used.
If the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.
Reference
Max Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).
See also: PolynomialStepsize
Turing.Inference.SMC — Type
SMC(space...)
SMC([resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])
SMC([resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])
Create a sequential Monte Carlo sampler of type SMC for the variables in space.
If the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.
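A minimal sketch, assuming the gdemo(x, y) model from the MH examples above:
chain = sample(gdemo(1.5, 2.0), SMC(), 1_000)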
Turing.Inference.SMC — Type
struct SMC{space, R} <: Turing.Inference.ParticleInference
Sequential Monte Carlo sampler.
Fields
resampler::Any
StatsAPI.predict — Method
predict([rng::AbstractRNG,] model::Model, chain::MCMCChains.Chains; include_all=false)
Execute model conditioned on each sample in chain, and return the resulting Chains.
If include_all is false, the returned Chains will contain only the newly predicted variables, i.e. those not present in chain.
Details
Internally calls Turing.Inference.transitions_from_chain to obtain the samples and then converts these into a Chains object using AbstractMCMC.bundle_samples.
Example
julia> using Turing; Turing.setprogress!(false);
[ Info: [Turing]: progress logging is disabled globally

julia> @model function linear_reg(x, y, σ = 0.1)
           β ~ Normal(0, 1)

           for i ∈ eachindex(y)
               y[i] ~ Normal(β * x[i], σ)
           end
       end;

julia> σ = 0.1; f(x) = 2 * x + 0.1 * randn();

julia> Δ = 0.1; xs_train = 0:Δ:10; ys_train = f.(xs_train);

julia> xs_test = [10 + Δ, 10 + 2 * Δ]; ys_test = f.(xs_test);

julia> m_train = linear_reg(xs_train, ys_train, σ);

julia> chain_lin_reg = sample(m_train, NUTS(100, 0.65), 200);
┌ Info: Found initial step size
└   ϵ = 0.003125

julia> m_test = linear_reg(xs_test, Vector{Union{Missing, Float64}}(undef, length(ys_test)), σ);

julia> predictions = predict(m_test, chain_lin_reg)
Object of type Chains, with data of type 100×2×1 Array{Float64,3}

Iterations        = 1:100
Thinning interval = 1
Chains            = 1
Samples per chain = 100
parameters        = y[1], y[2]

2-element Array{ChainDataFrame,1}

Summary Statistics
  parameters     mean     std  naive_se     mcse       ess   r_hat
  ──────────  ───────  ──────  ────────  ───────  ────────  ──────
        y[1]  20.1974  0.1007    0.0101  missing  101.0711  0.9922
        y[2]  20.3867  0.1062    0.0106  missing  101.4889  0.9903

Quantiles
  parameters     2.5%    25.0%    50.0%    75.0%    97.5%
  ──────────  ───────  ───────  ───────  ───────  ───────
        y[1]  20.0342  20.1188  20.2135  20.2588  20.4188
        y[2]  20.1870  20.3178  20.3839  20.4466  20.5895


julia> ys_pred = vec(mean(Array(group(predictions, :y)); dims = 1));

julia> sum(abs2, ys_test - ys_pred) ≤ 0.1
true
Turing.Inference.dist_val_tuple — Method
dist_val_tuple(spl::Sampler{<:MH}, vi::VarInfo)
Return two NamedTuples.
The first NamedTuple has symbols as keys and distributions as values. The second NamedTuple has model symbols as keys and their stored values as values.
Turing.Inference.drop_space — Function
drop_space(alg::InferenceAlgorithm)
Return an InferenceAlgorithm like alg, but with all space information removed.
Turing.Inference.externalsampler — Method
externalsampler(sampler::AbstractSampler; adtype=AutoForwardDiff(), unconstrained=true)
Wrap a sampler so it can be used as an inference algorithm.
Arguments
sampler::AbstractSampler: The sampler to wrap.
Keyword Arguments
adtype::ADTypes.AbstractADType=ADTypes.AutoForwardDiff(): The automatic differentiation (AD) backend to use.
unconstrained::Bool=true: Whether the sampler requires unconstrained space.
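A short sketch, assuming the AdvancedMH package is available and that its random-walk sampler is wrapped for the gdemo(x, y) model from the MH examples above:
using AdvancedMH, Distributions, LinearAlgebra

# A 2-dimensional random-walk proposal (the model has two parameters, s² and m)
rwmh = AdvancedMH.RWMH(MvNormal(zeros(2), I))
chain = sample(gdemo(1.5, 2.0), externalsampler(rwmh), 1_000)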
Turing.Inference.getparams — Method
getparams(model, t)
Return a named tuple of parameters.
Turing.Inference.gibbs_initialstep_recursive — Function
Take the first step of MCMC for the first component sampler, and call the same function recursively on the remaining samplers, until no samplers remain. Return the global VarInfo and a tuple of initial states for all component samplers.
Turing.Inference.gibbs_step_recursive — Function
Run a Gibbs step for the first varname/sampler/state tuple, and recursively call the same function on the tail, until there are no more samplers left.
Turing.Inference.group_varnames_by_symbol — Method
group_varnames_by_symbol(vns)
Group the varnames by their symbol.
Arguments
vns: Iterable of VarName.
Returns
OrderedDict{Symbol, Vector{VarName}}: A dictionary mapping symbol to a vector of varnames.
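An illustrative sketch of the grouping (the exact printed form of the returned dictionary may differ):
vns = [@varname(x[1]), @varname(x[2]), @varname(y)]
groups = Turing.Inference.group_varnames_by_symbol(vns)
# groups[:x] holds [@varname(x[1]), @varname(x[2])] and groups[:y] holds [@varname(y)]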
Turing.Inference.isgibbscomponent — Method
isgibbscomponent(alg::Union{InferenceAlgorithm, AbstractMCMC.AbstractSampler})
Return a boolean indicating whether alg is a valid component for a Gibbs sampler.
Defaults to false if no method has been defined for a particular algorithm type.
Turing.Inference.make_conditional — Method
make_conditional(model, target_variables, varinfo)
Return a new, conditioned model for a component of a Gibbs sampler.
Arguments
model::DynamicPPL.Model: The model to condition.
target_variables::AbstractVector{<:VarName}: The target variables of the component sampler. These will not be conditioned.
varinfo::DynamicPPL.AbstractVarInfo: Values for all variables in the model. All the values in varinfo but not in target_variables will be conditioned to the values they have in varinfo.
Returns
- A new model with the variables not in target_variables conditioned.
- The GibbsContext object that will be used to condition the variables. This is necessary because evaluation can mutate its global_varinfo field, which we need to access later.
Turing.Inference.match_linking!! — Method
match_linking!!(varinfo_local, prev_state_local, model)
Make sure the linked/invlinked status of varinfo_local matches that of the previous state for this sampler. This is relevant when multiple samplers are sampling the same variables, and one might need the varinfo to be linked while the other doesn't.
Turing.Inference.mh_accept — Method
mh_accept(logp_current::Real, logp_proposal::Real, log_proposal_ratio::Real)
Decide if a proposal $x'$ with log probability $\log p(x') = logp_proposal$ and log proposal ratio $\log k(x', x) - \log k(x, x') = log_proposal_ratio$ in a Metropolis-Hastings algorithm with Markov kernel $k(x_t, x_{t+1})$ and current state $x$ with log probability $\log p(x) = logp_current$ is accepted by evaluating the Metropolis-Hastings acceptance criterion
\[\log U \leq \log p(x') - \log p(x) + \log k(x', x) - \log k(x, x')\]
for a uniform random number $U \in [0, 1)$.
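An illustrative sketch of this criterion in code (not necessarily the internal implementation):
# Accept x' if log(U) ≤ log p(x') - log p(x) + log k(x', x) - log k(x, x')
function mh_accept_sketch(logp_current, logp_proposal, log_proposal_ratio)
    return log(rand()) ≤ logp_proposal - logp_current + log_proposal_ratio
end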
Turing.Inference.requires_unconstrained_space — Method
requires_unconstrained_space(sampler::ExternalSampler)
Return true if the sampler requires unconstrained space, and false otherwise.
Turing.Inference.set_namedtuple! — Method
set_namedtuple!(vi::VarInfo, nt::NamedTuple)
Places the values of a NamedTuple into the relevant places of a VarInfo.
Turing.Inference.setparams_varinfo!! — Method
setparams_varinfo!!(model, sampler::Sampler, state, params::AbstractVarInfo)
A lot like AbstractMCMC.setparams!!, but instead of taking a vector of parameters, takes an AbstractVarInfo object. Also takes the sampler as an argument. By default, falls back to AbstractMCMC.setparams!!(model, state, params[:]).
model is typically a DynamicPPL.Model, but can also be e.g. an AbstractMCMC.LogDensityModel.
Turing.Inference.transitions_from_chain — Method
transitions_from_chain(
    [rng::AbstractRNG,]
    model::Model,
    chain::MCMCChains.Chains;
    sampler = DynamicPPL.SampleFromPrior()
)
Execute model conditioned on each sample in chain, and return resulting transitions.
The returned transitions are represented in a Vector{<:Turing.Inference.Transition}.
Details
In a bit more detail, the process is as follows:
- For every sample in chain:
  - For every variable in sample:
    - Set variable in model to its value in sample.
  - Execute model with variables fixed as above, sampling variables NOT present in chain using SampleFromPrior.
  - Return sampled variables and log-joint.
Example
julia> using Turing

julia> @model function demo()
           m ~ Normal(0, 1)
           x ~ Normal(m, 1)
       end;

julia> m = demo();

julia> chain = Chains(randn(2, 1, 1), ["m"]); # 2 samples of `m`

julia> transitions = Turing.Inference.transitions_from_chain(m, chain);

julia> [Turing.Inference.getlogp(t) for t in transitions] # extract the logjoints
2-element Array{Float64,1}:
 -3.6294991938628374
 -2.5697948166987845

julia> [first(t.θ.x) for t in transitions] # extract samples for `x`
2-element Array{Array{Float64,1},1}:
 [-2.0844148956440796]
 [-1.704630494695469]