diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index e58f5a4..71e7bd3 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-06T12:43:53","documenter_version":"1.7.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-09-09T09:33:33","documenter_version":"1.7.0"}}
\ No newline at end of file

diff --git a/dev/example/index.html b/dev/example/index.html
index 85b76bd..9db1276 100644
--- a/dev/example/index.html
+++ b/dev/example/index.html
@@ -66,4 +66,4 @@
 acquisition = ExpectedImprovement(),
 term_cond = IterLimit(10),
 options = options(),
-)
+)

diff --git a/dev/functions/index.html b/dev/functions/index.html
index 618598a..58f9ecf 100644
--- a/dev/functions/index.html
+++ b/dev/functions/index.html
@@ -1,4 +1,4 @@

Functions · BOSS.jl

Functions

This page contains the documentation for all exported functions.

Main Function

The main function bo!(::BossProblem; kwargs...) performs the Bayesian optimization. It augments the dataset and updates the model parameters and/or hyperparameters stored in problem.data.

BOSS.bo!Function
bo!(problem::BossProblem{Function}; kwargs...)
x = bo!(problem::BossProblem{Missing}; kwargs...)

Run the Bayesian optimization procedure to solve the given optimization problem or give a recommendation for the next evaluation point if problem.f == missing.

Arguments

  • problem::BossProblem: Defines the optimization problem.

Keywords

  • model_fitter::ModelFitter: Defines the algorithm used to estimate model parameters.
  • acq_maximizer::AcquisitionMaximizer: Defines the algorithm used to maximize the acquisition function.
  • acquisition::AcquisitionFunction: Defines the acquisition function maximized to select promising candidates for further evaluation.
  • term_cond::TermCond: Defines the termination condition.
  • options::BossOptions: Defines miscellaneous options and hyperparameters.

References

BossProblem, ModelFitter, AcquisitionMaximizer, TermCond, BossOptions

Examples

See 'https://github.com/Sheld5/BOSS.jl/tree/master/scripts' for example usage.

source

The following diagram showcases the pipeline of the main function. The package can be used in two modes:

The "BO mode" is used if the objective function is defined within the BossProblem. In this mode, BOSS performs the standard Bayesian optimization procedure while querying the objective function for new points.

The "Recommender mode" is used if the objective function is missing. In this mode, BOSS performs a single iteration of the Bayesian optimization procedure and returns a recommendation for the next evaluation point. The user can evaluate the objective function manually, use the method augment_dataset! to add the result to the data, and call BOSS again for a new recommendation.
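A minimal sketch of this loop (assuming a problem::BossProblem{Missing} has already been constructed, and evaluate_objective stands in for the user's manual evaluation of the objective):

    x = bo!(problem)                      # obtain a recommendation for the next evaluation point
    y = evaluate_objective(x)             # evaluate the objective manually (outside of BOSS)
    augment_dataset!(problem.data, x, y)  # add the new datapoint to the dataset
    x = bo!(problem)                      # ask for a new recommendation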

BOSS Pipeline

Utility Functions

BOSS.augment_dataset!Function
augment_dataset!(data::ExperimentDataPost, x::AbstractVector{<:Real}, y::AbstractVector{<:Real})
augment_dataset!(data::ExperimentDataPost, X::AbstractMatrix{<:Real}, Y::AbstractMatrix{<:Real})

Add one (as vectors) or more (as matrices) datapoints to the dataset.

source
BOSS.model_posteriorFunction
model_posterior(::BossProblem) -> (x -> mean, std)

Return the posterior predictive distribution of the Gaussian Process.

The posterior is a function predict(x) -> (mean, std) which gives the mean and std of the predictive distribution as a function of x.

See also: model_posterior_slice

source
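For example, the posterior can be queried as follows (a sketch; x stands for any input vector from the domain):

    predict = model_posterior(problem)
    μ, σ = predict(x)  # mean and std of the predictive distribution at x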
BOSS.model_posterior_sliceFunction
model_posterior_slice(::BossProblem, slice::Int) -> (x -> mean, std)

Return the posterior predictive distribution of the given output dimension (slice).

For some models, using model_posterior_slice can be more efficient than model_posterior, if one is only interested in the predictive distribution of a certain output dimension.

Note that model_posterior_slice can be used even with "non-sliceable" models.

See also: model_posterior

source
BOSS.average_posteriorFunction

Return an averaged posterior predictive distribution of the given posteriors.

The posterior is a function predict(x) -> (mean, std) which gives the mean and standard deviation of the predictive distribution as a function of x.

source
BOSS.resultFunction
result(problem) -> (x, y)

Return the best found point (x, y).

Returns the point (x, y) from the dataset of the given problem such that y satisfies the constraints and fitness(y) is maximized. Returns nothing if the dataset is empty or if no feasible point is present.

Does not check whether x belongs to the domain as no exterior points should be present in the dataset.

source
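For example:

    julia> x_best, y_best = result(problem)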
diff --git a/dev/index.html b/dev/index.html
index 3b1707b..b495b9f 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -3,4 +3,4 @@

\[\begin{aligned}
\text{max} \; & \text{fit}(y) \\
\text{s.t.} \; & y < y_\text{max} \\
& x \in \text{Domain} \;,
\end{aligned}\]

where $\text{fit}(y)$ is a real-valued fitness function defined on the outputs, $y_\text{max}$ is a vector defining constraints on outputs, and $\text{Domain}$ defines constraints on inputs.

Active Learning Problem

The BOSS.jl package currently only supports optimization problems out-of-the-box. However, BOSS.jl can easily be adapted for active learning by defining a suitable acquisition function (such as information gain or Kullback-Leibler divergence) to use instead of the expected improvement (see AcquisitionFunction). An acquisition function for active learning will usually not require the fitness function to be defined, so the fitness function can be omitted during problem definition (see BossProblem).

Surrogate Model

The surrogate model approximates the objective function based on the available data. It is defined using the SurrogateModel type. The BOSS.jl package provides a Parametric model, a Nonparametric model, and a Semiparametric model combining the previous two.

The predictive distribution of the Parametric model

\[y \sim \mathcal{N}(m(x; \hat\theta), \hat\sigma_f^2)\]

is given by the parametric function $m(x; \theta)$, the parameter vector $\hat\theta$, and the estimated evaluation noise deviations $\hat\sigma_f$. The model is defined by the parametric function $m(x; \theta)$ together with parameter priors $\theta_i \sim p(\theta_i)$. The parameters $\hat\theta$ and the noise deviations $\hat\sigma_f$ are estimated based on the current dataset.

The Nonparametric model is just an alias for the GaussianProcess model. A Gaussian process (GP) is a nonparametric model, so its predictive distribution is based on the whole dataset instead of some vector of parameters. The predictive distribution is given by equations 29, 30 in [1]. The model is defined by specifying priors over all its hyperparameters (length scales, amplitudes).

The Semiparametric model combines the previous two models. It is a Gaussian process, but uses the parametric model as the prior mean of the GP (the $\mu_0(x)$ function in [1]). An alternative way of interpreting the semiparametric model is that it fits the data using a parametric model and uses a Gaussian process to model the residual errors of the parametric model. The model is defined by specifying both the Parametric and Nonparametric models.

References

[1] Bobak Shahriari et al. “Taking the human out of the loop: A review of Bayesian optimization”. In: Proceedings of the IEEE 104.1 (2015), pp. 148–175

diff --git a/dev/types/index.html b/dev/types/index.html
index ff6b72f..cfe74c4 100644
--- a/dev/types/index.html
+++ b/dev/types/index.html
@@ -5,25 +5,25 @@

describing our knowledge (or lack of it) about the blackbox function. We wish to find `x ∈ domain` such that `fitness(f(x))` is maximized while satisfying the constraints `f(x) <= y_max`.

Keywords

See also: bo!

source

Fitness

The Fitness type is used to define the fitness function $\text{fit}(y) \rightarrow \mathbb{R}$.

The NoFitness can be used in problems without defined fitness (such as active learning problems). It is the default option used if no fitness is provided to BossProblem. The NoFitness can only be used with an AcquisitionFunction that does not require fitness.

The LinFitness can be used to define a simple linear fitness function

\[\text{fit}(y) = \alpha^T y \;.\]

Using LinFitness instead of NonlinFitness may allow for simpler/faster computation of some acquisition functions.

The NonlinFitness can be used to define an arbitrary fitness function

\[\text{fit}(y) \rightarrow \mathbb{R} \;.\]

BOSS.FitnessType

An abstract type for a fitness function measuring the quality of an output y of the objective function.

Fitness is used by the AcquisitionFunction to determine promising points for future evaluations.

All fitness types should implement:

  • (::CustomFitness)(y::AbstractVector{<:Real}) -> fitness::Real

An exception is the NoFitness, which can be used for problems without a well-defined fitness. In such a case, an AcquisitionFunction which does not depend on Fitness must be used.

See also: NoFitness, LinFitness, NonlinFitness, AcquisitionFunction

source
BOSS.NoFitnessType
NoFitness()

Placeholder for problems with no defined fitness.

BossProblem defined with NoFitness can only be solved with AcquisitionFunction not dependent on Fitness.

source
BOSS.LinFitnessType
LinFitness(coefs::AbstractVector{<:Real})

Used to define a linear fitness function measuring the quality of an output y of the objective function.

May provide better performance than the more general NonlinFitness as some acquisition functions can be calculated analytically with linear fitness functions whereas this may not be possible with a nonlinear fitness function.

See also: NonlinFitness

Example

A fitness function f(y) = y[1] + a * y[2] + b * y[3] can be defined as:

julia> LinFitness([1., a, b])
source
BOSS.NonlinFitnessType
NonlinFitness(fitness::Function)

Used to define a general nonlinear fitness function measuring the quality of an output y of the objective function.

If your fitness function is linear, use LinFitness which may provide better performance.

See also: LinFitness

Example

julia> NonlinFitness(y -> cos(y[1]) + sin(y[2]))
source

Input Domain

The Domain structure is used to define the input domain $x \in \text{Domain}$. The domain is formalized as

\[\begin{aligned}
& lb < x < ub \\
& d_i \implies (x_i \in \mathbb{Z}) \\
& \text{cons}(x) > 0 \;.
\end{aligned}\]

BOSS.DomainType
Domain(; kwargs...)

Describes the optimization domain.

Keywords

  • bounds::AbstractBounds: The basic box-constraints on x. This field is mandatory.
  • discrete::AbstractVector{<:Bool}: Can be used to designate some dimensions of the domain as discrete.
  • cons::Union{Nothing, Function}: Used to define arbitrary nonlinear constraints on x. Feasible points x must satisfy all(cons(x) .> 0.). An appropriate acquisition maximizer which can handle nonlinear constraints must be used if cons is provided. (See AcquisitionMaximizer.)
source
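A hypothetical two-dimensional domain might be defined as follows (a sketch; the bounds and the constraint are arbitrary placeholders):

    domain = Domain(;
        bounds = ([0., 0.], [10., 20.]),  # box constraints 0 < x₁ < 10, 0 < x₂ < 20
        discrete = [true, false],         # x₁ is integer-valued, x₂ is continuous
        cons = x -> [x[1] + x[2] - 5.],   # feasible iff x₁ + x₂ - 5 > 0
    )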
BOSS.AbstractBoundsType
bounds = ([0, 0], [1, 1])

const AbstractBounds = Tuple{<:AbstractVector{<:Real}, <:AbstractVector{<:Real}}

Defines box constraints.

source

Output Constraints

Constraints on output vector y can be defined using the y_max field. Providing y_max to BossProblem defines the linear constraints y < y_max.

Arbitrary nonlinear constraints can be defined by augmenting the objective function. For example, to define the constraint y[1] * y[2] < c, one can define an augmented objective function

function f_c(x)
    y = f(x)  # the original objective function
    y_c = [y..., y[1] * y[2]]
    return y_c
end

and use

y_max = [fill(Inf, y_dim)..., c]

where y_dim is the output dimension of the original objective function. Note that defining nonlinear constraints this way increases the output dimension of the objective function and thus the model definition has to be modified accordingly.

Surrogate Model

The surrogate model is defined using the SurrogateModel type.

BOSS.SurrogateModelType

An abstract type for a surrogate model approximating the objective function.

Example usage: struct CustomModel <: SurrogateModel ... end

All models should implement:

  • make_discrete(model::CustomModel, discrete::AbstractVector{<:Bool}) -> discrete_model::CustomModel
  • model_posterior(model::CustomModel, data::ExperimentDataMAP) -> (x -> mean, std)
  • model_loglike(model::CustomModel, data::ExperimentData) -> (::ModelParams -> ::Real)
  • sample_params(model::CustomModel) -> ::ModelParams
  • param_priors(model::CustomModel) -> ::ParamPriors

Models may implement:

  • model_posterior_slice(model::CustomModel, data::ExperimentDataMAP, slice::Int) -> (x -> mean, std)

Model can be designated as "sliceable" by defining sliceable(::CustomModel) = true. A sliceable model should additionally implement:

  • model_loglike_slice(model::SliceableModel, data::ExperimentData, slice::Int) -> (::ModelParams -> ::Real)
  • θ_slice(model::SliceableModel, idx::Int) -> Union{Nothing, UnitRange{<:Int}}

See also: LinModel, NonlinModel, GaussianProcess, Semiparametric

source

The LinModel and NonlinModel structures are used to define parametric models. (Some computations are simpler/faster with a linear model, so the LinModel might provide better performance in the future. This functionality is not implemented yet, so using the NonlinModel is equivalent for now.)

BOSS.LinModelType
LinModel(; kwargs...)

A parametric surrogate model linear in its parameters.

This model definition will provide better performance than the more general 'NonlinModel' in the future. This feature is not implemented yet, so it is equivalent to using NonlinModel for now.

The linear model is defined as

    ϕs = lift(x)
    y = [θs[i]' * ϕs[i] for i in 1:m]

where

    x = [x₁, ..., xₙ]
    y = [y₁, ..., yₘ]
    θs = [θ₁, ..., θₘ], θᵢ = [θᵢ₁, ..., θᵢₚ]
    ϕs = [ϕ₁, ..., ϕₘ], ϕᵢ = [ϕᵢ₁, ..., ϕᵢₚ]

and $n, m, p ∈ R$.

Keywords

  • lift::Function: Defines the lift function (::Vector{<:Real}) -> (::Vector{Vector{<:Real}}) according to the definition above.
  • theta_priors::AbstractVector{<:UnivariateDistribution}: The prior distributions for the parameters [θ₁₁, ..., θ₁ₚ, ..., θₘ₁, ..., θₘₚ] according to the definition above.
  • noise_std_priors::NoiseStdPriors: The prior distributions of the noise standard deviations of each y dimension.
source
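For illustration, a model y = [θ₁₁ + θ₁₂x₁, θ₂₁ + θ₂₂sin(x₂)] (n = 2, m = 2, p = 2) might be sketched as follows, with arbitrary placeholder priors (assuming Distributions.jl):

    using Distributions

    model = LinModel(;
        lift = x -> [
            [1., x[1]],       # ϕ₁ ... y₁ = θ₁₁ + θ₁₂ * x₁
            [1., sin(x[2])],  # ϕ₂ ... y₂ = θ₂₁ + θ₂₂ * sin(x₂)
        ],
        theta_priors = fill(Normal(0., 1.), 4),
        noise_std_priors = fill(LogNormal(-2., 1.), 2),
    )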
BOSS.NonlinModelType
NonlinModel(; kwargs...)

A parametric surrogate model.

If your model is linear, you can use LinModel which will provide better performance in the future. (Not yet implemented.)

Define the model as y = predict(x, θ) where θ are the model parameters.

Keywords

  • predict::Function: The predict function according to the definition above.
  • theta_priors::AbstractVector{<:UnivariateDistribution}: The prior distributions for the model parameters.
  • noise_std_priors::NoiseStdPriors: The prior distributions of the noise standard deviations of each y dimension.
source
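A sketch of a simple exponential model y = [θ₁ * exp(θ₂ * x₁)], again with arbitrary placeholder priors (assuming Distributions.jl):

    using Distributions

    model = NonlinModel(;
        predict = (x, θ) -> [θ[1] * exp(θ[2] * x[1])],
        theta_priors = fill(Normal(0., 1.), 2),
        noise_std_priors = [LogNormal(-2., 1.)],
    )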

The GaussianProcess structure is used to define a Gaussian process model. See [1] for more information about Gaussian processes.

BOSS.GaussianProcessType
GaussianProcess(; kwargs...)

A Gaussian Process surrogate model. Each output dimension is modeled by a separate independent process.

Keywords

  • mean::Union{Nothing, Function}: Used as the mean function for the GP. Defaults to nothing, which is equivalent to x -> [0.].
  • kernel::Kernel: The kernel used in the GP. Defaults to the Matern32Kernel().
  • amp_priors::AmplitudePriors: The prior distributions for the amplitude hyperparameters of the GP. The amp_priors should be a vector of y_dim univariate distributions.
  • length_scale_priors::LengthScalePriors: The prior distributions for the length scales of the GP. The length_scale_priors should be a vector of y_dim x_dim-variate distributions where x_dim and y_dim are the dimensions of the input and output of the model respectively.
  • noise_std_priors::NoiseStdPriors: The prior distributions of the noise standard deviations of each y dimension.
source
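A sketch for a problem with x_dim = 2 and y_dim = 1, with arbitrary placeholder priors (assuming Distributions.jl; the kernel keyword is omitted, so the default Matern32Kernel() is used):

    using Distributions

    model = GaussianProcess(;
        amp_priors = [LogNormal(0., 1.)],
        length_scale_priors = [product_distribution(fill(LogNormal(0., 1.), 2))],
        noise_std_priors = [LogNormal(-2., 1.)],
    )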

The Semiparametric structure is used to define a semiparametric model combining the parametric and nonparametric (Gaussian process) models.

BOSS.SemiparametricType
Semiparametric(; kwargs...)

A semiparametric surrogate model (a combination of a parametric model and Gaussian Process).

The parametric model is used as the mean of the Gaussian Process and all parameters and hyperparameters are estimated simultaneously.

Keywords

  • parametric::Parametric: The parametric model used as the GP mean function.
  • nonparametric::Nonparametric{Nothing}: The outer GP model without mean.

Note that the parametric model must be defined without noise priors, and the nonparametric model must be defined without mean function.

source
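A sketch combining a linear-trend parametric mean with a GP on its residuals for x_dim = 1, y_dim = 1 (placeholder priors, assuming Distributions.jl; per the note above, the parametric part omits noise_std_priors and the GP part omits the mean — the exact keyword handling may differ):

    using Distributions

    model = Semiparametric(;
        parametric = NonlinModel(;
            predict = (x, θ) -> [θ[1] + θ[2] * x[1]],
            theta_priors = fill(Normal(0., 1.), 2),
        ),
        nonparametric = GaussianProcess(;
            amp_priors = [LogNormal(0., 1.)],
            length_scale_priors = [product_distribution([LogNormal(0., 1.)])],
            noise_std_priors = [LogNormal(-2., 1.)],
        ),
    )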

Evaluation Noise

The priors on evaluation noise deviation $\sigma_f$ are defined using the noise_std_priors field of the SurrogateModel.

Experiment Data

The data from all past objective function evaluations as well as estimated parameter and/or hyperparameter values of the surrogate model are stored in the ExperimentData types.

The ExperimentDataPrior structure is used to pass the initial dataset to the BossProblem.

BOSS.ExperimentDataPriorType

Stores the initial data.

Fields

  • X::AbstractMatrix{<:Real}: Contains the objective function inputs as columns.
  • Y::AbstractMatrix{<:Real}: Contains the objective function outputs as columns.

See also: ExperimentDataPost

source

The ExperimentDataPost types contain the estimated model (hyper)parameters in addition to the dataset. The ExperimentDataMAP structure contains the MAP estimate of the parameters in case a MAP model fitter is used, and the ExperimentDataBI structure contains samples of the parameters in case a Bayesian inference model fitter is used.

BOSS.ExperimentDataMAPType

Stores the data matrices X,Y as well as the optimized model parameters and hyperparameters.

Fields

  • X::AbstractMatrix{<:Real}: Contains the objective function inputs as columns.
  • Y::AbstractMatrix{<:Real}: Contains the objective function outputs as columns.
  • params::ModelParams: Contains MAP model (hyper)parameters.
  • consistent::Bool: True iff the parameters have been fitted using the current dataset (X, Y). Is set to consistent = false after updating the dataset, and to consistent = true after re-fitting the parameters.

See also: ExperimentDataBI

source
BOSS.ExperimentDataBIType

Stores the data matrices X,Y as well as the sampled model parameters and hyperparameters.

Fields

  • X::AbstractMatrix{<:Real}: Contains the objective function inputs as columns.
  • Y::AbstractMatrix{<:Real}: Contains the objective function outputs as columns.
  • params::AbstractVector{<:ModelParams}: Contains samples of the model (hyper)parameters.
  • consistent::Bool: True iff the parameters have been fitted using the current dataset (X, Y). Is set to consistent = false after updating the dataset, and to consistent = true after re-fitting the parameters.

See also: ExperimentDataMAP

source

Model Fitter

The ModelFitter type defines the algorithm used to estimate the model (hyper)parameters.

BOSS.ModelFitterType

Specifies the library/algorithm used for model parameter estimation. Inherit this type to define a custom model-fitting algorithm.

Example: struct CustomFitter <: ModelFitter{MAP} ... end or struct CustomFitter <: ModelFitter{BI} ... end

Structures derived from this type have to implement the following method: estimate_parameters(model_fitter::CustomFitter, problem::BossProblem; info::Bool).

This method should return a tuple (params, val). The returned params should be a ModelParams (if CustomAlg <: ModelFitter{MAP}) or an AbstractVector{<:ModelParams} (if CustomAlg <: ModelFitter{BI}). The returned val should be the log likelihood of the parameters (if CustomAlg <: ModelFitter{MAP}), or a vector of log likelihoods of the individual parameter samples (if CustomAlg <: ModelFitter{BI}), or nothing.

See also: OptimizationMAP, TuringBI

source
BOSS.ModelFitType

An abstract type used to differentiate between MAP (Maximum A Posteriori) and BI (Bayesian Inference) types.

source

The OptimizationMAP model fitter can be used to utilize any optimization algorithm from the Optimization.jl package in order to find the MAP estimate of the (hyper)parameters. (See the example usage.)

BOSS.OptimizationMAPType
OptimizationMAP(; kwargs...)

Finds the MAP estimate of the model parameters and hyperparameters using the Optimization.jl package.

Keywords

  • algorithm::Any: Defines the optimization algorithm.
  • multistart::Union{Int, Matrix{Float64}}: The number of optimization restarts, or a vector of tuples (θ, λ, α) containing initial (hyper)parameter values for the optimization runs.
  • parallel::Bool: If parallel=true then the individual restarts are run in parallel.
  • softplus_hyperparams::Bool: If softplus_hyperparams=true then the softplus function is applied to GP hyperparameters (length-scales & amplitudes) and noise deviations to ensure positive values during optimization.
  • softplus_params::Union{Bool, Vector{Bool}}: Defines to which parameters of the parametric model should the softplus function be applied to ensure positive values. Supplying a boolean instead of a binary vector turns the softplus on/off for all parameters. Defaults to false meaning the softplus is applied to no parameters.
source
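A sketch using the LBFGS algorithm via the OptimizationOptimJL wrapper package (an assumption; any Optimization.jl-compatible algorithm could be substituted):

    using OptimizationOptimJL

    model_fitter = OptimizationMAP(;
        algorithm = LBFGS(),
        multistart = 20,
        parallel = true,
        softplus_hyperparams = true,  # keep length scales, amplitudes & noise deviations positive
    )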

The TuringBI model fitter can be used to utilize the Turing.jl library in order to sample the (hyper)parameters from the posterior given by the current dataset.

BOSS.TuringBIType
TuringBI(; kwargs...)

Samples the model parameters and hyperparameters using the Turing.jl package.

Keywords

  • sampler::Any: The sampling algorithm used to draw the samples.
  • warmup::Int: The number of initial unused 'warmup' samples in each chain.
  • samples_in_chain::Int: The number of samples used from each chain.
  • chain_count::Int: The number of independent chains sampled.
  • leap_size: Every leap_size-th sample is used from each chain. (To avoid correlated samples.)
  • parallel: If parallel=true then the chains are sampled in parallel.

Sampling Process

In each sampled chain:

  • The first warmup samples are discarded.
  • From the following leap_size * samples_in_chain samples, each leap_size-th one is kept.

Then the samples from all chains are concatenated and returned.

Total drawn samples: chain_count * (warmup + leap_size * samples_in_chain)
Total returned samples: chain_count * samples_in_chain

source
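A sketch using the NUTS sampler (assuming Turing.jl); with these settings, chain_count * samples_in_chain = 4 * 10 = 40 parameter samples would be returned:

    using Turing

    model_fitter = TuringBI(;
        sampler = NUTS(0.65),
        warmup = 100,
        samples_in_chain = 10,
        chain_count = 4,
        leap_size = 5,
        parallel = true,
    )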

The SamplingMAP model fitter performs MAP estimation by sampling the parameters from their priors and maximizing the posterior probability over the samples. This is a trivial model fitter suitable for simple experimentation with BOSS.jl and/or Bayesian optimization. A more sophisticated model fitter such as OptimizationMAP or TuringBI should be used to solve real problems.

BOSS.SamplingMAPType
SamplingMAP()

Optimizes the model parameters by sampling them from their prior distributions and selecting the best sample in the sense of MAP.

Keywords

  • samples::Int: The number of drawn samples.
  • parallel::Bool: The sampling is performed in parallel if parallel=true.
source

The RandomMAP model fitter samples random parameter values from their priors. It does NOT optimize for the most probable parameters in any way. This model fitter is provided solely for easy experimentation with BOSS.jl and should not be used to solve problems.

BOSS.RandomMAPType
RandomMAP()

Returns random model parameters sampled from their respective priors.

Can be useful with RandomSelectAM to avoid unnecessary model parameter estimations.

source

The SampleOptMAP model fitter combines the SamplingMAP and OptimizationMAP. It first samples many model parameter samples from their priors, and subsequently runs multiple optimization runs initiated at the best samples.

BOSS.SampleOptMAPType
SampleOptMAP(; kwargs...)
SampleOptMAP(::SamplingMAP, ::OptimizationMAP)

Combines SamplingMAP and OptimizationMAP to first sample many parameter samples from the prior, and subsequently start multiple optimization runs initialized from the best samples.

Keywords

  • samples::Int: The number of drawn samples.
  • algorithm::Any: Defines the optimization algorithm.
  • multistart::Int: The number of optimization restarts.
  • parallel::Bool: If parallel=true, then both the sampling and the optimization are performed in parallel.
  • softplus_hyperparams::Bool: If softplus_hyperparams=true then the softplus function is applied to GP hyperparameters (length-scales & amplitudes) and noise deviations to ensure positive values during optimization.
  • softplus_params::Union{Bool, Vector{Bool}}: Defines to which parameters of the parametric model should the softplus function be applied to ensure positive values. Supplying a boolean instead of a binary vector turns the softplus on/off for all parameters. Defaults to false meaning the softplus is applied to no parameters.
source

Acquisition Maximizer

The AcquisitionMaximizer type is used to define the algorithm used to maximize the acquisition function.

BOSS.AcquisitionMaximizerType

Specifies the library/algorithm used for acquisition function optimization. Inherit this type to define a custom acquisition maximizer.

Example: struct CustomAlg <: AcquisitionMaximizer ... end

Structures derived from this type have to implement the following method: maximize_acquisition(acq_maximizer::CustomAlg, acq::AcquisitionFunction, problem::BossProblem, options::BossOptions).

This method should return a tuple (x, val). The returned x is the point of the input domain which maximizes the given acquisition function acq (as a vector), or a batch of points (as a column-wise matrix). The returned val is the acquisition value acq(x), or the values acq.(eachcol(x)) for each point of the batch, or nothing (depending on the acquisition maximizer implementation).

See also: OptimizationAM

source

The OptimizationAM can be used to utilize any optimization algorithm from the Optimization.jl package.

BOSS.OptimizationAMType
OptimizationAM(; kwargs...)

Maximizes the acquisition function using the Optimization.jl library.

Can handle constraints on x if an appropriate optimization algorithm is selected.

Keywords

  • algorithm::Any: Defines the optimization algorithm.
  • multistart::Union{<:Int, <:AbstractMatrix{<:Real}}: The number of optimization restarts, or a matrix of optimization initial points as columns.
  • parallel::Bool: If parallel=true then the individual restarts are run in parallel.
  • autodiff::SciMLBase.AbstractADType: The automatic differentiation module passed to Optimization.OptimizationFunction.
  • kwargs...: Other kwargs are passed to the optimization algorithm.
source
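A sketch using Optim.jl's Fminbox(LBFGS()) through OptimizationOptimJL (an assumption; any algorithm that respects the Domain's bounds and constraints could be substituted):

    using OptimizationOptimJL

    acq_maximizer = OptimizationAM(;
        algorithm = Fminbox(LBFGS()),  # a box-constrained algorithm
        multistart = 100,
        parallel = true,
    )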

The GridAM maximizes the acquisition function by evaluating all points on a fixed grid of points. This is a trivial acquisition maximizer suitable for simple experimentation with BOSS.jl and/or Bayesian optimization. More sophisticated acquisition maximizers such as OptimizationAM should be used to solve real problems.

BOSS.GridAMType
GridAM(problem, steps; kwargs...)

Maximizes the acquisition function by checking a fine grid of points from the domain.

Extremely simple optimizer which can be used for simple problems or for debugging. Not suitable for problems with high dimensional domain.

Can be used with constraints on x.

Arguments

  • problem::BossProblem: Provide your defined optimization problem.
  • steps::Vector{Float64}: Defines the size of the grid gaps in each x dimension.

Keywords

  • parallel::Bool: If parallel=true then the optimization is parallelized. Defaults to true.
source
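For example, a grid with a step of 0.1 in each of two input dimensions:

    acq_maximizer = GridAM(problem, [0.1, 0.1])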

The SamplingAM samples random candidate points from the given x_prior distribution and selects the sample with maximal acquisition value.

BOSS.SamplingAMType
SamplingAM(; kwargs...)

Optimizes the acquisition function by sampling candidates from the user-provided prior, and returning the sample with the highest acquisition value.

Keywords

  • x_prior::MultivariateDistribution: The prior over the input domain used to sample candidates.
  • samples::Int: The number of samples to be drawn and evaluated.
  • parallel::Bool: If parallel=true then the sampling is parallelized. Defaults to true.
source

The RandomAM simply returns a random point. It does NOT perform any optimization. This acquisition maximizer is provided solely for easy experimentation with BOSS.jl and should not be used to solve problems.

BOSS.RandomAMType
RandomAM()

Selects a random interior point instead of maximizing the acquisition function. Can be used for method comparison.

Can handle constraints on x, but does so by generating random points in the box domain until a point satisfying the constraints is found. Therefore it can take a long time or even get stuck if the constraints are very tight.

source

The SampleOptAM samples many candidate points from the given x_prior distribution, and subsequently performs multiple optimization runs initiated from the best samples.

BOSS.SampleOptAMType
SampleOptAM(; kwargs...)

Optimizes the acquisition function by first sampling candidates from the user-provided prior, and then running multiple optimization runs initiated from the samples with the highest acquisition values.

Keywords

  • x_prior::MultivariateDistribution: The prior over the input domain used to sample candidates.
  • samples::Int: The number of samples to be drawn and evaluated.
  • algorithm::Any: Defines the optimization algorithm.
  • multistart::Int: The number of optimization restarts.
  • parallel::Bool: If parallel=true, both the sampling and individual optimization runs are performed in parallel.
  • autodiff::SciMLBase.AbstractADType: The automatic differentiation module passed to Optimization.OptimizationFunction.
  • kwargs...: Other kwargs are passed to the optimization algorithm.
source

The SequentialBatchAM can be used as a wrapper of any of the other acquisition maximizers. It returns a batch of promising points for future evaluations instead of a single point, and thus allows for evaluation of the objective function in batches.

BOSS.SequentialBatchAMType
SequentialBatchAM(::AcquisitionMaximizer, ::Int)
SequentialBatchAM(; am, batch_size)

Provides multiple candidates for batched objective function evaluation.

Selects the candidates sequentially by iterating the following steps:

  1. Use the 'inner' acquisition maximizer to select a candidate x.
  2. Extend the dataset with a 'speculative' new data point created by taking the candidate x and the posterior predictive mean of the surrogate ŷ(x).
  3. If batch_size candidates have been selected, return them. Otherwise, go to step 1.

Keywords

  • am::AcquisitionMaximizer: The inner acquisition maximizer.
  • batch_size::Int: The number of candidates to be selected.
source
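For example, wrapping an inner maximizer to propose batches of 4 points (a sketch; x_prior is a user-defined MultivariateDistribution):

    acq_maximizer = SequentialBatchAM(;
        am = SamplingAM(; x_prior = x_prior, samples = 1000),
        batch_size = 4,
    )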

Acquisition Function

The acquisition function is defined using the AcquisitionFunction type.

BOSS.AcquisitionFunctionType

Specifies the acquisition function describing the "quality" of a potential next evaluation point. Inherit this type to define a custom acquisition function.

Example: struct CustomAcq <: AcquisitionFunction ... end

All acquisition functions should implement: (acquisition::CustomAcq)(problem::BossProblem, options::BossOptions)

This method should return a function acq(x::AbstractVector{<:Real}) = val::Real, which is maximized to select the next evaluation point of the blackbox function in each iteration.

See also: ExpectedImprovement

source

The ExpectedImprovement defines the expected improvement acquisition function. See [1] for more information.

BOSS.ExpectedImprovementType
ExpectedImprovement(; kwargs...)

The expected improvement (EI) acquisition function.

Fitness function must be defined as a part of the problem definition in order to use EI. (See Fitness.)

Measures the quality of a potential evaluation point x as the expected improvement in best-so-far achieved fitness by evaluating the objective function at y = f(x).

In case of constrained problems, the expected improvement is additionally weighted by the probability of feasibility of y. I.e. the probability that all(cons(y) .> 0.).

If the problem is constrained on y and no feasible point has been observed yet, then the probability of feasibility alone is returned as the acquisition function.

Rather than using the actual evaluations (xᵢ,yᵢ) from the dataset, the best-so-far achieved fitness is calculated as the maximum fitness among the means ŷᵢ of the posterior predictive distribution of the model evaluated at xᵢ. This is a simple way to handle evaluation noise which may not be suitable for problems with substantial noise. In case of Bayesian Inference, an averaged posterior of the model posterior samples is used for the prediction of ŷᵢ.

Keywords

  • ϵ_samples::Int: Controls how many samples are used to approximate EI. The ϵ_samples keyword is ignored unless a MAP model fitter and NonlinFitness are used! In case of a BI model fitter, the number of samples is instead set equal to the number of posterior samples. In case of LinFitness, the expected improvement can be calculated analytically.

  • cons_safe::Bool: If set to true, the acquisition function acq(x) is made 'constraint-safe' by checking the bounds and constraints during each evaluation. Set cons_safe to true if the evaluation of the model at exterior points may cause errors or nonsensical values. You may set cons_safe to false if the evaluation of the model at exterior points can provide useful information to the acquisition maximizer and does not cause errors. Defaults to true.

source
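For instance (using only the keywords documented above):

    acquisition = ExpectedImprovement(;
        ϵ_samples = 200,
        cons_safe = true,
    )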

Termination Conditions

The TermCond type is used to define the termination condition of the BO procedure.

BOSS.TermCondType

Specifies the termination condition of the whole BOSS algorithm. Inherit this type to define a custom termination condition.

Example: struct CustomCond <: TermCond ... end

Structures derived from this type have to implement the following method: (cond::CustomCond)(problem::BossProblem)

This method should return true to keep the optimization running and return false once the optimization is to be terminated.

See also: IterLimit

source

The IterLimit terminates the procedure after a predefined number of iterations.

BOSS.IterLimitType
IterLimit(iter_max::Int)

Terminates the BOSS algorithm after a predefined number of iterations.

See also: bo!

source

Miscellaneous

The BossOptions structure is used to define miscellaneous hyperparameters of the BOSS.jl package.

BOSS.BossOptionsType
BossOptions(; kwargs...)

Stores miscellaneous settings of the BOSS algorithm.

Keywords

  • info::Bool: Setting info=false silences the BOSS algorithm.
  • debug::Bool: Set debug=true to print stacktraces of caught optimization errors.
  • parallel_evals::Symbol: Possible values: :serial, :parallel, :distributed. Defaults to :parallel. Determines whether to run multiple objective function evaluations within one batch in serial, parallel, or distributed fashion. (Only has an effect if batching AM is used.)
  • callback::BossCallback: If provided, callback(::BossProblem; kwargs...) will be called before the BO procedure starts and after every iteration.

See also: bo!

source
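For instance (a sketch using only the keywords documented above):

    options = BossOptions(;
        info = true,
        debug = false,
        parallel_evals = :parallel,
    )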

The BossCallback type is used to pass callbacks which will be called in every iteration of the BO procedure (and once before the procedure starts).

BOSS.BossCallbackType

If an object cb of type BossCallback is passed to BossOptions, the method shown below will be called before the BO procedure starts and after every iteration.

cb(problem::BossProblem; kwargs...)
+    ϕs = [ϕ₁, ..., ϕₘ], ϕᵢ = [ϕᵢ₁, ..., ϕᵢₚ]

and $n, m, p ∈ R$.

Keywords

  • lift::Function: Defines the lift function (::Vector{<:Real}) -> (::Vector{Vector{<:Real}}) according to the definition above.
  • theta_priors::AbstractVector{<:UnivariateDistribution}: The prior distributions for the parameters [θ₁₁, ..., θ₁ₚ, ..., θₘ₁, ..., θₘₚ] according to the definition above.
  • noise_std_priors::NoiseStdPriors: The prior distributions of the noise standard deviations of each y dimension.
source
BOSS.NonlinModelType
NonlinModel(; kwargs...)

A parametric surrogate model.

If your model is linear, you can use LinModel which will provide better performance in the future. (Not yet implemented.)

Define the model as y = predict(x, θ) where θ are the model parameters.

Keywords

  • predict::Function: The predict function according to the definition above.
  • theta_priors::AbstractVector{<:UnivariateDistribution}: The prior distributions for the model parameters.
  • noise_std_priors::NoiseStdPriors: The prior distributions of the noise standard deviations of each y dimension.
source

The GaussianProcess structure is used to define a Gaussian process model. See [1] for more information about Gaussian processes.

BOSS.GaussianProcessType
GaussianProcess(; kwargs...)

A Gaussian Process surrogate model. Each output dimension is modeled by a separate independent process.

Keywords

  • mean::Union{Nothing, Function}: Used as the mean function for the GP. Defaults to nothing equivalent to x -> [0.].
  • kernel::Kernel: The kernel used in the GP. Defaults to the Matern32Kernel().
  • amp_priors::AmplitudePriors: The prior distributions for the amplitude hyperparameters of the GP. The amp_priors should be a vector of y_dim univariate distributions.
  • length_scale_priors::LengthScalePriors: The prior distributions for the length scales of the GP. The length_scale_priors should be a vector of y_dim x_dim-variate distributions where x_dim and y_dim are the dimensions of the input and output of the model respectively.
  • noise_std_priors::NoiseStdPriors: The prior distributions of the noise standard deviations of each y dimension.
source

The Semiparametric structure is used to define a semiparametric model combining the parametric and nonparametric (Gaussian process) models.

BOSS.SemiparametricType
Semiparametric(; kwargs...)

A semiparametric surrogate model (a combination of a parametric model and Gaussian Process).

The parametric model is used as the mean of the Gaussian Process and all parameters and hyperparameters are estimated simultaneously.

Keywords

  • parametric::Parametric: The parametric model used as the GP mean function.
  • nonparametric::Nonparametric{Nothing}: The outer GP model without mean.

Note that the parametric model must be defined without noise priors, and the nonparametric model must be defined without mean function.

source

Evaluation Noise

The priors on evaluation noise deviation $\sigma_f$ are defined using the noise_std_priors field of the SurrogateModel.

Experiment Data

The data from all past objective function evaluations as well as estimated parameter and/or hyperparameter values of the surrogate model are stored in the ExperimentData types.

The ExperimentDataPriors structure is used to pass the initial dataset to the BossProblem.

BOSS.ExperimentDataPriorType

Stores the initial data.

Fields

  • X::AbstractMatrix{<:Real}: Contains the objective function inputs as columns.
  • Y::AbstractMatrix{<:Real}: Contains the objective function outputs as columns.

See also: ExperimentDataPost

source

The ExperimentDataPost types contain the estimated model (hyper)parameters in addition to the dataset. The ExperimentDataMAP structure contains the MAP estimate of the parameters in case a MAP model fitter is used, and the ExperimentDataBI structure contains samples of the parameters in case a Bayesian inference model fitter is used.

BOSS.ExperimentDataMAPType

Stores the data matrices X,Y as well as the optimized model parameters and hyperparameters.

Fields

  • X::AbstractMatrix{<:Real}: Contains the objective function inputs as columns.
  • Y::AbstractMatrix{<:Real}: Contains the objective function outputs as columns.
  • params::ModelParams: Contains MAP model (hyper)parameters.
  • consistent::Bool: True iff the parameters have been fitted using the current dataset (X, Y). Is set to consistent = false after updating the dataset, and to consistent = true after re-fitting the parameters.

See also: ExperimentDataBI

source
BOSS.ExperimentDataBIType

Stores the data matrices X,Y as well as the sampled model parameters and hyperparameters.

Fields

  • X::AbstractMatrix{<:Real}: Contains the objective function inputs as columns.
  • Y::AbstractMatrix{<:Real}: Contains the objective function outputs as columns.
  • params::AbstractVector{<:ModelParams}: Contains samples of the model (hyper)parameters.
  • consistent::Bool: True iff the parameters have been fitted using the current dataset (X, Y). Is set to consistent = false after updating the dataset, and to consistent = true after re-fitting the parameters.

See also: ExperimentDataMAP

source

Model Fitter

The ModelFitter type defines the algorithm used to estimate the model (hyper)parameters.

BOSS.ModelFitterType

Specifies the library/algorithm used for model parameter estimation. Inherit this type to define a custom model-fitting algorithms.

Example: struct CustomFitter <: ModelFitter{MAP} ... end or struct CustomFitter <: ModelFitter{BI} ... end

Structures derived from this type have to implement the following method: estimate_parameters(model_fitter::CustomFitter, problem::BossProblem; info::Bool).

This method should return a tuple (params, val). The returned params should be a ModelParams (if CustomAlg <: ModelFitter{MAP}) or a AbstractVector{<:ModelParams} (if CustomAlg <: ModelFitter{BI}). The returned val should be the log likelihood of the parameters (if CustomAlg <: ModelFitter{MAP}), or a vector of log likelihoods of the individual parameter samples (if CustomAlg <: ModelFitter{BI}), or nothing.

See also: OptimizationMAP, TuringBI

source
BOSS.ModelFitType

An abstract type used to differentiate between MAP (Maximum A Posteriori) and BI (Bayesian Inference) types.

source

The OptimizationMAP model fitter can be used to utilize any optimization algorithm from the Optimization.jl package in order to find the MAP estimate of the (hyper)parameters. (See the example usage.)

BOSS.OptimizationMAPType
OptimizationMAP(; kwargs...)

Finds the MAP estimate of the model parameters and hyperparameters using the Optimization.jl package.

Keywords

  • algorithm::Any: Defines the optimization algorithm.
  • multistart::Union{Int, Matrix{Float64}}: The number of optimization restarts, or a vector of tuples (θ, λ, α) containing initial (hyper)parameter values for the optimization runs.
  • parallel::Bool: If parallel=true then the individual restarts are run in parallel.
  • softplus_hyperparams::Bool: If softplus_hyperparams=true then the softplus function is applied to GP hyperparameters (length-scales & amplitudes) and noise deviations to ensure positive values during optimization.
  • softplus_params::Union{Bool, Vector{Bool}}: Defines to which parameters of the parametric model should the softplus function be applied to ensure positive values. Supplying a boolean instead of a binary vector turns the softplus on/off for all parameters. Defaults to false meaning the softplus is applied to no parameters.
source

The TuringBI model fitter can be used to utilize the Turing.jl library in order to sample the (hyper)parameters from the posterior given by the current dataset.

BOSS.TuringBIType
TuringBI(; kwargs...)

Samples the model parameters and hyperparameters using the Turing.jl package.

Keywords

  • sampler::Any: The sampling algorithm used to draw the samples.
  • warmup::Int: The amount of initial unused 'warmup' samples in each chain.
  • samples_in_chain::Int: The amount of samples used from each chain.
  • chain_count::Int: The amount of independent chains sampled.
  • leap_size: Every leap_size-th sample is used from each chain. (To avoid correlated samples.)
  • parallel: If parallel=true then the chains are sampled in parallel.

Sampling Process

In each sampled chain;

  • The first warmup samples are discarded.
  • From the following leap_size * samples_in_chain samples each leap_size-th is kept.

Then the samples from all chains are concatenated and returned.

Total drawn samples: 'chaincount * (warmup + leapsize * samplesinchain)' Total returned samples: 'chaincount * samplesin_chain'

source

The SamplingMAP model fitter preforms MAP estimation by sampling the parameters from their priors and maximizing the posterior probability over the samples. This is a trivial model fitter suitable for simple experimentation with BOSS.jl and/or Bayesian optimization. A more sophisticated model fitter such as OptimizationMAP or TuringBI should be used to solve real problems.

BOSS.SamplingMAPType
SamplingMAP()

Optimizes the model parameters by sampling them from their prior distributions and selecting the best sample in sense of MAP.

Keywords

  • samples::Int: The number of drawn samples.
  • parallel::Bool: The sampling is performed in parallel if parallel=true.
source

The RandomMAP model fitter samples random parameter values from their priors. It does NOT optimize for the most probable parameters in any way. This model fitter is provided solely for easy experimentation with BOSS.jl and should not be used to solve problems.

BOSS.RandomMAPType
RandomMAP()

Returns random model parameters sampled from their respective priors.

Can be useful with RandomSelectAM to avoid unnecessary model parameter estimations.

source

The SampleOptMAP model fitter combines the SamplingMAP and OptimizationMAP. It first samples many model parameter samples from their priors, and subsequently runs multiple optimization runs initiated at the best samples.

BOSS.SampleOptMAPType
SampleOptMAP(; kwargs...)
SampleOptMAP(::SamplingMAP, ::OptimizationMAP)

Combines SamplingMAP and OptimizationMAP to first sample many parameter samples from the prior, and subsequently start multiple optimization runs initialized from the best samples.

Keywords

  • samples::Int: The number of drawn samples.
  • algorithm::Any: Defines the optimization algorithm.
  • multistart::Int: The number of optimization restarts.
  • parallel::Bool: If parallel=true, then both the sampling and the optimization are performed in parallel.
  • softplus_hyperparams::Bool: If softplus_hyperparams=true then the softplus function is applied to GP hyperparameters (length-scales & amplitudes) and noise deviations to ensure positive values during optimization.
  • softplus_params::Union{Bool, Vector{Bool}}: Defines which parameters of the parametric model the softplus function is applied to in order to ensure positive values. Supplying a boolean instead of a binary vector turns the softplus on/off for all parameters. Defaults to false, meaning the softplus is applied to no parameters.
source
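A possible configuration (a sketch; NelderMead again assumes the OptimizationOptimJL backend):

using BOSS, OptimizationOptimJL

model_fitter = SampleOptMAP(;
    samples = 2000,            # draw 2000 prior samples first
    algorithm = NelderMead(),
    multistart = 20,           # then optimize from the 20 best samples
    parallel = true,
)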

Acquisition Maximizer

The AcquisitionMaximizer type is used to define the algorithm used to maximize the acquisition function.

BOSS.AcquisitionMaximizerType

Specifies the library/algorithm used for acquisition function optimization. Inherit this type to define a custom acquisition maximizer.

Example: struct CustomAlg <: AcquisitionMaximizer ... end

Structures derived from this type have to implement the following method: maximize_acquisition(acq_maximizer::CustomAlg, acq::AcquisitionFunction, problem::BossProblem, options::BossOptions).

This method should return a tuple (x, val). The returned x is the point of the input domain which maximizes the given acquisition function acq (as a vector), or a batch of points (as a column-wise matrix). The returned val is the acquisition value acq(x), or the values acq.(eachcol(x)) for each point of the batch, or nothing (depending on the acquisition maximizer implementation).

See also: OptimizationAM

source

The OptimizationAM can be used to utilize any optimization algorithm from the Optimization.jl package.

BOSS.OptimizationAMType
OptimizationAM(; kwargs...)

Maximizes the acquisition function using the Optimization.jl library.

Can handle constraints on x if an appropriate optimization algorithm is selected.

Keywords

  • algorithm::Any: Defines the optimization algorithm.
  • multistart::Union{<:Int, <:AbstractMatrix{<:Real}}: The number of optimization restarts, or a matrix of optimization initial points as columns.
  • parallel::Bool: If parallel=true then the individual restarts are run in parallel.
  • autodiff::SciMLBase.AbstractADType: The automatic differentiation module passed to Optimization.OptimizationFunction.
  • kwargs...: Other kwargs are passed to the optimization algorithm.
source
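For illustration (a sketch; the derivative-free NelderMead from the OptimizationOptimJL backend avoids the need for autodiff):

using BOSS, OptimizationOptimJL

acq_maximizer = OptimizationAM(;
    algorithm = NelderMead(),
    multistart = 100,  # 100 restarts of the acquisition maximization
    parallel = true,
)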

The GridAM maximizes the acquisition function by evaluating every point of a fixed grid. This is a trivial acquisition maximizer suitable for simple experimentation with BOSS.jl and/or Bayesian optimization. More sophisticated acquisition maximizers such as OptimizationAM should be used to solve real problems.

BOSS.GridAMType
GridAM(problem, steps; kwargs...)

Maximizes the acquisition function by checking a fine grid of points from the domain.

Extremely simple optimizer which can be used for simple problems or for debugging. Not suitable for problems with a high-dimensional domain.

Can be used with constraints on x.

Arguments

  • problem::BossProblem: Provide your defined optimization problem.
  • steps::Vector{Float64}: Defines the size of the grid gaps in each x dimension.

Keywords

  • parallel::Bool: If parallel=true then the optimization is parallelized. Defaults to true.
source
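For example, on a hypothetical two-dimensional domain one might use (a sketch; problem refers to an already-defined BossProblem):

# a grid with gaps of 0.01 in both x dimensions
acq_maximizer = GridAM(problem, [0.01, 0.01])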

The SamplingAM samples random candidate points from the given x_prior distribution and selects the sample with maximal acquisition value.

BOSS.SamplingAMType
SamplingAM(; kwargs...)

Optimizes the acquisition function by sampling candidates from the user-provided prior, and returning the sample with the highest acquisition value.

Keywords

  • x_prior::MultivariateDistribution: The prior over the input domain used to sample candidates.
  • samples::Int: The number of samples to be drawn and evaluated.
  • parallel::Bool: If parallel=true then the sampling is parallelized. Defaults to true.
source
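A sketch, assuming a two-dimensional box domain [0, 1]² and the Distributions.jl package:

using Distributions

acq_maximizer = SamplingAM(;
    x_prior = Product(Uniform.([0., 0.], [1., 1.])),  # uniform prior over the box
    samples = 10_000,
    parallel = true,
)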

The RandomAM simply returns a random point. It does NOT perform any optimization. This acquisition maximizer is provided solely for easy experimentation with BOSS.jl and should not be used to solve problems.

BOSS.RandomAMType
RandomAM()

Selects a random interior point instead of maximizing the acquisition function. Can be used for method comparison.

Can handle constraints on x, but does so by generating random points in the box domain until a point satisfying the constraints is found. Therefore it can take a long time or even get stuck if the constraints are very tight.

source

The SampleOptAM samples many candidate points from the given x_prior distribution, and subsequently performs multiple optimization runs initiated from the best samples.

BOSS.SampleOptAMType
SampleOptAM(; kwargs...)

Optimizes the acquisition function by first sampling candidates from the user-provided prior, and then running multiple optimization runs initiated from the samples with the highest acquisition values.

Keywords

  • x_prior::MultivariateDistribution: The prior over the input domain used to sample candidates.
  • samples::Int: The number of samples to be drawn and evaluated.
  • algorithm::Any: Defines the optimization algorithm.
  • multistart::Int: The number of optimization restarts.
  • parallel::Bool: If parallel=true, both the sampling and individual optimization runs are performed in parallel.
  • autodiff::SciMLBase.AbstractADType: The automatic differentiation module passed to Optimization.OptimizationFunction.
  • kwargs...: Other kwargs are passed to the optimization algorithm.
source
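A sketch combining both stages (again assuming a [0, 1]² domain and the OptimizationOptimJL backend):

using Distributions, OptimizationOptimJL

acq_maximizer = SampleOptAM(;
    x_prior = Product(Uniform.([0., 0.], [1., 1.])),
    samples = 10_000,          # sample 10000 candidates first
    algorithm = NelderMead(),
    multistart = 20,           # then optimize from the 20 best candidates
    parallel = true,
)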

The SequentialBatchAM can be used as a wrapper of any of the other acquisition maximizers. It returns a batch of promising points for future evaluations instead of a single point, and thus allows for evaluation of the objective function in batches.

BOSS.SequentialBatchAMType
SequentialBatchAM(::AcquisitionMaximizer, ::Int)
SequentialBatchAM(; am, batch_size)

Provides multiple candidates for batched objective function evaluation.

Selects the candidates sequentially by iterating the following steps:

  1. Use the 'inner' acquisition maximizer to select a candidate x.
  2. Extend the dataset with a 'speculative' new data point created by taking the candidate x and the posterior predictive mean of the surrogate model at x.
  3. If batch_size candidates have been selected, return them. Otherwise, go to step 1.

Keywords

  • am::AcquisitionMaximizer: The inner acquisition maximizer.
  • batch_size::Int: The number of candidates to be selected.
source
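For example, to select batches of 4 points from an inner OptimizationAM (a sketch; NelderMead again assumes the OptimizationOptimJL backend):

using OptimizationOptimJL

acq_maximizer = SequentialBatchAM(
    OptimizationAM(; algorithm = NelderMead(), multistart = 20),  # the inner maximizer
    4,                                                            # the batch size
)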

Acquisition Function

The acquisition function is defined using the AcquisitionFunction type.

BOSS.AcquisitionFunctionType

Specifies the acquisition function describing the "quality" of a potential next evaluation point. Inherit this type to define a custom acquisition function.

Example: struct CustomAcq <: AcquisitionFunction ... end

All acquisition functions should implement: (acquisition::CustomAcq)(problem::BossProblem, options::BossOptions)

This method should return a function acq(x::AbstractVector{<:Real}) = val::Real, which is maximized to select the next evaluation point of the blackbox function in each iteration.

See also: ExpectedImprovement

source
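As an illustration, a hypothetical acquisition function that simply maximizes the posterior predictive mean of the first output dimension could be sketched as follows. (PosteriorMean is not part of BOSS.jl; the sketch assumes a MAP model fitter, so that model_posterior returns a single predictive function.)

using BOSS

struct PosteriorMean <: AcquisitionFunction end

function (acq::PosteriorMean)(problem::BossProblem, options::BossOptions)
    predict = BOSS.model_posterior(problem)  # predict(x) -> (mean, std)
    return x -> first(predict(x))[1]         # mean of the first output dimension
end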

The ExpectedImprovement defines the expected improvement acquisition function. See [1] for more information.

BOSS.ExpectedImprovementType
ExpectedImprovement(; kwargs...)

The expected improvement (EI) acquisition function.

A fitness function must be defined as part of the problem definition in order to use EI. (See Fitness.)

Measures the quality of a potential evaluation point x as the expected improvement in best-so-far achieved fitness by evaluating the objective function at y = f(x).

In case of constrained problems, the expected improvement is additionally weighted by the probability of feasibility of y. I.e. the probability that all(cons(y) .> 0.).

If the problem is constrained on y and no feasible point has been observed yet, then the probability of feasibility alone is returned as the acquisition function.

Rather than using the actual evaluations (xᵢ, yᵢ) from the dataset, the best-so-far achieved fitness is calculated as the maximum fitness among the means ŷᵢ of the posterior predictive distribution of the model evaluated at xᵢ. This is a simple way to handle evaluation noise, which may not be suitable for problems with substantial noise. In case of Bayesian inference, an averaged posterior of the model posterior samples is used for the prediction of ŷᵢ.

Keywords

  • ϵ_samples::Int: Controls how many samples are used to approximate EI. The ϵ_samples keyword is ignored unless a MAP model fitter and NonlinFitness are used! In case of a BI model fitter, the number of samples is instead set equal to the number of posterior samples. In case of LinearFitness, the expected improvement can be calculated analytically.

  • cons_safe::Bool: If set to true, the acquisition function acq(x) is made 'constraint-safe' by checking the bounds and constraints during each evaluation. Set cons_safe to true if the evaluation of the model at exterior points may cause errors or nonsensical values. You may set cons_safe to false if the evaluation of the model at exterior points can provide useful information to the acquisition maximizer and does not cause errors. Defaults to true.

source
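For example (a sketch):

acquisition = ExpectedImprovement(;
    ϵ_samples = 200,   # only relevant with a MAP fitter and NonlinFitness
    cons_safe = true,  # check bounds and constraints on every acq(x) evaluation
)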

Termination Conditions

The TermCond type is used to define the termination condition of the BO procedure.

BOSS.TermCondType

Specifies the termination condition of the whole BOSS algorithm. Inherit this type to define a custom termination condition.

Example: struct CustomCond <: TermCond ... end

Structures derived from this type have to implement the following method: (cond::CustomCond)(problem::BossProblem)

This method should return true to keep the optimization running and return false once the optimization is to be terminated.

See also: IterLimit

source
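For illustration, a hypothetical condition that stops once the dataset reaches a given size might be sketched as follows. (DataLimit is not part of BOSS.jl; the sketch assumes the dataset stores the inputs column-wise in problem.data.X.)

using BOSS

struct DataLimit <: TermCond
    max_data::Int
end

# keep running while the dataset contains fewer than max_data points
(cond::DataLimit)(problem::BossProblem) = size(problem.data.X, 2) < cond.max_data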

The IterLimit terminates the procedure after a predefined number of iterations.

BOSS.IterLimitType
IterLimit(iter_max::Int)

Terminates the BOSS algorithm after a predefined number of iterations.

See also: bo!

source

Miscellaneous

The BossOptions structure is used to define miscellaneous hyperparameters of the BOSS.jl package.

BOSS.BossOptionsType
BossOptions(; kwargs...)

Stores miscellaneous settings of the BOSS algorithm.

Keywords

  • info::Bool: Setting info=false silences the BOSS algorithm.
  • debug::Bool: Set debug=true to print stacktraces of caught optimization errors.
  • parallel_evals::Symbol: Possible values: :serial, :parallel, :distributed. Defaults to :parallel. Determines whether to run multiple objective function evaluations within one batch in serial, parallel, or distributed fashion. (Only has an effect if a batching AM is used.)
  • callback::BossCallback: If provided, callback(::BossProblem; kwargs...) will be called before the BO procedure starts and after every iteration.

See also: bo!

source
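For example (a sketch):

options = BossOptions(;
    info = true,                # print progress information
    debug = false,              # do not print stacktraces of caught errors
    parallel_evals = :parallel, # evaluate batched objective calls in parallel
)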

The BossCallback type is used to pass callbacks which will be called in every iteration of the BO procedure (and once before the procedure starts).

BOSS.BossCallbackType

If an object cb of type BossCallback is passed to BossOptions, the method shown below will be called before the BO procedure starts and after every iteration.

cb(problem::BossProblem;
     model_fitter::ModelFitter,
     acq_maximizer::AcquisitionMaximizer,
     acquisition::AcquisitionFunction,
     term_cond::TermCond,
     options::BossOptions,
     first::Bool,
)

The kwarg first is true on the first call before the BO procedure starts, and is false on all subsequent calls after each iteration.

See PlotCallback for an example usage of this feature for plotting.

source
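For illustration, a hypothetical callback that logs the dataset size after every iteration might look as follows. (IterLogger is not part of BOSS.jl, and the sketch assumes the inputs are stored column-wise in problem.data.X.)

using BOSS

struct IterLogger <: BossCallback end

function (cb::IterLogger)(problem::BossProblem; first::Bool, kwargs...)
    first && return  # skip the initial call before the first iteration
    @info "Dataset contains $(size(problem.data.X, 2)) points."
end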

The PlotCallback plots the state of the BO procedure in every iteration. It currently only supports one-dimensional input spaces.

BOSS.PlotCallbackType
PlotCallback(Plots; kwargs...)

If a PlotCallback is passed to BossOptions as callback, the state of the optimization problem is plotted in each iteration. Only works with one-dimensional x domains but supports multi-dimensional y.

Arguments

  • Plots::Module: Evaluate using Plots and pass the Plots module to PlotCallback.

Keywords

  • f_true::Union{Nothing, Function}: The true objective function to be plotted.
  • points::Int: The number of points in each plotted function.
  • xaxis::Symbol: Used to change the x axis scale (:identity, :log).
  • yaxis::Symbol: Used to change the y axis scale (:identity, :log).
  • title::String: The plot title.
source
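A possible usage (a sketch; it assumes the PlotCallback constructor shown above, and x -> sin(x[1]) is only a placeholder for the true objective):

using Plots

options = BossOptions(;
    callback = PlotCallback(Plots;
        f_true = x -> sin(x[1]),  # optional: plot the true objective
        points = 200,             # resolution of the plotted functions
    ),
)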

The NoCallback can be used to turn the callback functionality off.
BOSS.NoCallbackType
NoCallback()

Does nothing.

source


References

[1] Bobak Shahriari et al. “Taking the human out of the loop: A review of Bayesian optimization”. In: Proceedings of the IEEE 104.1 (2016), pp. 148–175.