Gaussian Processes
GaussianProcesses.predict_f — Method
`predict_f(gp::GPBase, X::Matrix{Float64}; full_cov::Bool = false)`
Return the posterior mean and variance of the Gaussian process `gp` at specific points, which are given as columns of the matrix `X`. If `full_cov` is `true`, the full covariance matrix is returned instead of only the variances.
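A minimal usage sketch (the data and hyperparameter values are illustrative, not taken from the package documentation):

```julia
using GaussianProcesses

# Toy 1-D regression data (illustrative)
x = 2π .* rand(20)
y = sin.(x) .+ 0.05 .* randn(20)

gp = GP(x, y, MeanZero(), SE(0.0, 0.0), -2.0)

# Posterior mean and marginal variances at test points
xtest = collect(range(0.0, stop = 2π, length = 100))
μ, σ² = predict_f(gp, xtest)

# Full posterior covariance matrix instead of marginal variances
μ, Σ = predict_f(gp, xtest; full_cov = true)
```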
GaussianProcesses.GPE — Type
`GPE(x, y, mean, kernel[, logNoise])`
Fit a Gaussian process to a set of training points. The Gaussian process is defined in terms of its user-defined mean and covariance (kernel) functions. By default the observations are treated as essentially noise free.
Arguments:
- `x::AbstractVecOrMat{Float64}`: Input observations
- `y::AbstractVector{Float64}`: Output observations
- `mean::Mean`: Mean function
- `kernel::Kernel`: Covariance function
- `logNoise::Float64`: Natural logarithm of the standard deviation of the observation noise. The default is -2.0, i.e. a noise standard deviation of exp(-2.0) ≈ 0.14, which treats the observations as approximately noise free.
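For example, a GPE over 2-D inputs with an ARD kernel might be constructed as follows (illustrative data; one observation per column of `x`):

```julia
using GaussianProcesses

x = rand(2, 30)                                       # 2-D inputs, one point per column
y = vec(sum(sin.(x); dims = 1)) .+ 0.1 .* randn(30)   # noisy outputs

# ARD squared-exponential kernel: one log length scale per input dimension
gp = GPE(x, y, MeanZero(), SEArd(zeros(2), 0.0), -2.0)
```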
GaussianProcesses.GPE — Method
`GPE(; mean::Mean = MeanZero(), kernel::Kernel = SE(0.0, 0.0), logNoise::AbstractFloat = -2.0)`
Construct a GPE object without observations.
GaussianProcesses.GP — Function
`GP(x, y, mean::Mean, kernel::Kernel[, logNoise::AbstractFloat=-2.0])`
Fit a Gaussian process defined by its `mean`, its `kernel`, and the logarithm `logNoise` of the standard deviation of its observation noise to a set of training points `x` and `y`.
See also: GPE
GaussianProcesses.update_target! — Method
`update_target!(gp::GPE, ...)`
Update the log-posterior of a Gaussian process `gp`.
GaussianProcesses.GPA — Method
`GPA(x, y, mean, kernel, lik)`
Fit a Gaussian process to a set of training points. The Gaussian process with non-Gaussian observations is defined in terms of its user-defined likelihood, mean, and covariance (kernel) functions.
The non-Gaussian likelihood is handled by an approximate method (e.g. Monte Carlo). The latent function values are represented by centered (whitened) variables $f(x) = m(x) + Lv$ where $v ∼ N(0, I)$ and $LLᵀ = K_θ$.
Arguments:
- `x::AbstractVecOrMat{Float64}`: Input observations
- `y::AbstractVector{<:Real}`: Output observations
- `mean::Mean`: Mean function
- `kernel::Kernel`: Covariance function
- `lik::Likelihood`: Likelihood function
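A sketch of binary classification with a Bernoulli likelihood, constructed via the GP convenience function documented next (illustrative data; this assumes `Mat32Iso` and `BernLik` name the package's Matérn-3/2 kernel and Bernoulli likelihood):

```julia
using GaussianProcesses

x = collect(range(-3.0, stop = 3.0, length = 40))
y = Vector{Bool}(x .+ 0.5 .* randn(40) .> 0.0)   # Bool-valued observations

# Passing a likelihood yields a GP with non-Gaussian observations (GPA)
gp = GP(x, y, MeanZero(), Mat32Iso(0.0, 0.0), BernLik())
```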
GaussianProcesses.GP — Method
`GP(x, y, mean::Mean, kernel::Kernel, lik::Likelihood)`
Fit a Gaussian process defined by its `mean`, its `kernel`, and its likelihood function `lik` to a set of training points `x` and `y`.
See also: GPA
GaussianProcesses.update_target! — Method
`update_target!(gp::GPA, ...)`
Update the log-posterior of a Gaussian process `gp`.
GaussianProcesses.ess — Method
`ess(gp::GPBase; kwargs...)`
Sample the GP hyperparameters using the elliptical slice sampling algorithm described in:
Murray, Iain, Ryan P. Adams, and David J. C. MacKay. "Elliptical slice sampling." Journal of Machine Learning Research 9 (2010): 541-548.
Requires the hyperparameter priors to be Gaussian.
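A sketch of how this might be called, placing Gaussian priors on the two SE kernel hyperparameters via set_priors! and Distributions.jl (the nIter keyword is an assumption and may vary by package version):

```julia
using GaussianProcesses, Distributions

x = rand(20); y = sin.(x)
gp = GP(x, y, MeanZero(), SE(0.0, 0.0), -2.0)

# ess requires Gaussian hyperparameter priors
set_priors!(gp.kernel, [Normal(0.0, 2.0), Normal(0.0, 2.0)])

chain = ess(gp; nIter = 1000)   # posterior samples of the hyperparameters
```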
GaussianProcesses.mcmc — Method
`mcmc(gp::GPBase; kwargs...)`
Runs the Hamiltonian Monte Carlo algorithm for estimating the hyperparameters of a Gaussian process GPE and, in the case of a GPA, also the latent function.
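A minimal sketch, assuming the nIter, burn, and thin keywords:

```julia
using GaussianProcesses

x = rand(20); y = sin.(x) .+ 0.05 .* randn(20)
gp = GP(x, y, MeanZero(), SE(0.0, 0.0), -2.0)

# HMC over the GPE hyperparameters; returns posterior samples
chain = mcmc(gp; nIter = 1000, burn = 100, thin = 10)
```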
GaussianProcesses.CovarianceStrategy — Type
The abstract CovarianceStrategy type is for types that control how the covariance matrices and their positive definite representation are obtained or approximated. See SparseStrategy for examples.
GaussianProcesses.make_posdef! — Method
`make_posdef!(m::Matrix{Float64}, chol_factors::Matrix{Float64})`
Try to encode the covariance matrix `m` as a positive definite matrix. The `chol_factors` matrix is recycled to store the Cholesky decomposition, so as to reduce the number of memory allocations.
Sometimes covariance matrices of Gaussian processes are positive definite mathematically but have negative eigenvalues numerically. To resolve this issue, small weights are added to the diagonal (thereby raising all eigenvalues by that amount) until all eigenvalues are numerically positive.
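A self-contained sketch of this jitter technique (not the package's exact implementation; `jitter_cholesky` is a hypothetical name):

```julia
using LinearAlgebra

# Add increasing diagonal weights until the Cholesky factorization
# succeeds, i.e. until the matrix is numerically positive definite.
function jitter_cholesky(K::Matrix{Float64}; maxtries::Int = 10)
    ε = 1e-10 * tr(K) / size(K, 1)      # start from a tiny diagonal weight
    for _ in 1:maxtries
        F = cholesky(Symmetric(K + ε * I); check = false)
        issuccess(F) && return F
        ε *= 10                         # raise all eigenvalues a bit more
    end
    error("matrix could not be made positive definite")
end
```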
GaussianProcesses.predictMVN — Method
`predictMVN(xpred::AbstractMatrix, xtrain::AbstractMatrix, ytrain::AbstractVector, kernel::Kernel, meanf::Mean, alpha::AbstractVector, covstrat::CovarianceStrategy, Ktrain::AbstractPDMat)`
Compute predictions using the standard multivariate normal conditional distribution formulae.
GaussianProcesses.AbstractGradientPrecompute — Type
AbstractGradientPrecompute types hold the results of pre-computations of kernel gradients.
GaussianProcesses.dmll_kern! — Method
`dmll_kern!(dmll::AbstractVector, k::Kernel, X::AbstractMatrix, data::KernelData, ααinvcKI::AbstractMatrix)`
Derivative of the marginal log-likelihood $\log p(Y \mid θ)$ with respect to the kernel hyperparameters.
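For reference, this is the standard identity (see e.g. Rasmussen and Williams, Gaussian Processes for Machine Learning, Eq. 5.9), where $K_θ$ is the covariance matrix and $α = K_θ^{-1}(Y - μ)$, matching the quantity computed by get_ααinvcKI! below:

$$\frac{\partial}{\partial \theta_j} \log p(Y \mid \theta) = \frac{1}{2} \operatorname{tr}\!\left( \left(\alpha\alpha^\top - K_\theta^{-1}\right) \frac{\partial K_\theta}{\partial \theta_j} \right)$$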
GaussianProcesses.fit! — Method
`fit!(gp::GPE{X,Y}, x::X, y::Y)`
Fit the Gaussian process `gp` to a training data set consisting of input observations `x` and output observations `y`.
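For instance, combined with the keyword constructor above (this assumes the stored input type of an empty GPE is a Matrix{Float64}, so the inputs are passed as a one-row matrix):

```julia
using GaussianProcesses

gp = GPE(mean = MeanZero(), kernel = SE(0.0, 0.0), logNoise = -2.0)

x = rand(1, 10)          # one observation per column
y = vec(sin.(x))
fit!(gp, x, y)
```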
GaussianProcesses.get_ααinvcKI! — Method
`get_ααinvcKI!(ααinvcKI::Matrix{Float64}, cK::AbstractPDMat, α::Vector)`
Write $ααᵀ - cK⁻¹$ to `ααinvcKI` avoiding any memory allocation, where `cK` and `α` are the covariance matrix and the alpha vector of a Gaussian process, respectively. Here `α` is defined as $cK⁻¹(Y - μ)$.
GaussianProcesses.initialise_target! — Method
`initialise_target!(gp::GPE)`
Initialise the log-posterior of the Gaussian process `gp`.
GaussianProcesses.update_cK! — Method
`update_cK!(gp::GPE)`
Update the covariance matrix and its Cholesky decomposition for the Gaussian process `gp`.
GaussianProcesses.update_dmll! — Method
`update_dmll!(gp::GPE, ...)`
Update the gradient of the marginal log-likelihood of the Gaussian process `gp`.
GaussianProcesses.update_mll! — Method
`update_mll!(gp::GPE)`
A modification of initialise_target! that reuses existing matrices to avoid unnecessary memory allocations, which speeds things up significantly.
GaussianProcesses.update_mll_and_dmll! — Method
`update_mll_and_dmll!(gp::GPE, ...)`
Update the marginal log-likelihood of the Gaussian process `gp` together with its gradient.
GaussianProcesses.update_target_and_dtarget! — Method
`update_target_and_dtarget!(gp::GPE, ...)`
Update the log-posterior of the Gaussian process `gp` and its derivative.
GaussianProcesses.initialise_ll! — Method
`initialise_ll!(gp::GPA)`
Initialise the log-likelihood of the Gaussian process `gp`.
GaussianProcesses.initialise_target! — Method
`initialise_target!(gp::GPA)`
Initialise the log-posterior of the Gaussian process `gp`.
GaussianProcesses.update_cK! — Method
`update_cK!(gp::GPA)`
Update the covariance matrix and its Cholesky decomposition for the Gaussian process `gp`.
GaussianProcesses.update_dll! — Method
`update_dll!(gp::GPA, ...)`
Update the gradient of the log-likelihood of the Gaussian process `gp`.
GaussianProcesses.update_target_and_dtarget! — Method
`update_target_and_dtarget!(gp::GPA, ...)`
Update the log-posterior of the Gaussian process `gp` and its derivative.
GaussianProcesses.composite_param_names — Method
`composite_param_names(objects, prefix)`
Call get_param_names on each element of `objects` and prefix the returned names of the element at index `i` with `prefix * i * '_'`.
Examples
```julia
julia> GaussianProcesses.get_param_names(ProdKernel(Mat12Iso(1/2, 1/2), SEArd([0.0, 1.0], 0.0)))
5-element Array{Symbol,1}:
 :pk1_ll
 :pk1_lσ
 :pk2_ll_1
 :pk2_ll_2
 :pk2_lσ
```
GaussianProcesses.map_column_pairs! — Method
`map_column_pairs!(D::Matrix{Float64}, f, X::Matrix{Float64}[, Y::Matrix{Float64} = X])`
Like map_column_pairs, but stores the result in `D` rather than a new matrix.
GaussianProcesses.map_column_pairs — Method
`map_column_pairs(f, X::Matrix{Float64}[, Y::Matrix{Float64} = X])`
Create a matrix by applying the function `f` to each pair of columns of the input matrices `X` and `Y`.
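For example, this might compute a matrix of pairwise squared Euclidean distances (map_column_pairs is an internal utility, hence the module qualification):

```julia
using GaussianProcesses

X = rand(3, 5)   # five points in R^3, one per column

D = GaussianProcesses.map_column_pairs((a, b) -> sum(abs2, a .- b), X)
# D[i, j] holds the squared distance between columns i and j of X
```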