Gaussian Processes

GaussianProcesses.predict_f (Method)
predict_f(gp::GPBase, X::Matrix{Float64}; full_cov::Bool = false)

Return the posterior mean and variance of the Gaussian process gp at specific points, which are given as columns of the matrix X. If full_cov is true, the full covariance matrix is returned instead of only the variances.

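A hedged usage sketch (the sine data, kernel choice, and hyperparameter values below are illustrative, not part of the docstring); the test inputs are passed as a 1×n matrix so that each point is a column, matching the signature above.

using GaussianProcesses, Random

Random.seed!(1)
x = 2π .* rand(20)                      # 1-D training inputs
y = sin.(x) .+ 0.05 .* randn(20)        # noisy training targets

gp = GP(x, y, MeanZero(), SE(0.0, 0.0), -2.0)

Xtest = reshape(collect(range(0.0, 2π, length=50)), 1, :)    # points as columns
μ, σ² = predict_f(gp, Xtest)                   # posterior mean and variances
μ, Σ  = predict_f(gp, Xtest; full_cov=true)    # full posterior covariance matrix
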
GaussianProcesses.GPE (Type)
GPE(x, y, mean, kernel[, logNoise])

Fit a Gaussian process to a set of training points. The Gaussian process is defined in terms of its user-defined mean and covariance (kernel) functions. By default the observations are assumed to be essentially noise free.

Arguments:

  • x::AbstractVecOrMat{Float64}: Input observations
  • y::AbstractVector{Float64}: Output observations
  • mean::Mean: Mean function
  • kernel::Kernel: Covariance function
  • logNoise::Float64: Natural logarithm of the standard deviation of the observation noise. The default is -2.0, i.e. a noise standard deviation of exp(-2) ≈ 0.14, so the observations are treated as essentially noise free.
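A hedged construction sketch (data and hyperparameter values are illustrative, not from the docstring):

using GaussianProcesses

x = collect(range(-3.0, 3.0, length=15))   # input observations
y = x.^2 .+ 0.1 .* randn(15)               # output observations

# SE(0.0, 0.0): log length-scale 0.0 and log signal standard deviation 0.0;
# logNoise = -1.0 corresponds to a noise standard deviation of exp(-1) ≈ 0.37.
gp = GPE(x, y, MeanZero(), SE(0.0, 0.0), -1.0)
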
GaussianProcesses.GPE (Method)
GPE(; mean::Mean = MeanZero(), kernel::Kernel = SE(0.0, 0.0), logNoise::AbstractFloat = -2.0)

Construct a GPE object without observations.

GaussianProcesses.GP (Function)
GP(x, y, mean::Mean, kernel::Kernel[, logNoise::AbstractFloat=-2.0])

Fit a Gaussian process, defined by its mean, its kernel, and the logarithm logNoise of its observation-noise standard deviation, to a set of training points x and y.

See also: GPE

GaussianProcesses.GPA (Method)
GPA(x, y, mean, kernel, lik)

Fit a Gaussian process to a set of training points. The Gaussian process with non-Gaussian observations is defined in terms of its user-defined likelihood, mean, and covariance (kernel) functions.

The non-Gaussian likelihood is handled by an approximate method (e.g. Monte Carlo). The latent function values are represented by centered (whitened) variables $f(x) = m(x) + Lv$ where $v ∼ N(0, I)$ and $LLᵀ = K_θ$.

Arguments:

  • x::AbstractVecOrMat{Float64}: Input observations
  • y::AbstractVector{<:Real}: Output observations
  • mean::Mean: Mean function
  • kernel::Kernel: Covariance function
  • lik::Likelihood: Likelihood function
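A hedged classification sketch, assuming the package's Bernoulli likelihood BernLik and the Matérn 3/2 kernel Mat32Iso (data, kernel, and hyperparameter values are illustrative):

using GaussianProcesses

x = collect(range(-3.0, 3.0, length=30))   # input observations
y = x .> 0.0                               # binary output observations

# Latent GP with a Bernoulli observation model for the binary labels.
gp = GPA(x, y, MeanZero(), Mat32Iso(0.0, 0.0), BernLik())
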
GaussianProcesses.GP (Method)
GP(x, y, mean::Mean, kernel::Kernel, lik::Likelihood)

Fit a Gaussian process, defined by its mean, its kernel, and its likelihood function lik, to a set of training points x and y.

See also: GPA

GaussianProcesses.update_target! (Method)
update_target!(gp::GPA, ...)

Update the log-posterior

\[\log p(θ, v | y) ∝ \log p(y | v, θ) + \log p(v) + \log p(θ)\]

of a Gaussian process gp.

GaussianProcesses.ess (Method)
ess(gp::GPBase; kwargs...)

Sample GP hyperparameters using the elliptical slice sampling algorithm described in:

Murray, Iain, Ryan P. Adams, and David JC MacKay. "Elliptical slice sampling." Journal of Machine Learning Research 9 (2010): 541-548.

Requires hyperparameter priors to be Gaussian.

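A hedged sampling sketch, assuming set_priors! and the kernel field of the fitted GP, with Normal priors from Distributions.jl on the kernel's two hyperparameters (data and prior choices are illustrative):

using GaussianProcesses, Distributions

x = collect(range(0.0, 5.0, length=20))
y = sin.(x) .+ 0.1 .* randn(20)

gp = GP(x, y, MeanZero(), SE(0.0, 0.0), -1.0)

# Elliptical slice sampling requires Gaussian priors on the sampled hyperparameters.
set_priors!(gp.kernel, [Normal(0.0, 1.0), Normal(0.0, 1.0)])

chain = ess(gp)   # posterior samples of the hyperparameters
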
GaussianProcesses.mcmc (Method)
mcmc(gp::GPBase; kwargs...)

Run a Hamiltonian Monte Carlo algorithm to estimate the hyperparameters of a Gaussian process GPE and, in the case of a GPA, also the latent function.

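A hedged sketch of HMC over the hyperparameters; the priors and the nIter keyword are assumptions based on common usage, and the data are illustrative:

using GaussianProcesses, Distributions

x = collect(range(0.0, 5.0, length=20))
y = sin.(x) .+ 0.1 .* randn(20)

gp = GP(x, y, MeanZero(), SE(0.0, 0.0), -1.0)
set_priors!(gp.kernel, [Normal(0.0, 1.0), Normal(0.0, 1.0)])

chain = mcmc(gp; nIter=1000)   # matrix of sampled hyperparameter values
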
GaussianProcesses.CovarianceStrategy (Type)

The abstract CovarianceStrategy type is for types that control how the covariance matrices and their positive definite representation are obtained or approximated. See SparseStrategy for examples.
GaussianProcesses.make_posdef! (Method)
make_posdef!(m::Matrix{Float64}, chol_factors::Matrix{Float64})

Try to encode the covariance matrix m as a positive definite matrix. The chol_factors matrix is recycled to store the Cholesky decomposition, so as to reduce the number of memory allocations.

Sometimes covariance matrices of Gaussian processes are positive definite mathematically but have negative eigenvalues numerically. To resolve this issue, small weights are added to the diagonal (thereby raising all eigenvalues by that amount) until all eigenvalues are numerically positive.

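An illustrative sketch of the jitter idea described above, not the library's internal implementation (the function name jitter_cholesky is hypothetical):

using LinearAlgebra

function jitter_cholesky(K::Matrix{Float64}; jitter=1e-10, maxiter=10)
    for _ in 1:maxiter
        F = cholesky(Symmetric(K); check=false)
        issuccess(F) && return F, K      # K is now numerically positive definite
        K = K + jitter * I               # raise every eigenvalue by `jitter`
        jitter *= 10                     # escalate if the jitter was too small
    end
    error("matrix could not be made positive definite")
end
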
GaussianProcesses.predictMVN (Method)
predictMVN(xpred::AbstractMatrix, xtrain::AbstractMatrix, ytrain::AbstractVector,
           kernel::Kernel, meanf::Mean, alpha::AbstractVector,
           covstrat::CovarianceStrategy, Ktrain::AbstractPDMat)

Compute predictions using the standard multivariate normal conditional distribution formulae.

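For reference, these are the usual Gaussian process posterior equations. Writing $m$ for the mean function, $K$ for the train-train covariance (Ktrain), $K_*$ for the train-test cross-covariance, $K_{**}$ for the test-test covariance, and $\alpha = K^{-1}(y_{\mathrm{train}} - m(x_{\mathrm{train}}))$ (the alpha argument):

\[\mu_* = m(x_{\mathrm{pred}}) + K_*^\top \alpha, \qquad \Sigma_* = K_{**} - K_*^\top K^{-1} K_*\]
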
GaussianProcesses.dmll_kern! (Method)
dmll_kern!(dmll::AbstractVector, k::Kernel, X::AbstractMatrix, data::KernelData, ααinvcKI::AbstractMatrix)

Derivative of the marginal log likelihood log p(Y|θ) with respect to the kernel hyperparameters.

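For reference, each component of this gradient follows the standard expression, consistent with the ααᵀ - cK⁻¹ term computed by get_ααinvcKI! below:

\[\frac{\partial}{\partial \theta_j} \log p(y \mid \theta) = \frac{1}{2}\,\mathrm{tr}\!\left(\left(\alpha\alpha^\top - K^{-1}\right)\frac{\partial K}{\partial \theta_j}\right), \qquad \alpha = K^{-1}(y - \mu)\]
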
GaussianProcesses.fit! (Method)
fit!(gp::GPE{X,Y}, x::X, y::Y)

Fit Gaussian process GPE to a training data set consisting of input observations x and output observations y.

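A hedged sketch (data and hyperparameter values are illustrative): refit an existing GPE to new observations of the same input and output types.

using GaussianProcesses

x = collect(range(-2.0, 2.0, length=10))
y = x.^3 .+ 0.1 .* randn(10)
gp = GPE(x, y, MeanZero(), SE(0.0, 0.0), -1.0)

# Replace the training data with a larger set of observations of the same types.
xnew = collect(range(-3.0, 3.0, length=25))
ynew = xnew.^3 .+ 0.1 .* randn(25)
fit!(gp, xnew, ynew)
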
GaussianProcesses.get_ααinvcKI! (Method)
get_ααinvcKI!(ααinvcKI::Matrix{Float64}, cK::AbstractPDMat, α::Vector)

Write ααᵀ - cK⁻¹ to ααinvcKI without allocating memory, where cK and α are the covariance matrix and the alpha vector of a Gaussian process, respectively. Here α is defined as cK⁻¹ (Y - μ).

GaussianProcesses.update_mll! (Method)
update_mll!(gp::GPE)

Modification of initialise_target! that reuses existing matrices to avoid unnecessary memory allocations, which speeds things up significantly.

GaussianProcesses.composite_param_names (Method)
composite_param_names(objects, prefix)

Call get_param_names on each element of objects and prefix the returned name of the element at index i with prefix * i * '_'.

Examples

julia> GaussianProcesses.get_param_names(ProdKernel(Mat12Iso(1/2, 1/2), SEArd([0.0, 1.0], 0.0)))
5-element Array{Symbol,1}:
 :pk1_ll
 :pk1_lσ
 :pk2_ll_1
 :pk2_ll_2
 :pk2_lσ
GaussianProcesses.map_column_pairs (Method)
map_column_pairs(f, X::Matrix{Float64}[, Y::Matrix{Float64} = X])

Create a matrix by applying function f to each pair of columns of input matrices X and Y.

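A hedged sketch of this internal utility; the distance function passed as f is illustrative, and it is assumed that f receives the two columns as vectors:

using GaussianProcesses

X = rand(2, 5)   # five 2-dimensional points, one per column

# Pairwise squared Euclidean distances between the columns of X.
D = GaussianProcesses.map_column_pairs((a, b) -> sum(abs2, a .- b), X)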