Cross Validation
GaussianProcesses.dlogpdθ_CVfold — Method

    dlogpdθ_CVfold_kern!(∂logp∂θ::AbstractVector{<:Real}, gp::GPE, folds::Folds;
                         noise::Bool, domean::Bool, kern::Bool)

Derivative of the cross-validation criterion for arbitrary folds with respect to the noise, mean and kernel hyperparameters. See Rasmussen & Williams, equation 5.13.
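For orientation, a minimal sketch of calling this method. The data, the fold layout (a vector of validation-index vectors), the hyperparameter ordering behind the gradient-buffer length, the gp.mean/gp.kernel field access, and the module-qualified num_params calls are all assumptions, not guarantees of the API.

    using GaussianProcesses, Random

    Random.seed!(1)
    x = 2π .* rand(20)
    y = sin.(x) .+ 0.1 .* randn(20)
    gp = GP(x, y, MeanZero(), SE(0.0, 0.0), -2.0)   # GPE with logNoise = -2.0

    # Four folds, each a vector of validation indices (assumed Folds layout).
    folds = [collect(i:i+4) for i in 1:5:20]

    # One gradient slot per differentiated hyperparameter; the ordering
    # (logNoise, then mean, then kernel parameters) is an assumption.
    n_hyp = 1 + GaussianProcesses.num_params(gp.mean) + GaussianProcesses.num_params(gp.kernel)
    ∂logp∂θ = zeros(n_hyp)
    GaussianProcesses.dlogpdθ_CVfold_kern!(∂logp∂θ, gp, folds; noise=true, domean=true, kern=true)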
GaussianProcesses.dlogpdθ_LOO — Method

    dlogpdθ_LOO(gp::GPE; noise::Bool, domean::Bool, kern::Bool)

Derivatives of the leave-one-out CV criterion with respect to the noise, mean and kernel hyperparameters.

See also: logp_LOO, dlogpdθ_LOO_kern!, dlogpdσ2_LOO
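A companion sketch evaluating the LOO criterion and its gradient, reusing gp from the fold sketch above; the functions are module-qualified on the assumption that they are not exported.

    # Reusing gp from the dlogpdθ_CVfold sketch above.
    loo  = GaussianProcesses.logp_LOO(gp)
    grad = GaussianProcesses.dlogpdθ_LOO(gp; noise=true, domean=true, kern=true)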
GaussianProcesses.dlogpdθ_LOO_kern! — Method

    dlogpdθ_LOO_kern!(gp::GPE)

Gradient of the leave-one-out CV criterion with respect to the kernel hyperparameters. See Rasmussen & Williams, equation 5.13.

See also: logp_LOO, dlogpdσ2_LOO, dlogpdθ_LOO
GaussianProcesses.dlogpdσ2_LOO — Method

    dlogpdσ2_LOO(invΣ::PDMat, x::AbstractMatrix, y::AbstractVector, data::KernelData, alpha::AbstractVector)

Derivative of the leave-one-out CV criterion with respect to the logNoise parameter.

See also: logp_LOO, dlogpdθ_LOO_kern!, dlogpdθ_LOO
GaussianProcesses.gradient_fold — Method

    gradient_fold(invΣ, alpha, ZjΣinv, Zjα, V::AbstractVector{Int})

Gradient with respect to the kernel hyperparameters of the CV criterion component for one validation fold:

    log p(Y_V | Y_T)

where Y_V is the validation set and Y_T is the training set (all other observations) for this fold.
GaussianProcesses.logp_CVfold — Method

    logp_CVfold(gp::GPE, folds::Folds)

CV criterion for arbitrary folds. A fold is a set of indices of the validation set.

See also: predict_CVfold, logp_LOO
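As a sketch, reusing gp and the assumed fold layout from above: with singleton folds the fold criterion should reduce to the LOO criterion, which gives a cheap consistency check.

    # Reusing gp and folds from the sketches above.
    cv = GaussianProcesses.logp_CVfold(gp, folds)

    # With singleton folds the fold criterion reduces to the LOO criterion.
    loo_folds = [[i] for i in 1:20]
    GaussianProcesses.logp_CVfold(gp, loo_folds) ≈ GaussianProcesses.logp_LOO(gp)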
GaussianProcesses.logp_LOO — Method

    logp_LOO(gp::GPE)

Leave-one-out log-probability CV criterion. It is computed by summing the normal log-pdf of each observation under the LOO predictive mean and variance obtained from the predict_LOO function.

See also: logp_CVfold, update_mll!
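Since the criterion is a plain sum of normal log-pdfs, it can be reproduced from the predict_LOO output; a sketch assuming predict_LOO returns a tuple of mean and variance vectors and that the observations are stored in gp.y:

    using Distributions: Normal, logpdf

    # Reusing gp from the sketches above.
    μ, σ2 = GaussianProcesses.predict_LOO(gp)
    loo_manual = sum(logpdf(Normal(μ[i], sqrt(σ2[i])), gp.y[i]) for i in eachindex(μ))
    loo_manual ≈ GaussianProcesses.logp_LOO(gp)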
GaussianProcesses.predict_CVfold — Method

    predict_CVfold(gp::GPE, folds::Folds)

Cross-validated predictions for arbitrary folds. A fold is a set of indices of the validation set. Returns predictions of y_V (the validation set) given all other observations y_T (the training set), as a vector of means and covariances. Using the notation from Rasmussen & Williams (see e.g. equation 5.12):

    μᵢ = 𝔼(yᵢ | y₋ᵢ)
    σᵢ = 𝕍(yᵢ | y₋ᵢ)^(1/2)

The output is the same as fitting the GP on x_T, y_T and calling predict_f on x_V to obtain the predictive mean and covariance, repeated for each fold V. With GPs, this can thankfully be done analytically with a bit of linear algebra, which is what this function implements.

See also: predict_LOO
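A sketch of the equivalence described above for a single fold, refitting on the training indices with the hyperparameters held fixed; the tuple return of predict_CVfold and the full_cov keyword are assumptions:

    # Reusing x, y, gp, and folds from the sketches above.
    μfold, Σfold = GaussianProcesses.predict_CVfold(gp, folds)

    # Cross-check the first fold by explicit refitting (hyperparameters fixed).
    V = folds[1]
    T = setdiff(1:20, V)
    gp_T = GP(x[T], y[T], MeanZero(), SE(0.0, 0.0), -2.0)
    μV, ΣV = predict_f(gp_T, x[V]; full_cov=true)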
GaussianProcesses.predict_LOO — Method

    predict_LOO(gp::GPE)

Leave-one-out cross-validated predictions. Returns predictions of yᵢ given all other observations y₋ᵢ, as a vector of means and variances. Using the notation from Rasmussen & Williams (see e.g. equation 5.12):

    μᵢ = 𝔼(yᵢ | y₋ᵢ)
    σᵢ = 𝕍(yᵢ | y₋ᵢ)^(1/2)

The output is the same as fitting the GP on x₋ᵢ, y₋ᵢ and calling predict_f on xᵢ to obtain the LOO predictive mean and variance, repeated for each observation i. With GPs, this can thankfully be done analytically with a bit of linear algebra, which is what this function implements.

See also: predict_CVfold, logp_LOO
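The same cross-check for a single left-out observation; the tuple return (means, variances) is assumed from the description above, and only the means are compared since predict_f returns the noise-free latent variance:

    # Reusing x, y, and gp from the sketches above.
    μ, σ2 = GaussianProcesses.predict_LOO(gp)

    i = 1
    keep = setdiff(1:20, [i])
    gp_i = GP(x[keep], y[keep], MeanZero(), SE(0.0, 0.0), -2.0)
    μᵢ, _ = predict_f(gp_i, [x[i]])
    isapprox(μ[i], μᵢ[1]; atol=1e-6)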