gpflow.models.gpr#
Classes#
gpflow.models.gpr.GPR_deprecated#
- class gpflow.models.gpr.GPR_deprecated(data, kernel, mean_function=None, noise_variance=None, likelihood=None)[source]#
Bases: GPModel, InternalDataTrainingLossMixin
Gaussian Process Regression.
This is a vanilla implementation of GP regression with a Gaussian likelihood. Multiple columns of Y are treated independently.
The log likelihood of this model is given by

\[\log p(Y \,|\, \mathbf f) = \log \mathcal N(Y \,|\, \mathbf f, \sigma_n^2 \mathbf{I})\]

To train the model, we maximise the log _marginal_ likelihood w.r.t. the likelihood variance and kernel hyperparameters theta. The marginal likelihood is found by integrating the likelihood over the prior, and has the form

\[\log p(Y \,|\, \sigma_n, \theta) = \log \mathcal N(Y \,|\, 0, \mathbf{K} + \sigma_n^2 \mathbf{I})\]

- Parameters:
data (Tuple[Union[ndarray[Any, Any], Tensor, Variable, Parameter], Union[ndarray[Any, Any], Tensor, Variable, Parameter]]) –
kernel (Kernel) –
mean_function (Optional[MeanFunction]) –
noise_variance (Union[int, float, Sequence[Any], ndarray[Any, Any], Tensor, Variable, Parameter, None]) –
likelihood (Optional[Gaussian]) –
- log_marginal_likelihood()[source]#
Computes the log marginal likelihood.
\[\log p(Y \,|\, \theta)\]

- Return type:
Tensor
- Returns:
return has shape [].
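The quantity this method computes can be sketched in plain NumPy, independent of GPflow (the `rbf` helper and all names below are illustrative, not GPflow's implementation):

```python
import numpy as np

def rbf(A, B, variance=1.0, lengthscale=1.0):
    """Squared-exponential kernel matrix k(A, B)."""
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def log_marginal_likelihood(X, Y, noise_variance=0.1):
    """log N(Y | 0, K + sigma_n^2 I), columns of Y treated independently."""
    N, P = X.shape[0], Y.shape[1]
    K = rbf(X, X) + noise_variance * np.eye(N)
    L = np.linalg.cholesky(K)              # K + sigma_n^2 I = L L^T
    alpha = np.linalg.solve(L, Y)          # L^{-1} Y
    quad = -0.5 * np.sum(alpha**2)         # -0.5 * sum_p Y_p^T K^{-1} Y_p
    logdet = -P * np.sum(np.log(np.diag(L)))   # -0.5 * P * log|K + sigma_n^2 I|
    const = -0.5 * N * P * np.log(2.0 * np.pi)
    return quad + logdet + const
```

The Cholesky factorisation gives both the quadratic form and the log determinant stably, which is also the standard way this density is evaluated in practice.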
- maximum_log_likelihood_objective()[source]#
Objective for maximum likelihood estimation. Should be maximized. E.g. log-marginal likelihood (hyperparameter likelihood) for GPR, or lower bound to the log-marginal likelihood (ELBO) for sparse and variational GPs.
- Return type:
Tensor
- Returns:
return has shape [].
- predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#
This method computes predictions at the new input points Xnew in R^{N x D}:

\[p(F^* \,|\, Y)\]

where F^* are the values of the GP at the new input points and Y are the noisy observations at the training data points.
- Parameters:
Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) – Xnew has shape [batch…, N, D].
full_cov (bool) –
full_output_cov (bool) –
- Return type:
Tuple[Tensor, Tensor]

- Returns:
return[0] has shape [batch…, N, P].
return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.
return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.
return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).
return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).
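For the single-output, full-covariance case, the closed-form posterior that predict_f evaluates can be sketched in NumPy (the `rbf` helper and function names are illustrative assumptions, not GPflow's code):

```python
import numpy as np

def rbf(A, B, variance=1.0, lengthscale=1.0):
    """Squared-exponential kernel matrix k(A, B)."""
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def predict_f(X, Y, Xnew, noise_variance=0.1):
    """Posterior mean and full covariance of F* = f(Xnew) given noisy Y."""
    Kmm = rbf(X, X) + noise_variance * np.eye(X.shape[0])
    Kmn = rbf(X, Xnew)
    Knn = rbf(Xnew, Xnew)
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Kmn)            # L^{-1} K_mn
    mean = A.T @ np.linalg.solve(L, Y)     # K_nm (K + sigma_n^2 I)^{-1} Y
    cov = Knn - A.T @ A                    # K_nn - K_nm (K + sigma_n^2 I)^{-1} K_mn
    return mean, cov
```

Note that this is the latent-function posterior: the noise variance enters through the training covariance only, and is not added to the returned covariance.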
gpflow.models.gpr.GPR_with_posterior#
- class gpflow.models.gpr.GPR_with_posterior(data, kernel, mean_function=None, noise_variance=None, likelihood=None)[source]#
Bases: GPR_deprecated
This is an implementation of GPR that provides a posterior() method that enables caching for faster subsequent predictions.
- Parameters:
data (Tuple[Union[ndarray[Any, Any], Tensor, Variable, Parameter], Union[ndarray[Any, Any], Tensor, Variable, Parameter]]) –
kernel (Kernel) –
mean_function (Optional[MeanFunction]) –
noise_variance (Union[int, float, Sequence[Any], ndarray[Any, Any], Tensor, Variable, Parameter, None]) –
likelihood (Optional[Gaussian]) –
- posterior(precompute_cache=PrecomputeCacheType.TENSOR)[source]#
Create the Posterior object which contains precomputed matrices for faster prediction.
precompute_cache has three settings:
- PrecomputeCacheType.TENSOR (or “tensor”): Precomputes the cached quantities and stores them as tensors (which allows differentiating through the prediction). This is the default.
- PrecomputeCacheType.VARIABLE (or “variable”): Precomputes the cached quantities and stores them as variables, which allows for updating their values without changing the compute graph (relevant for AOT compilation).
- PrecomputeCacheType.NOCACHE (or “nocache” or None): Avoids immediate cache computation. This is useful for avoiding extraneous computations when you only want to call the posterior’s fused_predict_f method.
- Parameters:
precompute_cache (PrecomputeCacheType) –

- Return type:
- predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#
For backwards compatibility, GPR’s predict_f uses the fused (no-cache) computation, which is more efficient during training.
- For faster (cached) prediction, predict directly from the posterior object, i.e.,:
model.posterior().predict_f(Xnew, …)
- Parameters:
Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) – Xnew has shape [batch…, N, D].
full_cov (bool) –
full_output_cov (bool) –
- Return type:
Tuple[Tensor, Tensor]

- Returns:
return[0] has shape [batch…, N, P].
return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.
return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.
return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).
return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).