gpflow.models.svgp#
Classes#
gpflow.models.svgp.SVGP_deprecated#
- class gpflow.models.svgp.SVGP_deprecated(kernel, likelihood, inducing_variable, *, mean_function=None, num_latent_gps=1, q_diag=False, q_mu=None, q_sqrt=None, whiten=True, num_data=None)[source]#
Bases: GPModel, ExternalDataTrainingLossMixin
This is the Sparse Variational GP (SVGP).
The key reference is Hensman et al. [HMG15].
- Parameters:
  - kernel (Kernel)
  - likelihood (Likelihood)
  - inducing_variable (Union[InducingVariables, Tensor, ndarray[Any, Any]])
  - mean_function (Optional[MeanFunction])
  - num_latent_gps (int)
  - q_diag (bool)
  - q_mu (Optional[Tensor])
  - q_sqrt (Optional[Tensor])
  - whiten (bool)
  - num_data (Optional[Tensor])
- elbo(data)[source]#
This gives a variational bound (the evidence lower bound or ELBO) on the log marginal likelihood of the model.
gpflow.models.svgp.SVGP_with_posterior#
- class gpflow.models.svgp.SVGP_with_posterior(kernel, likelihood, inducing_variable, *, mean_function=None, num_latent_gps=1, q_diag=False, q_mu=None, q_sqrt=None, whiten=True, num_data=None)[source]#
Bases: SVGP_deprecated
This is the Sparse Variational GP (SVGP).
The key reference is Hensman et al. [HMG15].
This class provides a posterior() method that enables caching for faster subsequent predictions.
- Parameters:
  - kernel (Kernel)
  - likelihood (Likelihood)
  - inducing_variable (Union[InducingVariables, Tensor, ndarray[Any, Any]])
  - mean_function (Optional[MeanFunction])
  - num_latent_gps (int)
  - q_diag (bool)
  - q_mu (Optional[Tensor])
  - q_sqrt (Optional[Tensor])
  - whiten (bool)
  - num_data (Optional[Tensor])
- posterior(precompute_cache=PrecomputeCacheType.TENSOR)[source]#
Create the Posterior object which contains precomputed matrices for faster prediction.
precompute_cache has three settings:
- PrecomputeCacheType.TENSOR (or “tensor”): Precomputes the cached quantities and stores them as tensors (which allows differentiating through the prediction). This is the default.
- PrecomputeCacheType.VARIABLE (or “variable”): Precomputes the cached quantities and stores them as variables, which allows for updating their values without changing the compute graph (relevant for AOT compilation).
- PrecomputeCacheType.NOCACHE (or “nocache” or None): Avoids immediate cache computation. This is useful for avoiding extraneous computations when you only want to call the posterior’s fused_predict_f method.
- Parameters:
  - precompute_cache (PrecomputeCacheType)
- Return type:
- predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#
For backwards compatibility, SVGP’s predict_f uses the fused (no-cache) computation, which is more efficient during training.
- For faster (cached) prediction, predict directly from the posterior object, i.e.:
model.posterior().predict_f(Xnew, …)
- Parameters:
  - Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) – Xnew has shape [batch…, N, D].
  - full_cov (bool)
  - full_output_cov (bool)
- Return type:
  Tuple[Tensor, Tensor]
- Returns:
  - return[0] has shape [batch…, N, P].
  - return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.
  - return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.
  - return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).
  - return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).