gpflow.models.vgp#
Classes#
gpflow.models.vgp.VGP_deprecated#
- class gpflow.models.vgp.VGP_deprecated(data, kernel, likelihood, mean_function=None, num_latent_gps=None)[source]#
Bases: GPModel, InternalDataTrainingLossMixin
This model approximates the Gaussian process posterior with a multivariate Gaussian.
The posterior over the function-value vector F is approximated by a Gaussian, and the KL divergence between this approximation and the true posterior is minimised.
This implementation is equivalent to SVGP with X=Z, but is more efficient. The whitened representation is used to aid optimization.
The posterior approximation is
\[q(\mathbf f) = N(\mathbf f \,|\, \boldsymbol \mu, \boldsymbol \Sigma)\]
For an example of its use, see Classification, other data distributions, VGP and SVGP.
- Parameters:
data (Tuple[Union[ndarray[Any, Any], Tensor, Variable, Parameter], Union[ndarray[Any, Any], Tensor, Variable, Parameter]]) –
kernel (Kernel) –
likelihood (Likelihood) –
mean_function (Optional[MeanFunction]) –
num_latent_gps (Optional[int]) –
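A minimal construction sketch (not part of the original reference), assuming toy 1-D regression data and a Gaussian likelihood:

```python
import numpy as np
import gpflow

# Toy data: X has shape [N, D], Y has shape [N, P].
rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 1))
Y = np.sin(10 * X) + 0.1 * rng.standard_normal((20, 1))

model = gpflow.models.VGP(
    data=(X, Y),
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
)
```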
- elbo()[source]#
This method computes the variational lower bound on the likelihood, which is:
\[\mathrm{ELBO} = \mathbb{E}_{q(F)}\left[\log p(Y \,|\, F)\right] - \mathrm{KL}\left[q(F) \,\|\, p(F)\right]\]
with
\[q(\mathbf f) = N(\mathbf f \,|\, \boldsymbol \mu, \boldsymbol \Sigma)\]
- Return type:
Tensor
- Returns:
return has shape [].
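A hedged usage sketch, assuming the model constructed above: the ELBO is the quantity maximised during training, and training_loss (from InternalDataTrainingLossMixin) minimises its negative.

```python
opt = gpflow.optimizers.Scipy()
opt.minimize(model.training_loss, model.trainable_variables)
print(model.elbo())  # scalar tf.Tensor; increases as training progresses
```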
- maximum_log_likelihood_objective()[source]#
Objective for maximum likelihood estimation; should be maximised. For GPR this is the log-marginal likelihood (hyperparameter likelihood); for sparse and variational GPs it is a lower bound on the log-marginal likelihood (the ELBO).
- Return type:
Tensor
- Returns:
return has shape [].
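For VGP this objective is simply the ELBO. A small sketch (assuming the model above and that no parameter priors are set, in which case the training loss is the objective's negative):

```python
objective = model.maximum_log_likelihood_objective()  # == model.elbo() for VGP
loss = model.training_loss()  # -(objective + log prior density)
```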
- predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#
Compute the mean and variance of the posterior latent function(s) at the input points.
Given $x_i$ this computes $f_i$, for:
\begin{align}
\theta & \sim p(\theta) \\
f & \sim \mathcal{GP}(m(x), k(x, x'; \theta)) \\
f_i & = f(x_i)
\end{align}

For an example of how to use predict_f, see Basic Usage with GPR.
- Parameters:
Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) – Xnew has shape [batch…, N, D]. Input locations at which to compute mean and variance.
full_cov (bool) – If True, compute the full covariance between the inputs. If False, return only the point-wise variance.
full_output_cov (bool) – If True, compute the full covariance between the outputs. If False, assume outputs are independent.
- Return type:
Tuple[Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, P].
return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.
return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.
return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).
return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).
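A brief prediction sketch (variable names carried over from the construction example above), illustrating the documented return shapes:

```python
Xnew = np.linspace(0.0, 1.0, 100)[:, None]  # [N, D] test inputs
mean, var = model.predict_f(Xnew)           # each has shape [N, P]
_, cov = model.predict_f(Xnew, full_cov=True)  # cov has shape [P, N, N]
```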
gpflow.models.vgp.VGP_with_posterior#
- class gpflow.models.vgp.VGP_with_posterior(data, kernel, likelihood, mean_function=None, num_latent_gps=None)[source]#
Bases: VGP_deprecated
An implementation of VGP that provides a posterior() method, enabling caching for faster subsequent predictions.
- Parameters:
data (Tuple[Union[ndarray[Any, Any], Tensor, Variable, Parameter], Union[ndarray[Any, Any], Tensor, Variable, Parameter]]) –
kernel (Kernel) –
likelihood (Likelihood) –
mean_function (Optional[MeanFunction]) –
num_latent_gps (Optional[int]) –
- posterior(precompute_cache=PrecomputeCacheType.TENSOR)[source]#
Create the Posterior object which contains precomputed matrices for faster prediction.
precompute_cache has three settings:
- PrecomputeCacheType.TENSOR (or “tensor”): Precomputes the cached quantities and stores them as tensors (which allows differentiating through the prediction). This is the default.
- PrecomputeCacheType.VARIABLE (or “variable”): Precomputes the cached quantities and stores them as variables, which allows for updating their values without changing the compute graph (relevant for AOT compilation).
- PrecomputeCacheType.NOCACHE (or “nocache” or None): Avoids immediate cache computation. This is useful for avoiding extraneous computations when you only want to call the posterior’s fused_predict_f method.
- Parameters:
precompute_cache (PrecomputeCacheType) –
- Return type:
VGPPosterior
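A caching sketch (variable names assumed from the examples above; the calls are those documented here):

```python
posterior = model.posterior()  # default: PrecomputeCacheType.TENSOR
mean, var = posterior.predict_f(Xnew)  # reuses the precomputed matrices
```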
- predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#
For backwards compatibility, VGP’s predict_f uses the fused (no-cache) computation, which is more efficient during training.
- For faster (cached) prediction, predict directly from the posterior object, i.e.:
model.posterior().predict_f(Xnew, …)
- Parameters:
Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) – Xnew has shape [batch…, N, D].
full_cov (bool) –
full_output_cov (bool) –
- Return type:
Tuple[Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, P].
return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.
return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.
return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).
return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).
Functions#
gpflow.models.vgp.update_vgp_data#
- gpflow.models.vgp.update_vgp_data(vgp, new_data)[source]#
Set the data on the given VGP model, and update its variational parameters.
Unlike many of the other models, the VGP has internal parameters whose shape depends on the shape of the data. This function updates the internal data of the given vgp and resizes the variational parameters to fit.
This function requires that the input vgp was created with `tf.Variable`s for `data`.
- Parameters:
new_data (Tuple[Union[ndarray[Any, Any], Tensor, Variable, Parameter], Union[ndarray[Any, Any], Tensor, Variable, Parameter]]) – new_data[0] has shape [N, D]. new_data[1] has shape [N, P].
vgp (VGP_deprecated) –
- Return type:
None
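A hedged sketch of the `tf.Variable` requirement stated above (the leading dimension is left as None so the number of data points N can change; variable names are illustrative):

```python
import numpy as np
import tensorflow as tf
import gpflow
from gpflow.models.vgp import update_vgp_data

X = np.random.rand(20, 1)
Y = np.sin(10 * X)

# data must be held in tf.Variables with an unconstrained leading
# dimension, so the number of data points can grow later.
X_var = tf.Variable(X, shape=(None, 1), dtype=gpflow.default_float())
Y_var = tf.Variable(Y, shape=(None, 1), dtype=gpflow.default_float())

model = gpflow.models.VGP(
    (X_var, Y_var),
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
)

# Replace the data with a larger set; the variational
# parameters are resized to match.
X_new = np.concatenate([X, np.random.rand(10, 1)], axis=0)
Y_new = np.sin(10 * X_new)
update_vgp_data(model, (X_new, Y_new))
```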