gpflow.models.vgp
gpflow.models.vgp.VGP_deprecated
- class gpflow.models.vgp.VGP_deprecated(data, kernel, likelihood, mean_function=None, num_latent_gps=None)
Bases: gpflow.models.model.GPModel, gpflow.models.training_mixins.InternalDataTrainingLossMixin
This method approximates the Gaussian process posterior using a multivariate Gaussian.
The idea is that the posterior over the function-value vector F is approximated by a Gaussian, and the KL divergence is minimised between the approximation and the posterior.
This implementation is equivalent to SVGP with X=Z, but is more efficient. The whitened representation is used to aid optimization.
The posterior approximation is
\[q(\mathbf f) = N(\mathbf f \,|\, \boldsymbol \mu, \boldsymbol \Sigma)\]
- Attributes
  - name: Returns the name of this module as passed or determined in the ctor.
  - name_scope: Returns a tf.name_scope instance for this class.
  - non_trainable_variables: Sequence of non-trainable variables owned by this module and its submodules.
  - parameters
  - submodules: Sequence of all sub-modules.
  - trainable_parameters
  - trainable_variables: Sequence of trainable variables owned by this module and its submodules.
  - variables: Sequence of variables owned by this module and its submodules.
Methods
- calc_num_latent_gps(kernel, likelihood, ...): Calculates the number of latent GPs required given the number of outputs output_dim and the type of likelihood and kernel.
- calc_num_latent_gps_from_data(data, kernel, ...): Calculates the number of latent GPs required based on the data as well as the type of kernel and likelihood.
- elbo(): This method computes the variational lower bound on the likelihood.
- log_posterior_density(*args, **kwargs): This may be the posterior with respect to the hyperparameters (e.g. …).
- log_prior_density(): Sum of the log prior probability densities of all (constrained) variables in this model.
- maximum_log_likelihood_objective(): Objective for maximum likelihood estimation.
- predict_f(Xnew[, full_cov, full_output_cov])
- predict_f_samples(Xnew[, num_samples, ...]): Produce samples from the posterior latent function(s) at the input points.
- predict_log_density(data[, full_cov, ...]): Compute the log density of the data at the new data points.
- predict_y(Xnew[, full_cov, full_output_cov]): Compute the mean and variance of the held-out data at the input points.
- training_loss(): Returns the training loss for this model.
- training_loss_closure(*[, compile]): Convenience method.
- with_name_scope(method): Decorator to automatically enter the module name scope.
- Parameters
  - data (Tuple[Union[ndarray, Tensor, Variable, Parameter], Union[ndarray, Tensor, Variable, Parameter]])
  - kernel (Kernel)
  - likelihood (Likelihood)
  - mean_function (Optional[MeanFunction])
  - num_latent_gps (Optional[int])
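As a minimal, hedged sketch of how such a model is typically constructed and optimised (the toy data, the SquaredExponential kernel, and the Gaussian likelihood below are illustrative choices, not part of this API reference):

```python
import numpy as np
import gpflow

# Toy 1D regression data (illustrative only).
X = np.random.rand(20, 1)
Y = np.sin(3.0 * X) + 0.1 * np.random.randn(20, 1)

# VGP takes the full dataset at construction time; q(f) is parameterised
# over the N training inputs (equivalent to SVGP with Z = X).
model = gpflow.models.VGP(
    data=(X, Y),
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
)

# training_loss is the negative ELBO; minimising it fits both the
# variational parameters and the kernel/likelihood hyperparameters.
opt = gpflow.optimizers.Scipy()
opt.minimize(model.training_loss, model.trainable_variables)
```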
gpflow.models.vgp.VGP_with_posterior
- class gpflow.models.vgp.VGP_with_posterior(data, kernel, likelihood, mean_function=None, num_latent_gps=None)
Bases: gpflow.models.vgp.VGP_deprecated
This is an implementation of VGP that provides a posterior() method, enabling caching for faster subsequent predictions.
- Attributes
  - name: Returns the name of this module as passed or determined in the ctor.
  - name_scope: Returns a tf.name_scope instance for this class.
  - non_trainable_variables: Sequence of non-trainable variables owned by this module and its submodules.
  - parameters
  - submodules: Sequence of all sub-modules.
  - trainable_parameters
  - trainable_variables: Sequence of trainable variables owned by this module and its submodules.
  - variables: Sequence of variables owned by this module and its submodules.
Methods
- calc_num_latent_gps(kernel, likelihood, ...): Calculates the number of latent GPs required given the number of outputs output_dim and the type of likelihood and kernel.
- calc_num_latent_gps_from_data(data, kernel, ...): Calculates the number of latent GPs required based on the data as well as the type of kernel and likelihood.
- elbo(): This method computes the variational lower bound on the likelihood.
- log_posterior_density(*args, **kwargs): This may be the posterior with respect to the hyperparameters (e.g. …).
- log_prior_density(): Sum of the log prior probability densities of all (constrained) variables in this model.
- maximum_log_likelihood_objective(): Objective for maximum likelihood estimation.
- posterior([precompute_cache]): Create the Posterior object which contains precomputed matrices for faster prediction.
- predict_f(Xnew[, full_cov, full_output_cov]): For backwards compatibility, VGP's predict_f uses the fused (no-cache) computation, which is more efficient during training.
- predict_f_samples(Xnew[, num_samples, ...]): Produce samples from the posterior latent function(s) at the input points.
- predict_log_density(data[, full_cov, ...]): Compute the log density of the data at the new data points.
- predict_y(Xnew[, full_cov, full_output_cov]): Compute the mean and variance of the held-out data at the input points.
- training_loss(): Returns the training loss for this model.
- training_loss_closure(*[, compile]): Convenience method.
- with_name_scope(method): Decorator to automatically enter the module name scope.
- Parameters
  - data (Tuple[Union[ndarray, Tensor, Variable, Parameter], Union[ndarray, Tensor, Variable, Parameter]])
  - kernel (Kernel)
  - likelihood (Likelihood)
  - mean_function (Optional[MeanFunction])
  - num_latent_gps (Optional[int])
- posterior(precompute_cache=PrecomputeCacheType.TENSOR)
Create the Posterior object which contains precomputed matrices for faster prediction.
precompute_cache has three settings:
- PrecomputeCacheType.TENSOR (or "tensor"): Precomputes the cached quantities and stores them as tensors (which allows differentiating through the prediction). This is the default.
- PrecomputeCacheType.VARIABLE (or "variable"): Precomputes the cached quantities and stores them as variables, which allows for updating their values without changing the compute graph (relevant for AOT compilation).
- PrecomputeCacheType.NOCACHE (or "nocache" or None): Avoids immediate cache computation. This is useful for avoiding extraneous computations when you only want to call the posterior's fused_predict_f method.
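A brief usage sketch of the caching path (the model setup mirrors the illustrative example above; Xnew is an arbitrary query array):

```python
import numpy as np
import gpflow
from gpflow.posteriors import PrecomputeCacheType

X = np.random.rand(20, 1)
Y = np.sin(3.0 * X)
model = gpflow.models.VGP(
    data=(X, Y),
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
)

# Build the cache once; TENSOR is the default setting.
posterior = model.posterior(PrecomputeCacheType.TENSOR)

# Repeated predictions now reuse the precomputed matrices.
Xnew = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
mean, var = posterior.predict_f(Xnew)
```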
- predict_f(Xnew, full_cov=False, full_output_cov=False)
For backwards compatibility, VGP's predict_f uses the fused (no-cache) computation, which is more efficient during training.
For faster (cached) prediction, predict directly from the posterior object, i.e.:
model.posterior().predict_f(Xnew, …)
- Parameters
  - Xnew (Union[ndarray, Tensor, Variable, Parameter])
- Return type
  Tuple[Tensor, Tensor]
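For contrast, a short sketch of the fused versus cached paths, continuing the hypothetical model and Xnew from the previous snippet:

```python
# Fused path: everything is recomputed on each call; appropriate inside
# a training loop where the variational parameters keep changing.
mean_fused, var_fused = model.predict_f(Xnew)

# Cached path: precompute once, then query repeatedly at lower cost.
posterior = model.posterior()
mean_cached, var_cached = posterior.predict_f(Xnew)
```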
gpflow.models.vgp.update_vgp_data
- gpflow.models.vgp.update_vgp_data(vgp, new_data)
Set the data on the given VGP model, and update its variational parameters.
Unlike many of the other models, the VGP has internal parameters whose shapes depend on the shape of the data. This function updates the internal data of the given vgp and resizes its variational parameters to fit.
This function requires that the input vgp was created with tf.Variables for data.
- Parameters
  - vgp (VGP_deprecated)
  - new_data (Tuple[Union[ndarray, Tensor, Variable, Parameter], Union[ndarray, Tensor, Variable, Parameter]])
- Return type
None
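A sketch of this requirement in use; the unconstrained leading dimension passed to tf.Variable is an assumption about how the resizable data would be set up, and the data themselves are illustrative:

```python
import numpy as np
import tensorflow as tf
import gpflow
from gpflow.models.vgp import update_vgp_data

# Create the data as tf.Variables whose leading dimension is left
# unconstrained, so arrays of a different length can be assigned later.
X0 = np.random.rand(10, 1)
Y0 = np.sin(3.0 * X0)
data = (
    tf.Variable(X0, shape=(None, 1), dtype=gpflow.default_float()),
    tf.Variable(Y0, shape=(None, 1), dtype=gpflow.default_float()),
)
model = gpflow.models.VGP(
    data=data,
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
)

# Swap in a larger dataset; the variational parameters (q_mu, q_sqrt)
# are resized to match the new number of data points.
X1 = np.random.rand(15, 1)
Y1 = np.sin(3.0 * X1)
update_vgp_data(model, (X1, Y1))
```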