gpflow.models.svgp
gpflow.models.svgp.SVGP_deprecated
- class gpflow.models.svgp.SVGP_deprecated(kernel, likelihood, inducing_variable, *, mean_function=None, num_latent_gps=1, q_diag=False, q_mu=None, q_sqrt=None, whiten=True, num_data=None)[source]
Bases: gpflow.models.model.GPModel, gpflow.models.training_mixins.ExternalDataTrainingLossMixin
This is the Sparse Variational GP (SVGP). The key reference is:
@inproceedings{hensman2014scalable,
  title={Scalable Variational Gaussian Process Classification},
  author={Hensman, James and Matthews, Alexander G. de G. and Ghahramani, Zoubin},
  booktitle={Proceedings of AISTATS},
  year={2015}
}
- Attributes
  name: Returns the name of this module as passed or determined in the ctor.
  name_scope: Returns a tf.name_scope instance for this class.
  non_trainable_variables: Sequence of non-trainable variables owned by this module and its submodules.
  parameters
  submodules: Sequence of all sub-modules.
  trainable_parameters
  trainable_variables: Sequence of trainable variables owned by this module and its submodules.
  variables: Sequence of variables owned by this module and its submodules.
Methods
  calc_num_latent_gps(kernel, likelihood, ...): Calculates the number of latent GPs required given the number of outputs output_dim and the type of likelihood and kernel.
  calc_num_latent_gps_from_data(data, kernel, ...): Calculates the number of latent GPs required based on the data as well as the type of kernel and likelihood.
  elbo(data): This gives a variational bound (the evidence lower bound or ELBO) on the log marginal likelihood of the model.
  log_posterior_density(*args, **kwargs): This may be the posterior with respect to the hyperparameters (e.g. ...)
  log_prior_density(): Sum of the log prior probability densities of all (constrained) variables in this model.
  maximum_log_likelihood_objective(data): Objective for maximum likelihood estimation.
  predict_f_samples(Xnew[, num_samples, ...]): Produce samples from the posterior latent function(s) at the input points.
  predict_log_density(data[, full_cov, ...]): Compute the log density of the data at the new data points.
  predict_y(Xnew[, full_cov, full_output_cov]): Compute the mean and variance of the held-out data at the input points.
  training_loss(data): Returns the training loss for this model.
  training_loss_closure(data, *[, compile]): Returns a closure that computes the training loss, which by default is wrapped in tf.function().
  with_name_scope(method): Decorator to automatically enter the module name scope.
  predict_f
  prior_kl
- Parameters
  num_latent_gps (int)
  q_diag (bool)
  whiten (bool)
- elbo(data)[source]
This gives a variational bound (the evidence lower bound or ELBO) on the log marginal likelihood of the model.
gpflow.models.svgp.SVGP_with_posterior
- class gpflow.models.svgp.SVGP_with_posterior(kernel, likelihood, inducing_variable, *, mean_function=None, num_latent_gps=1, q_diag=False, q_mu=None, q_sqrt=None, whiten=True, num_data=None)[source]
Bases: gpflow.models.svgp.SVGP_deprecated
This is the Sparse Variational GP (SVGP). The key reference is:
@inproceedings{hensman2014scalable,
  title={Scalable Variational Gaussian Process Classification},
  author={Hensman, James and Matthews, Alexander G. de G. and Ghahramani, Zoubin},
  booktitle={Proceedings of AISTATS},
  year={2015}
}
This class provides a posterior() method that enables caching for faster subsequent predictions.
- Attributes
  name: Returns the name of this module as passed or determined in the ctor.
  name_scope: Returns a tf.name_scope instance for this class.
  non_trainable_variables: Sequence of non-trainable variables owned by this module and its submodules.
  parameters
  submodules: Sequence of all sub-modules.
  trainable_parameters
  trainable_variables: Sequence of trainable variables owned by this module and its submodules.
  variables: Sequence of variables owned by this module and its submodules.
Methods
  calc_num_latent_gps(kernel, likelihood, ...): Calculates the number of latent GPs required given the number of outputs output_dim and the type of likelihood and kernel.
  calc_num_latent_gps_from_data(data, kernel, ...): Calculates the number of latent GPs required based on the data as well as the type of kernel and likelihood.
  elbo(data): This gives a variational bound (the evidence lower bound or ELBO) on the log marginal likelihood of the model.
  log_posterior_density(*args, **kwargs): This may be the posterior with respect to the hyperparameters (e.g. ...)
  log_prior_density(): Sum of the log prior probability densities of all (constrained) variables in this model.
  maximum_log_likelihood_objective(data): Objective for maximum likelihood estimation.
  posterior([precompute_cache]): Create the Posterior object which contains precomputed matrices for faster prediction.
  predict_f(Xnew[, full_cov, full_output_cov]): For backwards compatibility, SVGP's predict_f uses the fused (no-cache) computation, which is more efficient during training.
  predict_f_samples(Xnew[, num_samples, ...]): Produce samples from the posterior latent function(s) at the input points.
  predict_log_density(data[, full_cov, ...]): Compute the log density of the data at the new data points.
  predict_y(Xnew[, full_cov, full_output_cov]): Compute the mean and variance of the held-out data at the input points.
  training_loss(data): Returns the training loss for this model.
  training_loss_closure(data, *[, compile]): Returns a closure that computes the training loss, which by default is wrapped in tf.function().
  with_name_scope(method): Decorator to automatically enter the module name scope.
  prior_kl
- Parameters
  num_latent_gps (int)
  q_diag (bool)
  whiten (bool)
- posterior(precompute_cache=PrecomputeCacheType.TENSOR)[source]
Create the Posterior object which contains precomputed matrices for faster prediction.
precompute_cache has three settings:
  - PrecomputeCacheType.TENSOR (or "tensor"): Precomputes the cached quantities and stores them as tensors (which allows differentiating through the prediction). This is the default.
  - PrecomputeCacheType.VARIABLE (or "variable"): Precomputes the cached quantities and stores them as variables, which allows for updating their values without changing the compute graph (relevant for AOT compilation).
  - PrecomputeCacheType.NOCACHE (or "nocache" or None): Avoids immediate cache computation. This is useful for avoiding extraneous computations when you only want to call the posterior's fused_predict_f method.
- predict_f(Xnew, full_cov=False, full_output_cov=False)[source]
For backwards compatibility, SVGP's predict_f uses the fused (no-cache) computation, which is more efficient during training.
For faster (cached) prediction, predict directly from the posterior object, i.e.:
  model.posterior().predict_f(Xnew, ...)
- Parameters
  Xnew (Union[ndarray, Tensor, Variable, Parameter])
- Return type
  Tuple[Tensor, Tensor]