gpflow.posteriors#
Classes#
gpflow.posteriors.AbstractPosterior#
- class gpflow.posteriors.AbstractPosterior(kernel, X_data, cache=None, mean_function=None)[source]#
Bases: gpflow.base.Module, abc.ABC

- Parameters
  - kernel (Kernel)
  - X_data (Union[Tensor, InducingVariables])
  - cache (Optional[Tuple[Tensor, ...]])
  - mean_function (Optional[MeanFunction])
- fused_predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#
Computes the predictive mean and (co)variance at Xnew, including the mean_function. Does not make use of caching.
- Parameters
  - Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter])
  - full_cov (bool)
  - full_output_cov (bool)
- Return type
  Tuple[Tensor, Tensor]
- predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#
Computes the predictive mean and (co)variance at Xnew, including the mean_function. Relies on the precomputed alpha and Qinv (see the _precompute method).
- Parameters
  - Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter])
  - full_cov (bool)
  - full_output_cov (bool)
- Return type
  Tuple[Tensor, Tensor]
- update_cache(precompute_cache=None)[source]#
Sets the cache to a tf.Tensor or tf.Variable, or clears it, depending on the value of precompute_cache. If precompute_cache is not given, the setting defaults to the most recently used one.
- Parameters
  - precompute_cache (Optional[PrecomputeCacheType])
- Return type
  None
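As a sketch of how these methods fit together (an illustration under assumptions not stated in this reference: it builds an SVGP model from gpflow.models and obtains a posterior via its posterior() method), predict_f reuses the cache while fused_predict_f recomputes everything on every call:

import numpy as np
import gpflow
from gpflow.posteriors import PrecomputeCacheType

Z = np.linspace(0.0, 1.0, 5)[:, None]  # inducing inputs (illustrative)
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
)
Xnew = np.linspace(0.0, 1.0, 20)[:, None]

# predict_f reuses the precomputed alpha and Qinv, so repeated calls are cheap.
posterior = model.posterior(precompute_cache=PrecomputeCacheType.TENSOR)
mean, var = posterior.predict_f(Xnew)

# fused_predict_f recomputes everything from the current parameters on each call.
fused_mean, fused_var = posterior.fused_predict_f(Xnew)

# After the model's parameters change (e.g. further training), refresh the cache in place.
posterior.update_cache()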
gpflow.posteriors.BasePosterior#
- class gpflow.posteriors.BasePosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#
Bases: gpflow.posteriors.AbstractPosterior

- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
  - q_mu (Tensor)
  - q_sqrt (Tensor)
  - whiten (bool)
  - mean_function (Optional[MeanFunction])
  - precompute_cache (Optional[PrecomputeCacheType])
gpflow.posteriors.FallbackIndependentLatentPosterior#
- class gpflow.posteriors.FallbackIndependentLatentPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#
Bases: gpflow.posteriors.FullyCorrelatedPosterior

- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
  - q_mu (Tensor)
  - q_sqrt (Tensor)
  - whiten (bool)
  - mean_function (Optional[MeanFunction])
  - precompute_cache (Optional[PrecomputeCacheType])
gpflow.posteriors.GPRPosterior#
- class gpflow.posteriors.GPRPosterior(kernel, data, likelihood_variance, mean_function, *, precompute_cache)[source]#
Bases: gpflow.posteriors.AbstractPosterior

- Parameters
  - kernel (Kernel)
  - data (Tuple[Union[ndarray[Any, Any], Tensor, Variable, Parameter], Union[ndarray[Any, Any], Tensor, Variable, Parameter]])
  - likelihood_variance (Parameter)
  - mean_function (MeanFunction)
  - precompute_cache (Optional[PrecomputeCacheType])
gpflow.posteriors.IndependentPosterior#
- class gpflow.posteriors.IndependentPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#
Bases: gpflow.posteriors.BasePosterior

- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
  - q_mu (Tensor)
  - q_sqrt (Tensor)
  - whiten (bool)
  - mean_function (Optional[MeanFunction])
  - precompute_cache (Optional[PrecomputeCacheType])
gpflow.posteriors.IndependentPosteriorMultiOutput#
- class gpflow.posteriors.IndependentPosteriorMultiOutput(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#
Bases: gpflow.posteriors.IndependentPosterior

- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
  - q_mu (Tensor)
  - q_sqrt (Tensor)
  - whiten (bool)
  - mean_function (Optional[MeanFunction])
  - precompute_cache (Optional[PrecomputeCacheType])
gpflow.posteriors.IndependentPosteriorSingleOutput#
- class gpflow.posteriors.IndependentPosteriorSingleOutput(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#
Bases: gpflow.posteriors.IndependentPosterior

- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
  - q_mu (Tensor)
  - q_sqrt (Tensor)
  - whiten (bool)
  - mean_function (Optional[MeanFunction])
  - precompute_cache (Optional[PrecomputeCacheType])
gpflow.posteriors.LinearCoregionalizationPosterior#
- class gpflow.posteriors.LinearCoregionalizationPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#
Bases: gpflow.posteriors.IndependentPosteriorMultiOutput

- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
  - q_mu (Tensor)
  - q_sqrt (Tensor)
  - whiten (bool)
  - mean_function (Optional[MeanFunction])
  - precompute_cache (Optional[PrecomputeCacheType])
gpflow.posteriors.PrecomputeCacheType#
- class gpflow.posteriors.PrecomputeCacheType(value)[source]#
Bases: enum.Enum

- PrecomputeCacheType.TENSOR (or “tensor”): Precomputes the cached quantities and stores them as tensors (which allows differentiating through the prediction). This is the default.
- PrecomputeCacheType.VARIABLE (or “variable”): Precomputes the cached quantities and stores them as variables, which allows for updating their values without changing the compute graph (relevant for AOT compilation).
- PrecomputeCacheType.NOCACHE (or “nocache” or None): Avoids immediate cache computation. This is useful for avoiding extraneous computations when you only want to call the posterior’s fused_predict_f method.
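As a small illustration (assuming the enum members are backed by the lowercase strings quoted above), the string spellings map back to the enum members, which is why APIs such as create_posterior below accept Union[PrecomputeCacheType, str, None]:

from gpflow.posteriors import PrecomputeCacheType

# Assumption: the members carry the lowercase string values quoted above.
assert PrecomputeCacheType("tensor") is PrecomputeCacheType.TENSOR
assert PrecomputeCacheType("variable") is PrecomputeCacheType.VARIABLE
assert PrecomputeCacheType("nocache") is PrecomputeCacheType.NOCACHE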
gpflow.posteriors.PrecomputedValue#
- class gpflow.posteriors.PrecomputedValue(value, axis_dynamic)[source]#
Bases: object

- Parameters
  - value (Tensor)
  - axis_dynamic (Tuple[bool, ...])
- axis_dynamic: Tuple[bool, ...]#
A tuple with one element per dimension of value. That element is True if that dimension of value might change size.
- value: tensorflow.python.framework.ops.Tensor#
The precomputed value itself.
- static wrap_alpha_Qinv(alpha, Qinv)[source]#
Wraps alpha and Qinv in PrecomputedValues.
- Parameters
  - alpha
  - Qinv
- Return type
  Tuple[PrecomputedValue, ...]
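A minimal sketch of wrapping a cached tensor by hand; the shape and the dynamic-axis flags are purely illustrative assumptions:

import tensorflow as tf
from gpflow.posteriors import PrecomputedValue

alpha = tf.zeros([50, 1], dtype=tf.float64)  # e.g. one row per datum, one column per latent GP
# Mark the first axis as dynamic: the number of data points may change,
# while the number of latent GPs stays fixed.
cached = PrecomputedValue(alpha, (True, False))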
gpflow.posteriors.SGPRPosterior#
- class gpflow.posteriors.SGPRPosterior(kernel, data, inducing_variable, likelihood_variance, num_latent_gps, mean_function, *, precompute_cache)[source]#
Bases: gpflow.posteriors.AbstractPosterior

This class represents posteriors which can be derived from SGPR models to compute faster predictions on unseen points.

- Parameters
  - kernel (Kernel)
  - data (Tuple[Union[ndarray[Any, Any], Tensor, Variable, Parameter], Union[ndarray[Any, Any], Tensor, Variable, Parameter]])
  - inducing_variable (InducingPoints)
  - likelihood_variance (Parameter)
  - num_latent_gps (int)
  - mean_function (MeanFunction)
  - precompute_cache (Optional[PrecomputeCacheType])
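A sketch of the intended workflow, assuming an SGPR model from gpflow.models that exposes a posterior() method (not documented in this section): the precomputed posterior is built once and then queried repeatedly.

import numpy as np
import gpflow

X = np.random.rand(100, 1)
Y = np.sin(10 * X) + 0.1 * np.random.randn(100, 1)
Z = np.linspace(0.0, 1.0, 10)[:, None]  # inducing inputs (illustrative)

model = gpflow.models.SGPR(
    data=(X, Y),
    kernel=gpflow.kernels.SquaredExponential(),
    inducing_variable=Z,
)

posterior = model.posterior()  # precomputes the cached quantities once
Xnew = np.linspace(0.0, 1.0, 200)[:, None]
mean, var = posterior.predict_f(Xnew)  # fast repeated predictions on unseen points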
gpflow.posteriors.VGPPosterior#
- class gpflow.posteriors.VGPPosterior(kernel, X, q_mu, q_sqrt, mean_function=None, white=True, *, precompute_cache)[source]#
Bases: gpflow.posteriors.AbstractPosterior

- Parameters
  - kernel (Kernel)
  - X (Tensor)
  - q_mu (Tensor)
  - q_sqrt (Tensor)
  - mean_function (Optional[MeanFunction])
  - white (bool)
  - precompute_cache (Optional[PrecomputeCacheType])
Functions#
gpflow.posteriors.create_posterior#
- gpflow.posteriors.create_posterior(kernel, inducing_variable, q_mu, q_sqrt, whiten, mean_function=None, precompute_cache=PrecomputeCacheType.TENSOR)[source]#
- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
  - q_mu (Union[ndarray[Any, Any], Tensor, Variable, Parameter])
  - q_sqrt (Union[ndarray[Any, Any], Tensor, Variable, Parameter])
  - whiten (bool)
  - mean_function (Optional[MeanFunction])
  - precompute_cache (Union[PrecomputeCacheType, str, None])
- Return type
  BasePosterior
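A hedged sketch of constructing a posterior directly from variational parameters, without going through a model object; the kernel choice, inducing inputs, and shapes are illustrative assumptions:

import numpy as np
import gpflow
from gpflow.posteriors import PrecomputeCacheType, create_posterior

M, L = 10, 1  # number of inducing points and latent GPs (illustrative)
kernel = gpflow.kernels.SquaredExponential()
inducing_variable = gpflow.inducing_variables.InducingPoints(
    np.linspace(0.0, 1.0, M)[:, None]
)
q_mu = np.zeros((M, L))                             # [M, L]
q_sqrt = np.tile(np.eye(M)[None, :, :], (L, 1, 1))  # [L, M, M], lower-triangular

posterior = create_posterior(
    kernel,
    inducing_variable,
    q_mu,
    q_sqrt,
    whiten=True,
    precompute_cache=PrecomputeCacheType.TENSOR,  # "tensor" would also be accepted
)
mean, var = posterior.predict_f(np.linspace(0.0, 1.0, 5)[:, None])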
gpflow.posteriors.get_posterior_class#
This function uses multiple dispatch: the implementation that is invoked depends on the types of the arguments passed in:
gpflow.posteriors.get_posterior_class( Kernel, InducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_base_case(...)
- gpflow.posteriors._get_posterior_base_case(kernel, inducing_variable)[source]#
- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
- Return type
  Type[BasePosterior]
gpflow.posteriors.get_posterior_class( MultioutputKernel, InducingPoints )
# dispatch to -> gpflow.posteriors._get_posterior_fully_correlated_mo(...)
- gpflow.posteriors._get_posterior_fully_correlated_mo(kernel, inducing_variable)[source]#
- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
- Return type
  Type[BasePosterior]
gpflow.posteriors.get_posterior_class( SharedIndependent, SeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( SeparateIndependent, SeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( SharedIndependent, SharedIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( SeparateIndependent, SharedIndependentInducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_independent_mo(...)
- gpflow.posteriors._get_posterior_independent_mo(kernel, inducing_variable)[source]#
- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
- Return type
  Type[BasePosterior]
gpflow.posteriors.get_posterior_class( IndependentLatent, FallbackSeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( IndependentLatent, FallbackSharedIndependentInducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_independentlatent_mo_fallback(...)
- gpflow.posteriors._get_posterior_independentlatent_mo_fallback(kernel, inducing_variable)[source]#
- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
- Return type
  Type[BasePosterior]
gpflow.posteriors.get_posterior_class( LinearCoregionalization, SeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( LinearCoregionalization, SharedIndependentInducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_linearcoregionalization_mo_efficient(...)
- gpflow.posteriors._get_posterior_linearcoregionalization_mo_efficient(kernel, inducing_variable)[source]#
- Parameters
  - kernel (Kernel)
  - inducing_variable (InducingVariables)
- Return type
  Type[BasePosterior]
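To illustrate the dispatch (a sketch; the class named in the final comment is what the base-case rule above suggests, not a guarantee):

import numpy as np
import gpflow
from gpflow.posteriors import get_posterior_class

kernel = gpflow.kernels.SquaredExponential()
inducing_variable = gpflow.inducing_variables.InducingPoints(
    np.linspace(0.0, 1.0, 10)[:, None]
)

# A plain Kernel with plain InducingVariables hits the base case above.
posterior_class = get_posterior_class(kernel, inducing_variable)
print(posterior_class)  # expected: a BasePosterior subclass, e.g. IndependentPosteriorSingleOutput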