gpflow.posteriors#

Classes#

gpflow.posteriors.AbstractPosterior#

class gpflow.posteriors.AbstractPosterior(kernel, X_data, cache=None, mean_function=None)[source]#

Bases: Module, ABC

Parameters:
fused_predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#

Computes the predictive mean and (co)variance at Xnew, including the mean_function. Does not make use of caching (see the example below the return shapes).

Parameters:
  • Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

    • Xnew has shape [batch…, N, D].

  • full_cov (bool) –

  • full_output_cov (bool) –

Return type:

Tuple[Tensor, Tensor]

Returns:

  • return[0] has shape [batch…, N, P].

  • return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.

  • return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.

  • return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).

  • return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).
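
The following sketch shows one way fused_predict_f might be called; the SVGP model, its kernel, likelihood, inducing points and toy inputs are illustrative assumptions, not part of this reference.

# Hedged sketch: query fused_predict_f on the posterior of an SVGP model.
# Everything below (kernel, likelihood, inducing points, data) is illustrative.
import numpy as np
import gpflow
from gpflow.posteriors import PrecomputeCacheType

Z = np.linspace(0.0, 1.0, 5)[:, None]                # [M, D] inducing inputs
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
    num_latent_gps=1,
)

# NOCACHE skips precomputation, which suits fused_predict_f:
# it recomputes everything on every call anyway.
posterior = model.posterior(precompute_cache=PrecomputeCacheType.NOCACHE)

Xnew = np.linspace(0.0, 1.0, 10)[:, None]            # [N, D] test inputs
mean, var = posterior.fused_predict_f(Xnew)          # [N, P] and [N, P]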

predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#

Computes the predictive mean and (co)variance at Xnew, including the mean_function. Relies on the precomputed alpha and Qinv (see the _precompute method); an example follows the return shapes below.

Parameters:
  • Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

    • Xnew has shape [batch…, N, D].

  • full_cov (bool) –

  • full_output_cov (bool) –

Return type:

Tuple[Tensor, Tensor]

Returns:

  • return[0] has shape [batch…, N, P].

  • return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.

  • return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.

  • return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).

  • return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).
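
Continuing the SVGP sketch above, a hedged example of predict_f with and without full_cov; the cached posterior avoids recomputing quantities that depend only on the training data.

# Hedged sketch, reusing `model` and `Xnew` from the fused_predict_f example.
posterior = model.posterior(precompute_cache=PrecomputeCacheType.TENSOR)

mean, var = posterior.predict_f(Xnew)                    # [N, P] and [N, P]
mean2, cov = posterior.predict_f(Xnew, full_cov=True)    # [N, P] and [P, N, N]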

update_cache(precompute_cache=None)[source]#

Sets the cache to a tf.Tensor or tf.Variable, or clears it, depending on the value of precompute_cache. If precompute_cache is not given, the most recently used setting is applied.

Parameters:

precompute_cache (Optional[PrecomputeCacheType]) –

Return type:

None
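
A hedged sketch of the cache life-cycle, again reusing the SVGP model assumed above: storing the cache in tf.Variables lets it be refreshed after training without rebuilding a compiled prediction function.

# Hedged sketch, reusing `model` and `Xnew` from the examples above.
posterior = model.posterior(precompute_cache=PrecomputeCacheType.VARIABLE)

# ... train `model` here, e.g. with an optimizer of your choice ...

posterior.update_cache()   # no argument: reuses the most recent setting (VARIABLE)
mean, var = posterior.predict_f(Xnew)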

gpflow.posteriors.BasePosterior#

class gpflow.posteriors.BasePosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: AbstractPosterior

Parameters:

gpflow.posteriors.FallbackIndependentLatentPosterior#

class gpflow.posteriors.FallbackIndependentLatentPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: FullyCorrelatedPosterior

Parameters:

gpflow.posteriors.FullyCorrelatedPosterior#

class gpflow.posteriors.FullyCorrelatedPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: BasePosterior

Parameters:

gpflow.posteriors.GPRPosterior#

class gpflow.posteriors.GPRPosterior(kernel, data, likelihood, mean_function, *, precompute_cache)[source]#

Bases: AbstractPosterior

Parameters:

gpflow.posteriors.IndependentPosterior#

class gpflow.posteriors.IndependentPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: BasePosterior

Parameters:

gpflow.posteriors.IndependentPosteriorMultiOutput#

class gpflow.posteriors.IndependentPosteriorMultiOutput(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: IndependentPosterior

Parameters:

gpflow.posteriors.IndependentPosteriorSingleOutput#

class gpflow.posteriors.IndependentPosteriorSingleOutput(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: IndependentPosterior

Parameters:

gpflow.posteriors.LinearCoregionalizationPosterior#

class gpflow.posteriors.LinearCoregionalizationPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: IndependentPosteriorMultiOutput

Parameters:

gpflow.posteriors.PrecomputeCacheType#

class gpflow.posteriors.PrecomputeCacheType(value)[source]#

Bases: Enum

  • PrecomputeCacheType.TENSOR (or “tensor”): Precomputes the cached quantities and stores them as tensors (which allows differentiating through the prediction). This is the default.

  • PrecomputeCacheType.VARIABLE (or “variable”): Precomputes the cached quantities and stores them as variables, which allows for updating their values without changing the compute graph (relevant for AOT compilation).

  • PrecomputeCacheType.NOCACHE (or “nocache” or None): Avoids immediate cache computation. This is useful for avoiding extraneous computations when you only want to call the posterior’s fused_predict_f method.
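
A short hedged sketch of the three modes, using the illustrative SVGP model from the examples above.

p_tensor = model.posterior(precompute_cache=PrecomputeCacheType.TENSOR)    # default
p_variable = model.posterior(precompute_cache=PrecomputeCacheType.VARIABLE)
p_nocache = model.posterior(precompute_cache=PrecomputeCacheType.NOCACHE)

# The string aliases listed above appear to map onto the enum values, e.g.:
assert PrecomputeCacheType("tensor") is PrecomputeCacheType.TENSOR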

gpflow.posteriors.PrecomputedValue#

class gpflow.posteriors.PrecomputedValue(value, axis_dynamic)[source]#

Bases: object

Parameters:
  • value (Tensor) –

  • axis_dynamic (Tuple[bool, ...]) –

axis_dynamic: Tuple[bool, ...]#

A tuple with one element per dimension of value. That element is True if that dimension of value might change size.

value: Tensor#

The precomputed value itself.

static wrap_alpha_Qinv(alpha, Qinv)[source]#

Wraps alpha and Qinv in PrecomputedValues.

Parameters:
  • alpha (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

    • alpha has shape [M_L_or_L_M_M…], i.e. either [M, L] or [L, M, M].

  • Qinv (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

    • Qinv has shape [M_M_or_L_M_M…], i.e. either [M, M] or [L, M, M].

Return type:

Tuple[PrecomputedValue, ...]
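
A hedged sketch of how a PrecomputedValue might be constructed and how wrap_alpha_Qinv pairs the cached tensors with PrecomputedValue wrappers; the shapes and dynamic-axis flags below are illustrative assumptions.

import tensorflow as tf
from gpflow.posteriors import PrecomputedValue

alpha = tf.zeros((7, 1))                    # illustrative [M, L] tensor
Qinv = tf.eye(7)[None, :, :]                # illustrative [L, M, M] tensor

# One bool per axis of `value`: True marks an axis whose size may change.
cached_alpha = PrecomputedValue(alpha, (True, False))

# Convenience wrapper producing a Tuple[PrecomputedValue, ...] for both tensors.
wrapped = PrecomputedValue.wrap_alpha_Qinv(alpha, Qinv)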

gpflow.posteriors.SGPRPosterior#

class gpflow.posteriors.SGPRPosterior(kernel, data, inducing_variable, likelihood, num_latent_gps, mean_function, *, precompute_cache)[source]#

Bases: AbstractPosterior

This class represents posteriors that can be derived from SGPR models, allowing faster predictions on unseen points.

Parameters:
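
A hedged sketch of obtaining such a posterior, assuming the SGPR model exposes a posterior() method; the data and kernel are illustrative.

import numpy as np
import gpflow

X = np.random.rand(20, 1)
Y = np.sin(10 * X) + 0.1 * np.random.randn(20, 1)
Z = np.linspace(0.0, 1.0, 5)[:, None]

model = gpflow.models.SGPR((X, Y), gpflow.kernels.SquaredExponential(), inducing_variable=Z)
posterior = model.posterior()               # an SGPRPosterior, under the assumption above
mean, var = posterior.predict_f(np.linspace(0.0, 1.0, 50)[:, None])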

gpflow.posteriors.VGPPosterior#

class gpflow.posteriors.VGPPosterior(kernel, X, q_mu, q_sqrt, mean_function=None, white=True, *, precompute_cache)[source]#

Bases: AbstractPosterior

Parameters:
  • kernel (Kernel) –

  • X (Tensor) –

  • q_mu (Tensor) –

  • q_sqrt (Tensor) –

  • mean_function (Optional[MeanFunction]) –

  • white (bool) –

  • precompute_cache (Optional[PrecomputeCacheType]) –

Functions#

gpflow.posteriors.create_posterior#

gpflow.posteriors.create_posterior(kernel, inducing_variable, q_mu, q_sqrt, whiten, mean_function=None, precompute_cache=PrecomputeCacheType.TENSOR)[source]#
Parameters:
Return type:

BasePosterior
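
A hedged sketch of create_posterior: it dispatches on the kernel and inducing-variable types (see get_posterior_class below) and returns an appropriate BasePosterior subclass; the variational parameters below are illustrative assumptions.

import numpy as np
import gpflow
from gpflow.posteriors import create_posterior, PrecomputeCacheType

M = 5
kernel = gpflow.kernels.SquaredExponential()
iv = gpflow.inducing_variables.InducingPoints(np.linspace(0.0, 1.0, M)[:, None])
q_mu = np.zeros((M, 1))                     # [M, L]
q_sqrt = np.eye(M)[None, :, :]              # [L, M, M] lower-triangular factor

posterior = create_posterior(
    kernel, iv, q_mu, q_sqrt, whiten=True,
    precompute_cache=PrecomputeCacheType.TENSOR,
)
print(type(posterior).__name__)             # e.g. IndependentPosteriorSingleOutput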

gpflow.posteriors.get_posterior_class#

This function uses multiple dispatch; the implementation invoked depends on the types of the arguments passed in:

gpflow.posteriors.get_posterior_class( Kernel, InducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_base_case(...)
gpflow.posteriors._get_posterior_base_case(kernel, inducing_variable)[source]#
Parameters:
Return type:

Type[BasePosterior]

gpflow.posteriors.get_posterior_class( MultioutputKernel, InducingPoints )
# dispatch to -> gpflow.posteriors._get_posterior_fully_correlated_mo(...)
gpflow.posteriors._get_posterior_fully_correlated_mo(kernel, inducing_variable)[source]#
Parameters:
Return type:

Type[BasePosterior]

gpflow.posteriors.get_posterior_class( SharedIndependent, SeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( SeparateIndependent, SeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( SharedIndependent, SharedIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( SeparateIndependent, SharedIndependentInducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_independent_mo(...)
gpflow.posteriors._get_posterior_independent_mo(kernel, inducing_variable)[source]#
Parameters:
Return type:

Type[BasePosterior]

gpflow.posteriors.get_posterior_class( IndependentLatent, FallbackSeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( IndependentLatent, FallbackSharedIndependentInducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_independentlatent_mo_fallback(...)
gpflow.posteriors._get_posterior_independentlatent_mo_fallback(kernel, inducing_variable)[source]#
Parameters:
Return type:

Type[BasePosterior]

gpflow.posteriors.get_posterior_class( LinearCoregionalization, SeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( LinearCoregionalization, SharedIndependentInducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_linearcoregionalization_mo_efficient(...)
gpflow.posteriors._get_posterior_linearcoregionalization_mo_efficient(kernel, inducing_variable)[source]#
Parameters:
Return type:

Type[BasePosterior]
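
A hedged sketch of the dispatch in action: calling get_posterior_class with concrete instances returns the posterior class that create_posterior would instantiate.

import numpy as np
import gpflow
from gpflow.posteriors import get_posterior_class

kernel = gpflow.kernels.SquaredExponential()
iv = gpflow.inducing_variables.InducingPoints(np.zeros((5, 1)))

posterior_class = get_posterior_class(kernel, iv)
print(posterior_class.__name__)             # e.g. IndependentPosteriorSingleOutput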

gpflow.posteriors.get_precomputed_value_shape#

gpflow.posteriors.get_precomputed_value_shape(shaped, context)[source]#
Parameters:
Return type:

Optional[Tuple[Optional[int], ...]]