gpflow.posteriors#

Classes#

gpflow.posteriors.AbstractPosterior#

class gpflow.posteriors.AbstractPosterior(kernel, X_data, cache=None, mean_function=None)[source]#

Bases: gpflow.base.Module, abc.ABC

fused_predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#

Computes the predictive mean and (co)variance at Xnew, including the mean_function. Does not make use of caching.

Parameters
  • Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

  • full_cov (bool) –

  • full_output_cov (bool) –

Return type

Tuple[Tensor, Tensor]

predict_f(Xnew, full_cov=False, full_output_cov=False)[source]#

Computes the predictive mean and (co)variance at Xnew, including the mean_function. Relies on the precomputed alpha and Qinv (see the _precompute method).

Parameters
  • Xnew (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

  • full_cov (bool) –

  • full_output_cov (bool) –

Return type

Tuple[Tensor, Tensor]

update_cache(precompute_cache=None)[source]#

Sets the cache to a tf.Tensor or tf.Variable, or clears it, depending on the value of precompute_cache. If precompute_cache is not given, the setting defaults to the most recently used one.

Parameters

precompute_cache (Optional[PrecomputeCacheType]) –

Return type

None
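
A minimal usage sketch of these methods, assuming a gpflow.models.SVGP model (whose posterior() method returns one of the posterior classes documented below) and synthetic data:

import numpy as np
import gpflow

X = np.random.randn(100, 1)
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=X[:10].copy(),
)

posterior = model.posterior()                  # precomputes alpha and Qinv once
Xnew = np.linspace(-2.0, 2.0, 50)[:, None]
mean, var = posterior.predict_f(Xnew)          # fast: reuses the cache
mean2, var2 = posterior.fused_predict_f(Xnew)  # recomputes everything per call
posterior.update_cache()                       # refresh, e.g. after training steps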

gpflow.posteriors.BasePosterior#

class gpflow.posteriors.BasePosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: gpflow.posteriors.AbstractPosterior


gpflow.posteriors.FallbackIndependentLatentPosterior#

class gpflow.posteriors.FallbackIndependentLatentPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: gpflow.posteriors.FullyCorrelatedPosterior


gpflow.posteriors.FullyCorrelatedPosterior#

class gpflow.posteriors.FullyCorrelatedPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: gpflow.posteriors.BasePosterior


gpflow.posteriors.GPRPosterior#

class gpflow.posteriors.GPRPosterior(kernel, data, likelihood_variance, mean_function, *, precompute_cache)[source]#

Bases: gpflow.posteriors.AbstractPosterior


gpflow.posteriors.IndependentPosterior#

class gpflow.posteriors.IndependentPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: gpflow.posteriors.BasePosterior


gpflow.posteriors.IndependentPosteriorMultiOutput#

class gpflow.posteriors.IndependentPosteriorMultiOutput(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: gpflow.posteriors.IndependentPosterior


gpflow.posteriors.IndependentPosteriorSingleOutput#

class gpflow.posteriors.IndependentPosteriorSingleOutput(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: gpflow.posteriors.IndependentPosterior


gpflow.posteriors.LinearCoregionalizationPosterior#

class gpflow.posteriors.LinearCoregionalizationPosterior(kernel, inducing_variable, q_mu, q_sqrt, whiten=True, mean_function=None, *, precompute_cache)[source]#

Bases: gpflow.posteriors.IndependentPosteriorMultiOutput


gpflow.posteriors.PrecomputeCacheType#

class gpflow.posteriors.PrecomputeCacheType(value)[source]#

Bases: enum.Enum

  • PrecomputeCacheType.TENSOR (or “tensor”): Precomputes the cached quantities and stores them as tensors (which allows differentiating through the prediction). This is the default.

  • PrecomputeCacheType.VARIABLE (or “variable”): Precomputes the cached quantities and stores them as variables, which allows for updating their values without changing the compute graph (relevant for AOT compilation).

  • PrecomputeCacheType.NOCACHE (or “nocache” or None): Avoids immediate cache computation. This is useful for avoiding extraneous computations when you only want to call the posterior’s fused_predict_f method.
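
A short sketch of selecting a cache type, reusing the model and Xnew from the SVGP sketch above (VARIABLE is the natural choice when predict_f is wrapped in a tf.function):

from gpflow.posteriors import PrecomputeCacheType

posterior = model.posterior(PrecomputeCacheType.VARIABLE)
# ...train some more, then refresh the cached values in place, without
# retracing any tf.function that wraps posterior.predict_f:
posterior.update_cache()

# NOCACHE skips precomputation; use fused_predict_f only:
posterior = model.posterior(PrecomputeCacheType.NOCACHE)
mean, var = posterior.fused_predict_f(Xnew)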

gpflow.posteriors.PrecomputedValue#

class gpflow.posteriors.PrecomputedValue(value, axis_dynamic)[source]#

Bases: object

Parameters
  • value (Tensor) –

  • axis_dynamic (Tuple[bool, ...]) –

axis_dynamic: Tuple[bool, ...]#

A tuple with one element per dimension of value. That element is True if that dimension of value might change size.

value: tensorflow.python.framework.ops.Tensor#

The precomputed value itself.

static wrap_alpha_Qinv(alpha, Qinv)[source]#

Wraps alpha and Qinv in PrecomputedValues.

Parameters
  • alpha (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

  • Qinv (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

Return type

Tuple[PrecomputedValue, ...]
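
A tiny sketch of constructing a PrecomputedValue directly, for a [3, 3] cache whose dimensions are fixed (so both axis_dynamic entries are False):

import tensorflow as tf
from gpflow.posteriors import PrecomputedValue

Qinv = tf.eye(3, dtype=tf.float64)
pv = PrecomputedValue(Qinv, (False, False))  # neither axis can change size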

gpflow.posteriors.SGPRPosterior#

class gpflow.posteriors.SGPRPosterior(kernel, data, inducing_variable, likelihood_variance, num_latent_gps, mean_function, *, precompute_cache)[source]#

Bases: gpflow.posteriors.AbstractPosterior

This class represents posteriors which can be derived from SGPR models to compute faster predictions on unseen points.


gpflow.posteriors.VGPPosterior#

class gpflow.posteriors.VGPPosterior(kernel, X, q_mu, q_sqrt, mean_function=None, white=True, *, precompute_cache)[source]#

Bases: gpflow.posteriors.AbstractPosterior

Parameters
  • kernel (Kernel) –

  • X (Tensor) –

  • q_mu (Tensor) –

  • q_sqrt (Tensor) –

  • mean_function (Optional[MeanFunction]) –

  • white (bool) –

  • precompute_cache (Optional[PrecomputeCacheType]) –

Functions#

gpflow.posteriors.add_noise_cov#

gpflow.posteriors.add_noise_cov(K, likelihood_variance)[source]#

Returns K + σ² I, where σ² is the likelihood noise variance (scalar), and I is the corresponding identity matrix.

Parameters
  • K (Tensor) –

  • likelihood_variance (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

Return type

Tensor
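
For example, adding noise variance 0.1 to a 3x3 identity matrix:

import numpy as np
import tensorflow as tf
from gpflow.posteriors import add_noise_cov

K = tf.constant(np.eye(3))
K_noisy = add_noise_cov(K, tf.constant(0.1, dtype=K.dtype))  # K + 0.1 * I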

gpflow.posteriors.base_conditional_with_lm#

gpflow.posteriors.base_conditional_with_lm(Kmn, Lm, Knn, f, *, full_cov=False, q_sqrt=None, white=False)[source]#

Has the same functionality as the base_conditional function, except that instead of Kmm this function accepts Lm, which is the Cholesky decomposition of Kmm.

This allows Lm to be precomputed, which can improve performance.

Parameters
  • Kmn (Tensor) –

  • Lm (Tensor) –

  • Knn (Tensor) –

  • f (Tensor) –

  • full_cov (bool) –

  • q_sqrt (Optional[Tensor]) –

  • white (bool) –

Return type

Tuple[Tensor, Tensor]
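
A minimal sketch with a kernel used to build consistent covariances; Lm is computed once and can be reused across calls with different Xnew:

import numpy as np
import tensorflow as tf
import gpflow
from gpflow.posteriors import base_conditional_with_lm

rng = np.random.default_rng(0)
Z = rng.standard_normal((5, 1))     # M = 5 inducing inputs
Xnew = rng.standard_normal((7, 1))  # N = 7 test inputs
kernel = gpflow.kernels.SquaredExponential()

Kmm = kernel(Z) + 1e-6 * tf.eye(5, dtype=tf.float64)  # [M, M]
Lm = tf.linalg.cholesky(Kmm)                          # precompute once
Kmn = kernel(Z, Xnew)                                 # [M, N]
Knn = kernel(Xnew, full_cov=False)                    # [N]
f = tf.constant(rng.standard_normal((5, 1)))          # [M, 1]

mean, var = base_conditional_with_lm(Kmn, Lm, Knn, f, full_cov=False)
# mean: [N, 1], var: [N, 1]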

gpflow.posteriors.create_posterior#

gpflow.posteriors.create_posterior(kernel, inducing_variable, q_mu, q_sqrt, whiten, mean_function=None, precompute_cache=PrecomputeCacheType.TENSOR)[source]#
Return type

BasePosterior
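
A hedged sketch of direct construction with synthetic variational parameters (in practice q_mu and q_sqrt come from a trained model):

import numpy as np
import gpflow
from gpflow.posteriors import create_posterior

M = 10
kernel = gpflow.kernels.SquaredExponential()
iv = gpflow.inducing_variables.InducingPoints(np.random.randn(M, 1))
q_mu = np.zeros((M, 1))   # [M, L] with L = 1
q_sqrt = np.eye(M)[None]  # [L, M, M]

posterior = create_posterior(kernel, iv, q_mu, q_sqrt, whiten=True)

The concrete class is chosen by get_posterior_class (see below); for this kernel/inducing-variable pair it is a single-output posterior.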

gpflow.posteriors.expand_independent_outputs#

gpflow.posteriors.expand_independent_outputs(fvar, full_cov, full_output_cov)[source]#

Reshapes fvar to the correct shape, specified by full_cov and full_output_cov.

Parameters
  • fvar (Tensor) – has shape [N, P] (full_cov = False) or [P, N, N] (full_cov = True).

  • full_cov (bool) –

  • full_output_cov (bool) –

Return type

Tensor

Returns

  1. full_cov=True and full_output_cov=True: fvar [N, P, N, P]

  2. full_cov=True and full_output_cov=False: fvar [P, N, N]

  3. full_cov=False and full_output_cov=True: fvar [N, P, P]

  4. full_cov=False and full_output_cov=False: fvar [N, P]
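
For example:

import tensorflow as tf
from gpflow.posteriors import expand_independent_outputs

fvar = tf.ones((5, 3))  # [N, P] marginal variances (full_cov=False)
print(expand_independent_outputs(fvar, full_cov=False, full_output_cov=True).shape)
# (5, 3, 3)
print(expand_independent_outputs(fvar, full_cov=False, full_output_cov=False).shape)
# (5, 3)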

gpflow.posteriors.eye#

gpflow.posteriors.eye(num, value, dtype=None)[source]#
Parameters
  • num (int) –

  • value (Tensor) –

  • dtype (Optional[DType]) –

Return type

Tensor

gpflow.posteriors.fully_correlated_conditional#

gpflow.posteriors.fully_correlated_conditional(Kmn, Kmm, Knn, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]#

This function handles conditioning of multi-output GPs in the case where the conditioning points are all fully correlated, in both the prior and posterior.

Parameters
  • Kmn (Tensor) – [M, N, P]

  • Kmm (Tensor) – [M, M]

  • Knn (Tensor) – [N, P] or [N, P, N, P]

  • f (Tensor) – data matrix, [M, 1]

  • q_sqrt (Optional[Tensor]) – [1, M, M] or [1, L]

  • full_cov (bool) – calculate covariance between inputs

  • full_output_cov (bool) – calculate covariance between outputs

  • white (bool) – use whitened representation

Return type

Tuple[Tensor, Tensor]

Returns

  • mean: [N, P]

  • variance: [N, P], [N, P, P], [P, N, N], or [N, P, N, P], depending on full_cov and full_output_cov
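
A shape-only sketch with synthetic tensors (random values purely to illustrate the expected shapes; a real call would pass kernel evaluations):

import numpy as np
import tensorflow as tf
from gpflow.posteriors import fully_correlated_conditional

M, N, P = 6, 4, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((M, M))
Kmm = tf.constant(A @ A.T + M * np.eye(M))         # [M, M], positive definite
Kmn = tf.constant(rng.standard_normal((M, N, P)))  # [M, N, P]
Knn = tf.constant(np.ones((N, P)))                 # [N, P]
f = tf.constant(rng.standard_normal((M, 1)))       # [M, 1]

mean, var = fully_correlated_conditional(Kmn, Kmm, Knn, f)
# mean: [N, P], var: [N, P] (both full_cov and full_output_cov False)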

gpflow.posteriors.get_posterior_class#

This function uses multiple dispatch; the implementation selected depends on the types of the arguments passed in:

gpflow.posteriors.get_posterior_class( Kernel, InducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_base_case(...)
gpflow.posteriors._get_posterior_base_case(kernel, inducing_variable)[source]#
Return type

Type[BasePosterior]

gpflow.posteriors.get_posterior_class( MultioutputKernel, InducingPoints )
# dispatch to -> gpflow.posteriors._get_posterior_fully_correlated_mo(...)
gpflow.posteriors._get_posterior_fully_correlated_mo(kernel, inducing_variable)[source]#
Return type

Type[BasePosterior]

gpflow.posteriors.get_posterior_class( SharedIndependent, SeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( SeparateIndependent, SeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( SharedIndependent, SharedIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( SeparateIndependent, SharedIndependentInducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_independent_mo(...)
gpflow.posteriors._get_posterior_independent_mo(kernel, inducing_variable)[source]#
Return type

Type[BasePosterior]

gpflow.posteriors.get_posterior_class( IndependentLatent, FallbackSeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( IndependentLatent, FallbackSharedIndependentInducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_independentlatent_mo_fallback(...)
gpflow.posteriors._get_posterior_independentlatent_mo_fallback(kernel, inducing_variable)[source]#
Return type

Type[BasePosterior]

gpflow.posteriors.get_posterior_class( LinearCoregionalization, SeparateIndependentInducingVariables )
gpflow.posteriors.get_posterior_class( LinearCoregionalization, SharedIndependentInducingVariables )
# dispatch to -> gpflow.posteriors._get_posterior_linearcoregionalization_mo_efficient(...)
gpflow.posteriors._get_posterior_linearcoregionalization_mo_efficient(kernel, inducing_variable)[source]#
Return type

Type[BasePosterior]
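
For example, the single-output base case:

import numpy as np
import gpflow
from gpflow.posteriors import get_posterior_class

kernel = gpflow.kernels.SquaredExponential()
iv = gpflow.inducing_variables.InducingPoints(np.zeros((5, 1)))
cls = get_posterior_class(kernel, iv)
# cls is a class, not an instance; the base case resolves to
# IndependentPosteriorSingleOutput.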

gpflow.posteriors.independent_interdomain_conditional#

gpflow.posteriors.independent_interdomain_conditional(Kmn, Kmm, Knn, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]#

Interdomain conditional calculation. The inducing outputs live in the g-space (R^L).

Parameters
  • Kmn (Tensor) – [M, L, N, P]

  • Kmm (Tensor) – [L, M, M]

  • Knn (Tensor) – [N, P] or [N, P, P] or [P, N, N] or [N, P, N, P]

  • f (Tensor) – data matrix, [M, L]

  • q_sqrt (Optional[Tensor]) – [L, M, M] or [M, L]

  • full_cov (bool) – calculate covariance between inputs

  • full_output_cov (bool) – calculate covariance between outputs

  • white (bool) – use whitened representation

Return type

Tuple[Tensor, Tensor]

Returns

  • mean: [N, P]

  • variance: [N, P], [N, P, P], [P, N, N], or [N, P, N, P], depending on full_cov and full_output_cov
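
A shape-only sketch analogous to the fully correlated one above (random tensors, L = 2 latent GPs):

import numpy as np
import tensorflow as tf
from gpflow.posteriors import independent_interdomain_conditional

M, L, N, P = 5, 2, 4, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((L, M, M))
Kmm = tf.constant(A @ A.transpose(0, 2, 1) + M * np.eye(M))  # [L, M, M]
Kmn = tf.constant(rng.standard_normal((M, L, N, P)))         # [M, L, N, P]
Knn = tf.constant(np.ones((N, P)))                           # [N, P]
f = tf.constant(rng.standard_normal((M, L)))                 # [M, L]

mean, var = independent_interdomain_conditional(Kmn, Kmm, Knn, f)
# mean: [N, P], var: [N, P]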

gpflow.posteriors.leading_transpose#

gpflow.posteriors.leading_transpose(tensor, perm, leading_dim=0)[source]#

Transposes tensors with leading dimensions.

Leading dimensions in the permutation list are represented by an ellipsis (...), so the list has type List[Union[int, type(...)]] (due to mypy issues, List[Any] is used instead). When leading dimensions are present, the transpose treats them as a single grouped element with index 0 in the perm list. So, passing perm=[-2, ..., -1] assumes that the input tensor has shape [..., A, B] and that you want to move the leading dims between the A and B dimensions. Dimension indices in the permutation list can be negative or positive; valid positive indices start from 1 up to the tensor rank, with the leading dimensions viewed as index zero.

Example:

a = tf.random.normal((1, 2, 3, 4, 5, 6))
# [..., A, B, C],
# where A is the 1st element,
# B is the 2nd element and
# C is the 3rd element in
# the permutation list;
# the leading dimensions are [1, 2, 3],
# which form the 0th element of the permutation list
b = leading_transpose(a, [3, -3, ..., -2])  # [C, A, ..., B]
print(b.shape)

output> (6, 4, 1, 2, 3, 5)
Parameters
  • tensor (Tensor) – TensorFlow tensor.

  • perm (List[Any]) – List of permutation indices.

  • leading_dim (int) –

Return type

Tensor

Returns

TensorFlow tensor.

Raises

ValueError – when the ellipsis (...) cannot be found in perm.

gpflow.posteriors.mix_latent_gp#

gpflow.posteriors.mix_latent_gp(W, g_mean, g_var, full_cov, full_output_cov)[source]#

Takes the mean and variance of an uncorrelated L-dimensional latent GP and returns the mean and variance of the mixed GP, f = W g, where both f and g are GPs, and W has shape [P, L].

Parameters
  • W (Tensor) – [P, L]

  • g_mean (Tensor) – […, N, L]

  • g_var (Tensor) – […, N, L] (full_cov = False) or [L, …, N, N] (full_cov = True)

  • full_cov (bool) –

  • full_output_cov (bool) –

Return type

Tuple[Tensor, Tensor]

Returns

f_mean and f_var, shape depends on full_cov and full_output_cov
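
For example, with full_cov=False and full_output_cov=False, the mixing reduces to f_mean = g_mean Wᵀ and, for the marginal variances, f_var = g_var (W ∘ W)ᵀ:

import tensorflow as tf
from gpflow.posteriors import mix_latent_gp

P, L, N = 2, 3, 4
W = tf.random.normal((P, L), dtype=tf.float64)
g_mean = tf.random.normal((N, L), dtype=tf.float64)
g_var = tf.ones((N, L), dtype=tf.float64)

f_mean, f_var = mix_latent_gp(W, g_mean, g_var, full_cov=False, full_output_cov=False)
# f_mean: [N, P], f_var: [N, P]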

gpflow.posteriors.separate_independent_conditional_implementation#

gpflow.posteriors.separate_independent_conditional_implementation(Kmns, Kmms, Knns, f, *, full_cov=False, q_sqrt=None, white=False)[source]#

Multi-output GP with independent GP priors.

Number of latent processes equals the number of outputs (L = P). The covariance matrices used to calculate the conditional have the following shape:

  • Kuu: [P, M, M]

  • Kuf: [P, M, N]

  • Kff: [P, N] or [P, N, N]

Further reference:

  • See gpflow.conditionals._conditional for a detailed explanation of conditional in the single-output case.

  • See the multioutput notebook for more information about the multioutput framework.

  • See the Parameters and Return type sections below.

Parameters
  • Kmns (Tensor) –

  • Kmms (Tensor) –

  • Knns (Tensor) –

  • f (Tensor) –

  • full_cov (bool) –

  • q_sqrt (Optional[Tensor]) –

  • white (bool) –

Return type

Tuple[Tensor, Tensor]
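
A shape-only sketch with random tensors (shapes assumed from the docstring above, with P = 2 outputs):

import numpy as np
import tensorflow as tf
from gpflow.posteriors import separate_independent_conditional_implementation

P, M, N = 2, 5, 6
rng = np.random.default_rng(0)
A = rng.standard_normal((P, M, M))
Kmms = tf.constant(A @ A.transpose(0, 2, 1) + M * np.eye(M))  # [P, M, M]
Kmns = tf.constant(rng.standard_normal((P, M, N)))            # [P, M, N]
Knns = tf.constant(np.ones((P, N)))                           # [P, N]
f = tf.constant(rng.standard_normal((M, P)))                  # [M, P]

mean, var = separate_independent_conditional_implementation(Kmns, Kmms, Knns, f)
# mean: [N, P], var: [N, P] (full_cov=False)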