gpflow.conditionals¶
gpflow.conditionals.base_conditional¶
- gpflow.conditionals.base_conditional(Kmn, Kmm, Knn, f, *, full_cov=False, q_sqrt=None, white=False)[source]¶
- Given g1 and g2, and distributions p and q such that
p(g2) = N(g2; 0, Kmm)
p(g1) = N(g1; 0, Knn)
p(g1 | g2) = N(g1; Knm (Kmm⁻¹) g2, Knn - Knm (Kmm⁻¹) Kmn)
- And
q(g2) = N(g2; f, q_sqrt q_sqrtᵀ)
- This method computes the mean and (co)variance of
q(g1) = ∫ q(g2) p(g1 | g2) dg2
- Parameters
  - Kmn (Tensor) – [M, …, N]
  - Kmm (Tensor) – [M, M]
  - Knn (Tensor) – […, N, N] or N
  - f (Tensor) – [M, R]
  - full_cov (bool) – bool
  - q_sqrt (Optional[Tensor]) – if this is a Tensor, it must have shape [R, M, M] (lower triangular) or [M, R] (diagonal)
  - white (bool) – bool
- Return type
  Tuple[Tensor, Tensor]
- Returns
  [N, R] or [R, N, N]
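A minimal sketch of how base_conditional could be called, building the required covariance matrices from a SquaredExponential kernel; the jitter value, toy shapes and variable names below are illustrative assumptions, not part of the GPflow documentation.
import numpy as np
import tensorflow as tf
import gpflow

# Toy sizes: M conditioning points Z, N prediction points, R = 1 latent function.
M, N, D, R = 5, 10, 1, 1
Z = np.random.randn(M, D)
Xnew = np.random.randn(N, D)
kernel = gpflow.kernels.SquaredExponential()

Kmm = kernel(Z) + 1e-6 * tf.eye(M, dtype=gpflow.default_float())  # [M, M], with jitter
Kmn = kernel(Z, Xnew)                                             # [M, N]
Knn = kernel(Xnew)                                                # [N, N]

f = np.zeros((M, R))       # mean of q(g2), shape [M, R]
q_sqrt = np.ones((M, R))   # diagonal square-root of the covariance of q(g2)

mean, var = gpflow.conditionals.base_conditional(
    Kmn, Kmm, Knn, f, full_cov=True, q_sqrt=q_sqrt, white=False
)
print(mean.shape, var.shape)  # (10, 1) and (1, 10, 10), i.e. [N, R] and [R, N, N]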
gpflow.conditionals.conditional¶
This function uses multiple dispatch; the implementation that runs depends on the types of the arguments passed in:
gpflow.conditionals.conditional( object, InducingVariables, Kernel, object )
# dispatch to -> gpflow.conditionals.conditionals._sparse_conditional(...)
- gpflow.conditionals.conditionals._sparse_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]¶
Single-output GP conditional.
The covariance matrices used to calculate the conditional have the following shapes:
- Kuu: [M, M]
- Kuf: [M, N]
- Kff: [N, N]
- Parameters
  - Xnew (Tensor) – data matrix, size [N, D]
  - inducing_variable (InducingVariables) –
  - kernel (Kernel) –
  - f (Tensor) – data matrix, [M, R]
  - full_cov (bool) – return the covariance between the datapoints
  - full_output_cov (bool) – return the covariance between the outputs. Note: as we are using a single-output kernel with repetitions, these covariances will be zero.
  - q_sqrt (Optional[Tensor]) – matrix of standard deviations or Cholesky matrices, size [M, R] or [R, M, M]
  - white (bool) – whether to use the whitened representation
- Return type
  Tuple[Tensor, Tensor]
- Returns
  - mean: [N, R]
  - variance: [N, R], [R, N, N], [N, R, R] or [N, R, N, R]
Please see gpflow.conditional._expand_independent_outputs for more information about the shape of the variance, depending on full_cov and full_output_cov.
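As a hedged illustration, the sparse dispatch could be exercised as follows; the inducing-point locations, variational parameters and shapes are toy assumptions.
import numpy as np
import gpflow

# Toy sizes: N prediction points, M inducing points, R = 2 latent functions.
N, M, D, R = 10, 5, 1, 2
Xnew = np.random.randn(N, D)
Z = np.random.randn(M, D)

kernel = gpflow.kernels.SquaredExponential()
inducing_variable = gpflow.inducing_variables.InducingPoints(Z)

q_mu = np.zeros((M, R))                   # variational mean, [M, R]
q_sqrt = np.ones((R, M, M)) * np.eye(M)   # lower-triangular factors, [R, M, M]

# Passing an InducingVariables object as the second argument dispatches to
# _sparse_conditional.
mean, var = gpflow.conditionals.conditional(
    Xnew, inducing_variable, kernel, q_mu, full_cov=False, q_sqrt=q_sqrt, white=True
)
print(mean.shape, var.shape)  # (10, 2) and (10, 2), i.e. [N, R] and [N, R]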
gpflow.conditionals.conditional( object, object, Kernel, object )
# dispatch to -> gpflow.conditionals.conditionals._dense_conditional(...)
- gpflow.conditionals.conditionals._dense_conditional(Xnew, X, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]¶
Given f, representing the GP at the points X, produce the mean and (co-)variance of the GP at the points Xnew.
Additionally, there may be Gaussian uncertainty about f as represented by q_sqrt. In this case f represents the mean of the distribution and q_sqrt the square-root of the covariance.
- Additionally, the GP may have been centered (whitened) so that
p(v) = 𝒩(𝟎, 𝐈)
f = 𝐋v
- thus
p(f) = 𝒩(𝟎, 𝐋𝐋ᵀ) = 𝒩(𝟎, 𝐊).
In this case f represents the values taken by v.
The method can either return the diagonals of the covariance matrix for each output (default) or the full covariance matrix (full_cov=True).
We assume R independent GPs, represented by the columns of f (and the first dimension of q_sqrt).
- Parameters
  - Xnew (Tensor) – data matrix, size [N, D]. Evaluate the GP at these new points
  - X (Tensor) – data points, size [M, D]
  - kernel (Kernel) – GPflow kernel
  - f (Tensor) – data matrix, [M, R], representing the function values at X, for R functions
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) – matrix of standard deviations or Cholesky matrices, size [M, R] or [R, M, M]
  - white (bool) – whether to use the whitened representation as described above
- Return type
  Tuple[Tensor, Tensor]
- Returns
  - mean: [N, R]
  - variance: [N, R] (full_cov = False), [R, N, N] (full_cov = True)
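A minimal sketch of the dense case, conditioning on known function values at X; the data and shapes are toy assumptions.
import numpy as np
import gpflow

# Toy sizes: M conditioning points, N prediction points, R = 1 function.
M, N, D, R = 6, 12, 2, 1
X = np.random.randn(M, D)
Xnew = np.random.randn(N, D)
f = np.random.randn(M, R)   # function values at X

kernel = gpflow.kernels.SquaredExponential()

# Passing a plain data matrix as the second argument dispatches to
# _dense_conditional.
mean, var = gpflow.conditionals.conditional(Xnew, X, kernel, f, full_cov=True)
print(mean.shape, var.shape)  # (12, 1) and (1, 12, 12), i.e. [N, R] and [R, N, N]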
gpflow.conditionals.conditional( object, SharedIndependentInducingVariables, SharedIndependent, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.shared_independent_conditional(...)
Multioutput conditional for an independent kernel and shared inducing variables. Same behaviour as conditional with non-multioutput kernels.
The covariance matrices used to calculate the conditional have the following shapes:
- Kuu: [M, M]
- Kuf: [M, N]
- Kff: N or [N, N]
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (SharedIndependentInducingVariables) –
  - kernel (SharedIndependent) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
- Return type
  Tuple[Tensor, Tensor]
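A hedged sketch of this shared-independent dispatch; the P outputs, kernel choice and variational parameters below are toy assumptions.
import numpy as np
import gpflow

# Toy multi-output setup: P = 3 outputs sharing one kernel and one set of
# M inducing points.
N, M, D, P = 10, 5, 2, 3
Xnew = np.random.randn(N, D)
Z = np.random.randn(M, D)

kernel = gpflow.kernels.SharedIndependent(
    gpflow.kernels.SquaredExponential(), output_dim=P
)
iv = gpflow.inducing_variables.SharedIndependentInducingVariables(
    gpflow.inducing_variables.InducingPoints(Z)
)

q_mu = np.zeros((M, P))
q_sqrt = np.ones((P, M, M)) * np.eye(M)

mean, var = gpflow.conditionals.conditional(
    Xnew, iv, kernel, q_mu, q_sqrt=q_sqrt, white=True
)
print(mean.shape, var.shape)  # (10, 3) and (10, 3), i.e. [N, P] and [N, P]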
gpflow.conditionals.conditional( object, SeparateIndependentInducingVariables, SharedIndependent, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.separate_independent_conditional(...)
- gpflow.conditionals.multioutput.conditionals.separate_independent_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]¶
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (MultioutputInducingVariables) –
  - kernel (MultioutputKernel) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
- Return type
  Tuple[Tensor, Tensor]
gpflow.conditionals.conditional( object, SharedIndependentInducingVariables, SeparateIndependent, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.separate_independent_conditional(...)
- gpflow.conditionals.multioutput.conditionals.separate_independent_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]¶
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (MultioutputInducingVariables) –
  - kernel (MultioutputKernel) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
- Return type
  Tuple[Tensor, Tensor]
gpflow.conditionals.conditional( object, SeparateIndependentInducingVariables, SeparateIndependent, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.separate_independent_conditional(...)
- gpflow.conditionals.multioutput.conditionals.separate_independent_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]¶
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (MultioutputInducingVariables) –
  - kernel (MultioutputKernel) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
- Return type
  Tuple[Tensor, Tensor]
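A hedged sketch of the separate-independent dispatch, where each output gets its own kernel and its own inducing points; the toy data and shapes are assumptions.
import numpy as np
import gpflow

# Toy setup: P = 2 outputs, each with its own kernel and its own inducing points.
N, M, D, P = 10, 5, 2, 2
Xnew = np.random.randn(N, D)

kernel = gpflow.kernels.SeparateIndependent(
    [gpflow.kernels.SquaredExponential() for _ in range(P)]
)
iv = gpflow.inducing_variables.SeparateIndependentInducingVariables(
    [gpflow.inducing_variables.InducingPoints(np.random.randn(M, D)) for _ in range(P)]
)

q_mu = np.zeros((M, P))
q_sqrt = np.ones((P, M, M)) * np.eye(M)

mean, var = gpflow.conditionals.conditional(
    Xnew, iv, kernel, q_mu, q_sqrt=q_sqrt, white=True
)
print(mean.shape, var.shape)  # (10, 2) and (10, 2)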
gpflow.conditionals.conditional( object, FallbackSharedIndependentInducingVariables, IndependentLatent, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.fallback_independent_latent_conditional(...)
- gpflow.conditionals.multioutput.conditionals.fallback_independent_latent_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]¶
Interdomain conditional with independent latents. In this case the number of latent GPs (L) will be different from the number of outputs (P).
The covariance matrices used to calculate the conditional have the following shapes:
- Kuu: [L, M, M]
- Kuf: [M, L, N, P]
- Kff: [N, P, N, P], [N, P, P], [N, P]
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (MultioutputInducingVariables) –
  - kernel (IndependentLatent) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
- Return type
  Tuple[Tensor, Tensor]
gpflow.conditionals.conditional( object, FallbackSeparateIndependentInducingVariables, IndependentLatent, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.fallback_independent_latent_conditional(...)
- gpflow.conditionals.multioutput.conditionals.fallback_independent_latent_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]¶
Interdomain conditional with independent latents. In this case the number of latent GPs (L) will be different from the number of outputs (P).
The covariance matrices used to calculate the conditional have the following shapes:
- Kuu: [L, M, M]
- Kuf: [M, L, N, P]
- Kff: [N, P, N, P], [N, P, P], [N, P]
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (MultioutputInducingVariables) –
  - kernel (IndependentLatent) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
- Return type
  Tuple[Tensor, Tensor]
gpflow.conditionals.conditional( object, InducingPoints, MultioutputKernel, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.inducing_point_conditional(...)
- gpflow.conditionals.multioutput.conditionals.inducing_point_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]¶
Multi-output GP with fully correlated inducing variables. The inducing variables are shaped in the same way as evaluations of K, to allow a default inducing point scheme for multi-output kernels.
The covariance matrices used to calculate the conditional have the following shapes:
- Kuu: [M, L, M, L]
- Kuf: [M, L, N, P]
- Kff: [N, P, N, P], [N, P, P], [N, P]
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (InducingPoints) –
  - kernel (MultioutputKernel) –
  - f (Tensor) – variational mean, [L, 1]
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) – standard deviations or Cholesky, [L, 1] or [1, L, L]
  - white (bool) –
- Return type
  Tuple[Tensor, Tensor]
gpflow.conditionals.conditional( object, SharedIndependentInducingVariables, LinearCoregionalization, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.coregionalization_conditional(...)
- gpflow.conditionals.multioutput.conditionals.coregionalization_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]¶
Most efficient routine to project L independent latent GPs through a mixing matrix W. The mixing matrix is a member of the LinearCoregionalization kernel and has shape [P, L].
The covariance matrices used to calculate the conditional have the following shapes:
- Kuu: [L, M, M]
- Kuf: [L, M, N]
- Kff: [L, N] or [L, N, N]
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (MultioutputInducingVariables) –
  - kernel (LinearCoregionalization) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
- Return type
  Tuple[Tensor, Tensor]
gpflow.conditionals.conditional( object, SeparateIndependentInducingVariables, LinearCoregionalization, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.coregionalization_conditional(...)
- gpflow.conditionals.multioutput.conditionals.coregionalization_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]¶
Most efficient routine to project L independent latent GPs through a mixing matrix W. The mixing matrix is a member of the LinearCoregionalization kernel and has shape [P, L].
The covariance matrices used to calculate the conditional have the following shapes:
- Kuu: [L, M, M]
- Kuf: [L, M, N]
- Kff: [L, N] or [L, N, N]
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (MultioutputInducingVariables) –
  - kernel (LinearCoregionalization) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
- Return type
  Tuple[Tensor, Tensor]
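A hedged sketch of the coregionalization dispatch, mixing L latent GPs into P outputs; the mixing matrix W and the toy shapes are illustrative assumptions.
import numpy as np
import gpflow

# Toy setup: L = 2 latent GPs mixed into P = 3 outputs by a [P, L] matrix W.
N, M, D, L, P = 10, 5, 2, 2, 3
Xnew = np.random.randn(N, D)
Z = np.random.randn(M, D)
W = np.random.randn(P, L)

kernel = gpflow.kernels.LinearCoregionalization(
    [gpflow.kernels.SquaredExponential() for _ in range(L)], W=W
)
iv = gpflow.inducing_variables.SharedIndependentInducingVariables(
    gpflow.inducing_variables.InducingPoints(Z)
)

q_mu = np.zeros((M, L))                   # one column per latent GP
q_sqrt = np.ones((L, M, M)) * np.eye(M)   # [L, M, M]

mean, var = gpflow.conditionals.conditional(
    Xnew, iv, kernel, q_mu, q_sqrt=q_sqrt, white=True
)
print(mean.shape, var.shape)  # (10, 3) and (10, 3): the L latents are mixed to P outputs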
gpflow.conditionals.sample_conditional¶
This function uses multiple dispatch; the implementation that runs depends on the types of the arguments passed in:
gpflow.conditionals.sample_conditional( object, SharedIndependentInducingVariables, LinearCoregionalization, object )
# dispatch to -> gpflow.conditionals.multioutput.sample_conditionals._sample_conditional(...)
- gpflow.conditionals.multioutput.sample_conditionals._sample_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False, num_samples=None)[source]¶
sample_conditional will return a sample from the conditional distribution. In most cases this means calculating the conditional mean m and variance v and then returning m + sqrt(v) * eps, with eps ~ N(0, 1). However, for some combinations of Mok and Mof more efficient sampling routines exist. The dispatcher will make sure that we use the most efficient one.
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (SharedIndependentInducingVariables) –
  - kernel (LinearCoregionalization) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
  - num_samples (Optional[int]) –
- Return type
  Tuple[Tensor, Tensor, Tensor]
- Returns
  [N, P] (full_output_cov = False) or [N, P, P] (full_output_cov = True)
gpflow.conditionals.sample_conditional( object, InducingVariables, Kernel, object )
# dispatch to -> gpflow.conditionals.sample_conditionals._sample_conditional(...)
- gpflow.conditionals.sample_conditionals._sample_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False, num_samples=None)[source]¶
sample_conditional will return a sample from the conditional distribution. In most cases this means calculating the conditional mean m and variance v and then returning m + sqrt(v) * eps, with eps ~ N(0, 1). However, for some combinations of Mok and Mof more efficient sampling routines exist. The dispatcher will make sure that we use the most efficient one.
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (InducingVariables) –
  - kernel (Kernel) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
  - num_samples (Optional[int]) –
- Return type
  Tuple[Tensor, Tensor, Tensor]
- Returns
  samples, mean, cov. samples has shape [num_samples, N, P], or [N, P] if num_samples is None; mean and cov as for conditional()
gpflow.conditionals.sample_conditional( object, object, Kernel, object )
# dispatch to -> gpflow.conditionals.sample_conditionals._sample_conditional(...)
- gpflow.conditionals.sample_conditionals._sample_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False, num_samples=None)[source]¶
sample_conditional will return a sample from the conditional distribution. In most cases this means calculating the conditional mean m and variance v and then returning m + sqrt(v) * eps, with eps ~ N(0, 1). However, for some combinations of Mok and Mof more efficient sampling routines exist. The dispatcher will make sure that we use the most efficient one.
- Parameters
  - Xnew (Tensor) –
  - inducing_variable (InducingVariables) –
  - kernel (Kernel) –
  - f (Tensor) –
  - full_cov (bool) –
  - full_output_cov (bool) –
  - q_sqrt (Optional[Tensor]) –
  - white (bool) –
  - num_samples (Optional[int]) –
- Return type
  Tuple[Tensor, Tensor, Tensor]
- Returns
  samples, mean, cov. samples has shape [num_samples, N, P], or [N, P] if num_samples is None; mean and cov as for conditional()
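A hedged single-output sketch of sample_conditional; the toy data, variational parameters and number of samples are assumptions.
import numpy as np
import gpflow

# Toy single-output setup; sample_conditional returns (samples, mean, cov).
N, M, D, R = 10, 5, 1, 1
Xnew = np.random.randn(N, D)
Z = np.random.randn(M, D)

kernel = gpflow.kernels.SquaredExponential()
iv = gpflow.inducing_variables.InducingPoints(Z)

q_mu = np.zeros((M, R))
q_sqrt = np.ones((R, M, M)) * np.eye(M)

samples, mean, cov = gpflow.conditionals.sample_conditional(
    Xnew, iv, kernel, q_mu, q_sqrt=q_sqrt, white=True, num_samples=3
)
print(samples.shape, mean.shape, cov.shape)  # (3, 10, 1), (10, 1), (10, 1)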
gpflow.conditionals.uncertain_conditional¶
- gpflow.conditionals.uncertain_conditional(Xnew_mu, Xnew_var, inducing_variable, kernel, q_mu, q_sqrt, *, mean_function=None, full_output_cov=False, full_cov=False, white=False)[source]¶
Calculates the conditional for uncertain inputs Xnew, p(Xnew) = N(Xnew_mu, Xnew_var). See the conditional documentation for further reference.
- Parameters
  - Xnew_mu (Tensor) – mean of the inputs, size [N, D_in]
  - Xnew_var (Tensor) – covariance matrix of the inputs, size [N, n, n]
  - inducing_variable (InducingVariables) – gpflow.InducingVariable object; only InducingPoints is supported
  - kernel (Kernel) – GPflow kernel object
  - q_mu (Tensor) – mean inducing points, size [M, Dout]
  - q_sqrt (Tensor) – Cholesky of the covariance matrix of the inducing points, size [t, M, M]
  - mean_function (Optional[MeanFunction]) –
  - full_output_cov (bool) – whether to compute the covariance between output dimensions. Influences the shape of the return value fvar. Default is False
  - full_cov (bool) –
  - white (bool) – whether to use the whitened representation. Default is False
- Return type
  Tuple[Tensor, Tensor]
- Returns
  fmean, fvar: mean and covariance of the conditional. fmean is [N, Dout]; the shape of fvar depends on full_output_cov: if True, fvar is [N, t, t]; if False, fvar is [N, Dout]
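A hedged sketch of uncertain_conditional with Gaussian input uncertainty; the toy shapes, the shared 0.1·I input covariance and the choice of a SquaredExponential kernel (whose kernel expectations are assumed to be available) are illustrative assumptions.
import numpy as np
import gpflow

# Toy setup with Gaussian input uncertainty p(Xnew) = N(Xnew_mu, Xnew_var).
N, M, D_in, D_out = 10, 5, 2, 1
Xnew_mu = np.random.randn(N, D_in)
Xnew_var = np.tile(0.1 * np.eye(D_in), (N, 1, 1))  # [N, D_in, D_in]

kernel = gpflow.kernels.SquaredExponential()
iv = gpflow.inducing_variables.InducingPoints(np.random.randn(M, D_in))

q_mu = np.zeros((M, D_out))
q_sqrt = np.ones((D_out, M, M)) * np.eye(M)

fmean, fvar = gpflow.conditionals.uncertain_conditional(
    Xnew_mu, Xnew_var, iv, kernel, q_mu, q_sqrt
)
print(fmean.shape, fvar.shape)  # (10, 1) and (10, 1), i.e. [N, Dout] twice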