gpflow.conditionals#
Functions#
gpflow.conditionals.base_conditional#
- gpflow.conditionals.base_conditional(Kmn, Kmm, Knn, f, *, full_cov=False, q_sqrt=None, white=False)[source]#
Given g1 and g2, and distributions p and q such that:
p(g2) = N(g2; 0, Kmm)
p(g1) = N(g1; 0, Knn)
p(g1 | g2) = N(g1; Knm (Kmm⁻¹) g2, Knn - Knm (Kmm⁻¹) Kmn)
and:
q(g2) = N(g2; f, q_sqrt q_sqrtᵀ)
this method computes the mean and (co)variance of:
q(g1) = ∫ q(g2) p(g1 | g2) dg2
- Parameters:
  - Kmn (Tensor) – Kmn has shape [M, batch…, N].
  - Kmm (Tensor) – Kmm has shape [M, M].
  - Knn (Tensor) – Knn has shape [batch…, N, N] if full_cov, else [batch…, N].
  - f (Tensor) – f has shape [M, R].
  - full_cov (bool) – whether to return the covariance between the datapoints.
  - q_sqrt (Optional[Tensor]) – if this is a Tensor, it must have shape [R, M, M] (lower triangular) or [M, R] (diagonal).
  - white (bool) – whether to use the whitened representation.
- Return type: Tuple[Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, R].
return[1] has shape [batch…, N, R] if not full_cov.
return[1] has shape [batch…, R, N, N] if full_cov.
mean, variance
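The integral above has a closed form. As a rough illustration, the math can be sketched in a few lines of NumPy; the name base_conditional_np and the single-output, non-batched shapes are our simplifications, and the real implementation broadcasts over the batch and R dimensions:

```python
import numpy as np

def base_conditional_np(Kmn, Kmm, Knn, f, q_sqrt=None, white=False):
    """Toy single-output sketch of the q(g1) math documented above.

    Kmn: [M, N], Kmm: [M, M], Knn: [N, N], f: [M, 1],
    q_sqrt: optional lower-triangular [M, M].
    """
    L = np.linalg.cholesky(Kmm)          # Kmm = L L^T
    A = np.linalg.solve(L, Kmn)          # A = L^-1 Kmn, so A^T A = Knm Kmm^-1 Kmn
    fvar = Knn - A.T @ A                 # Knn - Knm Kmm^-1 Kmn
    if not white:
        A = np.linalg.solve(L.T, A)      # A = Kmm^-1 Kmn
    fmean = A.T @ f                      # Knm Kmm^-1 f (or Knm L^-T f if white)
    if q_sqrt is not None:
        LTA = q_sqrt.T @ A               # propagate the q(g2) covariance
        fvar = fvar + LTA.T @ LTA
    return fmean, fvar
```

A quick sanity check: predicting at the inducing inputs themselves (Kmn = Knn = Kmm, q_sqrt=None) recovers fmean ≈ f and fvar ≈ 0.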
gpflow.conditionals.conditional#
This function uses multiple dispatch; the implementation selected depends on the types of the arguments passed in:
gpflow.conditionals.conditional( object, InducingVariables, Kernel, object )
# dispatch to -> gpflow.conditionals.conditionals._sparse_conditional(...)
- gpflow.conditionals.conditionals._sparse_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]#
Single-output GP conditional.
The covariance matrices used to calculate the conditional have the following shape:
Kuu: [M, M]
Kuf: [M, N]
Kff: [N, N]
Further reference:
See gpflow.conditionals._dense_conditional (below) for a detailed explanation of conditional in the single-output case.
See the multioutput notebook for more information about the multioutput framework.
- Parameters:
  - Xnew (Tensor) – Xnew has shape [batch…, N, D]. Data matrix.
  - f (Tensor) – f has shape [M, R]. Data matrix.
  - full_cov (bool) – whether to return the covariance between the datapoints.
  - full_output_cov (bool) – whether to return the covariance between the outputs. NOTE: as we are using a single-output kernel with repetitions, these covariances will be zero.
  - q_sqrt (Optional[Tensor]) – matrix of standard deviations or Cholesky matrices, size [M, R] or [R, M, M].
  - white (bool) – whether to use the whitened representation.
  - inducing_variable (InducingVariables) – inducing_variable has shape [M, D, broadcast R].
  - kernel (Kernel)
- Return type: Tuple[Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, R].
return[1] has shape [batch…, N, R, N, R] if full_cov and full_output_cov.
return[1] has shape [batch…, N, R, R] if (not full_cov) and full_output_cov.
return[1] has shape [batch…, N, R] if (not full_cov) and (not full_output_cov).
return[1] has shape [batch…, R, N, N] if full_cov and (not full_output_cov).
mean and variance
gpflow.conditionals.conditional( object, object, Kernel, object )
# dispatch to -> gpflow.conditionals.conditionals._dense_conditional(...)
- gpflow.conditionals.conditionals._dense_conditional(Xnew, X, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]#
Given f, representing the GP at the points X, produce the mean and (co-)variance of the GP at the points Xnew.
Additionally, there may be Gaussian uncertainty about f as represented by q_sqrt. In this case f represents the mean of the distribution and q_sqrt the square-root of the covariance.
Additionally, the GP may have been centered (whitened) so that:
p(v) = 𝒩(𝟎, 𝐈), f = 𝐋v
thus:
p(f) = 𝒩(𝟎, 𝐋𝐋ᵀ) = 𝒩(𝟎, 𝐊).
In this case f represents the values taken by v.
The method can either return the diagonals of the covariance matrix for each output (default) or the full covariance matrix (full_cov=True).
We assume R independent GPs, represented by the columns of f (and the first dimension of q_sqrt).
- Parameters:
  - Xnew (Tensor) – Xnew has shape [batch…, N, D]. Data matrix. Evaluate the GP at these new points.
  - X (Tensor) – X has shape [M, D]. Data points.
  - kernel (Kernel) – GPflow kernel.
  - f (Tensor) – f has shape [M, R]. Data matrix, representing the function values at X, for R functions.
  - q_sqrt (Optional[Tensor]) – matrix of standard deviations or Cholesky matrices, size [M, R] or [R, M, M].
  - white (bool) – whether to use the whitened representation as described above.
  - full_cov (bool)
  - full_output_cov (bool)
- Return type: Tuple[Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, R].
return[1] has shape [batch…, N, R, N, R] if full_cov and full_output_cov.
return[1] has shape [batch…, N, R, R] if (not full_cov) and full_output_cov.
return[1] has shape [batch…, N, R] if (not full_cov) and (not full_output_cov).
return[1] has shape [batch…, R, N, N] if full_cov and (not full_output_cov).
mean and variance
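The whitened parameterization above can be checked numerically. This is a hedged NumPy sketch, not GPflow code; the squared-exponential kernel and the jitter term are our own choices:

```python
import numpy as np

# Toy check of the whitening identity described above: if v ~ N(0, I) and
# f = L v with K = L L^T, then predicting from the whitened v via
# Knm L^-T v matches predicting from f via Knm K^-1 f.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 1))
Xnew = rng.normal(size=(3, 1))
K = np.exp(-0.5 * (X - X.T) ** 2) + 1e-10 * np.eye(5)
Knm = np.exp(-0.5 * (Xnew - X.T) ** 2)        # cross-covariance, [N, M]
L = np.linalg.cholesky(K)
v = rng.normal(size=(5, 1))                    # whitened representation
f = L @ v                                      # un-whitened function values
mean_white = Knm @ np.linalg.solve(L.T, v)     # Knm L^-T v
mean_plain = Knm @ np.linalg.solve(K, f)       # Knm K^-1 f
assert np.allclose(mean_white, mean_plain)
```

The identity holds because Knm K⁻¹ L v = Knm L⁻ᵀ L⁻¹ L v = Knm L⁻ᵀ v.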
gpflow.conditionals.conditional( object, SharedIndependentInducingVariables, SharedIndependent, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.shared_independent_conditional(...)
Multioutput conditional for an independent kernel and shared inducing variables. Same behaviour as conditional with non-multioutput kernels. The covariance matrices used to calculate the conditional have the following shape:
Kuu: [M, M]
Kuf: [M, N]
Kff: [N] or [N, N]
Further reference:
See gpflow.conditionals._conditional for a detailed explanation of conditional in the single-output case.
See the multioutput notebook for more information about the multioutput framework.
- Parameters:
  - Xnew (Tensor) – Xnew has shape [batch…, N, D]. Data matrix.
  - f (Tensor) – f has shape [M, L]. Data matrix.
  - full_cov (bool) – whether to return the covariance between the datapoints.
  - full_output_cov (bool) – whether to return the covariance between the outputs. Note: as we are using an independent kernel, these covariances will be zero.
  - q_sqrt (Optional[Tensor]) – matrix of standard deviations or Cholesky matrices, size [M, P] or [P, M, M].
  - white (bool) – whether to use the whitened representation.
  - inducing_variable (SharedIndependentInducingVariables) – inducing_variable has shape [M, D, broadcast L].
  - kernel (SharedIndependent)
- Return type: Tuple[Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, P].
return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.
return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.
return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).
return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).
mean: [N, P]
variance: [N, P], [P, N, N], [N, P, P] or [N, P, N, P]
Please see gpflow.conditional._expand_independent_outputs for more information about the shape of the variance, depending on full_cov and full_output_cov.
gpflow.conditionals.conditional( object, SeparateIndependentInducingVariables, SharedIndependent, object )
gpflow.conditionals.conditional( object, SharedIndependentInducingVariables, SeparateIndependent, object )
gpflow.conditionals.conditional( object, SeparateIndependentInducingVariables, SeparateIndependent, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.separate_independent_conditional(...)
- gpflow.conditionals.multioutput.conditionals.separate_independent_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]#
- Parameters:
  - Xnew (Tensor)
  - inducing_variable (MultioutputInducingVariables)
  - kernel (MultioutputKernel)
  - f (Tensor)
  - full_cov (bool)
  - full_output_cov (bool)
  - q_sqrt (Optional[Tensor])
  - white (bool)
- Return type: Tuple[Tensor, Tensor]
gpflow.conditionals.conditional( object, FallbackSharedIndependentInducingVariables, IndependentLatent, object )
gpflow.conditionals.conditional( object, FallbackSeparateIndependentInducingVariables, IndependentLatent, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.fallback_independent_latent_conditional(...)
- gpflow.conditionals.multioutput.conditionals.fallback_independent_latent_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]#
Interdomain conditional with independent latents. In this case the number of latent GPs (L) will be different from the number of outputs (P). The covariance matrices used to calculate the conditional have the following shape:
Kuu: [L, M, M]
Kuf: [M, L, N, P]
Kff: [N, P, N, P], [N, P, P] or [N, P]
Further reference:
See gpflow.conditionals._conditional for a detailed explanation of conditional in the single-output case.
See the multioutput notebook for more information about the multioutput framework.
See above for the parameters and the return value.
- Parameters:
  - Xnew (Tensor) – Xnew has shape [batch…, N, D].
  - inducing_variable (MultioutputInducingVariables) – inducing_variable has shape [M, D, broadcast L].
  - f (Tensor) – f has shape [M, L].
  - q_sqrt (Optional[Tensor]) – q_sqrt has shape [M, L] or [L, M, M].
  - kernel (IndependentLatent)
  - full_cov (bool)
  - full_output_cov (bool)
  - white (bool)
- Return type: Tuple[Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, P].
return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.
return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.
return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).
return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).
gpflow.conditionals.conditional( object, InducingPoints, MultioutputKernel, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.inducing_point_conditional(...)
- gpflow.conditionals.multioutput.conditionals.inducing_point_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]#
Multi-output GP with fully correlated inducing variables. The inducing variables are shaped in the same way as evaluations of K, to allow a default inducing point scheme for multi-output kernels. The covariance matrices used to calculate the conditional have the following shape:
Kuu: [M, L, M, L]
Kuf: [M, L, N, P]
Kff: [N, P, N, P], [N, P, P] or [N, P]
Further reference:
See gpflow.conditionals._conditional for a detailed explanation of conditional in the single-output case.
See the multioutput notebook for more information about the multioutput framework.
- Parameters:
f (
Tensor
) –f has shape [L, 1].
variational mean, [L, 1]
q_sqrt (
Optional
[Tensor
]) –q_sqrt has shape [L_1_or_1_L_L…].
standard-deviations or cholesky, [L, 1] or [1, L, L]
Xnew (
Tensor
) –Xnew has shape [batch…, N, D].
inducing_variable (
InducingPoints
) –inducing_variable has shape [M, D, broadcast L].
kernel (
MultioutputKernel
) –full_cov (
bool
) –full_output_cov (
bool
) –white (
bool
) –
- Return type: Tuple[Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, P].
return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.
return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.
return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).
return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).
gpflow.conditionals.conditional( object, SharedIndependentInducingVariables, LinearCoregionalization, object )
gpflow.conditionals.conditional( object, SeparateIndependentInducingVariables, LinearCoregionalization, object )
# dispatch to -> gpflow.conditionals.multioutput.conditionals.coregionalization_conditional(...)
- gpflow.conditionals.multioutput.conditionals.coregionalization_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False)[source]#
Most efficient routine to project L independent latent GPs through a mixing matrix W. The mixing matrix is a member of the LinearCoregionalization kernel and has shape [P, L]. The covariance matrices used to calculate the conditional have the following shape:
Kuu: [L, M, M]
Kuf: [L, M, N]
Kff: [L, N] or [L, N, N]
Further reference:
See gpflow.conditionals._conditional for a detailed explanation of conditional in the single-output case.
See the multioutput notebook for more information about the multioutput framework.
- Parameters:
  - Xnew (Tensor) – Xnew has shape [batch…, N, D].
  - inducing_variable (MultioutputInducingVariables) – inducing_variable has shape [M, D, broadcast L].
  - f (Tensor) – f has shape [M, L].
  - q_sqrt (Optional[Tensor]) – q_sqrt has shape [M, L] or [L, M, M].
  - kernel (LinearCoregionalization)
  - full_cov (bool)
  - full_output_cov (bool)
  - white (bool)
- Return type: Tuple[Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, P].
return[1] has shape [batch…, N, P, N, P] if full_cov and full_output_cov.
return[1] has shape [batch…, N, P, P] if (not full_cov) and full_output_cov.
return[1] has shape [batch…, N, P] if (not full_cov) and (not full_output_cov).
return[1] has shape [batch…, P, N, N] if full_cov and (not full_output_cov).
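For intuition about the mixing step, the marginal moments of f = W g for L independent latent GPs g can be sketched as follows; g_mean and g_var are hypothetical latent marginals we made up, not values produced by GPflow:

```python
import numpy as np

# Project L independent latent GP marginals through a mixing matrix W of
# shape [P, L], as in LinearCoregionalization. Because the latents are
# independent, output variances are weighted by the squared mixing weights.
rng = np.random.default_rng(2)
N, L_, P = 4, 2, 3
W = rng.normal(size=(P, L_))                  # mixing matrix, [P, L]
g_mean = rng.normal(size=(N, L_))             # latent means, [N, L]
g_var = rng.uniform(0.1, 1.0, size=(N, L_))   # latent marginal variances, [N, L]
f_mean = g_mean @ W.T                         # output means, [N, P]
f_var = g_var @ (W ** 2).T                    # output marginal variances, [N, P]
```

This mirrors the identity var(f_p) = Σ_l W[p, l]² var(g_l), which holds only because the latents are independent.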
gpflow.conditionals.sample_conditional#
This function uses multiple dispatch; the implementation selected depends on the types of the arguments passed in:
gpflow.conditionals.sample_conditional( object, SharedIndependentInducingVariables, LinearCoregionalization, object )
# dispatch to -> gpflow.conditionals.multioutput.sample_conditionals._sample_conditional(...)
- gpflow.conditionals.multioutput.sample_conditionals._sample_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False, num_samples=None)[source]#
sample_conditional will return a sample from the conditional distribution. In most cases this means calculating the conditional mean m and variance v and then returning m + sqrt(v) * eps, with eps ~ N(0, 1). However, for some combinations of Mok and Mof, more efficient sampling routines exist. The dispatcher will make sure that we use the most efficient one.
- Return type: Tuple[Tensor, Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, P] if num_samples is None.
return[0] has shape [batch…, num_samples, N, P] if num_samples is not None.
return[1] has shape [batch…, N, P].
return[2] has shape [batch…, N, P].
samples, mean, cov
- Parameters:
  - Xnew (Tensor) – Xnew has shape [batch…, N, D].
  - inducing_variable (SharedIndependentInducingVariables) – inducing_variable has shape [M, D, maybe_R…].
  - f (Tensor) – f has shape [M, R].
  - kernel (LinearCoregionalization)
  - full_cov (bool)
  - full_output_cov (bool)
  - q_sqrt (Optional[Tensor])
  - white (bool)
  - num_samples (Optional[int])
gpflow.conditionals.sample_conditional( object, InducingVariables, Kernel, object )
gpflow.conditionals.sample_conditional( object, object, Kernel, object )
# dispatch to -> gpflow.conditionals.sample_conditionals._sample_conditional(...)
- gpflow.conditionals.sample_conditionals._sample_conditional(Xnew, inducing_variable, kernel, f, *, full_cov=False, full_output_cov=False, q_sqrt=None, white=False, num_samples=None)[source]#
sample_conditional will return a sample from the conditional distribution. In most cases this means calculating the conditional mean m and variance v and then returning m + sqrt(v) * eps, with eps ~ N(0, 1). However, for some combinations of Mok and Mof, more efficient sampling routines exist. The dispatcher will make sure that we use the most efficient one.
- Return type: Tuple[Tensor, Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, R] if num_samples is None.
return[0] has shape [batch…, num_samples, N, R] if num_samples is not None.
return[1] has shape [batch…, N, R].
return[2] has shape [batch…, N, R, R] if (not full_cov) and full_output_cov.
return[2] has shape [batch…, N, R] if (not full_cov) and (not full_output_cov).
return[2] has shape [batch…, R, N, N] if full_cov and (not full_output_cov).
samples, mean, cov
- Parameters:
  - Xnew (Tensor) – Xnew has shape [batch…, N, D].
  - inducing_variable (InducingVariables) – inducing_variable has shape [M, D, maybe_R…].
  - f (Tensor) – f has shape [M, R].
  - kernel (Kernel)
  - full_cov (bool)
  - full_output_cov (bool)
  - q_sqrt (Optional[Tensor])
  - white (bool)
  - num_samples (Optional[int])
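The default rule described above, m + sqrt(v) * eps with eps ~ N(0, 1), can be sketched with plain NumPy. The shapes mirror the [num_samples, N, R] return convention, but this is an illustration, not the dispatched GPflow code:

```python
import numpy as np

# Draw samples from a Gaussian conditional given its marginal moments:
# samples = m + sqrt(v) * eps, eps ~ N(0, 1), broadcast over num_samples.
rng = np.random.default_rng(3)
N, R = 5, 2
m = rng.normal(size=(N, R))                  # conditional mean, [N, R]
v = rng.uniform(0.5, 1.5, size=(N, R))       # conditional marginal variance, [N, R]
num_samples = 2000
eps = rng.standard_normal(size=(num_samples, N, R))
samples = m + np.sqrt(v) * eps               # [num_samples, N, R]
```

With enough samples, the empirical mean and variance of samples recover m and v, which is what makes this the baseline sampling route when no specialized routine applies.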
gpflow.conditionals.uncertain_conditional#
- gpflow.conditionals.uncertain_conditional(Xnew_mu, Xnew_var, inducing_variable, kernel, q_mu, q_sqrt, *, mean_function=None, full_output_cov=False, full_cov=False, white=False)[source]#
Calculates the conditional for uncertain inputs Xnew, p(Xnew) = N(Xnew_mu, Xnew_var). See the conditional documentation for further reference.
- Parameters:
  - Xnew_mu (Tensor) – Xnew_mu has shape [batch…, N, Din]. Mean of the inputs.
  - Xnew_var (Tensor) – Xnew_var has shape [batch…, N, n, n]. Covariance matrix of the inputs.
  - inducing_variable (InducingVariables) – inducing_variable has shape [M, Din, broadcast t]. gpflow.InducingVariable object; only InducingPoints is supported.
  - kernel (Kernel) – GPflow kernel object.
  - q_mu (Tensor) – q_mu has shape [M, Dout]. Mean of the inducing points.
  - q_sqrt (Tensor) – q_sqrt has shape [t, M, M]. Cholesky of the covariance matrix of the inducing points.
  - full_output_cov (bool) – whether to compute covariance between output dimensions. Influences the shape of the return value fvar. Default is False.
  - white (bool) – whether to use the whitened representation. Default is False.
  - mean_function (Optional[MeanFunction])
  - full_cov (bool)
- Return fmean, fvar:
mean and covariance of the conditional
- Return type: Tuple[Tensor, Tensor]
- Returns:
return[0] has shape [batch…, N, Dout].
return[1] has shape [batch…, N, Dout] if not full_output_cov.
return[1] has shape [batch…, N, t, t] if full_output_cov.
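uncertain_conditional computes these moments analytically for the supported kernels. Purely for intuition, the same quantity can be approximated by Monte Carlo over p(Xnew) using the law of total variance; the toy m(x) and v(x) below are stand-in closures we made up, not GPflow functions:

```python
import numpy as np

# MC approximation of a conditional with uncertain inputs:
#   fmean = E[m(X)],  fvar = E[v(X)] + Var[m(X)],  X ~ N(Xnew_mu, Xnew_var).
rng = np.random.default_rng(4)
m = np.sin                                   # toy conditional mean m(x)
v = lambda x: 0.1 + 0.05 * np.cos(x) ** 2    # toy conditional variance v(x)
Xnew_mu, Xnew_var = 0.3, 0.04
xs = Xnew_mu + np.sqrt(Xnew_var) * rng.standard_normal(100_000)
fmean = m(xs).mean()                         # E[m(X)]
fvar = v(xs).mean() + m(xs).var()            # E[v(X)] + Var[m(X)]
```

For this toy m(x) = sin(x), the exact mean is sin(Xnew_mu) * exp(-Xnew_var / 2), so the MC estimate can be checked against a closed form.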