gpflow.utilities.ops¶
gpflow.utilities.ops.broadcasting_elementwise¶
- gpflow.utilities.ops.broadcasting_elementwise(op, a, b)[source]¶
Apply binary operation op to every pair in tensors a and b.
- Parameters
op – binary operator on tensors, e.g. tf.add, tf.subtract
a – tf.Tensor, shape [n_1, …, n_a]
b – tf.Tensor, shape [m_1, …, m_b]
- Returns
tf.Tensor, shape [n_1, …, n_a, m_1, …, m_b]
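The shape behaviour can be sketched in plain NumPy (a minimal illustration of the documented semantics, not GPflow's actual TensorFlow implementation):

```python
import numpy as np

def broadcasting_elementwise(op, a, b):
    # Pair every element of `a` with every element of `b`:
    # flatten both, apply `op` to an outer [prod(n), prod(m)] pairing,
    # then reshape to the concatenated shape [n_1, ..., n_a, m_1, ..., m_b].
    flat = op(np.reshape(a, (-1, 1)), np.reshape(b, (1, -1)))
    return np.reshape(flat, a.shape + b.shape)

a = np.array([[1.0, 2.0], [3.0, 4.0]])  # shape [2, 2]
b = np.array([10.0, 20.0, 30.0])        # shape [3]
out = broadcasting_elementwise(np.add, a, b)
print(out.shape)  # (2, 2, 3); out[i, j, k] == a[i, j] + b[k]
```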
gpflow.utilities.ops.difference_matrix¶
- gpflow.utilities.ops.difference_matrix(X, X2)[source]¶
Returns (X - X2ᵀ)
This function can deal with leading dimensions in X and X2. For example, if X has shape [M, D] and X2 has shape [N, D], the output will have shape [M, N, D]. If X has shape [I, J, M, D] and X2 has shape [K, L, N, D], the output will have shape [I, J, M, K, L, N, D].
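For the plain 2-dimensional case, the same result can be obtained in NumPy by inserting broadcast axes (a sketch of the semantics only, not GPflow's implementation):

```python
import numpy as np

X = np.arange(6.0).reshape(3, 2)   # [M=3, D=2]
X2 = np.arange(8.0).reshape(4, 2)  # [N=4, D=2]

# Insert a broadcast axis into each operand so every row of X is
# paired with every row of X2; the result has shape [M, N, D].
diff = X[:, None, :] - X2[None, :, :]
print(diff.shape)  # (3, 4, 2); diff[m, n, d] == X[m, d] - X2[n, d]
```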
gpflow.utilities.ops.pca_reduce¶
- gpflow.utilities.ops.pca_reduce(X, latent_dim)[source]¶
A helper function that linearly reduces the dimensionality of the input points X to latent_dim dimensions using PCA.
- Parameters
X (Tensor) – data array of size N (number of points) x D (dimensions)
latent_dim (Tensor) – number of latent dimensions Q < D
- Return type
Tensor
- Returns
PCA projection array of size [N, Q].
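A minimal NumPy sketch of the same projection (PCA via SVD of the centred data; GPflow's own implementation may differ in detail):

```python
import numpy as np

def pca_reduce(X, latent_dim):
    # Centre the data, then project onto the top `latent_dim`
    # principal directions (right singular vectors of the centred data).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:latent_dim].T  # [N, Q]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))  # N=50 points in D=5 dimensions
Z = pca_reduce(X, 2)
print(Z.shape)  # (50, 2)
```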
gpflow.utilities.ops.square_distance¶
- gpflow.utilities.ops.square_distance(X, X2)[source]¶
Returns ||X - X2ᵀ||². Due to the implementation and floating-point imprecision, the result may actually be very slightly negative for entries that are very close to each other.
This function can deal with leading dimensions in X and X2. In the simple case, where X and X2 are both 2-dimensional, for example X is [N, D] and X2 is [M, D], a tensor of shape [N, M] is returned. If X is [N1, S1, D] and X2 is [N2, S2, D], then the output will be [N1, S1, N2, S2].
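In the 2-dimensional case the intended result can be written directly in NumPy as below. (The possibility of slightly negative entries noted above arises in expansion-based implementations of the form ||X||² − 2XX2ᵀ + ||X2||²; the direct form here is only a sketch of the semantics.)

```python
import numpy as np

X = np.array([[0.0, 0.0], [3.0, 4.0]])   # [N=2, D=2]
X2 = np.array([[0.0, 0.0], [1.0, 1.0]])  # [M=2, D=2]

# Pairwise squared Euclidean distances: subtract with broadcast
# axes, square, and sum over the feature dimension D.
d2 = np.sum((X[:, None, :] - X2[None, :, :]) ** 2, axis=-1)  # [N, M]
print(d2[1, 0])  # 25.0 (squared distance between [3, 4] and [0, 0])
```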