gpflow.utilities.ops#

Functions#

gpflow.utilities.ops.broadcasting_elementwise#

gpflow.utilities.ops.broadcasting_elementwise(op, a, b)[source]#

Apply the binary operation op to every pair of elements from tensors a and b.

Parameters:
  • op (Callable[[Tensor, Tensor], Tensor]) – binary operator on tensors, e.g. tf.add or tf.subtract

  • a (Tensor) –

    • a has shape [a_shape…].

  • b (Tensor) –

    • b has shape [b_shape…].

Return type:

Tensor

Returns:

  • return has shape [a_shape…, b_shape…].
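
A minimal usage sketch (values are illustrative; only the documented [a_shape…, b_shape…] output layout is assumed):

import tensorflow as tf
from gpflow.utilities.ops import broadcasting_elementwise

a = tf.constant([1.0, 2.0, 3.0])            # shape [3]
b = tf.constant([10.0, 20.0])               # shape [2]
c = broadcasting_elementwise(tf.add, a, b)  # shape [3, 2]; c[i, j] == a[i] + b[j]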

gpflow.utilities.ops.cast#

gpflow.utilities.ops.cast(value, dtype, name=None)[source]#
Parameters:
  • value (Union[Tensor, ndarray[Any, Any]]) –

  • dtype (DType) –

  • name (Optional[str]) –

Return type:

Tensor
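
A minimal sketch, assuming cast behaves like tf.cast while also accepting NumPy arrays (as the value type suggests):

import numpy as np
import tensorflow as tf
from gpflow.utilities.ops import cast

x = np.arange(3)         # int64 NumPy array
y = cast(x, tf.float64)  # tf.Tensor with dtype float64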

gpflow.utilities.ops.difference_matrix#

gpflow.utilities.ops.difference_matrix(X, X2)[source]#

Returns (X - X2ᵀ), i.e. the matrix of pairwise differences between the rows of X and X2.

Parameters:
  • X (Tensor) –

    • X has shape [batch…, N, D].

  • X2 (Optional[Tensor]) –

    • X2 has shape [batch2…, N2, D].

Return type:

Tensor

Returns:

  • return has shape [batch…, N, N, D] if X2 is None.

  • return has shape [batch…, N, batch2…, N2, D] if X2 is not None.
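
A shape-focused sketch of both call patterns (no batch dimensions; values are illustrative):

import tensorflow as tf
from gpflow.utilities.ops import difference_matrix

X = tf.random.normal((5, 2))          # [N, D]
X2 = tf.random.normal((3, 2))         # [N2, D]
d = difference_matrix(X, X2)          # [N, N2, D]; d[i, j] == X[i] - X2[j]
d_self = difference_matrix(X, None)   # [N, N, D]; pairwise differences within X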

gpflow.utilities.ops.eye#

gpflow.utilities.ops.eye(num, value, dtype=None)[source]#
Parameters:
  • num (int) –

  • value (Tensor) –

  • dtype (Optional[DType]) –

Return type:

Tensor
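
A minimal sketch, assuming value is a scalar placed on the diagonal of a num x num matrix (an assumption based on the signature, not stated above):

import tensorflow as tf
from gpflow.utilities.ops import eye

jitter = tf.constant(1e-6, dtype=tf.float64)
I = eye(3, value=jitter)                      # assumed: 3 x 3 matrix with 1e-6 on the diagonal
I32 = eye(3, value=jitter, dtype=tf.float32)  # same, but cast to float32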

gpflow.utilities.ops.leading_transpose#

gpflow.utilities.ops.leading_transpose(tensor, perm, leading_dim=0)[source]#

Transposes tensors with leading dimensions.

The leading dimensions are represented in the permutation list by an ellipsis, so perm has type List[Union[int, type(…)]] (note that, due to mypy limitations, List[Any] is used instead). The transpose treats the leading dimensions as a single grouped element, indexed by 0 in the perm list. For example, passing perm=[-2, …, -1] assumes the input tensor has shape […, A, B] and moves the leading dimensions between the A and B dimensions. Dimension indices in the permutation list can be negative or positive; valid positive indices run from 1 up to the tensor rank, with the leading dimensions taking index 0.

Example:

import tensorflow as tf
from gpflow.utilities.ops import leading_transpose

a = tf.random.normal((1, 2, 3, 4, 5, 6))
# View a as [..., A, B, C]: A, B and C are the 1st, 2nd and 3rd
# elements in the permutation list, and the leading dimensions
# [1, 2, 3] are the 0th element.
b = leading_transpose(a, [3, -3, ..., -2])  # [C, A, ..., B]
print(b.shape)

output> (6, 4, 1, 2, 3, 5)
Parameters:
  • tensor (Tensor) –

    • tensor has shape [any…].

    TensorFlow tensor.

  • perm (List[Any]) – List of permutation indices.

  • leading_dim (int) –

Return type:

Tensor

Returns:

  • return has shape [transposed_any…].

TensorFlow tensor.

Raises:

ValueError – when the ellipsis (…) cannot be found in perm.

gpflow.utilities.ops.pca_reduce#

gpflow.utilities.ops.pca_reduce(X, latent_dim)[source]#

Linearly reduce the dimensionality of the input points X to latent_dim dimensions.

Parameters:
  • X (Tensor) –

    • X has shape [N, D].

    Data to reduce.

  • latent_dim (Tensor) –

    • latent_dim has shape [].

    Number of latent dimensions, Q, where Q < D.

Return type:

Tensor

Returns:

  • return has shape [N, Q].

PCA projection array.
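
A minimal usage sketch (passing a Python int for latent_dim here):

import numpy as np
import tensorflow as tf
from gpflow.utilities.ops import pca_reduce

X = tf.constant(np.random.randn(100, 5))  # [N, D] data, D = 5
X_latent = pca_reduce(X, latent_dim=2)    # [N, 2] PCA projection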

gpflow.utilities.ops.square_distance#

gpflow.utilities.ops.square_distance(X, X2)[source]#

Returns ||X - X2ᵀ||², the matrix of pairwise squared Euclidean distances between the rows of X and X2.

Due to the implementation and floating-point imprecision, the result may actually be very slightly negative for entries very close to each other.

Parameters:
  • X (Tensor) –

    • X has shape [batch…, N, D].

  • X2 (Optional[Tensor]) –

    • X2 has shape [batch2…, N2, D].

Return type:

Tensor

Returns:

  • return has shape [batch…, N, N] if X2 is None.

  • return has shape [batch…, N, batch2…, N2] if X2 is not None.
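
A shape-focused sketch, mirroring difference_matrix above (no batch dimensions):

import tensorflow as tf
from gpflow.utilities.ops import square_distance

X = tf.random.normal((5, 2))          # [N, D]
X2 = tf.random.normal((3, 2))         # [N2, D]
d2 = square_distance(X, X2)           # [N, N2]; d2[i, j] == ||X[i] - X2[j]||²
d2_self = square_distance(X, None)    # [N, N]; pairwise squared distances within X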