gpflow.utilities#

Modules#

Classes#

gpflow.utilities.Dispatcher#

class gpflow.utilities.Dispatcher(name, doc=None)[source]#

Bases: Dispatcher

multipledispatch.Dispatcher uses a generator to yield the desired function implementation, which is problematic because TensorFlow’s AutoGraph is not able to compile code that passes through generators.

This class overwrites the problematic method in the original Dispatcher and solely makes use of simple for-loops, which are compilable by AutoGraph.
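As a sketch of the idea (the `registry` structure and function names here are assumptions, not GPflow’s internals), dispatch over a mapping from signature tuples to implementations can be written with a plain for-loop:

```python
def dispatch(registry, *types):
    # Walk the registered signatures in order (most specific first) and
    # return the first implementation whose signature matches. A plain
    # for-loop like this can be compiled by AutoGraph, unlike a generator.
    for signature, func in registry.items():
        if len(signature) == len(types) and all(
            issubclass(t, s) for t, s in zip(types, signature)
        ):
            return func
    return None  # no match, mirroring Dispatcher.dispatch
```

Here `registry` is assumed to be an ordered dict keyed by type tuples, with more specific signatures inserted first.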

dispatch(*types)[source]#

Returns the matching function for the given types; returns None if no match exists.

Parameters:

types (Union[Type[Any], Tuple[Type[Any], ...]]) –

Return type:

Optional[Callable[..., Any]]

dispatch_or_raise(*types)[source]#

Returns the matching function for the given types; raises an error if no match exists.

Parameters:

types (Union[Type[Any], Tuple[Type[Any], ...]]) –

Return type:

Callable[..., Any]

get_first_occurrence(*types)[source]#

Returns the first occurrence of a matching function.

Based on multipledispatch.Dispatcher.dispatch_iter, which returns an iterator of matching functions. This method uses the same logic to select functions, but simply returns the first element of the iterator. If no matching functions are found, None is returned.

Parameters:

types (Union[Type[Any], Tuple[Type[Any], ...]]) –

Return type:

Optional[Callable[..., Any]]

register(*types, **kwargs)[source]#

Register a new implementation with the dispatcher.

>>> f = Dispatcher('f')
>>> @f.register(int)
... def inc(x):
...     return x + 1
>>> @f.register(float)
... def dec(x):
...     return x - 1
>>> @f.register(list)
... @f.register(tuple)
... def reverse(x):
...     return x[::-1]
>>> f(1)
2
>>> f(1.0)
0.0
>>> f([1, 2, 3])
[3, 2, 1]
Parameters:
  • types (Union[Type[Any], Tuple[Type[Any], ...]]) –

  • kwargs (Any) –

Return type:

Callable[[TypeVar(_C, bound= Callable[..., Any])], TypeVar(_C, bound= Callable[..., Any])]

Functions#

gpflow.utilities.add_likelihood_noise_cov#

gpflow.utilities.add_likelihood_noise_cov(K, likelihood, X)[source]#

Returns K + σ²I, where σ² is the likelihood noise variance, added to the diagonal of K.

Parameters:
  • K (Tensor) –

    • K has shape [batch…, N, N].

  • X (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

    • X has shape [batch…, N, D].

  • likelihood (Gaussian) –

Return type:

Tensor

Returns:

  • return has shape [batch…, N, N].

gpflow.utilities.add_noise_cov#

gpflow.utilities.add_noise_cov(K, likelihood_variance)[source]#

Returns K + diag(σ²), where σ² is the likelihood noise variance added to the diagonal of K.

Parameters:
  • K (Tensor) –

    • K has shape [batch…, N, N].

  • likelihood_variance (Union[ndarray[Any, Any], Tensor, Variable, Parameter]) –

    • likelihood_variance has shape [broadcast batch…, broadcast N].

Return type:

Tensor

Returns:

  • return has shape [batch…, N, N].
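A minimal plain-Python sketch of the operation (the real function works on TensorFlow tensors and broadcasts the variance along the diagonal):

```python
def add_noise_cov(K, variance):
    # Add a scalar noise variance to the diagonal of a square matrix K,
    # i.e. compute K + variance * I (list-of-lists sketch, not TensorFlow).
    n = len(K)
    return [
        [K[i][j] + (variance if i == j else 0.0) for j in range(n)]
        for i in range(n)
    ]
```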

gpflow.utilities.assert_params_false#

gpflow.utilities.assert_params_false(called_method, **kwargs)[source]#

Asserts that parameters are False.

Parameters:
  • called_method (Callable[..., Any]) – The method or function that is calling this. Used for nice error messages.

  • kwargs (bool) – Parameters that must be False.

Raises:

NotImplementedError – If any kwargs are True.

Return type:

None
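A sketch of the documented behaviour (the exact error message is an assumption):

```python
def assert_params_false(called_method, **kwargs):
    # Collect any keyword arguments that are True and raise a readable
    # NotImplementedError naming the calling method and the offenders.
    errors = {name: value for name, value in kwargs.items() if value}
    if errors:
        raise NotImplementedError(
            f"{called_method.__name__} does not currently support: {errors}"
        )
```

For example, a method that does not support full covariances could call assert_params_false(predict_f, full_cov=full_cov) at the top of its body.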

gpflow.utilities.deepcopy#

gpflow.utilities.deepcopy(input_module, memo=None)[source]#

Returns a deepcopy of the input tf.Module. Before copying, this resets the caches stored inside each tfp.bijectors.Bijector, which would otherwise prevent the tf.Module from being deep-copied.

Parameters:
  • input_module (TypeVar(M, bound= Module)) – tf.Module including keras.Model, keras.layers.Layer and gpflow.Module.

  • memo (Optional[Dict[int, Any]]) – passed through to copy.deepcopy (see https://docs.python.org/3/library/copy.html).

Return type:

TypeVar(M, bound= Module)

Returns:

A deepcopy of the input module.

gpflow.utilities.freeze#

gpflow.utilities.freeze(input_module)[source]#

Returns a deepcopy of the input tf.Module with constants instead of variables and parameters.

Parameters:

input_module (TypeVar(M, bound= Module)) – tf.Module or gpflow.Module.

Return type:

TypeVar(M, bound= Module)

Returns:

A frozen deepcopy of the input module.

gpflow.utilities.is_variable#

gpflow.utilities.is_variable(t)[source]#

Returns whether t is a TensorFlow variable.

Parameters:

t (Union[int, float, Sequence[Any], ndarray[Any, Any], Tensor, Variable, Parameter]) –

Return type:

bool

gpflow.utilities.leaf_components#

gpflow.utilities.leaf_components(input)[source]#
Parameters:

input (Module) –

Return type:

Mapping[str, Union[Variable, Parameter]]

gpflow.utilities.multiple_assign#

gpflow.utilities.multiple_assign(module, parameters)[source]#

Multiple assign takes a dictionary with new values. Dictionary keys are paths to the tf.Variables or gpflow.Parameters of the input module.

Parameters:
  • module (Module) – tf.Module.

  • parameters (Mapping[str, Tensor]) – a dictionary with keys of the form “.module.path.to.variable” and new value tensors.

Return type:

None
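The path-splitting idea can be sketched in plain Python (the real function calls .assign() on the matching tf.Variable or gpflow.Parameter rather than using setattr):

```python
def multiple_assign(module, parameters):
    # For each ".path.to.attr" key, walk down the attribute chain and
    # assign the new value to the leaf attribute.
    for path, value in parameters.items():
        obj = module
        *parents, leaf = path.lstrip(".").split(".")
        for attr in parents:
            obj = getattr(obj, attr)
        setattr(obj, leaf, value)
```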

gpflow.utilities.parameter_dict#

gpflow.utilities.parameter_dict(module)[source]#

Returns a dictionary of parameters (variables) for the tf.Module component. Dictionary keys are relative paths to the attributes to which the parameters (variables) are assigned.

class SubModule(tf.Module):
    def __init__(self):
        self.parameter = gpflow.Parameter(1.0)
        self.variable = tf.Variable(1.0)

class Module(tf.Module):
    def __init__(self):
        self.submodule = SubModule()

m = Module()
params = parameter_dict(m)
# {
#     ".submodule.parameter": <parameter object>,
#     ".submodule.variable": <variable object>
# }

Parameters:

module (Module) –

Return type:

Dict[str, Union[Variable, Parameter]]

gpflow.utilities.positive#

gpflow.utilities.positive(lower=None, base=None)[source]#

Returns a positive bijector (a reversible transformation from real to positive numbers).

Parameters:
  • lower (Optional[float]) – overrides default lower bound (if None, defaults to gpflow.config.default_positive_minimum())

  • base (Optional[str]) – overrides base positive bijector (if None, defaults to gpflow.config.default_positive_bijector())

Return type:

Bijector

Returns:

a bijector instance
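For intuition, a softplus-based positive transform with a lower bound can be sketched in plain Python (GPflow’s actual bijector and its defaults come from gpflow.config; the function names here are assumptions):

```python
import math

def positive(lower=0.0):
    # Map the real line to (lower, inf) via a shifted softplus, and back.
    def forward(x):
        return lower + math.log1p(math.exp(x))   # softplus(x) + lower
    def inverse(y):
        return math.log(math.expm1(y - lower))   # softplus^{-1}(y - lower)
    return forward, inverse
```

The lower bound keeps optimized parameters strictly away from zero, which helps numerical stability.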

gpflow.utilities.print_summary#

gpflow.utilities.print_summary(module, fmt=None)[source]#

Prints a summary of the parameters and variables contained in a tf.Module.

Parameters:
  • module (Module) –

  • fmt (Optional[str]) –

Return type:

None

gpflow.utilities.read_values#

gpflow.utilities.read_values(module)[source]#

Returns a dictionary of numpy values of the module parameters (variables).

Parameters:

module (Module) –

Return type:

Dict[str, ndarray[Any, Any]]

gpflow.utilities.reset_cache_bijectors#

gpflow.utilities.reset_cache_bijectors(input_module)[source]#

Recursively finds the tfp.bijectors.Bijector instances inside the components of the tf.Module using traverse_component, and resets the caches stored inside each bijector.

Parameters:

input_module (Module) – tf.Module including keras.Model, keras.layers.Layer and gpflow.Module.

Return type:

Module

Returns:

same object but with all bijector caches reset

gpflow.utilities.select_dict_parameters_with_prior#

gpflow.utilities.select_dict_parameters_with_prior(model)[source]#

Collects the parameters that have a prior into a dictionary.

Parameters:

model (Module) –

Return type:

Dict[str, Parameter]

gpflow.utilities.tabulate_module_summary#

gpflow.utilities.tabulate_module_summary(module, tablefmt=None)[source]#
Parameters:
  • module (Module) –

  • tablefmt (Optional[str]) –

Return type:

str

gpflow.utilities.to_default_float#

gpflow.utilities.to_default_float(x)[source]#
Parameters:

x (Union[int, float, Sequence[Any], ndarray[Any, Any], Tensor, Variable, Parameter]) –

Return type:

Tensor

gpflow.utilities.to_default_int#

gpflow.utilities.to_default_int(x)[source]#
Parameters:

x (Union[int, float, Sequence[Any], ndarray[Any, Any], Tensor, Variable, Parameter]) –

Return type:

Tensor

gpflow.utilities.training_loop#

gpflow.utilities.training_loop(closure, optimizer=None, var_list=None, maxiter=1000, compile=False)[source]#

Simple generic training loop. At each iteration, it uses a GradientTape to compute the gradients of a loss function with respect to a set of variables.

Parameters:
  • closure (Callable[[], Tensor]) – Callable that constructs a loss function based on the data and the model being trained.

  • optimizer (Optional[Optimizer]) – An optimizer from tf.optimizers or tf.keras.optimizers that updates the variables by applying the corresponding loss gradients. Defaults to Adam with default settings.

  • var_list (Optional[List[Variable]]) – List of model variables to be learnt during training.

  • maxiter (int) – Maximum number of iterations.

  • compile (bool) –

Return type:

None
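The structure of the loop can be sketched without TensorFlow as plain gradient descent (the real function uses tf.GradientTape and a tf.optimizers optimizer; the grad argument here is an assumption standing in for automatic differentiation):

```python
def training_loop(closure, grad, variables, learning_rate=0.1, maxiter=1000):
    # Each iteration: evaluate the loss, compute its gradients with
    # respect to the variables, and apply a gradient-descent update.
    for _ in range(maxiter):
        loss = closure(variables)  # available for logging/monitoring
        grads = grad(variables)
        variables = [v - learning_rate * g for v, g in zip(variables, grads)]
    return variables
```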

gpflow.utilities.traverse_module#

gpflow.utilities.traverse_module(m, acc, update_cb, target_types)[source]#

Recursively traverses m, accumulating in acc a path and a state, until it finds an object whose type is in target_types; update_cb is then applied to update the accumulator acc and/or the object.

Parameters:
  • m (TypeVar(TraverseInput, Variable, Module, Parameter)) – tf.Module, tf.Variable or gpflow.Parameter

  • acc (Tuple[str, TypeVar(State)]) – Tuple of path and state

  • update_cb (Callable[[TypeVar(TraverseInput, Variable, Module, Parameter), str, TypeVar(State)], TypeVar(State)]) – Callable

  • target_types (Tuple[Type[Any], ...]) – target class types

Return type:

TypeVar(State)
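A simplified plain-Python sketch of the traversal, under the assumption that submodules are reachable as plain attributes (the real function handles TensorFlow module internals):

```python
def traverse_module(m, acc, update_cb, target_types):
    # If m is a target, let the callback fold it into the state;
    # otherwise recurse into its attributes, extending the path.
    path, state = acc
    if isinstance(m, target_types):
        return update_cb(m, path, state)
    for name, child in vars(m).items():
        state = traverse_module(
            child, (path + "." + name, state), update_cb, target_types
        )
    return state
```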

gpflow.utilities.triangular#

gpflow.utilities.triangular()[source]#

Returns an instance of a (lower) triangular bijector.

Return type:

Bijector

gpflow.utilities.triangular_size#

gpflow.utilities.triangular_size(n)[source]#

Returns the number of non-zero elements in an n by n triangular matrix.

Parameters:

n (Tensor) –

  • n has shape [].

Return type:

Tensor

Returns:

  • return has shape [].
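Counting row lengths 1 + 2 + … + n gives n(n + 1)/2; a scalar sketch (the real function operates on tensors):

```python
def triangular_size(n):
    # Number of elements in the lower triangle of an n-by-n matrix,
    # diagonal included: 1 + 2 + ... + n = n * (n + 1) / 2.
    return n * (n + 1) // 2
```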