User Guide#

You can use this document to get familiar with the more in-depth topics of GPflow. If you are new to GPflow you should see our Getting Started guide first. We have also provided a flow diagram to guide you to the relevant parts of GPflow for your specific problem.

Theoretical notes#

The following notebooks relate to the theory of Gaussian processes and approximations. These are not required reading for using GPflow, but are included for those interested in the theoretical underpinning and technical details.

Why we like the Variational Free Energy (VFE) objective rather than the Fully Independent Training Conditional (FITC) approximation for our sparse approximations.

Demonstrates the overlapping behaviour of many of the GPflow model classes in special cases (specifically, with a Gaussian likelihood and, for sparse approximations, inducing points fixed to the data points).

Tailored models#

This section shows how to use GPflow’s utilities and codebase to build new probabilistic models. These can be seen as complete examples.

How to implement a covariance function that is not available by default in GPflow. For this example, we look at the Brownian motion covariance.
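The Brownian motion covariance itself is simple enough to sketch outside GPflow. The following is a hedged plain-numpy illustration of the function k(x, x') = σ² min(x, x') that the notebook implements as a GPflow kernel subclass; the function and argument names here are illustrative, not GPflow's API:

```python
import numpy as np

def brownian_kernel(X, X2=None, variance=1.0):
    """Brownian motion covariance: k(x, x') = variance * min(x, x').

    X and X2 are (n, 1) and (m, 1) arrays of non-negative 1D inputs.
    """
    if X2 is None:
        X2 = X
    # Broadcasting the pairwise minimum of the inputs gives the Gram matrix.
    return variance * np.minimum(X, X2.T)

X = np.array([[0.5], [1.0], [2.0]])
K = brownian_kernel(X)  # symmetric positive semi-definite Gram matrix
```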

Two ways to combine TensorFlow neural networks with GPflow models.

How to use a neural network as a mean function.
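As a rough illustration of the idea (not GPflow's actual mean-function API): a GP with a non-zero mean function m models the residuals Y − m(X), and m is added back at prediction time. The sketch below uses a tiny fixed-weight network as m; in practice the network weights would be trained jointly with the kernel hyperparameters:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel for 1D inputs of shape (n, 1)."""
    return np.exp(-0.5 * (A - B.T) ** 2 / lengthscale**2)

def nn_mean(X, W1, b1, W2, b2):
    """A one-hidden-layer network used as a GP mean function.

    The weights are fixed for illustration; in GPflow they would be
    optimised together with the kernel hyperparameters."""
    return np.tanh(X @ W1 + b1) @ W2 + b2

def predict_with_mean_function(X, Y, X_star, mean_fn, noise_var=1e-4):
    """GP posterior mean with a non-zero mean function: the GP fits the
    residuals Y - m(X), and m(X_star) is added back at prediction time."""
    K = rbf(X, X) + noise_var * np.eye(len(X))
    residuals = Y - mean_fn(X)
    return mean_fn(X_star) + rbf(X_star, X) @ np.linalg.solve(K, residuals)

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((1, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)
mean_fn = lambda X: nn_mean(X, W1, b1, W2, b2)

X = np.linspace(0, 1, 5).reshape(-1, 1)
Y = np.sin(3 * X)
mu = predict_with_mean_function(X, Y, X, mean_fn)  # close to Y at the training inputs
```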

How GPflow’s utilities make it easy to build other, non-GP probabilistic models.

Advanced needs#

This section explains the more complex models and features that are available in GPflow.

Using Hamiltonian Monte Carlo to sample the posterior GP and hyperparameters.
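To show what the sampler does under the hood, here is a minimal numpy HMC sketch for a toy target. The GPflow notebook itself relies on TensorFlow Probability's implementation; everything below (names, step sizes) is purely illustrative:

```python
import numpy as np

def hmc_sample(logp, grad_logp, x0, n_samples=500, eps=0.1, n_leapfrog=20, seed=0):
    """Minimal Hamiltonian Monte Carlo sampler (illustrative only).

    logp: log target density; grad_logp: its gradient; x0: initial state.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)  # resample momentum each iteration
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new = p_new + 0.5 * eps * grad_logp(x_new)
        for _ in range(n_leapfrog - 1):
            x_new = x_new + eps * p_new
            p_new = p_new + eps * grad_logp(x_new)
        x_new = x_new + eps * p_new
        p_new = p_new + 0.5 * eps * grad_logp(x_new)
        # Metropolis accept/reject on the total energy (potential + kinetic).
        log_accept = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
        if np.log(rng.uniform()) < log_accept:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Standard normal target: the chain should settle around mean 0, variance 1.
samples = hmc_sample(lambda x: -0.5 * x @ x, lambda x: -x, np.array([3.0]))
```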

Using GPflow to deal with ordinal variables.

Using GPflow’s Sparse Variational Gaussian Process (SVGP) model [HMG15] [HFL13]. Use sparse methods when dealing with large datasets (more than around a thousand data points).
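The computational heart of sparse methods is a low-rank approximation of the Gram matrix built from m inducing inputs, with m much smaller than the number of data points n. The variational treatment in [HFL13] goes beyond this, but the basic Nystrom structure can be sketched in plain numpy (the names here are illustrative, not GPflow's API):

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel for 1D inputs of shape (n, 1)."""
    return np.exp(-0.5 * (A - B.T) ** 2 / lengthscale**2)

def nystrom_approx(X, Z, kernel=rbf):
    """Low-rank Nystrom approximation K ~= K_xz K_zz^{-1} K_zx built from
    m inducing inputs Z. The cost of working with the approximation scales
    as O(n m^2) instead of O(n^3)."""
    K_xz = kernel(X, Z)
    K_zz = kernel(Z, Z) + 1e-8 * np.eye(len(Z))  # jitter for numerical stability
    return K_xz @ np.linalg.solve(K_zz, K_xz.T)

X = np.linspace(0, 4, 50).reshape(-1, 1)
Z = np.linspace(0, 4, 10).reshape(-1, 1)  # 10 inducing inputs summarise 50 points
K_approx = nystrom_approx(X, Z)           # close to the exact 50x50 Gram matrix
```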

On classification with more than two classes.

Most GP models assume the noise is constant across all data points. This notebook shows how to model simple varying noise.

A more expensive, but also more powerful, way to handle noise that varies across data points.
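Both notebooks start from the same basic idea: replace the single noise variance with a per-point value. A hedged numpy sketch of the simpler case, where each observation contributes its own noise variance to the diagonal of the Gram matrix:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel for 1D inputs of shape (n, 1)."""
    return np.exp(-0.5 * (A - B.T) ** 2 / lengthscale**2)

def gp_mean_varying_noise(X, Y, X_star, noise_var):
    """GP posterior mean where noise_var is a per-point vector.

    Constant noise is the special case where all entries are equal;
    points with large noise variance are effectively down-weighted."""
    K = rbf(X, X) + np.diag(noise_var)  # heteroskedastic noise on the diagonal
    return rbf(X_star, X) @ np.linalg.solve(K, Y)

X = np.array([[0.0], [1.0], [2.0]])
Y = np.array([[0.0], [1.0], [0.0]])
noise_var = np.array([1e-6, 1e-6, 1e2])  # the third point is almost ignored
mu = gp_mean_varying_noise(X, Y, X, noise_var)
```

The fit passes through the two low-noise points but treats the high-noise point as barely informative.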

How to deal with regime-changes in 1D datasets.

How we can use GPs with convolutional kernels for image classification.

For when you have multiple outputs that are all observed at all data points.

For when you have multiple outputs, but not all of them are observed at every data point.

How to use caching to speed up repeated predictions.
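The idea behind such caching can be sketched in plain numpy: factor the training-data system once, then reuse it for every subsequent prediction. This is an illustrative sketch, not GPflow's actual caching API:

```python
import numpy as np

def rbf(A, B, lengthscale=0.2):
    """Squared-exponential kernel for 1D inputs of shape (n, 1)."""
    return np.exp(-0.5 * (A - B.T) ** 2 / lengthscale**2)

class CachedPredictor:
    """Precompute the O(n^3) part of GP prediction once; each repeated
    posterior-mean call then costs only a kernel evaluation and a matmul."""

    def __init__(self, X, Y, noise_var=1e-4):
        self.X = X
        K = rbf(X, X) + noise_var * np.eye(len(X))
        self.alpha = np.linalg.solve(K, Y)  # cached (K + noise I)^{-1} y

    def predict_mean(self, X_star):
        # No linear solve here -- the expensive work was done in __init__.
        return rbf(X_star, self.X) @ self.alpha

X = np.linspace(0, 1, 20).reshape(-1, 1)
Y = np.sin(6 * X)
predictor = CachedPredictor(X, Y)
mu = predictor.predict_mean(X)  # cheap, and repeatable for new X_star
```

A full version would also cache a Cholesky factor so that predictive variances are cheap as well; the structure is the same.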

How to use the Bayesian GPLVM model. This is an unsupervised learning method usually used for dimensionality reduction.

In this notebook we demonstrate how new types of inducing variables can easily be incorporated in the GPflow framework. As an example case, we use variational Fourier features.

How to optimize the variational approximate posterior’s parameters.