
Graph Neural Diffusion. Graph Neural Networks (GNNs) learn by performing some form of message passing on the graph, whereby features are passed from node to node across the edges. Such a mechanism is related to diffusion processes on graphs that can be expressed in the form of a partial differential equation (PDE) called the "diffusion equation". In a recent ICML paper, we showed that the discretization of such PDEs with nonlinear learnable diffusivity functions (referred to as "Graph Neural Diffusion" or GRAND) generalizes a broad class of GNN architectures such as Graph Attention Networks (GAT).
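Schematically, the diffusion PDE underlying GRAND can be written as follows (a simplified sketch; the notation here is illustrative rather than quoted from the paper):

$$\frac{\partial \mathbf{x}(t)}{\partial t} = \mathrm{div}\big(a(\mathbf{x}(t), t)\,\nabla \mathbf{x}(t)\big), \qquad \mathbf{x}(0) = \mathbf{x}_{\mathrm{in}},$$

which on a graph amounts to node-wise updates of the form

$$\frac{d x_i(t)}{dt} = \sum_{j\,:\,(i,j)\in\mathcal{E}} a\big(x_i(t), x_j(t)\big)\,\big(x_j(t) - x_i(t)\big),$$

where the diffusivity $a$ is a learnable, attention-like function of the features at the two endpoints of each edge; discretizing time with a fixed step recovers GAT-style layers.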

The PDE mindset offers multiple advantages, such as the possibility to exploit efficient numerical solvers (e.g. implicit, multistep, adaptive, and multigrid schemes) with guaranteed stability and convergence properties. Some of these solvers do not have an immediate analogy among popular GNN architectures, potentially pointing to new and interesting Graph Neural Network designs. Such architectures might also be more interpretable than typical ones, owing to the fact that the diffusion PDE we consider can be seen as the gradient flow of some associated energy.
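As a toy illustration of the solver point above, the following minimal sketch (not the GRAND code; the graph, step size, and fixed diffusivity are illustrative assumptions) compares an explicit and an implicit Euler step for a linear diffusion on a small graph:

```python
import numpy as np

# Toy example: one step of linear diffusion x' = (A_norm - I) x on a 4-node path graph,
# comparing explicit (forward) Euler with implicit (backward) Euler.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_norm = A / A.sum(axis=1, keepdims=True)   # random-walk normalization

x = np.array([[1.0], [0.0], [0.0], [0.0]])  # initial node features
tau = 0.5                                   # step size
I = np.eye(4)
L = A_norm - I                              # diffusion operator

# Explicit Euler: x_{k+1} = x_k + tau * L x_k  (cheap, but only conditionally stable)
x_explicit = x + tau * (L @ x)

# Implicit Euler: solve (I - tau * L) x_{k+1} = x_k  (one linear solve per step, unconditionally stable)
x_implicit = np.linalg.solve(I - tau * L, x)

print(x_explicit.ravel())
print(x_implicit.ravel())
```

Multistep, adaptive, and multigrid schemes refine this basic picture, trading per-step cost for accuracy and stability.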

At the same time, while the GRAND model offers continuous time in place of the layers of traditional GNNs, the spatial part of the equation is still discrete and relies on the input graph. Importantly, in this diffusion model, the domain (the graph) is fixed and some property defined on it (the features) evolves.

A different concept commonly used in differential geometry is that of geometric flows, which evolve the properties of the domain itself. This idea was adopted in the 1990s in the field of image processing for modeling images as manifolds embedded in a joint positional and color space and evolving them by a PDE minimizing the harmonic energy of the embedding. Such a PDE, named Beltrami flow, has the form of isotropic non-Euclidean diffusion and produces edge-preserving image denoising.
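In its classical image-processing form, the Beltrami flow can be written as follows (a standard textbook formulation, not quoted verbatim from the works referenced above):

$$\frac{\partial z}{\partial t} = \Delta_g z,$$

where $z(u) = (u_1, u_2, \alpha\, c(u))$ embeds pixel positions together with (scaled) color values, $g$ is the Riemannian metric induced on the image manifold by this embedding, and $\Delta_g$ is the corresponding Laplace–Beltrami operator; the flow is the gradient flow of the harmonic (Polyakov) energy of the embedding, which is what gives it its edge-preserving denoising behavior.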
