LD²: Scalable Heterophilous Graph Neural Network with Decoupled Embeddings


Decoupled heterophilous precomputation and training


Authors
Ningyi Liao, Siqiang Luo, Xiang Li, Jieming Shi
Publication
In Advances in Neural Information Processing Systems 36: 10197–10209
Type
Conference paper (NeurIPS 2023)

TL;DR

A scalable heterophilous GNN that decouples graph propagation from training, enabling lightweight minibatch training on large graphs.

Abstract

Heterophilous Graph Neural Networks (GNNs) are a family of GNNs that specialize in learning on graphs under heterophily, where connected nodes tend to have different labels. Most existing heterophilous models incorporate iterative non-local computations to capture node relationships. However, such approaches have limited applicability to large-scale graphs due to their high computational costs and the difficulty of adopting minibatch schemes. In this work, we study the scalability issues of heterophilous GNNs and propose a scalable model, LD², which simplifies the learning process by decoupling graph propagation and generating expressive embeddings prior to training. Theoretical analysis demonstrates that LD² achieves optimal time complexity in training, as well as a memory footprint that remains independent of the graph scale. We conduct extensive experiments to show that our model is capable of lightweight minibatch training on large-scale heterophilous graphs, with a speedup of up to 15× and efficient memory utilization, while maintaining performance comparable to or better than the baselines.
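
To make the decoupled scheme concrete, the following is a minimal, hypothetical sketch (not the official LD² code): multi-hop feature propagation is performed once as a preprocessing step, and training then runs on graph-free minibatches of the precomputed embeddings, so the per-step cost is independent of the graph size. All function names and the plain symmetric-normalization filter are illustrative assumptions; LD² itself constructs heterophily-oriented embeddings.

    # Sketch of "precompute, then train" decoupling (illustrative, not LD²'s code).
    # Step 1: propagate features over the graph once, before training begins.
    # Step 2: train a plain MLP on minibatches of the precomputed embeddings,
    # so no training step ever touches the graph structure.
    import numpy as np
    import scipy.sparse as sp
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    def precompute_embeddings(adj: sp.csr_matrix, feats: np.ndarray, hops: int = 2):
        """Symmetrically normalize the adjacency matrix and stack multi-hop
        propagated features. (A heterophily-aware variant would use different
        filters; this plain low-pass filter is only a stand-in.)"""
        deg = np.asarray(adj.sum(1)).ravel()
        d_inv_sqrt = sp.diags(np.power(np.maximum(deg, 1.0), -0.5))
        a_norm = d_inv_sqrt @ adj @ d_inv_sqrt
        outs, x = [feats], feats
        for _ in range(hops):
            x = a_norm @ x                    # sparse-dense multiply, done once
            outs.append(x)
        return np.concatenate(outs, axis=1)   # shape [N, (hops + 1) * F]

    class MLP(nn.Module):
        def __init__(self, in_dim, hidden, n_classes):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_classes))
        def forward(self, x):
            return self.net(x)

    # Toy usage: a random graph and features stand in for a real dataset.
    rng = np.random.default_rng(0)
    n, f, c = 1000, 16, 4
    adj = sp.random(n, n, density=0.01, random_state=0, format="csr")
    adj = ((adj + adj.T) > 0).astype(np.float32)
    feats = rng.standard_normal((n, f)).astype(np.float32)
    labels = rng.integers(0, c, n)

    emb = precompute_embeddings(adj, feats, hops=2).astype(np.float32)  # one-time cost
    ds = TensorDataset(torch.from_numpy(emb), torch.from_numpy(labels))
    loader = DataLoader(ds, batch_size=128, shuffle=True)  # graph-free minibatches

    model = MLP(emb.shape[1], 64, c)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for epoch in range(5):
        for xb, yb in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(xb), yb)
            loss.backward()
            opt.step()

Because propagation happens once up front, each minibatch holds only fixed-size embedding rows, which is what keeps the training-time memory footprint independent of the graph scale.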


Citation
Ningyi Liao, Siqiang Luo, Xiang Li, and Jieming Shi. "LD²: Scalable Heterophilous Graph Neural Network with Decoupled Embeddings." In Advances in Neural Information Processing Systems 36 (NeurIPS 2023): 10197–10209.