Scalable Heterophilous Graph Neural Network with Decoupled Embeddings



Authors
Ningyi Liao
Date
Jul 31, 2024 14:45 — 15:00
Location
SMU, Singapore
School of Computing and Information Systems, Singapore Management University, Singapore

TL;DR

We propose LD², a scalable heterophilous GNN with linear complexity and lightweight minibatch training.

Abstract

Heterophilous Graph Neural Networks (GNNs) are a family of GNNs that specialize in learning on graphs under heterophily, where connected nodes tend to have different labels. Most existing heterophilous models incorporate iterative non-local computations to capture node relationships. However, these approaches have limited applicability to large-scale graphs due to their high computational costs and the difficulty of adopting minibatch schemes. In this work, we study the scalability issues of heterophilous GNNs and propose a scalable model, LD², which simplifies the learning process by decoupling graph propagation and generating expressive embeddings prior to training. Theoretical analysis demonstrates that LD² achieves optimal time complexity in training, as well as a memory footprint independent of the graph scale. We conduct extensive experiments to show that our model supports lightweight minibatch training on large-scale heterophilous graphs, with up to 15x speedup and efficient memory utilization, while maintaining performance comparable to or better than the baselines.
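To make the decoupling idea concrete, here is a minimal sketch of a decoupled pipeline in the spirit described above: graph propagation is performed once as a parameter-free precomputation, producing embeddings whose rows can then be sampled independently for minibatch training. Function names such as `precompute_embeddings` and the specific multi-hop concatenation scheme are illustrative assumptions, not the actual LD² implementation.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2} with self-loops.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

def precompute_embeddings(A, X, hops=2):
    # Stack multi-hop propagated features [X, PX, P^2 X, ...].
    # This runs once before training, so the training loop never
    # touches the graph; each node's embedding is a single row.
    P = normalize_adj(A)
    feats = [X]
    for _ in range(hops):
        feats.append(P @ feats[-1])
    return np.concatenate(feats, axis=1)

# Toy 4-node path graph with 3-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)

Z = precompute_embeddings(A, X, hops=2)
# Z has one row per node and 3 * (hops + 1) = 9 columns; a downstream
# classifier (e.g. an MLP) can now be trained on arbitrary row minibatches.
```

Because the embeddings are fixed, minibatch construction reduces to sampling row indices, which is what makes the training memory footprint independent of the graph size.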