Scalable Decoupling Graph Neural Networks with Feature-Oriented Optimization


Feature-oriented decoupled graph propagation


Authors
Ningyi Liao, Dingheng Mo, Siqiang Luo, Xiang Li, Pengcheng Yin
Publication
The VLDB Journal 33: 667–683
Type
Journal article

TL;DR

An extension of SCARA with a more scalable feature-reuse scheme.

Abstract

Recent advances in data processing have stimulated the demand for learning graphs of very large scales. Graph Neural Networks (GNNs), an emerging and powerful approach to graph learning tasks, are known to be difficult to scale up. Most scalable models apply node-based techniques to simplify the expensive message-passing propagation procedure of GNNs. However, we find such acceleration insufficient when applied to million- or even billion-scale graphs. In this work, we propose SCARA, a scalable GNN with feature-oriented optimization for graph computation. SCARA efficiently computes the graph embedding from the dimension of node features, and further selects and reuses feature computation results to reduce overhead. Theoretical analysis shows that our model achieves sub-linear time complexity with guaranteed precision in the propagation process as well as in GNN training and inference. We conduct extensive experiments on various datasets to evaluate the efficacy and efficiency of SCARA. Comparison with baselines shows that SCARA achieves up to 800× acceleration in graph propagation over current state-of-the-art methods, with fast convergence and comparable accuracy. Most notably, it completes precomputation on the largest available billion-scale GNN dataset, Papers100M (111M nodes, 1.6B edges), in 13 seconds.
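To give a flavor of what "feature-oriented" propagation with reuse means, the sketch below is a minimal, illustrative toy, not the paper's actual Feature-Push or Feature-Reuse algorithm. It assumes a PPR-style decoupled propagation P = Σ_l α(1−α)^l T^l X computed one feature column at a time, and exploits the linearity of propagation: a few "base" columns are propagated once, and every other column is expressed as a least-squares combination of the bases plus a residual, so only the residual needs fresh propagation. All function and variable names here are hypothetical.

```python
import numpy as np

def propagate_feature(adj_norm, x, alpha=0.5, n_hops=10):
    # Truncated PPR-style propagation of a single feature column:
    # pi = sum_l alpha * (1 - alpha)^l * (adj_norm^l @ x)
    out = np.zeros_like(x)
    h = x.copy()
    for l in range(n_hops):
        out += alpha * (1 - alpha) ** l * h
        h = adj_norm @ h
    return out

def propagate_with_reuse(adj_norm, X, base_idx, alpha=0.5, n_hops=10):
    # Feature-oriented propagation with a naive reuse scheme:
    # propagate the chosen base columns once, then reconstruct each
    # remaining column from the cached base embeddings; only the
    # least-squares residual is propagated from scratch. Because
    # propagation is linear in x, the result is exact.
    n, f = X.shape
    base = X[:, base_idx]                          # (n, k) base columns
    base_emb = np.column_stack(
        [propagate_feature(adj_norm, base[:, j], alpha, n_hops)
         for j in range(base.shape[1])])
    P = np.empty_like(X, dtype=float)
    for i in range(f):
        if i in base_idx:
            P[:, i] = base_emb[:, base_idx.index(i)]
            continue
        coef, *_ = np.linalg.lstsq(base, X[:, i], rcond=None)
        residual = X[:, i] - base @ coef
        # reuse cached base embeddings; only the residual is propagated
        P[:, i] = base_emb @ coef + propagate_feature(
            adj_norm, residual, alpha, n_hops)
    return P
```

In this toy version the savings come from propagating residuals that are (ideally) close to zero; SCARA's actual scheme is more involved, using approximate per-feature push operations with precision guarantees rather than dense matrix powers.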


Citation
Ningyi Liao, Dingheng Mo, Siqiang Luo, Xiang Li, Pengcheng Yin. "Scalable Decoupling Graph Neural Networks with Feature-Oriented Optimization." The VLDB Journal 33: 667–683. 2023.

→ Conference Version
→ Patents Filed