Hiding in the Network: Attribute-Oriented Differential Privacy for Graph Neural Networks

Authors
Yuxin Qi, Xi Lin, Jiani Zhu, Ningyi Liao, Jianhua Li
Publication
IEEE Transactions on Information Forensics and Security 20 (2025): 7998–8013
Type
Journal article TIFS 2025

Abstract

Graph Neural Networks (GNNs) have demonstrated remarkable potential in various downstream tasks by effectively capturing the relational dependencies among nodes in graphs. However, this capability also brings significant privacy risks: when GNNs encode topological information and node features into their outputs, sensitive information can be inadvertently exposed, leading to severe privacy breaches. Existing privacy-preserving GNNs primarily focus on protecting the existence of individual nodes or edges, overlooking practical scenarios where nodes and edges are publicly accessible and only specific sensitive attributes require protection; as a result, they neglect attribute sensitivity and struggle to balance privacy and utility. In this paper, we study the problem of hiding sensitive information during GNN training and limiting its exposure in the outputs, while defending more effectively against attribute inference attacks (AIAs) and achieving improved performance. To this end, we propose an attribute-oriented differentially private graph neural network (AODP-GNN) that enforces attribute-specific privacy guarantees through dynamic privacy budgets and relevance-aware noise injection, optimizing the balance between privacy and utility. Specifically, we design a neighborhood-aware private embedding generation mechanism and a mutual-information-minimization-based optimization strategy that operate before deep feature interaction and model optimization, strengthening the defense against AIAs. To further improve the privacy-utility trade-off, we develop a relevance-grained noise adaptation technique that dynamically allocates more noise to less relevant attributes. Theoretical analysis shows that AODP-GNN satisfies differential privacy guarantees. Extensive experiments on four real-world datasets demonstrate that our approach achieves up to 10.04% and 9.21% higher accuracy than the state-of-the-art centrally differentially private GNNs ProGAP and DPDGC, respectively, and also shows stronger defense against AIAs.
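The relevance-grained noise adaptation described above can be illustrated with a minimal sketch. This is not the paper's exact mechanism: the function name, the proportional budget-allocation rule, and the use of the standard Laplace mechanism are all illustrative assumptions. The idea shown is that a total privacy budget is split across attributes in proportion to their task relevance, so less relevant attributes receive a smaller epsilon share and therefore larger noise.

```python
import numpy as np

def relevance_grained_noise(features, relevance, total_epsilon,
                            sensitivity=1.0, rng=None):
    """Hypothetical sketch: allocate a per-attribute privacy budget
    proportional to attribute relevance, then perturb each attribute
    with Laplace noise scaled by sensitivity / epsilon_j.
    Higher relevance -> larger epsilon share -> less noise."""
    rng = np.random.default_rng(rng)
    relevance = np.asarray(relevance, dtype=float)
    # Budget shares sum to total_epsilon (sequential composition over attributes).
    eps = total_epsilon * relevance / relevance.sum()
    # Standard Laplace mechanism: scale b_j = sensitivity / epsilon_j.
    scale = sensitivity / eps
    noise = rng.laplace(loc=0.0, scale=scale, size=features.shape)
    return features + noise, eps

# Toy usage: 5 nodes, 3 attributes with decreasing relevance.
X = np.ones((5, 3))
rel = np.array([0.7, 0.2, 0.1])
X_priv, eps_alloc = relevance_grained_noise(X, rel, total_epsilon=1.0, rng=0)
```

Under this allocation, the least relevant attribute gets the smallest epsilon and hence the noisiest release, which matches the abstract's description of dynamically assigning higher noise to less relevant attributes.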


Citation
Yuxin Qi, Xi Lin, Jiani Zhu, Ningyi Liao, Jianhua Li. "Hiding in the Network: Attribute-Oriented Differential Privacy for Graph Neural Networks." IEEE Transactions on Information Forensics and Security 20 (2025): 7998–8013.