CISPA

Link Stealing Attacks Against Inductive Graph Neural Networks

journal contribution
posted on 2024-07-25, 12:36 authored by Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang
A graph neural network (GNN) is a type of neural network specifically designed to process graph-structured data. GNNs are typically implemented in one of two settings: transductive or inductive. In the transductive setting, the trained model can only predict the labels of nodes observed at training time; in the inductive setting, the trained model generalizes to new nodes or graphs. Due to this flexibility, the inductive setting is currently the most popular GNN setting. Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks, but a comprehensive privacy analysis of inductive GNN models is still missing. This paper fills that gap by conducting a systematic privacy analysis of inductive GNNs through the lens of link stealing attacks. We propose two types of link stealing attacks, i.e., posterior-only attacks and combined attacks. We define threat models for the posterior-only attacks with respect to node topology, and for the combined attacks by considering combinations of posteriors, node attributes, and graph features. Extensive evaluation on six real-world datasets demonstrates that inductive GNNs leak rich information that enables link stealing attacks with advantageous properties; even attacks with no knowledge of the graph structure can be effective. We also show that our attacks are robust to different node similarities and different graph features. Finally, we investigate two possible defenses and find that they are ineffective against our attacks, which calls for more effective defenses.
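To make the posterior-only threat model concrete, the sketch below illustrates the general idea (not the authors' exact method): an adversary queries the target GNN for the posteriors of two nodes, derives distance features between the posterior vectors, and trains a binary classifier to predict whether an edge exists between the nodes. All data here is synthetic; the homophily assumption (linked nodes yield more similar posteriors) and every parameter value are illustrative assumptions.

```python
# Hypothetical sketch of a posterior-only link stealing attack.
# Synthetic posteriors stand in for a real target GNN's outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def make_pairs(n, linked):
    """Simulate posteriors for n node pairs.

    Assumption: linked pairs have more similar posteriors (homophily),
    modeled here by adding less noise to the second node's logits.
    """
    a = softmax(rng.normal(size=(n, 4)))
    noise = 0.3 if linked else 2.0  # illustrative noise scales
    b = softmax(np.log(a + 1e-9) + rng.normal(scale=noise, size=a.shape))
    return a, b

def pair_features(p, q):
    """Distance features between two posterior vectors: cosine and L2."""
    cos = (p * q).sum(1) / (np.linalg.norm(p, axis=1) * np.linalg.norm(q, axis=1))
    l2 = np.linalg.norm(p - q, axis=1)
    return np.column_stack([cos, l2])

# Build a balanced dataset of linked / unlinked pairs.
Xs, ys = [], []
for linked in (True, False):
    p, q = make_pairs(500, linked)
    Xs.append(pair_features(p, q))
    ys.append(np.full(500, int(linked)))
X, y = np.vstack(Xs), np.concatenate(ys)

# Train the attack classifier on half the pairs, evaluate on the rest.
attack = LogisticRegression().fit(X[::2], y[::2])
acc = attack.score(X[1::2], y[1::2])
print(f"attack accuracy: {acc:.2f}")
```

The point of the sketch is that the attack needs only black-box posterior access; in the combined attacks, the feature vector would additionally incorporate node attributes and graph features.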

History

Primary Research Area

  • Trustworthy Information Processing

Journal

Proceedings on Privacy Enhancing Technologies

Volume

2024

Page Range

818-839

Publisher

Privacy Enhancing Technologies Symposium Advisory Board

Open Access Type

  • Not Open Access

Sub Type

  • Article

BibTeX

@article{Wu:He:Berrang:Humbert:Backes:Gong:Zhang:2024,
  title     = "Link Stealing Attacks Against Inductive Graph Neural Networks",
  author    = "Wu, Yixin and He, Xinlei and Berrang, Pascal and Humbert, Mathias and Backes, Michael and Gong, Neil Zhenqiang and Zhang, Yang",
  year      = 2024,
  month     = 10,
  journal   = "Proceedings on Privacy Enhancing Technologies",
  volume    = "2024",
  number    = "4",
  pages     = "818--839",
  publisher = "Privacy Enhancing Technologies Symposium Advisory Board",
  issn      = "2299-0984",
  doi       = "10.56553/popets-2024-0143"
}
