
Are Defenses for Graph Neural Networks Robust?

Conference contribution, posted on 2023-11-29 by Felix Mujkanovic, Simon Geisler, Stephan Günnemann, and Aleksandar Bojchevski
A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw: virtually all of the defenses are evaluated against non-adaptive attacks, leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of 7 of the most popular defenses spanning the entire spectrum of strategies, i.e., aimed at improving the graph, the architecture, or the training. The results are sobering: most defenses show no or only marginal improvement compared to an undefended baseline. We advocate using custom adaptive attacks as a gold standard, and we outline the lessons we learned from successfully designing such attacks. Moreover, our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
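The methodological point is concrete enough to sketch in code. Below is a minimal, hypothetical illustration of the distinction the abstract draws: an adaptive attack differentiates through the defended model itself rather than transferring perturbations from an undefended surrogate. The ToyGCN model, the adaptive_attack routine, the continuous edge-flip relaxation, and the budget projection are all illustrative assumptions for a dense-adjacency setting, not the authors' implementation.

import torch
import torch.nn.functional as F

class ToyGCN(torch.nn.Module):
    # Two-layer GCN on a dense adjacency matrix (illustrative stand-in
    # for a possibly defended victim model).
    def __init__(self, d_in, d_hid, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(d_in, d_hid)
        self.lin2 = torch.nn.Linear(d_hid, n_classes)

    def forward(self, adj, x):
        # Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d = a.sum(dim=1).clamp(min=1e-6).rsqrt()
        a_hat = d[:, None] * a * d[None, :]
        h = F.relu(a_hat @ self.lin1(x))
        return a_hat @ self.lin2(h)

def adaptive_attack(model, adj, x, y, budget, steps=100, lr=0.1):
    # Gradient-based structure attack that backpropagates through the
    # victim model itself, which is what makes it "adaptive". Edge flips
    # are relaxed to continuous weights in [0, 1]; symmetry of undirected
    # graphs is ignored here for brevity.
    p = torch.zeros_like(adj, requires_grad=True)
    for _ in range(steps):
        adj_pert = adj + (1 - 2 * adj) * p  # p adds missing / removes existing edges
        loss = F.cross_entropy(model(adj_pert, x), y)
        grad, = torch.autograd.grad(loss, p)
        with torch.no_grad():
            p += lr * grad                  # ascend the victim's loss
            p.clamp_(0, 1)
            if p.sum() > budget:            # crude projection onto the flip budget
                p *= budget / p.sum()
    # Discretize: keep the `budget` strongest flips.
    idx = torch.topk(p.detach().flatten(), int(budget)).indices
    mask = torch.zeros_like(p.detach().flatten())
    mask[idx] = 1.0
    return adj + (1 - 2 * adj) * mask.view_as(adj)

Attacking an undefended surrogate with this loop and transferring the resulting perturbed graph reproduces the non-adaptive setup the paper cautions against; running the loop directly against each defense corresponds to the adaptive gold standard it advocates.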

History

Preferred Citation

Felix Mujkanovic, Simon Geisler, Stephan Günnemann, and Aleksandar Bojchevski. Are Defenses for Graph Neural Networks Robust? In: Conference on Neural Information Processing Systems (NeurIPS). 2022.

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

Conference on Neural Information Processing Systems (NeurIPS)

Legacy Posted Date

2022-10-13

Open Access Type

  • Green

BibTeX

@inproceedings{cispa_all_3812,
  title     = "Are Defenses for Graph Neural Networks Robust?",
  author    = "Mujkanovic, Felix and Geisler, Simon and Günnemann, Stephan and Bojchevski, Aleksandar",
  booktitle = "{Conference on Neural Information Processing Systems (NeurIPS)}",
  year      = "2022",
}
