
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

Conference contribution
posted on 2023-11-29, 18:19 authored by Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang
Inference attacks against Machine Learning (ML) models allow adversaries to learn sensitive information, such as training data or model parameters. While researchers have studied these attacks thoroughly, they have done so in isolation. We thus lack a comprehensive picture of the risks they cause, such as the different scenarios they can be applied in, the common factors that influence their performance, the relationships among them, or the effectiveness of defense techniques. In this paper, we fill this gap by presenting a first-of-its-kind holistic risk assessment of different inference attacks against machine learning models. We concentrate on four attacks, namely, membership inference, model inversion, attribute inference, and model stealing, and establish a threat model taxonomy. Our extensive experimental evaluation, conducted over five model architectures and four datasets, shows that the complexity of the training dataset plays an important role in attack performance, and that the effectiveness of model stealing and that of membership inference are negatively correlated. We also show that defenses like DP-SGD and Knowledge Distillation mitigate only some of the inference attacks. Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models and equally serves as a benchmark tool for researchers and practitioners.
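
To make the membership inference attack mentioned in the abstract concrete, the sketch below shows a minimal confidence-threshold baseline. It is not ML-Doctor's implementation; it is a hypothetical example using scikit-learn, where the dataset, the random-forest target model, and the 0.9 cutoff are all illustrative assumptions.

```python
# Minimal sketch of a confidence-threshold membership inference attack,
# one of the four attack classes the paper evaluates. Illustrative only:
# the dataset, target model, and threshold are assumptions, not
# ML-Doctor's actual pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Split the data into "members" (used to train the target model)
# and "non-members" (never seen by the target model).
X, y = load_breast_cancer(return_X_y=True)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model under attack; tree ensembles tend to be overconfident
# on their own training points, which is the signal the attack exploits.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_mem, y_mem)

# Attack feature: the model's confidence in its top-1 prediction.
conf_mem = target.predict_proba(X_mem).max(axis=1)
conf_non = target.predict_proba(X_non).max(axis=1)

# Decision rule: predict "member" when confidence exceeds a cutoff.
# In practice the cutoff would be tuned, e.g. via shadow models.
threshold = 0.9  # assumed value for illustration
tpr = (conf_mem > threshold).mean()  # members correctly flagged
fpr = (conf_non > threshold).mean()  # non-members wrongly flagged
print(f"membership inference: TPR={tpr:.2f}, FPR={fpr:.2f}")
```

A large gap between TPR and FPR indicates that the model leaks membership information; a holistic assessment in the spirit of the paper would repeat such measurements across attack types, model architectures, and datasets.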

History

Preferred Citation

Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz and Yang Zhang. ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models. In: USENIX Security Symposium (USENIX-Security). 2022.

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

USENIX Security Symposium (USENIX-Security)

Legacy Posted Date

2021-10-05

Open Access Type

  • Unknown

BibTeX

@inproceedings{cispa_all_3485,
  title     = "ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models",
  author    = "Liu, Yugeng and Wen, Rui and He, Xinlei and Salem, Ahmed and Zhang, Zhikun and Backes, Michael and De Cristofaro, Emiliano and Fritz, Mario and Zhang, Yang",
  booktitle = "{USENIX Security Symposium (USENIX-Security)}",
  year      = "2022",
}
