
Differential Privacy Defenses and Sampling Attacks for Membership Inference

conference contribution
posted on 2023-11-29, 18:17 authored by Shadi Rahimian, Tribhuvanesh Orekondy, Mario Fritz
Machine learning models are commonly trained on sensitive and personal data such as pictures, medical records, and financial records. A serious breach of the privacy of this training set occurs when an adversary is able to decide whether or not a specific data point in her possession was used to train the model. While all previous membership inference attacks rely on access to the posterior probabilities, we present the first attack that relies only on the predicted class label, yet achieves a high success rate.
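
To illustrate the label-only setting described in the abstract, the following is a minimal sketch of a sampling-based membership test: perturb the candidate input, query the model for labels only, and use the fraction of agreeing labels as a membership score. The function names (predict_label), the perturbation scale sigma, the sample budget, and the threshold tau are illustrative assumptions, not the paper's actual algorithm or code.

import numpy as np

def label_only_membership_score(x, predict_label, sigma=0.05, n_samples=100, rng=None):
    # Assumed black-box interface: predict_label(x) returns only the predicted class.
    # Higher label stability under random perturbation is taken as evidence of membership.
    rng = np.random.default_rng() if rng is None else rng
    base_label = predict_label(x)
    agree = 0
    for _ in range(n_samples):
        x_perturbed = x + sigma * rng.standard_normal(x.shape)
        if predict_label(x_perturbed) == base_label:
            agree += 1
    return agree / n_samples

def is_member(x, predict_label, tau=0.9, **kwargs):
    # tau is a hypothetical decision threshold, e.g. calibrated on shadow models.
    return label_only_membership_score(x, predict_label, **kwargs) >= tau

In this sketch the agreement rate stands in for the posterior confidence that conventional membership inference attacks would use, which is what makes the attack feasible with access to predicted class labels alone.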

History

Preferred Citation

Shadi Rahimian, Tribhuvanesh Orekondy and Mario Fritz. Differential Privacy Defenses and Sampling Attacks for Membership Inference. In: ACM Workshop on Artificial Intelligence and Security (AISEC). 2021.

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

ACM Workshop on Artificial Intelligence and Security (AISEC)

Legacy Posted Date

2021-12-07

Open Access Type

  • Gold

BibTeX

@inproceedings{cispa_all_3524,
  title     = "Differential Privacy Defenses and Sampling Attacks for Membership Inference",
  author    = "Rahimian, Shadi and Orekondy, Tribhuvanesh and Fritz, Mario",
  booktitle = "{ACM Workshop on Artificial Intelligence and Security (AISEC)}",
  year      = "2021",
}
