CISPA

RelaxLoss: Defending Membership Inference Attacks without Losing Utility

conference contribution
posted on 2023-11-29, 18:19 authored by Dingfan Chen, Ning Yu, Mario FritzMario Fritz
Membership inference attacks (MIAs), a long-term threat to the privacy of training data, arise ubiquitously in machine learning models. Existing works evidence a strong connection between the distinguishability of the training and testing loss distributions and a model's vulnerability to MIAs. Motivated by these results, we propose a novel training framework based on a relaxed loss (RelaxLoss) with a more achievable learning target, which leads to a narrowed generalization gap and reduced privacy leakage. RelaxLoss is applicable to any classification model, with the added benefits of easy implementation and negligible overhead. Through extensive evaluations on five datasets of diverse modalities (images, medical data, transaction records), our approach consistently outperforms state-of-the-art defense mechanisms in terms of both resilience against MIAs and model utility. Our defense is the first that can withstand a wide range of attacks while preserving (or even improving) the target model's utility.
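The abstract does not spell out the mechanism, but the core intuition (keeping the training loss from collapsing toward zero, so training and testing loss distributions stay less distinguishable) can be illustrated with a minimal sketch. The two-branch update rule, the target level `alpha`, the learning rate, and the toy logistic-regression setup below are all illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def relaxed_loss_step(w, X, y, alpha, lr=0.1):
    """One hypothetical RelaxLoss-style update for logistic regression.

    If the mean cross-entropy loss is above the target level `alpha`,
    take a normal gradient-descent step; otherwise take a gradient-ascent
    step, so the training loss hovers around `alpha` instead of shrinking
    toward zero. (This 2-branch rule and `alpha` are illustrative only.)
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid predictions
    eps = 1e-12                                  # numerical safety for log
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)                # gradient of the mean loss
    sign = 1.0 if loss > alpha else -1.0         # descend above alpha, ascend below
    return w - sign * lr * grad, loss

# Toy separable data: without the relaxed target, the loss would keep shrinking.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
for _ in range(500):
    w, loss = relaxed_loss_step(w, X, y, alpha=0.3)

print(loss)  # training loss settles near the target level alpha
```

The point of the sketch is the stopping behavior: plain gradient descent on this separable toy problem would drive the training loss arbitrarily close to zero (a large train/test gap, which MIAs exploit), whereas the relaxed target keeps it oscillating around `alpha`.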

History

Preferred Citation

Dingfan Chen, Ning Yu and Mario Fritz. RelaxLoss: Defending Membership Inference Attacks without Losing Utility. In: International Conference on Learning Representations (ICLR). 2022.

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

International Conference on Learning Representations (ICLR)

Legacy Posted Date

2022-03-11

Open Access Type

  • Gold

BibTeX

@inproceedings{cispa_all_3583,
  title     = "RelaxLoss: Defending Membership Inference Attacks without Losing Utility",
  author    = "Chen, Dingfan and Yu, Ning and Fritz, Mario",
  booktitle = "{International Conference on Learning Representations (ICLR)}",
  year      = "2022",
}
