
Do Parameters Reveal More than Loss for Membership Inference?

journal contribution
posted on 2025-10-07, 09:43 authored by Anshuman Suri, Xiao Zhang, David Evans
Membership inference attacks aim to infer whether an individual record was used to train a model, and they serve as a key tool for disclosure auditing. While such evaluations are useful for demonstrating risk, they are computationally expensive and often make strong assumptions about a potential adversary's access to models and training environments, and thus do not provide tight bounds on leakage from potential attacks. We show that prior claims that black-box access is sufficient for optimal membership inference do not hold for stochastic gradient descent, and that optimal membership inference indeed requires white-box access. Our theoretical results motivate a new white-box inference attack, IHA (Inverse Hessian Attack), which explicitly uses model parameters by computing inverse-Hessian vector products. Our results show that both auditors and adversaries may benefit from access to model parameters, and we advocate for further research into white-box methods for membership privacy auditing.
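
The attack described above relies on inverse-Hessian vector products (iHVPs) computed from the model's parameters. The following is a minimal, illustrative sketch (in PyTorch) of how a white-box auditor could approximate an iHVP with conjugate gradients over autograd Hessian-vector products and combine it with a per-example gradient into a parameter-aware score. The model, data, damping term, and final scoring statistic here are assumptions for illustration only, not the paper's exact IHA formulation.

# A minimal sketch (not the authors' exact IHA scoring rule): it illustrates how a
# white-box auditor with parameter access could approximate an inverse-Hessian
# vector product (H + damping*I)^{-1} g for a per-example gradient g, using
# Hessian-vector products from autograd and conjugate gradients. The model,
# data, damping value, and final score below are illustrative assumptions.
import torch
import torch.nn as nn

def flat_grad(loss, params, create_graph=False):
    # Flatten the gradient of `loss` w.r.t. `params` into a single vector.
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def hvp(loss_fn, params, v, damping=1e-3):
    # Damped Hessian-vector product (H + damping*I) v via double backprop.
    g = flat_grad(loss_fn(), params, create_graph=True)
    Hv = flat_grad(g @ v, params)
    return Hv + damping * v

def ihvp_cg(loss_fn, params, b, iters=50, damping=1e-3):
    # Approximate (H + damping*I)^{-1} b with conjugate gradients; the damping
    # is assumed to keep the (possibly indefinite) empirical Hessian well-behaved.
    x = torch.zeros_like(b)
    r = b - hvp(loss_fn, params, x, damping)
    p = r.clone()
    rs = r @ r
    for _ in range(iters):
        Ap = hvp(loss_fn, params, p, damping)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new.sqrt() < 1e-6:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative usage: score a candidate record (x, y) against a trained model.
model = nn.Linear(20, 2)                       # stand-in for the audited model
params = [p for p in model.parameters() if p.requires_grad]
criterion = nn.CrossEntropyLoss()
train_x, train_y = torch.randn(128, 20), torch.randint(0, 2, (128,))
x, y = torch.randn(1, 20), torch.randint(0, 2, (1,))

def train_loss():                              # empirical loss over training data
    return criterion(model(train_x), train_y)

g = flat_grad(criterion(model(x), y), params)  # per-example gradient at the candidate
ihvp = ihvp_cg(train_loss, params, g)          # approximate H^{-1} g
score = (g @ ihvp).item()                      # one plausible parameter-aware statistic
print(score)

In this sketch, the damping term keeps conjugate gradients stable when the empirical Hessian is not positive definite, and the number of CG iterations trades iHVP accuracy against compute.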

History

Primary Research Area

  • Trustworthy Information Processing

CISPA Affiliation

  • Yes

Journal

Transactions on Machine Learning Research (TMLR)

Open Access Type

  • Unknown

Sub Type

  • Article

BibTeX

@article{Suri:Zhang:Evans:2025,
  title   = "Do Parameters Reveal More than Loss for Membership Inference?",
  author  = "Suri, Anshuman and Zhang, Xiao and Evans, David",
  year    = 2025,
  month   = 12,
  journal = "Transactions on Machine Learning Research (TMLR)"
}
