CISPA

Understanding the intrinsic robustness of image distributions using conditional generative models

conference contribution
posted on 2024-10-14, 13:58 authored by Xiao ZhangXiao Zhang, Jinghui Chen, Quanquan Gu, David Evans
Starting with Gilmer et al. (2018), several works have demonstrated the inevitability of adversarial examples based on different assumptions about the underlying input probability space. It remains unclear, however, whether these results apply to natural image distributions. In this work, we assume the underlying data distribution is captured by some conditional generative model, and prove intrinsic robustness bounds for a general class of classifiers, which solves an open problem in Fawzi et al. (2018). Building upon the state-of-the-art conditional generative models, we study the intrinsic robustness of two common image benchmarks under ℓ2 perturbations, and show the existence of a large gap between the robustness limits implied by our theory and the adversarial robustness achieved by current state-of-the-art robust models.
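As an illustration of the ℓ2 threat model the abstract refers to, the sketch below measures the ℓ2 norm of a perturbation and checks whether it changes a classifier's prediction. This is a toy example, not the paper's method; the classifier and images here are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical toy "classifier": thresholds the mean pixel intensity.
def classify(x):
    return int(x.mean() > 0.5)

rng = np.random.default_rng(0)
x = rng.random((28, 28))                      # stand-in for a natural image in [0, 1]
delta = rng.normal(scale=0.01, size=x.shape)  # a small additive perturbation

# The perturbation budget in the paper's threat model is its l2 norm.
eps = np.linalg.norm(delta)

# The perturbed input counts as adversarial (within budget eps) if the label flips.
x_adv = np.clip(x + delta, 0.0, 1.0)
flipped = classify(x_adv) != classify(x)
```

Intrinsic robustness results of this kind upper-bound, over all classifiers, how large an ℓ2 budget the data distribution itself can tolerate before such label flips become unavoidable.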

History

Name of Conference

International Conference on Artificial Intelligence and Statistics (AISTATS)

CISPA Affiliation

  • No

BibTeX

@conference{Zhang:Chen:Gu:Evans:2020,
  title  = "Understanding the intrinsic robustness of image distributions using conditional generative models",
  author = "Zhang, Xiao and Chen, Jinghui and Gu, Quanquan and Evans, David",
  year   = 2020,
  month  = 6
}
