Empirically Measuring Concentration: Fundamental Limits to Intrinsic Robustness

conference contribution
posted on 2024-10-15, 13:16 authored by Saeed Mahloujifar, Xiao Zhang, Mohammad Mahmoody, David Evans
Many recent works have shown that adversarial examples that fool classifiers can be found by minimally perturbing a normal input. Recent theoretical results, starting with Gilmer et al. (2018b), show that if the inputs are drawn from a concentrated metric probability space, then adversarial examples with small perturbation are inevitable. A concentrated space has the property that any subset with Ω(1) measure (e.g., 1/100), according to the imposed distribution, has small distance to almost all (e.g., 99/100) of the points in the space. It is not clear, however, whether these theoretical results apply to actual distributions such as images. This paper presents a method for empirically measuring and bounding the concentration of a concrete dataset, and proves that the resulting estimates converge to the actual concentration. We use it to empirically estimate the intrinsic robustness to L-infinity and L2 perturbations of several image classification benchmarks. Code for our experiments is available at https://github.com/xiaozhanguva/Measure-Concentration.
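
A minimal, hypothetical sketch (not the authors' released code linked above) of the quantity the abstract describes: a distribution is concentrated if a subset of small but constant measure, once expanded by an L-infinity ball of radius eps, covers almost all sampled points. The toy dataset, the randomly chosen subset, and the epsilon values below are illustrative assumptions only.

import numpy as np

def linf_expansion_coverage(data, subset_mask, eps):
    # Empirical measure of the eps-expansion of the subset: the fraction of
    # sample points whose L-infinity distance to the nearest subset point is <= eps.
    subset = data[subset_mask]
    dists = np.min(np.max(np.abs(data[:, None, :] - subset[None, :, :]), axis=2), axis=1)
    return np.mean(dists <= eps)

rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=(2000, 16))   # toy points standing in for images in [0,1]^d
subset_mask = rng.random(2000) < 0.01            # a subset of roughly 1% measure
for eps in (0.05, 0.1, 0.2):
    cov = linf_expansion_coverage(data, subset_mask, eps)
    print(f"eps={eps}: expansion covers {cov:.2%} of the sample")

Measuring concentration itself requires more than this: by the definition above, one has to consider the subset of a given measure whose expansion is smallest, so an actual estimator would search over a family of candidate subsets rather than fixing one at random as done here.
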

History

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

Conference on Neural Information Processing Systems (NeurIPS)

CISPA Affiliation

  • No

BibTeX

@conference{Mahloujifar:Zhang:Mahmoody:Evans:2019,
  title     = "Empirically Measuring Concentration: Fundamental Limits to Intrinsic Robustness",
  author    = "Mahloujifar, Saeed and Zhang, Xiao and Mahmoody, Mohammad and Evans, David",
  booktitle = "Conference on Neural Information Processing Systems (NeurIPS)",
  year      = 2019,
  month     = 12
}
