
Understanding Intrinsic Robustness using Label Uncertainty

conference contribution
posted on 2024-10-14, 14:03, authored by Xiao Zhang, David Evans
A fundamental question in adversarial machine learning is whether a robust classifier exists for a given task. A line of research has made progress towards this goal by studying concentration of measure, but we argue that standard concentration fails to fully characterize the intrinsic robustness of a classification problem, since it ignores data labels, which are essential to any classification task. Building on a novel definition of label uncertainty, we empirically demonstrate that error regions induced by state-of-the-art models tend to have much higher label uncertainty than randomly selected subsets. This observation motivates us to adapt a concentration estimation algorithm to account for label uncertainty, resulting in more accurate intrinsic robustness measures for benchmark image classification problems.
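The abstract's central empirical observation, that model error regions carry higher label uncertainty than randomly selected subsets, can be illustrated with a short sketch. Everything below is an assumption for illustration only: the uncertainty measure (one minus the soft-label mass on the assigned class) is a simple stand-in rather than the paper's exact definition, and the data are simulated in place of human soft labels such as CIFAR-10H-style annotations.

import numpy as np

def label_uncertainty(soft_labels, assigned_labels):
    # Illustrative per-example uncertainty: the probability mass that
    # annotators place on classes other than the example's assigned label.
    n = soft_labels.shape[0]
    return 1.0 - soft_labels[np.arange(n), assigned_labels]

def mean_region_uncertainty(soft_labels, assigned_labels, mask):
    # Average label uncertainty over a subset of examples,
    # e.g. the error region induced by a trained classifier.
    return label_uncertainty(soft_labels, assigned_labels)[mask].mean()

# Toy data standing in for human soft labels over 10 classes.
rng = np.random.default_rng(0)
n, k = 1000, 10
soft = rng.dirichlet(np.ones(k) * 5.0, size=n)  # simulated soft labels
hard = soft.argmax(axis=1)                      # assigned (majority) labels

# Toy error model: mistakes are more likely on high-uncertainty examples,
# mimicking the empirical observation described in the abstract.
lu = label_uncertainty(soft, hard)
errors = rng.random(n) < 0.3 * lu

# Random subset of the same size for comparison.
idx = rng.choice(n, size=int(errors.sum()), replace=False)
random_mask = np.zeros(n, dtype=bool)
random_mask[idx] = True

print("error-region label uncertainty:", mean_region_uncertainty(soft, hard, errors))
print("random-subset label uncertainty:", mean_region_uncertainty(soft, hard, random_mask))

On real data the paper reports a substantial gap between the two quantities; the toy error model above induces a gap only because errors were deliberately sampled in proportion to uncertainty.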

History

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

International Conference on Learning Representations (ICLR)

CISPA Affiliation

  • No

BibTeX

@conference{Zhang:Evans:2022,
  title     = "Understanding Intrinsic Robustness using Label Uncertainty",
  author    = "Zhang, Xiao and Evans, David",
  booktitle = "International Conference on Learning Representations (ICLR)",
  year      = 2022,
  month     = 1
}
