CISPA
Improving the Efficiency of Self-Supervised Adversarial Training through Latent Clustering-based Selection

conference contribution
posted on 2024-10-14, 10:39 authored by Somrita Ghosh, Yuelin Xu, Xiao Zhang
Compared to standard learning, adversarially robust learning is widely recognized to require a much larger training dataset. Recent works utilize external or synthetically generated unlabeled data in adversarial training using self-supervised learning. Despite achieving enhanced robustness, these methods typically require a considerable amount of additional data, leading to substantial memory consumption and convergence time. To address the space and computational challenges, we propose a novel Latent Clustering-based Selection scheme (LCS) to strategically select a small core subset of unlabeled data critical for obtaining better robustness. In particular, our method prioritizes selecting unlabeled data that are close to the model’s decision boundary, while balancing the ratio between the boundary and the remaining data points to avoid overfitting. Our experiments show that when incorporated into self-supervised adversarial training, our LCS scheme can significantly reduce the memory and time complexities while achieving comparable model robustness.
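The abstract above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual implementation: it proxies "closeness to the decision boundary" by the softmax margin of a classifier's logits, fills a fixed fraction of the selection budget with the lowest-margin points, and spreads the remaining budget across latent-space clusters (a small k-means, written here from scratch) so the selected subset is not all boundary points. The function name `lcs_select`, the `boundary_ratio` parameter, and the margin proxy are all assumptions introduced for illustration.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Simple k-means in latent space (illustrative, not the paper's procedure).
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return labels, centers

def lcs_select(latents, logits, budget, boundary_ratio=0.7, k=5, seed=0):
    """Hypothetical sketch of Latent Clustering-based Selection (LCS).

    Scores each unlabeled point by its softmax margin (top-1 minus top-2
    probability) as a proxy for distance to the decision boundary, fills
    `boundary_ratio` of the budget with the lowest-margin points, and
    fills the rest round-robin across latent clusters for coverage,
    mirroring the paper's idea of balancing boundary and remaining data.
    """
    # Softmax margins: small margin ~ close to the decision boundary.
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    top2 = np.sort(p, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]

    n_boundary = int(budget * boundary_ratio)
    boundary_idx = np.argsort(margin)[:n_boundary]

    # Cluster the latent representations and draw the remaining budget
    # round-robin over clusters to avoid overfitting to boundary points.
    labels, _ = kmeans(latents, k, seed=seed)
    remaining = np.setdiff1d(np.arange(len(latents)), boundary_idx)
    per_cluster = [list(remaining[labels[remaining] == j]) for j in range(k)]

    picked = list(boundary_idx)
    j = 0
    while len(picked) < budget:
        bucket = per_cluster[j % k]
        if bucket:
            picked.append(bucket.pop(0))
        j += 1
    return np.array(picked)
```

Balancing the selection this way keeps the memory footprint at `budget` points rather than the full unlabeled pool, which is the source of the space and time savings the abstract claims.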

History

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

ICML-Workshop (ICMLW)

BibTeX

@conference{Ghosh:Xu:Zhang:2024,
  title     = "Improving the Efficiency of Self-Supervised Adversarial Training through Latent Clustering-based Selection",
  author    = "Ghosh, Somrita and Xu, Yuelin and Zhang, Xiao",
  booktitle = "ICML Workshop (ICMLW)",
  year      = 2024,
  month     = 6
}
