
Test-Time Poisoning Attacks Against Test-Time Adaptation Models

conference contribution
posted on 2024-02-09, 09:21 authored by Tianshuo Cong, X He, Yun Shen, Yang Zhang
Deploying machine learning (ML) models in the wild is challenging, as they suffer from distribution shifts: a model trained on an original domain often fails to generalize to unforeseen transfer domains. To address this challenge, several test-time adaptation (TTA) methods have been proposed to improve the generalization ability of a pre-trained target model by adapting it to the shifted distribution of the test data. The success of TTA can be credited to the continuous fine-tuning of the target model according to the distributional hints provided by the test samples at test time. Despite being powerful, this mechanism also opens a new attack surface, namely test-time poisoning attacks, which are substantially different from previous poisoning attacks that occur during the training time of ML models (i.e., the adversary cannot intervene in the training process). In this paper, we perform the first test-time poisoning attacks against four mainstream TTA methods: TTT, DUA, TENT, and RPL. Concretely, we generate poisoned samples based on surrogate models and feed them to the target TTA models. Experimental results show that TTA methods are generally vulnerable to test-time poisoning attacks. For instance, the adversary can feed as few as 10 poisoned samples to degrade the performance of the target model from 76.20% to 41.83%. Our results demonstrate that TTA algorithms lacking a rigorous security assessment are unsuitable for deployment in real-life scenarios. As such, we advocate for integrating defenses against test-time poisoning attacks into the design of TTA methods.
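To illustrate the attack surface the abstract describes, below is a minimal NumPy sketch of the kind of test-time adaptation TENT performs: the deployed model updates its own parameters by gradient descent on the mean prediction entropy of each incoming, unlabeled test batch. Everything here is illustrative and simplified (a linear softmax classifier standing in for the BatchNorm-parameter updates of real TENT; all names and constants are this sketch's assumptions, not the paper's code). The key point is that the update is driven entirely by data the adversary may control.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4)) * 0.1  # deployed classifier weights (stand-in for BN affine params)

def softmax(z):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    # Mean per-sample prediction entropy over the batch (the TENT-style objective).
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# An unlabeled test batch. In the threat model of the paper, (part of) this
# batch can consist of attacker-chosen poisoned samples.
x_test = rng.normal(size=(32, 8))

before = mean_entropy(softmax(x_test @ W))
lr = 0.5
for _ in range(50):
    p = softmax(x_test @ W)
    H = -(p * np.log(p + 1e-12)).sum(axis=1, keepdims=True)  # per-sample entropy
    g_z = -p * (np.log(p + 1e-12) + H)                       # dH/dlogits for softmax
    W -= lr * x_test.T @ g_z / len(x_test)                   # descend on mean entropy
after = mean_entropy(softmax(x_test @ W))
```

After the loop, `after` is lower than `before`: the model has made itself more confident on whatever batch it was shown. Because this self-update happens with no labels and no integrity check on the inputs, poisoned samples fed at test time can steer the parameters, which is precisely the vulnerability the paper exploits.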

History

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

IEEE Symposium on Security and Privacy (S&P)


Publisher

IEEE

BibTeX

@conference{Cong:He:Shen:Zhang:2023,
  title     = "Test-Time Poisoning Attacks Against Test-Time Adaptation Models",
  author    = "Cong, Tianshuo and He, X and Shen, Yun and Zhang, Yang",
  year      = 2023,
  month     = 8,
  booktitle = "IEEE Symposium on Security and Privacy (S&P)",
  publisher = "IEEE"
}
