CISPA

Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI

conference contribution
posted on 2025-11-10, 13:17 authored by Dayong Ye, Tianqing Zhu, Shang Wang, Bo Liu, Leo Yu Zhang, Wanlei Zhou, Yang Zhang
<p dir="ltr">Generative AI technology has become increasingly integrated into our daily lives, offering powerful capabilities that enhance productivity. However, these same capabilities can be exploited by adversaries for malicious purposes. While existing research on adversarial applications of generative AI focuses predominantly on cyberattacks, less attention has been given to attacks targeting deep learning models. In this paper, we introduce the use of generative AI to facilitate model-related attacks, including model extraction, membership inference, and model inversion. Our study reveals that adversaries can launch a variety of model-related attacks against both image and text models in a data-free and black-box manner, achieving performance comparable to baseline methods that have white-box access to the target models' training data and parameters. This research serves as an important early warning to the community about the potential risks of generative AI-powered attacks on deep learning models.</p>
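To make the threat model concrete, the sketch below illustrates one of the attack classes the abstract names: a confidence-thresholding membership inference attack against a black-box classifier. This is a generic textbook-style illustration, not the paper's method; in particular, `query_model`, `infer_membership`, and the threshold value are hypothetical stand-ins, and the paper's key contribution — using generative AI to synthesize the query data so that no real training data is needed — is not implemented here.

```python
# Illustrative sketch only: minimal membership inference via confidence
# thresholding. All names here are hypothetical; this is not the paper's
# actual attack pipeline.

def query_model(x):
    # Hypothetical black-box API: returns the target model's top-class
    # confidence for input x. This stand-in mimics the overconfidence
    # models typically exhibit on their own training samples.
    training_set = {1, 2, 3}  # toy "training data" the attacker cannot see
    return 0.95 if x in training_set else 0.55

def infer_membership(x, threshold=0.8):
    # Predict "member" when the model is unusually confident on x.
    # The attacker only observes outputs, never parameters or data,
    # matching the black-box setting described in the abstract.
    return query_model(x) >= threshold

# Querying a mix of member and non-member inputs:
results = {x: infer_membership(x) for x in [1, 2, 3, 4, 5]}
print(results)
```

In the data-free setting the paper studies, the attacker would replace the hand-picked query inputs above with samples synthesized by a generative model, which is what removes the need for access to any real training data.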
