CISPA
Browse
- No file added yet -

Instruction Backdoor Attacks Against Customized LLMs

Download (1.31 MB)
conference contribution
posted on 2024-10-01, 12:10 authored by Rui Zhang, Hongwei Li, Rui WenRui Wen, Wenbo Jiang, Yuan Zhang, Michael BackesMichael Backes, Yun Shen, Yang ZhangYang Zhang
The increasing demand for customized Large Language Models (LLMs) has led to the development of solutions like GPTs. These solutions facilitate tailored LLM creation via natural language prompts without coding. However, the trustworthiness of third-party custom versions of LLMs remains an essential concern. In this paper, we propose the first instruction backdoor attacks against applications integrated with untrusted customized LLMs (e.g., GPTs). Specifically, these attacks embed the backdoor into the custom version of LLMs by designing prompts with backdoor instructions, outputting the attacker's desired result when inputs contain the predefined triggers. Our attack includes 3 levels of attacks: word-level, syntax-level, and semantic-level, which adopt different types of triggers with progressive stealthiness. We stress that our attacks do not require fine-tuning or any modification to the backend LLMs, adhering strictly to GPTs development guidelines. We conduct extensive experiments on 6 prominent LLMs and 5 benchmark text classification datasets. The results show that our instruction backdoor attacks achieve the desired attack performance without compromising utility. Additionally, we propose two defense strategies and demonstrate their effectiveness in reducing such attacks. Our findings highlight the vulnerability and the potential risks of LLM customization such as GPTs.

History

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

Usenix Security Symposium (USENIX-Security)

Journal

33rd USENIX Security Symposium (USENIX Security 24)

Page Range

1849-1866

Publisher

USENIX Association

BibTeX

@conference{Zhang:Li:Wen:Jiang:Zhang:Backes:Shen:Zhang:2024, title = "Instruction Backdoor Attacks Against Customized LLMs", author = "Zhang, Rui" AND "Li, Hongwei" AND "Wen, Rui" AND "Jiang, Wenbo" AND "Zhang, Yuan" AND "Backes, Michael" AND "Shen, Yun" AND "Zhang, Yang", year = 2024, month = 8, journal = "33rd USENIX Security Symposium (USENIX Security 24)", pages = "1849--1866", publisher = "USENIX Association" }

Usage metrics

    Categories

    No categories selected

    Licence

    Exports

    RefWorks
    BibTeX
    Ref. manager
    Endnote
    DataCite
    NLM
    DC