Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks
conference contribution
posted on 2023-11-29, 18:15, authored by Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world applications, e.g., cloud prediction APIs. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such applications, which require significant time, money, and effort to develop. Existing defenses take a passive role against stealing attacks, such as by truncating predicted information. We find such passive defenses ineffective against DNN stealing attacks. In this paper, we propose the first defense which actively perturbs predictions targeted at poisoning the training objective of the attacker. We find our defense effective across a wide range of challenging datasets and DNN model stealing attacks, and find that it additionally outperforms existing defenses. Our defense is the first that can withstand highly accurate model stealing attacks for tens of thousands of queries, amplifying the attacker's error rate up to a factor of 85x with minimal impact on the utility for benign users.
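To make the idea of an actively perturbed, utility-constrained prediction concrete, the Python sketch below is a rough illustration only: it is not the paper's optimization-based perturbation (which targets the attacker's training objective), and the function name and parameters (perturb_prediction, eps) are illustrative assumptions. It merely shows a prediction API that serves a perturbed posterior within an L1 budget while keeping the top-1 label intact, so benign users who read only the argmax are unaffected while an attacker training on the full posteriors receives a degraded signal.

import numpy as np

# Minimal sketch (NOT the paper's defense): mix the true posterior with a
# random point on the simplex, staying within an L1 budget `eps`, and
# restore the original top-1 class so benign (argmax) utility is preserved.
def perturb_prediction(y, eps=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y, dtype=float)
    k = int(y.argmax())

    # Convex combination of the true posterior and a random simplex point:
    # the result is still a valid distribution, and for eps <= 2 its
    # L1 distance to y is at most eps.
    noise = rng.dirichlet(np.ones_like(y))
    y_tilde = (1.0 - eps / 2.0) * y + (eps / 2.0) * noise

    # Utility constraint: if the mixing flipped the top-1 class, swap the
    # two probability masses so the served argmax matches the true one.
    j = int(y_tilde.argmax())
    if j != k:
        y_tilde[k], y_tilde[j] = y_tilde[j], y_tilde[k]
    return y_tilde


if __name__ == "__main__":
    y_true = np.array([0.70, 0.20, 0.10])
    print(perturb_prediction(y_true, eps=0.5))

In the paper, the perturbation is chosen far more carefully (it is optimized against the gradient signal an attacker would obtain when training a surrogate model), but the interface is the same: the API returns a perturbed posterior instead of the true one, subject to a utility constraint for benign users.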
History
Preferred Citation
Tribhuvanesh Orekondy, Bernt Schiele and Mario Fritz. Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks. In: International Conference on Learning Representations (ICLR). 2020.
Primary Research Area
Trustworthy Information Processing
Secondary Research Area
Threat Detection and Defenses
Name of Conference
International Conference on Learning Representations (ICLR)
Legacy Posted Date
2022-10-13
Open Access Type
Gold
BibTeX
@inproceedings{cispa_all_3831,
  title     = "Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks",
  author    = "Orekondy, Tribhuvanesh and Schiele, Bernt and Fritz, Mario",
  booktitle = "{International Conference on Learning Representations (ICLR)}",
  year      = "2020",
}