Although machine learning is widely used in practice, little is known about practitioners' understanding of potential security challenges. In this work, we close this substantial gap and contribute a qualitative study focusing on developers' mental models of the machine learning pipeline and its potentially vulnerable components. Similar studies have helped in other security fields to discover root causes or improve risk communication. Our study reveals two facets of practitioners' mental models of machine learning security. First, practitioners often confuse machine learning security with threats and defences that are not directly related to machine learning. Second, in contrast to most academic research, our participants perceive machine learning security not as solely tied to individual models, but rather in the context of entire workflows that consist of multiple components. Together with our additional findings, these two facets provide a foundation for substantiating mental models of machine learning security, and they have implications for the integration of adversarial machine learning into corporate workflows, for decreasing practitioners' reported uncertainty, and for appropriate regulatory frameworks for machine learning security.
Preferred Citation
Lukas Bieringer, Kathrin Grosse, Michael Backes, Battista Biggio, and Katharina Krombholz. Industrial practitioners' mental models of adversarial machine learning. In: Symposium on Usable Privacy and Security (SOUPS). 2022.
Primary Research Area
Empirical and Behavioral Security
Secondary Research Area
Trustworthy Information Processing
Name of Conference
Symposium on Usable Privacy and Security (SOUPS)
Legacy Posted Date
2022-08-05
Open Access Type
Green
BibTeX
@inproceedings{cispa_all_3742,
  title     = "Industrial practitioners' mental models of adversarial machine learning",
  author    = "Bieringer, Lukas and Grosse, Kathrin and Backes, Michael and Biggio, Battista and Krombholz, Katharina",
  booktitle = "{Symposium on Usable Privacy and Security (SOUPS)}",
  year      = "2022",
}