File(s) not publicly available

VENOMAVE: Targeted Poisoning Against Speech Recognition

conference contribution
posted on 2023-11-29, 18:25 authored by Hojjat Aghakhani, Lea Schönherr, Thorsten Eisenhofer, Dorothea Kolossa, Thorsten Holz, Christopher Kruegel, Giovanni Vigna
Despite remarkable improvements, automatic speech recognition is susceptible to adversarial perturbations. Compared to standard machine learning architectures, attacks against speech recognition are significantly more challenging, especially since the inputs to a speech recognition system are time series that contain both acoustic and linguistic properties of speech. Extracting all recognition-relevant information requires more complex pipelines and an ensemble of specialized components; consequently, an attacker needs to consider the entire pipeline. In this paper, we present VENOMAVE, the first training-time poisoning attack against speech recognition. Similar to the predominantly studied evasion attacks, we pursue the same goal: leading the system to an incorrect, attacker-chosen transcription of a target audio waveform. In contrast to evasion attacks, however, we assume that the attacker can only manipulate a small part of the training data without altering the target audio waveform at run time. We evaluate our attack on two datasets: TIDIGITS and Speech Commands. When poisoning less than 0.17% of the dataset, VENOMAVE achieves attack success rates of over 80.0%, without access to the victim's network architecture or hyperparameters. In a more realistic scenario, when the target audio waveform is played over the air in different rooms, VENOMAVE maintains a success rate of up to 73.3%. Finally, VENOMAVE achieves an attack transferability rate of 36.4% between two different model architectures.
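The threat model described above — corrupting a small fraction of the training set so that one specific clean input is mapped to an attacker-chosen output at run time — can be illustrated with a deliberately simplified sketch. This is not the paper's attack or pipeline (VENOMAVE targets a full speech recognition system); it is a toy k-nearest-neighbour classifier on synthetic 2-D data, with all names and values invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: two Gaussian clusters standing in for two "transcriptions".
X0 = rng.normal(loc=-2.0, scale=0.5, size=(50, 2))  # class 0
X1 = rng.normal(loc=+2.0, scale=0.5, size=(50, 2))  # class 1
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# The clean target input; it clearly belongs to class 0.
target = np.array([-2.0, 0.0])

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-NN majority vote, standing in for the victim model."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

assert knn_predict(X, y, target) == 0  # trained on clean data: correct

# Targeted poisoning: overwrite a tiny fraction of the training points
# with samples placed near the target but carrying the attacker's label.
# The target itself is never modified.
n_poison = 3  # 3% of the training data
X_poisoned = X.copy()
y_poisoned = y.copy()
X_poisoned[:n_poison] = target + rng.normal(scale=0.05, size=(n_poison, 2))
y_poisoned[:n_poison] = 1  # attacker-chosen class

print(knn_predict(X_poisoned, y_poisoned, target))  # now misclassified as 1
```

The sketch captures only the high-level goal (a small training-set manipulation flips the prediction on one unaltered input); the actual attack must solve this for an end-to-end speech recognition pipeline rather than a nearest-neighbour toy.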


Preferred Citation

Hojjat Aghakhani, Lea Schönherr, Thorsten Eisenhofer, Dorothea Kolossa, Thorsten Holz, Christopher Kruegel and Giovanni Vigna. VENOMAVE: Targeted Poisoning Against Speech Recognition. In: Conference on Secure and Trustworthy Machine Learning (SaTML). 2023.

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

Conference on Secure and Trustworthy Machine Learning (SaTML)


@inproceedings{cispa_all_3888,
  title     = "VENOMAVE: Targeted Poisoning Against Speech Recognition",
  author    = "Aghakhani, Hojjat and Schönherr, Lea and Eisenhofer, Thorsten and Kolossa, Dorothea and Holz, Thorsten and Kruegel, Christopher and Vigna, Giovanni",
  booktitle = "{Conference on Secure and Trustworthy Machine Learning (SaTML)}",
  year      = "2023",
}
