CISPA

When Adversarial Perturbations meet Concept Drift: an Exploratory Analysis on ML-NIDS

conference contribution
posted on 2024-10-01, 12:10 authored by Giovanni Apruzzese, Aurore Fass, Fabio Pierazzi
We scrutinize the effects of “blind” adversarial perturbations against machine learning (ML)-based network intrusion detection systems (NIDS) affected by concept drift. There may be cases in which a real attacker – unable to access, and hence unaware, that the ML-NIDS is weakened by concept drift – attempts to evade the ML-NIDS with data perturbations. It is currently unknown whether the cumulative effect of such adversarial perturbations and concept drift leads to a greater or lower impact on the ML-NIDS. In this “open problem” paper, we seek to investigate this unusual, but realistic, setting—we are not interested in perfect-knowledge attackers. We begin by retrieving a publicly available dataset of documented network traces captured in a real, large (>300 hosts) organization. Overall, these traces include several years of raw traffic packets—both benign and malicious. Then, we adversarially manipulate malicious packets with “problem-space” perturbations, representing a physically realizable attack. Finally, we carry out the first exploratory analysis focused on comparing the effects of our “adversarial examples” with their respective unperturbed malicious variants in concept-drift scenarios. Through two case studies (a “short-term” one of 8 days, and a “long-term” one of 4 years) encompassing 48 detectors, we find that, although our perturbations induce a lower detection rate in concept-drift scenarios, some perturbations yield adverse effects for the attacker in intriguing use cases. Overall, our study shows that the topics we covered are still an open problem that requires a re-assessment by future research.
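The interaction the abstract describes — a detector trained before drift, malicious traffic that has drifted, and a “blind” attacker perturbing on top of that — can be illustrated with a deliberately simple sketch. This is a hypothetical toy, not the authors' actual method or data: the threshold detector, the byte-count feature, and all numbers below are invented for illustration only.

```python
# Toy illustration (hypothetical; NOT the paper's method or data):
# how concept drift and a "blind" perturbation can compound.
# Detector: flag a flow as malicious when its byte count exceeds a
# threshold learned on pre-drift traffic.

def detection_rate(samples, threshold):
    """Fraction of malicious samples flagged (value > threshold)."""
    return sum(v > threshold for v in samples) / len(samples)

# Threshold fit on pre-drift traffic (made-up value).
threshold = 1000

# Pre-drift malicious flows: large transfers, easily flagged.
malicious_old = [1500, 1800, 2000, 1700]

# After concept drift, malicious traffic already resembles benign sizes.
malicious_drifted = [1100, 900, 1200, 950]

# A "blind" attacker, assuming the old detector, shrinks each flow by
# 100 bytes (a stand-in for a problem-space perturbation).
perturbed = [v - 100 for v in malicious_drifted]

print(detection_rate(malicious_old, threshold))      # 1.0
print(detection_rate(malicious_drifted, threshold))  # 0.5
print(detection_rate(perturbed, threshold))          # 0.25
```

Here drift alone already halves the detection rate, and the perturbation lowers it further — but whether the combined effect helps or hurts a real attacker is exactly the open question the paper explores.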

History

Primary Research Area

  • Empirical and Behavioral Security

Name of Conference

ACM Workshop on Artificial Intelligence and Security (AISec)

Journal

The 17th ACM Workshop on Artificial Intelligence and Security (AISec)

BibTeX

@conference{Apruzzese:Fass:Pierazzi:2024, title = "When Adversarial Perturbations meet Concept Drift: an Exploratory Analysis on ML-NIDS", author = "Apruzzese, Giovanni and Fass, Aurore and Pierazzi, Fabio", year = 2024, month = 9, booktitle = "The 17th ACM Workshop on Artificial Intelligence and Security (AISec)" }
