Adversarial Attacks in Cybersecurity: A Machine Learning Perspective
Abstract
Adversarial machine learning (AML) presents a critical threat to the integrity of machine learning (ML) systems deployed in cybersecurity, where adversarial examples can maintain malicious functionality while evading detection. This literature review synthesizes findings from 35 peer-reviewed sources to investigate the taxonomy, attack strategies, and defense mechanisms associated with AML in cybersecurity domains such as intrusion detection systems (IDS), malware analysis, industrial control systems (ICS), and reinforcement learning in cyber-physical systems. We categorize attacks based on knowledge level, timing, and specificity, and highlight the unique challenges of functionality-preserving adversarial inputs in discrete, protocol-constrained environments. The review further evaluates defensive techniques—including adversarial training, detection frameworks, model hardening, and secure lifecycle integration—and identifies key limitations such as domain-specific overfitting, poor generalizability, and lack of standardized benchmarks. We conclude by advocating for robust, adaptive defenses, attacker-aware datasets, and security-by-design approaches that embed adversarial resilience into the entire ML development lifecycle.
Keywords:
Adversarial Machine Learning, Cybersecurity, Evasion Attacks, Intrusion Detection Systems, Malware Detection, Industrial Control Systems, Model Robustness, Adversarial Training, Functionality-Preserving Attacks, Secure Machine Learning Lifecycle
License
Copyright (c) 2025 Fatima Rilwan Ododo, Ridwan Rahmat Sadiq (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.