Adversarial Threats in Industrial Control Systems: A Machine Learning Approach to Securing the U.S. Energy Grid
Abstract
As the U.S. energy sector undergoes rapid digital transformation, its reliance on industrial control systems (ICS) such as SCADA has introduced new cybersecurity vulnerabilities. While machine learning (ML) models have become essential tools for intrusion detection and predictive maintenance in these systems, they are increasingly being targeted by adversarial attacks designed to deceive or disable them. This paper presents a structured literature review that explores the intersection of adversarial machine learning and critical infrastructure security, focusing on how malicious actors can exploit ML models embedded in power grid operations. We categorize common attack types—such as evasion, poisoning, and model inversion—and demonstrate their potential impact on ICS environments. Drawing on recent developments in adversarial defense, we propose strategies to harden ML-based security systems through adversarial training, robust feature selection, and anomaly-aware architectures. The review also identifies existing research gaps and discusses policy implications. This study is limited in scope to the U.S. power sector and does not include experimental validation.
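To make the defenses summarized above more concrete, the sketch below illustrates the kind of evasion attack and adversarial-training hardening the review discusses. It is a minimal, hypothetical Python example and not code from the reviewed studies or the authors' work: the IntrusionDetector model, the feature dimensionality, and the perturbation budget epsilon are all illustrative assumptions, and FGSM is used only as a representative evasion technique.

```python
# Minimal sketch: FGSM-style evasion against a toy ML intrusion detector,
# followed by one adversarial-training step. All names and parameters here
# (IntrusionDetector, n_features, epsilon) are illustrative assumptions.
import torch
import torch.nn as nn

class IntrusionDetector(nn.Module):
    """Toy binary classifier over ICS/SCADA network-flow features."""
    def __init__(self, n_features: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_evasion(model, x, y, epsilon=0.05):
    """Craft an evasion example by perturbing features along the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.05):
    """One hardening step: train on both clean and FGSM-perturbed inputs."""
    x_adv = fgsm_evasion(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = (nn.functional.cross_entropy(model(x), y)
            + nn.functional.cross_entropy(model(x_adv), y)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = IntrusionDetector()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 20)          # synthetic flow-feature batch
    y = torch.randint(0, 2, (32,))   # synthetic benign/malicious labels
    print("adversarial training loss:", adversarial_training_step(model, opt, x, y))
```

In this sketch, the same gradient signal that an attacker could exploit to evade the detector is reused defensively, mixing perturbed samples into training so the model's decision boundary becomes less sensitive to small feature manipulations.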
Keywords:
Adversarial machine learning, industrial control systems, SCADA security, critical infrastructure protection, machine learning-based intrusion detection, evasion and poisoning attacks, resilient AI models, U.S. energy sector
License
Copyright (c) 2025 Fatima Rilwan Ododo, Ridwan Rahmat Sadiq, Nicholas Addotey (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.