Hardware Flaw Opens Door to AI Training Data Attacks
- MM24 News Desk

Researchers at North Carolina State University have identified the first-ever hardware vulnerability that enables attackers to compromise the privacy of AI users by exploiting the physical hardware running the AI systems.
“What we’ve discovered is an AI privacy attack,” explains Joshua Kalyanapu, Ph.D. student at NC State and first author of the study. “Unlike traditional security attacks that steal data stored in a system’s memory, privacy attacks extract information not directly stored—such as the data used to train AI models or attributes of input data—by observing the model’s behavior. This is the first known instance where hardware itself can be used to successfully attack AI privacy.”
The vulnerability, dubbed GATEBLEED, affects machine learning (ML) accelerators—specialized hardware components on computer chips designed to speed up AI computations while reducing power usage. ML accelerators are increasingly integrated into general-purpose CPUs to handle both AI and standard workloads, making this vulnerability relevant to a wide range of modern systems.
GATEBLEED works by monitoring the timing of software-level functions on hardware, bypassing conventional malware detection. It allows attackers with access to a server using an ML accelerator to infer the data used to train AI models and extract other private information.
“The purpose of ML accelerators is to reduce costs by improving AI efficiency,” says Samira Mirbagher Ajorpaz, assistant professor of electrical and computer engineering at NC State and corresponding author of the study. “Since these accelerators are becoming widespread in CPUs, we investigated whether they could introduce new security risks.”
The team focused on Intel’s Advanced Matrix Extensions (AMX), the AI accelerator integrated into 4th Generation Intel Xeon Scalable CPUs. The vulnerability exploits power gating, an energy-saving technique in which parts of a chip are powered down when idle and powered back up on demand.
“Powering up different segments of an accelerator creates observable timing variations,” says Darsh Asher, co-author and Ph.D. student. “AI algorithms may behave differently when processing data they were trained on, which produces a measurable timing channel for attackers.”
In essence, by analyzing the fluctuating usage of AI accelerators, attackers can determine whether specific data was part of a model’s training set. Remarkably, this can be done with a custom program requiring no special permissions.
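To make the idea concrete, here is a minimal, purely illustrative Python sketch of a timing-based membership test. It is not the researchers’ code: the run_inference function, its latencies, and the threshold calibration are invented stand-ins, assuming only what the article describes, namely that inference latency shifts measurably when a query involves data the model was trained on.

```python
# Conceptual sketch only: NOT the GATEBLEED implementation. It shows the
# general shape of a timing-based membership-inference test, assuming a
# hypothetical run_inference() whose latency differs slightly when the input
# was part of the training set (e.g. due to accelerator wake-up behavior).
import statistics
import time


def run_inference(sample):
    # Placeholder standing in for a query to a model served on hardware with
    # an ML accelerator; a real attack would call the victim model here.
    time.sleep(0.001 if sample.get("seen_in_training") else 0.0012)
    return 0


def median_latency_ns(sample, trials=50):
    """Median wall-clock latency of repeated inference calls on one sample."""
    timings = []
    for _ in range(trials):
        start = time.perf_counter_ns()
        run_inference(sample)
        timings.append(time.perf_counter_ns() - start)
    return statistics.median(timings)


def looks_like_training_member(sample, threshold_ns):
    # Below-threshold latency is treated as evidence of membership; the
    # threshold would be calibrated on samples with known membership status.
    return median_latency_ns(sample) < threshold_ns


if __name__ == "__main__":
    known_member = {"seen_in_training": True}
    known_outsider = {"seen_in_training": False}
    threshold = (median_latency_ns(known_member) + median_latency_ns(known_outsider)) / 2
    print(looks_like_training_member({"seen_in_training": True}, threshold))
```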
The vulnerability is even more pronounced with deep neural networks and newer architectures like Mixtures of Experts (MoEs), which use multiple sub-networks to process queries. GATEBLEED can reveal which “experts” respond to a given input, leaking sensitive information about the AI system’s design and training data.
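As a rough illustration of how expert routing might leak through timing, consider the toy sketch below. The expert names and wake-up latencies are invented for the example, and the victim query is simulated; it only assumes that waking a power-gated expert adds a latency signature distinctive enough to identify it.

```python
# Toy illustration, not the published attack: if each expert in a
# mixture-of-experts model has a distinct wake-up latency when powered up on
# demand, observed end-to-end timing can hint at which expert handled a query.
import time

# Hypothetical per-expert wake-up latencies in nanoseconds (invented values).
EXPERT_WAKE_NS = {"expert_a": 2_000_000, "expert_b": 5_000_000, "expert_c": 11_000_000}


def simulated_query(expert):
    """Stand-in for a victim query whose latency includes expert wake-up cost."""
    start = time.perf_counter_ns()
    time.sleep(EXPERT_WAKE_NS[expert] / 1e9)  # pretend wake-up plus compute
    return time.perf_counter_ns() - start


def guess_expert(observed_ns):
    # Pick the expert whose expected wake-up latency is closest to what we saw.
    return min(EXPERT_WAKE_NS, key=lambda e: abs(EXPERT_WAKE_NS[e] - observed_ns))


if __name__ == "__main__":
    print(guess_expert(simulated_query("expert_b")))  # likely prints expert_b
```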
“Traditional defenses—like output monitoring or power analysis—are ineffective against GATEBLEED,” Mirbagher Ajorpaz notes. “Because it exploits hardware-level behaviors, patching software alone isn’t enough. Mitigating this risk requires hardware redesign, which could take years. Interim solutions like microcode or OS-level defenses significantly slow down performance and increase energy use, which isn’t feasible in production AI systems.”
Hardware vulnerabilities like GATEBLEED are particularly concerning because they bypass all higher-level security measures, including encryption and sandboxing. Beyond privacy implications, such attacks could expose AI companies to liability if training data usage violates legal or contractual agreements.
The researchers stress that their work is a proof of concept demonstrating that such hardware vulnerabilities can be exploited even without physical access. Their findings suggest that many similar vulnerabilities may exist, underscoring the urgent need for mitigation strategies that do not compromise the efficiency benefits of AI accelerators.