AI-Powered Cough Detection Could Transform Personal Health Monitoring
- MM24 Multimedia Desk
- Oct 16
- 3 min read

Researchers have developed a new way to help wearable health devices more accurately detect when a person is coughing — a breakthrough that could improve the monitoring of chronic respiratory conditions and even help predict health risks such as asthma attacks.
The advance, led by scientists at North Carolina State University, marks a significant step forward for cough-detection technology, which has historically struggled to differentiate cough sounds from speech and other nonverbal noises such as sneezes or throat clearing.
“Coughing serves as an important biomarker for tracking a variety of health conditions,” said Edgar Lobaton, corresponding author of the study and professor of electrical and computer engineering at NC State. “For instance, the frequency and intensity of coughs can help us monitor the progression of respiratory diseases or recognize when an asthma patient’s symptoms are worsening. That’s why accurate and continuous cough tracking is so important.”
Overcoming the Sound Confusion Problem
Wearable devices are an ideal platform for cough monitoring because they can continuously collect sound data. Machine learning models can then analyze these audio inputs to recognize when a person coughs. But in practice, this task has proven far more complex than anticipated.
While algorithms have become adept at distinguishing coughs from ambient background noises, they often struggle when faced with human-generated sounds that share similar acoustic features. “Models can confuse coughs with speech, sneezes, or even groans,” Lobaton explained. “This happens because real-world sounds are messy — the model frequently encounters audio patterns it hasn’t been trained on.”
Traditional models rely on pre-recorded sound libraries for training. These datasets label which sounds are coughs and which are not. However, when the system encounters a sound type that never appeared in its training data, its accuracy declines sharply.
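To make that limitation concrete, here is a minimal sketch (not the NC State pipeline) of a classifier trained only on labeled "cough" versus "not cough" examples. The feature arrays, random data, and model choice are placeholders standing in for real acoustic features and a real training set.

```python
# Illustrative sketch only: a closed-set audio classifier.
# Feature values below are random stand-ins for real acoustic features
# (e.g., spectral summaries of one-second clips).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 200 labeled clips, 20 features each; 1 = cough, 0 = other sound.
X_train = rng.normal(size=(200, 20))
y_train = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# A sound type absent from training (say, a groan) is still forced into
# one of the two known classes -- the weakness the article describes.
unseen_clip = rng.normal(loc=3.0, size=(1, 20))
print(clf.predict(unseen_clip))        # always 0 or 1, never "unknown"
print(clf.predict_proba(unseen_clip))  # confidence can still look high
```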
Adding Motion to the Mix
To solve this challenge, the NC State team took a novel approach: they enhanced the training data by using wearable sensors to collect both sound and motion information. The study utilized health monitors worn on the chest that gathered two key data streams — audio recordings and accelerometer readings, which capture subtle body movements.
“Whenever someone coughs, there’s not just sound — there’s also a distinct, quick motion of the chest,” said Yuhan Chen, first author of the paper and a recent Ph.D. graduate from NC State. “Movement data alone isn’t enough, because actions like laughing or coughing can look similar. But when you combine sound and movement, you give the model a richer picture that helps it make more accurate decisions.”
The fusion of these two data types proved highly effective. By linking the physical act of coughing with its corresponding sound signature, the model learned to better identify true coughs while ignoring other noises.
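As a rough illustration of what combining the two data streams can look like, here is a minimal sketch of feature-level fusion, one common strategy in which per-window audio and accelerometer features are simply concatenated before classification. The feature names, shapes, and classifier are assumptions made for illustration, not the architecture used in the study.

```python
# Illustrative sketch of early (feature-level) fusion of sound and motion.
# All feature values are random placeholders for real extracted features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_windows = 300

audio_feats = rng.normal(size=(n_windows, 20))  # e.g., spectral summaries
accel_feats = rng.normal(size=(n_windows, 6))   # e.g., per-axis motion stats
labels = rng.integers(0, 2, size=n_windows)     # 1 = cough, 0 = other

# Concatenate the two modalities so the model sees sound and movement
# for the same time window.
fused = np.concatenate([audio_feats, accel_feats], axis=1)

model = LogisticRegression(max_iter=1000)
model.fit(fused, labels)

# Classify a new window using both its audio and its motion features.
new_window = np.concatenate([rng.normal(size=(1, 20)),
                             rng.normal(size=(1, 6))], axis=1)
print(model.predict(new_window))
```

The design intuition is the one Chen describes: neither modality is decisive on its own, but a window that looks cough-like in both the sound and the motion channel is far harder to confuse with speech or laughter.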
Sharper Accuracy, Fewer False Alarms
When tested in laboratory conditions, the upgraded cough-detection model outperformed previous technologies, producing significantly fewer false positives. In other words, the system was far less likely to mistake speech, a sneeze, or another noise for a cough.
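For readers curious what "fewer false alarms" means in numbers, here is a small illustrative calculation of a false positive rate on hypothetical detection windows; the values are made up and do not come from the paper.

```python
# Toy example: false positive rate over labeled detection windows.
import numpy as np

y_true = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])  # 1 = actual cough
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])  # model's output

false_pos = np.sum((y_pred == 1) & (y_true == 0))
true_neg = np.sum((y_pred == 0) & (y_true == 0))
fpr = false_pos / (false_pos + true_neg)
print(f"False positive rate: {fpr:.2f}")  # 1 of 7 non-cough windows -> 0.14
```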
“This is a meaningful step forward,” Lobaton said. “We’ve gotten very good at distinguishing coughs from speech, and now our model performs substantially better at separating coughs from nonverbal sounds as well. There’s still room for improvement, but we know where to focus next.”
The research points toward a future where wearable devices — powered by smart algorithms and multi-sensor inputs — could play a major role in continuous, noninvasive health monitoring, offering early warning signs for respiratory flare-ups and helping physicians track patient well-being in real time.


