AI SYSTEM DETECTS PARKINSON’S THROUGH VOICE, WALKING, AND HANDWRITING

Category: Newsworthy Notes

A new study published in Frontiers in Digital Health highlights how artificial intelligence (AI) may significantly improve early detection of Parkinson’s. By analyzing subtle changes in a person’s voice, walking pattern, and handwriting, researchers have developed a multimodal, explainable AI system designed to make Parkinson’s screening more accurate, objective, and clinically useful.

Currently, diagnosing Parkinson’s is largely clinical, relying on neurological exams and physician judgment. Because early symptoms can be subtle and vary widely, misdiagnosis or delayed diagnosis is not uncommon.

AI-based tools offer a promising solution. Researchers have previously shown that analyzing speech patterns, gait abnormalities, or handwriting changes individually can detect Parkinson’s with high accuracy in controlled settings. However, these single-modality systems often struggle in real-world environments. Speech analysis may be affected by accent or background noise, gait monitoring depends on sensor quality, and handwriting studies are frequently conducted under ideal laboratory conditions. Additionally, many AI systems function as “black boxes,” offering predictions without explaining how decisions are made — an issue that limits clinical trust.

To address these limitations, this new study introduced a trimodal AI framework that combines speech, gait, and handwriting data into a single model. If one data source is noisy or unreliable, the others help strengthen the overall prediction. For speech analysis, the model examined vocal instability, pitch variation, and other acoustic features linked to Parkinson’s. Gait analysis focused on stride irregularities and asymmetry using wearable sensor data. Handwriting evaluation used digitized spiral drawings to detect tremor-related distortions and micrographia (abnormally small handwriting), a common Parkinson’s feature.
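The article does not describe the exact fusion mechanism the researchers used. One common way such trimodal systems handle a noisy or missing data source is "late fusion": each modality produces its own probability, and the available scores are combined with renormalized weights. The sketch below is illustrative only; the function name, equal default weights, and the averaging rule are assumptions, not details from the study.

```python
import numpy as np

def late_fusion(p_speech, p_gait, p_handwriting, weights=(1/3, 1/3, 1/3)):
    """Combine per-modality Parkinson's probabilities by weighted averaging.

    A modality whose signal is unusable (e.g. noisy audio) is passed as None;
    its weight is redistributed across the remaining modalities, so the
    prediction degrades gracefully instead of failing.
    """
    probs = [p_speech, p_gait, p_handwriting]
    w = np.array(weights, dtype=float)
    available = np.array([p is not None for p in probs])
    w = w * available
    w = w / w.sum()  # renormalize over the modalities that are present
    vals = np.array([p if p is not None else 0.0 for p in probs])
    return float(np.dot(w, vals))
```

For example, if speech analysis is drowned out by background noise, `late_fusion(None, 0.8, 0.9)` still returns a prediction (0.85) from gait and handwriting alone.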

The system incorporated explainable AI techniques, such as SHAP (SHapley Additive exPlanations) and Grad-CAM (Gradient-weighted Class Activation Mapping), that allow clinicians to see which features influenced the diagnosis. This transparency is critical for building trust and encouraging adoption in healthcare settings.

When tested using established public datasets, the multimodal model achieved 92 percent accuracy, outperforming single-modality systems for speech, gait, or handwriting alone. It correctly identified roughly nine out of ten cases while maintaining balanced sensitivity and specificity.
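Sensitivity (the share of true Parkinson's cases flagged) and specificity (the share of healthy controls correctly cleared) are computed directly from a confusion matrix, as the short sketch below shows. The counts in the usage example are purely illustrative, chosen to produce balanced 92 percent figures; they are not the study's actual numbers.

```python
def screening_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts.

    tp: Parkinson's cases correctly flagged      fn: cases missed
    tn: healthy controls correctly cleared       fp: controls wrongly flagged
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity, specificity
```

With illustrative counts of 46 correct and 4 missed in each group, `screening_metrics(tp=46, fn=4, tn=46, fp=4)` yields 0.92 for all three metrics, matching the "balanced sensitivity and specificity" the article describes.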

Despite these encouraging results, researchers caution that the model has limitations. It was evaluated retrospectively using benchmark datasets rather than prospective clinical trials. It also focused only on distinguishing Parkinson’s patients from healthy controls, without assessing disease stages or subtypes. Additionally, real-world variability — such as differences in recording conditions — remains a challenge.

Even so, this research represents an important step forward. By merging multiple digital biomarkers into a transparent, interpretable AI framework, scientists are moving closer to reliable, scalable Parkinson’s screening tools. With further validation and clinical collaboration, such systems could one day support earlier diagnosis, guide treatment decisions, and ultimately improve outcomes for people living with Parkinson’s.

