An Exploration of Interpretable Deep Learning Models for the Assessment of Mild Cognitive Impairment

Emma C.L. Leschly, Oliver Roesler, Michael Neumann, Jackson Liscombe, Abhishek Hosamath, Lakshmi Arbatti, Line H. Clemmensen, Melanie Ganz, Vikram Ramanarayanan

Abstract

Early diagnosis and intervention are crucial for mild cognitive impairment (MCI), as MCI often progresses to more severe neurodegenerative conditions. In this study, we explore utilizing deep learning for MCI detection without losing the interpretability provided by feature-based approaches. We used a dataset consisting of 90 MCI patients and 91 controls collected via a remote assessment platform and analyzed the participants' spontaneous speech responses to the Patient Report of Problems (PROP), which asks patients to report their most bothersome general health problems. The proposed deep neural network, which features a bottleneck layer comprising 13 interpretable symptom domains, achieved an AUC of 0.62, outperforming a set of feature-based classifiers while retaining interpretability through the bottleneck layer. We further illustrated the model's interpretability by examining how the predicted PROP domains influence final predictions using Shapley values.
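The two-stage design described in the abstract — speech-derived features mapped through a bottleneck of 13 interpretable symptom-domain activations, followed by a classifier whose prediction can be attributed back to those domains via Shapley values — can be sketched roughly as below. This is a minimal illustration, not the paper's trained model: the feature count, weights, and baseline activations are hypothetical placeholders, and the exact Shapley formula used here holds only for the linear final layer assumed in this sketch.

```python
import math
import random

random.seed(0)

N_FEATURES = 8   # hypothetical number of speech-derived input features
N_DOMAINS = 13   # interpretable symptom-domain bottleneck, as in the paper

def linear(x, W, b):
    """Affine map: one output per weight row."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Random weights standing in for trained parameters (illustration only).
W1 = [[random.gauss(0, 0.3) for _ in range(N_FEATURES)] for _ in range(N_DOMAINS)]
b1 = [0.0] * N_DOMAINS
w2 = [random.gauss(0, 0.3) for _ in range(N_DOMAINS)]
b2 = 0.0

def predict(x):
    # Stage 1: map input features to 13 domain activations (the bottleneck).
    domains = [sigmoid(z) for z in linear(x, W1, b1)]
    # Stage 2: classify MCI vs. control from the interpretable domains.
    logit = sum(w * d for w, d in zip(w2, domains)) + b2
    return domains, sigmoid(logit)

def shapley_linear(domains, baseline):
    # For a linear layer, the exact Shapley value of domain j is
    # w2[j] * (domains[j] - baseline[j]); no sampling approximation needed.
    return [w * (d - bd) for w, d, bd in zip(w2, domains, baseline)]

x = [random.gauss(0, 1) for _ in range(N_FEATURES)]
domains, p = predict(x)
baseline = [0.5] * N_DOMAINS  # hypothetical reference activation
phi = shapley_linear(domains, baseline)
print(f"P(MCI) = {p:.3f}")
top = max(range(N_DOMAINS), key=lambda j: abs(phi[j]))
print(f"Most influential domain index: {top} (contribution {phi[top]:+.3f})")
```

Because the bottleneck activations are themselves meaningful symptom domains, the per-domain contributions sum (with the baseline prediction) to the model's output, which is what makes the attribution readable to a clinician.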

Original language: English
Title: Interspeech 2025
Number of pages: 5
Publication date: 2025
Pages: 271-275
DOI
Status: Published - 2025
Event: 26th Interspeech Conference 2025 - Rotterdam, Netherlands
Duration: 17 Aug 2025 - 21 Aug 2025

Conference

Conference: 26th Interspeech Conference 2025
Country/Territory: Netherlands
City: Rotterdam
Period: 17/08/2025 - 21/08/2025
Sponsor: Meta
