TY - GEN
T1 - An Exploration of Interpretable Deep Learning Models for the Assessment of Mild Cognitive Impairment
AU - Leschly, Emma C.L.
AU - Roesler, Oliver
AU - Neumann, Michael
AU - Liscombe, Jackson
AU - Hosamath, Abhishek
AU - Arbatti, Lakshmi
AU - Clemmensen, Line H.
AU - Ganz, Melanie
AU - Ramanarayanan, Vikram
N1 - Publisher Copyright:
© 2025 International Speech Communication Association. All rights reserved.
PY - 2025
Y1 - 2025
N2 - Early diagnosis and intervention are crucial for mild cognitive impairment (MCI), as MCI often progresses to more severe neurodegenerative conditions. In this study, we explore utilizing deep learning for MCI detection without losing the interpretability provided by feature-based approaches. We used a dataset consisting of 90 MCI patients and 91 controls collected via a remote assessment platform and analyzed the participants' spontaneous speech responses to the Patient Report of Problems (PROP), which asks patients to report their most bothersome general health problems. The proposed deep neural network, which features a bottleneck layer comprising 13 interpretable symptom domains, achieved an AUC of 0.62, thereby outperforming a set of feature-based classifiers while ensuring interpretability due to the bottleneck layer. We further illustrated the model's interpretability by examining how the predicted PROP domains influence final predictions using Shapley values.
AB - Early diagnosis and intervention are crucial for mild cognitive impairment (MCI), as MCI often progresses to more severe neurodegenerative conditions. In this study, we explore utilizing deep learning for MCI detection without losing the interpretability provided by feature-based approaches. We used a dataset consisting of 90 MCI patients and 91 controls collected via a remote assessment platform and analyzed the participants' spontaneous speech responses to the Patient Report of Problems (PROP), which asks patients to report their most bothersome general health problems. The proposed deep neural network, which features a bottleneck layer comprising 13 interpretable symptom domains, achieved an AUC of 0.62, thereby outperforming a set of feature-based classifiers while ensuring interpretability due to the bottleneck layer. We further illustrated the model's interpretability by examining how the predicted PROP domains influence final predictions using Shapley values.
KW - interpretability
KW - mild cognitive impairment
KW - multimodal dialog system
KW - remote patient monitoring
UR - http://www.scopus.com/inward/record.url?scp=105020043720&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2025-2225
DO - 10.21437/Interspeech.2025-2225
M3 - Article in proceedings
AN - SCOPUS:105020043720
SP - 271
EP - 275
BT - Interspeech 2025
T2 - 26th Interspeech Conference 2025
Y2 - 17 August 2025 through 21 August 2025
ER -