A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer's disease.

El-Sappagh, Shaker; Alonso, Jose M; Islam, S M Riazul; Sultan, Ahmad M; Kwak, Kyung Sup
Scientific Reports
2021 Jan;11(1):2660.
Author details
El-Sappagh, Shaker
Alonso, Jose M
Islam, S M Riazul
Sultan, Ahmad M
Kwak, Kyung Sup
ABSTRACT
Alzheimer's disease (AD) is the most common type of dementia. Its diagnosis and progression detection have been intensively studied. Nevertheless, research studies often have little effect on clinical practice, mainly for the following reasons: (1) most studies depend on a single modality, especially neuroimaging; (2) diagnosis and progression detection are usually studied separately as two independent problems; and (3) current studies concentrate mainly on optimizing the performance of complex machine learning models while disregarding their explainability. As a result, physicians struggle to interpret these models and find it hard to trust them. In this paper, we carefully develop an accurate and interpretable AD diagnosis and progression detection model. This model provides physicians with accurate decisions along with a set of explanations for every decision. Specifically, the model integrates 11 modalities of 1048 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) real-world dataset: 294 cognitively normal, 254 stable mild cognitive impairment (MCI), 232 progressive MCI, and 268 AD. It is a two-layer model with random forest (RF) as the classifier algorithm. In the first layer, the model carries out multi-class classification for the early diagnosis of AD patients. In the second layer, the model applies binary classification to detect possible MCI-to-AD progression within three years of a baseline diagnosis. The performance of the model is optimized with key markers selected from a large set of biological and clinical measures. Regarding explainability, we provide, for each layer, global and instance-based explanations of the RF classifier using the SHapley Additive exPlanations (SHAP) feature attribution framework. In addition, we implement 22 explainers based on decision trees and fuzzy rule-based systems to provide complementary justifications for every RF decision in each layer. Furthermore, these explanations are represented in natural language to help physicians understand the predictions. The model achieves a cross-validation accuracy of 93.95% and an F1-score of 93.94% in the first layer, and a cross-validation accuracy of 87.08% and an F1-score of 87.09% in the second layer. The resulting system is not only accurate but also trustworthy, accountable, and medically applicable, thanks to the provided explanations, which are broadly consistent with each other and with the AD medical literature. The proposed system can help enhance the clinical understanding of AD diagnosis and progression processes by providing detailed insights into the effect of different modalities on the disease risk.
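For readers who want to see the shape of such a pipeline, the following minimal Python sketch illustrates the two-layer design described in the abstract: a multi-class random forest for diagnosis, a binary random forest for MCI-to-AD progression, and SHAP feature attributions for explanation. It is a sketch only, assuming preprocessed ADNI-style feature matrices; the placeholder data, variable names, and hyperparameters are illustrative and do not reproduce the authors' actual 11-modality fusion or feature selection.

    # Minimal sketch of a two-layer RF + SHAP pipeline (illustrative only).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    import shap  # SHapley Additive exPlanations

    # Placeholder matrix standing in for fused multimodal ADNI features.
    rng = np.random.default_rng(0)
    X = rng.random((1048, 40))
    y = rng.integers(0, 4, size=1048)  # 0=CN, 1=stable MCI, 2=progressive MCI, 3=AD

    # Layer 1: multi-class classification for early diagnosis.
    layer1 = RandomForestClassifier(n_estimators=500, random_state=0)
    acc = cross_val_score(layer1, X, y, cv=10, scoring="accuracy").mean()
    print(f"layer-1 cross-validation accuracy: {acc:.4f}")

    # Layer 2: binary MCI-to-AD progression within three years,
    # trained only on the MCI subjects (1 = progressed, 0 = stable).
    mci = np.isin(y, [1, 2])
    X_mci, y_prog = X[mci], (y[mci] == 2).astype(int)
    layer2 = RandomForestClassifier(n_estimators=500, random_state=0)
    layer2.fit(X_mci, y_prog)

    # SHAP attributions for the layer-2 classifier: averaging |SHAP values|
    # over subjects gives a global feature ranking, while a single row gives
    # an instance-based explanation for one patient's prediction.
    explainer = shap.TreeExplainer(layer2)
    shap_values = explainer.shap_values(X_mci)

In the paper itself, each layer's random forest is additionally complemented by natural-language explanations derived from decision trees and fuzzy rule-based systems; the sketch above covers only the classification layers and the SHAP step.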
Keyword
n/a
MeSH
Link

Subject code
Subject name (Target field)
Population
Sample size
Gender
Condition category
Setting
Study design
Study period
Intervention type
Intervention name
Keyword
Effectiveness result (Recommendation)
The proposed ML model is accurate and explainable. However, although we achieved promising results from an academic point of view, we are still far from applying the model in a real-world clinical scenario. Therefore, to translate the outcomes of this study into full-scale clinical practice, further investigations are required to determine its performance characteristics by applying the model to other relevant datasets.
Fund source
Evidence hierarchy
Publication year
2021
Number of authors
5
Lead author
DOI
10.1038/s41598-021-82098-3
KCD code
ICD 03
Health insurance code