Fig. 1 | Alzheimer's Research & Therapy

From: An explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks

Quantitative comparisons between the multi-input VGG16 model with Grad-CAM (red) and our multi-input Conv-Att model with soft labels (blue), measured by the intersection over union (IoU) between the heat maps and two types of ROIs, (a) whole-drawing ROIs and (b) expert ROIs, as a function of the percentage of heat-map pixels used. Example images with the corresponding ROIs are shown at the top of each panel. Our proposed model matches both whole-drawing and expert ROIs more closely than the Grad-CAM model, and this higher similarity is consistent over a broad range of the percentage of pixels taken from the models' outputs (10%–80%)
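The IoU metric used in this comparison can be illustrated with a short sketch. This is an assumption about how the figure's quantity is computed, not the paper's actual code: the top p% hottest heat-map pixels are binarized and compared against a binary ROI mask, and `heatmap_roi_iou` is a hypothetical helper name.

```python
import numpy as np

def heatmap_roi_iou(heatmap: np.ndarray, roi_mask: np.ndarray, pct: float) -> float:
    """IoU between the top-`pct`% hottest heat-map pixels and a binary ROI mask."""
    # Number of pixels to keep at this percentage of the heat map.
    k = max(1, int(heatmap.size * pct / 100.0))
    # Threshold at the k-th largest activation so exactly the hottest
    # k pixels (ties aside) are marked as "hot".
    thresh = np.partition(heatmap.ravel(), -k)[-k]
    hot = heatmap >= thresh
    roi = roi_mask.astype(bool)
    inter = np.logical_and(hot, roi).sum()
    union = np.logical_or(hot, roi).sum()
    return float(inter) / float(union) if union else 0.0

# Toy example: a 4x4 heat map whose 4 hottest pixels exactly cover the ROI.
hm = np.arange(16, dtype=float).reshape(4, 4)
roi = hm >= 12
print(heatmap_roi_iou(hm, roi, 25))  # → 1.0 (perfect overlap at 25% of pixels)
```

Sweeping `pct` over 10-80 and averaging the IoU across test images, for each model's heat maps against each ROI type, would reproduce the kind of curves shown in panels (a) and (b).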
