
Novel instructionless eye tracking tasks identify emotion recognition deficits in frontotemporal dementia

Abstract

Background

Current tasks measuring social cognition are usually ‘pen and paper’ tasks, have ceiling effects and include complicated test instructions that may be difficult for those with cognitive impairment to understand. We therefore aimed to develop a set of simple, instructionless, quantitative tasks of emotion recognition using the methodology of eye tracking, with the subsequent aim of assessing their utility in individuals with behavioural variant frontotemporal dementia (bvFTD).

Methods

Using the Eyelink 1000 Plus eye tracker, 18 people with bvFTD and 22 controls completed tasks of simple and complex emotion recognition that involved viewing four images (one target face (simple) or pair of eyes (complex) and the others non-target), followed by a target emotion word, and lastly the original four images alongside the emotion word. A dwell time change score was then calculated as the main outcome measure by subtracting the percentage dwell time for the target image before the emotion word appeared from the percentage dwell time for the target image after the emotion word appeared. All participants also underwent a standard cognitive battery and volumetric T1-weighted magnetic resonance imaging.

Results

Analysis using a mixed effects model showed that the average (standard deviation) mean dwell time change score in the target interest area was 35 (27)% for the control group compared with only 4 (18)% for the bvFTD group (p < 0.05) for the simple emotion recognition task, and 15 (26)% for the control group compared with only 2 (18)% for the bvFTD group (p < 0.05) for the complex emotion recognition task. Worse performance in the bvFTD group correlated with atrophy in the right ventromedial prefrontal and orbitofrontal cortices, brain regions previously implicated in social cognition.

Conclusions

In summary, eye tracking is a viable tool for assessing social cognition in individuals with bvFTD, being well-tolerated and able to overcome some of the problems associated with standard psychometric tasks.

Introduction

Behavioural variant frontotemporal dementia (bvFTD) is a neurodegenerative disorder characterised by a progressive decline in behaviour and executive function [1, 2]. One of the key early features is an impairment in social cognition, a set of skills that underlies our interactions with others [3], and includes emotion recognition, the ability to identify the emotions of others, e.g. from their facial expression.

Emotions are often split into simple or basic ones, which are universally recognised cross-culturally and include happiness, sadness, fear, disgust, anger and surprise, and complex ones such as regret and distrust. Individuals with bvFTD have been found to recognise emotions less accurately than healthy controls [4, 5], especially those of negative valence such as anger, sadness, fear and disgust [6,7,8], as well as more complex ones [9,10,11,12,13]. However, traditional emotion recognition tasks are often ‘pen and paper’ and use complex instructions with high working memory load that may be difficult for those with cognitive impairment to understand. We therefore aimed to develop novel tasks of emotion recognition using the methodology of eye tracking [14,15,16]. This has previously been used to investigate oculomotor function in FTD [17,18,19], and more recently, cognition as well [20, 21]. Importantly, it can provide a quantitative output and, potentially, a more sensitive way to detect impairment within a cognitive domain than traditional tasks can. Furthermore, it can remove much of the cognitive demand of the tasks by limiting the instructions required [22].

This study therefore set out to, firstly, develop simple and complex emotion recognition instructionless eye tracking tasks which have the potential to quantitatively detect earlier and more subtle social cognition deficits than previous tests and then, secondly, explore the utility of these novel tasks in individuals with bvFTD relative to a healthy control group, as well as determining their cognitive and neuroanatomical correlates.

Methods

Participants

Forty participants were recruited from the longitudinal FTD studies at the Dementia Research Centre, University College London: 18 people meeting diagnostic criteria for bvFTD [2], of whom 9 had genetic FTD (mutations in C9orf72 = 5, GRN = 2 and MAPT = 2), and 22 healthy controls. The groups were of similar age; however, a greater proportion of the bvFTD group were male, and the educational level was slightly higher in the controls (Table 1).

Table 1 Demographic, behavioural and neuropsychometric data for the control and bvFTD participants. Behavioural symptoms are scored as 0 (absent), 0.5 (very mild or questionable), 1 (mild), 2 (moderate) and 3 (severe), with mean (standard deviation) scores shown for the bvFTD group. Significant differences between groups are highlighted in bold. SD standard deviation, N/A not applicable, s seconds

All participants underwent a clinical and cognitive assessment including the Clinical Dementia Rating Scale with the National Alzheimer’s Coordinating Centre Frontotemporal Lobar Degeneration component (CDR with NACC FTLD), the Mini-Mental State Examination (MMSE), WMS-R Digit span forwards and backwards, Phonemic fluency, D-KEFS Color-Word Interference Test (part 3), Trail Making Test parts A and B, Graded Naming Test and the Mini-Social and Emotional Assessment (mini-SEA, which includes two subtests, a Faux-Pas task and an Emotion Recognition task). The bvFTD group performed significantly worse on all tests than the control group (Table 1).

Eye tracking tasks

All eye tracking tasks were performed on the Eyelink 1000 Plus (SR Research) with the participant’s chin stabilised on a head mount, in a dark room to maintain consistent lighting conditions. The 18″ display screen had a resolution of 1920 × 1080 pixels and was positioned 70 cm from the participant. Viewing was binocular but only the right eye was tracked. A 9-point calibration was carried out prior to the start of the tasks, followed by a drift correct procedure between each trial to maintain the accuracy of the eye tracker throughout the task. If accuracy was poor, recalibration was performed.

Initially, a pro-saccade task (with 8 trials) was performed to assess basic oculomotor function and therefore each participant’s ability to perform the emotion recognition tasks [19, 23]. A red cross was shown in the middle of the screen, and once the participant had fixated on the cross, there was a gap of 200 ms followed by the appearance of a green dot at either 8° visual angle in the horizontal direction or 5° visual angle in the vertical direction on either side of the target fixation cross. Participants were asked to look as quickly and as accurately as they could to the green dot when it appeared. Three measures were calculated: saccade latency (the time taken for an individual to generate the initial saccade after the target has appeared), amplitude error (how close the initial saccade amplitude is to the target) and peak velocity (the maximum velocity reached during the saccade).
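
As a concrete illustration, the sketch below shows how these three measures could be derived for a single detected saccade. The dictionary fields and units are assumptions for illustration, not the actual Eyelink saccade report format.

```python
import numpy as np

def prosaccade_metrics(saccade, target_onset_ms, target_eccentricity_deg):
    """Derive the three pro-saccade outcome measures for one saccade.

    `saccade` is assumed to be a dict parsed from a saccade report, with
    times in ms, gaze positions in degrees of visual angle and peak
    velocity in deg/s; all field names here are illustrative.
    """
    # Saccade latency: time from target onset to saccade initiation.
    latency_ms = saccade["start_time_ms"] - target_onset_ms

    # Amplitude error: difference between the initial saccade amplitude
    # and the target eccentricity (8 deg horizontal, 5 deg vertical).
    amplitude_deg = np.hypot(saccade["end_x_deg"] - saccade["start_x_deg"],
                             saccade["end_y_deg"] - saccade["start_y_deg"])
    amplitude_error_deg = abs(amplitude_deg - target_eccentricity_deg)

    # Peak velocity: maximum velocity reached during the saccade.
    peak_velocity_deg_s = saccade["peak_velocity_deg_s"]

    return latency_ms, amplitude_error_deg, peak_velocity_deg_s
```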

Two tasks were developed to assess simple and complex emotion recognition. For both of these emotion recognition tasks, participants were presented with a fixation cross. Once they had looked at this, four images exhibiting particular emotions (faces for the simple task and eyes for the complex task) appeared, one in each corner of the screen, for 10 s. A target emotion word then appeared in the centre of the screen for 1 s. This emotion word matched one of the four previous images. Lastly, the original four images reappeared on the screen for 5 s alongside the target word (Fig. 1). Display timings were guided by the visual world paradigm literature [24]. Participants were told only to look at the images on the screen, with no other instructions. In total, the test took 10–15 min to complete.
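
The trial timeline can be summarised schematically as below; this is simply an encoding of the phases and durations described above (the class and field names are ours, not part of any stimulus-presentation toolkit).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrialPhase:
    name: str
    duration_s: Optional[float]  # None = phase ends on fixation detection

# One emotion recognition trial, as described in the text.
EMOTION_TRIAL = [
    TrialPhase("fixation_cross", None),  # advance once the cross is fixated
    TrialPhase("images_pre", 10.0),      # four faces (simple) or eyes (complex)
    TrialPhase("emotion_word", 1.0),     # target word in the screen centre
    TrialPhase("images_post", 5.0),      # images reappear alongside the word
]
```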

Fig. 1 Examples of the stimuli for the (a) simple emotion recognition task and (b) complex emotion recognition task

For the simple emotion recognition task, the images used were selected from the NimStim Face Stimuli Set (https://www.macbrain.org/resources.htm) and included faces displaying the six basic emotions of happiness, surprise, sadness, disgust, anger and fear. There was a total of 24 trials with each of the emotions being the target image on four occasions (Fig. 1a). For the complex emotion recognition task, images were selected from the Reading the Mind in the Eyes Task [25], a test containing pictures of eyes with associated complex emotion labels such as contemplative and suspicious (Fig. 1b). There was a total of 20 trials for this task.

To analyse the data after the tasks had been performed, each of the four images was selected as an interest area and the participant’s dwell time within each interest area (i.e. how long they had spent looking at that image) was measured both before and after the emotion word was presented on the screen. As the length of image presentation was different before (10 s) and after (5 s) the emotion word was presented, a percentage dwell time was calculated as:

$$ \text{Dwell time}\ (\%) = \frac{\text{dwell time}}{\text{presentation time}} \times 100 $$

Performance on each trial was measured by the difference between percentage dwell time in the interest area of the image showing the target emotion after presentation of the emotion word compared to before it was presented. This measure was calculated as:

$$ \text{Dwell time change score} = \text{dwell time}\ (\%)_{\text{post}} - \text{dwell time}\ (\%)_{\text{pre}} $$
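
A minimal sketch of these two formulas, assuming dwell and presentation times in milliseconds and the 10 s pre-word and 5 s post-word windows described above:

```python
def dwell_time_pct(dwell_ms: float, presentation_ms: float) -> float:
    """Dwell time as a percentage of the presentation window."""
    return 100.0 * dwell_ms / presentation_ms

def dwell_time_change_score(dwell_pre_ms: float, dwell_post_ms: float,
                            pre_window_ms: float = 10_000,
                            post_window_ms: float = 5_000) -> float:
    """Change score for one interest area on one trial: percentage dwell
    time after the emotion word minus percentage dwell time before it."""
    return (dwell_time_pct(dwell_post_ms, post_window_ms)
            - dwell_time_pct(dwell_pre_ms, pre_window_ms))
```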

The hypothesis was that controls would look approximately equally at all four images before the emotion word appeared but then spend more time looking at the target image and less time at the other three images after the emotion word appeared, i.e. a positive dwell time change score for the target, whereas people with an impairment of emotion recognition would look more equally at all four images after the emotion word appeared (as well as before), i.e. the dwell time change score would be near to zero.

A dwell time change score was also calculated for the other three images. These images were chosen to consist of one ‘similar’ image of the same valence as the target (i.e. a positive emotion if the target was positive, or a negative emotion if the target was negative) and two ‘distractor’ images of the opposite valence to the target (i.e. two negative emotions if the target was positive, and vice versa). The two distractor change scores were averaged together to give one total distractor dwell time change score. The hypothesis was that controls would have a negative dwell time change score for these non-target interest areas whereas people with emotion recognition problems would again have a score close to zero.

For each participant, the dwell time change scores were averaged across all of the trials, giving a mean dwell time change score as a summary measure of performance for target, similar and distractor images within each task.
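
Putting the pieces together, a sketch of this per-participant aggregation might look as follows; the trial record keys are illustrative assumptions, not the Data Viewer export format.

```python
from statistics import mean

def summarise_participant(trials):
    """Collapse per-trial change scores into one mean dwell time change
    score per interest-area type for a participant.

    `trials` is assumed to be a list of dicts with keys 'target',
    'similar', 'distractor_1' and 'distractor_2', each holding that
    trial's dwell time change score (key names are illustrative).
    """
    target = mean(t["target"] for t in trials)
    similar = mean(t["similar"] for t in trials)
    # The two opposite-valence distractors are first averaged within each
    # trial to give a single distractor score, as described above.
    distractor = mean((t["distractor_1"] + t["distractor_2"]) / 2
                      for t in trials)
    return {"target": target, "similar": similar, "distractor": distractor}
```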

For each group, we then calculated the mean of the participants’ mean dwell time change scores. To avoid double use of the word mean, and therefore for easier readability, we use the word average here, i.e. the overall group result is the average mean dwell time change score.

Structural brain imaging

All participants underwent volumetric T1-weighted imaging in a Siemens Prisma 3T magnetic resonance imaging scanner. An automated atlas segmentation propagation and label fusion strategy known as Geodesic Information Flows [26] was used to parcellate the T1-weighted scans from each participant to generate specific regions of interest (ROI): orbitofrontal cortex; dorsolateral prefrontal cortex (DLPFC); ventromedial prefrontal cortex (VMPFC); temporal, parietal and occipital cortices; striatum and amygdala. All of the ROI volumes are expressed as a percentage of total intracranial volume, computed with SPM12 (Statistical Parametric Mapping, Wellcome Trust Centre for Neuroimaging, London, UK) running under Matlab R2014b (Mathworks, USA) [27].

Statistical analysis

All eye tracking data were loaded into the Eyelink 1000 Plus Data Viewer (SR Research) for pre-processing and then exported to Stata (version 14.2) for statistical analysis. Normality was assessed using Q-Q plots.

For the pro-saccade task, a saccade report was generated, and the first saccade meeting the following criteria was used for the analysis: it did not contain a blink, did not start before the onset of the target, went in the same direction as the target and started at the fixation cross. Linear regression models were used to compare saccade latency, amplitude error and peak velocity between groups (bootstrapping with 1000 replications was used for the latter two measures as they were not normally distributed).
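
A sketch of this saccade selection rule is shown below, assuming each saccade report row has been parsed into a dictionary (all field names, and the `fixation_region` helper, are illustrative).

```python
def first_valid_saccade(saccades, target_onset_ms, target_direction,
                        fixation_region):
    """Return the first saccade meeting the inclusion criteria above,
    or None if no saccade on the trial qualifies."""
    for s in saccades:  # saccades assumed ordered by start time
        if s["contains_blink"]:
            continue                                   # no blink
        if s["start_time_ms"] <= target_onset_ms:
            continue                                   # not anticipatory
        if s["direction"] != target_direction:
            continue                                   # towards the target
        if not fixation_region.contains(s["start_x_deg"], s["start_y_deg"]):
            continue                                   # launched from fixation
        return s
    return None
```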

For both the simple and complex emotion recognition tasks, a mixed effects model was used to compare the mean dwell time change scores between the two groups for each of the types of interest area (i.e. target, similar or distractor). The model therefore included participant group, type of interest area and their interaction, with the dwell time change score for the interest areas on each trial as the outcome variable. Age, sex and education were included as covariates in the analysis. Crossed random effects for participant and trial number (i.e. 1–24 for the simple emotion recognition task and 1–20 for the complex emotion recognition task) were included to allow for correlations between repeated measures on the same participant and correlations between responses to the same trial by different participants. As the data were not normally distributed, bootstrapping with 1000 replications, clustered on participant, was used to provide non-parametric bias-corrected accelerated confidence intervals for statistical inference.
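
The analysis was run in Stata; purely as an illustration, a roughly equivalent model could be specified in Python with statsmodels as below. statsmodels has no native crossed random effects, so both factors are declared as variance components within a single all-encompassing group; the column names are assumptions, and the bootstrap inference described above is noted but not implemented.

```python
import statsmodels.formula.api as smf

# df: one row per participant x trial x interest-area type, with columns
# 'change_score', 'group', 'ia_type', 'age', 'sex', 'education',
# 'participant' and 'trial' (column names assumed for this sketch).
df["const_group"] = 1  # single group holding all observations

model = smf.mixedlm(
    "change_score ~ group * ia_type + age + sex + education",
    data=df,
    groups="const_group",
    # Crossed random intercepts for participant and trial, expressed as
    # variance components within the single group.
    vc_formula={"participant": "0 + C(participant)",
                "trial": "0 + C(trial)"},
)
result = model.fit()
print(result.summary())
# The paper's inference additionally used bootstrapping (1000 replications,
# clustered on participant) for bias-corrected accelerated confidence
# intervals; that resampling loop is omitted here for brevity.
```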

For the simple emotion recognition task only, similar mixed effects models with bootstrapping were performed to investigate whether the mean dwell time change score for the target interest area differed both between and within groups for each of the different emotions.

To investigate the cognitive and neuroanatomical correlates of the simple and complex emotion recognition tasks, a correlation analysis with inference based on bootstrap standard errors from 1000 replications was performed in the bvFTD group between the dwell time change score for the target interest area and (i) the neuropsychological tests (including cognitive domains that potentially may correlate with the eye tracking tasks, i.e. social cognition, executive function, speed of processing and language) and (ii) the MRI ROI volumes (including specific neuroanatomical regions that have previously been implicated as being part of a social cognition network, with the inclusion therefore of specific frontal subregions).
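
The paper reports rho values with inference from bootstrap standard errors; a sketch of that style of analysis, assuming Spearman’s rank correlation and participant-level resampling, could look like the following.

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_spearman(x, y, n_boot=1000, seed=0):
    """Spearman's rho with a bootstrap standard error, resampling
    participants with replacement (i.e. clustered on participant).
    Returns the point estimate and the bootstrap SE."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    rho = spearmanr(x, y)[0]
    n = len(x)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)   # one bootstrap resample of participants
        boot[i] = spearmanr(x[idx], y[idx])[0]
    return rho, boot.std(ddof=1)

# Hypothetical usage: correlate target change scores with right VMPFC
# volume (% TIV) across the bvFTD participants.
# rho, se = bootstrap_spearman(change_scores, vmpfc_volumes)
```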

Results

No differences were observed between the bvFTD group and the controls in any of the measures on the pro-saccade task (Supplementary Table 1).

In the simple emotion recognition task, the control group spent significantly more time looking at the target image after the emotion word was presented than the bvFTD group (p < 0.05): the average (standard deviation) mean dwell time change score in the target interest area was 35 (27)% for the control group compared with only 4 (18)% for the bvFTD group (Table 2, Figs. 2 and 3). The control group also spent significantly less time looking at the similar and distractor images after the emotion word was presented than the bvFTD group: the average (standard deviation) mean dwell time change score in the similar interest area was − 10 (15)% for the control group compared with − 3 (15)% for the bvFTD group, and in the distractor interest area was − 10 (16)% for the control group compared with − 2 (13)% for the bvFTD group (Table 2, Fig. 2).

Table 2 Comparison of the average (standard deviation) mean dwell time change scores between control and bvFTD groups in the simple and complex emotion recognition tasks for the target, similar and distractor interest areas. Significant differences between groups are shown in bold
Fig. 2 Mean dwell time change scores for bvFTD and control groups in the simple and complex emotion recognition tasks. Black significance lines indicate between-group differences, whilst orange and blue significance lines indicate within-group differences (bvFTD and controls respectively)

Fig. 3 Heat maps showing average performance of controls and bvFTD participants on example trials from the (a) simple emotion recognition task and (b) complex emotion recognition task. The colour bar shows the time spent looking at a particular area in milliseconds after the emotion word is presented, where red is the most time spent. The controls look significantly more at the target image after the emotion word is presented, whereas the bvFTD participants look to a lesser extent at the target image

Within the control group, there was a significant difference in the mean dwell time change scores between the target and similar interest areas (45%), and target and distractor interest areas (44%), but not the similar and distractor interest areas (0%) (Table 3, Fig. 2). Within the bvFTD group, there was a similar pattern but to a lesser extent: there was a significant difference in the mean dwell time change scores between the target and similar interest areas (7%), and target and distractor interest areas (6%), but not the similar and distractor interest areas (− 1%) (Table 3, Fig. 2).

Table 3 Comparison within each of the control and bvFTD groups of the average mean dwell time change scores across interest areas (target vs. similar, target vs. distractor, and similar vs. distractor) in the simple and complex emotion recognition tasks. Significant differences are shown in bold

A similar pattern of results was seen in the complex emotion recognition task, with the control group spending significantly more time looking at the target image after the emotion word was presented than the bvFTD group (p < 0.05): the average (standard deviation) mean dwell time change score in the target interest area was 15 (26)% for the control group compared with only 2 (18)% for the bvFTD group (Table 2, Fig. 2). However, there was no difference between the groups in the similar or distractor interest areas.

Also similarly to the simple emotion recognition task, there was a significant difference in the mean dwell time change scores between the target and similar interest areas (17%), and target and distractor interest areas (18%), but not the similar and distractor interest areas (0%) in the control group for the complex emotion recognition task (Table 3, Fig. 2). In the bvFTD group, these differences were also significant between the target and similar interest areas (4%), and target and distractor interest areas (5%), but not the similar and distractor interest areas (1%) (Table 3, Fig. 2).

The bvFTD group had a significantly lower average mean dwell time change score on all six basic emotions than the control group in the simple emotion recognition task (Fig. 4, Supplementary Table 2). Within the control group, the performance was similar across all emotions, except for fear where the average mean dwell time change score was significantly less than all of the other emotions (Fig. 4, Supplementary Table 3). No significant differences were seen in the bvFTD group across the emotions (Fig. 4, Supplementary Table 3).

Fig. 4 Mean dwell time change scores for bvFTD and control groups for the target interest area in the individual emotions in the simple emotion recognition task. Blue significance lines represent within-control-group differences across the emotions, whilst black significance lines represent significant differences between the control and bvFTD groups on each emotion

The mean dwell time change score for the target interest area in the bvFTD group significantly (negatively) correlated with performance on the D-KEFS Color-Word Interference Test (rho = − 0.42, p = 0.042) for the complex emotion recognition task (Supplementary Table 4), but not for the simple emotion recognition task (rho = − 0.34, p = 0.128). Although the rho values were similar for the correlations of the eye tracking tasks with the social cognition test used (mini-SEA), neither was significant (for the simple emotion recognition task, rho = 0.38, p = 0.178; for the complex emotion recognition task, rho = 0.36, p = 0.195). There were no other significant correlations with the neuropsychological tasks, including with the language task (Supplementary Table 4).

The mean dwell time change score for the target interest area in the bvFTD group significantly (positively) correlated with the volume of the right ventromedial prefrontal cortex (rho = 0.33, p = 0.022) and right orbitofrontal cortex (rho = 0.33, p = 0.031) in the complex emotion recognition task (Table 4), and although there were no significant correlations in the simple emotion recognition task, the rho value was highest in the same regions of interest: the right ventromedial prefrontal cortex (rho = 0.26, p = 0.079) and right orbitofrontal cortex (rho = 0.26, p = 0.107).

Table 4 Correlations between the mean dwell time change scores for the target interest area and the neuroanatomical regional volumes within the bvFTD group in the simple and complex emotion recognition tasks. Bold indicates a significant correlation

Discussion

In this study, we show that instructionless eye tracking tasks are able to detect simple and complex emotion recognition deficits in individuals with bvFTD and that lower mean dwell time change scores correlate with atrophy of the right orbitofrontal and ventromedial prefrontal cortices.

We have developed a short, simple test of social cognition with essentially no test instructions, reducing the difficulties that arise in more demanding tasks when comprehension is impaired. Importantly, controls do not score at ceiling, unlike on many standard social cognition tasks, and furthermore, all of the individuals with bvFTD, who in this study were mildly to moderately impaired, were able to complete the tests. Future studies examining practice effects and the validity of the tasks over time will be important.

In both tasks, as hypothesised, the control group had a positive dwell time change score for the target interest area (35% for the simple and 15% for the complex task) and a negative dwell time change score for the non-target interest areas (− 10% for the simple and − 2% for the complex task). We predicted that the bvFTD group would have a dwell time change score approaching zero for both target and non-target interest areas. Instead, as a group, the dwell time change score for the target interest area was 4% for the simple task and 2% for the complex task, and for the non-target interest areas was − 1 to − 3%. Whilst these scores were significantly lower than those of the control group, Fig. 2 shows a small number of individuals who seem able to perform the task well, their scores overlapping with those of the control group. Further work in larger groups, and on a longitudinal basis, will be needed to study such participants in order to understand differential performance and its underlying pathophysiology.

A larger overlap in mean dwell time change scores between bvFTD and controls was seen in the complex emotion recognition task compared with the simple task. Whilst this suggests the simple emotion recognition task may be more helpful in diagnosing social cognitive impairment in this bvFTD population, the same may not hold for those with very early bvFTD or those who are presymptomatic (i.e. in the prodromal stage of genetic FTD). The increased difficulty of the complex task means that it may be more sensitive to subtle changes at this stage of the disease, when performance may remain normal on the simple emotion recognition task. Investigation of presymptomatic genetic FTD mutation carriers, particularly studied longitudinally as they phenoconvert, will be helpful to understand this better.

When looking at performance across the individual emotions in the simple emotion recognition task, the control group was less able to identify fearful expressions than the other emotions. This is consistent with prior literature showing fear to be one of the most difficult of the basic emotions to recognise [28]. However, no significant differences were observed between the emotions in the bvFTD group, which differs from a number of prior studies in FTD showing worse performance on negative emotions compared with positive emotions [6,7,8].

The only significant correlation with standard ‘pen and paper’ cognitive tests was between the complex emotion recognition task and the D-KEFS Color-Word Interference Test, and even this was relatively weak. There was a similar trend with the simple emotion recognition task, suggesting that both tests may have an executive function component. In contrast, although the rho values were similar, we did not find evidence of a significant correlation with scores on the mini-SEA (or its individual subtests), the standard social cognition test performed in all of the participants. The association with executive function but not social cognition may have a number of explanations: firstly, the tasks may assess more subtle deficits than are picked up by the standard social cognition test (as they were designed to do, and as was seen in a previous novel eye tracking test that identified more individuals as having deficits than the standard pen and paper task [22]); secondly, there is a close interrelationship between executive function and many aspects of social cognition, as highlighted by previous studies [29,30,31]; thirdly, the small sample size may have precluded detection of a significant correlation with the mini-SEA (the rho values were nonetheless 0.38 for the simple task and 0.36 for the complex task); lastly, as with many psychometric tests, the tasks may well tap into multiple cognitive components even if they are primarily social cognition tasks.

The neuroimaging analysis demonstrated an association of lower mean dwell time change score with atrophy of the right ventromedial prefrontal cortex. This is consistent with previous findings that the right ventromedial prefrontal cortex plays a central role in social cognition and the recognition of emotions in faces [32, 33]. An association was also seen with the right orbitofrontal cortex, an area involved in social decision-making [34], and previously identified as linked to emotion recognition deficits in individuals with bvFTD [11, 35]. Whilst these findings provide some support that the novel eye tracking tasks may be measuring social cognition, these regions are also implicated in other cognitive domains.

Overall, we have developed a novel set of tasks which allow detection of impaired social cognition in FTD. The study adds to the literature showing the presence of emotion recognition deficits in FTD but the nature of these novel tasks means that more subtle deficits may be detectable compared to prior tests. Further studies in presymptomatic genetic FTD populations such as the GENFI (www.genfi.org) or ALLFTD (www.allftd.org) studies will be important to see how early social cognition difficulties can be seen in the disease process. This has implications for future clinical trials in terms of stratifying participants, but also in detecting deficits that might help make earlier diagnoses of bvFTD. We believe that this initial exploratory study also provides the theoretical basis for developing further instructionless eye tracking tasks that could detect other subdomains of social cognition impairment such as theory of mind and moral reasoning.

Limitations

The study has a number of limitations. Firstly, whilst the sample size is typical of studies investigating bvFTD, given the rarity of the condition, the study would benefit from replication in a larger cohort. Secondly, as with all neuropsychometric tests, it is difficult to establish whether a task assesses a specific cognitive domain or whether other abilities influence performance, for example executive function having an impact on social cognitive abilities as mentioned above. The tasks in this study were designed to remove as many confounding factors as possible by keeping them simple and instructionless, but further studies in other disorders with impaired social cognition but intact executive function and other cognitive domains will be helpful to understand the tasks further. Thirdly, it is possible that the individuals with bvFTD, who were impaired on a language task compared with controls, had trouble comprehending the emotion words in the complex task, which may have limited their ability to perform the eye tracking tests; however, there was no correlation of scores on either test with the language task. Fourthly, there was no significant difference in scores for the ‘similar’ and ‘distractor’ items on either the simple or complex task, suggesting that both act simply as non-target items and that future analyses should focus on ‘target’ and ‘non-target’ interest areas only. Lastly, a better understanding of longitudinal performance and the effects of repeated testing is needed, although the correct answers are never given to participants, potentially limiting any practice effects.

Conclusions

In summary, the results suggest that instructionless eye tracking tests are a viable tool for assessing social cognition in bvFTD. Further work in a larger control population and in other disorders with social cognition deficits will be needed to better understand the replicability and reliability of the tasks, but these novel tasks offer the opportunity for a quantitative measure of social cognition that may well be helpful as an outcome measure in future trials.

Availability of data and materials

The datasets generated during and/or analysed during the current study are not publicly available, as the conditions of our ethical approval do not permit public archiving of individual anonymised data, but are available from the corresponding author on reasonable request.

References

  1. Warren JD, Rohrer JD, Rossor MN. Frontotemporal dementia. BMJ. 2013;347:f4827.
  2. Rascovsky K, et al. Sensitivity of revised diagnostic criteria for the behavioural variant of frontotemporal dementia. Brain. 2011;134(9):2456–77.
  3. Adolphs R. The social brain: neural basis of social knowledge. Annu Rev Psychol. 2009;60:693.
  4. Gossink F, et al. Social cognition differentiates behavioral variant frontotemporal dementia from other neurodegenerative diseases and psychiatric disorders. Am J Geriatr Psychiatry. 2018;26(5):569–79.
  5. Kumfor F, et al. Beyond the face: how context modulates emotion processing in frontotemporal dementia subtypes. Brain. 2018;141(4):1172–85.
  6. Fernandez-Duque D, et al. Empathy in frontotemporal dementia and Alzheimer’s disease. J Clin Exp Neuropsychol. 2010;32(3):289–98.
  7. Lavenu I, et al. Perception of emotion in frontotemporal dementia and Alzheimer disease. Alzheimer Dis Assoc Disord. 1999;13(2):96–101.
  8. Kipps CM, Mioshi E, Hodges JR. Emotion, social functioning and activities of daily living in frontotemporal dementia. Neurocase. 2009;15(3):182–9.
  9. Baez S, et al. Primary empathy deficits in frontotemporal dementia. Front Aging Neurosci. 2014;6:262.
  10. Sedeno L, et al. Brain network organization and social executive performance in frontotemporal dementia. J Int Neuropsychol Soc. 2016;22(2):250–62.
  11. Couto B, et al. Structural neuroimaging of social cognition in progressive non-fluent aphasia and behavioral variant of frontotemporal dementia. Front Hum Neurosci. 2013;7:467.
  12. Buhl C, Stokholm J, Gade A. Clinical utility of short social cognitive tests in early differentiation of behavioral variant frontotemporal dementia from Alzheimer’s disease. Dement Geriatr Cogn Disord Extra. 2013;3(1):376–85.
  13. Schroeter ML, et al. A modified Reading the Mind in the Eyes test predicts behavioral variant frontotemporal dementia better than executive function tests. Front Aging Neurosci. 2018;10:11.
  14. Schwartzman JS, et al. The eye-tracking of social stimuli in patients with Rett syndrome and autism spectrum disorders: a pilot study. Arq Neuropsiquiatr. 2015;73(5):402–7.
  15. Bortolon C, et al. Self-face recognition in schizophrenia: an eye-tracking study. Front Hum Neurosci. 2016;10:3.
  16. Marx S, et al. Validation of mobile eye-tracking as novel and efficient means for differentiating progressive supranuclear palsy from Parkinson’s disease. Front Behav Neurosci. 2012;6:88.
  17. Boxer AL, et al. Medial versus lateral frontal lobe contributions to voluntary saccade control as revealed by the study of patients with frontal lobe degeneration. J Neurosci. 2006;26(23):6354–63.
  18. Douglass A, et al. Behavioral variant frontotemporal dementia performance on a range of saccadic tasks. J Alzheimers Dis. 2018;65(1):231–42.
  19. Garbutt S, et al. Oculomotor function in frontotemporal lobar degeneration, related disorders and Alzheimer’s disease. Brain. 2008;131(5):1268–81.
  20. Merck C, et al. Overreliance on thematic knowledge in semantic dementia: evidence from an eye-tracking paradigm. Neuropsychology. 2020;34(3):331–49.
  21. Ungrady MB, et al. Naming and knowing revisited: eyetracking correlates of anomia in progressive aphasia. Front Hum Neurosci. 2019;13:354.
  22. Primativo S, et al. Eyetracking metrics reveal impaired spatial anticipation in behavioural variant frontotemporal dementia. Neuropsychologia. 2017;106:328–40.
  23. Shakespeare TJ, et al. Abnormalities of fixation, saccade and pursuit in posterior cortical atrophy. Brain. 2015;138(7):1976–91.
  24. Huettig F, Rommers J, Meyer AS. Using the visual world paradigm to study language processing: a review and critical evaluation. Acta Psychol. 2011;137(2):151–71.
  25. Baron-Cohen S, et al. The “Reading the Mind in the Eyes” test revised version: a study with normal adults, and adults with Asperger syndrome or high-functioning autism. J Child Psychol Psychiatry. 2001;42(2):241–51.
  26. Cardoso MJ, et al. Geodesic information flows: spatially-variant graphs and their application to segmentation and fusion. IEEE Trans Med Imaging. 2015;34(9):1976–88.
  27. Malone IB, et al. Accurate automatic estimation of total intracranial volume: a nuisance variable with less nuisance. Neuroimage. 2015;104:366–72.
  28. Ruffman T, et al. A meta-analytic review of emotion recognition and aging: implications for neuropsychological models of aging. Neurosci Biobehav Rev. 2008;32(4):863–81.
  29. Van Overwalle F. Social cognition and the brain: a meta-analysis. Hum Brain Mapp. 2009;30(3):829–58.
  30. Northoff G, et al. Reciprocal modulation and attenuation in the prefrontal cortex: an fMRI study on emotional–cognitive interaction. Hum Brain Mapp. 2004;21(3):202–12.
  31. Amodio DM, Frith CD. Meeting of minds: the medial frontal cortex and social cognition. Nat Rev Neurosci. 2006;7(4):268–77.
  32. Grimm S, et al. Altered negative BOLD responses in the default-mode network during emotion processing in depressed subjects. Neuropsychopharmacology. 2009;34(4):932–43.
  33. Tranel D, Bechara A, Denburg NL. Asymmetric functional roles of right and left ventromedial prefrontal cortices in social conduct, decision-making, and emotional processing. Cortex. 2002;38(4):589–612.
  34. O’Doherty J, et al. Beauty in a smile: the role of medial orbitofrontal cortex in facial attractiveness. Neuropsychologia. 2003;41(2):147–55.
  35. Multani N, et al. Emotion detection deficits and changes in personality traits linked to loss of white matter integrity in primary progressive aphasia. NeuroImage: Clin. 2017;16:447–54.


Acknowledgements

We thank the research participants for their contribution to the study.

Funding

The Dementia Research Centre is supported by Alzheimer’s Research UK, Alzheimer’s Society, Brain Research UK and The Wolfson Foundation. This work was supported by the NIHR UCLH Biomedical Research Centre, the Leonard Wolfson Experimental Neurology Centre (LWENC) Clinical Research Facility, and the UK Dementia Research Institute, which receives its funding from UK DRI Ltd., funded by the UK Medical Research Council, Alzheimer’s Society and Alzheimer’s Research UK. JDR is supported by an MRC Clinician Scientist Fellowship (MR/M008525/1) and has received funding from the NIHR Rare Disease Translational Research Collaboration (BRC149/NS/MH). JW received funding support from the Alzheimer’s Society and NIHR UCLH Biomedical Research Centre. This work was also supported by the MRC UK GENFI grant (MR/M023664/1), the Bluefield Project and the JPND GENFI-PROX grant (2019-02248). The GIF template database includes volumetric MRI scans from the University College London Genetic FTD Initiative (GENFI) study (www.genfi.org.uk), which is funded by the Medical Research Council UK GENFI grant (MR/M023664/1). No funding organisation had a role in the design of the study nor the collection, analysis or interpretation of the data.

Author information


Contributions

LR was involved with the conceptualisation and the methodological development of the study. LR collected and analysed the data and was responsible for the writing of the manuscript. CG and RC supported the administration of the project, as well as carrying out the collection of the data. JN assisted with the analysis of the data and the writing of the manuscript. JW provided funding, as well as supported the supervision and administration of the study. DK was involved in the conceptualisation and methodology of the study and assisted with the writing of the manuscript. JDR provided funding for the project, as well as being involved with the conceptualisation and methodology of the project. JDR was also involved with the writing of the manuscript and administration of the project. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lucy L. Russell.

Ethics declarations

Ethics approval and consent to participate

Ethical approval for the study was gained from the local Research Ethics Committees (NRES Committee London – Queen Square, HRA NRES Centre Manchester: 140377 and London - Camden & Kings Cross Research Ethics Committee: 150805). All participants provided written informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.




Cite this article

Russell, L.L., Greaves, C.V., Convery, R.S. et al. Novel instructionless eye tracking tasks identify emotion recognition deficits in frontotemporal dementia. Alz Res Therapy 13, 39 (2021). https://doi.org/10.1186/s13195-021-00775-x
