Sensitivity to change and prediction of global change for the Alzheimer’s Questionnaire
© Malek-Ahmadi et al.; licensee BioMed Central. 2015
Received: 21 July 2014
Accepted: 22 December 2014
Published: 8 January 2015
Longitudinal assessment of cognitive decline in amnestic mild cognitive impairment (aMCI) and Alzheimer’s disease (AD) often involves the use of both informant-based and objective cognitive assessments. As efforts have focused on identifying individuals in pre-clinical stages, instruments that are sensitive to subtle cognitive changes are needed. The Alzheimer’s Questionnaire (AQ) has demonstrated high sensitivity and specificity in identifying aMCI and AD; however, its ability to measure longitudinal change has not been assessed. The aims of this study are to assess the sensitivity to change of the AQ and to determine whether the AQ predicts change in global cognition and function in cognitively normal (CN), aMCI, and AD subjects.
Data from 202 individuals participating in a brain and body donation program were utilized for this study (101 CN, 62 aMCI, 39 AD). AD and aMCI individuals were matched on age, education, and gender to CN individuals. Sensitivity to change of the AQ was assessed in addition to the AQ’s ability to predict change in global cognition and function. The Mini Mental State Exam (MMSE) and Functional Activities Questionnaire (FAQ) were used as gold standard comparisons of cognition and function. Sample size calculations for a 25% treatment effect were also carried out for all three groups.
The AQ demonstrated small sensitivity to change in the aMCI and CN groups (d = 0.33, d = 0.23, respectively) and moderate sensitivity to change in the AD group (d = 0.43). The AQ was associated with increases in the Clinical Dementia Rating Global Score (OR = 1.20 (1.09, 1.32), P <0.001). Sample size calculations found that the AQ would require substantially fewer subjects than the MMSE given a 25% treatment effect.
Although the AQ demonstrated small sensitivity to change in aMCI and CN individuals in terms of effect size, the AQ may be superior to objective cognitive tests in terms of required sample size for a clinical trial. As clinicians and researchers continue to identify and treat individuals in earlier stages of AD, there is a need for instruments that are sensitive to cognitive changes in these earlier stages.
Longitudinal assessment of cognitive decline in amnestic mild cognitive impairment (aMCI) and Alzheimer’s disease (AD) often involves the use of both informant-based and patient-based assessments to measure the degree of change in cognition and function [1,2]. In both clinical and research settings, the two methods are often used in conjunction in order to glean a more accurate picture of an individual’s current cognitive status relative to baseline or other prior time points. A major issue that both clinicians and researchers grapple with is the degree to which a particular instrument is sensitive to change over time. For clinicians, determining the significance of change from one time to the next has implications for decisions regarding treatment and resource use (that is, assisted living, in-home care, and so on). Clinicians may also benefit from instruments that are sensitive to change over time in order to satisfy the Affordable Care Act’s cognitive screening requirement for Medicare recipients. For researchers and clinical trialists, the issue of sensitivity to change for a particular instrument has significant ramifications for whether or not a meaningful treatment effect will be detected between placebo and treatment groups.
The need to identify individuals as early as possible in the AD disease process has prompted researchers to begin conducting studies with individuals who are classified as having pre-symptomatic AD. Although no formal diagnostic criteria currently exist for this classification, it is used to classify individuals whose biological markers are consistent with the pathological presence of AD, but who are cognitively normal and are considered to be at risk for eventually developing clinical AD. An interesting study by Riley et al.  compared cognitively normal individuals who, at autopsy, met National Institute on Aging (NIA)-Reagan criteria for no- and low-likelihood of AD with cognitively normal individuals who met criteria for intermediate- and high-likelihood of AD. This study found that the intermediate- and high-likelihood groups had a steeper rate of decline on several cognitive measures across several domains, although all individuals in the study were within normal limits on cognitive testing. Riley et al.  suggest that rates of longitudinal cognitive decline may be informative in identifying individuals with pre-symptomatic AD, even when cognitive testing falls within normal limits. Gavett et al.  found that informant-reported cognitive symptoms on the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) correlated well with longitudinal neuropsychological performance and that informant-reported changes in cognition were a robust predictor of cognitive decline in a high-functioning, cognitively normal group. Both of these studies demonstrate that cognitive decline in cognitively normal individuals can be reliably detected and may be used to predict subsequent development of clinical AD.
The Alzheimer’s Questionnaire (AQ) was originally introduced in 2010  and has been validated as an accurate informant-based measure of cognition and function for both aMCI and AD [5-7]. The AQ also correlates well with established measures of cognition and global function . Although the AQ has demonstrated its validity in cross-sectional studies, its ability to accurately measure change in cognition over time has not been assessed. Instruments such as the Mini Mental State Exam (MMSE)  and the Functional Activities Questionnaire (FAQ)  are commonly used to assess changes in cognition and function in aMCI and AD. Clark et al.  report that although the MMSE may be sufficient to use as a screening instrument for cognitive impairment, its utility as an instrument to assess change over time accurately is limited by high measurement error and high variability of annual change between individuals. A recent study by Costa et al.  found that the Montreal Cognitive Assessment (MoCA) yielded small sensitivity to change in prodromal AD and moderate sensitivity to change in mild AD. Recent studies suggest that the FAQ is a significant predictor of conversion to AD from aMCI  and has also been associated with longitudinal decreases in glucose metabolism associated with aMCI and AD . Rizk-Jackson et al.  found that the FAQ was able to detect functional decline in cognitively normal individuals prior to the presence of impairment on objective cognitive tests.
The first aim of this study was to assess the sensitivity to change of the AQ through the use of effect size and sample size calculations for a hypothetical placebo-controlled clinical trial. For comparison, the MMSE and FAQ were also used in order to gauge the AQ’s performance against instruments that have been more widely used. The second aim of the study was to determine how well one-year change in AQ total score predicts global change as measured by the Functional Assessment Staging Test (FAST) , Global Deterioration Scale (GDS) , and the Clinical Dementia Rating Global Score (CDR-GS) .
Data from the two most recent annual visits for 202 individuals participating in a brain and body donation program  were utilized for this study. Participants in this program were recruited predominantly from the northwest region of the Phoenix, Arizona metropolitan area. Approval for the brain and body donation program was granted by the Banner Health Institutional Review Board and informed consent was obtained from all individuals prior to enrolling in the program. The sample for this study ranged in age from 57 to 97 years with a mean of 81.70 ± 7.25 and had a mean education level of 14.74 ± 2.54 years and included 95 women and 107 men.
Of the 202 individuals, 101 were classified as cognitively normal (CN), 62 were classified as amnestic mild cognitive impairment (aMCI), and 39 were classified as Alzheimer’s disease (AD) at the first visit. Each aMCI and AD individual was matched on age, education, and gender to a CN individual, without replacement. When an exact match could not be found, a tolerance of ± 2 years was used for age and education in order to obtain an appropriate match. Both single and multiple domain aMCI cases were categorized as aMCI and both possible and probable AD were categorized as AD. The AD cases met National Institute of Neurological and Communicative Disorders and Stroke – Alzheimer’s Disease and Related Disorders Association (NINCDS-ADRDA) criteria  for a clinical diagnosis of probable or possible Alzheimer’s disease. aMCI cases were diagnosed as such based on Petersen criteria . The CN cases were defined as having no limitations of activities of daily living by informant report and were within normal limits on neuropsychological testing.
Consensus diagnosis with a neurologist, geriatric psychiatrist and neuropsychologist was used to determine the clinical status of each individual. Consensus diagnoses were made based on neuropsychological testing results, neurological and physical exam, and interviews with an informant that assessed global cognitive status, functional status, and mood and behavioral status.
AQ [5,6] – A 21-item, informant-based dementia assessment designed for ease of use in a primary care setting. AQ items are divided into five domains including Memory, Orientation, Functional Ability, Visuospatial Ability, and Language. Items are posed in a yes/no format with the sum of ‘yes’ items equaling the total AQ score (0-27). Six items known to be predictive of a clinical AD diagnosis are weighted more heavily in the total score by each being worth two points rather than one.
FAQ  – An informant-based measure of instrumental activities of daily living (IADLs) which scores 10 items on a 0 to 3 scale, with higher scores corresponding to greater impairment.
MMSE  – A brief, 30-item cognitive screening instrument that includes items on Orientation, Memory, Attention, Language and Visuospatial functions.
FAST  – A dementia staging instrument that classifies individuals as Normal Aging, Possible Mild Cognitive Impairment, Mild Cognitive Impairment, Mild Dementia, Moderate Dementia, Moderately Severe Dementia and Severe Dementia using a 1 to 7 scale where higher ratings indicate greater severity.
GDS  – A dementia staging instrument divided into seven different stages with increasing impairment corresponding with higher stages (No Cognitive Decline, Age-Associated Memory Impairment, Mild Cognitive Impairment, Mild Dementia, Moderate Dementia, Moderately Severe Dementia, Severe Dementia).
CDR  – A semi-structured, informant-based clinical staging instrument that characterizes six domains of cognitive and functional performance: Memory, Orientation, Judgment and Problem Solving, Community Affairs, Home and Hobbies, and Personal Care. The CDR provides a global score which is a composite score based on an algorithm that gives different weights to the scores for each of the domains. The global score (GS) is used to grade the severity of dementia and is measured using 0, 0.5, 1, 2, and 3 to denote no impairment, very mild dementia, mild dementia, moderate dementia, and severe dementia, respectively.
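The AQ’s weighted scoring rule can be sketched as follows; note that the item numbers in PREDICTIVE below are hypothetical placeholders, not the actual six weighted AQ items.

```python
# Illustration of the AQ scoring rule: 21 yes/no items, six of which
# count double, giving a total range of 0-27 (15 x 1 + 6 x 2 = 27).
# NOTE: the indices in PREDICTIVE are hypothetical placeholders.
PREDICTIVE = {3, 6, 9, 12, 15, 18}

def aq_total(yes_items):
    """yes_items: iterable of endorsed item numbers (1-21)."""
    return sum(2 if item in PREDICTIVE else 1 for item in set(yes_items))
```

Endorsing all 21 items yields the maximum score of 27, consistent with the stated 0 to 27 range.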
The Shapiro-Wilk test was performed on the data to determine the normality of distribution for the continuous variables. Non-parametric tests for group comparisons and correlations were used, as the data for all continuous variables were not normally distributed. The Kruskal-Wallis test was used to verify that the three groups were not significantly different in terms of age and education. Chi-square analysis was used to examine the distribution of men and women among the three groups.
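A minimal sketch of this screening sequence in Python (SciPy), using made-up placeholder arrays and counts rather than study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical age vectors for the CN, aMCI, and AD groups.
age_cn = rng.normal(81, 7, 101)
age_amci = rng.normal(81, 7, 62)
age_ad = rng.normal(81, 7, 39)

# Shapiro-Wilk: a small p-value indicates departure from normality.
_, p_normal = stats.shapiro(age_cn)

# Kruskal-Wallis: non-parametric check that the groups do not differ on age.
_, p_kw = stats.kruskal(age_cn, age_amci, age_ad)

# Chi-square on the women/men distribution across groups (counts hypothetical).
sex_table = np.array([[48, 53], [30, 32], [17, 22]])
chi2, p_sex, dof, expected = stats.chi2_contingency(sex_table)
```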
The analyses investigating the sensitivity to change utilized a method similar to that of Costa et al. . Middel and van Sonderen [22,23] described these methods and their rationale in detail. The sensitivity to change assessment was completed through the calculation of an effect size (ES) to quantify the magnitude of change. Since this study used a correlated design, the pooled standard deviation was used to calculate the ES, which was taken from the individual standard deviation values for Year 1 and Year 2 for each measure (pooled SD = √(((Year 1 SD)² + (Year 2 SD)²)/2); ES = mean change score/pooled SD). The final effect size measure, d, included a correction for reliability (d = ES/√(2(1 − r))), where r is the correlation between the scores at Year 1 and Year 2. The interpretation for d utilized the following scheme proposed by Cohen : <0.20 = trivial change; 0.20 to 0.50 = small change; 0.50 to 0.80 = moderate change; ≥0.80 = large change.
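The reliability-corrected effect size described above can be sketched in a few lines; the score vectors in the example are hypothetical, not study data.

```python
# Minimal sketch of the sensitivity-to-change effect size:
# ES = mean change / pooled SD, then d = ES / sqrt(2 * (1 - r)).
import math

def change_effect_size(year1, year2, r):
    """year1, year2: paired scores; r: Year 1-Year 2 correlation."""
    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    mean_change = sum(b - a for a, b in zip(year1, year2)) / len(year1)
    pooled_sd = math.sqrt((sd(year1) ** 2 + sd(year2) ** 2) / 2)
    return (mean_change / pooled_sd) / math.sqrt(2 * (1 - r))

d = change_effect_size([10, 12, 14, 16], [11, 13, 15, 17], r=0.5)
```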
In order to provide a more practical interpretation of the sensitivity to change, a series of sample size calculations were carried out to show how many individuals would be needed for a clinical trial using a particular measure as its outcome. The sample size calculations assumed a 25% treatment effect on the mean change score for each measure at 80% power with a two-tailed significance level of 0.05 for a randomized clinical trial with a treatment arm and a placebo arm. These parameters were used as they have been utilized by several previous studies  and have also been used to estimate sample sizes for pre-dementia trials using data from the Alzheimer’s Disease Neuroimaging Initiative . Sample size calculations were carried out using G*Power 3 . The reported sample sizes are the number per arm. For each of the clinical groups, varying trial lengths were used in the sample size calculations: AD = two years, MCI = three years, CN = five years.
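A rough version of this calculation for a two-arm trial, using the normal approximation (G*Power’s t-based result is slightly larger); all numeric inputs below are hypothetical:

```python
# Subjects per arm to detect a fractional treatment effect on the mean
# change score, two-sided test, normal approximation.
import math
from scipy.stats import norm

def n_per_arm(mean_change_placebo, sd_change, effect_frac=0.25,
              alpha=0.05, power=0.80):
    delta = effect_frac * mean_change_placebo   # absolute treatment effect
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sd_change / delta) ** 2)

# Hypothetical example: placebo arm worsens by 2.0 points (SD 4.0),
# so a 25% treatment effect is a 0.5-point difference between arms.
n = n_per_arm(2.0, 4.0)
```

Note how strongly the required n depends on the ratio of the between-arm difference to the SD of the change score, which is why instrument variability dominates these calculations.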
To further examine the ability of each instrument to detect clinically significant change, a reliable change index (RCI) was calculated for each instrument. For this study, two different RCI methods were utilized as the AQ and FAQ are informant-based assessments and the MMSE is an objective performance-based assessment. For the AQ and FAQ, RCI calculations that corrected for inter-test reliability were used  while the MMSE RCI calculation utilized a method that corrects for both inter-test reliability and practice effects . The most common convention for interpreting RCI scores is that scores with an absolute value ≥1.645 demonstrate clinically significant change . This convention was used to obtain 90% confidence intervals for estimates of clinically significant change for each instrument from Year 1 to Year 2. In this study, we report the percent of individuals whose annual score changes fell outside the range of the 90% confidence interval for each instrument.
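The two RCI forms described above differ only in whether a mean practice effect is subtracted; a minimal sketch, with hypothetical numeric inputs:

```python
# Reliable change index: change beyond measurement error.
import math

def rci(x1, x2, sd1, r, practice_effect=0.0):
    """x1, x2: Year 1 and Year 2 scores; sd1: Year 1 SD of the
    instrument; r: test-retest reliability; practice_effect: mean
    retest gain subtracted for performance tests such as the MMSE
    (leave 0.0 for the informant-based AQ and FAQ)."""
    se_diff = sd1 * math.sqrt(2.0 * (1.0 - r))   # SE of a difference score
    return ((x2 - x1) - practice_effect) / se_diff

# |RCI| >= 1.645 falls outside the 90% CI: clinically significant change.
significant = abs(rci(24, 27, sd1=3.0, r=0.8)) >= 1.645
```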
An additional set of analyses was carried out to determine the extent to which the mean change scores of the AQ, FAQ and MMSE predicted global change as measured by increases in FAST, GDS and CDR-GS values. The CN, AD and aMCI groups were analyzed separately. An analysis with the entire sample was also carried out. All individuals were dichotomized based on whether their individual FAST, GDS and CDR-GS values increased from Year 1 to Year 2 (1 = increase, 0 = no increase), as increases on these scales represent clinically meaningful changes in disease severity. Logistic regression analyses were used to assess the predictive value of the AQ, FAQ and MMSE change scores on increases in FAST, GDS or CDR-GS. A False Discovery Rate (FDR) significance level of 0.006 was used to correct for multiple comparisons within each of the groups.
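The dichotomize-and-regress step can be sketched with a small Newton-Raphson logistic fit; the odds ratio is exp(slope), and all data below are invented for illustration.

```python
# Logistic regression of a 0/1 "global score increased" outcome on an
# instrument's change score, fit by Newton-Raphson (IRLS) updates.
import numpy as np

def logistic_or(change_score, increased, n_iter=25):
    X = np.column_stack([np.ones(len(change_score)), change_score])
    y = np.asarray(increased, dtype=float)
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))      # predicted probabilities
        w = p * (1.0 - p)                         # IRLS weights
        beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - p))
    return float(np.exp(beta[1]))                 # odds ratio per point

change = np.array([-1., 0., 0., 1., 1., 2., 2., 3., 3., 4.])
increase = np.array([0, 0, 1, 0, 0, 0, 1, 1, 0, 1])
or_est = logistic_or(change, increase)            # OR > 1 on these data
```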
Spearman correlation analyses were carried out to assess the linear associations between AQ, FAQ and MMSE scores with the FAST, GDS and CDR-GS for Year 1 and Year 2 separately. Spearman correlation was also used to assess the associations between the change scores on the AQ, FAQ, MMSE and MoCA. The correlations used as the measures of test-retest reliability are also Spearman values. Statistical analyses were carried out using Systat 12.0 (Systat, Inc., San Jose, CA, USA).
Demographic characteristics (mean ± SD) across the three clinical groups and the total sample: age — 81.76 ± 7.23, 81.57 ± 7.59, and 81.82 ± 6.92 years, with 81.71 ± 7.25 overall; education — 14.69 ± 2.50, 15.18 ± 2.56, and 14.15 ± 2.55 years, with 14.74 ± 2.54 overall.
Sensitivity to change comparison for the AQ, FAQ, and MMSE in amnestic mild cognitive impairment, Alzheimer’s disease, and cognitively normal cases
In the AD group, the AQ demonstrated small sensitivity to change (d = 0.43); however, the FAQ showed large sensitivity to change (d = 0.84) and the MMSE demonstrated moderate sensitivity to change (d = 0.52). In terms of required sample size, the FAQ yielded the lowest value (n = 119) while the AQ yielded a value that was substantially higher (n = 232). This result may be explained by the reliability values for each instrument, as the FAQ had a higher reliability value (r = 0.81) than the AQ (r = 0.64). The MMSE yielded a required sample size that was between that of the AQ and FAQ (n = 157).
In the CN group all three measures demonstrated trivial sensitivity to change. However, sample size calculations demonstrated that the MMSE would require substantially more subjects than both the AQ and FAQ.
Reliable change index results based on data from cognitively normal individuals
Percent of cases outside the 90% CI for the RCI, by instrument: aMCI = 24%, AD = 16%; aMCI = 17%, AD = 12%; aMCI = 17%, AD = 17%.
AQ, FAQ, and MMSE mean change as predictors of global change in mild cognitive impairment, Alzheimer’s disease, cognitively normal cases, and all groups combined
Odds ratio (95% CI) for a Year 1 to Year 2 increase on each global measure, per instrument change score:

aMCI
  AQ: FAST 1.07 (0.96, 1.20), P = 0.23; GDS 1.09 (0.97, 1.22), P = 0.15; CDR-GS 1.09 (0.97, 1.24), P = 0.16
  FAQ: FAST 1.07 (0.96, 1.19), P = 0.25; GDS 0.97 (0.87, 1.08), P = 0.52; CDR-GS 1.08 (0.96, 1.22), P = 0.22
  MMSE: FAST 1.02 (0.83, 1.26), P = 0.83; GDS 0.81 (0.64, 1.02), P = 0.07; CDR-GS 0.91 (0.72, 1.15), P = 0.43
AD
  AQ: FAST 1.17 (0.97, 1.29), P = 0.14; GDS 1.26 (1.05, 1.52), P = 0.01; CDR-GS 1.16 (1.00, 1.35), P = 0.06
  FAQ: FAST 1.09 (0.95, 1.24), P = 0.22; GDS 1.17 (1.00, 1.38), P = 0.05; CDR-GS 1.12 (0.97, 1.29), P = 0.12
  MMSE: FAST 0.93 (0.77, 1.14), P = 0.50; GDS 0.97 (0.79, 1.19), P = 0.77; CDR-GS 0.91 (0.74, 1.11), P = 0.35
CN
  AQ: FAST 1.09 (0.93, 1.28), P = 0.29; GDS 1.04 (0.88, 1.23), P = 0.67; CDR-GS 1.26 (1.00, 1.59), P = 0.05
  FAQ: FAST 0.91 (0.68, 1.22), P = 0.52; GDS 1.02 (0.77, 1.34), P = 0.92; CDR-GS 1.61 (1.10, 2.36), P = 0.02
  MMSE: FAST 0.88 (0.66, 1.17), P = 0.38; GDS 0.94 (0.70, 1.25), P = 0.66; CDR-GS 0.72 (0.35, 1.49), P = 0.37
All Groups Combined
  AQ: FAST 1.11 (1.03, 1.20), P = 0.008; GDS 1.08 (1.00, 1.16), P = 0.05; CDR-GS 1.20 (1.09, 1.32), P <0.001
  FAQ: FAST 1.09 (1.01, 1.18), P = 0.03; GDS 1.16 (1.06, 1.26), P = 0.001; CDR-GS 1.21 (1.11, 1.33), P <0.001
  MMSE: FAST 0.91 (0.81, 1.03), P = 0.15; GDS 0.84 (0.74, 0.95), P = 0.006; CDR-GS 0.91 (0.74, 1.11), P = 0.35
Correlation values for AQ, FAQ, MMSE, FAST, GDS and CDR Global Score for Year 1
Correlation values for AQ, FAQ, MMSE, FAST, GDS and CDR Global Score for Year 2
The mean change score for the AQ correlated weakly with the mean FAQ change score (r = 0.22, P = 0.002) while the MMSE mean change score demonstrated no correlation with the AQ mean change score (r = -0.02, P = 0.83).
Within the aMCI and AD groups the AQ demonstrated small sensitivity to change while its sensitivity to change in the CN group was trivial. In aMCI individuals the AQ, FAQ and MMSE all demonstrated small sensitivity to change. In the AD group, the MMSE and FAQ demonstrated greater sensitivity to change relative to the AQ. The AQ was also significantly associated with global change as measured by CDR-GS increase and correlated strongly with other established measures of global cognition and function. Although the effect sizes reported in this study are relatively small, they are consistent with the notion that cognitive changes associated with aMCI and AD are often subtle and difficult to detect from a psychometric standpoint. This point is a major challenge for researchers and clinical trialists as the variability of cognitive tests is often numerically similar to the rate of change . Informant-based instruments that assess functional ability are also prone to high degrees of variability due to varying pre-morbid levels of function and gender differences in the degree of participation in many of the functional activities that are assessed . The result is that when objective cognitive tests and informant-based instruments are used as endpoints in clinical trials the inherent variability of these measures often makes it difficult to detect true differences between placebo and treatment groups. However, others have suggested that lack of decline in placebo groups  and disease severity at baseline  can also significantly impact a trial’s ability to detect a significant treatment effect. The degree to which a particular cognitive or functional measure is responsive to changes in disease status is extremely important, particularly in pre-symptomatic and aMCI populations where cognitive decline is slower and more subtle .
The sample size calculations in the aMCI group demonstrate that the AQ is superior to the MMSE in terms of sensitivity to change; however, the AQ required a larger sample size than the FAQ. The sample size calculations highlight some important methodological issues in aMCI and AD studies that have been problematic. The first issue involves whether or not objective cognitive tests and informant-based instruments are sensitive enough to detect changes, particularly in earlier stages of aMCI and AD. Based on the MMSE findings, our results suggest that the AQ may be superior to objective cognitive measures in detecting longitudinal change when compared on the sample sizes required to detect a treatment effect. Although informant-based and objective cognitive assessments are often used in conjunction to assess drug efficacy, these results suggest that the MMSE is less sensitive to change over time than informant-based instruments.
Another issue these results highlight is that of instrument reliability as it relates to the required sample size needed to detect a treatment effect. There is a direct relationship between instrument reliability and sensitivity to change as instruments that are prone to higher variability between assessments may not detect significant longitudinal change as accurately as instruments with lower between-assessment variability. This imprecision ultimately leads to larger sample size requirements for clinical trials. Knopman and Caselli  point out that between-assessment variability is an inherent challenge when using patient-based objective cognitive tests to assess change, and longitudinal differences may be related to non-pathological factors, such as chance and regression toward the mean. Practice effects due to repeat administration of cognitive tests within relatively short periods of time also pose a significant threat to the ability to detect change associated with progression of aMCI/AD . Others have also suggested that some objective cognitive tests are inherently insensitive to cognitive changes  and that variability between examiners using these instruments  is also a detrimental factor that prevents treatment effects from being observed. Although informant-based measures are more robust to some of these challenges than objective cognitive tests, they are still prone to some degree of measurement error, particularly in the area of inter-rater reliability .
In this study, the issue of reliability and its relationship to effect size was demonstrated in the AD group where the AQ yielded moderate sensitivity to change and the FAQ yielded large sensitivity to change. In this case, the effect size (corrected for reliability) for the FAQ was almost twice as large as that of the AQ. Some of this difference may be attributable to the higher reliability value of the FAQ which underscores the importance of not only an instrument’s psychometric ability to detect change, but also the ability of the examiner to administer the instrument in a way that can detect meaningful change. The importance of inter-rater reliability is highlighted by Kobak  who points out that reductions in inter-rater reliability, as measured by intra-class correlation, can result in significantly larger required sample sizes for clinical trials which stems from the increased measurement variability that reduces statistical power. This issue is also highlighted by Cummings et al.  who report that insufficient training and monitoring of examiners may lead to increased measurement variability which decreases the chance of detecting significant treatment effects. Connor and Sabbagh  also note that increases in measurement error may lead to decreases in instrument reliability, which results in a decreased ability to detect treatment effects.
The divergent sample size calculations for the AQ and FAQ may also be due to some of the inherent psychometric properties of each instrument. The FAQ captures not only the presence of impaired functioning, but also severity where the AQ only captures the presence of reported impairment in cognition and function. Thus, the inclusion of severity of impairment on the FAQ may account for the smaller required sample size calculation as a result of increased statistical power.
The results from the RCI calculations showed that the AQ identified clinically significant change in a larger percentage of individuals than did the FAQ and MMSE for aMCI individuals. The advantage that RCI scores provide is the ability to assess intra-individual change, which has been shown to have good predictive value in terms of cognitive decline . The use of RCI scores in this context may provide a novel and more informative way to determine endpoints for aMCI and AD clinical trials. Since the majority of clinical trials for aMCI and AD rely on methods and analyses that simply assess group differences (for example, drug versus placebo) on a particular measure (for example, Alzheimer’s Disease Assessment Scale – cognition (ADAS-Cog)), it might be possible for drug efficacy to be assessed based on the percent of individuals showing clinically significant change on a measure, rather than just demonstrating a certain amount of change (for example, 25%) on an outcome measure.
One drawback to the current study is the relatively small sample size. Given that clinical trials often enroll hundreds of individuals, replication of these findings in a larger sample is needed in order to strengthen the argument for the AQ’s ability to detect longitudinal change. Autopsy confirmation of the clinical status for each individual would lend further support to the AQ’s ability to detect longitudinal change. Although the individuals participating in this study have agreed to an autopsy, many of them were still living at the time of the analysis so neuropathological confirmation of their clinical status was not available.
The results of this study indicate that the AQ demonstrated small sensitivity to longitudinal cognitive changes associated with aMCI and AD. The AQ’s sensitivity to change in aMCI was comparable to the FAQ while both instruments outperformed the MMSE in terms of effect size and required sample size. The AQ was also significantly associated with longitudinal decreases in global cognition and function and was able to identify a greater proportion of aMCI individuals with clinically significant change when compared to other established measures. As clinicians and researchers continue to identify and treat individuals in earlier stages of AD, there is a need to utilize instruments that are sensitive to subtle cognitive changes over time. Although the AQ’s sensitivity to change was small, it is possible that its sensitivity to change may be enhanced when used in conjunction with sensitive objective cognitive tests and validated biomarkers of disease progression. In addition, the recent changes in mandatory screening measures for Medicare recipients as part of the Affordable Care Act may provide the opportunity for the AQ to be used by clinicians in order to satisfy the requirement for cognitive screening and might be helpful in detecting change over time in clinical settings.
aMCI: amnestic mild cognitive impairment
CDR-GS: Clinical Dementia Rating Global Score
FAQ: Functional Activities Questionnaire
FAST: Functional Assessment Staging Test
GDS: Global Deterioration Scale
MMSE: Mini Mental State Exam
MoCA: Montreal Cognitive Assessment
RCI: reliable change index
We are grateful to the Banner Sun Health Research Institute Brain and Body Donation Program of Sun City, Arizona. The Brain and Body Donation Program is supported by the National Institute of Neurological Disorders and Stroke (U24 NS072026 National Brain and Tissue Resource for Parkinson’s Disease and Related Disorders), the National Institute on Aging (P30 AG19610 Arizona Alzheimer’s Disease Core Center), the Arizona Department of Health Services (contract 211002, Arizona Alzheimer’s Research Center), the Arizona Biomedical Research Commission (contracts 4001, 0011, 05-901 and 1001 to the Arizona Parkinson's Disease Consortium) and the Michael J. Fox Foundation for Parkinson’s Research. The funding sources had no involvement in the writing of this article or in the decision to submit it for publication.
1. Blacker D, Lee H, Muzikansky A, Martin EC, Tanzi R, McArdle JJ, et al. Neuropsychological measures in normal individuals that predict subsequent cognitive decline. Arch Neurol. 2007;64:862–71.
2. Johnson DK, Storandt M, Morris JC, Galvin JE. Longitudinal study of the transition from healthy aging to Alzheimer disease. Arch Neurol. 2009;66:1254–9.
3. Riley KP, Jicha GA, Davis D, Abner EL, Cooper GE, Stiles N, et al. Prediction of preclinical Alzheimer’s disease: longitudinal rates of change in cognition. J Alzheimers Dis. 2011;25:707–17.
4. Gavett RA, Dunn JE, Stoddard A, Harty B, Weintraub S. The Cognitive Change in Women Study (CCW): informant ratings of cognitive change but not self ratings are associated with neuropsychological performance over three years. Alzheimer Dis Assoc Disord. 2011;25:305–11.
5. Sabbagh MN, Malek-Ahmadi M, Kataria R, Belden CM, Connor DJ, Pearson C, et al. The Alzheimer’s Questionnaire: a proof of concept study for a new informant-based dementia assessment. J Alzheimers Dis. 2010;22:1015–21.
6. Malek-Ahmadi M, Davis K, Laizure B, Jacobson SA, Yaari R, Singh U, et al. Validation and diagnostic accuracy of the Alzheimer’s Questionnaire (AQ). Age Ageing. 2012;41:396–9.
7. Malek-Ahmadi M, Davis K, Belden CM, Jacobson SA, Sabbagh MN. Informant-reported cognitive symptoms that predict amnestic mild cognitive impairment. BMC Geriatr. 2012;12:3.
8. Malek-Ahmadi M, Davis K, Belden C, Sabbagh MN. Comparative analysis of the Alzheimer’s Questionnaire (AQ) with the CDR Sum of Boxes, MoCA, and MMSE. Alzheimer Dis Assoc Disord. 2014;28:296–8.
9. Folstein MF, Folstein SE, McHugh PR. ”Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12:189–98.
10. Pfeffer RI, Kurosaki TT, Harrah CH, Chance JM, Filos S. Measurement of functional activities in older adults in the community. J Gerontol. 1982;37:323–9.
11. Clark CM, Sheppard L, Fillenbaum GG, Galasko D, Morris JC, Koss E, et al. Variability in the annual Mini-Mental State Examination score in patients with probable Alzheimer’s disease. Arch Neurol. 1999;56:857–62.
12. Costa AS, Reich A, Fimm B, Ketteler ST, Schulz JB, Reetz K. Evidence of the sensitivity of the MoCA alternate forms in monitoring cognitive change in early Alzheimer’s disease. Dement Geriatr Cogn Disord. 2014;37:95–103.
13. Mackin RS, Insel P, Aisen PS, Geda YE, Weiner MW, Alzheimer’s Disease Neuroimaging Initiative. Longitudinal stability of subsyndromal symptoms of depression in individuals with mild cognitive impairment: relationship to conversion to dementia after three years. Int J Geriatr Psychiatry. 2012;27:355–63.
14. Landau SM, Harvey D, Madison CM, Koeppe RA, Reiman EM, Foster NL, et al. Associations between cognitive, functional, and FDG-PET measures of decline in AD and MCI. Neurobiol Aging. 2011;32:1207–18.
15. Rizk-Jackson A, Insel P, Petersen R, Aisen P, Jack C, Weiner M. Early indications of future cognitive decline: stable versus declining controls. PLoS One. 2013;8:e74062.
16. Reisberg B. Functional assessment staging (FAST). Psychopharmacol Bull. 1988;24:653–9.
17. Reisberg B, Ferris SH, de Leon MJ, Crook T. The Global Deterioration Scale for assessment of primary degenerative dementia. Am J Psychiatry. 1982;139:1136–9.
18. Morris JC. The Clinical Dementia Rating (CDR): current version and scoring rules. Neurology. 1993;43:2412–4.
19. Beach TG, Sue LI, Walker DG, Roher AE, Lue L, Vedders L, et al. The Sun Health Research Institute Brain Donation Program: description and experience, 1987-2007. Cell Tissue Bank. 2008;9:229–45.
20. McKhann G, Drachman D, Folstein M, Katzman R, Price D, Stadlan EM. Clinical diagnosis of Alzheimer’s disease: report of the NINCDS-ADRDA Work Group under the auspices of Department of Health and Human Services Task Force on Alzheimer’s Disease. Neurology. 1984;34:939–44.
21. Petersen RC, Smith GE, Waring SC, Ivnik RJ, Tangalos EG, Kokmen E. Mild cognitive impairment: clinical characterization and outcome. Arch Neurol. 1999;56:303–8.
22. Middel B, van Sonderen E. Statistical significant change versus relevant or important change in (quasi) experimental design: some conceptual and methodological problems in estimating magnitude of intervention-related change in health services research. Int J Integr Care. 2002;2:e15.
- Middel B, van Sonderen E. Responsiveness and validity of 3 outcome measures of motor function after stroke rehabilitation. Stroke. 2010;41:e463–4.PubMedView ArticleGoogle Scholar
- Cohen J. Statistical power analysis for the behavioral sciences. Hilsdale: Lawrence Erlbaum Associates; 1988.Google Scholar
- Ard MC, Edland SD. Power calculations for clinical trials in Alzheimer’s disease. J Alzheimer Dis. 2011;26:369–77.Google Scholar
- Grill JD, Di L, Lu PH, Lee C, Ringman J, Apostolova LG, et al. Estimating sample sizes for pre-dementia Alzheimer’s trials based on the Alzheimer’s Disease Neuroimaging Initiative. Neurobiol Aging. 2013;34:62–72.PubMed CentralPubMedView ArticleGoogle Scholar
- Faul F, Erdfelder E, Lang AG, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39:175–91.PubMedView ArticleGoogle Scholar
- Jacobson NS, Truax P. Clinical significance: a statistical approach to defining meaningful change in psychotherapy research. J Consult Clin Psychol. 1991;59:12–9.PubMedView ArticleGoogle Scholar
- Chelune GJ, Naugle RI, Luders H, Sedlak J, Awad IA. Individual change after epilepsy surgery: practice effects and base-rate information. Neuropsychology. 1993;7:41–52.View ArticleGoogle Scholar
- Duff K. Evidence-based indicators of neuropsychological change in the individual patient: relevant concepts and methods. Arch Clin Neuropsychol. 2012;27:248–61.PubMed CentralPubMedView ArticleGoogle Scholar
- Knopman D. Clinical trial design issues in mild to moderate Alzheimer’s disease. Cogn Behav Neurol. 2008;21:197–201.PubMed CentralPubMedView ArticleGoogle Scholar
- Jacobson SA, Sabbagh MN. Investigational drugs for the treatment of AD: what can we learn from negative trials? Alzheimers Res Ther. 2011;3:14.PubMed CentralPubMedView ArticleGoogle Scholar
- Hendrix SB. Measuring clinical progression in MCI and pre-MCI populations: Enrichment and optimizing clinical outcomes over time. Alzheimers Res Ther. 2012;4:24.PubMed CentralPubMedView ArticleGoogle Scholar
- Knopman DS, Caselli RJ. Appraisal of cognition in preclinical Alzheimer’s disease: a conceptual review. Neurodegener Dis Manag. 2012;2:183–95.PubMed CentralPubMedView ArticleGoogle Scholar
- Bartels C, Wegrzyn M, Wiedl A, Ackermann V, Ehrenreich H. Practice effects in healthy adults: a longitudinal study on frequent repetitive cognitive testing. BMC Neurosci. 2010;11:118.PubMed CentralPubMedView ArticleGoogle Scholar
- Cummings JL. Controversies in Alzheimer’s disease drug development. Int Rev Psychiatry. 2008;20:389–95.PubMed CentralPubMedView ArticleGoogle Scholar
- Becker RE, Greig NH. Alzheimer’s Disease drug development in 2008 and beyond: problems and opportunities. Curr Alzheimer Res. 2008;5:346–57.PubMed CentralPubMedView ArticleGoogle Scholar
- Becker RE, Greig NH, Giacobini E. Why do so many drugs for Alzheimer’s disease fail in development? Time for new methods and new practices? J Alzheimers Dis. 2008;15:303–25.PubMed CentralPubMedGoogle Scholar
- Kobak KA. Inaccuracy in clinical trials: effects and methods to control inaccuracy. Curr Alzheimer Res. 2010;7:637–41.PubMedView ArticleGoogle Scholar
- Cummings JL, Reynders R, Zhong K. Globalization of Alzheimer’s disease clinical trials. Alzheimers Res Ther. 2011;3:24.PubMed CentralPubMedView ArticleGoogle Scholar
- Connor DJ, Sabbagh MN. Administration and scoring variance on the ADAS-Cog. J Alzheimers Dis. 2008;15:461–4.PubMed CentralPubMedGoogle Scholar
- Tractenberg RE, Pietrzak RH. Intra-individual variability in Alzheimer’s disease and cognitive aging: definitions, context, and effect sizes. PLoS One. 2011;6:e16973.PubMed CentralPubMedView ArticleGoogle Scholar
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.