- Open Access
Avoiding methodological bias in studies of amyloid imaging results disclosure
© The Author(s). 2019
- Published: 4 June 2019
We read with interest the paper by Grill et al., a study of N = 33 participants interviewed about their reactions to learning that their amyloid imaging results were “not elevated” in a preclinical Alzheimer’s disease (AD) trial. We welcome their contribution to the literature and would like to call attention to related larger studies by Wake et al. (N = 42 subjects) and by Taswell et al. (N = 133 subjects) that were not cited in the paper by Grill et al.
Neurodegenerative cognitive disorders, including AD, remain challenging to elucidate pathophysiologically and to prevent or treat pharmacologically. Clinicians who study psychological and behavioral health disorders (including those without cognitive decline) may readily argue that it is even more challenging to understand and predict human behavior. Subjective interviews and objective psychometrics, using questionnaires completed by research subjects and/or by trained observers, cannot be performed or interpreted with any guarantee of infallible reliability when predicting future behavior. As perhaps the most dramatic example, experienced clinicians in emergency psychiatry recognize and appreciate the critical limitations of both subjective interviews and objective psychometrics when performing suicide risk assessments (Range et al. and Erford et al.) for patients evaluated under mental healthcare statutes for possible involuntary hospitalization and denial of the right to leave a locked care facility.
Because of the known deficiencies of such suicide risk assessments, clinicians in this scenario compensate by relying on a multiplicity of different approaches with different observers, reporters, questionnaires, and tools for the evaluation of the patient, in an effort to improve the overall quality (validity and reliability) of the clinical evaluation and risk assessment and thus, hopefully, to reduce the probability of the worst-case outcome, the patient’s death by suicide. Noble et al. described this methodology in general as “data triangulation, whereby different methods and perspectives help produce a more comprehensive set of findings.”
In the larger study of N = 133 subjects that we reported on disclosure of amyloid imaging results to patients with mild cognitive impairment (MCI) and early AD (Taswell et al.), we used this approach with multiple comparison groups (amyloid negative versus amyloid positive, MCI versus AD, younger < 70 versus older ≥ 70), a diverse collection of psychometric questionnaires, and observable outcomes in the form of reportable life events. In particular, we described in our findings that there were “no concerns expressed by any patients, family or caregivers about any real potential risk of harm to patients such as reports of suicidal ideation threats or plans, with or without visits to doctors’ offices, psychiatric emergency rooms or hospitals for any such complaints.” As a consequence, we maintain high confidence in our conclusion that “we consider[ed] it safe, without apparent risk of harm to patients, to disclose amyloid imaging results to patients who have no prior history of neuropsychiatric illness.”
Methodological bias in a research study can be avoided when there is no selection bias in recruiting subjects, no investigator bias in the examinations (or interviews or psychometrics), no bias in the tools used, and no absence of comparison groups. The more methodological safeguards a clinical trial design includes against these possible biases, the more confidently its results can be interpreted. If a diverse collection of examination tools yields the same consistent results, then those results are more likely to be true. While it may be difficult to pursue an ideal clinical trial, we should nevertheless aspire to conduct trials with a larger sample of subjects, a diversity of examiners and/or examination tools, and, hopefully, two or more comparison groups so that meaningful comparisons can be made. Although the authors of Grill et al. discussed some limitations of their clinical trial, they did not discuss the absence of a comparison group in their study.
Thus, we encourage Grill et al., in their next study of disclosing amyloid imaging results that are not elevated, also to include a comparison group of participants whose amyloid imaging results are elevated. Alternatively, in the absence of two or more comparison groups, each subject can always be compared to self if the subject has been examined at serial time points with pre- and post-disclosure interviews and/or psychometric exams that permit calculation of individual subject change scores (after versus before disclosure), as done in our study (Taswell et al.). Even so, having two or more comparison groups in an amyloid imaging result disclosure study, such as comparing participants from the same preclinical AD trial who have either elevated or not elevated amyloid (even if the study has a small sample size of insufficient power, with other limitations), would hopefully yield some basic intuition and anecdotal experience from comparing the two different groups of subjects as they participate in the interviews and psychometrics. Are they the same or different? Or are the investigators unable to answer that question of same versus different because of other limitations introduced by methodological biases inherent in the study design?
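The within-subject alternative described above reduces to simple arithmetic: each participant serves as their own control, and the change score is the post-disclosure score minus the pre-disclosure score on a given psychometric scale. A minimal sketch follows; the subject identifiers and scores are illustrative placeholders, not data from any study cited here.

```python
# Hypothetical sketch of individual subject change scores
# (post-disclosure minus pre-disclosure) on one psychometric scale.
# Subject IDs and scores are invented for illustration only.

pre_scores = {"s01": 12, "s02": 9, "s03": 15}   # before disclosure
post_scores = {"s01": 11, "s02": 10, "s03": 15}  # after disclosure

# Each subject is compared to self across the two time points.
change_scores = {sid: post_scores[sid] - pre_scores[sid] for sid in pre_scores}

print(change_scores)  # {'s01': -1, 's02': 1, 's03': 0}
```

A change score near zero suggests no measurable impact of disclosure on that scale for that subject; group-level inference would of course require appropriate statistical testing.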
Readers interested in learning more about the pitfalls of methodological bias in clinical trial design and research should consult the important body of literature available on this topic (Schulz, Smyth et al., Higgins et al., Hrobjartsson et al., Kirkham et al.). In particular, Weuve et al. provided some guidelines for evaluating potential bias in dementia research. Clinical trial investigators should also keep in mind the following important principle: the higher the quality of the study design without methodological bias, the greater the probability that results from the study can be considered for inclusion in a systematic review and meta-analysis of the medical-scientific question addressed. We strongly recommend consideration of clinical trial designs that avoid methodological bias with a multiplicity of examiners and/or examination tools, a multiplicity of comparison groups, inclusion of observable life events in addition to interviews or psychometrics as outcome measures, and a larger rather than smaller sample size whenever possible.
All authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
- Grill JD, et al. Reactions to learning a “not elevated” amyloid PET result in a preclinical Alzheimer’s disease trial. Alzheimers Res Ther. 2018;10. https://doi.org/10.1186/s13195-018-0452-1.
- Wake T, et al. The psychological impact of disclosing amyloid status to Japanese elderly: a preliminary study on asymptomatic patients with subjective cognitive decline. Int Psychogeriatr. 2018;30:635–9. https://doi.org/10.1017/s1041610217002204.
- Taswell C, et al. Safety of disclosing amyloid imaging results to MCI and AD patients. Ment Health Fam Med. 2018;14:748–56. http://mhfmjournal.com/pdf/MHFM-120.pdf.
- Range LM, Knott EC. Twenty suicide assessment instruments: evaluation and recommendations. Death Stud. 1997;21:25–58. https://doi.org/10.1080/074811897202128.
- Erford BT, et al. Selecting suicide ideation assessment instruments: a meta-analytic review. Meas Eval Couns Dev. 2017;51:42–59. https://doi.org/10.1080/07481756.2017.1358062.
- Noble H, Smith J. Issues of validity and reliability in qualitative research. Evid Based Nurs. 2015;18:34–5. https://doi.org/10.1136/eb-2015-102054.
- Schulz KF. Empirical evidence of bias. JAMA. 1995;273:408. https://doi.org/10.1001/jama.1995.03520290060030.
- Smyth RMD, et al. Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists. BMJ. 2010;341:c7153. https://doi.org/10.1136/bmj.c7153.
- Higgins JPT, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928. https://doi.org/10.1136/bmj.d5928.
- Hrobjartsson A, et al. Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors. Can Med Assoc J. 2013;185:E201–11. https://doi.org/10.1503/cmaj.120744.
- Kirkham JJ, et al. Outcome reporting bias in trials: a methodological approach for assessment and adjustment in systematic reviews. BMJ. 2018;362:k3802. https://doi.org/10.1136/bmj.k3802.
- Weuve J, et al. Guidelines for reporting methodological challenges and evaluating potential bias in dementia research. Alzheimers Dement. 2015;11:1098–109. https://doi.org/10.1016/j.jalz.2015.06.1885.