Introduction

Have you ever been jolted wide awake by, of all things, biostatistics? That’s what happened on the last morning of the Clinical Trials on Alzheimer’s Disease conference, held November 5-7 in Barcelona, Spain. At 8 a.m., as sleepy CTADeers slowly shuffled in, Suzanne Hendrix of Pentara Corporation, Salt Lake City, Utah, delivered a triple espresso in the form of a keynote calling on trialists to move beyond the scales they currently use to measure drug effects in early Alzheimer’s disease trials.

In language that made statistics accessible well beyond the small geekdom of biostatisticians, Hendrix laid out how current statistical-power conventions and regulatory requirements conspire to prevent Phase 2 trials from showing cleanly whether a drug works. In effect, the status quo overprotects the null hypothesis that the drug does not work, Hendrix said. For their part, outcome measures are weighed down by items that add noise without reflecting disease progression, and the CDR-SB, while clinically meaningful, is not granular enough to make a good trial outcome. Hendrix’s solution: build lean composites from items that, in math lingo, add up to a “summed vector” pointing in the direction of worsening disease. She showed examples of how composites developed by different groups compared, in a given study population, with more established measures such as the ADAS-Cog or CDR-SB.

Hendrix argued for separating the measurement of a cognitive drug effect in early AD from the assessment of whether that effect is clinically meaningful for early-stage patients, who have little or no functional impairment. Optimizing tools and statistical requirements for such a stepwise process, rather than trying to do both in one go, would open the way to more successful therapy development in early Alzheimer’s. Senior clinicians in the United States and Europe praised the talk.
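
For readers who want to see the “summed vector” idea in more concrete terms, here is a minimal sketch in Python. It is purely illustrative: the change scores are simulated, the item count and effect sizes are invented, and this is not Hendrix’s actual procedure. The point is simply that weighting items along the average direction of decline downweights items that mostly add noise, improving the signal-to-noise of the composite relative to a plain sum of all items.

    # Illustrative only: simulated 12-month change scores, not real trial data.
    import numpy as np

    rng = np.random.default_rng(0)
    n_patients, n_items = 500, 10

    # The first six items decline on average; the last four mostly add noise.
    true_decline = np.array([0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.0, 0.0, 0.0, 0.0])
    changes = true_decline + rng.normal(scale=1.0, size=(n_patients, n_items))

    # "Summed vector": the mean change across patients, i.e., the average
    # direction in which this population worsens over the interval.
    direction = changes.mean(axis=0)
    weights = direction / np.linalg.norm(direction)

    # Lean composite: project each patient's item changes onto that direction.
    composite = changes @ weights

    def signal_to_noise(x):
        # Mean change divided by its standard deviation.
        return x.mean() / x.std(ddof=1)

    print("plain sum of all items:", round(signal_to_noise(changes.sum(axis=1)), 2))
    print("weighted composite    :", round(signal_to_noise(composite), 2))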

Comments

  1. This webinar did an excellent job of presenting an overview of the current landscape for a critically important area of methods development in pre-dementia AD and MCI trials.

    In my view, the work on the ADAS-Cog8 is especially important and impactful. It addresses the requirements of both face validity and sensitivity. The ADAS is by far the most frequently selected response measure in AD trials. Normative data from tens of thousands of subjects, on both placebo and investigational therapy, are available to AD researchers via ADNI, NACC, Rush, ADCS, and other sources.

    Interesting work has been done by Wouters et al., 2008, as Suzanne Hendrix mentions, and more recently by Huang et al., 2015, and Verma et al., 2015.

    Others have addressed the issue of whether the set of items included in the ADAS-Cog is optimal for mild and pre-dementia AD study populations, and/or whether adding items would improve the instrument as a treatment response measure. In 2013, Hobart and colleagues examined targeting and responsiveness in the standard 11-item ADAS-Cog (Hobart et al., 2013; Hobart et al., 2013), finding ceiling effects and limited responsiveness in all but three items among mild AD subjects in ADNI.

    There are still many open questions in this area. For example, it would be interesting to see how a simplified ADAS performs in the available public testing datasets. This could be an ADAS-3, i.e., an optimally weighted sum or non-linear function of the three responsive items identified by Hobart et al.: word recall, orientation, and word recognition. (A rough, purely illustrative sketch of one such weighting appears at the end of this comment, before the references.)

    To go a step further, this “optimal” sum could be augmented with the CDR-SB.

    In reviewing the literature referenced in this webinar and the studies mentioned here, it is apparent that there is wide variation in the methods being applied. For example, Hendrix uses a partial least squares (PLS) approach to find subsets of tests to group together and then quantifies sensitivity using the mean-to-standard-deviation ratio (MSDR) of change. Is partial least squares the best option, or are there other “machine learning” approaches that might be explored? This is a topic the field might want to address. (A second sketch at the end of this comment illustrates the MSDR and a PLS-derived weighting on simulated data.)

    Hobart et al. use traditional psychometric assessment to evaluate individual items in the ADAS. Their simple result, that only three of the ADAS-11 items are free of ceiling effects and/or skewed distributions, is consistent with Huang et al. but seemingly in conflict with the ADAS-8 results Hendrix presents, as well as with the Rasch analysis-based results of Wouters et al. Also, Hobart et al. do not consider the additional items in the ADAS-13. The PACC work of Donohue et al. that is being applied in the A4 and A5 trials is based on expert opinion and clinical judgment followed by test data validation, as opposed to the training-data-driven assessments and model building that others have applied.

    There is a lot to consider on this topic. As we are in the midst of running and launching a new generation of pre-dementia trials, we really ought to spend the time and energy to develop the best response measures we can. Until biomarker technology is refined, we will continue to need to rely on the available cognitive and functional assessments—and composites thereof.
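
    As a rough, purely illustrative sketch of the “ADAS-3” idea above (simulated data, invented effect sizes, hypothetical variable names; not a validated scoring rule): given change scores on word recall, orientation, and word recognition, the weights that maximize the mean-to-standard-deviation ratio of the weighted sum are proportional to the inverse covariance matrix of the item changes times their mean change vector. In Python:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 400

        # Simulated 12-month change scores for word recall, orientation, and
        # word recognition; the means and covariance below are invented.
        mu_true = np.array([1.0, 0.3, 0.7])
        cov_true = np.array([[1.0, 0.2, 0.3],
                             [0.2, 1.5, 0.1],
                             [0.3, 0.1, 0.8]])
        changes = rng.multivariate_normal(mu_true, cov_true, size=n)

        mu = changes.mean(axis=0)              # observed mean change per item
        sigma = np.cov(changes, rowvar=False)  # observed covariance of changes

        # Weights maximizing mean(w.x) / sd(w.x): w proportional to inv(sigma) @ mu.
        w = np.linalg.solve(sigma, mu)
        w = w / w.sum()                        # rescale; the ratio is scale-invariant

        adas3_change = changes @ w             # hypothetical "ADAS-3" composite change

        def msdr(x):
            # Mean-to-standard-deviation ratio of a change score.
            return x.mean() / x.std(ddof=1)

        print("MSDR, equal weights    :", round(msdr(changes.mean(axis=1)), 2))
        print("MSDR, optimized weights:", round(msdr(adas3_change), 2))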
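
    A second, equally hedged sketch illustrates the sensitivity metric and the PLS step mentioned above. The data are again simulated, the progression anchor (standing in for something like CDR-SB change) and the item loadings are assumptions, and scikit-learn's PLSRegression is used only as a stand-in for whatever PLS implementation Hendrix applied. The MSDR here is simply the mean change divided by its standard deviation.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(2)
        n, n_items = 400, 11

        # Simulated item-level change scores driven by an overall progression
        # anchor; loadings and noise levels are invented for illustration.
        progression = rng.gamma(shape=2.0, scale=0.5, size=n)
        loadings = np.array([0.9, 0.8, 0.7, 0.5, 0.4, 0.3, 0.1, 0.1, 0.0, 0.0, 0.0])
        item_changes = np.outer(progression, loadings) + rng.normal(scale=1.0, size=(n, n_items))

        def msdr(x):
            # Mean-to-standard-deviation ratio of a change score.
            return x.mean() / x.std(ddof=1)

        # MSDR of each individual item: a quick way to rank candidate items.
        print("per-item MSDR:", np.round([msdr(item_changes[:, j]) for j in range(n_items)], 2))

        # One-component PLS of item changes on the progression anchor;
        # the x-weights give a data-driven direction for a composite.
        pls = PLSRegression(n_components=1).fit(item_changes, progression)
        weights = pls.x_weights_[:, 0]
        composite = item_changes @ weights

        print("MSDR, unweighted sum:", round(msdr(item_changes.sum(axis=1)), 2))
        print("MSDR, PLS-weighted  :", round(msdr(composite), 2))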

    References:

    Donohue et al. The preclinical Alzheimer cognitive composite: measuring amyloid-related decline. JAMA Neurol. 2014 Aug;71(8):961-70. PubMed.

    Langbaum et al. An empirically derived composite cognitive test score with improved power to track and evaluate treatments for preclinical Alzheimer's disease. Alzheimers Dement. 2014 Apr 18; PubMed.

    Hobart et al. Putting the Alzheimer's cognitive test to the test I: Traditional psychometric methods. Alzheimers Dement. 2013 Feb;9(1 Suppl):S4-9. PubMed.

    Hobart et al. Putting the Alzheimer's cognitive test to the test II: Rasch Measurement Theory. Alzheimers Dement. 2013 Feb;9(1 Suppl):S10-20. PubMed.

    Verma et al. New scoring methodology improves the sensitivity of the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog) in clinical trials. Alzheimers Res Ther. 2015 Nov 12;7(1):64. PubMed.

    Huang et al. Development of a straightforward and sensitive scale for MCI and early AD clinical trials. Alzheimers Dement. 2014 Jul 8; PubMed.

    Wouters et al. Revising the ADAS-cog for a more accurate assessment of cognitive impairment. Alzheimer Dis Assoc Disord. 2008 Jul-Sep;22(3):236-44. PubMed.
