Regulatory agencies in the U.S. and Europe have endorsed a quantitative simulation tool that allows researchers to model clinical trials in mild to moderate Alzheimer’s disease. The freely available tool lets researchers optimize the design of new trials. The Tucson, Arizona-based Critical Path Institute (C-Path), an applied research organization that facilitates precompetitive drug development, spearheaded a collaboration among academic scientists, industry, and regulatory groups to develop the tool. Drawing upon data from past clinical trials and from observational studies of disease progression, scientists created a computerized model that predicts the most efficient trial design, size, and length for detecting clinical improvement with a given drug. “This tool ensures that if a trial fails, it fails because the drug is flawed, and not because the trial design was flawed,” said Klaus Romero at C-Path, who co-led the development effort.
The Food and Drug Administration (FDA) declared the simulator a “fit-for-purpose” drug development tool on June 12, and the European Medicines Agency (EMA) qualified it for use on July 1 (see C-Path press release). The FDA coined the term “fit-for-purpose” for quantitative drug development tools, while reserving the term “qualified” for biomarkers and patient-reported outcome scales. “This is the first such tool for any indication to be put through this formal regulatory review process,” Romero noted. He expects that models for other diseases, such as Parkinson’s, will follow suit.
“[The positive regulatory decisions] open the door for disease progression modeling in general, across all indications. AD is leading the way for the whole field of pharmacometrics,” said Brian Corrigan at Pfizer, who co-led development of the model with Romero.
Pharmaceutical companies typically run simulations using in-house data before deciding on trial designs. The new tool integrates data and modeling from numerous sources to provide a fuller picture, said Diane Stephenson, who directs the Coalition Against Major Diseases (CAMD), C-Path’s AD and PD initiative. The C-Path Online Data Repository (CODR) (see ARF related news story), which CAMD launched in 2010, was one of the main sources for developing the simulation tool. Pharmaceutical partners contribute anonymized, individual patient-level data from the placebo arms of their AD clinical trials to the database. The trial simulation tool used data from the first 3,000 patients in the database. “The field hasn’t had access to this clinical trial data until now,” Stephenson noted. Other sources included summary-level clinical trial data from the literature, as well as observational studies from the Alzheimer’s Disease Neuroimaging Initiative (ADNI).
Using these data, the researchers modeled three aspects of AD: the natural disease progression, drug effects, and clinical trial parameters such as the probability that a patient will drop out and the duration, magnitude, and variability of the placebo effect (see Rogers et al., 2012). Data for the disease progression component came from both ADNI and the CAMD database, while drug effect data were taken from the literature. The trial parameters drew on both the literature and the CAMD database. The modeling effort used approaches developed by pharmacometrics researchers such as Nick Holford at the University of Auckland, New Zealand, Mahesh Samtani at Johnson & Johnson, and Kaori Ito at Pfizer, as well as modeling work done at the FDA (see, e.g., Lockwood et al., 2006; Miller et al., 2010; and Samtani et al., 2013). “We integrated those previous efforts and fit them into a comprehensive tool that looked at all the potential aspects that could affect trial design,” Romero said.
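As a rough illustration of how those three components fit together, here is a toy simulation of a single patient’s ADAS-Cog trajectory. Every parameter value below is invented for illustration; none comes from the C-Path model, which is far richer (beta regression, meta-analytic components, and covariates).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_patient(visit_weeks, drug_slowing=0.0):
    """Simulate one patient's ADAS-Cog scores (higher = worse).

    The three modeled aspects appear as: natural progression (a
    patient-specific decline rate), a drug effect (fractional slowing
    of that rate), and trial parameters (a transient placebo benefit
    and random dropout). All numbers are illustrative only.
    """
    baseline = rng.normal(25, 8)               # ADAS-Cog at study entry
    rate = max(rng.normal(5.5, 2.5), 0) / 52   # points of worsening per week
    rate *= (1 - drug_slowing)                 # disease-modifying drug effect
    scores = []
    for t in visit_weeks:
        placebo = -1.5 * np.exp(-t / 8)        # transient placebo benefit
        noise = rng.normal(0, 2)               # visit-to-visit variability
        scores.append(baseline + rate * t + placebo + noise)
        if rng.random() < 0.03:                # ~3% chance of dropout per visit
            break
    return scores

visits = [0, 12, 24, 52, 78]
trajectory = simulate_patient(visits, drug_slowing=0.3)
```

Running many such patients per arm, and many such trials per design, is what turns a model like this into a trial-design tool.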
The researchers developed their model to simulate trials of symptomatic or disease-modifying drugs in mild to moderate AD. The model uses AD Assessment Scale Cognitive Test Battery (ADAS-Cog) scores as the clinical endpoint, because that was the only metric that was consistently collected across all sources of data, Stephenson said. To use the tool, researchers must enter what they know about how their candidate drug works from preclinical studies. This could be, for example, how much the treatment may be expected to slow disease progression. The model then compares different trial designs, such as parallel, delayed start, or crossover, to identify the one that reveals the greatest difference in ADAS-Cog scores. Users can run multiple simulations, varying factors such as the number of participants, trial length, and patient population, to find the most efficient parameters while making sure there is adequate power to detect a drug effect. Importantly, the tool does not replace clinical trials. “Simulated trials are a resource to better design the actual trials, which are still very much needed to demonstrate efficacy for any new product,” Romero said.
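At its core, this compare-designs-and-vary-parameters workflow is a simulation-based power calculation: simulate many trials under each candidate design and count how often the drug effect is detected. A minimal sketch for a two-arm parallel design, using made-up parameters, a simple linear-decline model, and a normal-approximation test rather than the actual tool’s methodology:

```python
import numpy as np

rng = np.random.default_rng(42)

def trial_detects_effect(n_per_arm, weeks=78, slowing=0.3, alpha_z=1.96):
    """Simulate one parallel-group trial; return True if the treatment
    effect on ADAS-Cog change from baseline reaches significance
    (two-sided z-approximation at roughly alpha = 0.05). Toy model."""
    def arm_changes(drug_slowing):
        rate = np.maximum(rng.normal(5.5, 2.5, n_per_arm), 0) / 52
        rate *= (1 - drug_slowing)              # slowing of decline
        noise = rng.normal(0, 3, n_per_arm)     # measurement variability
        return rate * weeks + noise             # change from baseline
    placebo = arm_changes(0.0)
    treated = arm_changes(slowing)
    diff = placebo.mean() - treated.mean()
    se = np.sqrt(placebo.var(ddof=1) / n_per_arm +
                 treated.var(ddof=1) / n_per_arm)
    return abs(diff / se) > alpha_z

def power(n_per_arm, n_sims=500):
    """Fraction of simulated trials that detect the drug effect."""
    return sum(trial_detects_effect(n_per_arm) for _ in range(n_sims)) / n_sims

for n in (25, 50, 100):
    print(n, "per arm -> estimated power", round(power(n), 2))
```

A trial designer would compare such power curves across designs, durations, and populations, then pick the smallest, shortest trial that still retains adequate power.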
“[The simulation tool] should help study designers make informed decisions about critical design issues. By doing that based on real data, they can be much more confident they’re going to have an interpretable study,” said Richard Mohs at Eli Lilly and Company, who worked on the project.
To encourage a positive decision from regulators, the development team followed the steps laid out in the FDA’s 2010 draft guidance on qualifying drug development tools.
Based on lessons learned during development of this first simulation tool, the FDA has now set forth additional recommendations for sponsors who want to submit future quantitative tools. “In many challenging therapeutic areas, no one company has enough data or resources. The FDA encourages these kinds of pre-competitive collaboration among multiple companies to build a shared tool,” said Yaning Wang at the FDA, who worked on the project. Other researchers said that the FDA and EMA rulings should encourage broad use of the simulation tool, since companies can feel confident that regulatory agencies will understand the basis for trial design decisions made with the model. “It improves the odds that a company will get a quick and positive regulatory response to a proposed research plan,” Mohs said.
The new tool could benefit patients, as well. Jean Georges, director of the advocacy group Alzheimer Europe, helped review the model for the EMA. “The qualification of this tool… is an important step. It should reassure patients that they will be able to enroll in more efficiently designed clinical trials in the future,” he wrote to Alzforum (see full comment below). In addition, Stephenson pointed out that patients who participated in trials of unsuccessful drugs now have something to show for their efforts, since their data contributed to this model.
The trial simulation tool will be publicly available via C-Path’s website. It is written in R, a programming language widely used by statisticians. Romero said that the CAMD team and their collaborators will provide assistance in navigating the tool and running simulations.
The model has limitations. Because it is based on legacy trial data from the last two decades, it includes minimal biomarker data. As newer data become available, biomarkers will be incorporated into the model, Romero said. The CODR database has more than doubled since the simulation tool was developed. It now comprises data from 6,500 patients from 24 clinical trials, said Jon Neville, who helps manage the database for C-Path. The next big evolution of the model will likely occur through addition of data from the bapineuzumab and solanezumab studies, which were large phase 3 trials that collected baseline and longitudinal biomarker data for subgroups.
The simulation tool models only mild to moderate AD. To extend the model to mild cognitive impairment (MCI), the researchers will need data from earlier-stage trials. These trials must include biomarkers and use a common clinical outcome measure, Stephenson noted. The field has not yet reached consensus on cognitive outcome measures for early-stage patients, although several are in development (see ARF related webinar).
Large projects like C-Path’s trial simulation tool and CODR would not be possible without standardized data, which allows researchers to combine data from many trials. C-Path collaborates with the Clinical Data Interchange Standards Consortium (CDISC) to develop data standards for AD (see ARF related news story). The CDISC AD data standard is currently being updated to focus on early AD and MCI. It will accommodate new cognitive and functional scales, as well as imaging and cerebrospinal fluid biomarker analysis. In addition, C-Path is working with CDISC to develop standards for several other therapeutic areas, said Enrique Avilés, director of data standards at C-Path. By 2017, the FDA will require standardized data for any new drug submitted to the agency, Stephenson noted. —Madolyn Bowman Rogers.
- DC: Shared Pain Is Lessened—Open-Trial Data Gain AD Model
- New AD Data Standard: FDA Wants It; Will Trial Groups Use It?
- Rogers JA, Polhamus D, Gillespie WR, Ito K, Romero K, Qiu R, Stephenson D, Gastonguay MR, Corrigan B. Combining patient-level and summary-level data for Alzheimer's disease modeling and simulation: a beta regression meta-analysis. J Pharmacokinet Pharmacodyn. 2012 Oct;39(5):479-98. PubMed.
- Lockwood P, Ewy W, Hermann D, Holford N. Application of clinical trial simulation to compare proof-of-concept study designs for drugs with a slow onset of effect; an example in Alzheimer's disease. Pharm Res. 2006 Sep;23(9):2050-9. PubMed.
- Miller R, Ito K, Lalovic B. Models for event-driven data: longitudinal daily adverse event and dropout. J Clin Pharmacol. 2010 Sep;50(9 Suppl):58S-62S. PubMed.
- Samtani MN, Raghavan N, Shi Y, Novak G, Farnum M, Lobanov V, Schultz T, Yang E, Dibernardo A, Narayan VA, et al. Disease Progression Model in MCI Subjects from Alzheimer's Disease Neuroimaging Initiative: CSF Biomarkers Predict Population Subtypes. Br J Clin Pharmacol. 2012 Apr 26; PubMed.