The Literature Isn’t Just Biased, It’s Also Late to the Party

by Carole Federico


Animal studies of drug efficacy are an important resource for designing and performing clinical trials. They provide evidence of a drug’s potential clinical utility, inform the design of trials, and establish the ethical basis for testing drugs in humans. Several recent studies suggest that many preclinical investigations are withheld from publication. Such nonreporting likely reflects the fact that private drug developers have little incentive to publish preclinical studies. However, it potentially deprives stakeholders of complete evidence for making risk/benefit judgments and frustrates the search for explanations when drugs fail to recapitulate the promise shown in animals.

In a forthcoming issue of the British Journal of Pharmacology, my co-authors and I investigate how much preclinical evidence is actually available in the published literature, and when, if at all, it makes an appearance.
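In essence, the analysis compares publication dates: for each drug, did any animal efficacy study appear in print before the first published trial report? Below is a toy sketch of that comparison, using invented records rather than our actual dataset or search strategy:

```python
from datetime import date

# Hypothetical records: preclinical efficacy publication dates and the
# first trial report date for each drug. Invented for illustration only.
records = {
    "drug_A": {"preclinical": [date(2006, 3, 1), date(2010, 7, 1)],
               "first_trial": date(2008, 5, 1)},
    "drug_B": {"preclinical": [date(2011, 2, 1)],
               "first_trial": date(2009, 9, 1)},
    "drug_C": {"preclinical": [],
               "first_trial": date(2012, 1, 1)},
}

# A drug counts as "unsupported" if no efficacy study appeared in print
# before its first trial report.
unsupported = [
    drug for drug, r in records.items()
    if not any(pub < r["first_trial"] for pub in r["preclinical"])
]

print(f"{len(unsupported)}/{len(records)} drugs had no efficacy study "
      f"published before the first trial report: {unsupported}")
```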

Although we identified a large number of preclinical studies, the vast majority were reported only after publication of the first trial. In fact, for 17% of the drugs in our sample, no efficacy studies were published before the first trial report. And when we performed a similar analysis matching preclinical studies and clinical trials by disease area, the numbers were even more dismal: for more than a third of the indications tested in trials, we were unable to identify any published efficacy studies in models of the same indication.

There are two possible explanations for this observation, both of which have troubling implications. One is that research teams are not performing efficacy studies until after trials are initiated and/or published. Though this would seem surprising and inconsistent with ethics policies, FDA regulations do not emphasize review of animal efficacy data when approving the conduct of phase 1 trials. The other is that drug developers do precede trials with animal studies, but withhold them or publish them only after trials are complete. This interpretation also raises concerns, since delayed publication circumvents mechanisms, like peer review and replication, that promote systematic and valid risk/benefit assessment for trials.

The take-home message is this: animal efficacy studies supporting specific trials are often published long after the trial itself is published, if at all. This represents a threat to human protections, animal ethics, and scientific integrity. We suggest that animal care committees, ethics review boards, and biomedical journals take measures to correct these practices, such as requiring prospective registration of preclinical studies or creating publication incentives that are meaningful for private drug developers.

BibTeX

@Manual{stream2014-542,
    title = {The Literature Isn’t Just Biased, It’s Also Late to the Party},
    journal = {STREAM research},
    author = {Carole Federico},
    address = {Montreal, Canada},
    date = 2014,
    month = jun,
    day = 30,
    url = {http://www.translationalethics.com/2014/06/30/the-literature-isnt-just-biased-its-also-late-to-the-party/}
}

MLA

Carole Federico. "The Literature Isn’t Just Biased, It’s Also Late to the Party" Web blog post. STREAM research. 30 Jun 2014. Web. 05 Dec 2024. <http://www.translationalethics.com/2014/06/30/the-literature-isnt-just-biased-its-also-late-to-the-party/>

APA

Carole Federico. (2014, Jun 30). The Literature Isn’t Just Biased, It’s Also Late to the Party [Web log post]. Retrieved from http://www.translationalethics.com/2014/06/30/the-literature-isnt-just-biased-its-also-late-to-the-party/


Uncaging Validity in Preclinical Research

by Valerie Henderson


High attrition rates in drug development bedevil drug developers, ethicists, health care professionals, and patients alike. Increasingly, commentators suggest that the attrition problem relates partly to prevalent methodological flaws in the conduct and reporting of preclinical studies.

Preclinical efficacy studies involve administering a putative drug to animals (usually mice or rats) that model the disease experienced by humans.  The outcome sought in these laboratory experiments is efficacy, making them analogous to Phase 2 or 3 clinical trials.

However, that’s where the similarities end. Unlike trials, preclinical efficacy studies employ only a limited repertoire of the methodological practices aimed at reducing threats to clinical generalization. These quality-control measures, including randomization, blinding, and the performance of a power calculation, are standard in the clinical realm.
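To make the last of these concrete: a power calculation fixes the number of animals per arm before the experiment begins. Here is a minimal sketch using the standard normal-approximation formula for a two-sample comparison of means; the effect size and error rates are hypothetical, not drawn from any particular study:

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group sample size for a two-sample comparison of means
    (normal approximation). effect_size is Cohen's d."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided test
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical example: detecting a large effect (d = 1.0) with 80% power
# at alpha = 0.05 requires roughly 16 animals per arm.
print(n_per_group(effect_size=1.0))  # -> 16
```

The formula also makes plain why underpowered studies are tempting: halving the detectable effect size quadruples the required number of animals.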

This mismatch in scientific rigor hasn’t gone unnoticed, and numerous commentators have urged better design and reporting of preclinical studies.   With this in mind, the STREAM research group sought to systematize current initiatives aimed at improving the conduct of preclinical studies.  The results of this effort are reported in the July issue of PLoS Medicine.

In brief, we identified 26 guideline documents, extracted their recommendations, and classified each according to the particular validity type – internal, construct, or external – it was aimed at addressing. We also identified the practices that were most commonly recommended, and used these to create a STREAM checklist for designing and reviewing preclinical studies.
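The classification step is, at bottom, simple bookkeeping. A toy sketch of the tally, with made-up guideline names and recommendations standing in for the real extraction:

```python
from collections import Counter

# Hypothetical (guideline, recommendation, validity type) records;
# the actual review extracted recommendations from 26 guideline documents.
classified = [
    ("guideline_1", "randomize animals to arms", "internal"),
    ("guideline_1", "use a clinically relevant disease model", "construct"),
    ("guideline_2", "randomize animals to arms", "internal"),
    ("guideline_2", "replicate findings in a second species", "external"),
    ("guideline_3", "blind outcome assessment", "internal"),
]

by_validity = Counter(vtype for _, _, vtype in classified)
by_practice = Counter(practice for _, practice, _ in classified)

print(by_validity.most_common())   # which validity types dominate the guidance
print(by_practice.most_common(1))  # the most commonly recommended practice
```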

We found that guidelines mainly focused on practices aimed at shoring up internal validity and, to a lesser extent, construct validity.  Relatively few guidelines addressed threats to external validity.  Additionally, we noted a preponderance of guidance on preclinical neurological and cerebrovascular research; oddly, none addressed cancer drug development, an area with perhaps the highest rate of attrition.

So what’s next? We believe the consensus recommendations identified in our review provide a starting point for developing preclinical guidelines in realms like cancer drug development. We also think our paper identifies some gaps in the guidance literature – for example, a relative paucity of guidelines on the conduct of preclinical systematic reviews. Finally, we suggest our checklist may be helpful for investigators, IRB members, and funding bodies charged with designing, executing, and evaluating preclinical studies.

Commentaries and lay accounts of our findings can be found in PLoS Medicine, CBC News, McGill Newsroom and Genetic Engineering & Biotechnology News.

BibTeX

@Manual{stream2013-300,
    title = {Uncaging Validity in Preclinical Research},
    journal = {STREAM research},
    author = {Valerie Henderson},
    address = {Montreal, Canada},
    date = 2013,
    month = aug,
    day = 5,
    url = {http://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/}
}

MLA

Valerie Henderson. "Uncaging Validity in Preclinical Research" Web blog post. STREAM research. 05 Aug 2013. Web. 05 Dec 2024. <http://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/>

APA

Valerie Henderson. (2013, Aug 05). Uncaging Validity in Preclinical Research [Web log post]. Retrieved from http://www.translationalethics.com/2013/08/05/uncaging-validity-in-preclinical-research/


Tea Leaves: Predicting Risk and Benefit in Translation

by Jonathan Kimmelman


Every early phase trial begins with a series of predictions: that a new drug will show clinical utility down the road, that risks to study volunteers will be manageable, and, perhaps, that patients in trials will benefit. Make a bad prediction here, and people potentially get hurt and resources are wasted. So how good a job do we do with these predictions?


Hard to know, but given the high rate of failure in clinical translation, there are grounds for believing that various stakeholders go into early phase trials with an excess of optimism. In the current issue of PLoS Medicine, Alex London and I posit two problems with the way decision-makers make predictions in early phase trials. First, they underattend to frequent and systematic flaws in the preclinical evidence base. Second, they draw on an overly narrow evidence base (what we call “evidential conservatism”) that obscures any assessment of whether preclinical studies in a given research area are a reliable indicator of an agent’s promise.

Because the article is open access, readers are invited to view it here. The article has garnered a decent amount of press; digestible summaries can be found at the Scientist and the Pittsburgh Post-Gazette. Also check out a commentary commissioned by the journal editors. (photo credit: canopic 2010)

BibTeX

@Manual{stream2011-54,
    title = {Tea Leaves: Predicting Risk and Benefit in Translation},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2011,
    month = mar,
    day = 21,
    url = {http://www.translationalethics.com/2011/03/21/tea-leaves-predicting-risk-and-benefit-in-translation/}
}

MLA

Jonathan Kimmelman. "Tea Leaves: Predicting Risk and Benefit in Translation" Web blog post. STREAM research. 21 Mar 2011. Web. 05 Dec 2024. <http://www.translationalethics.com/2011/03/21/tea-leaves-predicting-risk-and-benefit-in-translation/>

APA

Jonathan Kimmelman. (2011, Mar 21). Tea Leaves: Predicting Risk and Benefit in Translation [Web log post]. Retrieved from http://www.translationalethics.com/2011/03/21/tea-leaves-predicting-risk-and-benefit-in-translation/


Conditions of Collaboration: Protecting the Integrity of the Scientific Enterprise

by Jonathan Kimmelman

So what does it take to keep medical research a well-oiled enterprise that efficiently and effectively delivers cures? Lots of cooperation, or so I argue along with co-authors Alex John London and Marina Emborg in a piece appearing in Science [a publicly accessible version of the essay is available at Science Progress]. Unfortunately, we argue, the way our system of drug development currently thinks about the ethics of clinical research does not place sufficient emphasis on the conditions necessary to sustain this cooperation.


Right now, oversight of clinical research focuses almost exclusively on protecting the personal interests of human subjects by obtaining valid informed consent and ensuring that risks are reasonable in relation to benefits. We suggest that this ostensibly private transaction between investigators and patient-volunteers has a public dimension in at least three ways. First, such private transactions inevitably draw on public resources. Second, they have externalities: adverse events occurring in one trial have the potential to disrupt collaborations elsewhere in the research system. Third, lax oversight of such transactions creates conditions in which consumers have difficulty identifying (and hence rewarding) producers of high-quality goods, namely, well-designed trials.

We suggest that decisions about whether to initiate highly innovative clinical trials that draw on such public goods must take into consideration factors lying beyond the personal interests of human volunteers. (photo credit: McKillaboy, Cataglyphis velox 22, 2009)

BibTeX

@Manual{stream2010-65,
    title = {Conditions of Collaboration: Protecting the Integrity of the Scientific Enterprise},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2010,
    month = may,
    day = 18,
    url = {http://www.translationalethics.com/2010/05/18/conditions-of-collaboration-protecting-the-integrity-of-the-scientific-enterprise/}
}

MLA

Jonathan Kimmelman. "Conditions of Collaboration: Protecting the Integrity of the Scientific Enterprise" Web blog post. STREAM research. 18 May 2010. Web. 05 Dec 2024. <http://www.translationalethics.com/2010/05/18/conditions-of-collaboration-protecting-the-integrity-of-the-scientific-enterprise/>

APA

Jonathan Kimmelman. (2010, May 18). Conditions of Collaboration: Protecting the Integrity of the Scientific Enterprise [Web log post]. Retrieved from http://www.translationalethics.com/2010/05/18/conditions-of-collaboration-protecting-the-integrity-of-the-scientific-enterprise/


Filing Cabinet Syndrome: The Effect of Nonpublication of Preclinical Research

by Jonathan Kimmelman

Much has already been said about Filing Cabinet syndrome in medical research: the tendency of researchers to publish exciting results from clinical trials and to stash null or negative findings safely away from public view in a filing cabinet. Nonpublication distorts the medical literature because it prevents medical practitioners from accessing negative information about drugs. Recall that, back in 2004, New York attorney general Eliot Spitzer sued GlaxoSmithKline for suppressing trial results that showed an elevated risk of suicide for adolescents taking the antidepressant drug Paxil; this and several similar episodes led the FDA, major medical journals, the World Health Organization, the World Medical Association, and others to require researchers to register clinical trials before enrolling any patients.


Yet important gaps remain. In the March 2010 issue of PLoS Biology, Emily S. Sena and coauthors provide the most detailed analysis yet of one of these gaps: nonpublication of preclinical (animal) studies. They aggregated the results of 16 systematic reviews of preclinical studies of acute ischaemic stroke, and used statistical methods to estimate the degree of publication bias and its likely effect on measured disease responses. Among other things, they found that 16% of animal experiments were not published, leading to a 31% overstatement of efficacy. The authors note: “we estimate that for the interventions described here, experiments involving some 3,600 animals have remained unpublished. We consider this practice to be unethical.”
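The arithmetic behind such an overstatement is easy to illustrate. Below is a minimal fixed-effect (inverse-variance) pooling sketch in which a few null studies sit unpublished in the filing cabinet. All numbers are invented, and this is a simplification of, not a reconstruction of, the trim-and-fill-style methods Sena and colleagues applied to real systematic review data:

```python
# (effect size, standard error) pairs. The "exciting" results get published;
# the null results stay in the filing cabinet. All values are invented.
published = [(0.9, 0.20), (0.7, 0.30), (0.8, 0.25)]
unpublished = [(0.0, 0.30), (0.1, 0.35)]

def pooled(studies):
    """Fixed-effect (inverse-variance weighted) pooled effect size."""
    weights = [1 / se**2 for _, se in studies]
    return sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)

biased = pooled(published)
complete = pooled(published + unpublished)
print(f"published only: {biased:.2f}")
print(f"all studies:    {complete:.2f}")
print(f"overstatement:  {(biased - complete) / complete:.0%}")
```

Even a couple of withheld null results can inflate the pooled estimate by roughly a third, the same order of magnitude Sena and coauthors report.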

The authors urge that central registries of preclinical studies be established and maintained, a call that is unlikely to be heeded anytime soon by companies that have much at stake in the secrecy of preclinical research. But their proposal ought to be taken seriously by anyone committed not only to respecting animals used in medical research, but also to protecting the welfare of human beings who might enroll in possibly unwarranted clinical research. (photo credit: amy allcock 2009)

BibTeX

@Manual{stream2010-66,
    title = {Filing Cabinet Syndrome: The Effect of Nonpublication of Preclinical Research},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2010,
    month = may,
    day = 11,
    url = {http://www.translationalethics.com/2010/05/11/filing-cabinet-syndrome-the-effect-of-nonpublication-of-preclinical-research/}
}

MLA

Jonathan Kimmelman. "Filing Cabinet Syndrome: The Effect of Nonpublication of Preclinical Research" Web blog post. STREAM research. 11 May 2010. Web. 05 Dec 2024. <http://www.translationalethics.com/2010/05/11/filing-cabinet-syndrome-the-effect-of-nonpublication-of-preclinical-research/>

APA

Jonathan Kimmelman. (2010, May 11). Filing Cabinet Syndrome: The Effect of Nonpublication of Preclinical Research [Web log post]. Retrieved from http://www.translationalethics.com/2010/05/11/filing-cabinet-syndrome-the-effect-of-nonpublication-of-preclinical-research/


Mice- Three Different Ones: Towards More Robust Preclinical Experiments

by Jonathan Kimmelman

One of the most exciting and intellectually compelling talks thus far at the American Society of Gene Therapy meeting was Pedro Lowenstein’s.  A preclinical researcher who works on gene transfer approaches to brain malignancies (among other things), Lowenstein asked the question: why do so many gene transfer interventions that look promising in the laboratory fail during clinical testing? His answer: preclinical studies lack “robustness.”


In short, first-in-human trials are typically launched on the basis of a pivotal laboratory study showing statistically significant differences between treatment and control arms. In addition to decrying the “p-value” fetish, in which researchers, journal editors, and granting agencies view “statistical significance” as having magical qualities, Lowenstein urged preclinical researchers to test the “nuances” and “robustness” of their systems before moving into human studies.

He provided numerous provocative examples in which a single preclinical study showed very impressive, “significant” effects on treating cancer in mice. When the identical intervention was tried with seemingly small variations (e.g., different mouse strains, different gene promoters), the “significant effects” vanished. In short, Lowenstein’s answer to the question of why so many human trials fail to recapitulate major effects seen in laboratory studies is: we aren’t designing and reviewing preclinical studies properly. Anyone (is there anyone?) who has followed this blog knows: I completely agree. This is an ethical issue in scientific clothing. (photo credit: Rick Eh, 2008)
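Lowenstein’s fragility point is easy to reproduce in simulation. In the hypothetical sketch below, a treatment genuinely works in one mouse strain and not in two others; a pivotal study run only in the favorable strain will typically clear the p < 0.05 bar, while the same protocol repeated in other strains will not. Strain effects, sample sizes, and noise levels are all invented:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Hypothetical true treatment effects (in SD units): strong in strain A,
# absent in strains B and C.
strain_effects = {"A": 2.0, "B": 0.0, "C": 0.0}
n = 10  # animals per arm

def run_experiment(effect):
    """Simulate one two-arm study and return the t-test p-value."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    return ttest_ind(treated, control).pvalue

# The "pivotal" study, run in the favorable strain, typically looks compelling...
print(f"pivotal study, strain A: p = {run_experiment(strain_effects['A']):.4f}")

# ...but repeating the identical protocol across strains tells another story.
for strain, effect in strain_effects.items():
    print(f"replication, strain {strain}: p = {run_experiment(effect):.4f}")
```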

BibTeX

@Manual{stream2009-98,
    title = {Mice- Three Different Ones: Towards More Robust Preclinical Experiments},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2009,
    month = may,
    day = 29,
    url = {http://www.translationalethics.com/2009/05/29/mice-three-different-ones-towards-more-robust-preclinical-experiments/}
}

MLA

Jonathan Kimmelman. "Mice- Three Different Ones: Towards More Robust Preclinical Experiments" Web blog post. STREAM research. 29 May 2009. Web. 05 Dec 2024. <http://www.translationalethics.com/2009/05/29/mice-three-different-ones-towards-more-robust-preclinical-experiments/>

APA

Jonathan Kimmelman. (2009, May 29). Mice- Three Different Ones: Towards More Robust Preclinical Experiments [Web log post]. Retrieved from http://www.translationalethics.com/2009/05/29/mice-three-different-ones-towards-more-robust-preclinical-experiments/


STAIRing at Method in Preclinical Studies

by Jonathan Kimmelman

Medical research, we all know, is highly prone to bias. Researchers are, after all, human in their tendencies to mix desire with assessment. So too are trial participants. Since the late 1950s, epidemiologists have introduced into clinical research a number of practices designed to reduce or eliminate sources of bias, including randomization of patients, masking (or “blinding”) of volunteers and physician-investigators, and statistical analysis.


In past entries, I have argued for extending such methodological rigor to preclinical research. The position has three defenses. First, phase 1 human trials predicated on weak preclinical evidence are insufficiently valuable to justify their execution. Second, methodologically weak preclinical research is an abuse of animals. Third, publication of methodologically weak studies is a form of “publication pollution.”

Two recent publications underscore the need for greater rigor in preclinical studies. The first is a paper in the journal Stroke (published online August 14, 2008; also reprinted in the Journal of Cerebral Blood Flow and Metabolism). Many of the paper’s authors have doggedly pursued the cause of methodological rigor in preclinical stroke research by publishing a series of meta-analyses of preclinical studies. In this article, Malcolm Macleod and co-authors outline eight practices that journal editors and referees should look for when reviewing preclinical studies. Many are urged by STAIR (Stroke Therapy Academic Industry Roundtable), a consortium organized in 1999 to strengthen the quality of stroke research.

Their recommendations are:

1- Animals (precise species, strain, and other details should be provided)
2- Sample-size calculation
3- Inclusion and exclusion criteria for animals
4- Randomization of animals
5- Allocation concealment
6- Reporting of animals excluded from analysis
7- Masked outcome assessment
8- Reporting of conflicts of interest and funding
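Items 4 and 5 pair naturally in software: a coded allocation list generated up front gives both randomization and concealment. A minimal sketch, assuming a simple two-arm design with hypothetical coded animal IDs:

```python
import random

def randomize(animal_ids, arms=("treatment", "control"), seed=None):
    """Randomly allocate animals to two arms in equal numbers.

    Keeping the returned mapping with a coordinator, and scoring outcomes
    by coded ID only, conceals the allocation from outcome assessors.
    """
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {aid: arms[0] for aid in ids[:half]} | {aid: arms[1] for aid in ids[half:]}

# Hypothetical cohort of 8 coded animals.
allocation = randomize([f"mouse_{i:02d}" for i in range(8)], seed=42)
print(allocation)  # held by the coordinator; assessors see only coded IDs
```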

There’s an interesting, implicit claim in this paper: journal editors and referees partly bear the blame for poor methodological quality in preclinical research. In my next post, I will turn to a related news article about preclinical studies in Amyotrophic Lateral Sclerosis. (photo credit: 4BlueEyes, 2006)

BibTeX

@Manual{stream2008-132,
    title = {STAIRing at Method in Preclinical Studies},
    journal = {STREAM research},
    author = {Jonathan Kimmelman},
    address = {Montreal, Canada},
    date = 2008,
    month = oct,
    day = 6,
    url = {http://www.translationalethics.com/2008/10/06/stairing-at-method-in-preclinical-studies/}
}

MLA

Jonathan Kimmelman. "STAIRing at Method in Preclinical Studies" Web blog post. STREAM research. 06 Oct 2008. Web. 05 Dec 2024. <http://www.translationalethics.com/2008/10/06/stairing-at-method-in-preclinical-studies/>

APA

Jonathan Kimmelman. (2008, Oct 06). STAIRing at Method in Preclinical Studies [Web log post]. Retrieved from http://www.translationalethics.com/2008/10/06/stairing-at-method-in-preclinical-studies/

