Responsibility and Unpublished Research Results

We often hear the economic argument, "taxpayers paid for the research, so they deserve to see the results." This argument is usually aimed at publishers.

But who is to blame if publishers never even see a manuscript to consider? Who is to blame if research is funded, patients are put at risk, and no outcomes are even recorded in a required government database?

A new study suggests something important is going on: potentially two-thirds of clinical trial results in the US remain unpublished and undocumented more than two years after the trials have concluded.

Speaking to the presumption that funded research results in published papers, the authors give us this quote:

    While seemingly axiomatic that the results of clinical trials led by the faculty at leading academic institutions will undergo peer reviewed publication, our study found that 44% of such trials have not been published more than three, and up to seven, years after study completion.

In other words, can you believe that some of the studies from Dr. Prestigious didn't work?

But with databases like ClinicalTrials.gov now in existence, publication differs from registration and documentation. Unfortunately, the authors of the paper fail, in their discussion, to draw a clear distinction between the two types of failure their study covers.

These two types of failure differ in vital ways. Failing to submit trial results to ClinicalTrials.gov is one type of failure, and one that involves a different hurdle and different implications (reflecting different obligations) than failing to publish in a peer-reviewed journal.

It seems less excusable for data not to be submitted to ClinicalTrials.gov. After all, the only hurdle there is the work involved in reporting the results. Yet researchers I've spoken with describe that hurdle as formidable: the interface and technical implementation of ClinicalTrials.gov make reporting results there a major task. Imagine a manuscript submission system that's twice as cumbersome. The community seems increasingly disenchanted with making the effort, and there is no carrot and no stick to keep them using it.

Yet compliance here should be 100%. Instead, it's far lower, with some major academic centers having less than 10% of their clinical trial results reported in ClinicalTrials.gov. Stanford's compliance rate for reporting results in ClinicalTrials.gov between 2007 and 2010, for example, was 7.6%: only 10 of 131 trials had results deposited there, while 49.6% (65/131) were published in peer-reviewed journals. Overall, publication rates were higher than rates of compliance with depositing results in ClinicalTrials.gov.
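
As a quick check on those figures, here is a minimal sketch, in Python, of the arithmetic behind the two rates, using the Stanford counts cited above:

    # Stanford example from above: trials completed between 2007 and 2010.
    trials_completed = 131
    results_deposited = 10   # trials with results reported in ClinicalTrials.gov
    trials_published = 65    # trials published in peer-reviewed journals

    compliance_rate = results_deposited / trials_completed
    publication_rate = trials_published / trials_completed

    print(f"ClinicalTrials.gov compliance: {compliance_rate:.1%}")   # 7.6%
    print(f"Peer-reviewed publication:     {publication_rate:.1%}")  # 49.6%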

Maybe publishers have a better carrot . . .

In covering the study, a story on NPR elicited a comment that supports this hypothesis:

    I work at a contract research organization that has a large contract with NIH DAIT. We are required to report the clinical trial results to clinicaltrials.gov within one year of "last patient last visit." It is a challenging task but we have a process in place to accomplish this requirement. I don't think researchers deliberately try to hide findings. It takes experience to write acceptable endpoint descriptions, generate an xml file to report adverse events, and properly organize and format the results. When planning a clinical trial resources must be committed to publishing the results at the conclusion of the trial.
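
To give a flavor of the work the commenter describes, here is a minimal sketch, in Python, of turning adverse-event tallies into an XML file. The element names are hypothetical placeholders, not the actual ClinicalTrials.gov schema, which is considerably more detailed:

    import xml.etree.ElementTree as ET

    # Hypothetical adverse-event tallies; a real submission requires far more detail.
    adverse_events = [
        {"term": "Nausea", "organ_system": "Gastrointestinal", "affected": 12, "at_risk": 65},
        {"term": "Headache", "organ_system": "Nervous system", "affected": 7, "at_risk": 65},
    ]

    # Build a simple XML tree; element names are illustrative only.
    root = ET.Element("adverseEventReport")
    for event in adverse_events:
        node = ET.SubElement(root, "adverseEvent")
        ET.SubElement(node, "term").text = event["term"]
        ET.SubElement(node, "organSystem").text = event["organ_system"]
        ET.SubElement(node, "subjectsAffected").text = str(event["affected"])
        ET.SubElement(node, "subjectsAtRisk").text = str(event["at_risk"])

    ET.ElementTree(root).write("adverse_events.xml", encoding="utf-8", xml_declaration=True)

Even in this toy form, the endpoint descriptions, counts, and formatting all have to be assembled by hand, which is the kind of process the commenter's organization has had to build around.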

This is a usability problem, pure and simple, yet one that is clearly depriving scientific researchers and patients of information they may want or need. Where is the outrage over this poor user-interface design? Its effect may be far graver than any subscription barrier when it comes to taxpayer access to study results.

Then we have the data from the study under discussion here pertaining to the percentage of trials published in peer-reviewed journals. It's surprisingly low. But is this because researchers are too lazy to write up the results and submit them to journals? Or is it because the results underwhelmed?

There is a trade-off between publication rates and reproducibility, as I discussed recently in a post here. More publication of lower-quality studies (poorly powered, not predictive, weak hypotheses, weak generalizability) means a lower rate of reproducibility. Perhaps the problem here isn't that a low rate of these studies is published -- it may be, instead, that too many unimpressive and unpromising studies are funded, started, and terminated after getting poor results.

I once participated in a clinical trial that went nowhere. The side-effects of a biological agent were simply intolerable, so most participants dropped out, leaving the researchers with no publishable results. The side-effects were known, but what wasn't known was that patients would stop taking the medication because of them. So why report it in the literature? It added nothing to the knowledge base, except that a bothersome side-effect hurt compliance. This isn't big news.

However, the study I participated in was preliminary, and little funding was squandered in learning what it taught. The authors of the paper discussed here counted papers, but did not calculate the amount of funding spent on trials without registered outcomes or published results. That would have been a more interesting number, and perhaps would have given us something better to chew on. After all, if most of the unpublished/undeposited studies were small, preliminary, and involved fewer patients and less funding, we might have a different potential explanation.

Studies underperform or disappoint for a number of reasons, some bizarre, some pedestrian, some worth pursuing. Not having published results from most of these is probably not doing damage in the larger scheme of things. However, not submitting the data to ClinicalTrials.gov is another issue entirely, and one we need to address. The usability issues with ClinicalTrials.gov may be scuttling a good idea, slowly but surely. Researchers dislike the site, and the benefits of compliance are elusive.

Whatever the cause, the discrepancy between publication and deposit is certainly worth contemplating.