Study finds selective reporting of trials of high-risk heart devices

By Rob Goodier

NEW YORK (Reuters Health) - Researchers have found numerous cases of selective reporting after examining a decade's worth of publications on safety and efficacy trials of high-risk cardiovascular devices.

Trial data submitted to the U.S. Food and Drug Administration for premarket approval frequently went unpublished, they reported in the BMJ, online June 10. And when the results did appear in the literature, they often differed from those submitted to the regulator.

"Besides the lack of consistency on important trial features such as the primary endpoint and the number and age and sex of the subjects, between the trial summaries and the published data, another important issue is the information that is missing from the trial summaries such as the investigators' names and locations (and) the name of the trial," Dr. Rita Redberg from the University of California, San Diego, the senior author of the study, told Reuters Health by email.

Over a 10-year period ending in 2010, Dr. Redberg and colleagues identified 177 trials reported to the FDA on 106 high-risk cardiovascular devices such as artificial heart valves and coronary stents. By 2013, fewer than half of those trials, 86 in total, had been published in peer-reviewed journals.

Many of the published papers contained discrepancies compared with the trial data submitted to the FDA. In more than a quarter, for example, the reported number of trial participants differed, and only 45% of the primary results were identical.

Some primary endpoints in the FDA data were switched to secondary endpoints in the journals, the researchers found.

In one example, a trial of Abbott Vascular's Xience V Rapid Exchange Everolimus-Eluting Stent System, the team says the FDA summary refers to two "co-primary endpoints": in-segment late loss at 240 days and target-vessel failure at 270 days. But when the results were published in JAMA in 2008, those co-primary endpoints were split into one primary and one secondary endpoint.

"In this case, the primary endpoint in the publication showed superiority of the tested device, whereas the rebranded secondary endpoint showed only non-inferiority," the researchers write.

Dr. Redberg said the effect of renaming the endpoints was to obfuscate the results.

The lead author of the JAMA report, Dr. Gregg Stone of Columbia University in New York, disagrees with that interpretation and says he has submitted a letter to the BMJ explaining his position.

The FDA-approved protocol lists the endpoints as primary and secondary, and Dr. Stone's team and the FDA decided to call them "co-primary" because both had to be met for the trial's success, he told Reuters Health by email.

"No obfuscation was intended, and the JAMA article clearly explained the reason for the use of this terminology. This is clearly a matter of semantics, and Chang et al (Dr. Redberg's team) are trying to make a mountain out of a molehill," Dr. Stone said.

In other cases, objective performance criteria that were included in the FDA summaries were not mentioned in the subsequent peer-reviewed papers, according to the new report.

For example, a cryoablation catheter failed to meet performance criteria for two of three endpoints according to the FDA summary, but the paper published in Heart Rhythm in 2004 made no mention of any such criteria for any endpoint. Instead, it reported a success rate of 83 percent and concluded that catheter cryoablation is safe and effective.

Two authors of the paper in Heart Rhythm did not return requests to comment.

The new findings add to a growing body of research showing bias in drug and other clinical trial reporting. In an editorial in the same journal, Dr. Sidney Wolfe, founder of Public Citizen's Health Research Group in Washington, D.C., called this kind of selective reporting a betrayal of trial participants that harms patients.

It is not enough that the data are publicly available on the FDA's website, Dr. Wolfe says. Most people do not know the data exist, and they are hard to find even for those who do, he notes.

Dr. Redberg can attest to the difficulty she and her colleagues had in tracking down the trial data for their research.

"The FDA website is indeed not what one would call user-friendly and it takes a lot of patience and poking around to find information like these trial summaries," Dr. Redberg said. "Most clinicians will never see this clinical trial information on approved medical devices if it is not published in a medical journal."

In response, journals should make FDA reviews available to readers, Dr. Redberg and her colleagues write.

Journals could also implement a policy similar to that of the BMJ, which since January 2013 has required study authors to make anonymized patient data available on request.

But there should also be a public database of trial results, according to Dr. Wolfe. He points to clinicaltrials.gov, which lists details such as study design, outcome measures and eligibility criteria for participants, but not the results. A site that also included results could help solve this problem of reporting bias, Dr. Wolfe writes.

In the absence of these kinds of solutions, publication bias prevents patients from knowing information that could affect their health, he adds.

SOURCE: http://bit.ly/1CvKtZV and http://bit.ly/1HjvjfG

BMJ 2015.

(c) Copyright Thomson Reuters 2015.