Saturday, August 5, 2023

Derek Lowe: Too Many Bad Clinical Trials

Looking behind the scenes at clinical trial data can be disturbing, as a paper published in 2020 in the journal Anaesthesia shows, and as this new follow-up article continues to show. The author of the 2020 paper, John Carlisle, is an expert in analyzing such trial data, and several years ago he worked his way through 526 manuscripts, submitted between 2017 and 2020, that reported randomized clinical trials. For 153 of these, he was able to obtain the actual data behind the reported results, the (anonymized) “individual participant data” (IPD). Initially he requested these numbers from the authors of manuscripts where the data looked problematic; then, in 2019-2020, he began requesting them automatically from authors in the countries submitting the most manuscripts overall, namely Egypt, China, India, Iran, Japan, South Korea, and Turkey.

He believes that 44% (73) of the 153 papers with IPD available contained false data, but keep in mind that this ranges from unintentional mistakes (duplicated data entries and the like) up to what appears to be outright fabrication (Carlisle has seen this before!). He was able to pick out false data in only 6 of the other 373 manuscripts. As a percentage of total manuscripts, China and Egypt were by far the worst offenders for false data. Carlisle uses the phrase “zombie trials” for those whose problems were serious enough that the trial would have been (or should have been!) retracted had its flaws been noticed only after publication. 43 of the manuscripts fell into this category, and 20 of those were from China. Trials can sink to this level through either dishonesty or incompetence, and let’s not rule out both at once, either.

Digging this stuff out is not light work:

There were some zombie trials that I did not detect when I initially inspected their spreadsheets, even though I spent hours editing two of these after they had been provisionally accepted for publication. The spreadsheet for one trial contained repeated sequences of numbers in columns for one group, which I had not initially noticed, and I did not notice until the fourth revision of that paper. The authors explained that an uncredited medical student had made the data up for them. We published a different trial in March 2019, which we subsequently identified as zombie (trial number 198; see online Supporting Information, Appendix S1). Anaesthesia had provisionally accepted it for publication in September 2018 and I had edited it to its published version after five revisions. Anaesthesia had not requested individual patient data, but did so the same month it was published, when analyses of individual patient data became routine. The spreadsheet had 536 rows and 91 columns, which exhibited multiple copied segments that became apparent after I ordered the rows by age, height and weight.
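The sorting trick Carlisle describes at the end of that passage can be sketched in a few lines. This is only an illustrative reconstruction, not his actual workflow: the column names (age, height, weight) and the data are hypothetical, and a real check would also look for near-duplicates, not just exact copies. The idea is simply that ordering rows by baseline variables makes copied blocks of participants sit next to each other, where they are easy to spot.

```python
def flag_duplicate_rows(rows):
    """Sort rows by their values (e.g. age, height, weight) and return pairs
    of original row indices whose values are identical -- a crude signal of
    copied participant data."""
    order = sorted(range(len(rows)), key=lambda i: rows[i])
    flagged = []
    for a, b in zip(order, order[1:]):
        if rows[a] == rows[b]:
            flagged.append((a, b))
    return flagged

# Hypothetical IPD fragment: rows of (age, height_cm, weight_kg).
ipd = [
    (34, 172, 70),
    (51, 160, 58),
    (34, 172, 70),   # exact copy of row 0, buried elsewhere in the sheet
    (45, 181, 90),
]

print(flag_duplicate_rows(ipd))  # -> [(0, 2)]
```

Scattered duplicates are invisible when you scroll a 536-row spreadsheet top to bottom; after sorting, they become adjacent, which is presumably why the copied segments only "became apparent" once the rows were reordered.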

Carlisle notes that “you can gauge whether your threshold for disbelief matched mine” by inspecting the detailed list here. But he believes that his findings are (unfortunately) representative of the broader medical literature; there’s no reason to think that submissions to Anaesthesia are outliers. The new follow-up piece at Nature linked in the first paragraph agrees with that assessment, and shows that there are efforts underway in several other fields to do similar estimates (such as the work of Ben Mol). The results so far are not cheerful; that is, they line up well with Carlisle’s results. Reported trials from Iran, Egypt, Turkey, and China feature prominently in these efforts as well.

What to do? You’ll see many suggestions in the Nature piece. A big one is tightening up the criteria for including a published trial in reviews and meta-analyses, since false trials contaminate those downstream papers in turn. There’s also having journals ask (much) more often for anonymized trial data as part of the manuscript review process, although that means a lot more work in the editorial process. And back at the source, there’s trying to get countries with data-trust problems to change the policies that lead to this stuff in the first place. Egypt, for example, has recently passed a law attempting to regulate clinical trials in that country. Universities and other employers who just count up publications or trials as a criterion for promotion are a big part of the problem, as you’d imagine.
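The contamination point is easy to make concrete with a toy calculation. Below is a minimal fixed-effect inverse-variance meta-analysis, with entirely made-up effect sizes and variances for five hypothetical trials, one of which we pretend is later flagged as a zombie. Recomputing the pooled estimate with and without the flagged trial is the kind of sensitivity check a review author could run.

```python
import math

def pooled_estimate(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# Hypothetical mean differences and variances from five trials; suppose the
# last trial (a small study with an implausibly tight variance and a huge
# effect) is the one later flagged as fabricated.
effects   = [0.10, 0.15, 0.08, 0.12, 0.90]
variances = [0.04, 0.05, 0.03, 0.06, 0.01]

with_all, _ = pooled_estimate(effects, variances)
clean, _    = pooled_estimate(effects[:4], variances[:4])
print(round(with_all, 3), round(clean, 3))  # -> 0.514 0.107
```

Because inverse-variance weighting rewards small reported variances, a fabricated trial with suspiciously clean data can dominate the pooled result, which is exactly why flagged trials need to be pulled out of downstream reviews rather than quietly left in.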

What we shouldn’t do is just agree not to think about it too much, and sadly that’s what people who are trying to fix these problems can encounter. Some authors never respond when asked for more data, some editors don’t seem too worked up about pushing for retractions, some review authors don’t care so much about revising their work when it turns out to include flawed studies, and some institutions don’t seem to think that investigating problematic work is much of a priority, either. But we should take the time, and we should give a damn now and then.


https://www.science.org/content/blog-post/too-many-bad-clinical-trials
