Monday, August 5, 2019

Psychiatry Abstracts: Abundance of Spin

Over 50% of trial abstracts in top psychiatry/psychology journals contained some form of spin

More than half of abstracts from randomized clinical trials (RCTs) published in top psychiatric journals from 2012 to 2017 included some form of spin, according to a qualitative analysis.
Among 116 RCTs with primary endpoints failing to reach statistical significance, 56% still characterized the experimental therapy as beneficial or steered the focus away from the primary outcome to make the results appear more favorable, reported Samuel Jellison, of Oklahoma State University Center for Health Sciences in Tulsa, and colleagues.
Specifically, a quarter of the abstract results sections focused on a positive secondary outcome in lieu of the primary measure, 21% omitted a nonsignificant primary endpoint to highlight an alternative outcome that was significant, and 13% claimed non-inferiority for a nonsignificant outcome, they wrote in BMJ Evidence-Based Medicine.
Others used phrases like “trend toward significance” (13%) or directed attention towards subgroup analyses instead of the primary outcome (13%), they added.
“Doctors … could be reading a recent study over a drug or therapy and they could believe, based off the way the results are framed, that the study achieved some significant outcome that was super helpful to patients, and in reality it wasn’t,” Jellison told MedPage Today. “That can then translate into changes in clinical practice or follow-up studies to verify these results, which could then lead to a decline in patient care or a waste of research resources.”
In the 1990s, a group of 30 experts created the Consolidated Standards of Reporting Trials (CONSORT) to address suboptimal accuracy in the reporting of clinical trials. The checklist, updated in 2010, specifies the essential items that should be included in trial abstracts.
“Studies comparing the accuracy of information reported in a journal abstract with that reported in the text of the full publication have found claims that are inconsistent with, or missing from, the body of the full article,” according to the statement. “Conversely, omitting important harms from the abstract could seriously mislead someone’s interpretation of the trial findings.”
The presence of spin in the abstract is particularly important because in many cases, such as when an article sits behind a paywall or requires membership to view, the abstract is the only information that clinicians can access, the statement noted.
Jellison emphasized that the current study was not intended to single out psychiatry research as a particularly prevalent source of bias; he noted that it seems to be an issue that “infiltrates” nearly all journals. In prior studies, spin was detected in nearly 50% of reporting in oncology trials, a quarter of reporting in anesthesiology trials, and more than half of abstracts of cardiology trials.
Jellison’s group searched PubMed for RCTs published in JAMA Psychiatry (17 trials), the American Journal of Psychiatry (13), the Journal of Child Psychology and Psychiatry (15), Psychological Medicine (26), the British Journal of Psychiatry (28), and the Journal of the American Academy of Child and Adolescent Psychiatry (17). Jellison and one additional author evaluated the presence of spin in the studies after being trained to detect it.
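The article does not reproduce the authors' exact search strategy, but the sketch below shows, under stated assumptions, roughly how a journal-restricted PubMed query for RCTs published from 2012 to 2017 could be run with Biopython's Entrez module. The journal names come from the article; the query syntax, date filter, contact email, and retmax value are illustrative assumptions, not the study's actual methods.

```python
# Illustrative sketch only -- not the authors' actual search strategy.
# Requires Biopython (pip install biopython) and network access to NCBI.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address (placeholder)

# The six journals named in the article
journals = [
    "JAMA Psychiatry",
    "The American Journal of Psychiatry",
    "Journal of Child Psychology and Psychiatry",
    "Psychological Medicine",
    "The British Journal of Psychiatry",
    "Journal of the American Academy of Child and Adolescent Psychiatry",
]

# Restrict to randomized controlled trials published 2012-2017 in those journals
journal_filter = " OR ".join(f'"{j}"[Journal]' for j in journals)
query = f"({journal_filter}) AND randomized controlled trial[pt] AND 2012:2017[dp]"

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found; first PMIDs:", record["IdList"][:5])
```

A query like this only retrieves candidate records; the study's actual inclusion criteria, such as identifying trials whose primary endpoints were nonsignificant, would still require screening each article by hand.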
Overall, spin was mostly found in the abstract results section (21%), the abstract conclusion section (49%), or both (15%), as well as in the title (2%).
Of 57 trials with spin in the abstract conclusions, about one-third claimed benefit based on a significant primary outcome while ignoring other primary outcomes; 26% said an experimental therapy was beneficial due to a significant secondary endpoint; and 19% asserted non-inferiority despite a nonsignificant endpoint, Jellison and colleagues reported.
Others concluded they reached a goal without prespecifying it as an objective (12%), said a therapy was beneficial based on a subgroup analysis (5%), or highlighted the magnitude of difference between two comparators without a significant P-value, they added.
Twelve articles in the analysis reported industry funding, but funding was not associated with the presence of spin (odds ratio 1.0, 95% CI 0.3-3.2), the authors reported. The finding was in line with previous research.
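As a side note on how a figure like "odds ratio 1.0, 95% CI 0.3-3.2" is derived, the short sketch below computes an unadjusted odds ratio with a Wald-type confidence interval from a 2x2 table of funding status versus spin. The counts are hypothetical, chosen only for illustration, since the paper's underlying table is not reproduced in this article.

```python
# Hedged sketch: unadjusted odds ratio and 95% Wald CI from a 2x2 table.
# The counts below are hypothetical; the study's actual table is not shown here.
import math

# Rows: industry funded vs. not funded; columns: spin present vs. absent
a, b = 7, 5     # funded trials: with spin, without spin (hypothetical)
c, d = 58, 46   # non-funded trials: with spin, without spin (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

An interval that spans 1.0 this widely, as in the study's reported estimate, means the data are compatible with no association between industry funding and spin but are too sparse to rule one out.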
Jellison said this analysis was limited because evaluating the presence of spin across trials was subjective, and the findings may not be generalizable across all psychiatric journals.
His group suggested that journal editors invite reviewers to comment on the presence of spin pre-publication.
Jellison and co-authors disclosed no relevant relationships with industry.
