- Two research letters found that 3.3% to 5.7% of research manuscripts submitted to journals disclosed use of artificial intelligence (AI) in their preparation, but those figures are likely underreported.
- For papers submitted to JAMA Network journals, the most commonly disclosed use of AI was for correction or refinement of language (68%), and for papers submitted to BMJ journals, the most frequently disclosed AI tools were chatbots, especially ChatGPT.
- JAMA editors suggest that all publication editors continue to develop guidance around the use of AI in research and publishing.
A small proportion of authors disclosed using artificial intelligence (AI) in preparing research manuscripts submitted to journals, but that figure is likely underreported, according to two cross-sectional studies.
About 3.3% of manuscripts submitted to JAMA Network journals and 5.7% of those submitted to BMJ journals disclosed use of AI, two research groups reported in research letters to JAMA.
One letter, by Roy Perlis, MD, editor of JAMA+ AI, and colleagues, assessed more than 105,000 manuscripts submitted to 13 JAMA Network journals from Aug. 29, 2023 to Oct. 31, 2025. The other, by Sara Schroter, PhD, of the BMJ Group, and colleagues, assessed more than 25,000 submissions to BMJ journals from April 8 to Nov. 6, 2024.
JAMA Network began requiring authors to disclose use of AI in their manuscripts in August 2023, and BMJ Group implemented mandatory questions about AI use in manuscripts in April 2024.
"I think it's absolutely underreported," Perlis told MedPage Today. "We simply have to trust authors to be honest, as we do for other aspects of their submissions and disclosures."
Perlis said more journals have started asking for disclosure of AI use, which he says is "valuable."
"In particular, authors need to recognize that they are still wholly responsible for the content of the article," Perlis said. "The act of disclosing AI use ensures that they at least momentarily consider this critical point."
Perlis and colleagues found that disclosure of AI use rose significantly over the course of their analysis, from 1.71% at the start to 5.97% by the end.
Uses included correction or refinement of language (67.7%), statistical model development (7.3%), other data analysis (6.3%), manuscript drafting (5.5%), and search and evaluation of literature (4.3%). Use was not classified in 8.8%.
There was a greater likelihood of disclosed AI use for Viewpoints (OR 1.78, 95% CI 1.56-2.03) and Letters to the Editor (OR 1.72, 95% CI 1.50-1.97) compared with Original Investigations, they reported. Other characteristics associated with disclosure of AI use included having a corresponding author from a country where English is not an official language, as well as manuscripts that were rejected or withdrawn before review, compared with those that were accepted or invited to revise.
All specialty journal submissions were less likely to be associated with AI use than JAMA submissions except for JAMA Oncology -- which wasn't significantly different from JAMA -- and JAMA Psychiatry, which was more likely to be associated with AI use disclosure (OR 1.18, 95% CI 1.00-1.38).
Disclosure of AI use also rose over the course of the BMJ journals study, from 4.5% in April 2024 to 7.3% in October 2024.
The most frequently disclosed tools were chatbots (56.7%) -- ChatGPT was the tool of choice -- followed by writing assistants (12.7%), primarily Grammarly. A total of 31.4% did not disclose the specific AI tool used.
Most authors (87.2%) reported using AI to improve the quality of their writing, the researchers said.
The difference in disclosure of AI use wasn't statistically significant between The BMJ and other BMJ journals. Authors from South America (OR 1.75, 95% CI 1.22-2.49) and Europe (OR 1.28, 95% CI 1.14-1.45) were more likely to report AI use than those from Asia, they found.
The rate of disclosure reported in their study is "substantially lower than estimates from recent surveys of researchers' general use of AI," but it aligns with the Perlis paper, Schroter and colleagues wrote.
"Taken together, these results suggest underreporting, perhaps reflecting authors' deliberate omission or uncertainty over what AI use needs to be disclosed," they wrote.
A limitation of both studies was that data from 2023 and 2024 may not accurately reflect the rapidly evolving use of AI in manuscript submissions.
In an accompanying Editor's Note, Preeti Malani, MD, and Joseph Ross, MD, both deputy editors of JAMA, agreed that actual use of AI is likely higher due to underreporting, and said the letters are likely to inform future guidance on AI use in research.
"The majority of submissions used AI to improve writing and refine language, but the increasing availability of generative AI tools is likely to lead more investigators to use AI for more advanced tasks typically considered key, intellect-driven aspects of the scientific process, such as summarizing the existing literature, analyzing data, and drafting manuscripts," Malani and Ross wrote.
"Journal editors, publishers, and others should anticipate further increases in AI use and continue to develop guidance and boundaries to promote transparency and maintain research integrity," they concluded.
Disclosures
Perlis reported financial relationships with Circular Genomics, Genomind, Atella, and Alkermes. Co-authors had no disclosures.
Schroter is a full-time employee of BMJ Group. Co-authors reported relationships with Springer, Wiley, and BMJ Group.
Editorialist Ross reported relationships with the FDA, NIH, Johnson & Johnson, Greenwall Foundation, and Arnold Ventures. Malani had no disclosures.