In a rapidly changing educational landscape, medical students use artificial intelligence (AI) in their residency applications, faculty use it in their letters of recommendation, and some residency program directors use it to help sift through the high number of applications they receive to fill a limited number of available residency positions.
AI has become increasingly pervasive in residency recruitment, a process that culminates in Match Day.
About Our Research
Medscape continually surveys physicians and other medical professionals about key practice challenges and current issues, creating high-impact analyses. For example, the Medscape AI Adoption in Healthcare Report 2024 found that:
- 39% of physicians worry about ethical dilemmas with AI.
- 76% of doctors think AI still faces a reliability issue.
- 57% believe it will improve efficiency.
- 60% say it will help identify patterns that humans might miss.
However, the organization managing the process that matches applicants with compatible residency programs says it isn't using AI yet and hasn't advised program directors on its use. The National Resident Matching Program (NRMP) uses its own algorithm, a set of computerized mathematical calculations that processes applicants' ranked preferences to place them in the most favored program that also ranks them, said Laurie Curtin, chief operating officer.
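The NRMP's Roth-Peranson algorithm is a variant of applicant-proposing deferred acceptance. A toy sketch conveys the core idea; the real Match also handles couples matching and other features omitted here:

```python
# Toy sketch of applicant-proposing deferred acceptance, the core idea
# behind the NRMP's Roth-Peranson algorithm. Couples matching and other
# real-world Match features are deliberately omitted.

def deferred_acceptance(applicant_prefs, program_prefs, quotas):
    """applicant_prefs: {applicant: [programs in preference order]}
       program_prefs:  {program: [applicants in preference order]}
       quotas:         {program: number of positions}"""
    # Precompute each program's ranking of applicants for fast lookup.
    rank = {p: {a: i for i, a in enumerate(prefs)}
            for p, prefs in program_prefs.items()}
    matches = {p: [] for p in program_prefs}   # tentatively held applicants
    next_choice = {a: 0 for a in applicant_prefs}
    free = list(applicant_prefs)               # applicants still proposing

    while free:
        a = free.pop()
        prefs = applicant_prefs[a]
        if next_choice[a] >= len(prefs):
            continue                           # rank list exhausted: unmatched
        p = prefs[next_choice[a]]
        next_choice[a] += 1
        if a not in rank[p]:
            free.append(a)                     # program did not rank this applicant
            continue
        matches[p].append(a)
        matches[p].sort(key=lambda x: rank[p][x])
        if len(matches[p]) > quotas[p]:
            bumped = matches[p].pop()          # displace least-preferred holder
            free.append(bumped)
    return matches
```

Because applicants propose in their own preference order and programs only ever trade up, the result is stable: no applicant and program would both prefer each other over their assigned outcome.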
“AI is a growing presence in the application and recruitment space, and as the NRMP continues to evolve its services and maintain its highly responsive relationships with Match participants, we will consider what role AI might play in the Match,” Curtin told Medscape Medical News. She declined to offer further details.
NRMP recently debunked a fake memo that circulated on social media advising program directors about “integrity screening” applicants, particularly international medical graduates (IMGs), who may have used AI to generate or enhance their applications.
“The NRMP has not taken any stance on the use of AI-generated content or ‘integrity screening,’ nor do we provide guidance to programs on screening candidates in the residency application process,” the agency responded on its website to the phony statement attributed to it.
NRMP added that it tries to ensure that IMG applicants have “equitable access to the Match and are represented in our data and on our Board of Directors.”
The issue plays into a larger debate over the benefits and limitations of using AI in the process, including how it screens IMGs, whose education and transcripts differ from American standards.
Skeptics also point to AI’s impersonal nature and risks for error, biases that exist in a technology programmed by humans, and the dangers of using computers to make decisions without human oversight.
Other education leaders cite AI’s ability to simplify the application and review process.
“In reality, program directors do not have the time to thoroughly review all of the applications they receive,” Bryan Carmody, MD, who regularly blogs about medical education, told Medscape Medical News. “To work through the pile, they often rely on readily available but imperfect data points, like where the applicant went to medical school or their [test] scores.”
Who Is Using AI for Matching?
Some program directors may be using AI to help screen applicants through an arrangement that began in 2023 between software developer Thalamus and the Electronic Residency Application Service (ERAS), which students use to submit their applications. ERAS is managed by the Association of American Medical Colleges (AAMC).
Thalamus developed an AI tool that uses keyword searches to help program directors identify whether students have connections to an area that increase their likelihood of wanting to match there, said Jason Reminick, MD, CEO and founder of the software platform. Location is a major factor for US applicants when ranking residency programs, NRMP revealed in a study of last year's residency application cycle.
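In its simplest form, this kind of keyword screening can be sketched as below. This is a hypothetical illustration, not Thalamus's actual implementation; the function name and inputs are assumptions for the example:

```python
# Hypothetical illustration of keyword-based geographic screening;
# not Thalamus's actual implementation.
import re

def geographic_signals(application_text, region_keywords):
    """Return the region keywords that appear as whole words in an
    application's free text (case-insensitive)."""
    text = application_text.lower()
    return sorted({kw for kw in region_keywords
                   if re.search(r"\b" + re.escape(kw.lower()) + r"\b", text)})
```

A real system would go further, e.g., matching institutions and hometowns pulled from structured application fields rather than raw text.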
This application season, Thalamus released an AI-assisted software that aggregates medical school grades from transcripts with grade distributions for comparison within and across medical schools. The results make it easier for program directors to decide which residency candidates to interview. While residency and fellowship programs currently pay for the AI software, it will become free in July for programs participating in ERAS through the AAMC arrangement, so more directors are expected to use it to review applicants.
This is the second match cycle in which the AAMC has asked students applying through ERAS to certify that, even if they use AI for brainstorming, proofreading, or editing (which is considered acceptable), all writing, including the personal statement, represents their own work and accurately reflects their experiences.
Even before AI became so ubiquitous, the AAMC required students to sign a similar certification about seeking help with their application from a mentor, consultant, advisor, or parent, said Dana Dunleavy, AAMC senior director of Admissions and Selection Research and Development.
AAMC Guidance
When AI became widely available to create content several years ago, program directors and admission teams began asking the AAMC whether the use of AI in creating application content should be banned, Dunleavy said. At that time, a small group of medical school admissions officers and residency program directors weighed how likely students were to use AI, its pros and cons, and educators' ability to accurately detect whether applicants had used the software.
Applicants wanted to know if AI was being used in the selection process and what was allowed in their own use of AI.
This past year, the AAMC also released its principles for responsible AI use in medical school and residency selection. The principles guide program decision-makers in designing and using AI-based selection systems to protect against biases, align with their objectives, and ensure data privacy.
The guidelines recognize AI as a tool for identifying patterns and improving selection decisions by streamlining operations, standardizing screening, and promoting equity. For example, AI can help predict applicant performance or prioritize applications for review.
“While using AI to predict who to interview is valuable, we see real power in using AI to identify applicants who want to be in a program; who are likely to thrive in a program and perform well; who are likely not to leave; and who will practice in the community.”
But the AAMC also cautions that selection experts still need to offer oversight. “Any use of AI should be balanced with human judgment, insights, and ethical standards. What’s more, significant concerns regarding privacy, fairness, transparency, and validity of AI tools remain. It is critical that AI-driven decision-making tools be subjected to the same scrutiny applied to traditional selection methods.”
The main reason for programs to use AI is to help screen applicants, Dunleavy added. “Many receive an extremely large volume of applicants and it’s not feasible to review them all.” She also noted that many faculty members review applications in their spare time. “AI increases efficiency and improves standardization, evaluating candidates with the same criteria regardless of the person conducting the review.”
Advising Health Systems
A few academic medical centers have come up with their own guidelines, such as the University of Washington School of Medicine, Seattle. When members of the medical school and graduate medical education (GME) programs began creating their guidelines for residency and fellowship applications, the AAMC hadn’t released its principles for responsible AI use.
“Students were using it [AI], and we didn’t want to hide from it,” said Hadar Duman, director of accreditation for GME at UW Medicine, Seattle, who helped create the guidelines. “We wanted to give our programs the message that it’s OK for students to use AI.”
“As you work on your materials, it’s essential to ensure that AI enhances, rather than replaces, your authentic voice and experiences,” their document states. “These guidelines emphasize the importance of data privacy, avoiding plagiarism, and adhering to application and match system policies like NRMP and ERAS, while also encouraging personal growth and readiness to discuss the role of AI in your application.”
Some application systems and programs may require students to disclose their use of AI, as explained in the guidelines. “Recognize that some faculty may have biases against AI. Being transparent about how AI has been used can help mitigate biases and demonstrate responsible use of technology.”
Thalamus frequently checks the accuracy and reliability of its AI system’s conclusions through manual data reviews and trains the system to improve and prevent biases, Reminick said. Thalamus also uses data and analytics to monitor the applicant choices programs make.
Part of the challenge for faculty and program directors is assessing candidates from medical schools with different grading scales, categories, and distributions, he said. For instance, some schools may grade using pass-fail, and others use numeric values. “There’s a lot of variability in how applicants are evaluated. We try to use technology to level the playing field.”
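One generic way to put grades from different scales on comparable footing is to convert each grade to a percentile within its own school's distribution. This is a sketch of that common normalization idea, not a description of Thalamus's actual method:

```python
# Sketch of a generic normalization approach: express a grade as a
# percentile within its own school's grade distribution, so candidates
# from schools with different scales can be compared. This is an
# illustrative technique, not Thalamus's actual method.
from bisect import bisect_right

def percentile_within_school(grade, school_distribution):
    """Percent of grades in the school's distribution at or below `grade`."""
    dist = sorted(school_distribution)
    return 100.0 * bisect_right(dist, grade) / len(dist)
```

Under this scheme, a 3.6 at a school where most students earn 3.8 ranks lower than a 3.6 at a school where the typical grade is 3.2; pass-fail schools would still need a separate treatment, which is part of why the underlying problem is hard.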
In the future, Reminick envisions AI evolving enough to help program directors feel confident letting the computer aggregator identify applicants for further review, make interview selections, and potentially build their rank lists with less human oversight. Until then, most program directors will still closely monitor and review AI’s conclusions.
https://www.medscape.com/viewarticle/future-residency-ai-reshaping-match-2025a10006go