ChatGPT Clears US Medical Licensing Exam: Study
Los Angeles, Feb 11 (PTI) ChatGPT could score at or near the passing threshold of approximately 60 per cent on the United States Medical Licensing Exam (USMLE), producing responses that were internally coherent and contained frequent insights, according to a new study.

Tiffany Kung and colleagues at AnsibleHealth, California, US, tested ChatGPT's performance on the USMLE, a highly standardized and regulated series of three exams (Step 1, Step 2CK, and Step 3) required for medical licensure in the US, the study said.


Taken by medical students and physicians-in-training, the USMLE assesses knowledge spanning most medical disciplines, ranging from biochemistry to diagnostic reasoning to bioethics.

After screening out image-based questions, the authors tested the software on 350 of the 376 publicly available questions from the June 2022 USMLE release, the study said.


The authors found that, after indeterminate responses were removed, ChatGPT scored between 52.4 per cent and 75 per cent across the three USMLE exams, according to the study published in the journal PLOS Digital Health.

The passing threshold each year is approximately 60 per cent.

ChatGPT is a new artificial intelligence (AI) system, known as a large language model (LLM), designed to generate human-like writing by predicting upcoming word sequences.

Unlike most chatbots, ChatGPT cannot search the internet, the study said.

Instead, it generates text using word relationships predicted by its internal processes, the study said.

According to the study, ChatGPT also demonstrated 94.6 per cent concordance across all its responses and produced at least one significant insight (something new, non-obvious, and clinically valid) for 88.9 per cent of its responses.

ChatGPT also exceeded the performance of PubMedGPT, a counterpart model trained exclusively on biomedical domain literature, which scored 50.8 per cent on an older dataset of USMLE-style questions, the study said.

While the relatively small input size restricted the depth and range of the analyses, the authors noted that their findings offered a glimpse of ChatGPT's potential to enhance medical education and, eventually, clinical practice.

For example, they added, clinicians at AnsibleHealth already use ChatGPT to rewrite jargon-heavy reports for easier patient comprehension.

"Reaching the passing score for this notoriously difficult expert exam, and doing so without any human reinforcement, marks a notable milestone in clinical AI maturation," said the authors.

Kung added that ChatGPT's role in this research went beyond being the study subject.

"ChatGPT contributed substantially to the writing of [our] manuscript... We interacted with ChatGPT much like a colleague, asking it to synthesize, simplify, and offer counterpoints to drafts in progress... All of the co-authors valued ChatGPT's input."

