In-depth analysis of ChatGPT's performance based on specific signaling words and phrases in the question stem of 2377 USMLE step 1 style questions

Knoedler, Leonard and Knoedler, Samuel and Hoch, Cosima C. and Prantl, Lukas and Frank, Konstantin and Soiderer, Laura and Cotofana, Sebastian and Dorafshar, Amir H. and Schenck, Thilo and Vollbach, Felix and Sofo, Giuseppe and Alfertshofer, Michael (2024) In-depth analysis of ChatGPT's performance based on specific signaling words and phrases in the question stem of 2377 USMLE step 1 style questions. Scientific Reports, 14(1): 13553. ISSN 2045-2322

Full text not available from this repository.

Abstract

ChatGPT has garnered attention as a multifaceted AI chatbot with potential applications in medicine. Despite intriguing preliminary findings in areas such as clinical management and patient education, there remains a substantial knowledge gap in comprehensively understanding the opportunities and limitations of ChatGPT's capabilities, especially in medical test-taking and education. A total of n = 2,729 USMLE Step 1 practice questions were extracted from the Amboss question bank. After excluding 352 image-based questions, the remaining 2,377 text-based questions were categorized and entered manually into ChatGPT, and its responses were recorded. ChatGPT's overall performance was analyzed by question difficulty, category, and content with regard to specific signal words and phrases. ChatGPT achieved an overall accuracy rate of 55.8% across the n = 2,377 USMLE Step 1 preparation questions obtained from the Amboss online question bank. It demonstrated a significant inverse correlation between question difficulty and performance (r_s = -0.306; p < 0.001), maintaining accuracy comparable to the human user peer group across different levels of question difficulty. Notably, ChatGPT performed better on serology-related questions (61.1% vs. 53.8%; p = 0.005) but struggled with ECG-related content (42.9% vs. 55.6%; p = 0.021). ChatGPT performed significantly worse on pathophysiology-related question stems (signal phrase: "what is the most likely/probable cause"). Overall, ChatGPT performed consistently across question categories and difficulty levels. These findings emphasize the need for further investigations to explore the potential and limitations of ChatGPT in medical examination and education.
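The abstract reports two kinds of statistics: a Spearman rank correlation between question difficulty and ChatGPT's correctness (r_s = -0.306; p < 0.001) and pairwise accuracy comparisons for individual question categories. The paper itself does not publish analysis code; the following Python sketch only illustrates how such statistics are commonly computed. The column layout, the synthetic data, and the choice of a chi-squared test for the category comparison are assumptions for illustration, not the authors' actual pipeline.

    # Minimal sketch (not the authors' code) of the two statistics reported
    # in the abstract, using synthetic per-question records.
    import numpy as np
    from scipy.stats import spearmanr, chi2_contingency

    rng = np.random.default_rng(0)

    # Hypothetical per-question records: Amboss difficulty (1 = easiest,
    # 5 = hardest) and whether ChatGPT answered correctly (True/False).
    difficulty = rng.integers(1, 6, size=2377)
    correct = rng.random(2377) < (0.75 - 0.08 * difficulty)  # harder -> worse

    # Spearman rank correlation between difficulty and correctness
    # (the abstract reports r_s = -0.306, p < 0.001).
    r_s, p = spearmanr(difficulty, correct)
    print(f"r_s = {r_s:.3f}, p = {p:.3g}")

    # Category comparison, e.g. serology vs. all other questions
    # (the abstract reports 61.1% vs. 53.8%; p = 0.005), here done as a
    # 2x2 chi-squared test of correctness by category membership.
    is_serology = rng.random(2377) < 0.05  # hypothetical category flag
    table = np.array([
        [np.sum(correct & is_serology), np.sum(~correct & is_serology)],
        [np.sum(correct & ~is_serology), np.sum(~correct & ~is_serology)],
    ])
    chi2, p_cat, _, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p_cat:.3g}")

With real per-question data in place of the synthetic arrays, the same two calls would reproduce the correlation and category-level significance tests described above.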

Item Type: Article
Uncontrolled Keywords: ChatGPT; USMLE; USMLE Step 1; OpenAI; Medical Education; Clinical Decision-Making
Subjects: 600 Technology > 610 Medical sciences Medicine
Divisions: Medicine > Zentren des Universitätsklinikums Regensburg > Zentrum für Plastische-, Hand- und Wiederherstellungschirurgie
Depositing User: Dr. Gernot Deinzer
Date Deposited: 15 Jan 2026 07:35
Last Modified: 15 Jan 2026 07:35
URI: https://pred.uni-regensburg.de/id/eprint/65103
