Please use this identifier to cite or link to this item: https://repository.monashhealth.org/monashhealthjspui/handle/1/52610
Title: Comparing GPT and Claude visual language models in radiology
Authors: Carrion D.; Nguyen C.; Badawy M.K.
Monash Health Department(s): Radiology
Institution: (Carrion & Badawy) Imaging, Monash Health, Clayton, Victoria, Australia
(Nguyen) Department of Medical Imaging and Radiation Sciences, Monash University, Clayton, Victoria, Australia
Copyright year: 2024
Abstract: Vision Language Models (VLMs) are emerging tools in radiology, with the potential to aid clinical workflows by identifying anatomical regions and imaging modalities from complex datasets. Recent advancements in models like GPT and Claude show promise for improving diagnostic efficiency, particularly in recognising anatomical structures, detecting fractures, and classifying imaging modalities. However, previous studies highlight limitations in the reliability and diagnostic accuracy of these models, especially in nuanced pathology identification. This study seeks to address these gaps by assessing the proficiency of popular publicly available GPT and Claude models in key diagnostic tasks. Improved model performance has the potential to enhance radiology workflows, optimise resource utilisation, and support clinical decision-making, but further evaluation is required to establish their clinical impact.
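Note: The poster evaluates publicly available GPT and Claude vision models on tasks such as modality classification. As a minimal illustrative sketch only (not the authors' protocol), the Python example below shows how an image might be submitted to a Claude vision model via the Anthropic API for modality and anatomy identification; the model name, prompt wording, and file path are assumptions.

# Minimal sketch (illustrative only): sending a radiology image to a Claude
# vision model and asking it to identify the imaging modality and anatomy.
# Model name, prompt, and file path are assumptions, not the study's protocol.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load and base64-encode the image so it can be sent alongside the text prompt.
with open("example_radiograph.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model; substitute as needed
    max_tokens=100,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image_b64,
                    },
                },
                {
                    "type": "text",
                    "text": "Identify the imaging modality (e.g. X-ray, CT, MRI, "
                            "ultrasound) and the anatomical region shown.",
                },
            ],
        }
    ],
)

# An evaluation of this kind would compare such responses against ground truth.
print(response.content[0].text)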
URI: https://repository.monashhealth.org/monashhealthjspui/handle/1/52610
Type: Conference poster
Subjects: artifical intelligence
radiology
Appears in Collections: Conference Posters

Files in This Item:
File: Daniel Carrion - Comparing GPT and Claude.pdf (613.85 kB, Adobe PDF)

Page view(s): 52 (checked on Nov 21, 2024)
Download(s): 26 (checked on Nov 21, 2024)

Items in Monash Health Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.