Radiologists using generative AI completed reports 15% faster without reducing diagnostic accuracy, while the model's prioritization system flagged clinically significant pneumothoraces with 99.9% specificity.
In the prospective cohort study, researchers evaluated the integration of the generative AI model into radiograph reporting at a tertiary academic health system comprising 12 hospitals.
The study included 23,960 radiographs: 11,980 interpreted with AI assistance and 11,980 matched radiographs interpreted without model use. Documentation time was significantly reduced with model assistance: the mean was 160 seconds for model-assisted interpretations compared with 189 seconds for traditional reports, a 15.5% improvement in documentation efficiency. This reduction remained consistent across chest and nonchest radiographs.
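As a rough check on these figures, the sketch below recomputes the relative reduction from the rounded mean times reported above; it lands near 15%, with the study's 15.5% presumably reflecting unrounded or adjusted estimates. The aggregate-hours figure is a back-of-envelope illustration, not a study result.

```python
# Back-of-envelope check of the reported efficiency gain, using the
# rounded mean documentation times quoted above.
assisted_s = 160      # mean documentation time with model assistance (seconds)
traditional_s = 189   # mean documentation time for traditional reports (seconds)
n_assisted = 11_980   # model-assisted radiographs in the cohort

relative_reduction = (traditional_s - assisted_s) / traditional_s
print(f"Relative reduction: {relative_reduction:.1%}")  # ~15.3% from rounded means

# Aggregate radiologist time saved across the assisted arm (rough estimate only).
hours_saved = (traditional_s - assisted_s) * n_assisted / 3600
print(f"Approx. time saved: {hours_saved:.0f} hours")   # ~97 hours
```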
Peer review of 800 randomly sampled studies demonstrated no meaningful difference in clinical accuracy or textual quality between AI-assisted and non–AI-assisted reports. Addenda rates were nearly equivalent—0.14% for model-assisted reports and 0.13% for premodel reports—suggesting preserved report integrity.
The study, led by Jonathan Huang of the Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, and colleagues, also assessed the AI model’s prioritization system, which screened 97,651 radiographs during a real-time shadow deployment. Among 33 radiographs containing clinically significant, unexpected pneumothoraces that required clinical team notification, 24 were correctly flagged by the system, yielding a sensitivity of 72.7% and specificity of 99.9%. Priority flags were generated within a median of 24 seconds after study completion, notably faster than the median radiologist notification time of 24.5 minutes.
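For readers less familiar with screening metrics, the short sketch below shows how the sensitivity and specificity figures follow from the reported counts. The article does not give the system's false-positive count, so that value is a hypothetical placeholder used only to illustrate the specificity formula.

```python
# Deriving the screening metrics from the counts reported above.
flagged_pneumothoraces = 24        # true positives (reported)
total_pneumothoraces = 33          # clinically significant cases (reported)
total_screened = 97_651            # radiographs in the shadow deployment (reported)

negatives = total_screened - total_pneumothoraces
false_positives_hypothetical = 100  # placeholder; NOT a study figure

sensitivity = flagged_pneumothoraces / total_pneumothoraces
specificity = (negatives - false_positives_hypothetical) / negatives

print(f"Sensitivity: {sensitivity:.1%}")  # 72.7%, matching the report
print(f"Specificity: {specificity:.1%}")  # ~99.9% under the placeholder FP count
```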
Disclosures can be found in the published study.
Source: JAMA Network Open