According to a recent study cited in a JAMA Health Forum editorial, ChatGPT-4 demonstrated superior clinical reasoning compared with participating physicians, yet giving physicians access to the tool did not significantly improve their clinical decision-making.
Scott Gottlieb, MD, former FDA commissioner and current senior fellow at the American Enterprise Institute in Washington, D.C., discussed the potential consequences of recent U.S. Food and Drug Administration (FDA) policy changes for the integration of artificial intelligence (AI) into clinical practice. The editorial emphasized that evolving FDA regulations introduce uncertainty around how AI is classified, which could affect its integration into electronic medical records (EMRs) and its role in clinical decision support.
Published in JAMA Health Forum, the editorial reviews the regulatory landscape following the 21st Century Cures Act of 2016, which exempted clinical decision support software that assists physicians without overriding their clinical judgment. From 2017 to 2019, FDA policies reinforced these exemptions, permitting AI tools that offered nonmandatory recommendations to remain outside the agency's regulatory scope.
However, in September 2022, the FDA revised its guidance, expanding the classification of medical devices to include AI tools capable of integrating diverse clinical data or influencing time-sensitive decisions. The new framework addresses concerns about "automation bias" and "time criticality": situations in which clinicians may not have the opportunity to properly evaluate AI recommendations. Under these guidelines, AI tools that aggregate information from multiple sources, such as laboratory results, imaging studies, and physician notes, would be classified as medical devices, requiring premarket review.
The editorial warns that EMR developers might respond to these regulatory requirements by deliberately limiting their software's capabilities to avoid costly and uncertain regulation. This could prevent healthcare providers from accessing potentially beneficial AI tools that would enhance the productivity and safety of medical care.
A study cited in the editorial evaluated ChatGPT-4's clinical reasoning abilities and found that while the AI outperformed participating physicians, its implementation as a standalone tool, rather than one integrated into EMRs, may have limited its practical impact on clinical decisions. The study suggested that AI tools embedded directly in physician workflows through EMRs could be more usable than standalone platforms.
Dr. Gottlieb proposes returning to the original intent of the 21st Century Cures Act, suggesting that AI tools designed to augment the information available to physicians, without making autonomous diagnoses or treatment decisions, should not require premarket review. Instead, he advocates postmarket validation to verify that these systems enhance the quality of medical decision-making while maintaining patient safety.
Full disclosures can be found in the published editorial.