In a forthcoming JAMA Viewpoint, physician-lawyers Daniel Aaron and Christopher Robertson examine Utah’s decision to allow an artificial intelligence (AI) system to prescribe medications without direct physician involvement—an approach they describe as unprecedented in U.S. clinical practice.
Through its regulatory sandbox, Utah granted the company Doctronic authority to prescribe nearly 200 medications, including corticosteroids, antidepressants, anticlotting agents, and hormones. The program is expected to begin with prescription renewals but is designed to conduct a “comprehensive medical assessment” that mirrors the clinical decision-making process of a physician.
Aaron and Robertson do not argue against AI prescribing outright. They note that such systems could help address persistent shortages in primary care and improve medication access and adherence—longstanding challenges associated with substantial health and economic costs.
Their focus is instead on what remains unestablished. The authors highlight the absence of publicly available, peer-reviewed evidence demonstrating safety and effectiveness in the intended use setting, as well as the lack of a clearly defined liability framework. They also note that the system has not undergone FDA review: although AI prescribing systems likely meet the statutory definition of a medical device, the company did not seek agency review.
The company has cited internal data showing 99.2% concordance with clinicians’ treatment decisions across approximately 500 urgent care telehealth cases. The authors emphasize, however, that this evaluation occurred in a clinical context different from the Utah deployment, was unblinded, relied on company-affiliated investigators, and has not been published in a peer-reviewed journal.
The Viewpoint also raises questions about how Utah’s approach aligns with existing federal law. Under longstanding statutes, many prescription drugs may be dispensed only with the authorization of a licensed practitioner. The authors note that, absent such involvement, drugs could be considered misbranded under federal law, though how these provisions will apply in this context remains unclear.
Questions of liability are similarly unresolved. While Utah’s agreement does not eliminate the possibility of tort claims, courts have yet to determine whether AI systems would be held to the same standard of care as physicians. Product liability claims may also be difficult to pursue when clinical decision-making processes are not transparent.
At the federal level, the likelihood of near-term intervention appears limited in the current policy environment, which has emphasized reducing regulatory barriers to AI innovation.
More broadly, Aaron and Robertson frame the development as an example of the Collingridge dilemma: the challenge of regulating emerging technologies early, when their risks are uncertain, versus later, when they may be too entrenched to control effectively. As they write, “If AI prescribing becomes the industry standard before safety and liability frameworks are established, the power problem may render future regulation infeasible.”
Their recommendation is measured: physicians, health systems, and other stakeholders should require robust evidence of safety and effectiveness—ideally including clinical trial data—before incorporating AI prescribing into routine care.
Disclosures: Dr. Aaron reported grants from Arnold Ventures during the conduct of this work. No other disclosures were reported.
Source: JAMA