Artificial intelligence (AI) is moving quickly from promise to practice in diabetes care, offering new tools for diagnosis, monitoring, and treatment while raising fresh concerns about safety, ethics, and equity, according to correspondence published in The Lancet Diabetes & Endocrinology.
The authors, led by Stefan R. Bornstein, MD, director of the Centre for Internal Medicine and the Medical Clinic and Policlinic III at the University Hospital Carl Gustav Carus Dresden, Germany, noted that diabetes is already one of the most advanced clinical areas for AI use, citing insulin pumps, continuous glucose monitoring systems, and closed-loop technologies that adjust insulin delivery automatically. These tools have improved glycemic control and reduced daily burden for many patients. But the next phase, they note, could be even more transformative.
AI systems trained on large data sets could soon help identify people at risk for diabetes earlier and more accurately, including emerging subtypes that are difficult to detect with current methods. Simple, low-cost biomarkers combined with AI analysis may allow early intervention at a population level, particularly in low-resource settings where specialist care is limited. From a public health perspective, the researchers describe this approach as potentially “egalitarian,” with benefits for both individuals and health systems.
To help guide this expansion, the International Diabetes Federation (IDF) has formed an international AI working group to develop strategies and standards for responsible use. The authors call for coordinated global action to move beyond pilot projects and into real-world implementation. At the same time, they warn that AI introduces risks that clinicians and regulators must address. Many diabetes technologies remain relatively new, and long-term effects on patients are not fully understood. The growing ability of AI systems “to influence human biology and lifespan might have implications we are not yet able to grasp,” the researchers noted.
One concern is over-reliance on automated decision-making, they say, which could undermine patients’ independence, confidence, and self-management skills. Another is surveillance. Because AI systems are highly connected, they could expose sensitive health data to governments, insurers, or employers, raising questions about privacy and autonomy.
The authors emphasized that AI systems do not have built-in ethics or social awareness unless explicitly designed to do so. To address this, the IDF advocates for a “human-in-the-loop” approach, ensuring clinicians retain final responsibility for care decisions. “While it is time to act, it is equally important to exercise caution and provide the appropriate guidance and leadership,” they wrote.
The commentary authors also call for stronger regulatory guardrails. AI tools should be classified and monitored as medical interventions, they said, with standards that extend beyond accuracy to include transparency, fairness, explainability, and bias mitigation. Privacy-preserving methods such as federated learning should be encouraged.
Education is another priority. The authors noted that diabetes professionals need training in AI literacy, ethics, and digital health so they can interpret algorithmic outputs, recognize limitations, and counsel patients appropriately.
Finally, they emphasized equity. “The true potential of AI will be realized only when it is available, accessible, and affordable for all patients—irrespective of geography, income, or digital literacy.”
The commentary authors had no disclosures to report.