A patient arrives with fatigue, shortness of breath, swollen joints, and unexplained weight loss. Traditionally, that means a cardiology consult, a rheumatology consult, and an endocrinology consult — each adding time, each adding friction, none necessarily talking to the others. The system is working as designed, and that is precisely the problem of fragmented care.
In a viewpoint published in BMJ Evidence-Based Medicine, Jiajie Zhang of UTHealth Houston argues that AI offers something the modern healthcare system structurally struggles to achieve: integration across specialties, with systems acting as cognitive collaborators. He describes the resulting practitioner as an “AI-augmented generalist,” operating within a distributed cognition framework that combines clinical expertise with AI’s breadth across subspecialties.
Zhang describes this fragmentation as a consequence of medicine’s evolution as a knowledge enterprise. From the Industrial Revolution’s standardized curricula, through the 20th century’s advances in microbiology and imaging, to the 21st century’s genomics and electronic health records, the volume and complexity of medical knowledge expanded beyond what any individual could master. Specialization was the rational response, but it came with trade-offs: delays, communication gaps, and difficulty integrating care across systems.
Reframing the Role of the Generalist
Zhang is not arguing that AI replaces specialists. The subtler claim is that it could reshape how and when different types of expertise are involved in patient care. His argument suggests that in complex, multisystem conditions, the challenge is often not just expertise, but coordination across domains.
If an AI-assisted generalist can help identify connections between autoimmune inflammation, cardiac dysfunction, and metabolic abnormalities earlier in the process, that could reduce reliance on multiple sequential referrals. This doesn’t eliminate the role of subspecialists – it potentially makes triage and escalation more precise.
“Unlike generalists from previous eras, who relied solely on their memory and limited resources, AI-augmented generalists operate within a distributed cognition framework that combines human expertise with AI's vast knowledge base and reasoning abilities,” wrote Jiajie Zhang of the McWilliams School of Biomedical Informatics, UTHealth Houston.
Zhang also highlights well-recognized limitations and risks: bias from non-representative training data, limited transparency in reasoning, and unresolved questions about liability when human–AI decisions go wrong. Regulatory frameworks such as HIPAA, the GDPR, and the European AI Act add further complexity that most current medical training does not yet address.
Zhang presents these capabilities as emerging rather than fully established in clinical practice. The promise is clear, but so are the constraints. The open question isn’t simply whether AI can perform under ideal conditions – it’s whether clinicians will be trained to use these systems effectively, and just as importantly, to recognize when not to rely on them.
Clinical Takeaway
For institutions designing residency curricula, this represents a structural challenge, not a simple technological addition. The competencies outlined in the viewpoint – AI system proficiency, collaborative problem-solving, contextual adaptation, and ethical reasoning – are not skills that emerge from a single elective. They require rethinking what it means to train a physician in a system where knowledge is no longer bounded by individual cognition.
If AI supports a shift toward more integrated care, it won’t come from the technology alone. It will depend on whether medical education evolves to produce clinicians who can work alongside these systems — critically, responsibly, and with a clear understanding of both their power and their limits.
The author declared no competing interests.
Source: BMJ Evidence-Based Medicine