Since models like ChatGPT entered the public arena, medical teaching isn’t the same. How should you train future doctors when tools can summarize articles, draft clinical notes and even simulate patients?
Microsoft Research gathered students and clinicians to discuss exactly that: how to integrate these tools without sacrificing clinical judgment or professional responsibility.
What Microsoft Research is proposing
In a dedicated episode, Peter Lee talks with Morgan Cheatham and Daniel Chen about real experiences from students and doctors using generative AI in clinical training. The key points: adoption is growing fast among younger learners, there are concrete use cases (note writing, replying to patient messages, helping review evidence) and there’s a real worry about what skills you might lose if AI does too much of the student’s work. (microsoft.com)
"If AI writes the note for me, what am I learning?" — that question runs through the conversation and sums up the daily dilemma.
Concrete uses and why they matter
- Practice with synthetic patients: LLMs and multimodal platforms let you create repeatable clinical scenarios to practice communication and reasoning without real risk. This opens access to practice that used to require expensive labs. Studies and projects show these simulations improve deliberate repetition and immediate feedback. (arxiv.org)
- Personalized tutoring and material generation: AI can create study paths, exam-style questions and summaries tailored to a student’s level, reducing administrative load for faculty. Research indicates these tools already perform well on evaluative tasks in some contexts, though results vary. (medicalxpress.com, mededu.jmir.org)
- Support in clinical practice: early deployments integrate generated responses into clinical records to answer patient messages or draft note templates, which promises time savings but raises questions about verification and responsibility. (microsoft.com)
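To make the synthetic-patient idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: the vignette, the class, and the keyword matching are invented for this example, and in a real deployment the patient's replies would come from an LLM prompted with a hidden case vignette rather than from a scripted lookup. What it shows is the pedagogical loop the article describes: repeatable practice plus immediate feedback on how much of the history the student actually explored.

```python
# Illustrative sketch only: scripted answers stand in for an LLM.
# In practice, reply() would call a chat model prompted with the vignette.

CASE_VIGNETTE = {
    "chest pain": "It started two hours ago, feels like pressure, spreads to my left arm.",
    "medications": "I take lisinopril for blood pressure, nothing else.",
    "smoking": "I smoked a pack a day for twenty years; I quit five years ago.",
}

class SyntheticPatient:
    """Answers student questions from a hidden vignette, like an OSCE actor."""

    def __init__(self, vignette):
        self.vignette = vignette
        self.topics_covered = set()  # lets a tutor score history-taking coverage

    def reply(self, question):
        q = question.lower()
        for topic, answer in self.vignette.items():
            if topic in q:
                self.topics_covered.add(topic)
                return answer
        return "I'm not sure what you mean, doctor."

    def coverage(self):
        # Immediate feedback: fraction of key topics the student explored.
        return len(self.topics_covered) / len(self.vignette)

patient = SyntheticPatient(CASE_VIGNETTE)
print(patient.reply("Can you describe your chest pain?"))
print(patient.reply("Do you take any medications?"))
print(f"History coverage: {patient.coverage():.0%}")
```

The design choice worth noting is the coverage score: because the scenario is software, every drill can end with objective feedback, which is exactly the "deliberate repetition and immediate feedback" the studies cited above describe.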
Risks and limits that shouldn’t be ignored
This is not just a technical discussion: it’s ethical and educational. Some major risks:
- Accuracy and truthfulness: models can generate convincing but incorrect information. Teaching students to verify claims and ask for sources remains essential. (mededu.jmir.org)
- Loss of clinical skills: constantly drafting notes or solving cases with AI help can blunt the development of clinical reasoning if there isn’t pedagogical control. (magazine.hms.harvard.edu)
- Transparency and accountability: who’s responsible if an AI-assisted recommendation fails? Students need training on the tool’s limits and on how to document its use in decision-making. (microsoft.com)
What should change in curricula?
It’s not about banning, but about teaching how to live with the tool. Some practical proposals:
- Integrate AI literacy early on: not to turn everyone into engineers, but so you can evaluate outputs, spot biases and use RAG (retrieval-augmented generation) or other checks when needed. (mededu.jmir.org)
- Design assessments that measure reasoning, not just correct answers: open questions, simulated clinical cases and oral defense of decisions keep the focus on clinical judgment.
- Use AI to expand practice (simulators, automated feedback) but pair it with human mentorship that corrects, observes and explains.
- Clear policies on documentation: record when AI was used, which parts were assisted and how the information was verified.
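The RAG pattern mentioned above can be sketched in a few lines. This is a toy, not a real system: the three-entry "corpus" and the word-overlap scoring are invented for illustration, and a production setup would use a vector index and pass the retrieved passages to an LLM with instructions to cite them. The point of the sketch is the habit the curriculum should teach: answers come attached to a retrievable source, so the student can verify the claim instead of trusting the model's memory.

```python
# Toy sketch of the RAG pattern: retrieve evidence, then ground the answer.
# Corpus and scoring are illustrative; a real system would use a vector
# index and feed the retrieved passages to an LLM that must cite them.

CORPUS = {
    "guideline-htn": "First-line treatment for hypertension includes thiazide diuretics.",
    "guideline-dm": "Metformin is first-line therapy for type 2 diabetes.",
    "guideline-asthma": "Inhaled corticosteroids are the cornerstone of asthma control.",
}

def retrieve(query, corpus, k=1):
    """Rank passages by naive word overlap with the query (stand-in for embeddings)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_sources(query):
    # Attach the source id so the claim can be checked against the passage.
    doc_id, passage = retrieve(query, CORPUS)[0]
    return f"{passage} [source: {doc_id}]"

print(answer_with_sources("What is first-line therapy for type 2 diabetes?"))
```

Even this toy makes the pedagogical point: an unverifiable answer and a sourced answer are different artifacts, and students can be graded on producing (and checking) the second kind.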
What about equity and access?
One point raised in the conversation is the gap between well-resourced centers and others. The promise is to democratize access to simulators and quality material, but if adoption stays limited to big hospitals, the educational gap could widen. That’s why strategy must include scalable rollout mechanisms and regulatory criteria that promote fair access. (microsoft.com, magazine.hms.harvard.edu)
A closing reflection
Generative AI offers powerful tools to train more efficient, better-prepared doctors, but it doesn’t replace what makes a good clinician human: judgment, empathy and responsibility. Schools aren’t being asked whether to use AI, but how to use it to enhance training without eroding the profession. What’s the recipe? Practical AI education, assessments centered on reasoning and constant human supervision. That way, technology stops being a shortcut and becomes an amplifier of learning.