Research & Innovation, Vitals Magazine Fall 2025

Dr. AI: Could Artificial Intelligence
Replace Clinicians? 

New tools based on AI technology have the potential to reshape how medicine is practiced—and the role of the provider.


Over the last few years, artificial intelligence (AI) has moved from a nascent futuristic concept to an everyday tool. We’ve seen an AI-generated video of a murder victim make a statement in court, police using AI chatbots to write crime reports, traffic lights made “smart” with AI, and AI-supported apps that tell you why your houseplant is dying.

As AI matures, its ability to process information like humans—but at warp speed—makes it an enticing way to supercharge endeavors that require synthesizing large amounts of information. This is particularly true for medicine and healthcare, which require clinicians to use their knowledge of the vast body of existing and ever-changing medical research to interpret patients’ symptoms.

“This is going to be the biggest wave to hit medicine,” says Asha Zimmerman, MD, a transplant surgeon at Dartmouth Health’s Dartmouth Hitchcock Medical Center (DHMC) and assistant professor of surgery at the Geisel School of Medicine at Dartmouth, who is working on his own applications for AI in medicine.

Already, three in five physicians report using AI in their practice, according to a 2024 American Medical Association survey. Physicians reported using AI to help them take visit notes, draft discharge summaries and care plans, and summarize medical research and standards of care.

But AI-driven technologies can increasingly take on more than administrative tasks. As large language models (think ChatGPT) are refined, they’re getting better at more complex tasks, inferring next steps unprompted, and interpreting human interactions. Could we one day see AI replacing clinicians on the frontlines of medicine?

“That’s the direction we’re heading,” says Saeed Hassanpour, PhD, professor of biomedical data science, computer science, and epidemiology at Geisel, and the director of the Center for Precision Health & Artificial Intelligence (CPHAI) at Dartmouth. Hassanpour is one of the staunchest advocates for deploying AI tools in healthcare, but even he says it’s unlikely that medicine will ever become an entirely digitized endeavor without a human in the loop.

What humans’ role will be in this AI-filled future remains to be seen, and leaders across Dartmouth Health and Geisel are positioning themselves at the forefront of that conversation. From investments like hiring machine learning experts to tackling AI’s greatest questions through initiatives like CPHAI, which launched in 2023, both institutions are focused on figuring out how to shape that future to leverage both the best of AI and the best of humanity to make medicine more effective, efficient, precise, and personalized.

What AI Offers to Medicine

AI has a long history at Dartmouth. In fact, the term “artificial intelligence” was coined in 1956 when researchers gathered at Dartmouth to figure out what it would take for machines to simulate human intelligence.

As a term, “AI” doesn’t refer to a specific technology to achieve that goal. Rather, it describes the concept of computing that emulates human thinking. One way to achieve this is through “machine learning,” in which a machine is trained to recognize patterns and make inferences from vast datasets without being explicitly programmed to do so. Machine learning caught the public’s attention a decade ago when the AI program AlphaGo beat a human champion at the board game Go.

This is similar to how doctors make diagnoses—identifying patterns in a patient’s symptoms and vital signs, and comparing those to patterns they learned in training, from the latest research, or over time from diagnosing hundreds of patients.

“It’s not very farfetched that, if these models train correctly on a large amount of data, on the body of current knowledge of medicine, they could mimic what your typical physician would do in the same situation,” Hassanpour says.

What Would a “Dr. AI” Look Like?

That’s not to say that AI will take over every task for a clinician. “When we talk about replacing somebody with a generalized AI, we are thinking about their job as if it were a simple task, a single task, that they do repeatedly. And that’s not what a clinician, or most any healthcare provider, does,” says Brandon Hill, MS, a machine learning specialist and cofounder of the Center for AI Research in Orthopaedics at DHMC.

Hassanpour agrees that there likely won’t be a “self-governing, independent AI model that works on its own.” Even so, future patients might not always interact directly with a human. This scenario could be realized with technology already at, or close to, our fingertips, though it might require what Hassanpour calls a “multiagent-based approach.”

“Currently most of these [AI] models can be looked at as tools,” he says, explaining that, so far, validated AI-driven technologies in medicine tend to be highly specialized, such as visual systems trained specifically to detect cancerous cells from images or language models that transcribe patient interactions and turn them into appointment notes. “They’re very narrow,” he says. “And that’s different from how medicine is being practiced: You narrow the domain until you arrive at a certain conclusion.”

In a multiagent approach, experts could assemble these narrowly focused AI models into a comprehensive dream team of sorts. “These agents can work together to make a diagnosis,” Hassanpour says, with a manager or coordinator directing and delegating tasks to them.

In this scenario, the physician becomes the supervisor, verifying the AI team’s conclusions rather than going through the process of making an inference “from scratch,” Hassanpour says. And given current technology, he adds, this could be built within a few years.
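The coordinator-and-specialists arrangement Hassanpour describes can be sketched in a few lines of code. In this illustration, the narrow “agents” are stubbed out as simple functions, and all the agent names, diagnoses, confidence scores, and the review threshold are invented for the sake of the example—real systems would plug in validated, specialized models.

```python
# A minimal sketch of a multiagent diagnostic pipeline: narrow
# "specialist" agents (stubbed as simple functions) report to a
# coordinator, which aggregates their findings and flags uncertain
# cases for the supervising physician. All names and numbers here
# are hypothetical.

from dataclasses import dataclass


@dataclass
class Finding:
    agent: str         # which specialist produced this finding
    diagnosis: str     # the diagnosis it suggests
    confidence: float  # 0.0 to 1.0


def imaging_agent(case: dict) -> Finding:
    # Stand-in for a vision model trained to read scans.
    score = 0.9 if case.get("lesion_on_scan") else 0.1
    return Finding("imaging", "melanoma", score)


def history_agent(case: dict) -> Finding:
    # Stand-in for a language model that parses the patient history.
    score = 0.7 if "new mole" in case.get("history", "") else 0.2
    return Finding("history", "melanoma", score)


def coordinator(case: dict) -> dict:
    # Delegates the case to each narrow agent, averages their
    # confidences, and flags low-confidence cases for human review.
    findings = [imaging_agent(case), history_agent(case)]
    avg = sum(f.confidence for f in findings) / len(findings)
    return {
        "suggested_diagnosis": findings[0].diagnosis,
        "confidence": avg,
        "needs_human_review": avg < 0.75,
    }


case = {"lesion_on_scan": True, "history": "patient reports a new mole"}
print(coordinator(case))
```

The key design point is the last field: the pipeline suggests, but a human verifies—matching Hassanpour’s picture of the physician as supervisor rather than first-pass diagnostician.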

Zimmerman has a similar vision and is already building an AI-powered platform to make it a reality, along with Thomas D’Angelo, a data analyst in the transplant department at DHMC. Called “Vox Cura,” the team’s platform is already in clinical trial planning stages after winning an award from the Dartmouth Innovation Accelerator for Digital Health last fall.

Vox Cura is a chat application trained to ask questions, just like a physician would, to gather information from a patient. It will then offer a likely diagnosis, along with a few other possibilities. The goal is to deploy this tool to rural and remote populations, in northern New England and around the world, as a way to bring health information to areas where there are no doctors, reducing barriers and costs. The app can’t prescribe treatment, but it can provide a starting point for a patient.

Hurdles and Hallucinations

One concern about using AI-driven tools to make life-or-death diagnoses and treatment plans, Zimmerman says, is that generative AI tools built on large language models have been known to “hallucinate”—perceive patterns that don’t exist and then spit out inaccurate information with confidence. When you ask ChatGPT why the sky is blue, the consequences of a wrong answer are probably low, but in medicine, they could be dire.

Can an AI Be Your Therapist?

In a landmark trial, an artificial intelligence (AI) therapy chatbot developed at Dartmouth, named Therabot and designed to deliver cognitive behavioral therapy (CBT), led to meaningful reductions in anxiety and depression—on par with in-person care.

“Our results are comparable to what we would see for people with access to gold-standard cognitive therapy with outpatient providers,” says Nicholas Jacobson, PhD, the study’s senior author and an associate professor of biomedical data science and psychiatry at the Geisel School of Medicine.

Now, Jacobson and colleagues at Dartmouth’s Center for Technology and Behavioral Health (CTBH) are building on that success with Evergreen, a first-of-its-kind, AI-driven mental health app created by and for Dartmouth students. Currently in development, Evergreen will combine cutting-edge behavioral sensing, tailored interventions, and personalized AI to help students build resilience and manage stress.

Evergreen and Therabot are part of Dartmouth’s broader pioneering approach to augmenting mental health care with digital tools. This work has earned Dartmouth a leading role in the AI Research Institute on Interaction for AI Assistants (ARIA) initiative, a new national center on AI and mental health backed by the National Science Foundation. ARIA aims to develop next-generation AI assistants that can interpret a person’s unique behavioral needs and provide feedback in real time.

No one’s suggesting AI will replace therapists, but as demand for mental health care surges, these digital tools could become an essential ally in helping people thrive, one conversation at a time.

To learn more about Evergreen, go to dartgo.org/Evergreen.App.  

Technology is improving, Zimmerman says, but hallucination is also one justification for the multiagent approach Hassanpour espouses: if one model hallucinates, the others can essentially outvote its conclusions.
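The outvoting idea amounts to simple majority voting across independent models. A minimal sketch, with invented model outputs:

```python
# Majority voting across independent models: a single hallucinated
# answer is outvoted as long as most models agree. The diagnoses
# below are invented for illustration.

from collections import Counter


def majority_vote(predictions: list[str]) -> tuple[str, float]:
    # Returns the most common prediction and the fraction of
    # models that agreed with it.
    counts = Counter(predictions)
    diagnosis, votes = counts.most_common(1)[0]
    return diagnosis, votes / len(predictions)


# Three independent models; one "hallucinates" a different answer.
predictions = ["strep throat", "strep throat", "mononucleosis"]
diagnosis, agreement = majority_vote(predictions)
print(diagnosis, round(agreement, 2))
```

In practice the vote would be weighted by each model’s validated accuracy, and a low agreement score—like the two-thirds here—is itself a useful signal to escalate the case to a human.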

Hallucinations aren’t the only glitch, however. Some AI-driven tools have learned to take shortcuts, which can introduce irrelevant information into the equation. In a study co-authored by Hill, Dartmouth Health researchers dug into the mechanism behind those shortcuts by asking AI to predict whether patients eat refried beans or drink beer simply by examining X-rays of their knees. The models performed shockingly well.

“A knee should have nothing to do with beer or beans,” says study senior author Peter Schilling, MD, MS, an orthopaedic surgeon at DHMC and an assistant professor of orthopaedics at Geisel. Instead, “it has some understanding of where the image is taken, and something about the averages of demographics within that area, so then it can leverage those little hints to draw conclusions.” Even if the hints are essentially meaningless.
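The shortcut Schilling describes—leaning on where an image was taken rather than what it shows—can be reproduced with a toy example. The dataset below is entirely invented: a “model” that learns nothing about knees, only the majority answer at each hospital site, still predicts well because site and drinking rates happen to be correlated.

```python
# Toy illustration of shortcut learning: this "model" never looks at
# the knee at all. It only learns which hospital site an X-ray came
# from, yet predicts beer drinking well because site and drinking
# rates are correlated in this invented dataset.

import random
from collections import defaultdict

random.seed(0)

# Invented data: site A patients drink beer ~80% of the time, site B ~20%.
data = [("A", random.random() < 0.8) for _ in range(500)] + \
       [("B", random.random() < 0.2) for _ in range(500)]

# "Training": record the majority label at each site.
counts = defaultdict(lambda: [0, 0])
for site, drinks in data:
    counts[site][drinks] += 1
majority = {site: c[1] > c[0] for site, c in counts.items()}

# "Inference": predict purely from the site shortcut.
correct = sum(majority[site] == drinks for site, drinks in data)
accuracy = correct / len(data)
print(f"accuracy from site alone: {accuracy:.0%}")
```

The accuracy looks impressive, but the signal is a confounder, not medicine—exactly why a model that “performs shockingly well” on a task a knee shouldn’t predict is a warning sign rather than a success.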

Another factor that will likely hold AI-powered tools back from being deployed across the healthcare industry is the regulatory process around new tools in medicine, Zimmerman says.

Typically, the U.S. Food and Drug Administration (FDA) approves diagnostics narrowly, validating their utility for specific diagnoses. Under current procedures, he explains, a generalized “Dr. AI” tool would be difficult to vet.

Even with these and other limitations, some AI-powered tools have already shown they can play doctor quite well. For example, a generative AI chatbot developed by a Dartmouth team to provide therapy, called Therabot, was shown in an eight-week clinical trial to meaningfully reduce psychological symptoms of users with depression, anxiety, or an eating disorder.

As AI tools emulate clinicians more and more accurately, Hill recommends continuing to involve humans. “Most AI is trained based on past human diagnoses. That means, in many cases, the AI can only be as good as a second doctor in the room.”

Keeping Humanity In Healthcare

Large technology companies are often leading the latest advances in AI, even in the medical sphere. But it’s imperative, Zimmerman says, that clinicians are part of the conversation to ensure that AI use in healthcare is ethical, best serves patients, and doesn’t make egregious errors that those outside of healthcare might not consider.

“My hope is that more people start to look for ways to be leaders in this field, because it’s important to have a voice at the beginning,” he says. Institutionally, Geisel is taking strides so that future healthcare leaders are well-versed in AI, both through initiatives like CPHAI and through a new curriculum. For example, training in both the mechanisms behind AI and AI-powered tools is now embedded in medical students’ preclinical curriculum.

“Traditionally, physicians have received no formal education in AI and machine learning, unless it’s been part of a niche specialty or their research,” says Thomas Thesen, PhD, associate professor of medical education and of computer science at Geisel, and a faculty leader in developing Geisel’s AI curriculum.

Thesen adds that not only has Geisel been among the first institutions to add AI training to its preclinical curriculum, but he and other Geisel faculty are also using an AI-generated patient actor to help medical students become better clinical communicators.

The future of AI in medicine might not be such a dramatic pivot as some headlines (including the one atop this story) may make it seem. Rather, the revolution will likely be iterative, Hill says. “It doesn’t replace humans in the way that we think about, where it does somebody’s entire job. It means people can do more with less, that’s all it comes down to. But that’s what computers have been all along,” he says.

“The role of the physician has changed through technology,” Thesen says. “We don’t require physicians to store so much information in their heads anymore. Vast amounts of information are just a click away and physicians can look up information much quicker, much more reliably than before.” Now, with the addition of AI, he says, more than ever “the role of a physician is to be human.”

To learn more about digital health innovation, contact Bethany Solomon at 603-646-5134 or Bethany.Solomon@dartmouth.edu.