Over the last few years, artificial intelligence (AI) has moved from a nascent futuristic concept to an everyday tool. We’ve seen an AI-generated video of a murder victim make a statement in court, police using AI chatbots to write crime reports, traffic lights made “smart” with AI, and AI-supported apps that tell you why your houseplant is dying.
As AI matures, its ability to process information like humans—but at warp speed—makes it an enticing way to supercharge endeavors that require synthesizing large amounts of information. This is particularly true for medicine and healthcare, which require clinicians to use their knowledge of the vast body of existing and ever-changing medical research to interpret patients’ symptoms.
“This is going to be the biggest wave to hit medicine,” says Asha Zimmerman, MD, a transplant surgeon at Dartmouth Health’s Dartmouth Hitchcock Medical Center (DHMC) and assistant professor of surgery at the Geisel School of Medicine at Dartmouth, who is working on his own applications for AI in medicine.
Transplant surgeon Asha Zimmerman, MD (right), says AI “is going to be the biggest wave to hit medicine.” Zimmerman, who practices at DHMC and is an assistant professor of surgery at Geisel, is building his own AI tool, called Vox Cura, to provide patients with on-demand medical advice when they can’t access human doctors.
Technology is improving, Zimmerman says, but the risk of hallucination is one justification for the multi-agent approach Hassanpour espouses: if one model hallucinates, the others can essentially outvote its conclusions.
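Hassanpour's actual system isn't detailed here, but the outvoting idea can be sketched in a few lines. In this hypothetical example, three independent models answer the same question, one of them hallucinates a different conclusion, and a simple majority vote discards the outlier; the `majority_answer` function and the sample responses are illustrative assumptions, not the real tool.

```python
from collections import Counter

def majority_answer(answers):
    """Return the answer most models agree on, plus the agreement rate.

    `answers` is a list of responses from independent models; a single
    hallucinated outlier is outvoted by the consensus.
    """
    counts = Counter(answers)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(answers)

# Hypothetical: three models assess the same finding; one hallucinates.
responses = ["benign", "benign", "malignant"]
answer, agreement = majority_answer(responses)
print(answer, round(agreement, 2))  # benign 0.67
```

The consensus answer wins with two-thirds agreement; a low agreement rate could also serve as a flag that the question deserves human review.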
Hallucinations aren’t the only glitch, however. Some AI-driven tools have learned to take shortcuts, which can introduce irrelevant information into the equation. In a study co-authored by Hill, Dartmouth Health researchers dug into the mechanism behind those shortcuts by asking AI models to predict whether patients eat refried beans or drink beer simply by examining X-rays of their knees. The models performed shockingly well.
“A knee should have nothing to do with beer or beans,” says study senior author Peter Schilling, MD, MS, an orthopaedic surgeon at DHMC and an assistant professor of orthopaedics at Geisel. Instead, “it has some understanding of where the image is taken, and something about the averages of demographics within that area, so then it can leverage those little hints to draw conclusions.” Even if the hints are essentially meaningless.
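The study's models and data aren't reproduced here, but the statistical trick Schilling describes is easy to simulate. In this hypothetical sketch, each "X-ray" carries only a hidden marker of the site where it was taken, beer drinking correlates with site (the assumed rates are invented), and a shortcut "model" that ignores anatomy entirely still beats chance by predicting each site's majority behavior.

```python
import random

random.seed(0)

# Hypothetical confounder: site A's population drinks beer more often
# than site B's. The knee itself carries no signal at all.
beer_rate = {"A": 0.8, "B": 0.2}
patients = [(site, random.random() < beer_rate[site])
            for site in random.choices("AB", k=10_000)]

# Shortcut "model": read the site hint, predict that site's majority.
predict = {"A": True, "B": False}
accuracy = sum(predict[site] == drinks
               for site, drinks in patients) / len(patients)
print(f"{accuracy:.2f}")  # well above the 0.50 chance level
```

With the assumed 80/20 split, site identity alone yields roughly 80% accuracy, which is exactly why a demographic hint baked into an image can masquerade as medical insight.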
Another factor that will likely hold AI-powered tools back from being deployed across the healthcare industry is the regulatory process around new tools in medicine, Zimmerman says.
Typically, the U.S. Food and Drug Administration (FDA) approves diagnostics narrowly, determining their utility for specific diagnoses. So under current procedures, he explains, a generalized “Dr.” AI tool would be difficult to vet.