If A.I. Can Diagnose Patients, What Are Doctors For?
October 02, 2025

Artificial intelligence is permeating every facet of our lives. Health care is no exception. As they once did by scouring WebMD, people are turning to ChatGPT to diagnose themselves. But doctors are using the technology too. Dhruv Khullar examines where the line between help and harm lies when it comes to incorporating large language models into medicine:
Learning how to deploy A.I. in the medical field, Rodman told me later, will require a science of its own. Last year, he co-authored a study in which some doctors solved cases with help from ChatGPT. They performed no better than doctors who didn’t use the chatbot. The chatbot alone, however, solved the cases more accurately than the humans. In a follow-up study, Rodman’s team suggested specific ways of using A.I.: they asked some doctors to read the A.I.’s opinion before they analyzed cases, and told others to give the A.I. their working diagnosis and ask for a second opinion. This time, both groups diagnosed patients more accurately than humans alone did. The first group proved faster and more effective at proposing next steps. When the chatbot went second, however, it frequently “disobeyed” an instruction to ignore what the doctors had concluded. It seemed to cheat by anchoring its analysis to the doctor’s existing diagnosis.
Systems that strategically combine human and A.I. capabilities have been described as centaurs; Rodman’s research suggests that they have promise in medicine. But if A.I. tools remain imperfect and humans lose the ability to function without them—a risk known as “cognitive de-skilling”—then, in Rodman’s words, “we’re screwed.” In a recent study, gastroenterologists who used A.I. to detect polyps during colonoscopies got significantly worse at finding polyps themselves. “If you’re a betting person, you should train doctors who know how to use A.I. but also know how to think,” Rodman said.
It seems inevitable that the future of medicine will involve A.I., and medical schools are already encouraging students to use large language models. “I’m worried these tools will erode my ability to make an independent diagnosis,” Benjamin Popokh, a medical student at the University of Texas Southwestern, told me. Popokh decided to become a doctor after a twelve-year-old cousin died of a brain tumor. On a recent rotation, his professors asked his class to work through a case using A.I. tools such as ChatGPT and OpenEvidence, an increasingly popular medical L.L.M. that offers health-care professionals free access. Each chatbot correctly diagnosed a blood clot in the lungs. “There was no control group,” Popokh said, meaning that none of the students worked through the case unassisted. For a time, Popokh found himself using A.I. after virtually every patient encounter. “I started to feel dirty presenting my thoughts to attending physicians, knowing they were actually the A.I.’s thoughts,” he told me. One day, as he left the hospital, he had an unsettling realization: he hadn’t thought about a single patient independently that day. He decided that, from then on, he would force himself to settle on a diagnosis before consulting artificial intelligence. “I went to medical school to become a real, capital-‘D’ doctor,” he told me. “If all you do is plug symptoms into an A.I., are you still a doctor, or are you just slightly better at prompting A.I. than your patients?”
from Longreads https://longreads.com/2025/10/01/if-a-i-can-diagnose-patients-what-are-doctors-for/