By Carla K. Johnson, Associated Press
Don’t be surprised if your doctors start writing you overly friendly messages. They could be getting some help from artificial intelligence.
New AI tools are assisting doctors in communicating with their patients, some by responding to messages and others by taking notes during exams. It’s been 15 months since OpenAI released ChatGPT. Already thousands of doctors are using similar products based on large language models. One company says its tool works in 14 languages.
AI saves doctors time and prevents burnout, enthusiasts say. It also disrupts the doctor-patient relationship, raising questions of trust, transparency, privacy and the future of human connection.
A look at how new AI tools affect patients:
IS MY DOCTOR USING AI?
In recent years, medical devices with machine learning have been performing tasks such as interpreting mammograms, diagnosing eye disease, and detecting heart problems. What’s new is generative AI’s ability to respond to complex instructions by predicting language.
Your next check-up could be recorded by an AI-powered smartphone app that listens, documents and instantly organizes everything into a note you can read later. The tool also can mean more money for the doctor’s employer because it won’t forget details that legitimately could be billed to insurance.
Your doctor should ask for your consent before using the tool. You might also see some new wording in the forms you sign at the doctor’s office.
Other AI tools could be helping your doctor compose a message, but you might never know it.
“Your physician might tell you that they’re using it, or they might not tell you,” said Cait DesRoches, director of OpenNotes, a Boston-based group working for transparent communication between doctors and patients. Some health systems encourage disclosure, and some don’t.
Doctors or nurses must approve the AI-generated messages before sending them. In one Colorado health system, such messages contain a sentence disclosing they were automatically generated. But doctors can delete that line.
“It sounded exactly like him. It was remarkable,” said patient Tom Detner, 70, of Denver, who recently received an AI-generated message that began: “Hello, Tom, I’m glad to hear that your neck pain is improving. It’s important to listen to your body.” The message ended with “Take care” and a disclosure that it had been automatically generated and edited by his doctor.
Detner said he was glad for the transparency. “Full disclosure is very important,” he said.
WILL AI MAKE MISTAKES?
Large language models can misinterpret input or even fabricate inaccurate responses, an effect known as hallucination. The new tools have internal safeguards meant to keep those inaccuracies from reaching patients or ending up in electronic health records.
“You don’t want those false things entering the clinical notes,” said Dr. Alistair Erskine, who heads digital innovations for Georgia-based Emory Healthcare, where hundreds of doctors are using a product from Abridge to document patient visits.
The tool processes the doctor-patient conversation through multiple large language models and removes strange ideas, Erskine said. “It’s a way of eliminating hallucinations.”
Ultimately, “the doctor is the most important safety measure,” said Abridge CEO Dr. Shiv Rao. As doctors review AI-generated notes, they can click on any word and listen to the specific segment of the patient’s visit to check accuracy.
In Buffalo, New York, a different AI tool misheard Dr. Lauren Bruckner when she told a teenage cancer patient it was a good thing she didn’t have an allergy to sulfa drugs. The AI-generated note said, “Allergies: Sulfa.”
The tool “totally misunderstood the conversation,” said Bruckner, chief medical information officer at Roswell Park Comprehensive Cancer Center. “That doesn’t happen often, but clearly that’s a problem.”
WHAT ABOUT THE HUMAN TOUCH?
AI tools can be programmed to be warm, compassionate and informative.
But they can get carried away. In Colorado, a patient with a runny nose was concerned to learn from an AI-generated message that the problem could be a brain fluid leak. (It wasn’t.) A nurse hadn’t proofread carefully and mistakenly sent the message.
“At times, it’s an astonishing help and at times it’s of no help at all,” said Dr. C.T. Lin, who leads technology innovations at Colorado-based UCHealth, where about 250 doctors and staff use a Microsoft AI tool to write the initial version of messages to patients. The messages are delivered through Epic’s patient portal.
The tool had to be taught about a new RSV vaccine because it was drafting messages saying there was no such thing. But with routine advice, like rest, ice, compression and elevation for an ankle sprain, “it’s excellent for that,” Lin said.
Also on the positive side, doctors using AI are no longer confined to their computers during medical appointments. They can make eye contact with their patients because the AI tool records the exam.
The tool requires spoken words, so doctors are learning to explain things verbally, said Dr. Robert Bart, chief medical information officer at Pittsburgh-based UPMC. A doctor might say: “I am currently examining the right elbow. It is quite swollen. It feels like there’s fluid in the right elbow.”
Talking through the exam for the benefit of the AI tool can also help patients understand what’s going on, Bart said. “I’ve been in an examination where you hear the hesitation while the physician is doing it. And I’m always wondering, ‘Well, what does that mean?’”
WHAT ABOUT PRIVACY?
U.S. law requires health care systems to get assurances from business associates that they will safeguard protected health information, and the companies could face investigation and fines from the Department of Health and Human Services if they make a mistake.
Doctors interviewed for this article said they are confident in the data security of the new products and that the information will not be sold.
Information shared with the new tools is used to improve them, which could add to the risk of a health care data breach.
Dr. Lance Owens, chief medical information officer at the University of Michigan Health-West, where 265 doctors, physician assistants and nurse practitioners are using a Microsoft tool to record patient exams, said he is confident that patient data is protected.
“When they assure us that our data is protected and isolated, we trust that,” Owens said.
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Science and Educational Media Group. The AP is solely responsible for all content.