The more I examine AI in medicine, the more it appears to be very risky. A Med City News piece notes:
Another healthcare executive — Jess Botros, vice president of IT strategy and operations at Ardent Health — noted that she wants the system’s clinicians to be able to spend as much time as possible with patients and have the right tools in hand. That said, there’s a lot of responsibility when it comes to deploying AI.
“In order to do this in the right way, you have to have your house in order from a data perspective, from a trust perspective,” she said. “You think about change management impacts and making sure that people are really along for the ride and really understand why we’re doing what we’re trying to do. It becomes super important.”
My concerns are as follows:
1. Good medical practice requires understanding the patient, yet patients are often the main obstacle to it. They delay treatment, they often do not report all their symptoms, they attempt their own diagnoses and thereby add noise to the process, and fundamentally they do not listen, often out of fear, while the physician does not explain well enough.
2. Patient data is all too often in error. From time to time I examine my own records and see entries that make no sense as well as critical entries missing. For example, I have never had GERD and my lipids are rock bottom, yet both were listed otherwise on various reports. If this data is fed into an AI system, the AI doc will naturally come up with the wrong answer. And just try correcting these errors; it is nearly impossible.
3. Does the AI doc need a license to practice? In what state? If I were to try to practice in Georgia, I would face time in prison. But if the AI doc is in Montana, can it diagnose a patient in New York?
4. Perhaps my biggest issue: whom does a patient sue? The AI doc does not really exist.
5. The AI doc is really just a good and fast research librarian: ask a question and get an answer based upon existing information. But what if this patient is a one-off, a presentation never seen before? (Some 98% of medicine is rote, but the remaining 2% is the challenge, and it is most likely missing from the information fed to the AI system.)
6. Patients are asked to fill out health forms, and many have no idea how to answer. Long lists of conditions a patient may have had yield confusing answers. I am often reminded of Marty Samuels and his discussions of "dizzy": trying to determine what type of dizziness and its cause may result in many unnecessary tests and may even miss a severe and immediate cause.
Thus bad input data, unseen conditions, and poor patient communication are just a few of the issues with AI docs.
In a Bayesian world, and much of medicine works that way, diagnosis and treatment are often based upon pre-existing data. If that patient data is wrong or not current, then the results could be unproductive or, worse, deadly!
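To make the Bayesian point concrete, here is a minimal sketch of how a single erroneous chart entry can skew a diagnostic posterior. The test sensitivity, specificity, and priors are purely hypothetical numbers chosen for illustration, not taken from any real system or study.

```python
# Minimal sketch (hypothetical numbers): how an erroneous record skews a
# Bayesian diagnosis. Suppose a test has 90% sensitivity and 95% specificity,
# and the correct prior probability of the condition for this patient is 1%.
# If the chart erroneously lists a history that inflates the prior to 30%,
# the posterior after a positive test changes dramatically.

def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1.0 - specificity
    numerator = p_pos_given_disease * prior
    denominator = numerator + p_pos_given_healthy * (1.0 - prior)
    return numerator / denominator

sensitivity, specificity = 0.90, 0.95

for label, prior in [("correct record (prior 1%)", 0.01),
                     ("erroneous record (prior 30%)", 0.30)]:
    print(f"{label}: P(disease | +) = {posterior(prior, sensitivity, specificity):.1%}")

# correct record (prior 1%):   P(disease | +) is about 15%  -> likely a false positive
# erroneous record (prior 30%): P(disease | +) is about 89% -> looks near-certain
```

Same test, same result; the only difference is the bad entry in the record, and the AI doc moves from "probably nothing" to "almost certainly disease."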