We are entering a different world in medicine. Things are changing quickly, and one of the biggest changemakers may be AI (artificial intelligence). A recent study showed that:
A study published in JAMA Internal Medicine indicates that artificial intelligence assistant-generated responses to patients’ questions are better than physicians’ responses regarding quality and empathy.
This may intrigue you, but be careful. Comparing ChatGPT to burned-out doctors answering emails is like shooting fish in a barrel. In other words, almost any response may come out looking better. The point of doing the study is obvious: they want to replace physicians in the long run.
Based on these findings, the scientists recommend that AI chatbot assistants can be adopted in clinical setups for electronic messaging. However, chatbot-generated messages should be reviewed and edited by physicians to improve accuracy levels and restrict potential false or fabricated information.
First they want doctors to adopt it. Then they want doctors to review it. Soon they will remove the docs and just put in a disclaimer that the AI is not responsible for its answers.
But should you, as a DPC doc, use AI? I have played around with it, and the AI answers sure sound nice. That’s a good thing. You may even get some ideas for a differential diagnosis from the AI. All of that is helpful. But be careful. I have seen LOTS of AI mistakes, and you need to review EVERYTHING. You also risk your patients finding out that a response came from an AI. That really looks bad.
In my “Churn” book I talk about how to make sure your emails and texts are not too abrupt and cannot be taken the wrong way. My point is that you DO need to be better in this area because of how patients can interpret your responses (if you are too curt, if you use the wrong words, etc.).
I would love to hear any thoughts on how you are using AI for emails and texts.
Until then, enjoy this snippet about ChatGPT from South Park, but watch the whole episode to see how things can really go wrong with it.