Heard a story that a doctor used an LLM to prescribe medicine for a chronic issue that had terrible side effects, and another doc had to cancel it. Come to find out, the labs didn't even match the meds. It's over, there's nothing more for us to do. Fucking hell, world.


Y’know how reinforcement is really important when you’re studying? Like listening to a lecture, writing notes, then reviewing those notes later?
It’s the same for doctors - they listen to patients and assess symptoms, take notes during the consultation (or mental notes), then write out full notes later to keep in your patient file. But that’s not just administrative busywork, at least not entirely. The process of listening, examining, writing out and revising the formal notes gives a doctor time to process, to identify gaps, to recall obscure info, or to spot indications of something they should look into further.
Burn me at the stake for this, but I can see positive uses for limited AI in diagnostic applications, for troubleshooting, or as something to bounce ideas off of. That can be very useful, although it comes with risks. But using AI to replace the work of doctors is very troubling.
I read a story on Redd*t, I think, so 50/50 it was real, but a person working as a medical transcriber was talking about their company shifting to AI transcription and how it was making their job harder, because the AI would regularly hallucinate the most absurd things and started inserting commentary from one fictitious figure who would say weird shit. The team started to talk about this figure as if it were a character in a novel, and it became a running joke.
It’s mindboggling, because there are certain things that can make your life really hard when seeking healthcare, like being marked as having drug-seeking behavior or having BPD. It would only take one AI hallucination to put that on your patient file, and suddenly you’re stuck with a label that’s virtually impossible to get rid of and that can drastically affect your treatment as a patient. And let’s be honest here, a doctor is probably not going to remember the details from 6 or 12 months ago when they allegedly wrote that in your file, especially if they didn’t actually write it (which is proven to affect recall), so they’re almost certainly going to defer to “their” notes and agree with them.
This shit is so concerning. I wish we weren’t a dictatorship of the bourgeoisie being puppeted by Silicon Valley techbros. AI should get the Amish treatment - it should exist in some outbuilding, isolated from the rest of the world, and you should have to intentionally go out of your way to use it, purposefully and with consideration for the consequences. It shouldn’t be quietly replacing things, least of all in critical institutions like medicine or education. You can fuck with a lot of things, and believe me, I have a laundry list of complaints about both of those institutions, but breaking education and/or medicine risks breaking society.
Counterpoint: You can randomly talk about your huge dick a few times every appointment and the AI has no choice but to put it in your file.
A solution has arrived for this poor soul
My ex dealt with this a lot. She has chronic health problems, and one of her specialists moved to an LLM-based ‘scribe.’ It would routinely misinterpret her, littering her official file with symptoms she didn’t have, claiming she hadn’t tried remedies she’d explicitly said she had tried, and hallucinating all sorts of other garbage. Then at the next appointment, the doctor would open by reading this garbage back and chiding her for not doing x, claiming she had y but then saying it was z, etc. The whole appointment would turn into correcting the record instead of anything productive. So frustrating and irresponsible.