Heard a story that a doctor used an LLM to prescribe medicine for a chronic issue, and the meds had such terrible side effects that another doc had to cancel them. Come to find out the labs didn’t even match the meds. It’s over, there’s nothing more for us to do. Fucking hell world.
Doctor friends are having auto-scribe functionality pushed on them relentlessly. For those not in the know, competent doctors spend some time after your visit (up to half as long as the visit itself, perhaps) writing a note summarizing the visit and deciding on a course of action. This important process is being turned into slop as doctors are encouraged to use LLMs to generate the note. The time saved, of course, is rolled into increasing the number of patient visits or other income-producing activities for the hospital.
Y’know how reinforcing what you’ve learned is really important when you’re studying? Like listening to a lecture, writing notes, then reviewing those notes later?
It’s the same for doctors: they listen to patients and assess symptoms, they take notes during the consultation or keep mental notes, then they write out full notes later to keep in your patient file. But that’s not just administrative busywork, at least not entirely. The process of listening, examining, then writing out and revising the formal notes gives a doctor time to process the visit, identify any gaps, recall obscure info, or spot indications of what they should look into further.
Burn me at the stake for this, but I can see positive uses for limited AI in diagnosis support, troubleshooting, or as something to bounce ideas off of. That can be very useful, although it comes with risks. But using AI to replace the work of doctors is deeply troubling.
I read a story on Redd*t, I think, so 50/50 it was real, but a person who worked as a medical transcriber described their company shifting to AI transcription and how it was making their job harder, because the AI would regularly hallucinate the most absurd things, and it started inserting commentary from one fictitious figure who would say weird shit. The team started talking about this figure as if it were a character in a novel, and it became a running joke.
It’s mind-boggling, because there are certain things that can make your life really hard when seeking healthcare, like being marked as having drug-seeking behavior or having BPD. It would only take the AI one hallucination to put that on your patient file, and suddenly you’re stuck with a label that’s virtually impossible to get rid of and that can drastically affect your treatment as a patient. And let’s be honest here, a doctor is probably not going to remember the details from 6 or 12 months ago when they allegedly wrote that in your file, especially if they didn’t actually write it (which is proven to affect recall), so they’re almost certainly going to defer to “their” notes and agree with them.
This shit is so concerning. I wish we weren’t a dictatorship of the bourgeoisie being puppeted by Silicon Valley techbros. AI should get the Amish treatment - it should exist in some outhouse building, isolated from the rest of the world, and you should have to intentionally go out of your way to use it, purposefully and with consideration for the consequences. It shouldn’t be effectively replacing things, least of all in critical institutions like medicine or education. You can fuck with a lot of things, and believe me, I have a laundry list of complaints about both of these institutions, but breaking education and/or medicine risks breaking society.
Counterpoint: You can randomly talk about your huge dick a few times every appointment and the AI has no choice but to put it in your file.
My ex dealt with this a lot. She has chronic health problems, and one of her specialists moved to an LLM-based ‘scribe.’ It would routinely misinterpret her, littering her official file with symptoms she didn’t have, claiming she hadn’t tried remedies she’d explicitly said she tried, and hallucinating all sorts of other garbage. Then at the next appointment, the doctor would open by just reading this garbage off and chiding her for not doing x, claiming she had y but then saying it was z, etc. And the whole appointment would turn into correcting the record instead of anything productive. So frustrating and irresponsible.
Oh great! So this is systematized and not some one-off situation.
The people pushing it sincerely believe that it reduces error. They are out of their minds.
It’s cool that no amount of proof could convince them of this fact; they’ll ignore the truth because they prefer the lie that their machine isn’t capable of lying.
https://lookinside.kaiserpermanente.org/blog/2025/06/10/ai-assists-in-medical-visit-note-taking/
Between October 2023 and December 2024, the tool was used by 7,260 physicians to assist in about 2.5 million patient encounters. Some doctors were particularly heavy users — nearly 3,500 physicians each used AI scribes in at least 100 patient encounters.
Former scribe here. I left the job for unrelated reasons, but it was being pushed on doctors hard (final straw for my doc, he switched hospitals).
Using an LLM for scribing is radically stupid because it can only catch words, not emotions. Sometimes the patients themselves don’t even know how to describe their symptoms, or the patient focuses on something non-serious over a very serious symptom. “Yeah doc, I know one of my legs gives out every 5 minutes and I’m going blind, but I sneeze a lot???”
LLMs can’t parse that, and they end up making more work for the doctors by forcing them to edit all of their notes.
If this is true, may their state medical board make confetti of their license. Jail time would be nice, for the doc and anyone in their org who approved the use of LLMs in patient care. Restitution for the patients harmed. I wanna go fucking scorched earth on these people.

Thank you for coming to my TED talk, now please read Health Communism by Beatrice Adler-Bolton and Artie Vierkant.
More reasons to deprivatize healthcare.
last one out the door, please turn off the lights.
Absolutely wild
Machine learning is well-suited for diagnostics and treatment support, and there are already tools for this; there is no fucking reason to use a language model of all things. 12+ years of school, making 400k+ a year, but too lazy to use UpToDate or any of the hundreds of other options, reaching for a glorified Markov chain instead. Walking past millions of hours of human study and knowledge to pick up a fuckin magic 8 ball.
This is even worse than the story about using AI to recognize edible mushrooms.
Oh god, no. I’m a beginner forager and I wouldn’t even touch a mushroom I found unless multiple professionally written sources helped me identify it clearly. AI can’t take spore prints, so…what the fuck?