• 33 Posts
  • 1.18K Comments
Joined 3 years ago
Cake day: June 27th, 2023

  • The only people I trust as little as I trust the owners of corporate social media are the politicians who have decided to cash in on the moment by “regulating” them. I mean, here in progressive Massachusetts, the state house of representatives just this week passed a bill that, depending on the whims of the Attorney General, would require awful.systems to verify the ages of its users by gathering their government-issued IDs or biometrics. We are, you see, a “public website, online service, online application or mobile application that displays content primarily generated by users and allows users to create, share and view user-generated content with other users”. And so we would have to “implement an age assurance or verification system to determine whether a current or prospective user on the social media platform” is 16 or older. (Or 14 or 15 with parental consent, but your humble mods lack the resources to parse divorce laws in all localities worldwide, sort out issues of disputed guardianship, etc., etc.) The meaning of what “practicable” age verification is supposed to be would depend upon regulations that the Attorney General has yet to write.

    So, yeah, as an old-school listserv nerd who had the “I am not on Facebook” T-shirt 15 years ago, I don’t trust any of these people.



  • “Scientists invented a fake disease. AI told people it was real”

    https://www.nature.com/articles/d41586-026-01100-y

    But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.

    The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

    The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.