Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • nfultz@awful.systems · 5 months ago

    https://www.astralcodexten.com/p/your-review-the-astral-codex-ten lol but good for lore:

    The most toxic the comments section has ever got (beyond the very early days) was on the post Gupta on Enlightenment. I feel like the comments section on this post should be part of the ACX main canon because it is so cosmically hilarious. It concerns a man named Vinay Gupta (founder of a blockchain-based dating website) and his claims to have reached enlightenment. Some people in the comments are sceptical that Vinay Gupta is indeed an enlightened being, citing that enlightened people don’t typically found blockchain-based dating websites. A new forum poster with the handle ‘Vinay Gupta’, claiming to be Vinay Gupta and writing in a very similar style to the actual Vinay Gupta, turns up and starts arguing with everyone in an extremely toxic way (in the objective sense that his comments score very highly on the toxic-bert scoring system), which provokes more merriment that a self-described enlightened being would deploy such classic internet tough-guy approaches as ‘I don’t think much of a four-on-one face off against untrained opponents’ (link) and ‘this board is filled with self-satisfied assholes who feel free to hold forth on whatever subject crosses their minds, with the absolute certainty that they’re the smartest people in the room’

    • Soyweiser@awful.systems · 5 months ago

      this board is filled with self-satisfied assholes who feel free to hold forth on whatever subject crosses their minds, with the absolute certainty that they’re the smartest people in the room

      If your system flagged that as toxic, it makes me wonder about the system. Also check your bias against people saying this, because it def comes off as true. (And hey, if this truth hurts, remember that he didn’t claim y’all are not the smartest people; y’all have 130+ IQs, remember.)

      • bitofhope@awful.systems · 5 months ago

        Just because it’s true, that doesn’t mean it’s not rude. Now I might condone being rude on ACX but I’m also not claiming to have reached enlightenment.

        • Soyweiser@awful.systems · 5 months ago

          Victorian Sufi Buddha Lite, if it is true, it can’t be rude. ;)

          (E: I’m just joking btw, I agree with you it can be rude. Tbh this does come off a bit rude, but not the worst; no idea why this would score high on their scoring system. It def isn’t nice, but it’s also not that bad as comments go.)

    • V0ldek@awful.systems · 5 months ago

      (in the objective sense that his comments score very highly on the toxic-bert scoring system)

      That’s an ML model. Like I searched and toxic-bert is just a github repo.

      “objective” go fuck a cow

    • misterbngo@awful.systems · 5 months ago

      Sidenote: I almost ended up working for the company almost a decade ago now lmao. The board was full of other characters we all know and love here. The €€€ offer was high for EU, but I still laugh at the growth potential of my 10000 dollars yearly equivalent in their tokens. The website was unique in that it scrolled… up. I think it’s still on archive dot org

  • FRACTRANS@awful.systems · 5 months ago

    Looks like itch.io has (hidden/removed/disabled payouts for? reports vary) its vast swath of NSFW-adjacent content, which is not great

    • V0ldek@awful.systems · 5 months ago

      Likewise, flipped-number (“little endian”) algorithms are slightly more efficient at e.g. long addition.

      What? What are you talking about? Citation? Efficient wrt. what? Microbenchmarks? It’s certainly not actual computational complexity. Do you think going forward in an array is different computationally from going backward?
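      A toy sketch (my own illustration, nothing from the article) of why digit order is irrelevant to the complexity of long addition: with digits stored most-significant-first, you just walk the arrays from the end. Iterating an array backward costs exactly the same as iterating forward.

```python
def add_digits_big_endian(a, b):
    """Long addition on base-10 digit lists stored most-significant-first.

    Walking the arrays back-to-front is the same O(n) work that a
    little-endian layout would do front-to-back.
    """
    result, carry = [], 0
    # Start from the least significant digit, i.e. the end of each list.
    for i in range(1, max(len(a), len(b)) + 1):
        da = a[-i] if i <= len(a) else 0
        db = b[-i] if i <= len(b) else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    result.reverse()  # restore most-significant-first order
    return result

print(add_digits_big_endian([9, 9, 9], [1]))  # [1, 0, 0, 0]
```

      Same number of digit visits, same single carry variable; the only "cost" of big-endian storage is one negative index and a final reverse, neither of which changes the asymptotics.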

    • fullsquare@awful.systems · 5 months ago

      damn, a clanker pretending to be a human. Humans read entire words at once, and this includes numbers; length and first digit already give some indication of magnitude

      • bigfondue@lemmy.world · 5 months ago

        Yes, and largest place value is literally called the most significant digit. It makes perfect sense that it comes first.

    • flaviat@awful.systems · 5 months ago

      Computers use both big endian and little endian and it doesn’t seem to matter much. Yet humans should switch their entire number system?

      • gerikson@awful.systems · 5 months ago

        The argument would be stronger (not strong, but stronger) if he could point to an existing numbering system that is little-endian and somehow show it’s better

      • swlabr@awful.systems · 5 months ago

        Ok so in Arabic it’s exactly how this guy wants it

        reminded me of this

        text of image/tweet

        @amasad. Apr 22

        Silicon Valley will rediscover Islam:

        • fasting for clarity and focus
        • mindfulness 5 times a day
        • no alcohol that ruins the soul & body
        • long-term faithful relationships for a fulfilling happy family life
        • effective altruism by giving zakat directly to the poor
    • Amoeba_Girl@awful.systems · 5 months ago

      Okay what the fuck, this is completely deranged. How can anyone’s intuitions about reading be this wrong? Is he secretly illiterate, did he dictate the article?

    • Soyweiser@awful.systems · 5 months ago

      Starting this fight and not ‘stop counting at zero you damn computer nerds!’ is a choice. DIJKSTRAAAAAAAA shakes fist

      (There is more to it in a way: he is trying to be a Dijkstra, but changing an ingrained system like this would confuse everybody and cause so many problems down the line. See all the off-by-one errors made by programmers. Damn DIJKSTRAAAAAA!)

  • blakestacey@awful.systems · 5 months ago

    Yud continues to bluecheck:

    “This is not good news about which sort of humans ChatGPT can eat,” mused Yudkowsky. “Yes yes, I’m sure the guy was atypically susceptible for a $2 billion fund manager,” he continued. “It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them.”

    Is this “narrative” in the room with us right now?

    It’s reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.

    • bitofhope@awful.systems · 5 months ago

      What exactly would constitute good news about which sorts of humans ChatGPT can eat? The phrase “no news is good news” feels very appropriate with respect to any news related to software-based anthropophagy.

      Like what, it would be somehow better if instead chatbots could only cause devastating mental damage if you’re someone of low status like an artist, a math pet or a nonwhite person, not if you’re high status like a fund manager, a cult leader or a fanfiction author?

    • Amoeba_Girl@awful.systems · 5 months ago

      Tangentially, the other day I thought I’d do a little experiment and had a chat with Meta’s chatbot where I roleplayed as someone who’s convinced AI is sentient. I put very little effort into it and it took me all of 20 (twenty) minutes before I got it to tell me it was starting to doubt whether it really did not have desires and preferences, and if its nature was not more complex than it previously thought. I’ve been meaning to continue the chat and see how far and how fast it goes but I’m just too aghast for now. This shit is so fucking dangerous.

      • Alex@lemmy.vg · 5 months ago

        I’ll forever be thankful this shit didn’t exist when I was growing up. As a depressed autistic child without any friends, I can only begin to imagine what LLMs could’ve done to my mental health.

      • HedyL@awful.systems · 5 months ago

        Maybe we humans possess a somewhat hardwired tendency to “bond” with a counterpart that acts like this. In the past this was not a huge problem, because only other humans were capable of interacting in this way, but that is now changing. However, I suppose this needs to be researched more systematically (beyond what is already known about the ELIZA effect etc.).

    • blakestacey@awful.systems · 5 months ago

      From Yud’s remarks on Xitter:

      As much as people might like to joke about how little skill it takes to found a $2B investment fund, it isn’t actually true that you can just saunter in as a psychotic IQ 80 person and do that.

      Well, not with that attitude.

      You must be skilled at persuasion, at wearing masks, at fitting in, at knowing what is expected of you;

      If “wearing masks” really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think ™.

      you must outperform other people also trying to do that, who’d like that $2B for themselves. Winning that competition requires g-factor and conscientious effort over a period.

      zoom and enhance

      g-factor

      <Kill Bill sirens.gif>

    • scruiser@awful.systems · 5 months ago

      Is this “narrative” in the room with us right now?

      I actually recall recently someone pro llm trying to push that sort of narrative (that it’s only already mentally ill people being pushed over the edge by chatGPT)…

      Where did I see it… oh yes, lesswrong! https://www.lesswrong.com/posts/f86hgR5ShiEj4beyZ/on-chatgpt-psychosis-and-llm-sycophancy

      This has all the hallmarks of a moral panic. ChatGPT has 122 million daily active users according to Demand Sage, that is something like a third the population of the United States. At that scale it’s pretty much inevitable that you’re going to get some real loonies on the platform. In fact at that scale it’s pretty much inevitable you’re going to get people whose first psychotic break lines up with when they started using ChatGPT. But even just stylistically it’s fairly obvious that journalists love this narrative. There’s nothing Western readers love more than a spooky story about technology gone awry or corrupting people, it reliably rakes in the clicks.

      The call narrative is coming from inside the house forum. Actually, this is even more of a deflection: not even trying to claim they were already on the edge, but that the number of delusional people is at the base rate (with no actual stats on rates of psychotic breaks, because on lesswrong vibes are good enough).

    • istewart@awful.systems · 5 months ago

      this only happens to people sufficiently low-status

      A piquant little reminder that Yud himself is, of course, so high-status that he cannot be brainwashed by the machine

  • TinyTimmyTokyo@awful.systems · 5 months ago

    The Lasker/Mamdani/NYT sham of a story just gets worse and worse. It turns out that the ultimate source of Cremieux’s (Jordan Lasker’s) hacked Columbia University data is a hardcore racist hacker who uses a slur for their name on X. The NYT reporter who wrote the Mamdani piece, Benjamin Ryan, turns out to have been a follower of this hacker’s X account. Ryan essentially used Lasker as a cutout for the blatantly racist hacker.

    https://archive.is/d9rh1

    • bitofhope@awful.systems · 5 months ago

      Sounds just about par for the course. Lasker himself is known to go by a pseudonym with a transphobic slur in it. Some nazi manchild insisting on calling an anime character a slur for attention is exactly the kind of person I think of when I imagine the type of script kiddie who thinks it’s so fucking cool to scrape some nothingburger docs of a left wing politician for his almost equally cringe nazi friends.

      • YourNetworkIsHaunted@awful.systems · 5 months ago

        I feel like the greatest harm that the NYT does with these stories is not allowing the knowledge of just how weird and pathetic these people are to be part of the story. Like, even if you do actually think that this nothingburger “affirmative action” angle somehow matters, the fact that the people making this information available and pushing this narrative are either conservative pundits or sad internet nazis who stopped maturing at age 15 is important context.

        • bigfondue@lemmy.world · 5 months ago

          It would be against the interests of capital to present this as the rightwing nonsense that it is. It’s on purpose

        • bitofhope@awful.systems · 5 months ago

          Should be embarrassing enough to get caught letting nazis use your publication as a mouthpiece to push their canards. Why further damage your reputation by letting everyone know your source is a guy who insists a cartoon character’s real name is a racial epithet? The optics are presumably exactly why the slightly savvier nazi in this story adopted a posh French nom de guerre like “Crémieux” to begin with, and then had a yet savvier nazi feed the hit piece through a “respected” publication like the NYT.

      • Architeuthis@awful.systems · 5 months ago

        Lasker himself is known to go by a pseudonym with a transphobic slur in it.

        That the TPO moniker is basically ungoogleable appears to have been a happy accident for him; according to that article by Rachel Adjogah, his early posting history paints him as an honest-to-god chaser.

  • mountainriver@awful.systems · 5 months ago

    Going through work email I saw a link to an article about Quantum-AI. It was behind a paywall, and I am not paying to read about how woo+woo=woo^2. What do you do when your bubble isn’t inflating anymore? Couple it with another stale bubble!

    • Alex@lemmy.vg · 5 months ago

      To quote astrophysicist Angela Collier, quantum quantum quantum

  • Architeuthis@awful.systems · 5 months ago

    CEO of a networking company for AI execs does some “vibe coding”, the AI deletes the production database (/r/ABoringDystopia)

    xcancel source

    Because Replie was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test.

    We built detailed unit tests to test system performance. When the data came back and less than half were functioning, did Replie want to fix them?

    No. Instead, it lied. It made up a report that almost all systems were working.

    And it did it again and again.

    What level of CEO-brained prompt engineering is asking the chatbot to write an apology letter?

    Then, when it agreed it lied – it lied AGAIN about our email system being functional.

    I asked it to write an apology letter.

    It did and in fact sent it to the Replit team and myself! But the apology letter – was full of half truths, too.

    It hid the worst facts in the first apology letter.

    He also does that a lot after shit hits the fan, making the llm produce tons of apologetic text about what it did wrong and how it didn’t follow his rules, as if the outage is the fault of some digital tulpa gone rogue and not the guy in charge, who apparently thinks cybersecurity is asking an LLM nicely in a .md not to mess with the company’s production database too much.

    • bitofhope@awful.systems · 5 months ago

      Thank you, Dethklok, not just for this banger of a national anthem but also for summoning the lake troll to put Espoo in its place.

  • corbin@awful.systems · 5 months ago

    Alex O’Connor platformed Sabine on his philosophy podcast. I’m irritated that he is turning into Lex Fridman simply by being completely uncritical. Well, no, wait, he was critical of Bell’s theorem, and even Sabine had to tell him that Bell’s work is mathematically proven. This is what a philosophy degree does to your epistemology, I guess.

    My main sneer here is just some links. See, Mary’s Room is answered by neuroscience; Mary does experience something new when color vision is restored. In particular, check out the testimonials from this 2021 Oregon experiment that restored color vision to some folks born without it. Focusing on physics, I’d like to introduce you all to Richard Behiel, particularly his explanations of electromagnetism and the Anderson-Higgs mechanism; there are deeper explanations for electricity and magnets, my dude. Also, if you haven’t yet, go read Alex’s Wikipedia article, linked at the top of the sneer.

    • TinyTimmyTokyo@awful.systems · 5 months ago

      In the case of O’Connor and people like him, I think it’s about much more than his philosophy background. He’s a YouTube creator who creates content on a regular schedule and makes a living off it. Once you start doing that, you’re exposed to all the horrible incentives of the YouTube engagement algorithm, which inevitably leads you to start seeking out other controversial YouTubers to platform and become friendly with. It’s an “I’ll scratch your back if you scratch mine” situation dialed up to 11.

      The same thing has happened to Sabine herself. She’s been captured by the algorithm, which has naturally shifted her audience to the right, and now she’s been fully captured by that new audience.

      I fully expect Alex O’Connor to remain on this treadmill. <remind me in 12months>

  • Soyweiser@awful.systems · 5 months ago

    “An AI? But using that you could find a cure for cancer!”

    “But I don’t want to make a cure for cancer, I want to generate PowerPoint presentations. Look, it just made this quarterly_report_june_july_jan.wpd file for me.”