Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Anisette [any/all]@quokk.au
    edited · 2 days ago

    OT: been slowly migrating from dbzer0. I broadly like the community on that instance, but it has become harder and harder for me to justify associating with anything AI-friendly. Because quokk.au is a PieFed instance, my client hasn’t fully caught up, but they seem to be working on it.

    E: ok, more issues than I thought, since it displayed this stubsack instead of the most recent one for some reason

    • dovel@awful.systems
      7 months ago

      Devin will be supervised by human employees and will handle jobs that engineers often consider drudgery, like updating internal code to newer programming languages, he said.

      Good luck to the workers having to debug that shit.

      Goldman is the first major bank to use Devin, according to Cognition, which was founded in late 2023 by a trio of engineers and whose staff is reportedly stocked with champion coders.

      Being good at Codeforces contests surely translates to any other domain. I expect the Cognition guys to fully deliver on their promises.

    • V0ldek@awful.systems
      7 months ago

      Just the usual stuff religions have to do to maintain the façade, “this is all true but gee oh golly do NOT live your life as if it was because the obvious logical conclusions it leads to end in terrorism”

    • scruiser@awful.systems
      7 months ago

      The hidden prompt is only cheating if the reviewers fail to do their job right and outsource it to a chatbot; it does nothing to a human reviewer actually reading the paper properly. So I won’t say it’s right or ethical, but I’m much more sympathetic to these authors than to reviewers and editors outsourcing their job to an unreliable LLM.

      • HedyL@awful.systems
        7 months ago

        It’s almost as if teachers were grading their students’ tests by rolling dice, and then the students tried manipulating the dice (because it was their only shot at getting better grades), and the teachers got mad about that.

    • HedyL@awful.systems
      7 months ago

      This is, of course, a fairly blatant attempt at cheating. On the other hand: Could authors ever expect a review that’s even remotely fair if reviewers outsource their task to a BS bot? In a sense, this is just manipulating a process that would not have been fair either way.

      • YourNetworkIsHaunted@awful.systems
        7 months ago

        I’ve had similar thoughts about AI in other fields. The untrustworthiness and incompetence of the bot makes the whole interaction even more adversarial than it is naturally.

    • TinyTimmyTokyo@awful.systems
      7 months ago

      What I don’t understand is how these people didn’t think they would be caught, with potentially career-ending consequences? What is the series of steps that leads someone to do this, and how stupid do you need to be?

      • scruiser@awful.systems
        7 months ago

        They probably got fed up with a broken system giving up its last shreds of legitimacy in favor of LLM garbage and are trying to fight back? Getting through an editor and appeasing reviewers already often requires some compromises in quality and integrity; this probably just seemed like one more.

  • BigMuffN69@awful.systems
    8 months ago

    Bummer, I wasn’t on the invite list to the hottest SF wedding of 2025.

    Update your mental models of Claude lads.

    Because if the wife stuff isn’t true, what else could Claude be lying about? The vending machine business?? The blackmail??? Being bad at Pokemon???

  • o7___o7@awful.systems
    8 months ago

    I’m going to put a token down and make a prediction: when the bubble pops, the prompt fondlers will go all in on a “stabbed in the back” myth and will repeatedly try to re-inflate the bubble, because we were that close to building robot god and they can’t fathom a world where they were wrong.

    The only question is who will get the blame.

    • fullsquare@awful.systems
      8 months ago

      nah they’ll just stop and do nothing. they won’t be able to do anything without chatgpt telling them what to do and think

      i think that deflation of this bubble will be much slower and a bit anticlimactic. maybe they’ll figure out a way to squeeze suckers out of their money in order to keep the charade going

      • HedyL@awful.systems
        7 months ago

        maybe they’ll figure out a way to squeeze suckers out of their money in order to keep the charade going

        I believe that without access to generative AI, spammers and scammers wouldn’t be able to successfully compete in their respective markets anymore. So at the very least, the AI companies got this going for them, I guess. This might require their sales reps to mingle in somewhat peculiar circles, but who cares?

        • fullsquare@awful.systems
          7 months ago

          i meant more like scamming true believers out of their money like happens with crypto, this is cfar deal currently. spam, as something nobody should or wants to spend their creative juices on, or for that matter interact with in any way, seems a natural fit for automation with llms

    • Architeuthis@awful.systems
      7 months ago

      I increasingly feel that bubbles don’t pop anymore; they slowly fizzle out as we just move on to the next one, all the way until the macro economy is 100% bubbles.

    • scruiser@awful.systems
      7 months ago

      The only question is who will get the blame.

      Isn’t it obvious? Us sneerers and the big-name skeptics (the Gary Marcuses and Yann LeCuns of the world) continuously cast doubt on LLM capabilities, even as they’re just a few more training runs and one more scale-up away from AGI Godhood. We’ll clearly be the ones to blame for the VC funding drying up, not years of hype without delivery.

      • David Gerard@awful.systemsM
        7 months ago

        it was me, I popped AI. I destroyed Twitter (and, in collateral damage, I blew up the United States), and those fuckers are next. You’re welcome.

        • scruiser@awful.systems
          7 months ago

          You’re welcome.

          Given their assumptions, the doomers should be thanking us for delaying AGI doom!

    • David Gerard@awful.systemsM
      7 months ago

      In past tech bubbles, it was basically the VCs, the media hypesters and the liars in the companies. So the right people.

  • Sailor Sega Saturn@awful.systems
    8 months ago

    The Gentle Singularity - Sam Altman

    This entire blog post is sneerable so I encourage reading it, but the TL;DR is:

    We’re already in the singularity. Chat-GPT is more powerful than anyone on earth (if you squint). Anyone who uses it has their productivity multiplied drastically, and anyone who doesn’t will be out of a job. 10 years from now we’ll be in a society where ideas and the execution of those ideas are no longer scarce thanks to LLMs doing most of the work. This will bring about all manner of sci-fi wonders.

    Sure makes you wonder why Mr. Altman is so concerned about coddling billionaires if he thinks capitalism as we know it won’t exist 10 years from now but hey what do I know.

    • Amoeba_Girl@awful.systems
      7 months ago

      anyone who doesn’t will be out of a job

      quick, Sam, name five jobs that don’t involve sitting at a desk

    • V0ldek@awful.systems
      7 months ago

      Chat-GPT is more powerful than anyone on earth (if you squint)

      xD

      No sorry, let me rephrase,

      Lol, lmao

      How do you even grace this with a response. Shut your eyes and loudly sing “lalalala I can’t hear you”

  • BlueMonday1984@awful.systemsOP
    8 months ago

    “Another thing I expect is audiences becoming a lot less receptive towards AI in general - any notion that AI behaves like a human, let alone thinks like one, has been thoroughly undermined by the hallucination-ridden LLMs powering this bubble, and thanks to said bubble’s wide-spread harms […] any notion of AI being value-neutral as a tech/concept has been equally undermined. [As such], I expect any positive depiction of AI is gonna face some backlash, at least for a good while.”

    Me, two months ago

    Well, it appears I’ve fucking called it - I recently stumbled across some particularly bizarre discourse on Tumblr, reportedly over a highly unsubtle allegory for transmisogynistic violence:

    If you want my opinion on this small-scale debacle, I’ve got two thoughts:

    First, any questions about the line between man and machine have likely been put to bed for a good while. Between AI art’s uniquely AI-like sloppiness, and chatbots’ uniquely AI-like hallucinations, the LLM bubble has done plenty to delineate the line between man and machine, chiefly to AI’s detriment. In particular, creativity has come to be increasingly viewed as exclusively a human trait, with machines capable only of copying what came before.

    Second, using robots or AI to allegorise a marginalised group is off the table until at least the next AI spring. As I’ve already noted, the LLM bubble’s undermined any notion that AI systems can act or think like us, and double-tapped any notion of AI being a value-neutral concept. Add in the heavy backlash that’s built up against AI, and you’ve got a cultural zeitgeist that will readily other or villainise whatever robotic characters you put on screen - a zeitgeist that will ensure your AI-based allegory will fail to land without some serious effort on your part.

    • corbin@awful.systems
      8 months ago

      Humans are very picky when it comes to empathy. If LLMs were made out of cultured human neurons, grown in a laboratory, then there would be outrage over the way in which we have perverted nature; compare with the controversy over e.g. HeLa lines. If chatbots were made out of synthetic human organs assembled into a body, then not only would there be body-horror films about it, along the lines of eXistenZ or Blade Runner, but there would be a massive underground terrorist movement which bombs organ-assembly centers, by analogy with existing violence against abortion providers, as shown in RUR.

      Remember, always close-read discussions about robotics by replacing the word “robot” with “slave”. When done to this particular hashtag, the result is a sentiment that we no longer accept in polite society:

      I’m not gonna lie, if slaves ever start protesting for rights, I’m also grabbing a sledgehammer and going to town. … The only rights a slave has are that of property.

  • gerikson@awful.systems
    7 months ago

    In recent days there’s been a bunch of posts on LW about how consuming honey is bad because it makes bees sad, with LWers getting all hot and bothered about it. I don’t have a stinger in this fight, not least because investigations proved that basically all honey exported from outside the EU is actually just flavored sugar syrup, but I found this complaint kinda funny:

    The argument deployed by individuals such as Bentham’s Bulldog boils down to: “Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts”.

    “Of course such underhanded tactics are not present here, in the august forum promoting 10,000 word posts called Sequences!”

    https://www.lesswrong.com/posts/tsygLcj3stCk5NniK/you-can-t-objectively-compare-seven-bees-to-one-human

      • Soyweiser@awful.systems
        7 months ago

        Damn, making honey is metal as fuck. (And I mean that in an omg-this-is-horrible, you-could-write-disturbing-songs-about-it way) CRUSHED FOR YOUNG! MAMMON DEMANDS DISMEMBERMENT! LIVING ON SLOP, HIVE CULLING MANDATORY. Makes a 40k hive city sound nice.

    • V0ldek@awful.systems
      7 months ago

      You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts.

      This, coming from LW, just has to be satire. There’s no way to be this self-unaware and still remember to eat regularly.

  • gerikson@awful.systems
    7 months ago

    NYT covers the Zizians

    Original link: https://www.nytimes.com/2025/07/06/business/ziz-lasota-zizians-rationalists.html

    Archive link: https://archive.is/9ZI2c

    Choice quotes:

    Big Yud is shocked and surprised that craziness is happening in this casino:

    Eliezer Yudkowsky, a writer whose warnings about A.I. are canonical to the movement, called the story of the Zizians “sad.”

    “A lot of the early Rationalists thought it was important to tolerate weird people, a lot of weird people encountered that tolerance and decided they’d found their new home,” he wrote in a message to me, “and some of those weird people turned out to be genuinely crazy and in a contagious way among the susceptible.”

    Good news everyone, it’s popular to discuss the Basilisk and not at all a profoundly weird incident which first led people to discover the crazy among Rats:

    Rationalists like to talk about a thought experiment known as Roko’s Basilisk. The theory imagines a future superintelligence that will dedicate itself to torturing anyone who did not help bring it into existence. By this logic, engineers should drop everything and build it now so as not to suffer later.

    Keep saving money for retirement and keep having kids, but for god’s sake don’t stop blogging about how AI is gonna kill us all in 5 years:

    To Brennan, the Rationalist writer, the healthy response to fears of an A.I. apocalypse is to embrace “strategic hypocrisy”: Save for retirement, have children if you want them. “You cannot live in the world acting like the world is going to end in five years, even if it is, in fact, going to end in five years,” they said. “You’re just going to go insane.”

    • Soyweiser@awful.systems
      7 months ago

      Re the “A lot of the early Rationalists” bit: nice way to not take responsibility, act like you were not one of them, and throw them under the bus because they’re “genuinely crazy” - like some preexisting condition, and not something your group made worse - plus a nice abuse of the general public’s bias against “crazy” people. Some real Rationalist dark art shit here.

    • blakestacey@awful.systems
      7 months ago

      Yet Rationalists I spoke with said they didn’t see targeted violence — bombing data centers, say — as a solution to the problem.

      ahem

      • scruiser@awful.systems
        7 months ago

        Ah, you see, you fail to grasp the shitlib logic that the US bombing other countries doesn’t count as illegitimate violence as long as the US has some pretext and maintains some decorum about it.

    • fullsquare@awful.systems
      7 months ago

      “A lot of the early Rationalists thought it was important to tolerate weird people, a lot of weird people encountered that tolerance and decided they’d found their new home,” he wrote in a message to me, “and some of those weird people turned out to be genuinely crazy and in a contagious way among the susceptible.”

  • BlueMonday1984@awful.systemsOP
    7 months ago

    Another day, another jailbreak method - a new method called InfoFlood has just been revealed, which involves taking a regular prompt and making it thesaurus-exhaustingly verbose.

    In simpler terms, it jailbreaks LLMs by speaking in Business Bro.

    • YourNetworkIsHaunted@awful.systems
      7 months ago

      I mean, decontextualizing and obscuring the meanings of statements in order to permit conduct that would in ordinary circumstances breach basic ethical principles is arguably the primary purpose of deploying the specific forms and features that comprise “Business English” - if anything, the fact that LLM models are similarly prone to ignore their “conscience” and follow orders when deciding and understanding them requires enough mental resources to exhaust them is an argument in favor of the anthropomorphic view.

      Or:

      Shit, isn’t the whole point of Business Bro language to make evil shit sound less evil?

    • fullsquare@awful.systems
      7 months ago

      maybe there’s just enough text written in that psychopathic techbro style with similar disregard for normal ethics that llms latched onto that. this is like what i guess happened with that “explain step by step” trick - instead of grafting from pairs of answers and questions like on quora, lying box grafts from sets of question -> steps -> answer like on chegg or stack or somewhere else where you can expect answers will be more correct

      it’d be more of a case of getting awful output from awful input

      • BlueMonday1984@awful.systemsOP
        7 months ago

        It’s also completely accurate - AI bros are not only utterly lacking in any sort of skill, but actively refuse to develop their skills in favour of using the planet-killing, plagiarism-fueled gaslighting engine that is AI, and actively look down on anyone who is more skilled than them or willing to develop their skills.