Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(December’s finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)

    • nfultz@awful.systems
      2 months ago

      It’s the McMindfulness guy, nice to see that he is still kicking around.

      In Empire of AI, she shows how CEO Sam Altman cloaks monopoly ambitions in humanitarian language—his soft-spoken, monkish image (gosh, little Sammy even practices mindfulness!)

      lol ofc he does

    • swlabr@awful.systems
      2 months ago

      I like this. Kinda wish it was either 10x longer and explained things a bit, or 10x shorter and was more shitposty. Still, good

    • swlabr@awful.systems
      2 months ago

      Computer scientist Louis Rosenberg argues that dismissing AI as a “bubble” or mere “slop” overlooks the tectonic technological shift that’s reshaping society.

      “Please stop talking about the bubble bursting, I haven’t handed off my bag yet”

    • flere-imsaho@awful.systems
      2 months ago

      i hereby propose a new metric for a popular publication, the epstein number (Ē), denoting the number of authors who took flights to epstein’s rape island. generally, credible publications should have Ē=0. this one, after a very quick look, has Ē=2, and also hosts sabine hossenfelder.

    • YourNetworkIsHaunted@awful.systems
      2 months ago

      We are three paragraphs and one subheading down before we hit an Ayn Rand quote. This clearly bodes well.

      A couple paragraphs later we’re ignoring both the obvious philosophical discussion about creativity and the more immediate argument about why this technology is being forced on us so aggressively. As much as I’d love to rant about this, I got distracted by the next bit talking about how micro-expressions will let LLMs decode emotions and whatever. I’d love to know this guy’s thoughts on that AI-powered phrenologist feature from a couple weeks ago.

  • froztbyte@awful.systems
    2 months ago

    (e, cw: genocide and culturally-targeted hate by the felon bot)

    world’s most divorced man continues outperforming black holes at sucking

    404 also recently did a piece on his ego-maintenance society-destroying vainglory projects

    imagine what it’s like in his head. era-defining levels of vacuous.

    • bitofhope@awful.systems
      2 months ago

      From the replies

      I wonder what prompted it to switch to Elon being worth less than the average human while simultaneously saying it’d vaporize millions if it could prolong his life in a different sub-thread

      It’s odd to me that people still expect any consistency from chatbots. These bots can and will give different answers to the same verbatim question. Am I just too online if I have involuntarily encountered enough AI output to know this?
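The inconsistency isn’t mysterious: chat products sample from a probability distribution over next tokens instead of always taking the most likely one. A toy sketch of why the same verbatim question keeps getting different answers (made-up numbers, not any real model):

```python
import random

# Toy next-token distribution for a single fixed prompt (invented numbers,
# not any real model's output).
probs = {"yes": 0.40, "no": 0.35, "maybe": 0.25}

def sample(dist, rng):
    """Temperature-style sampling: draw a token at random, weighted by probability."""
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
# Fifty runs of the *identical* prompt routinely yield several different answers.
answers = {sample(probs, rng) for _ in range(50)}
print(sorted(answers))
```

Greedy decoding (always take the argmax) would be consistent, but every mainstream chatbot ships with nonzero sampling temperature, so expecting verbatim-stable answers is expecting something the products are not built to do.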

    • flaviat@awful.systems
      2 months ago

      Does this mean calibre’s use case is a digital equivalent of a shelf of books you never read?

    • Seminar2250@awful.systems
      2 months ago

      https://awful.systems/post/5776862/8966942 😭

      also this guy is a bit of a doofus, e.g. https://bugs.launchpad.net/calibre/+bug/853934, where he is a dick to someone reporting a bug, and https://bugs.launchpad.net/calibre/+bug/885027, where someone points out that you can execute anything as root because of a security issue, and he argues like a total shithead

      You mean that a program designed to let an unprivileged user
      mount/unmount/eject anything he wants has a security flaw because it allows
      him to mount/unmount/eject anything he wants? I’m shocked.

      Implement a system that allows an application to mount/unmount/eject USB
      devices connected to the system securely, then make sure that system is
      universally adopted on every linux install in the universe. Once you’ve done that, feel free to
      re-open this ticket.

      i would not invite this person to my birthday
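For the record, what the reporters were asking for wasn’t a universally adopted secure mount framework; it was ordinary input validation inside the root helper. A hypothetical sketch of the kind of check that was missing (policy and names invented here, not calibre’s actual code):

```python
import os

ALLOWED_PARENT = "/media"  # hypothetical policy: the helper only mounts under /media

def safe_mountpoint(requested: str) -> bool:
    """Canonicalize the requested path and refuse anything that escapes /media.

    realpath() resolves both '..' components and symlinks, so one check
    defeats the classic tricks for aiming a root-privileged mount at,
    say, /etc/cron.d."""
    real = os.path.realpath(requested)
    return os.path.commonpath([real, ALLOWED_PARENT]) == ALLOWED_PARENT

print(safe_mountpoint("/media/usb0"))           # True
print(safe_mountpoint("/media/../etc/cron.d"))  # False: resolves to /etc/cron.d
```

A dozen lines, not a rewrite of every Linux install in the universe.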

      • Sailor Sega Saturn@awful.systems
        2 months ago

        I was vaguely aware of the calibre vulnerabilities but this is the first I’ve actually read the thread and it’s wild.

        There were like 11 or so Proof of Concept exploits over the course of that bug? And he was just kicking and screaming the whole time about how fine his mount-stuff-anywhere-as-root (!!?) code was.

        I’m always fascinated when people are so close to getting something, like in that first paragraph you quoted. In any normal software project you could just put that paragraph in as the bug report and the owners would take it seriously, rather than use it as an excuse for why their software has to be insecure.

  • blakestacey@awful.systems
    2 months ago

    From Lila Byock:

    A 4th grader was assigned to design a book cover for Pippi Longstocking using Adobe for Education.

    The result is, in technical terms, four pictures of a schoolgirl waifu in fetishwear.

    • froztbyte@awful.systems
      2 months ago

      The result is, in technical terms, four pictures of a schoolgirl waifu in fetishwear.

      I try to avoid having to even see the outputs of these fucking systems, but you just made me realize that there’s going to be more than a few of them that will “leak” (read: preferentially deliver, by way of training focus) the kinks of its particular owner. I mean it’s already happening for the textual replies on twitter, soothing felon’s ever so bruised ego. the chance of it not Shipping beyond that is pretty damn zero :|

      god I hate all of this

  • blakestacey@awful.systems
    2 months ago

    Dr. Casey Fiesler reports,

    I was poking around Google Scholar for publications about the relationship between chatbots and wellness. Oh how useful: a systematic literature review! Let’s dig into the findings.

    […]

    Did you guess “that paper does not actually exist”?

    Did you also guess that NOT A SINGLE PAPER IN THEIR REFERENCES APPEARS TO EXIST? […] When I was searching in various places to confirm that those citations were fabricated, Google’s AI overview just kept the con going.

    Jill Walker Rettberg in the comments:

    There’s a peer reviewed published paper in AI & Society called Cognitive Imperialism and Artificial Intelligence which is clearly mostly AI-generated. Citations are real but almost all irrelevant. I emailed the editors weeks ago but it’s still up there and getting cited.

    • nfultz@awful.systems
      2 months ago

      He came by campus last spring and did a reading, very solid and surprisingly well-attended talk.

    • Soyweiser@awful.systems
      2 months ago

      Always thought she should have stuck to acting.

      (I know, Hayek just always reminds me of how people put his quotes over Salma Hayek’s image, and people then get really mad at her, and not at him. I always wonder whether people would have been just as mad if it was Friedrich’s image and not Salma’s, due to the sexism aspect.)

    • jonhendry@awful.systems
      2 months ago

      I can see it making sense, what with CPUs moving to integrated RAM, and probably CPU-integrated flash, to maximize speed. The business of RAM and flash drive upgrades will become a very large but shrinking retrocomputing niche probably served by small Chinese fabs.

    • istewart@awful.systems
      2 months ago

      This seems like a bit of a desperation pivot while the bubble money is still flowing. I’ve heard they struggled a bit with shipping PCIe CXL memory that’s capable of memory sharing between rackmount nodes, so they’re probably taking everything from the consumer channel and cramming it into the enterprise channel in a bid to be the low-cost/high-volume provider. I would expect them to eventually come limping back into the consumer market to much marketing fanfare, alongside trying to set a higher price floor there, similar to Taco Bell bringing back the Mexican pizza.

    • rook@awful.systems
      2 months ago

      Bleugh, I’ve been using Crucial RAM and flash for a hell of a long time, and they’ve always been high quality and reasonably priced. I dislike having to find new manufacturers who don’t suck, especially as the answer seems to be increasingly “lol, there are no such companies”.

      Thanks to the ongoing situation in the us, it doesn’t look like the ai bubble is going to pop soon, but I can definitely see it causing more damage like this before the event.

    • BigMuffN69@awful.systems
      2 months ago

      Most insane part about this is that after he assaulted the treasurer(?) of his foundation while trying to siphon funds for an apparent terror act, the naive chucklefucks still went and said “we don’t think his violent tendencies are an indication he might do something violent”

      Like idk maybe update on the fact he just sent one of his own to the hospital??

      • swlabr@awful.systems
        2 months ago

        Yud’s whole project is a pipeline intended to create zizians, if you believe that Yud is serious about his alignment beliefs. If he isn’t serious then it’s just an unfortunate consequence that he is not trying to address in any meaningful way.

        • sc_griffith@awful.systems
          2 months ago

          fortunately, yud clarified everything in his recent post concerning the zizians, which indicated that… uh, hmm, that we should use a prediction market to determine whether it’s moral to sell LSD to children. maybe he got off track a little

        • blakestacey@awful.systems
          2 months ago

          A belief system that inculcates the believer into thinking that the work is the most important duty a human can perform, while also isolating them behind impenetrable pseudo-intellectual esoterica, while also funneling them into economic precarity… sounds like a recipe for delicious brownies trouble.

          • blakestacey@awful.systems
            2 months ago

            Growing up in Alabama, I didn’t have the vocabulary to express it, but I definitely had the feeling when meeting some people, “Given the bullshit you already buy, there is nothing in principle stopping you from going full fash.” I get the same feeling now from Yuddites: “There is nothing in principle stopping you from going full Zizian.”

    • BioMan@awful.systems
      2 months ago

      Is it better for these people to be collected in one place under the singularity cult, or dispersed into all the other religions, cults, and conspiracy theories that they would ordinarily be pulled into?

  • scruiser@awful.systems
    2 months ago

    Another day, another instance of rationalists struggling to comprehend how they’ve been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy

    A very long, detailed post, elaborating very extensively on the many ways Anthropic has played the AI doomers: promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and the like, denying it in a half-assed way that doesn’t really engage with the fact that Anthropic has lied and broken “AI safety commitments” to rationalists/lesswrongers/EAs shamelessly and repeatedly:

    https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=tBTMWrTejHPHyhTpQ

    I feel confused about how to engage with this post. I agree that there’s a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is “spun” in uncharitable ways.

    https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=CogFiu9crBC32Zjdp

    I think it’s sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.

    I would find this all hilarious, except a lot of the regulation and some of the “AI safety commitments” would also address real ethical concerns.

    • gerikson@awful.systems
      2 months ago

      This would be worrying if there was any risk at all that the stuff Anthropic is pumping out is an existential threat to humanity. There isn’t so this is just rats learning how the world works outside the blog bubble.

      • scruiser@awful.systems
        2 months ago

        I mean, I assume the bigger they pump the bubble, the bigger the burst, but at this point the rationalists aren’t really so relevant anymore; they served their role in early incubation.

    • lagrangeinterpolator@awful.systems
      2 months ago

      If rationalists could benefit from just one piece of advice, it would be: actions speak louder than words. Right now, I don’t think they understand that, given their penchant for 10k word blog posts.

      One non-AI example of this is the most expensive fireworks show in history, I mean, the SpaceX Starship program. So far, they have had 11 or 12 test flights (I don’t care to count the exact number by this point), and not a single one of them has delivered anything into orbit. Fans generally tend to cling on to a few parlor tricks like the “chopstick” stuff. They seem to have forgotten that their goal was to land people on the moon. This goal had already been accomplished over 50 years ago with the 11th flight of the Apollo program.

      I saw this coming from their very first Starship test flight. They destroyed the launchpad as soon as the rocket lifted off, with massive chunks of concrete flying hundreds of feet into the air. The rocket itself lost control and exploded 4 minutes later. But by far the most damning part was when the camera cut to the SpaceX employees wildly cheering. Later on there were countless spin articles about how this test flight was successful because they collected so much data.

      I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc. Now, I choose to look at the actions of the AI companies, and I can easily see that they do not have any ethics. Meanwhile, the rationalists are hypnotized by the Anthropic critihype blog posts about how their AI is dangerous.

      • rook@awful.systems
        2 months ago

        I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc.

        I suspect that part of the problem is that there is a company in there that’s doing a pretty amazing job of reusable rocketry at lower prices than everyone else under the guidance of a skilled leader who is also technically competent, except that leader is gwynne shotwell, who is ultimately beholden to an idiot manchild who wants his flying cybertruck just the way he imagines it and cannot be gainsaid.

    • JFranek@awful.systems
      2 months ago

      tl;dr: AI! Agents! AI! Agents! AI! Agents! AI…

      Just one thing that caught my attention:

      AI code review helps developers. We … found that 72.6% of developers who use Copilot code review said it improved their effectiveness.

      Only 72.6%? So why the heck are the other almost 30% of devs using it? For funsies? They don’t say.

      You’d think that, due to self-selection effects, most people who didn’t find Copilot effective wouldn’t use it.

      The only way that number makes sense to me is if people were forced to use Copilot and… no, wait, that checks out.

    • Soyweiser@awful.systems
      2 months ago

      I know it is a bit of elitism/privilege on my part. But if you don’t know about the existence of google translate(*), perhaps you shouldn’t be doing vibe coding like this.

      *: this of course, could have been a LLM based vibe translation error.

      E: And I guess my theme this week is translations.

    • Sailor Sega Saturn@awful.systems
      2 months ago

      The documentation for “Turbo mode” for Google Antigravity:

      Turbo: Always auto-execute terminal commands (except those in a configurable Deny list)

      No warning. No paragraph telling the user why it might be a good idea. No discussion on the long history of malformed scripts leading to data loss. No discussion on the risk for injection attacks. It’s not even named similarly to dangerous modes in other software (like “force” or “yolo” or “danger”)

      Just a cool marketing name that makes users want to turn it on. Heck if I’m using some software and I see any button called “turbo” I’m pressing that.

      It’s hard not to give the user a hard time when they write:

      Bro, I didn’t know I needed a seatbelt for AI.

      But really they’re up against a big corporation that wants to make LLMs seem amazing and safe and autonomous. One hand feeds the user the message that LLMs will do all their work for them. While the other hand tells the user “well in our small print somewhere we used the phrase ‘Gemini can make mistakes’ so why did you enable turbo mode??”
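The deny list deserves its own sneer: a list of forbidden command names is trivially routed around, which is exactly why injection risk needed more than a config option. A toy illustration of the failure mode (hypothetical list; not Antigravity’s actual implementation):

```python
DENY = {"rm", "dd", "mkfs"}  # hypothetical deny list of "dangerous" commands

def naive_allows(cmdline: str) -> bool:
    """Block a command only when its first word is on the deny list."""
    return cmdline.split()[0] not in DENY

print(naive_allows("rm -rf ~"))              # False: the direct case is caught
print(naive_allows("sh -c 'rm -rf ~'"))      # True: a shell wrapper walks right past it
print(naive_allows("xargs rm < files.txt"))  # True: so does any indirection
```

Any filter that inspects command strings faces this problem, which is why “auto-execute everything except a deny list” is a marketing feature, not a safety one.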

    • froztbyte@awful.systems
      2 months ago

      yeah as I posted on mastodong.soc, it continues to make me boggle that people think these fucking ridiculous autoplag liarsynth machines are any good

      but it is very fucking funny to watch them FAFO

    • lagrangeinterpolator@awful.systems
      2 months ago

      After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: “Never let an LLM have any decision-making power.” At most, LLMs will serve as a heuristic function for an algorithm that actually works.

      Unlike the railroads of the First Gilded Age, I don’t think GenAI will have many long term viable use cases. The problem is that it has two characteristics that do not go well together: unreliability and expense. Generally, it’s not worth spending lots of money on a task where you don’t need reliability.

      The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much hundreds of billions of dollars is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT, and they do not see the millions of dollars worth of hardware needed to even run a single instance of ChatGPT. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history towards an audience that has already drunk the tech hype Kool-Aid. Who else would look at a tool that deletes their entire hard drive and still ever consider using it again?

      The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true “use cases” to be mainly spam, and perhaps students cheating on homework.
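The “heuristic function” framing can be made concrete: in a sane design the model only suggests, and a deterministic check decides. A minimal sketch of the pattern (toy search problem, with a shuffle standing in for the LLM):

```python
import random

def llm_propose(candidates, rng):
    """Stand-in for an LLM: cheap, unreliable guesses about what to try first."""
    ranked = list(candidates)
    rng.shuffle(ranked)  # an arbitrary ordering, like a model's hunch
    return ranked

def solve(candidates, is_valid, rng):
    """The algorithm that actually works: follow the heuristic's order,
    but verify every candidate before accepting it."""
    for c in llm_propose(candidates, rng):
        if is_valid(c):  # the deterministic check holds the decision-making power
            return c
    return None  # a bad heuristic costs time, never correctness

rng = random.Random(0)
result = solve(range(10), lambda n: n * n == 49, rng)
print(result)  # 7, no matter how badly the "model" ordered the guesses
```

The point of the rule of thumb is visible in the structure: the model can only reorder the search, so its unreliability degrades speed, not answers.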

      • zogwarg@awful.systems
        2 months ago

        Pessimistically I think this scourge will be with us for as long as there are people willing to put code “that-mostly-works” in production. It won’t be making decisions, but we’ll get a new faucet of poor code sludge to enjoy and repair.
