Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    0
    ·
    3 months ago

    OpenAI’s trying to make an AI-generated animated film, and claiming their Magical Slop Extruderstm can do in nine months what allegedly would take three years, with only a $30 mil budget and the writers of Paddington in Peru for assistance.

    Allegedly, they’re also planning to show it off at the Cannes Film Festival, of all places. By my guess, this was Sam Altman’s decision - he’s already fawned over AI-extruded garbage before, its clear he has zero taste in art whatsoever.

    • ShakingMyHead@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      OpenAI’s tools also lower the cost of entry, allowing more people to make creative content, he said.

      So, even working under the assumption that this somehow works, they still needed two animation studios, professional writers, and 30 million to get this film off the ground.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      $30 mil budget

      From what I heard that is twice the budget of a Studio Ghibli movie. Nausicaä of the Valley of the Wind cost 1 million to make in 1984. (No idea what that would be adjusted for inflation).

      • nightsky@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        Well you have to assume they need $29 million alone for humans doing color correction to make it less yellow

      • BlueMonday1984@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        Nausicaä of the Valley of the Wind cost 1 million to make in 1984. (No idea what that would be adjusted for inflation).

        I checked a few random inflation calculators, and it comes out to roughly $3.1 million.

  • swlabr@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    3 months ago

    I was wondering about the origins of sneerclub and discovered something kinda fun: “r/SneerClub” pre-dates “r/BlogSnark”, the first example of a “snark subreddit” listed on the wiki page! The vibe of snark subreddits seem to be very different to that of sneerclub etc. (read: toxic!!!) but I wouldn’t know the specifics as I’m not a snark participant.

  • Architeuthis@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    3 months ago

    Some quality wordsmithing found in the wild:

    transcript

    @MosesSternstein (quote-twitted): AI-Capex is the everything cycle, now.

    Just under 50% of GDP growth is attributable to AI Capex

    @bigblackjacobin: Almost certainly the greatest misallocation of capital you or I will ever see. There’s no justification for this however you cut it but the beatings will continue until a stillborn god is born.

    • gerikson@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      I mean it’s still just funny money seeing the creator works for some company that resells tokens from Claude, but very few people are stepping back to note the drastically reduced expectations of LLMs. A year ago, it would have been plausible to claim that a future LLM could design a language from scratch. Now we have a rancid mess of slop, and it’s an “art project”, and the fact it’s ersatz internally coherent is treated as a great success.

      Willison should just have let this go, because it’s a ludicrous example of GenAI, but he just can’t help himself defending this crap.

      • wizardbeard@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        That’s a good point that I’m not sure many people are talking about. There’s a shift happening where while I’m still seeing way too much “you’re just not prompting it right”, it has lessened. Now I’m seeing a lot more “well of course it can’t do that” even from the believers.

        They’re still crying “the problem is that you’re using it wrong”, and blaming on the end user but it seems to be quietly shifting so they’re now calling people dumb for ever believing the hype that they were the ones pushing to begin with.

    • istewart@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      Top-tier from Willison himself:

      The learning isn’t in studying the finished product, it’s in watching how it gets there.

      Mate, if that’s true, my years of Gentoo experience watching compiler commands fly past in the terminal means I’m a senior operating system architect.

      • froztbyte@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        which naturally leads us to: having to fix a portage overlay ~= “compiler engineer”

        wonder what simonw’s total spend (direct and indirect) in this shit has been to date. maybe sunk cost fallacy is an unstated/un(der?)accounted part in his True Believer thing?

        • BlueMonday1984@awful.systemsOP
          link
          fedilink
          English
          arrow-up
          0
          ·
          3 months ago

          maybe sunk cost fallacy is an unstated/un(der?)accounted part in his True Believer thing?

          Probably. Beyond throwing a shitload of cash into the LLM money pit, Willison’s completely wrapped his public image up in being an AI booster, having spent years advocating for AI and “learning” how to use it.

          If he admits he’s wrong about LLMs, he has to admit the money and time he spent on AI was all for nothing.

          • flere-imsaho@awful.systems
            link
            fedilink
            English
            arrow-up
            0
            ·
            3 months ago

            he’s claiming he is taking no llm money with exception of specific cases, but he does accept api credits and access to early releases, which aren’t payments only when you think of payments in extremely narrow sense of real money being exchanged.

            this would in no way stand if he were, say, a journalist.

          • David Gerard@awful.systemsM
            link
            fedilink
            English
            arrow-up
            0
            ·
            3 months ago

            if you call him an AI promoter he cites his carefully organised blog posts of concerns

            meanwhile he was on the early access list for GPT-5

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      That the useless programming language is literally called “cursed” is oddly fitting, because the continued existence of LLMs is a curse upon all of humanity

    • nightsky@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      Sigh. Love how he claims it’s worth it for “learning”…

      We already have a thing for learning, it’s called “books”, and if you want to learn compiler basics, $14000 could buy you hundreds of copies of the dragon book.

      • froztbyte@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        I’ve learned so much langdesign and stuff over the years simply by hanging around plt nerds, didn’t even need to spend for a single dragon book!

        (although I probably have a samizdat copy of it somewhere)

      • istewart@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        $14,000 could probably still buy you a lesser Porsche in decent shape, but we should praise this brave pioneer for valuing experiences over things, especially at the all-important boundary of human/machine integration!

        (no, I’m not bitter at missing the depreciation nadir for 996-era 911s, what are you talking about)

    • blakestacey@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      Good sneer from user andrewrk:

      People are always saying things like, “surprisingly good” to describe LLM output, but that’s like when 5 year old stops scribbling on the walls and draws a “surprisingly good” picture of the house, family, and dog standing outside on a sunny day on some construction paper. That’s great, kiddo, let’s put your programming language right here on the fridge.

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      The Oracle deal seemed absurd, but I didn’t realize how absurd until I saw Ed’s compilation of the numbers. Notably, it means even if OpenAI meets its projected revenue numbers (which are absurdly optimistic, like bigger than Netflix and Spotify and several other services combined) paying Oracle (along with everyone else it has promised to buy compute from) will put it net negative on revenue until 2030, meaning it has to raise even more money.

      I’ve been assuming Sam Altman has absolutely no real belief that LLMs would lead to AGI and has instead been cynically cashing in on the sci-fi hype, but OpenAI’s choices don’t make any long term sense if AGI isn’t coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy few years of personal enrichment. And to even ask what his “real beliefs” are gives him too much credit.

      Just to remind everyone: the market can stay irrational longer than you can stay solvent!

      • BlueMonday1984@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        OpenAI’s choices don’t make any long term sense if AGI isn’t coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy few years of personal enrichment.

        Another possibility is that Altman’s bought into his own hype, and genuinely believes OpenAI will achieve AGI before the money runs out. Considering the tech press has been uncritically hyping up AI in general, and Sammy Boy himself has publicly fawned over “metafiction” “written” by an in-house text extruder, its a possibility I’m not gonna discount.

    • corbin@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      Since appearing on Piers Morgan’s show, Eric Weinstein has taken to expounding additional theories about physics. Peer review was created by the government, working with Ghislaine Maxwell’s father, to control science, he said on “Diary of a CEO,” one of the world’s most popular podcasts. Jeffrey Epstein was sent by an intelligence agency to throw physics off track and discourage space exploration, keeping humanity trapped in “the prison built by Einstein.”

      Heartbreaking! Weinstein isn’t fully wrong. Maxwell’s daddy was Robert Maxwell, who did indeed have a major role in making Springer big and kickstarting the publish-or-perish model, in addition to having incredibly tight Mossad ties; the corresponding Behind the Bastards episodes are subtitled “how Ghislane Maxwell’s dad ruined science.” Epstein has been accused of being a Mossad asset tasked with seeking out influential scientists like Marvin Minsky to secure evidence for blackmail and damage their reputations. As they say on Reddit, everybody sucks here.

      • swlabr@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        Epstein was a sophon controlled trisolaran asset working to prevent crucial development of physics!!! j/k

      • bitofhope@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        First domino: US government invents peer review

        Last domino: Richard Stallman successfully kamikazes his reputation for good after multiple close attempts over the years

        • BlueMonday1984@awful.systemsOP
          link
          fedilink
          English
          arrow-up
          0
          ·
          3 months ago

          Richard Stallman successfully kamikazes his reputation for good after multiple close attempts over the years

          He still maintains a solid reputation with FOSS freaks, fascists and pedophiles to this day. Given the Venn diagram of these three groups is a circle, this isn’t particularly shocking.

            • froztbyte@awful.systems
              link
              fedilink
              English
              arrow-up
              0
              ·
              3 months ago

              I asked the other day whether they’ve actually spoken with these people that they keep posting such takes about, and thus far my presumption is “they haven’t”. posts like the above reinforce that view

          • corbin@awful.systems
            link
            fedilink
            English
            arrow-up
            0
            ·
            3 months ago

            Fuck, your lack of history is depressing sometimes. That Venn diagram is well-pointed, even among people who have met RMS, and the various factions do not get along with each other. For a taste, previously on Lobsters you can see an avowed FLOSS communist ripping the mask off of a Suckless cryptofascist in response to a video posted by a recently-banned alt-right debate-starter.

      • blakestacey@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        That’s just yer bog-standard “the best lie has a seed of truth”, ainnit?

        (Peer review in its modern form was adopted gradually, with a recognizable example in 1831 from the same William Whewell who coined the word scientist. It displaced the tradition of having the editor of a journal decide everything himself, so whatever its flaws, it has broadened the diversity of voices that influence what gets officially published.)

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    0
    ·
    4 months ago

    Starting this Stubsack off, I found a Substack post titled “Generative AI could have had a place in the arts”, which attempts to play devil’s advocate for the plagiarism-fueled slop machines.

    Pointing to one particular lowlight, the author attempts to conflate AI with actually useful tech to try and make an argument:

    While the idea of generative AI “democratizing” art is more or less a meme these days, there are in fact AI tools that do make certain artforms more accessible to low-budget productions. The first thing to come to mind is how computer vision-based motion capture give 3D animators access to clearer motion capture data from a live-action actor using as little as a smartphone camera and without requiring expensive mo-cap suits.

    • Alex@lemmy.vg
      link
      fedilink
      English
      arrow-up
      0
      ·
      4 months ago

      Oh hey, that’s my article actually, thanks for reading it! :D

      Reading the part on motion capture back with your feedback in mind, I do see how it can give the impression of conflating generative AI with another form of machine learning (or “AI”, as all of these are marketed as). That’s my mistake, I could have worded it better – thanks for pointing it out.

      I don’t agree that I was playing devil’s advocate for the slop machines, however. I spend the majority of the article talking about my explicit disdain for them and their users. The point of the piece was to point to what I believe to be genuine use-cases for ethical ML (including gen AI) in art – not as replacement of talent but as tools purpose-built for creatives, like the few that existed were before the current bubble. I think the paragraph right after the one on mo-cap best serves to summarize my thoughts:

      Imagine that […] we purpose-built miniature voice cloning models to enhance voice artists’ performances. Not by replacing them with text-to-speech or voice changing algorithms, but by aiding their craft to venture places traditional voice work could not reach on its own. Take, for example, role-playing video games with self-insert protagonists allowing its characters to say the player’s chosen name without having to dance around it. We could have had voice artists and machine learning experts working together in designing minimalistic AI models to seamlessly weave computer-assisted voice lines into their human performances, creating something previously impossible.

      Did you have anything thoughts on my article? I’m still very much a novice writer so every bit of feedback is invaluable to me.

      • corbin@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        4 months ago

        I think that you have useful food for thought. I think that you underestimate the degree to which capitalism recuperates technological advances, though. For example, it’s common for singers supported by the music industry to have pitch correction which covers up slight mistakes or persistent tone-deafness, even when performing live in concert. This technology could also be used to allow amateurs to sing well, but it isn’t priced for them; what is priced for amateurs is the gimmicky (and beloved) whammy pedal that allows guitarists to create squeaky dubstep squeals. The same underlying technology is configured for different parts of capitalism.

        From that angle, it’s worth understanding that today’s generative tooling will also be configured for capitalism. Indeed, that’s basically what RLHF does to a language model; in the jargon, it creates an “agent”, a synthetic laborer, based on desired sales/marketing/support interactions. We also have uses for raw generation; in particular, we predict the weather by generating many possible futures and performing statistical analysis. Style transfer will always be useful because it allows capitalists to capture more of a person and exploit them more fully, but it won’t ever be adopted purely so that the customer has a more pleasant experience. Composites with object detection (“filters”) in selfie-sharing apps aren’t added to allow people to express themselves and be cute, but to increase the total and average time that users spend in the apps. Capitalists can always use the Shmoo, or at least they’ll invest in Shmoo production in order to capture more of a potential future market.

        So, imagine that we build miniature cloned-voice text-to-speech models. We don’t need to imagine what they’re used for, because we already know; Disney is making movies and extending their copyright on old characters, and amateurs are making porn. For every blind person using such a model with a screen reader, there are dozens of streamers on Twitch using them to read out donations from chat in the voice of a breathy young woman or a wheezing old man. There are other uses, yes, but capitalism will go with what is safest and most profitable.

        Finally, yes, you’re completely right that e.g. smartphones completely revolutionized filmmaking. It’s important to know that the film industry didn’t intend for this to happen! This is just as much of an exaptation as captialist recuperation and we can’t easily plan for it because of the same difficulty in understanding how subsystems of large systems interact (y’know, plan interference.)

      • FredFig@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        4 months ago

        I think it’s a piece in the long line of “AI means A and B, and A is bad and B can be good, so not all AI is bad”, which isn’t untrue in the general sense, but serves the interest of AIguys who aren’t interested in using B, they’re interested in promoting AI wholesale.

        We’re not in a world where we should be offering AI people any carveout; as you mention in the second half, they aren’t interested in being good actors, they just want a world where AI is societally acceptable and they can become the Borg.

        More directly addressing your piece, I don’t think the specific examples you bring up are all that compelling. Or at least, not compared to the cost of building an AI model, especially when you bring up how it’ll be cheaper than traditional alternatives.

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    0
    ·
    3 months ago

    New Loser Lanyard (ironically called the Friend) just dropped, a “chatbot-enabled” necklace which invades everyone’s privacy and provides Internet reply “commentary” in response. As if to underline its sheer shittiness, WIRED has reported that even other promptfondlers are repulsed by it, in a scathing review that accidentally sneers its techbro shithead inventor:

    If you’re looking for some quick schadenfreude, here’s the quotes on Bluesky.

      • BlueMonday1984@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        Nah, call it the PvP Tag.

        These things look dorky as fuck, wearing them is a moral failing, and people (rightfully) treat it as grounds to shit on you, might as well lean into the “shithead nerd who ruined everything” vibe with some gratuitous gaming terminology, too.

    • istewart@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      Damn, I was hoping someone would finally recognize the true value of the ASCII-art goatse that used to show up on Slashdot all the time, before this inevitably came to pass

  • JFranek@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    3 months ago

    Was jumpscared on my YouTube recommendations page by a video from AI safety peddler Rob Miles and decided to take a look.

    It talked about how it’s almost impossible to detect whether a model was deliberately trained to output some “bad” output (like vulnerable code) for some specific set of inputs.

    Pretty mild as cult stuff goes, mostly anthropomorphizing and referring to such LLM as a “sleeper agent”. But maybe some of y’all will find it interesting.

    link

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      This isn’t the first time I’ve heard about this - Baldur Bjarnason’s talked about how text extruders can be poisoned to alter their outputs before, noting its potential for manipulating search results and/or serving propaganda.

      Funnily enough, calling a poisoned LLM as a “sleeper agent” wouldn’t be entirely inaccurate - spicy autocomplete, by definition, cannot be aware that their word-prediction attempts are being manipulated to produce specific output. Its still treating these spicy autocompletes with more sentience than they actually have, though

  • Architeuthis@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    4 months ago

    Apparently the hacker who publicized a copy of the no fly list was leaked an article containing Yarvin’s home address, which she promptly posted on bluesky. Won’t link because I don’t think we’ve had the doxxing discussion but It’s easily findable now.

    I’m mostly posting this because the article featured this photo:

  • fullsquare@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    3 months ago

    there was a post shared out there recentlyish (blogpost? substack? can’t find it. enjoy vagueposting) that was about how ai companies have no clue what they’re doing, and compared them to alchemy in that they also had no idea what they were doing, and over time moved on to more realistic goals, but while they had funding for these unrealistic goals they invented distillation and crystallization and black powder and shit. and the same for ai would be buildout of infra that can be presumably used for something later (citation needed).

    so comparison of this entire ea/lw/openai milieu to alchemists is unjust for alchemists. alchemy has that benefit on its side that it was developed before scientific method was a proper thing, modern chatbot peddlers can’t really claim that what they’re doing is protoscience. what is similar is that alchemy and failed ea scifi writers claim that magic tech will get you similar things. cure for all disease (nanobots), immortality (cryonics or mind uploading or nanobots), infinite wisdom (chatbots), transformation of any matter at will (nanobots again), mind control derived from supreme rationality (ok this one comes from magic), synthetic life (implied by ai bioweapons, but also agi itself). when chinese alchemists figured out that mercury pills kill people and don’t make them immortal, there was a shift to “inner alchemy” that is spiritual practices (mental tech). maybe eliezer &co are last alchemists (so far) and not first ai-safety-researchers