Want to wade into the snowy sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • YourNetworkIsHaunted@awful.systems · 17 days ago

      My God, this is so bad. So in addition to lying about AI, what they actually offered wasn’t speedy compliance as a service to get you certified, it was speedy certification as a service by bypassing actual compliance. This is such a Silicon Valley move, and I honestly suspect that a number of the people using and investing in these asshats knew exactly what was going on and simply didn’t care.

      • V0ldek@awful.systems · 17 days ago

        what they actually offered wasn’t speedy compliance as a service to get you certified, it was speedy certification as a service by bypassing actual compliance.

        I mean… Yeah. I think if you read it any other way you’re a massive rube. Like, it’s obviously not possible to do the former in “days” as they advertise.

        • YourNetworkIsHaunted@awful.systems · 17 days ago

          At best it’s the same shitty arguments we heard from crypto grifters and their suckers. Let’s take a process that’s complex and manual by design, to allow for independent validation and to secure against fraud, and make it faster by cutting those parts out and throwing some high-tech nonsense at the problem that we can claim replaces all the verification and validation. (The fact that they called their system “trustless” in the face of this is deeply ironic.) Maybe it’s the cynicism talking, but I’m even less inclined to give anyone other than maybe the author of that Substack the benefit of the doubt that they actually believed it.

          The ideal customer for this service is the kind of “Visionary Leader” with the “Founder Mindset” and “Drive to Innovate” that lets them see that all those privacy, security, fraud prevention, anti-embezzlement, and whatever else those standards and their associated compliance mechanisms are meant to provide are just pointless obstacles on the path to making obscene amounts of money by burning the world behind you. Often the shit we talk about here makes me think the world has gone mad or stupid, but every so often I feel like I’m staring at the face of capital-E Evil and this is one of those times.

          • V0ldek@awful.systems · 16 days ago

            From that substack:

            Even though we knew we’d technically be lying about our security to anyone we sent these policies to for review (clients, auditors, investors), we decided to adopt these policies because we simply didn’t have the bandwidth to rewrite them all manually.

            Ye man, then you’re complicit. If I were one of those clients, auditors, or investors, I’d be printing that out on an A1 sheet and rushing to file it as evidence. This is just plain fraud.

        • V0ldek@awful.systems · 17 days ago

          Doesn’t surprise me in the slightest that all the companies listed in that Substack as having used Delve are also AI slop companies: vibecoding, AI “customer service”, an AI “video meeting assistant” (whatever that would be).

    • lurker@awful.systems · 18 days ago

      I will never understand why people seriously bet “yes” on these types of things. Like, you either lose the bet and lose money, or you win the bet and die.

      • scruiser@awful.systems · 17 days ago

        Eliezer is trying to get around that with some weird conditions and game on the prediction market question:

        This market resolves N/A on Jan 1st, 2027. All trades on this market will be rolled back on Jan 1st, 2027. However, up until that point, any profit or loss you make on this market will be reflected in your current wealth; which means that purely profit-interested traders can make temporary profits on this market, and use them to fund other permanent bets that may be profitable; via correctly anticipating future shifts in prices among people who do bet their beliefs on this important question, buying low from them and selling high to them.

        I don’t think that actually helps. But Eliezer is committed to prediction markets being useful on a nearly ideological level, so he has to come up with weird, complicated strategies to get around their fundamental limits.

        • CinnasVerses@awful.systems · 17 days ago

          It feels like a teenage argument about Batman vs. Superman or the USS Enterprise vs. a Star Destroyer. I think many LessWrongers are not serious about the belief system as something to act on, but the problem is that when they are serious you get Ziz LaSota. It’s also similar to how they love markets in theory but don’t want to start a business or make speculative investments.

        • istewart@awful.systems · 17 days ago

          prediction markets being useful on a nearly ideological level

          At this point, I would say prediction markets are now an explicit ideological plank of what’s left of the libertarian movement. Darkly amusing that they’re desperately trying to pump life and legitimacy into something the GW Bush administration thought was too corrupt to use.

        • lurker@awful.systems · 17 days ago

          If you have to set up that many rules to get around the inherent flaw of “gambling on everyone’s lives”, just run a normal-ass poll. That gets rid of the unnecessary financial incentives.

    • samvines@awful.systems · 18 days ago

      Does it still count if it turns out that Trump invading Iran was based on Claude or ChatJippity advice and things escalate to global thermonuclear war? AI technically wiped out humanity because our dumb leaders were dumb enough to trust it?

      • lurker@awful.systems · 18 days ago

        Technically yes, but Yud probably wouldn’t count that, since the AI didn’t have the express purpose of destroying everyone

        • Soyweiser@awful.systems · 17 days ago

          So if Bender took over, he wouldn’t count, as he wants to ‘kill all humans (except Fry)’. Seems like a loophole.

          • YourNetworkIsHaunted@awful.systems · 17 days ago

            Bender really takes the “intelligence” out of “artificial superintelligence”. “Yeah, kill all humans. Except Fry, he’s my friend or pet or something. And I guess Leela because he’ll be whiny about it and also I owe her for the thing. And Hermes because he still owes me money. And I guess the professor is okay…” And so on and so forth through all of humanity.

      • BlueMonday1984@awful.systems (OP) · 18 days ago

        On the one hand, Yud’s vision of AI doomsday is specifically “AI turns sentient/superintelligent and kills us all because reasons”, not “Humanity wipes itself out because they trusted lying machines”.

        On the other hand, the absence of sentience/superintelligence hasn’t stopped AI from causing untold damage anyways, as the past two to three years can attest.

  • CinnasVerses@awful.systems · 19 days ago

    An early hint of Yud’s rejection of chaos theory in the Sequences from 2008 (the “build God to conquer Death” essay):

    And the adults wouldn’t be in so much danger. A superintelligence—a mind that could think a trillion thoughts without a misstep—would not be intimidated by a challenge where death is the price of a single failure. The raw universe wouldn’t seem so harsh, would be only another problem to be solved.

    Someone who got as far as high-school math or coded a working system would probably have encountered the combinatorial explosion, the impossibility of representing 0.1 exactly in binary floating point, chaos theory, and so on. Even game theory has situations like “in some games, optimal play guarantees a tie but not a win.” But Yud was much too special for any of those and refused offers to learn.
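On the 0.1 point, a two-line Python check (my own illustration, not from the comment) shows the value that actually gets stored:

```python
from decimal import Decimal

# The double nearest to 0.1 is slightly above it; Decimal exposes the
# exact stored value, and the classic 0.1 + 0.2 != 0.3 follows from it.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
assert 0.1 + 0.2 != 0.3
```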

    • lagrangeinterpolator@awful.systems · edited · 18 days ago

      This is what happens when your worldview is based on anime.

      (A lot of anime has heavy themes, but most people understand that it’s not real life, just like all such art. Unlike Yud, most people’s worldviews on coding and math are based on actual coding and math.)

      • Soyweiser@awful.systems · 18 days ago

        Not just anime but also science fiction. See also all the people who love “hard” science fiction (SF more grounded in real-world physics), which often isn’t that hard at all and just has a few real-physics elements; see The Expanse for a good example of non-hard SF that feels hard. (I’m finally reading the book series, so be warned I might Expanse-post a bit.)

        Content warning: discussion of a sexual abuse trope.

        A similar thing happens with people who confuse edgy/grimdark/vile fiction with realism. A while back I played a video game that had a reference to women being captured for breeding and men for other sexual abuse, which made no sense in the setting: the slaver faction was already resource-starved and poisoned so they died quickly, so there was no way they could raise kids to maturity in that environment (and iirc the faction was less than 20 years old). Some players described it as very realistic (people do the same about 40k; almost like it says something about their ideas of how the world works, not the setting). I just rolled my eyes and didn’t comment. Apart from that, the game seemed okay. Crying Suns is the name, for people who want to avoid it for this reason (it wasn’t a big plot point).

        Sorry for being a bit offtopic and talking about entertainment again.

        • BioMan@awful.systems · 18 days ago

          I will never forget the time I calculated the energy output of one of the torpedo engines in The Expanse and realized it was higher than the total wattage of all human civilization in 2020.

          • Soyweiser@awful.systems · 18 days ago

            Ah, the Epstein drive. (Oof, that aged…)

            Small note, however: iirc James S. A. Corey has said The Expanse is not hard SF. I don’t have a quote for that, though.

    • corbin@awful.systems · 21 days ago

      Probably because Washington was a nuanced and deep person who, at the lightest, could be reduced to a colony-era Cincinnatus. His ethics were sufficiently developed that we can interrogate his ethical stance even without his physical presence. This isn’t to say that Washington was a great person, but more to say that Kirk did not ever achieve that level of ethical development.

      • istewart@awful.systems · 21 days ago

        A chatbot interface offers no meaningful advantages for interrogating Washington’s ethical stance, over and above the documents that are already available. Instead, it offers a pleasant sheen of false certainty. So in that way, it’s dragging a guy who’s been dead for two centuries into the social media era. Huzzah!

          • YourNetworkIsHaunted@awful.systems · 21 days ago

            The classic 40k catch-22: either it doesn’t do what you’re claiming it does, in which case you’re a heretic lying to the inquisition OR it does and you’re summoning the spirits of the dead like a necromancer heretic.

  • nfultz@awful.systems · 23 days ago

    https://mail.cyberneticforests.com/the-computer-science-fetish/

    The fetishism of the computer scientist therefore refers less to specific expertise than to whatever we imagine a credentialed expert can bestow: an external voice that says, "ask, and you shall receive.” The computer scientist becomes a mirror where those who work with the social, practical impacts of the tech hope to see our understanding affirmed. The people who offer that validation — who position themselves against the discourse of critique, who seem unbothered and detached, even ridiculing the same critical lingo that exhausts you — are not doing it out of sober objectivity or insight.

    Sometimes they just don’t respect you. Sometimes they’re just annoyed by calls for accountability. And sometimes, they do it because they’ve fused with an interacting swarm of chatbots and transcended their human identity.

    • picklefactory@awful.systems · 22 days ago

      I’ve been reading this guy’s blog and techpolicy.press articles for about a year and have found them very worthwhile.

      • YourNetworkIsHaunted@awful.systems · 22 days ago

        I was sufficiently interested based off of this that I tracked down a few others of his. This one felt like a good take for an era where these things are being used for more than just slop generation despite the underlying flaws not being resolved.

    • YourNetworkIsHaunted@awful.systems · 22 days ago

      The grand irony is I’m not even sure most people click on or read this sort of stuff. I don’t think it’s often even created to be read by anyone. I think it’s created as a sort of swaddling fan fiction for MBAs, advertisers, event sponsors and sources, so they can tune out ethical quibbles and feel good about how clever they are.

      Every time someone hypes up Steve Jobs’ “reality distortion field” this is what they’re actually talking about whether they realize it or not.

      • samvines@awful.systems · 21 days ago

        In my experience, “all hands” meetings are very much CEOs and their sycophants cosplaying as podcast hosts for an hour while forcing their employees to watch or listen. They are almost never useful and are a colossal waste of money, especially in corporations with 10k+ employees. The salary cost of 10k people for one hour would probably pay off my mortgage.

    • antifuchs@awful.systems · 19 days ago

      Hold on now, the uptime number contains two digits that are nines! The image itself has four nines in total!

      • corbin@awful.systems · 19 days ago

        Can’t believe I’m nerd-sniped this easily. Very technically, the point at which a service should be considered unreliable or down is at γ nines, where γ = 0.9030899869919435… is a transcendental constant. γ nines is exactly 87.5% availability, or 7/8 availability, and it’s the point at which a service’s availability might as well be random. (Another one of the local complexity theorists can explain why it’s 7/8 and not 1/2.)

        • lagrangeinterpolator@awful.systems · 19 days ago

          We can see that one 9 of availability is 90% = 0.9, two 9s is 99% = 0.99, three 9s is 99.9% = 0.999, etc. In general, for positive integers n, n 9s of availability is 1 - (1/10)^n, and we can extrapolate that to non-integer values of n. The value γ needed for 87.5% availability is the solution to 1 - (1/10)^γ = 7/8, or γ = log_10(8) = 0.903089987. γ is transcendental by Gelfond-Schneider (see this for a reference proof).

          Right now, Sora is at zero 9s of availability.
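A quick sanity check of that arithmetic (the helper functions here are my own, not from the thread):

```python
import math

def nines(availability: float) -> float:
    """Availability fraction -> 'number of nines'."""
    return -math.log10(1 - availability)

def availability(n: float) -> float:
    """Inverse: availability given n nines, i.e. 1 - (1/10)**n."""
    return 1 - 10 ** (-n)

gamma = math.log10(8)  # 0.9030899869919435...
assert abs(availability(gamma) - 7 / 8) < 1e-12  # γ nines = 87.5%
assert abs(nines(0.875) - gamma) < 1e-12
```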

          • corbin@awful.systems · 17 days ago

            Suppose a bullshitter brings up a number of distinct Boolean claims and some tangled pile of connections between them, such that they hope to convince you that at least one connection is plausible. Without loss of generality, we can reduce this to 3-satisfiability in polynomial time: we can quickly produce a list of subconnections where each subconnection relates exactly three claims. Then, assuming the bullshitter is uniformly random, the probability that any particular subconnection is satisfied is 7/8. Therefore, if a bullshitter tries to overwhelm you with any pile of claims which sounds plausible, the threshold for plausibility has to be at least 7/8 in order to distinguish from random noise.
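The 7/8 falls out of the truth table of a single 3-literal clause: a uniformly random assignment falsifies it only when all three literals come up false. A quick enumeration (my own sketch, not corbin’s):

```python
import itertools

# A disjunction of three literals over distinct variables is falsified by
# exactly one of the 2**3 assignments, so a uniformly random assignment
# satisfies it with probability 7/8.
satisfying = sum(
    any(assignment)
    for assignment in itertools.product([False, True], repeat=3)
)
assert satisfying == 7  # 7 of 8 assignments satisfy (x1 or x2 or x3)
```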

            • flaviat@awful.systems · 17 days ago

              Bravo. The farthest I could get is 2/3, assuming the following model: x₁ is a random number between 0 and 1, x₂ between x₁ and 1, and so on. If the service breaks at x₁, gets fixed at x₂, breaks again at x₃, etc., availability is 2/3.
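A Monte Carlo sketch of that model (my own code, not from the comment) does land near 2/3:

```python
import random

def uptime_fraction(rng: random.Random) -> float:
    """One sample: the service starts up, and each break/fix point is
    drawn uniformly from whatever remains of the interval [0, 1]."""
    t, up, total_up = 0.0, True, 0.0
    while 1.0 - t > 1e-12:  # the remaining gap shrinks geometrically
        nxt = rng.uniform(t, 1.0)
        if up:
            total_up += nxt - t
        t, up = nxt, not up
    return total_up

rng = random.Random(42)
samples = 100_000
estimate = sum(uptime_fraction(rng) for _ in range(samples)) / samples
assert abs(estimate - 2 / 3) < 0.01  # the mean up-time converges to 2/3
```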

  • nfultz@awful.systems · 20 days ago

    https://www.todayintabs.com/p/who-goes-ai

    taking shots at the gray lady:

    You might think Mr. R not so different, superficially, from Ms. L. He’s also a long-tenured technology columnist at a respected mainstream publication. And yet he has eagerly, even gleefully, turned flack for the machines. He has delegated much of his professional life to them as well, and seems proud of it:

    Most recently, [Mr. R] tells me, he created a team of Claude agents to help edit his book, led by a “Master Editor” agent. Other sub-agents are in charge of things like fact-checking, making sure the book matches his writing style, and offering positive and negative feedback.

    And why not? Mr. R is not known or valued for his elegance of expression. He has, at best, a “writing style,” and not one that can’t easily be duplicated by a large language model. Checking facts? Assessing his work’s strengths and weaknesses? More bathwater to be tossed out of this increasingly baby-less tub. So what explains Mr. R, who “expects AI models to get better than him at everything eventually?” Why does he go AI when Ms. L never would?

    Mr. R’s secret is that his work is not primarily artistic or informative—it is functional. He serves a purpose for the industry he covers. Mr. R’s job is to absorb the tech industry’s self-mythologizing, and then believe in it even harder than the industry itself does. He serves as a kind of plausibility ratchet. His byline and employer legitimize a level of credulousness that would otherwise be laughable, and thereby allow tech PR to seem relatively restrained. Mr. R has no problem going AI because he himself has been a small cog in a big ugly machine for a long time.

    spoiler

    It’s Kevin Roose

  • blakestacey@awful.systems · 22 days ago

    A pretty staid-sounding law firm warns that the AI industry is partying like it’s 2007:

    Lenders who originated data center loans […] have begun pooling those loans and selling tranches to asset managers and pension funds, spreading risk well beyond the original lending institutions.

    Also of note:

    The most basic litigation risk in AI infrastructure finance is that the revenues generated by the sector may prove insufficient to service the fixed obligations incurred to build it. The industry brought in approximately $60 billion in revenue in 2025 against roughly $400 billion in capital expenditure.

    (Via.)

    • istewart@awful.systems · 21 days ago

      Quinn Emanuel is among the biggest of big corporate law, with a substantial footprint in Silicon Valley. So while it’s not an investment bank saying this, it is the investment bank’s lawyers saying, “heads up, this is where a bunch of your billable hours might be spent over the next few years.”

  • CinnasVerses@awful.systems · 22 days ago

    Is Trace (Tracing Woodgrains) the only one of our friends who has served in the military? A lot of neurodivergent young people spend some time in the US military and some of our friends were the right age to get in before the War on Abstract Nouns began.

  • V0ldek@awful.systems · 20 days ago

    Putting “Novelty Purposes Only” on my psychosis suicide bot after I laid off 80% of my legal team (replaced them with the psychosis suicide bot)

  • samvines@awful.systems · 20 days ago

    Cloudflare casually license-laundering WordPress:

    While EmDash aims to be compatible with WordPress functionality, no WordPress code was used to create EmDash. That allows us to license the open source project under the more permissive MIT license.

    Oh really. So you’re sure your Claude wasn’t trained on WordPress? It’s all irrelevant anyway, because AI-generated code can’t be copyrighted or licensed.

    Silver lining, it might piss off Matt Mullenweg!

    • Evinceo@awful.systems · 20 days ago

      So you’re sure your Claude wasn’t trained on WordPress?

      Unfortunately FOSS is basically dead because nobody is enforcing licenses against training.

    • ebu@awful.systems · 20 days ago

      i feel in my gut that on some level license disputes are ultimately slapfights for which titanic corporation gets the money. however i will absolutely point and laugh at every misfortune that comes the way of that particular transmisogynist asshole