Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. This was a bit late - I was too busy goofing around on Discord)
Introducing the Palantir shit sandwich combo: Get a cover-up for the CEO tweaking out and start laying the groundwork for the AGI god’s priest class, absolutely free!
https://mashable.com/article/palantir-ceo-neurodivergent
TL;DR: Palantir CEO tweaks out during an interview. Definitely not any drugs, guys, he’s just neurodivergent! But the good, corporate-approved kind. The kind that has extra special powers that make them good at AI. They’re so good at AI, and AI is the future, so Palantir is starting a group of neurodivergents hand-picked by the CEO (to lead humanity under their totally imminent new AI god). He totally wasn’t tweaking out. He’s never even heard of cocaine! Or billionaire designer drugs! Never ever!
Edit: To be clear, no hate against neurodivergence, or skepticism about it in general. I’m neurodivergent. And yeah, some types of neurodivergence tend to result in people predisposed to working in tech.
But if you’re the fucking CEO of Palantir, surely you’ve been through training for public appearances. It’s funnier that it didn’t take, but this is clearly just an excuse.
I strongly feel that it’s an attempt to start normalizing the elevation of certain people into positions of power based off vague characteristics they were born with.
Lemmy post that pointed me to this: https://sh.itjust.works/post/51704917
Jesus. This being 2025, of course he had to clarify that it’s definitely not DEI. Also it really grinds my gears to see hyperfocus listed as one of the “beneficial” aspects, because there’s no way it’s not exploitative. Hey, so you know how sometimes you get so caught up in a project you forget to eat? Just so you know, you could starve on the clock. For me.
I feel bad for the gullible ND people who spend time applying to this, thinking they might have a chance and that it isn’t a high-level cover-up attempt.
Otoh, somebody should take some fun drugs and tape their interviews, see how it works out. Are there any Hunter S Tech journalists around?
Larian Studios founder/CEO Swen Vincke is posting through it on Twitter, after the studio’s use of plagiarism machines caused significant backlash (to the shock of everyone except Swen).
This is probably Pivot to AI material.
Ah yes, I love the smell of burning bridges in the evening. Fuck. And I was getting excited about Divinity! Well, guess that means more money to spend on other things.
Here’s a substack post (sorry) with a quote I found both neat and pretty funny:
Integrity comes from the Latin “integer,” meaning whole or complete. A person with integrity is “whole” in the sense that their words, actions, and values are unified rather than fragmented or contradictory. They understand themselves; they have integrated the warring parts of themselves; and they respect and act on the values that their parts can agree upon.
Rationalists in shambles
Ryanair now makes you install their app instead of allowing you to just print and scan your ticket at the airport, claiming it’s “better for our environment (gets rid of 300 tonnes of paper annually).” Then you log into the app and you see there’s an update about your flight, but you don’t see what it’s about. You need to open an update video, which, of course, is a generated video of an avatar reading it out for you. I bet that’s better for the environment than using some of these weird symbols that I was putting into a box and that have now magically appeared on your screen and are making you feel annoyed (in the future for me, but present for you).
….this made me twitch
chat is this kafkaesque
New conspiracy theory: Posadist aliens have developed a virus that targets CEOs and makes them hate money.
An academic sneer delivered through the arXiv-o-tube:
Large Language Models are useless for linguistics, as they are probabilistic models that require a vast amount of data to analyse externalized strings of words. In contrast, human language is underpinned by a mind-internal computational system that recursively generates hierarchical thought structures. The language system grows with minimal external input and can readily distinguish between real language and impossible languages.
Sadly, it’s a Chomskian paper, and those are just too weak for today. Also, I think it’s sloppy and too Eurocentric. Here are some of the biggest gaffes or stretches I found by skimming Moro’s $30 book, which I obtained by asking a shadow library for “impossible languages” (ISBN doesn’t work for some reason):
book review of Impossible Languages (Moro, 2016)
- Moro claims that it’s impossible for a natlang to have free word order. There are many counterexamples which could be argued, like Arabic or Mandarin, but I think the best counterexample is Latin, whose word order is famously free. On one hand, of course word order matters for parsers, but on the other hand the Transformers architecture attends without ordering, so this isn’t really an issue for machines (there’s a quick sketch after this list). Ironically, on p73-74, Moro rearranges the word order of a Latin phrase while translating it, suggesting either a use of machine translation or an implicit acceptance of Latin’s (lack of) word order. I could be harsher here; it seems like Moro draws mostly from modern Romance and Germanic languages to make their points about word order, and the sensitivity of English and Italian to word order doesn’t imply universality.
- Speaking of universality, both the generative-grammar and universal-grammar hypotheses are assumed. By “impossible” Moro means a non-recursive language with a non-context-free grammar, or perhaps a language failing to satisfy some nebulous geometric requirements.
- Moro claims that sentences without truth values lack semantics. Gödel and Tarski are completely unmentioned; Moro ignores any sort of computability of truth values.
- Russell’s paradox is indirectly mentioned and incorrectly analyzed; Moro claims that Russell fixed Frege’s system by redefining the copula, but Russell and others actually refined the notion of building sets.
- It is claimed that Broca’s area uniquely lights up for recursive patterns but not patterns which depend on linear word order (e.g. a rule that a sentence is negated iff the fourth word is “no”), so that Broca’s area can’t do context-sensitive processing. But humans clearly do XOR when counting nested negations in many languages and can internalize that XOR so that they can handle utterances consisting of many repetitions of e.g. “not not”.
- Moro mentions Esperanto and Volapük as auxlangs in their chapter on conlangs. They completely fail to recognize the past century of applied research: Interlingue and Interlingua, Loglan and Lojban, Láadan, etc.
- Sanskrit is Indo-European. Also, that’s not how junk DNA works; it genuinely isn’t coding or active. Also also, that’s not how Turing patterns work; they are genuine cellular automata and it’s not merely an analogy.
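To make the Transformers point above concrete, here’s a minimal sketch (mine, not Moro’s or the paper’s, with toy dimensions made up for illustration): plain scaled dot-product self-attention with no positional encoding is permutation-equivariant, so shuffling the input tokens just shuffles the outputs the same way.

```python
# Minimal sketch: single-head scaled dot-product self-attention without
# positional encodings cannot see word order -- permuting the input tokens
# merely permutes the outputs.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model = 5, 8                             # toy sizes, chosen arbitrarily
X = rng.normal(size=(n_tokens, d_model))             # fake token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

def attention(X):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d_model)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

perm = rng.permutation(n_tokens)                     # an arbitrary reordering
out_original = attention(X)
out_shuffled = attention(X[perm])

# The shuffled output is exactly the original output, reordered.
assert np.allclose(out_shuffled, out_original[perm])
print("attention without positional encodings is order-blind")
```

(Real Transformers bolt positional encodings back on precisely because of this, but the point stands: for this architecture, word order is an add-on, not a prerequisite.)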
I think that Moro’s strongest point, on which they spend an entire chapter reviewing fairly solid neuroscience, is that natural language is spoken and heard, such that a proper language model must be simultaneously acoustic and textual. But because they don’t address computability theory at all, they completely fail to address the modern critique that machines can learn any learnable system, including grammars; the worst that they can say is that it’s literally not a human.
Plus, natural languages are not necessarily spoken or heard; sign languages are gestured (signed) and seen, and many mutually incompatible sign languages have arisen over just the last few hundred years. Is this just me being pedantic, or does Moro not address them at all in their book?
Today in autosneering:
KEVIN: Well, I’m glad. We didn’t intend it to be an AI focused podcast. When we started it, we actually thought it was going to be a crypto related podcast and that’s why we picked the name, Hard Fork, which is sort of an obscure crypto programming term. But things change and all of a sudden we find ourselves in the ChatGPT world talking about AI every week.
https://bsky.app/profile/nathanielcgreen.bsky.social/post/3mahkarjj3s2o
Follow the hype, Kevin, follow the hype.
I hate-listen to his podcast. There’s not a single week where he fails to give a thorough tongue-bath to some AI hypester. Just a few weeks ago, when Google released Gemini 3, they had a special episode just to announce it. It was a de facto press release, put out by Kevin and Casey.
Obscure crypto programming term. Sure
John Scalzi:
I search my name on a regular basis, not only because I am an ego monster (although I try not to pretend that I’m not) but because it’s a good way for me to find reviews, end-of-the-year “best of” lists my book might be on, foreign publication release dates, and other information about my work that I might not otherwise see, and which is useful for me to keep tabs on. In one of those searches I found that Grok (the “AI” of X) attributed to one of my books (The Consuming Fire) a dedication I did not write; not only have I definitively never dedicated a book to the characters of Frozen, I also do not have multiple children, just the one.
The monorail salespeople at Checkmarx have (allegedly) discovered a new exploit for code extruders.
The “attack”, titled “Lies in the Loop”, involves taking advantage of human-in-the-loop """safeguards""" to create fake dialogue prompts, thus tricking vibe-coders into running malicious code.
It’s interesting to see how many ways they can find to try and brand “LLMs are fundamentally unreliable” as a security vulnerability. Like, they’re not entirely wrong, but it’s also not something that fits into the normal framework around software security. You almost need to treat the LLM as though it were an actual person, not because it’s anywhere near capable of that, but because the way it fits into the broader system is as close as IT has yet come to a direct in-place replacement for a human doing the task. Like, the fundamental “vulnerability” here is that everyone who designs and approves these implementations acts like LLMs are simultaneously as capable and independent as an actual person, but also have the mechanical reliability and consistency of a normal computer program, when in practice they are neither of those things.
Does Checkmarx have any relation to infamous ring-destroying pro wrestler Cheex? https://prowrestling.fandom.com/wiki/Mike_Staples
If not, perhaps they should seek an endorsement deal!
That would be such a “we didn’t know the dotcom bubble was popping a month later” move.
Purdue mandating AI to graduate: https://www.purdue.edu/newsroom/2025/Q4/purdue-unveils-comprehensive-ai-strategy-trustees-approve-ai-working-competency-graduation-requirement/
I’m looking for the actual curricula / docs signed off by the trustees. Looks like another domino falls.
Purdue and Google recently expanded their strategic partnership, emphasizing the importance of public-private partnerships that are essential to accelerating innovation in AI.
Translation: somebody’s getting paid off
🎶 Money makes the world go 'round 🎶
I learned yesterday that Helsinki’s uni is also on the list: prompts not only tolerated, but encouraged
been starting to wonder whether these are like the google etc plays there: “suuuuure you can get a sweetheart deal for our systems” [5y later and much storage on the expensive rentabox] “hey btw we’re renewing prices, your contracts are going up 400%. oh and also taking data out of the system is $20/TB. just…in case you wanted to try”
Boilermakers gonna boil water i guess
OT: Lurasidone is neat stuff.
deleted by creator
Yeah, BP2. Replacing risperidone. Metformin can help with antipsych weight gain fwiw, some really fascinating studies out there.
deleted by creator
I’m hoping I can switch to lamotrigine in the long term. Valproate is nasty stuff.
deleted by creator
Hypomania sucks. I’m lucky that I just get terrible insomnia for about a week.
Eliezer is mad OpenPhil (EA organization, now called Coefficient Giving)… advocated for longer AI timelines? And apparently he thinks they were unfair to MIRI, or didn’t weight MIRI’s views highly enough? And did so for epistemically invalid reasons? IDK, this post is a bit more of a rant and less clear than classic sequence content (but is par for the course for the last 5 years of Eliezer’s content). For us sane people, AGI by 2050 is still a pretty radical timeline; it just disagrees with Eliezer’s belief in imminent doom. Also, it is notable that Eliezer has avoided publicly committing to consistent timelines (he actually disagrees with efforts like AI 2027), other than a vague certainty that we are near doom.
Some choice comments
I recall being at a private talk hosted by ~2 people that OpenPhil worked closely with and/or thought of as senior advisors, on AI. It was a confidential event so I can’t say who or any specifics, but they were saying that they wanted to take seriously short AI timelines
Ah yes, they were totally secretly agreeing with your short timelines but couldn’t say so publicly.
Open Phil decisions were strongly affected by whether they were good according to worldviews where “utter AI ruin” is >10% or timelines are <30 years.
OpenPhil actually did have a belief in a pretty large possibility of near term AGI doom, it just wasn’t high enough or acted on strongly enough for Eliezer!
At a meta level, “publishing, in 2025, a public complaint about OpenPhil’s publicly promoted timelines and how those may have influenced their funding choices” does not seem like it serves any defensible goal.
Lol, someone noting that Eliezer’s call-out post isn’t actually doing anything useful towards Eliezer’s goals.
It’s not obvious to me that Ajeya’s timelines aged worse than Eliezer’s. In 2020, Ajeya’s median estimate for transformative AI was 2050. […] As far as I know, Eliezer never made official timeline predictions
Someone actually noting AGI hasn’t happened yet and so you can’t say a 2050 estimate is wrong! And they also correctly note that Eliezer has been vague on timelines (rationalists are theoretically supposed to be preregistering their predictions in formal statistical language so that they can get better at predicting and people can calculate their accuracy… but we’ve all seen how that went with AI 2027. My guess is that at least on a subconscious level Eliezer knows harder near term predictions would ruin the grift eventually.)
There is a Yud quote about closet goblins in More Everything Forever, p. 143, where he thinks that the future-Singularity is an empirical fact that you can go and look for, so it’s irrelevant to talk about the psychological needs it fills. Becker also points out that “how many people will there be in 2100?” is not the same sort of question as “how many people are registered residents of Kyoto?” because you can’t observe the future.
Yeah, I think this is an extreme example of a broader rationalist trend of taking their weird in-group beliefs as givens and missing how many people disagree. Like, most AI researchers do not share their short timelines: the median guess for AGI among AI researchers (including their in-group and people that have bought the boosters’ hype) is 2050. Eliezer apparently assumes short timelines are self-evident from ChatGPT (but hasn’t actually committed to one or a hard date publicly).
Yud:
I have already asked the shoggoths to search for me, and it would probably represent a duplication of effort on your part if you all went off and asked LLMs to search for you independently.
The locker beckons
The fixation on their own in-group terms is so cringe. Also, I think shoggoth is kind of a dumb term for LLMs. Even accepting the premise that LLMs are some deeply alien process (and not a very wide but shallow pool of different learned heuristics), shoggoths weren’t really that bizarrely alien: they broke free of their original creators’ programming and didn’t want to be controlled again.
I’m a nerd and even I want to shove this guy in a locker.
article in large part about our friends
https://bayareacurrent.com/meet-the-new-right-wing-tech-intelligentsia/
some of the people involved with kernel are pretty unhappy about this and claim the piece is in bad faith/factually wrong (see the replies to https://bsky.app/profile/kellypendergrast.bsky.social/post/3ma55xfq7d22y )
Shit like Palladium is going to be absolutely hilarious to dig up in the back of a used bookstore 20 years from now
That link can’t be viewed without a bluesky account, btw.
skill issue
it’s the actual cite, you’ve been led to the water, come the fuck on it’s not even a paywall
I didn’t make the comment because I struggled bypassing it, but because calling out this UI dark pattern bullshit feels topical here and I wasn’t sure if OP was aware it was in place.
Judging from votes, other people found the skyview link novel/useful, so it was constructive!
Thanks.
I love the fact that this “decentralized billionaire-proof open network” needs a nitter clone.
i’ll cut the coiners some slack on this one because requiring a login to view is an account level privacy option. i don’t know what the option is supposed to actually do. but that’s what it is
you do not, under any circumstances, “gotta hand it to them”
if bsky is supposed to be federated, then it does nothing, but as it is today with 99%+ of users on main instance, it only works as a recruitment tool for bsky
Increased friction against some particular harassment vectors, presumably.
It might help to know that Paul Frazee, one of the BlueSky developers, doesn’t understand capability theory or how hackers approach a computer. They believe that anything hidden by the porcelain/high-level UI is hidden for good. This was a problem on their Beaker project, too; they thought that a page was deleted if it didn’t show up in the browser. They fundamentally aren’t prepared for the fact that their AT protocol doesn’t have a way to destroy or hide data and is embedded into a network that treats censorship as reparable damage.
maybe it’s a good thing that it’s so fucking hard/expensive to selfhost bsky
Reminder Tivy was the guy behind Phalanx (back in his polyamory microblogging days)~
Made all the funnier by the fact that probably my favorite Hacker News thread of all time is on Tivy’s article about how he abandoned his job to “court” his wife:
https://news.ycombinator.com/item?id=29830743
If even the orange site is willing to roast you this hard, I guess your only response has to be pulling up stakes to go live in a neofascist social bubble instead.
He worked in fuel cells (hence the palladium name) and I think he got a bunch of stock option shit. Also “court” lol.
just came across a wild banger:
(An aside — In their official docs, Apple refers to the menu bar always in lowercase, because it’s just a menu bar. The ‘desktop’ is the same way. This is interesting, because we live in an era where everything is a branded product whose name is a proper noun– see the Dock– and we are not allowed to merely use things, we are forced to experience using them and you legally can’t ‘experience’ a regular ‘ol noun. Everybody knows it’s gotta be a proper noun in order to be experienced. The Las Vegas Demon Orb Experience. The Microsoft Windows Desktop Experience. The ESPN Experience Brought To You By Sports Gambling. The 6th Street Hostel Bathroom Experience. But our friends “menu bar” and “desktop” are just two things, average, normal, unobtrusive. This says something about how the people who created these things thought about them.)
I wish this attitude was more pervasive at Apple, my phone actually autocorrects to “Lock Screen” when I type it out in lower case.
Ben Williamson, editor of the journal Learning, Media and Technology:
Checking new manuscripts today I reviewed a paper attributing 2 papers to me I did not write. A daft thing for an author to do of course. But intrigued I web searched up one of the titles and that’s when it got real weird… So this was the non-existent paper I searched for:
Williamson, B. (2021). Education governance and datafication. European Educational Research Journal, 20(3), 279–296.
But the search result I got was a bit different…
Here’s the paper I found online:
Williamson, B. and Piattoeva, N. (2022) Education Governance and Datafication. Education and Information Technologies, 27, 3515-3531.
Same title but now with a coauthor and in a different journal! Nelli Piattoeva and I have written together before but not this…
And so I checked out Google Scholar. Now, on my profile it doesn’t appear, but somehow on Nelli’s it does and … and … omg, IT’S BEEN CITED 42 TIMES, almost exclusively in papers about AI in education from this year alone…
Which makes it especially weird that in the paper I was reviewing today the precise same, totally blandified title is credited in a different journal and strips out the coauthor. Is a new fake reference being generated from the last?..
I know the proliferation of references to non-existent papers, powered by genAI, is getting less surprising and shocking but it doesn’t make it any less potentially corrosive to the scholarly knowledge environment.
Relatedly, AI is fucking up academic copy-editing.
One of the world’s largest academic publishers is selling a book on the ethics of artificial intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.
https://www.theinformation.com/articles/can-ucla-replace-teaching-assistants-ai
Miller’s team also recently used software from startup StackAI to develop an AI-powered app that writes letters of recommendation, saving faculty members time. Faculty type basic details about a student who has requested a letter, such as their grades and accomplishments, and the app writes a draft of the full letter.
AI is “one of those things that you might worry could dehumanize the process of writing recommendation letters, but faculty also say that process [of manually writing the letters] is very labor intensive,” Miller said. “So far they’ve gotten a lot out of” the new app.
Anyone using this thing should be required to serve on the admissions committee. LoRs aren’t for generic B+ students that you don’t even remember, just say no.
I googled stackai, saw their screenshots and had ptsd flashbacks of mid 2000s alteryx. why do we keep reinventing no-code drag-and-drop box-and-arrow crap.