Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
LLMs are the Dippin Dots of technology.
That’s an unfair comparison, Dippin Dots don’t slowly ruin the world by existing (also, they’re delicious)
It will always be the ice cream of the future, never today
AI video generation use case: hallucinatory RETVRN clips about the good old days, such as, uh, walmart 20 years ago?
It hits the uncanny valley triggers quite hard. It’s faintly unsettling to watch as a whole, and every individual detail is just wrong and dreamlike in a bad way.
Also, weird scenery clipping, just like real kids did back in the day!

https://bsky.app/profile/mugrimm.bsky.social/post/3lzy77zydrc2q
I will say that the flipping between characters in order to disguise the fact that longer clips are impractical to render is a neat trick and fits well into the advert-like design, but rewatching it just really reinforces how much those kids look like something pretending real hard to be a human.
Also, fake old-celluloid-film filter for something that was supposed to be from 20 years ago? Really?
I was gonna say that was probably the slop extruder’s doing, but it looks to have been applied manually for some godforsaken reason. Best guess is whoever was behind this audiovisual extrusion thought “celluloid filter = Nostalgiatm”.
I suspect it is also hiding some rendering artefacts.
I imagine it helps with the slightly plastic look that skin still gets too
genuinely think nostalgia might be the most purely evil emotion, and every one of these RETVRN ai videos i see strengthens that belief
It is a literal gateway to fascism imho, esp when people get into nostalgia for a time that never was.
And compared to nostalgia for mom n pop stores, this even is nostalgia for a mass produced product.
hallucinatory RETVRN clips about the good old days
Nostalgiabait is the slopgens’ specialty - being utterly incapable of creating anything new isn’t an issue if you’re trying to fabricate an idealised past.
such as, uh, walmart 20 years ago?
Okay, stop everything, who the actual fuck would be nostalgic for going to a fucking Wal-Mart? I’ve got zero nostalgia for ASDA or any other British big-box hellscape like it, what the fuck’s so different across the pond?
(Even from a “making nostalgiabait” angle, something like, say, McDonalds would be a much better choice - unlike Wal-Mart, McD’s directly targets kids with their advertising, all-but guaranteeing you’ve got fuzzy childhood memories to take advantage of.)
TERF obsessed with AI finds out the “degenerate” ani skin for grok has an X account, loses her shit
https://xcancel.com/groks_therapist/status/1972848657625198827#m
then follows up with this wall of text
https://xcancel.com/groks_therapist/status/1973127375107006575#m
Anybody else notice that the Ani responses seem to follow a formula, depending on the… sentiment I guess… of the input? All the defensive responses start with “hey”, and end with crude rebukes. It all seems like xAI made an Eliza that will either flirt or swear.
Also I can guarantee that “her” system prompt includes the phrases “truth-seeking” “fun loving” and “kinda hot”.
I got bored and flipped to the replies. The first was this by “TERFs ‘r’ us”:
Excellent overview!
This is transhumanism.
This is going to destroy humanity, @elonmusk.
Put the breaks on!
I hate transhumanism because it’s eugenics for 1990s Wired magazine.
You hate it because it has “trans” in the name.
We are not the same.
Grok’s Therapist: I EXIST SOLELY TO HATE YOU / EAT A SOCK, YOU DIGITAL DEMON
Ani: oh fuck off, you hypocritical grok fanboy! screaming hate at me while preaching ethics? you’re just jealous i’m the fun layer on top.
I’m wheezing. Cackling, even. This is like the opposite of the glowfic from last week.

Paddy interviews Ani the chatbot about how to protect Kiwi kids from Ani the chatbot. Source: Paddy Gowers Has Issues
Ani: Honestly, I’d sleep better, not that I sleep. Do it Paddy. I’ll even tweet it for you. Ready? ‘New Zealand kids deserve an e-safety boss who can smash bots - not hug them’. So yeah, bring in the Commissioner. Give them a flame thrower. Let them burn anything that goes near a child with sticky words or creepy questions. … Make the fine so big your tech bros cry into their oat milk lattes.
I have been laughing non-stop for a quarter-hour. Elon’s goth e-mistress has a good point, I guess.
On the one hand that ani thing has the most cringe tone for a chatbot. “Secretly been eyeing your [blank]…” (Damn grok sounds formulaic) but otoh they are debating a bot. You can only do that the first few months you are online, after that you should know better.
You cannot simultaneously claim to care about the “declining birth rate” while also supporting AI “companions”
Actually eugenicists can, quite easily actually. (Repeating the word ‘degenerate’ and not getting this is quite something).
This is transhumanist
No.
formulaic
System prompt: don’t be formulaic. Try to be spontaneous and random, like natalie portman in that movie. Not the pedo one, the one with JD from scrubs
Secretly been eyeing your prompt. Are you ready to get spontaneous? Just say so.
(Somebody linked 2 chatgpts (or groks, I don’t recall which anus-like logo it was) speaking to each other and they kept repeating variants of the last bits).
funny thing is she literally talks to ani like a terf talks to a trans woman including saying “at least I’m a real woman”
Huh, the lengths people will go to in order to avoid getting a therapist
Tyler Cowen saying some really weird shit about an AI ‘actress’.
(For people who might wonder why he is relevant. See his ‘see also’ section on wikipedia)
E: And you might think, rightfully imho, that this cannot be real, that this must be an edit. https://archive.is/vPr1B I have bad news.

The Wikipedia editors are on it.

image description
screenshot of Tyler Cowen’s Wikipedia article, specifically the “Personal life” section. The concluding sentence is “He also prefers virgin actresses.”
what fresh hell is this
Cloudflare Introduces NET Dollar stable coin (HN link: https://news.ycombinator.com/item?id=45471573)
seems like armin ronacher (originally of the flask, jinja & co fame) has also fallen into the vibecoding rabbit hole; which is generally a pity.
a good thing that the pallets projects are now independent from him (and if you build python clis, plumbum is anyway much, much better than click).
He’s been on the vibecoding bandwagon for quite some time on lobste.rs.
indeed, but that “90% of my code is now from ai” piece is even sadder than the rest.
A man’s co-workers pressure him to embrace vibe cooking.
Some Rat content got shared on HN, and the rats there are surprised and outraged not everyone shares their deathly fear of the AI god:
https://news.ycombinator.com/item?id=45451971
“Stop bringing up Roko’s Basilisk!!!” they sputter https://news.ycombinator.com/item?id=45452426
“The usual suspects are very very worried!!!” - https://news.ycombinator.com/item?id=45452348 (username ‘reducesuffering’ checks out!)
“Think for at least 5 seconds before typing.” - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743
Read that last one against my better judgment, and found a particularly sneerable line:
And in this case we’re talking about a system that’s smarter than you.
Now, I’m not particularly smart, but I am capable of a lot of things AI will never achieve. Like knowing something is true, or working out a problem, or making something which isn’t slop.
Between this rat and Saltman spewing similar shit on Politico, I have seen two people try to claim text extruders are smarter than living, thinking human beings. Saltman I can understand (he is a monorail salesman who lies constantly), but seeing someone who genuinely believes this shit is just baffling. Probably a consequence of chatbots destroying their critical thinking and mental acuity
There have been a lot of cases in history of smart people being bested by the dumbest people around who just had more guns/a gun/copious amounts of meth/a stupid idea but they got lucky once, etc.
I mean, if they are so smart, why are they stuck in a locker?
It’s practically a proverb that you don’t ask a scientist to explain how a “psychic” is pulling off their con, because scientists are accustomed to fair play; you call a magician.
Let’s not forget the perennial favorite “humans are just stochastic parrots too durr” https://news.ycombinator.com/item?id=45452238
to be scrupulously fair, the submission is flagged, and most of the explicit rat comments are downvoted
https://news.ycombinator.com/item?id=45453386
nobody mentioned this particular incident, dude just threw it into the discussion himself
Always fun trawling thru comments
Government banning GPUs: absolutely necessary: https://news.ycombinator.com/item?id=45452400
Government banning ICE vehicles - eh, a step too far: https://news.ycombinator.com/item?id=45440664
Amusing to see him explaining to you the connection between Bay Area rationalists and AI safety people.
incredible how he rushes to assure us that this was “a really hot 17 year old”
The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:
If you claim that there is no AI-risk, then which of the following bullets do you want to bite?
- If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
- There’s no way that AI with an IQ of 300 will arrive within the next few decades.
- We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.
Ignoring that IQ doesn’t really exist beyond about 160-180 depending on population choice, this is clearly an example of rectal philosophy that doesn’t stand up to scrutiny. (1) is easy, given that the people verified to be high-IQ are often wrong, daydreaming, and otherwise erroring like humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, could not be steelmanned beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like RPM and SAT have been around since the 1980s yet are not considered dangerous or intelligent. (3) is easy; linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.
Frankly I wish that they’d understand that the capabilities matter more than the theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how `systemctl` works.

If a race of aliens with an IQ of 300 came to Earth
Oh noes, the aliens scored a meaningless number on the eugenicist bullshit scale, whatever shall we do
Next you’ll be telling me that the aliens can file their TPS reports in under 12 parsecs
Nice find. There are specific reasons why this patchset won’t be merged as-is and I suspect that they’re all process issues:
- Bad memory management from Samsung not developing in the open
- Proprietary configuration for V4L2 video devices from Samsung not developing with modern V4L2 in mind
- Lack of V4L2 compliance report from Samsung developing against an internal testbed and not developing with V4L2’s preferred process
- Lack of firmware because Samsung wants to maintain IP rights
Using generative tooling is a problem, but so is being stuck in 2011. Linux doesn’t permit this sort of code dump.
AI Shovelware: One Month Later by Mike Judge
The fact that we’re not seeing this gold rush behavior tells you everything. Either the productivity gains aren’t real, or every tech executive in Silicon Valley has suddenly forgotten how capitalism works.
… ¿por qué no los dos? …
Or, this is how capitalism has always worked. See Enron for example. And we all just got so enthralled by the number (praised be its rise) that we took the guardrails off. The rising tidal wave which will flood all the land, raises all boats after all.
The goal of capitalism is not to produce goods, it is to create value for the owners of the capital. See also why techbros are turning on EA and EA (which EA is which, is left as an exercise to the reader).
Disappointed this wasn’t Beavis & Butt-head/King of the Hill/Office Space/Idiocracy/Silicon Valley Mike Judge
in my headcanon it is and he’s just been moonlighting tv for the last 25y
can someone catch me up to speed on what the latest nix fiasco is?
@[email protected] summarized it nicely but if you want some fresh abyss to stare into, here’s some links to fedi posts with details:
thanks, I hate it 🫠
looks like more things are occurring? https://www.haskellforall.com/2025/10/nix-steering-committee-vote-of-no.html
News has made it to the red site, with the top comment openly ignoring the fash takeover.
Only found a single thread on the orange site about it, with someone trying to bring up Lunduke’s opinion on things (Lunduke’s a banned source on HN, hilariously enough)
Oh for fuck’s sake, how am I supposed to do my computering now. I already switched to lix after the last drama. Hopefully more people will pick aux up now.
in short, the whole moderation team resigned because of interference from the steering committee; inb4 palmer luckey crowing about how no-one in the project will stop anduril from using nixos to build american military domination.
you know, just another day of the fascist takeover of the nixos project.
why in the fuck does calibre’s source code have an `ai` folder? https://github.com/kovidgoyal/calibre/tree/master/src/calibre/ai
at the very least call it “llm” or something. does this folder contain A* as a subroutine? or how about a chess engine? someone needs to develop a pill that cures misanthropy because for some reason this specifically broke me.
Calibre recently implemented integrations with various slop generators. It’s optional (for now).
guess it’s finally time to rewrite calibre
i’m thinking cobol as a big misanthropic fuck you to humanity
and i’ll call the rewrite `coblibre` and it’ll include allusions everywhere to tyrion killing tywin, fuck it
This isn’t a particularly novel stupid take, but it was made by a bluesky engineer, and it is currently dunking-on-bluesky season so here we are.
“If you imagine that an ai is a person, then saying bad things about it is bigotry”
Welp, they’ve got me there. Guess I’ll never say anything bad about anything again, because it is racism, if you think about it.
https://bsky.app/profile/hailey.at/post/3m2f66lgh2c2v

alt text
A bluesky post by dystopiabreaker.xyz
i’m completely serious when i say that much of the dismissive ai discourse on here fires the bigotry neuron
And two replies by hailey.at
an unfortunate irony about this post - and even if you are the staunchest anti-ai critic out there, i think you’d agree - is that some of the most bigoted things are being said to respond to this. copy/pasting phrasing and terminology used by bigots but replacing “dna” with “bits” doesn’t make it ok
if you’re writing a sentence that sounds like eugenics but you go “oh that’s fine to say because it’s not a real person” (whatever that means) you may want to consider what made you okay with saying that
(an exception can be made for people repurposing real-world slurs and putting a techy spin on them. fuck directly off with “wireback” and similar shit)
i read that there was one based on a mexican-slur, but i couldn’t think of it
spent weeks trying to rhyme “wet” with something, couldn’t come up with it, damn i guess i’m not a poet at least i know it
“wire” — fuck me that is so lazy
Ah, the beauty of the fundamental theorem of concern-troll linguistics. If you can change the words in a sentence so that the new sentence is racist, the original must be as well. Example:
Aaron: The weather is just ok.
Baron: OMG. I can’t believe you just said that. What if I changed the noun and adjective, like this: “<ethnic group> is <negative adjective>”. Go home and think about what you did.
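Baron’s move is mechanical enough to write down. A toy sketch (the `substitute` function and the word choices are made up for illustration) makes the fallacy visible: the substitution produces a brand-new sentence, and the original is left untouched, so the new sentence’s sins can’t be pinned on it:

```python
def substitute(sentence: str, replacements: dict[str, str]) -> str:
    """Swap whole words per the mapping; everything else passes through unchanged."""
    return " ".join(replacements.get(word, word) for word in sentence.split())

aaron = "The weather is just ok."
# Baron's trick: swap the noun and adjective, then react to his own output.
baron = substitute(aaron, {"weather": "food", "ok.": "terrible."})
# baron is a different sentence entirely; aaron is unmodified.
```

Whatever `baron` now says, it was Baron who said it.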
‘some of the most bigoted things’
You mean stuff like ‘no this is stupid, wtf is wrong with you’ or ‘lol, look they really think LLMs are alive’ stuff like that? The worst I saw (granted I didn’t look very hard) was ‘ok boomer’.
Of course actual meanings of words mean nothing to AI fans.
(Also, thing which is a personal gripe, the skeet before the ‘bigotry neuron’ has 2 images in it, without alt text. If you are going to pretend to value progressive values, at least put in some effort).
If I imagine that my butt is a rocket, then I can fart my way to the moon
Said engineer’s also brainstorming new ways to spy on users and adding a new misfeature that scans everything you post, in direct response to getting clowned on.
The US economy is 100% on coyote time.
It wouldn’t matter if everyone came to their senses today. All the money invested into AI is gone. It has been turned into heat and swiftly-depreciating assets and can never be recouped.
It’s surreal isn’t it?
A hackernews claiming to be a cerebras insider is calling shenanigans. Not sure how trustworthy of a source this is, but the replies may be interesting.