Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)

Just the usual stuff religions have to do to maintain the façade, “this is all true but gee oh golly do NOT live your life as if it was because the obvious logical conclusions it leads to end in terrorism”
prompt injection phish by email
so glad these things have a solid security model and this totally won’t result in a scrambled half-assed fix
A prompt-injection attack on Google’s Gemini model was disclosed through 0din, Mozilla’s bug bounty program
Whenever I think Mozilla can’t get any worse…
So now they do “Agentic Security” and “Real-time GenAI intelligence on emerging threats”.
mozilla has for years had a habit of tailchasing some utterly fucking weird shit instead of focusing on their core business, and this feels very much like part of that. but fucking still
trying to explain why a philosophy background is especially useful for computer scientists now, so i googled “physiognomy ai” and now i hate myself
Discover Yourself with Physiognomy.ai
Explore personal insights and self-awareness through the art of face reading, powered by cutting-edge AI technology.
At Physiognomy.ai, we bring together the ancient wisdom of face reading with the power of artificial intelligence to offer personalized insights into your character, strengths, and areas for growth. Our mission is to help you explore the deeper aspects of yourself through a modern lens, combining tradition with cutting-edge technology.
Whether you’re seeking personal reflection, self-awareness, or simply curious about the art of physiognomy, our AI-driven analysis provides a unique, objective perspective that helps you better understand your personality and life journey.
trying to explain why a philosophy background is especially useful for computer scientists now, so i googled “physiognomy ai” and now i hate myself
Well, I guess there’s your answer - “philosophy teaches you how to avoid falling for hucksters”
The web is often Dead Dove in a Bag as a Service innit?
do not eat
Prices ranging from 18 to 168 USD (why not 19 to 199? Number magic?) But then you get integrated approach of both Western and Chinese physiognomy. Two for one!
Thanks, I hate it!
Number magic?
they use numerology.ai as a backend
“we encode shit as numbers in an arbitrary way and then copy-paste it into chatgpt”
whyyyyy it’s a real site
The Gentle Singularity - Sam Altman
This entire blog post is sneerable so I encourage reading it, but the TL;DR is:
We’re already in the singularity. Chat-GPT is more powerful than anyone on earth (if you squint). Anyone who uses it has their productivity multiplied drastically, and anyone who doesn’t will be out of a job. 10 years from now we’ll be in a society where ideas and the execution of those ideas are no longer scarce thanks to LLMs doing most of the work. This will bring about all manner of sci-fi wonders.
Sure makes you wonder why Mr. Altman is so concerned about coddling billionaires if he thinks capitalism as we know it won’t exist 10 years from now but hey what do I know.
anyone who doesn’t will be out of a job
quick, Sam, name five jobs that don’t involve sitting at a desk
Chat-GPT is more powerful than anyone on earth (if you squint)
xD
No sorry, let me rephrase,
Lol, lmao
How do you even grace this with a response. Shut your eyes and loudly sing “lalalala I can’t hear you”
I think I liked this observation better when Charles St Ross made it.
If for no other reason than he doesn’t start off by dramatically overstating the current state of this tech, isn’t trying to sell anything, and unlike ChatGPT is actually a good writer.
A company that makes learning material to help people learn to code made a test of programming basics for devs to find out if their basic skills have atrophied after use of AI. They posted it on HN: https://news.ycombinator.com/item?id=44507369
Not a lot of engagement yet, but so far there is one comment about the actual test content, one shitposty joke, and six comments whining about how the concept of the test itself is totally invalid how dare you.
Looks like it’s been downranked into hell for being too mean to the AI guys, which is weird when its literally an AI guy promoting his AI generated trash.
It seems that the test itself is generated by autoplag? At least that’s how I understand the PS and one of the comments about “vibe regression” in response to an error
Anyway, they say it covers Node and to any question regarding Node the answer is “no”, I don’t need an AI to know webdev fundamentals
HN commenters are slobbering all over the new Grok. Virtually every commenter bringing up Grok’s recent full-tilt Nazism gets flagged into oblivion.
this particular abyss just fucking hurts to gaze into
Love how the most recent post in the AI2027 blog starts with an admonition to please don’t do terrorism:
We may only have 2 years left before humanity’s fate is sealed!
Despite the urgency, please do not pursue extreme uncooperative actions. If something seems very bad on common-sense ethical views, don’t do it.
Most of the rest is run of the mill EA type fluff such as here’s a list of influential professions and positions you should insinuate yourself in, but failing that you can help immanentize the eschaton by spreading the word and giving us money.
It’s kind of telling that it’s only been a couple months since that fan fic was published and there is already so much defensive posturing from the LW/EA community. I swear the people who were sharing it when it dropped and tacitly endorsing it as the vision of the future from certified prophet Daniel K are like, “oh it’s directionally correct, but too aggressive” Note that we are over halfway through 2025 and the earliest prediction of agents entering the work force is already fucked. So if you are a ‘super forecaster’ (guru) you can do some sleight of hand now to come out against the model knowing the first goal post was already missed and the tower of conditional probabilities that rest on it is already breaking.

Funniest part is even one of authors themselves seem to be panicking too as even they can tell they are losing the crowd and is falling back on this “It’s not the most likely future, it’s the just the most probable.” A truly meaningless statement if your goal is to guide policy since events with arbitrarily low probability density can still be the “most probable” given enough different outcomes.
Also, there’s literally mass brain uploading in AI-2027. This strikes me as physically impossible in any meaningful way in the sense that the compute to model all molecular interactions in a brain would take a really, really, really big computer. But I understand if your religious beliefs and cultural convictions necessitate big snake 🐍 to upload you, then I will refrain from passing judgement.
https://www.wired.com/story/openworm-worm-simulator-biology-code/
Really interesting piece about how difficult it actually is to simulate “simple” biological structures in silicon.
One more comment, idk if ya’ll remember that forecast that came out in April(? iirc ?) where the thesis was the “time an AI can operate autonomously is doubling every 4-7 months.” AI-2027 authors were like “this is the smoking gun, it shows why are model is correct!!”
They used some really sketchy metric where they asked SWEs to do a task, measured the time it took and then had the models do the task and said that the model’s performance was wherever it succeeded at 50% of the tasks based on the time it took the SWEs (wtf?) and then they drew an exponential curve through it. My gut feeling is that the reason they choose 50% is because other values totally ruin the exponential curve, but I digress.
Anyways they just did the metrics for Claude 4, the first FrOnTiEr model that came out since they made their chart and… drum roll no improvement… in fact it performed worse than O3 which was first announced last December (note instead of using the date O3 was announced in 2024, they used the date where it was released months later so on their chart it make ‘line go up’. A valid choice I guess, but a choice nonetheless.)
This world is a circus tent, and there still aint enough room for all these fucking clowns.
Please, do not rid me of this troublesome priest despite me repeatedly saying that he was a troublesome priest, and somebody should do something. Unless you think it is ethical to do so.
Musk objects to the “stochastic parrot” labelling of LLMs. Mostly just the stochastic part.

Wake up babe, new alignment technique just dropped: Reinforcement Learning Elon Feedback
“We made it more truth-seeking, as determined by our boss, the fascist megalomaniac.”
https://www.lesswrong.com/posts/JspxcjkvBmye4cW4v/asking-for-a-friend-ai-research-protocols
Multiple people are quietly wondering if their AI systems might be conscious. What’s the standard advice to give them?
Touch grass. Touch all the grass.
What’s the standard advice to give them?
It’s unfortunately illegal for me to answer this question earnestly
Username called “The Dao of Bayes”. Bayes’s theorem is when you pull the probabilities out of your posterior.
知者不言,言者不知。 He who knows (the Dao) does not (care to) speak (about it); he who is (ever ready to) speak about it does not know it.
In the morning: we are thrilled to announce this new opportunity for AI in the classroom
Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it’s been saying all afternoon are fakes.
*musk voice* if machine god didn’t want me to fuck with the racism dial, he wouldn’t make it
Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it’s been saying all afternoon are fakes.
LLMs are automatic gaslighting machines, so this makes sense
It’s possible we may be catching sight of the first shy movements towards a a pivot to robotics:
https://techcrunch.com/2025/07/09/hugging-face-opens-up-orders-for-its-reachy-mini-desktop-robots/
Both developer kits, because it’s always a maybe the clients will figure something out type of business model these days.
But how are they going to awkwardly cram robots in everywhere, to follow up the overwhelming success of AI? Self-crashing cars are a gimme, but maybe a “sealed for your protection” Amazon locker with a robot arm that handles the package for you?
I was in LA this time a couple years ago, and some robot delivery startup had already left their little motorized shopping carts littering the sidewalks around Hollywood. I never saw them moving, they just sat there almost like they were abandoned.
But how are they going to awkwardly cram robots in everywhere, to follow up the overwhelming success of AI?
Good question - AFAICT, they’re gonna struggle to find places to cram their bubble-bots into. Plus, nothing’s gonna stop Joe Public from wrecking them in the streets - and given we’ve already seen Waymos getting torched and Lime scooters getting wrecked these AI-linked 'bots are likely next on the chopping block.
Bummer, I wasn’t on the invite list to the hottest SF wedding of 2025.

Update your mental models of Claude lads.

Because if the wife stuff isn’t true, what else could Claude be lying about? The vending machine business?? The blackmail??? Being bad at Pokemon???
It’s gonna be so awkward when Anthropic reveals that inside their data center is actually just Some Guy Named Claude who has been answering everyone’s questions with his superhuman typing speed.
11.000 indian people renamed to Claude
Penny Arcade chimes in on corporate AI mandates:

This is so Charlie Stross coded that I tried to read the Mastodon comments.
Lmao I love this Lemmy instance
NYT covers the Zizians
Original link: https://www.nytimes.com/2025/07/06/business/ziz-lasota-zizians-rationalists.html
Archive link: https://archive.is/9ZI2c
Choice quotes:
Big Yud is shocked and surprised that craziness is happening in this casino:
Eliezer Yudkowsky, a writer whose warnings about A.I. are canonical to the movement, called the story of the Zizians “sad.”
“A lot of the early Rationalists thought it was important to tolerate weird people, a lot of weird people encountered that tolerance and decided they’d found their new home,” he wrote in a message to me, “and some of those weird people turned out to be genuinely crazy and in a contagious way among the susceptible.”
Good news everyone, it’s popular to discuss the Basilisk and not at all a profundly weird incident which first led peopel to discover the crazy among Rats
Rationalists like to talk about a thought experiment known as Roko’s Basilisk. The theory imagines a future superintelligence that will dedicate itself to torturing anyone who did not help bring it into existence. By this logic, engineers should drop everything and build it now so as not to suffer later.
Keep saving money for retirement and keep having kids, but for god’s sake don’t stop blogging about how AI is gonna kill us all in 5 years:
To Brennan, the Rationalist writer, the healthy response to fears of an A.I. apocalypse is to embrace “strategic hypocrisy”: Save for retirement, have children if you want them. “You cannot live in the world acting like the world is going to end in five years, even if it is, in fact, going to end in five years,” they said. “You’re just going to go insane.”
Yet Rationalists I spoke with said they didn’t see targeted violence — bombing data centers, say — as a solution to the problem.
Ah, you see, you fail to grasp the shitlib logic that the US bombing other countries doesn’t count as illegitimate violence as long as the US has some pretext and maintains some decorum about it.
“A lot of the early Rationalists thought it was important to tolerate weird people, a lot of weird people encountered that tolerance and decided they’d found their new home,” he wrote in a message to me, “and some of those weird people turned out to be genuinely crazy and in a contagious way among the susceptible.”

Re the “A lot of the early Rationalists" bit. Nice way to not take responsibility, act like you were not one of them and throw them under the bus because “genuinely crazy” like some preexisting condition, and not something your group made worse, and a nice abuse of the general publics bias against “crazy” people. Some real Rationalist dark art shit here.
In the recent days there’s been a bunch of posts on LW about how consuming honey is bad because it makes bees sad, and LWers getting all hot and bothered about it. I don’t have a stinger in this fight, not least because investigations proved that basically all honey exported from outside the EU is actually just flavored sugar syrup, but I found this complaint kinda funny:
The argument deployed by individuals such as Bentham’s Bulldog boils down to: “Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts”.
“Of course such underhanded tactics are not present here, in the august forum promoting 10,000 word posts called Sequences!”
I thought you were talking about lemmy.world (also uses the LW acrynom) for a second.
Lesswrong is a Denial of Service attack on a very particular kind of guy
Damn making honey is metal as fuck. (And I mean that in a omg this is horrible, you could write disturbing songs about it way) CRUSHED FOR YOUNG! MAMMON DEMANDS DISMEMBERMENT! LIVING ON SLOP, HIVE CULLING MANDATORY. Makes a 40k hive city sound nice.
You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts”.
This, coming from LW, just has to be satire. There’s no way to be this self-unaware and still remember to eat regularly.
















