

He will just bomb Gamers next. DLSS7 needs 9 GPUs.




What makes all this extra funny is Yud’s life’s work: he wants to ensure AI alignment and fix human rationality, and instead he creates terrorists.
Reminds me a bit of his AI-in-the-box experiments, which according to the stories always worked on his fans, but as soon as somebody skeptical tried it, he stayed in the box.


A proper scientist would have tested it at home first.
proceeds to burn my own home down


But it’s the sort of force that’s meant to be predictable, predicted, avoidable, and avoided. And that is a genuinely large difference between lawful and unlawful force.
Remember the cartoon of bombs being dropped on people, where the people say ‘I hear the next bombs will be sent by a woman’? This, but with ‘lawful force’.
We end up 100% dead after slightly more time.
On a long enough timeframe…
Statistics show that civil movements with nonviolent doctrines are more successful at attaining their stated goals
This is always one of those things that baffles me, and makes it clear these people have never even been close to any real movement. All these movements have violent and non-violent parts. Hell, you see it even now with the far right: they have a violent and a non-violent part, and the non-violent part scores points by pointing to their violent friends and going ‘we are not with them’ while going to the same parties, sharing the same ideas, and all being friends with each other. Hell, look at the various LW people who went ‘wow, all these rightwingers in our midst are horrible’ and then didn’t stop being friends with them. I see now how Sam got the drop on all these naive people.


Oh no, neural computers, are we gonna do the Three Laws of Robotics again? It has been a while. (I know those were positronic.)
I mean:
Conventional computers are already rewriting their own substrate for AI.
This is just science fiction talk.
E: but for fun, let’s take the last line seriously. So a neural computer rewrites the substrate of computers, and suddenly it decides to flip what the different voltage levels mean, so instead of a one it is a zero and vice versa. What use is that even? So many things can go wrong in messing with the lower-level stuff that there is a reason we try not to mess with it too much. It will be especially fun if you combine this with the genAI/LLM (not sure which of the two) mathematical guarantee that it will confabulate. Sub-90% uptime? That is a rookie number, it can go lower! (I know I’m using a very silly example here, but there is a reason general-purpose computers took off, and constant changes to their substrate will not help.)


more akin to Luigi Mangione’s
Didn’t he also have some sort of links to the LW idea space?


Ah right, I need to get a 365 license for Word, which comes with a free Copilot agent, which needs a 365 license for its copy of Word, which comes with a free Copilot agent, which needs a …


Specifically, a screenshot of a moderator warning him that advocating violence is grounds for a ban there. It would also be grounds for a ban on LW.
That explains why Yud is using Twitter so much nowadays. I mean, they did ban him, right? Right?


Nice. Good luck at your new job!


Ah, suddenly, when it reaches the class he feels he should be a part of (or is a part of, I don’t know how much money he makes), violence is a problem.
It’s not easy to be a cop, and that’s basically what you are around here, but thank you for doing it.
…


the total cost was under $20,000
Doubt. Especially as the costs of training (or just of slamming sites to scrape information, and all the extra costs related to that) are not included.


and that a subsequent increase in “productivity” is expected with it.
Oh no… they will definitely blame the users before blaming the faulty tools. Hope you will not be the one who gets blamed as a wrecker or something when the expected increase isn’t there (or other metrics fall off a cliff).


Up next: when the first agent fails, implement an agent that checks the other agent. Both of these need agents to check for malicious inputs, of course. And translation agents.


It can do trillions of calculations per second. All of them wrong.


So, they are planning to use an AI to fix the security bugs that their AI generates? Good hustle, if a bit obvious.


Yeah, I intentionally only mentioned the start of the article and the Swartz bit, because I didn’t want to lead with what I thought of it all and was curious what others thought. (And I had not finished it yet, because it is a bit long.)
I was struck by how many of them are either true AGI believers (which, as you said, the author took at face value) or rich greedy assholes (like you said), and by how we, the people of the sneer, are right that you simply can’t work with these people. I feel more validated in the idea that EA is not the right way.
Another detail I noticed: nobody mentioned DeepSeek, again.


Yep, and it would make us all happier, and keep us in control. (Deleting all the HP printers is next.)


A New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load; I was having trouble).
“New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.”
He admits it!