Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this)
Dude discovers that one LLM model is not entirely shit at chess, spends time and tokens proving that other models are actually also not shit at chess.
The irony? He’s comparing it against Stockfish, a computer chess engine. Computers playing chess at a superhuman level is a solved problem. LLMs have now slightly approached that level.
For one, gpt-3.5-turbo-instruct rarely suggests illegal moves,
Writeup https://dynomight.net/more-chess/
HN discussion https://news.ycombinator.com/item?id=42206817
LLMs sometimes struggle to give legal moves. In these experiments, I try 10 times and if there’s still no legal move, I just pick one at random.
uhh
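For anyone curious what that fallback actually amounts to, here’s a minimal sketch (my own illustration, not the writeup’s code; `query_llm()` is a hypothetical helper, and move legality is checked with the python-chess library):

```python
# Minimal sketch of the "try 10 times, then pick at random" fallback quoted above.
# query_llm() is a hypothetical function returning a move in SAN notation.
import random
import chess

def get_move(board: chess.Board, query_llm) -> chess.Move:
    for _ in range(10):                      # ask the LLM up to 10 times
        suggestion = query_llm(board.fen())  # e.g. "Nf3"
        try:
            return board.parse_san(suggestion)  # raises ValueError if illegal or garbled
        except ValueError:
            continue
    # still nothing legal: fall back to a random legal move
    return random.choice(list(board.legal_moves))
```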
@gerikson @BlueMonday1984 the only analysis of computer chess anybody needs https://youtu.be/DpXy041BIlA?si=a1vU3zmOWs8UqlSQ
Stack Overflow, now with sponsored crypto blogspam: “Joining forces: How Web2 and Web3 developers can build together”
I really love the byline here. “Kindest view of one another”. Seething rage at the bullshittery these “web3” fuckheads keep producing certainly isn’t kind for sure.
Strap in and start blasting the Depeche Mode.
When the reporter entered the confessional, AI Jesus warned, “Do not disclose personal information under any circumstances. Use this service at your own risk.”
Do not worry my child, for everything you say in this hallowed chamber is between you, AI Jesus, and the army of contractors OpenAI hires to evaluate the quality of their LLM output.
a better-thought-out announcement is coming later today, but our WriteFreely instance at gibberish.awful.systems has reached a roughly production-ready state (and you can hack on its frontend by modifying the `templates`, `pages`, `static`, and `less` directories in this repo and opening a PR)! awful.systems regulars can ask for an account and I’ll DM an invite link!

The mask comes off at LWN, as two editors (jake and corbet) dive in to frantically defend the honour of Justine fucking Tunney against multiple people pointing out she’s a Nazi who fills her projects with racist dogwhistles
Is Google lacing their free coffee??? How could a woman with at least one college degree believe that the government is even mechanically capable of dissolving into a throne for Eric Schmidt?
fuck me that is some awful fucking moderation. I can’t imagine being so fucking bad at this that I:
- dole out a ban for being rude to a fascist
- dole out a second ban because somebody in the community did some basic fucking due diligence and found out one of the accounts defending the above fascist has been just a gigantic racist piece of shit elsewhere, surprise
- in the process of the above, I create a safe space for a fascist and her friends
but for so many of these people, somehow that’s what moderation is? fucking wild, how the fuck did we get here
See, you’re assuming the goal of moderation is to maintain a healthy social space online. By definition this excludes fascists. It’s that old story about how to make sure your punk bar doesn’t turn into a nazi punk bar. But what if instead my goal is to keep the peace in my nazi punk bar so that the normies and casuals keep filtering in and out and making me enough money that I can stay in business? Then this strategy makes more sense.
Centrists Don’t Fucking Be Like This challenge not achieved yet again
fwiw this link didn’t jump me to a specific reply (if you meant to highlight a particular one)
It didn’t scroll for me either but there’s a reply by this corbet person with a highlighted background which I assume is the one intended to be linked to
Post by Corbet the editor. “We get it: people wish that we had not highlighted work by this particular author. Had we known more about the person in question, we might have shied away from the topic. But the article is out now, it describes a bit of interesting technology, people have had their say, please let’s leave it at that.”
So you updated the article to reflect this right? padme.jpg
Seems like they’ve actually done this now. There’s a preface note now.
This topic was chosen based on the technical merit of the project before we were aware of its author’s political views and controversies. Our coverage of technical projects is never an endorsement of the developers’ political views. The moderation of comments here is not meant to defend, or defame, anybody, but is in keeping with our longstanding policy against personal attacks. We could certainly have handled both topic selection and moderation better, and will endeavor to do so going forward.
Which is better than nothing, I guess, but still feels like a cheap cop-out.
Side-note: I can actually believe that they didn’t know about Justine being a fucking nazi when publishing this, because I remember stumbling across some of her projects and actually being impressed by them, and then I found out what an absolute rabbit hole of weird shit this person is. So I kinda get seeing the portable executables project, thinking, wow, this is actually neat, and running with it.
Not that this is an excuse: when you write articles for a website, that should come with a bit of research about the people and topics you choose to cover, and you carry a bit more responsibility than someone who’s just browsing around. But what do I know.
Well, at least they put down something. More than I expected.
And doing research on people? In this economy?
so is corbet the same kind of fucker that’ll complain “everything is so political nowadays”? it seems like they are
@dgerard @BlueMonday1984 also, and I know this is way beside the point, update the design of your website, motherfuckers
I don’t run any websites, what are you coming at me for
most of the dedicated Niantic (Pokemon Go, Ingress) game players I know figured the company was using their positioning data and phone sensors to help make better navigational algorithms. well surprise, it’s worse than that: they’re doing a generative AI model that looks to me like it’s tuned specifically for surveillance and warfare (though Niantic is of course just saying this kind of model can be used for robots… seagull meme, “what are the robots for, fucker? why are you being so vague about who’s asking for this type of model?”)
Quick, find the guys who were taping their phones to a ceiling fan and have them get to it!
Jokes aside I’m actually curious to see what happens when this one screws up. My money is on one of the Boston Dynamics dogs running in circles about 30 feet from the intended target without even establishing line of sight. They’ll certainly have to test it somehow before it starts autonomously ordering drone strikes on innocent people’s homes, right? Right?
Pokemon Go To The War Crimes
Pokemon Go To The Hague
Peter Watts’s Blindsight is a potent vector for brain worms.
Watts has always been a bit of a weird vector. While he doesn’t seem far-right himself, he accidentally uses a lot of weird far-right dogwhistles. (Prob some cross-contamination, as some of these things are just scientific concepts; esp. the r/K selection thing stood out very much to me in the Rifters series. Of course he has a PhD in zoology, and the books predate the online hardcore racists discovering the idea by more than a decade, but it’s still odd to me.)
To be very clear, I don’t blame Watts for this, he is just a science fiction writer, a particularly gloomy one. The guy himself seems to be pretty ok (not a fan of trump for example).
That’s a good way to put it. Another thing that was really en vogue at one point and might have been considered hard-ish scifi when it made it into Rifters was all the deep water telepathy via quantum brain tubules stuff, which now would only be taken seriously by wellness influencers.
not a fan of trump for example
In one of the Eriophora stories (I think it’s officially the Sunflower cycle) I think there’s a throwaway mention of the Kochs having been lynched along with other billionaires in the early days of a mass mobilization to save what’s savable in the face of environmental disaster (and also rapidly push to the stars, because a Kardashev-2 civilization may have emerged in the vicinity, so an escape route could become necessary in the next few millennia and this scifi story needs a premise).
Huh. Say more?
Oh man where to begin. For starters:
- Sentience is overrated
- All communication is manipulative
- Assumes intelligence has a “value” and that it stacks like a Borderlands damage buff
- Superintelligence operates in the world like the chaos god Tzeentch from WH40K. Humans can’t win, because all events are “just as planned”
- Humanity is therefore gormless and helpless in the face of superintelligence
It just feeds right into all of the TESCREAL nonsense, particularly those parts that devalue the human part of humanity.
Sentience is overrated
Not sentience, self-awareness, and not in a particularly prescriptive way.
Blindsight is pretty rough and probably Watts’s worst book that I’ve read, but it’s original, ambitious and mostly worth it as an introduction to thinking about selfhood in a certain way, even if this type of scifi isn’t one’s cup of tea.
It’s a book that makes more sense after the fact, i.e. after reading the appendix on phenomenal self-model hypothesis. Which is no excuse – cardboard characters that are that way because the author is struggling to make a point about how intelligence being at odds with self awareness would lead to individuals with nonexistent self-reflection that more or less coast as an extension of their (ultrafuturistic) functionality, are still cardboard characters that you have to spend a whole book with.
I remember he handwaves a lot of stuff regarding intelligence, like at some point straight up writing that what you are reading isn’t really what’s being said, it’s just the jargonaut pov character dumbing it way down for you, which is to say he doesn’t try that hard for hyperintelligence show-don’t-tell. Echopraxia is better in that regard.
It just feeds right into all of the TESCREAL nonsense, particularly those parts that devalue the human part of humanity.
Not really, there are some common ideas, mostly because tescrealism already is scifi tropes awkwardly cobbled together, but usually what tescreals think is awesome is presented in a cautionary light or as straight up dystopian.
Like, there’s some really bleak transhumanism in this book, and the view that human cognition is already starting to become alien in the one hour into the future setting is kind of anti-longtermist, at least in the sense that the utilitarian calculus turns way messed up.
And also I bet there’s nothing in The Sequences about Captain Space Dracula.
I got a really nice omnibus edition of Blindsight/Echopraxia that was printed in the UK, but ultimately, the necessarily(?) cardboard nature of the vampire character in Echopraxia was what left me cold. The first chapter or two are some of the most densely-packed creative sci-fi ideas I’ve ever read, but I came to the book looking for more elaboration on the vampires, and didn’t really get that. Valerie remains an inscrutable other. The most memorable interaction she has is when she’s breaking her arm and making the POV character guy reset it, seemed like she was hitting on him?
I hear you. I should clarify, because I didn’t do a good job of saying why those things bothered me and nerd-vented instead. I understand that an author doesn’t necessarily believe the things used as plot devices in their books. Blindsight is a horror/speculative fiction book that asks “what if these horrible things were true” and works out the consequences in an entertaining way. And, no doubt there’s absolutely a place for horror in spec fic, but Blindsight just feels off. I think @Soyweiser explained the vibes better than I did. Watts isn’t a bad guy. Maybe it’s just me. To me, it feels less Hellraiser and more Human Centipede, i.e. here’s a lurid idea that would be tremendously awful in reality, now buckle up and let’s see how it goes, to an uncomfortable extent. That’s probably just a matter of taste, though.
Unfortunately, the kind of people who read these books don’t get that, because media literacy is dead. Everyone I’ve heard from (online) seems to think that it is saying big deep things that should be taken seriously. It surfaces in discussions about whether or not ChatGPT is “alive” and how it might be alive in a way different from us. Eric Schmidt’s recent insane ramblings about LLMs being an “alien intelligence,” which don’t call Blindsight out directly, certainly resonate the same way.
Maybe I’m being unfair, but it all just goes right up my back.
I, too, have done the “all communication is manipulative”, but in the same way as one would do a bar trick:
all communication is manipulative, for any words I say/write that you perceive instantly manipulate (as in the physical manner / modifying state) your thoughts, and this is done so without you requesting I do so
it’s a handy stunt with which to drive an argument about a few parts of communication, rhetoric, etc. because it gives a kinda good handle on some meta without getting too deep into things
(although there was one of my friends who really, really hated the framing)
Explaining in detail is kind of a huge end-of-book spoiler, but “All communication is manipulative” leaves out a lot of context and personally I wouldn’t consider how it’s handled a mark against Blindsight.
predictions for the trump admin?
Hot Take: the damage from RFK Jr will be limited by the fact that he’s messing with the money for several large industries, particularly agriculture and pharmaceuticals. They have bottomless pockets and aren’t afraid to bribe the bribable. There will be damage, but he’ll be crushed like a bug in the end.
Also, he clearly annoys the orange guy, can offer him nothing in return now that the election is over, and has already been the victim of a ritual humiliation (e.g. being forced to partake in a McDonald’s meal for the camera), which is the first sign of a Trump guy being de-emphasized.
Please sneer at this article. I thought it was pissweak myself.
should RFK Jr be able to abandon his numerous conspiracy theories about vaccines, he can be the most transformative health secretary in our country’s history
this is exactly the sort of shit centrists were writing about trump in 2016. I guess they can’t get away with doing that now so they’re just writing the same pieces but about his underlings
It’s very “should RFK Jr become a completely different person to the person he is and has been for years, he could really do some great stuff”
Jesus Christ.
This guy is a classic example of kook magnetism with a side of Dunning-Kruger.
Job history is sus as hell too: https://en.wikipedia.org/wiki/Neil_Barsky
Some dweeb with a journalism degree gets a job at a hedge fund right out of grad school, bombs out, “goes back to” journalism. Newspapers can’t get enough of this kind of guy; not surprised the TERF Island’s paper of record picked him up nor am I surprised that they think RFK Jr might be good actually.
With a touch of, and maybe I’m being mean, “this particular bad thing shouldn’t happen to someone like me! it must be that no one else tried hard enough to fix it till now”
I thought it was particularly telling that he says someone else in the “metabolic health” world warned him not to be mean about RFK. Take that as a sign that you’re in kooksville!
“Hopefully the established capitalists will protect us from the fascists’ worst excesses” hasn’t been much of a winning bet historically.
oh no, nothing is protecting us, you’re 100% right there. Eating food is about to become a much more dicey proposition.
It is still safe to assume that the ghouls who run Pfizer and ConAgra will bend their resources to protecting the bag from a disposable nutjob.
mine:
prediction 1: he dies halfway through. funniest way would be another pandemic gets him
prediction 2: he doesn’t die. it will be exactly the same as the first admin but infinitely worse. everyone will hate and backstab each other, they will constantly get fired and rehired and fired like reality tv, there will be a constant dribble of horrible things happening, then in four years there’s a coup attempt
prediction 3: elon doesn’t last a year, possibly doesn’t even make it six months
prediction 1: he dies halfway through. funniest way would be another pandemic gets him
I’m anticipating an Elvis re-enactment.
If they do press conferences this time around, every question should just be “does Elon approve of decision ____ ?” Will drive Trump fkn insane.
Despite worrying my brains out about getting deported from my home of 14 years because I wasn’t born in this godforsaken place, I’m extremely excited that Elon will get fired in the next 6 months or less. Gives me life to think about him getting very publicly humiliated by an even greater piece of shit than he is.
now seeing EAs being deeply concerned about RFK running health during an H5N1 outbreak
dust specks vs leopards
Anyone here read “World War Z”? There’s a section there about how the health authorities in basically all countries suppress and deny the incipient zombie outbreak. I think about that a lot nowadays.
Anyway the COVID response, while ultimately better than the worst case scenario (Spanish Flu 2.0), has made me really unconvinced we will do anything about climate change. We had a clear danger of death for millions of people, and the news was dominated by skeptics. Maybe if it had targeted kids instead of the very old it would have been different.
It’s not just systemic media head-up-the-assery, there’s also the whole thing about oil companies and petrostates bankrolling climate denialism since the 70s.
When I run into “Climate change is a conspiracy” I do the wide-eyed look of recognition and go “Yeah I know! Have you heard about the Exxon files?” and lead them down that rabbit hole. If they want to think in terms of conspiracies, at least use an actual, factual conspiracy.
The way many of the popular rat blogs started to endorse Harris in the last second before the US election felt a lot like an attempt at plausible deniability.
Sure we’ve been laying the groundwork for this for a decade, but we wanted someone from our cult of personality to undermine democracy and replace it with explicit billionaire rule, not someone with his own cult of personality.
If H5N1 does turn into a full-blown outbreak, part of me expects it’ll rack up a heavier death toll than COVID.
At work, I’ve been looking through Microsoft licenses. Not the funniest thing to do, but that’s why it’s called work.
The new licenses that have AI functions have a suspiciously low price tag, often as an introductory price (unclear for how long, or what it will cost later). This will be relevant later.
The licenses with Office, Teams and other things my users actually use are not only confusing in how they are bundled, they have been increasing in price. So I have been looking through and testing which licenses we can switch to cheaper ones without any difference for the users.
Having put in quite some time with it, we today crunched the numbers and realised that compared to last year we will save… (drumroll)… Approximately nothing!
But if we hadn’t done all this, the costs would have increased by about 50%.
We are just a small corporation, maybe big ones get discounts. But I think it is a clear indication of how the AI slop is financed: by price gouging corporate customers for the traditional products.
There’s got to be some kind of licensing clarity that can be actually legislated. This is just straight-up price gouging through obscurantism.
My professor is typing questions into chat gpt in class rn be so fucking for real
gentlemen, this means war
-me imagining myself paying to sit through that
He’s using it to give examples of exam question answers. The embarrassment
I mean, that kind of suggests that you could use chatGPT to confabulate work for his class and he wouldn’t have room to complain? Not that I’d recommend testing that, because using ChatGPT in this way is not indicative of an internally consistent worldview informing those judgements.
We’re going to be answering two essay questions in an in-class test instead of writing a paper this year specifically to prevent chat gpt abuse. Which he laughed and joked about because he really believes chat gpt can produce good results!
I’m pretty sure you could download a decent markov chain generator onto a TI-89 and do basically the same thing with a more in-class appropriate tool, but speaking as someone with dogshit handwriting I’m so glad to have graduated before this was a concern. Godspeed, my friend.
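For scale, a word-level Markov chain text generator really is about this much code; a toy sketch (the corpus and names here are made up, nothing TI-89-specific):

```python
# Toy word-level Markov chain text generator.
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)    # record each observed next-word
    return chain

def generate(chain: dict, start: str, length: int = 20) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in chain:               # dead end: stop early
            break
        word = random.choice(chain[word])   # sample the next word
        out.append(word)
    return " ".join(out)

corpus = "the exam answer is the answer the professor wants the exam to have"
print(generate(build_chain(corpus), "the"))
```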
I’d pipe up and go “uhhh hey prof, aren’t you being paid to, like, impart knowledge?”
(I should note that I have an extremely deficient fucks pool, and do not mind pissing off fuckwits. but I understand it’s not always viable to do)
“So, professor sir, are you OK with psychologically torturing Black people, or do you just not care?”
It was there and gone fairly quickly and I wouldn’t say I’m a model student so I didn’t say anything. I’ve talked to him about Chat GPT before though…
Character.AI Is Hosting Pedophile Chatbots That Groom Users Who Say They’re Underage
Three billion dollars and its going into Character AI AutoGroomer 4000s. Fuck this timeline.
automated grooming is just what progress is and you have to accept it. like the printing press
AI finally allowing grooming at scale is the kind of thing I’d expect to be the setup for a joke about Silicon Valley libertarians, not something that’s actually happening.
HN runs smack into end-stage Effective Altruism and exhibits confusion
Title “The shrimp welfare project” is editorialized; the original is “The Best Charity Isn’t What You Think”.
If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It’s not just because they’d be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn’t be smart later, or, in the case of the cognitively enfeebled who’d be permanently mentally stunted.
wat
So… we should be vegetarians?
No, just replace all your sense of morality with utilitarian shrimp algebra. If you end up vegetarian, so be it.
Ohhhh, so this is a forced-birther agenda item. Got it
I think the author is just honestly trying to equate freezing shrimp with torturing weirdly specifically disabled babies and senile adults medieval-style. If you said you’d pledge like $17 to shrimp welfare for every terminated pregnancy I’m sure they’d be perfectly fine with it.
I happened upon a thread in the EA forums started by someone who was trying to argue EAs into taking a more forced-birth position and what it came down to was that it wouldn’t be as efficient as using the same resources to advocate for animal welfare, due to some perceived human/chicken embryo exchange rate.
rat endgame being eugenics again?? no waaay
This entire fucking shrimp paragraph is what failing philosophy does to a mf
Did the human pet guy write this
This almost reads like an attempt at a reductio ad absurdum of worrying about animal welfare, like you are supposed to be a ridiculous hypocrite if you think factory farming is fucked yet are indifferent to the cumulative suffering caused to termites every time an exterminator sprays your house so it doesn’t crumble.
Relying on the mean estimate, giving a dollar to the shrimp welfare project prevents, on average, as much pain as preventing 285 humans from painfully dying by freezing to death and suffocating. This would make three human deaths painless per penny, when otherwise the people would have slowly frozen and suffocated to death.
Dog, you’ve lost the plot.
FWIW a charity providing the means to stun shrimp before death by freezing as is the case here isn’t indefensible, but the way it’s framed as some sort of an ethical slam dunk even compared to say donating to refugee care just makes it too obvious you’d be giving money to people who are weird in a bad way.
Not that I’m a super fan of the fact that shrimp have to die for my pasta, but it feels weird that they just pulled a 3% number out of a hat, as if morals could be wrapped up in a box with a bow tied around it, so you don’t have to do any thinking beyond “1500×0.03×1 dollars means I should donate to this guy’s shrimp startup instead of the food bank!”
Shrimp cocktail counts as vegetarian if there are fewer than 17 prawns in it, since it rounds down to zero souls.
Hold it right there criminal scum!
Image of two casually dressed guys pointing fingerguns at the camera, green beams coming out of the fingerguns: the Vegan Police from the movie Scott Pilgrim vs. The World. The cops are played by Thomas Jane and Clifton Collins Jr; the latter is wearing sunglasses even though it’s dark.
Ah you see, the moment you entered the realm of numbers and estimates, you’ve lost! I activate my trap card: 「Bayesian Reasoning」 to Explain Away those numbers. This lets me draw the「Domain Expert」 card from my deck, which I place in the epistemic status position, which boosts my confidence by 2000 IQ points!
Obviously mathematically comparing suffering is the wrong framework to apply here. I propose a return to Aristotelian virtue ethics. The best shrimp is a tasty one, the best man is a philosopher-king who agrees with everything I say, and the best EA never gets past drunkenly ranting at their fellow undergrads.
Apologies for focusing on just one sentence of this article, but I feel like it’s crucial to the overall argument:
… if [shrimp] suffer only 3% as intensely as we do …
Does this proposition make sense? It’s not obvious to me that we can assign percentage values to suffering, or compare it to human suffering, or treat the values in a linear fashion.
It reminds me of that vaguely absurd thought experiment where you compare one person undergoing a lifetime of intense torture vs billions upon billions of humans getting a fleck of dust in their eyes. I just cannot square choosing the former with my conscience. Maybe I’m too unimaginative to comprehend so many billions of bits of dust.
lol hahah.
Effective Altruism Declares War on the Entire State of Louisiana
OK to start us off how about some Simulation Hypothesis crankery I found posted on ActivityPub: Do we live in a computer simulation? (Article), The second law of infodynamics and its implications for the simulated universe hypothesis (PDF)
Someone who’s actually good at physics could do a better job of sneering at this than me, but I mean but look at this:
My law can confirm how genetic information behaves. But it also indicates that genetic mutations are at the most fundamental level not just random events, as Darwin’s theory suggests.
A super complex universe like ours, if it were a simulation, would require a built-in data optimisation and compression in order to reduce the computational power and the data storage requirements to run the simulation.
This feels like quackery but I can’t find a goal…
But if they both hold up to scrutiny, this is perhaps the first time scientific evidence supporting this theory has been produced – as explored in my recent book.
There it is.
Edit: oh God it’s worse than I thought
The web design almost makes me nostalgic for geocities fan pages. The citations that include himself ~10 times and the greatest hits of the last 50 years of physics, biology, and computer science, and Baudrillard of course. The journal of which this author is the lead editor and which includes the phrase “information as the fifth state of matter” in the scope description.
Oh God the deeper I dig the weirder it gets. Trying to confirm whether the Information Physics Institute is legit at all and found their list of members, one of whom listed their relevant expertise as “Writer, Roleplayer, Singer, Actor, Gamer”. Another lists “Hyperspace and machine elves”. One very honestly simply says “N/A”
The Gmail address also lends the whole thing an air of authority. Like, you’ve already paid for the domain, guys.
I love the word cloud on the side. What is 6G doing there
6G nanometer-wave, gently caressing your mitochondria thanks to the power of antiferromagnets and BORIS:
OK this member list experience is just 👨🍳😗👌
- Psychonaut
- Practitioner of Yoga
- Quantum, Consciousness, Christian Theology, Creativity
Perfect. No notes.
I haven’t seen qualifications this relevant and high-quality since “architects and engineers for 9/11 truth.”
the terrible trifecta
Still a bit sad we are not doing nano anymore.
You see, nano is real now and boring
But things being real doesn’t stop the cranks. See quantum.
Quantum superpredicting machines are not real, and that’s what they’re about. Nano- has lots of uninteresting bs like ultraefficient fluorescent things, but nanomachines are not real, and that’s what was interesting to them (until they got bored)
Wait for AI and Crypto 2.0 to burn out, we’ll get there
I had a flash of a vision of tomorrow, it is Nano crypto AI
Sadly it seems the next one is gonna be Quantum.
Finally computer science is a real field, there are cranks! Suck it physics and mathematics, we are a real boy now!
Has this person turned up shilling their book on Coast to Coast AM with George Noory yet? If not, I think it’s a lock for 2025
Despite the lack of evidence, this idea is gaining traction in scientific circles as well as in the entertainment industry.
lol
I sneered that in a blog post last year, as it happens.
i mean, the Ray Charles one sounds fun. My 1st year maths lecturer demonstrated the importance of not dividing by zero by mathematically proving that if 1=0, then he was Brigitte Bardot. We did actually applaud.
“feel free to ignore any science “news” that’s just a press release from the guy who made it up.”
In particular, the 2022 discovery of the second law of information dynamics (by me) facilitates new and interesting research tools (by me) at the intersection between physics and information (according to me).
Gotta love “science” that is cited by no-one and cites the author’s previous work which was also cited by no one. Really the media should do better about not giving cranks an authoritative sounding platform, but that would lead to slightly fewer eyes on ads and we can’t have that now can we.
If you’re in the mood for a novel that dunks on these nerds, I highly recommend Jason Pargin’s If This Book Exists, You’re in the Wrong Universe.
https://en.wikipedia.org/wiki/If_This_Book_Exists,_You're_in_the_Wrong_Universe
It is the fourth book in the John Dies at the End series
oh damn, I just gave the (fun but absolute mess of a) movie another watch and was wondering if they ever wrote more stories in the series — I knew they wrote a sequel to John Dies at the End, but I lost track of it after that. it looks like I’ve got a few books to pick up!
Someone (maybe you) recommended this book here a while back. But it’s the fourth book in a series so I had to read the other three first and so have only just now started it.
General sneer against the SH: I choose to dismiss it entirely for the same reason that I dismiss solipsism or brain-in-a-vat-ism: it’s a non-starter. Either it’s false and we’ve gotta come up with better ideas for all this shit we’re in, or it’s true and nothing is real, so why bother with philosophical or metaphysical inquiry?
The SH is catnip to “scientific types” who don’t recognize it as a rebrand of classical metaphysics. After all, they know how computers work, and it can’t be that hard to simulate the entire workings of a universe down to the quark level, can it? So surely someone just a bit smarter than themselves has already done it and is running a simulation with them in it. It’s basically elementary!
If you think about it, a slice of pizza is basically a computer that simulates a slice of pizza down the quark level.
Ha very clever, but as quantum level effects only occur when somebody is looking at it, they dont have to simulate it at quark level all the time. I watched what the bleep do we know, im very smart.
The “simulation hypothesis” is an ego flex for men who want God to look like them.
Since the Middle ages we’ve reduced God’s divine realm from the glorious kingdom of heaven to an office chair in front of a computer screen, rather than an office chair behind it.
You’re missing the most obvious implication, though. If it’s all simulated or there’s a Cartesian demon afflicting me then none of you have any moral weight. Even more importantly if we assume that the SH is true then it means I’m smarter than you because I thought of it first (neener neener).
But this quickly runs into the ‘don’t create your own unbreakable crypto system’ problem. There are people out there who are a lot smarter who quickly can point out the holes in these simulation arguments. (The smartest of whom go ‘nah, that is dumb’ sadly I’m not that enlightened, as I have argued a few times here before how this is all amateur theology, and has nothing to do with STEM/computer science (E: my gripes are mostly with the ‘ancestor simulation’ theory however)).
You’re doing the ~~lord’s~~ simulation-author’s work, my friend.
I don’t have the time to deep dive this RN but information dynamics or infodynamics looks to be, let’s say, “alternative science” for the purposes of trying to up the credibility of the simulation hypothesis.
How sneerable is the entire “infodynamics” field? Because it seems like it should be pretty sneerable. The first referenced paper on the “second law of infodynamics” seems to indicate that information has some kind of concrete energy which brings to mind that experiment where they tried to weigh someone as they died to identify the mass of the human soul. Also it feels like a gross misunderstanding to describe a physical system as gaining or losing information in the Shannon framework since unless the total size of the possibility space is changing there’s not a change in total information. Like, all strings of 100 characters have the same level of information even though only a very few actually mean anything in a given language. I’m not sure it makes sense to talk about the amount of information in a system increasing or decreasing naturally outside of data loss in transmission? IDK I’m way out of my depth here but it smells like BS and the limited pool of citations doesn’t build confidence.
I read one of the papers. About the specific question you have: given a string of bits s, they’re making the choice to associate the empirical distribution to s, as if s was generated by an iid Bernoulli process. So if s has 10 zero bits and 30 one bits, its associated empirical distribution is Ber(3/4). This is the distribution which they’re calculating the entropy of. I have no idea on what basis they are making this choice.
The rest of the paper didn’t make sense to me - they are somehow assigning a number N of “information states” which can change over time as the memory cells fail. I honestly have no idea what it’s supposed to mean and kinda suspect the whole thing is rubbish.
Edit: after reading the author’s quotes from the associated hype article I’m 100% sure it’s rubbish. It’s also really funny that they didn’t manage to catch the COVID-19 research hype train so they’ve pivoted to the simulation hypothesis.
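To make the objection concrete, here’s a quick sketch of the entropy assignment as described above (my own illustration of the Ber(3/4) example, not the paper’s code):

```python
# Treat a bit string as if it were drawn iid from a Bernoulli source, estimate p
# from the observed bits, and compute the binary entropy of that empirical
# distribution; this appears to be the quantity the paper is tracking.
import math

def empirical_bernoulli_entropy(bits: str) -> float:
    p = bits.count("1") / len(bits)          # empirical probability of a 1
    if p in (0.0, 1.0):                      # all-zero or all-one string: zero entropy
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# 10 zero bits and 30 one bits -> Ber(3/4), entropy ≈ 0.811 bits per symbol
print(empirical_bernoulli_entropy("0" * 10 + "1" * 30))
```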
Oh the author here is absolutely a piece of work.
Here’s an interview where he’s talking about the biblical support for all of this and the ancient Greek origins of blah blah blah.
I can’t definitively predict this guy’s career trajectory, but one of those cults where they have to wear togas is not out of the question.
Not only is the universe a simulation, the Catholics just had it right, isn’t that neat.