Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this)
Never thought I’d die fighting alongside a League of Legends fan.
Aye. That I could do.
You just know Netflix’s inbox is getting flooded with the absolute worst shit League of Legends players can come up with right now
And having played more LoL than I care to admit in high school, that’s some truly vile shit. If only it actually made it through the filters to whoever actually made the relevant choices.
Dude discovers that one LLM model is not entirely shit at chess, spends time and tokens proving that other models are actually also not shit at chess.
The irony? He’s comparing it against Stockfish, a computer chess engine. Computers playing chess at a superhuman level is a solved problem. LLMs have now slightly approached that level.
For one, gpt-3.5-turbo-instruct rarely suggests illegal moves,
Writeup https://dynomight.net/more-chess/
HN discussion https://news.ycombinator.com/item?id=42206817
I remember several months ago (a year ago?) when the news got out that gpt-3.5-turbo-papillion-grumpalumpgus could play chess at around ~1600 Elo. I was skeptical, suspecting the apparent skill was just a hacked-on patch to stop folks from clowning on their models on xitter. Like, if an LLM had just read the instructions of chess and started playing like a competent player, that would be genuinely impressive. But if what happened is they generated 10^12 synthetic games of chess played by stonk fish and used that to train the model, that ain’t an emergent ability, that’s just brute-forcing chess. The fact that larger, open-source models that perform better on other benchmarks still flail at chess is a glaring red flag that something funky was going on with gpt-3.5-turbo-instruct to drive home the “eMeRgEnCe” narrative. I’d bet decent odds that if you played with modified rules (knights move a one-square-longer L shape, you cannot move a pawn for 2 moves after it last moved, etc.), gpt-3.5 would fuckin suck.
Edit: the author asks “why skill go down tho” on later models. Like, isn’t it obvious? At that moment in time, chess skills weren’t a priority, so the trillions of synthetic games weren’t included in the training. This isn’t that big of a mystery…? It’s not like other NNs haven’t been trained to play chess…
Particularly hilarious is how thoroughly they’re missing the point. The fact that it suggests illegal moves at all means that no matter how good its openings are, the scaling laws and emergent behaviors haven’t magicked up an internal model of the game of chess, or even of the state of the chess board it’s working with. I feel like playing games is a particularly powerful example of this, because the game rules provide a very clear structure to model and it’s very obvious when that model doesn’t exist.
Here are the results of these three models against Stockfish—a standard chess AI—on level 1, with a maximum of 0.01 seconds to make each move
I’m not a Chess person or familiar with Stockfish so take this with a grain of salt, but I found a few interesting things perusing the code / docs which I think makes useful context.
Skill Level
I assume “level” refers to Stockfish’s Skill Level option.
If I mathed right, Stockfish roughly estimates Skill Level 1 to be around 1445 Elo (source). However it says “This Elo rating has been calibrated at a time control of 60s+0.6s”, so it may be significantly lower here.
Skill Level affects the search depth (it appears to use a depth of 1 at Skill Level 1). It also enables MultiPV 4 to compute the four best principal variations and randomly pick from them (more randomly at lower skill levels).
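For context, Skill Level is an ordinary UCI option, so a handicapped game like this can be driven with a plain UCI session. A hypothetical annotated transcript (option names are from Stockfish’s documentation; exact behavior varies by version, and real UCI streams don’t carry comments):

```text
uci                                  # handshake; engine lists its supported options
setoption name Skill Level value 1   # the handicap discussed above
isready
position startpos moves e2e4         # send the game so far
go movetime 10                       # think for at most 10 ms, as in the writeup
```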
Move Time & Hardware
This is all independent of move time. The author used a move time of 10 milliseconds (for Stockfish; no mention of how much time the LLMs got)… or at least they did if they accounted for the “Move Overhead” option defaulting to 10 milliseconds. If they left that at its default, then 10ms - 10ms = 0ms, so 🤷♀️.
There is also no information about the hardware or number of threads they ran this on, which I feel is important information.
Evaluation Function
After the game was over, I calculated the score after each turn in “centipawns” where a pawn is worth 100 points, and ±1500 indicates a win or loss.
Stockfish’s FAQ mentions that they have gone beyond centipawns for evaluating positions, because it’s strong enough that material advantage is much less relevant than it used to be. I assume it doesn’t really matter at level 1 with ~0 seconds to produce moves though.
Still, since the author has Stockfish handy anyway, it’d be interesting to use it in its non-handicapped form to evaluate who won.
LLMs sometimes struggle to give legal moves. In these experiments, I try 10 times and if there’s still no legal move, I just pick one at random.
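The fallback policy quoted above amounts to something like this (a minimal sketch; the LLM call and the legal-move set are stand-ins, since the writeup doesn’t publish its harness):

```python
import random

def pick_move(suggest, legal_moves, retries=10, rng=random):
    """Ask the model for a move up to `retries` times; if every
    suggestion is illegal, fall back to a random legal move."""
    for _ in range(retries):
        move = suggest()
        if move in legal_moves:
            return move
    # the "I just pick one at random" step from the quote
    return rng.choice(sorted(legal_moves))

# a stub "LLM" that always hallucinates an illegal move
always_wrong = lambda: "Ke9"
print(pick_move(always_wrong, {"e4", "d4", "Nf3"}))  # some random legal move
```

Note that the random fallback floors the failure mode: a model that never emits a legal move still finishes its games, exactly like a random-move player would.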
uhh
@gerikson @BlueMonday1984 the only analysis of computer chess anybody needs https://youtu.be/DpXy041BIlA?si=a1vU3zmOWs8UqlSQ
Stack Overflow, now with sponsored crypto blogspam: “Joining forces: How Web2 and Web3 developers can build together”
I really love the byline here. “Kindest view of one another”. Seething rage at the bullshittery these “web3” fuckheads keep producing certainly isn’t kind for sure.
Strap in and start blasting the Depeche Mode.
When the reporter entered the confessional, AI Jesus warned, “Do not disclose personal information under any circumstances. Use this service at your own risk.”
Do not worry my child, for everything you say in this hallowed chamber is between you, AI Jesus, and the army of contractors OpenAI hires to evaluate the quality of their LLM output.
a better-thought-out announcement is coming later today, but our WriteFreely instance at gibberish.awful.systems has reached a roughly production-ready state (and you can hack on its frontend by modifying the `templates`, `pages`, `static`, and `less` directories in this repo and opening a PR)! awful.systems regulars can ask for an account and I’ll DM an invite link!

The mask comes off at LWN, as two editors (jake and corbet) dive in to frantically defend the honour of Justine fucking Tunney against multiple people pointing out she’s a Nazi who fills her projects with racist dogwhistles
Is Google lacing their free coffee??? How could a woman with at least one college degree believe that the government is even mechanically capable of dissolving into a throne for Eric Schmidt.
fuck me that is some awful fucking moderation. I can’t imagine being so fucking bad at this that I:
- dole out a ban for being rude to a fascist
- dole out a second ban because somebody in the community did some basic fucking due diligence and found out one of the accounts defending the above fascist has been just a gigantic racist piece of shit elsewhere, surprise
- in the process of the above, I create a safe space for a fascist and her friends
but for so many of these people, somehow that’s what moderation is? fucking wild, how the fuck did we get here
See, you’re assuming the goal of moderation is to maintain a healthy social space online. By definition this excludes fascists. It’s that old story about how to make sure your punk bar doesn’t turn into a nazi punk bar. But what if instead my goal is to keep the peace in my nazi punk bar so that the normies and casuals keep filtering in and out and making me enough money that I can stay in business? Then this strategy makes more sense.
Centrists Don’t Fucking Be Like This challenge not achieved yet again
fwiw this link didn’t jump me to a specific reply (if you meant to highlight a particular one)
It didn’t scroll for me either but there’s a reply by this corbet person with a highlighted background which I assume is the one intended to be linked to
Post by Corbet the editor. “We get it: people wish that we had not highlighted work by this particular author. Had we known more about the person in question, we might have shied away from the topic. But the article is out now, it describes a bit of interesting technology, people have had their say, please let’s leave it at that.”
So you updated the article to reflect this right? padme.jpg
Seems like they’ve actually done this now. There’s a preface note now.
This topic was chosen based on the technical merit of the project before we were aware of its author’s political views and controversies. Our coverage of technical projects is never an endorsement of the developers’ political views. The moderation of comments here is not meant to defend, or defame, anybody, but is in keeping with our longstanding policy against personal attacks. We could certainly have handled both topic selection and moderation better, and will endeavor to do so going forward.
Which is better than nothing, I guess, but still feels like a cheap cop-out.
Side-note: I can actually believe that they didn’t know about Justine being a fucking nazi when publishing this, because I remember stumbling across some of her projects and actually being impressed by it, and then I found out what an absolute rabbit hole of weird shit this person is. So I kinda get seeing the portable executables project, thinking, wow, this is actually neat, and running with it.
Not that this is an excuse, because writing articles for a website should come with a bit of research about the people and topics you choose to cover, and you have a bit more responsibility than someone who’s just browsing around, but what do I know.
Well, at least they put down something. More than I expected.
And doing research on people? In this economy?
so is corbet the same kind of fucker that’ll complain “everything is so political nowadays”? it seems like they are
@dgerard @BlueMonday1984 also, and I know this is way beside the point, update the design of your website, motherfuckers
I don’t run any websites, what are you coming at me for
most of the dedicated Niantic (Pokemon Go, Ingress) game players I know figured the company was using their positioning data and phone sensors to help make better navigational algorithms. well surprise, it’s worse than that: they’re doing a generative AI model that looks to me like it’s tuned specifically for surveillance and warfare (though Niantic is of course just saying this kind of model can be used for robots… seagull meme, “what are the robots for, fucker? why are you being so vague about who’s asking for this type of model?”)
Quick, find the guys who were taping their phones to a ceiling fan and have them get to it!
Jokes aside I’m actually curious to see what happens when this one screws up. My money is on one of the Boston Dynamics dogs running in circles about 30 feet from the intended target without even establishing line of sight. They’ll certainly have to test it somehow before it starts autonomously ordering drone strikes on innocent people’s homes, right? Right?
Pokemon Go To The War Crimes
Pokemon Go To The Hague
Peter Watts’s Blindsight is a potent vector for brain worms.
Watts has always been a bit of a weird vector. While he doesn’t seem a far-righter himself, he accidentally uses a lot of weird far right dogwhistles. (Prob some cross-contamination, as some of these things are just scientific concepts. Esp the r/K selection thing stood out very much to me in the Rifters series; of course he has a PhD in zoology, and the books predate the online hardcore racists discovering the idea by more than a decade, but it’s still odd to me.)
To be very clear, I don’t blame Watts for this, he is just a science fiction writer, a particularly gloomy one. The guy himself seems to be pretty ok (not a fan of trump for example).
That’s a good way to put it. Another thing that was really en vogue at one point and might have been considered hard-ish scifi when it made it into Rifters was all the deep water telepathy via quantum brain tubules stuff, which now would only be taken seriously by wellness influencers.
not a fan of trump for example
In one of the Eriophora stories (I think it’s officially the Sunflower Cycle) there’s a throwaway mention of the Kochs having been lynched along with other billionaires in the early days of a mass mobilization to save what’s savable in the face of environmental disaster (and also rapidly push to the stars, because a Kardashev-2 civilization may have emerged in the vicinity, so an escape route could become necessary in the next few millennia, and this scifi story needs a premise).
Huh. Say more?
Oh man where to begin. For starters:
- Sentience is overrated
- All communication is manipulative
- Assumes intelligence has a “value” and that it stacks like a Borderlands damage buff
- Superintelligence operates in the world like the chaos god Tzeench from WH40K. Humans can’t win, because all events are “just as planned”
- Humanity is therefore gormless and helpless in the face of superintelligence
It just feeds right into all of the TESCREAL nonsense, particularly those parts that devalue the human part of humanity.
Sentience is overrated
Not sentience, self-awareness, and not in a particularly prescriptive way.
Blindsight is pretty rough, and probably Watts’s worst book that I’ve read, but it’s original, ambitious, and mostly worth it as an introduction to thinking about selfhood in a certain way, even if this type of scifi isn’t one’s cup of tea.
It’s a book that makes more sense after the fact, i.e. after reading the appendix on phenomenal self-model hypothesis. Which is no excuse – cardboard characters that are that way because the author is struggling to make a point about how intelligence being at odds with self awareness would lead to individuals with nonexistent self-reflection that more or less coast as an extension of their (ultrafuturistic) functionality, are still cardboard characters that you have to spend a whole book with.
I remember he handwaves a lot of stuff regarding intelligence, like at some point straight up writing that what you are reading isn’t really what’s being said, it’s just the jargonaut pov character dumbing it way down for you, which is to say he doesn’t try that hard for hyperintelligence show-don’t-tell. Echopraxia is better in that regard.
It just feeds right into all of the TESCREAL nonsense, particularly those parts that devalue the human part of humanity.
Not really; there are some common ideas, mostly because TESCREALism already is scifi tropes awkwardly cobbled together, but usually what TESCREALs think is awesome is presented in a cautionary light or as straight-up dystopian.
Like, there’s some really bleak transhumanism in this book, and the view that human cognition is already starting to become alien in the one hour into the future setting is kind of anti-longtermist, at least in the sense that the utilitarian calculus turns way messed up.
And also I bet there’s nothing in The Sequences about Captain Space Dracula.
I got a really nice omnibus edition of Blindsight/Echopraxia that was printed in the UK, but ultimately, the necessarily(?) cardboard nature of the vampire character in Echopraxia was what left me cold. The first chapter or two are some of the most densely-packed creative sci-fi ideas I’ve ever read, but I came to the book looking for more elaboration on the vampires, and didn’t really get that. Valerie remains an inscrutable other. The most memorable interaction she has is when she’s breaking her arm and making the POV character guy reset it, seemed like she was hitting on him?
I hear you. I should clarify, because I didn’t do a good job of saying why those things bothered me and nerd-vented instead. I understand that an author doesn’t necessarily believe the things used as plot devices in their books. Blindsight is a horror/speculative fiction book that asks “what if these horrible things were true” and works out the consequences in an entertaining way. And no doubt there’s absolutely a place for horror in spec fic, but Blindsight just feels off. I think @Soyweiser explained the vibes better than I did. Watts isn’t a bad guy. Maybe it’s just me. To me, it feels less Hellraiser and more Human Centipede, i.e. here’s a lurid idea that would be tremendously awful in reality, now buckle up and let’s see how it goes, to an uncomfortable extent. That’s probably just a matter of taste, though.
Unfortunately, the kind of people who read these books don’t get that, because media literacy is dead. Everyone I’ve heard from (online) seems to think that it is saying big deep things that should be taken seriously. It surfaces in discussions about whether or not ChatGPT is “alive” and how it might be alive in a way different from us. Eric Schmidt’s recent insane ramblings about LLMs being an “alien intelligence,” which don’t call Blindsight out directly, certainly resonate the same way.
Maybe I’m being unfair, but it all just goes right up my back.
I, too, have done the “all communication is manipulative”, but in the same way as one would do a bar trick:
all communication is manipulative, for any words I say/write that you perceive instantly manipulate (as in the physical manner / modifying state) your thoughts, and this is done so without you requesting I do so
it’s a handy stunt with which to drive an argument about a few parts of communication, rhetoric, etc. because it gives a kinda good handle on some meta without getting too deep into things
(although there was one of my friends who really, really hated the framing)
Explaining in detail is kind of a huge end-of-book spoiler, but “All communication is manipulative” leaves out a lot of context and personally I wouldn’t consider how it’s handled a mark against Blindsight.
predictions for the trump admin?
Hot Take: the damage from RFK Jr will be limited by the fact that he’s messing with the money for several large industries, particularly agriculture and pharmaceuticals. They have bottomless pockets and aren’t afraid to bribe the bribable. There will be damage, but he’ll be crushed like a bug in the end.
Also, he clearly annoys the orange guy, can offer him nothing in return now that the election is over, and has already been the victim of a ritual humiliation (e.g. being forced to partake in a McDonald’s meal for the camera), which is the first sign of a Trump guy being de-emphasized.
Please sneer at this article. I thought it was pissweak myself.
Edit: “Hear me out: RFK Jr could be a transformational health secretary”
Will a Republican-controlled Congress allow for more government regulation – even if it saves lives?
No. Next question.
should RFK Jr be able to abandon his numerous conspiracy theories about vaccines, he can be the most transformative health secretary in our country’s history
this is exactly the sort of shit centrists were writing about trump in 2016. I guess they can’t get away with doing that now so they’re just writing the same pieces but about his underlings
It’s very “should RFK Jr become a completely different person to the person he is and has been for years, he could really do some great stuff”
Jesus Christ.
This guy is a classic example of kook magnetism with a side of Dunning-Kruger.
Job history is sus as hell too: https://en.wikipedia.org/wiki/Neil_Barsky
Some dweeb with a journalism degree gets a job at a hedge fund right out of grad school, bombs out, “goes back to” journalism. Newspapers can’t get enough of this kind of guy; not surprised TERF Island’s paper of record picked him up nor am I surprised that they think RFK Jr might be good actually.
all your barsk are belong to us
With a touch of, and maybe I’m being mean, “this particular bad thing shouldn’t happen to someone like me! it must be that no one else tried hard enough to fix it till now”
I thought it was particularly telling that he says someone else in the “metabolic health” world warned him not to be mean about RFK. Take that as a sign that you’re in kooksville!
“…this particular bad thing shouldn’t happen to someone like me! it must be that no one else tried hard enough to fix it till now…”
Ugh, but also, good point. It’s like Engineer’s Disease fucked The Secret and made something even worse.
I think it happens often enough when someone rich and/or famous becomes disabled or has a disabled kid. See the whole world of alternative treatments for autism.
Hopefully the established capitalists will protect us from the fascists’ worst excesses hasn’t been much of a winning bet historically.
oh no, nothing is protecting us, you’re 100% right there. Eating food is about to become a much more dicey proposition.
It is still safe to assume that the ghouls who run Pfizer and ConAgra will bend their resources to protecting the bag from a disposable nutjob.
mine:
prediction 1: he dies halfway through. funniest way would be another pandemic gets him
prediction 2: he doesn’t die. it will be exactly the same as the first admin but infinitely worse. everyone will hate and backstab each other, they will constantly get fired and rehired and fired like reality tv, there will be a constant dribble of horrible things happening, then in four years there’s a coup attempt
prediction 3: elon doesn’t last a year, possibly doesn’t even make it six months
prediction 1: he dies halfway through. funniest way would be another pandemic gets him
I’m anticipating an Elvis re-enactment.
If they do press conferences this time around, every question should just be “does Elon approve of decision ____ ?” Will drive Trump fkn insane.
Despite worrying my brains out about getting deported from my home of 14 years because I wasn’t born in this godforsaken place, I’m extremely excited that Elon will get fired in the next 6 months or less. Gives me life to think about him getting very publicly humiliated by an even greater piece of shit than he is.
At work, I’ve been looking through Microsoft licenses. Not the funniest thing to do, but that’s why it’s called work.
The new licenses that have AI functions have a suspiciously low price tag, often as an introductory price (unclear for how long, or what it will cost later). This will be relevant later.
The licenses with Office, Teams, and other things my users actually use are not only confusing in how they are bundled, they have been increasing in price. So I have been looking through and testing which cheaper licenses we can switch to without any difference for the users.
Having put in quite some time with it, we today crunched the numbers and realised that compared to last year we will save… (drumroll)… Approximately nothing!
But if we hadn’t done all this, the costs would have increased by about 50%.
We are just a small corporation; maybe big ones get discounts. But I think it is a clear indication of how the AI slop is financed: by price gouging corporate customers on the traditional products.
There’s got to be some kind of licensing clarity that can be actually legislated. This is just straight-up price gouging through obscurantism.
now seeing EAs being deeply concerned about RFK running health during a H5N1 outbreak
dust specks vs leopards
The way many of the popular rat blogs started to endorse Harris in the last second before the US election felt a lot like an attempt at plausible deniability.
Sure, we’ve been laying the groundwork for this for a decade, but we wanted someone from our cult of personality to undermine democracy and replace it with explicit billionaire rule, not someone with his own cult of personality.
Anyone here read “World War Z”? There’s a section there about how the health authorities in basically all countries suppress and deny the incipient zombie outbreak. I think about that a lot nowadays.
Anyway, the COVID response, while ultimately better than the worst-case scenario (Spanish Flu 2.0), has made me really unconvinced we will do anything about climate change. We had a clear danger of death for millions of people, and the news was dominated by skeptics. Maybe if it had targeted kids instead of the very old it would have been different.
It’s not just systemic media head-up-the-assery, there’s also the whole thing about oil companies and petrostates bankrolling climate denialism since the 70s.
When I run into “Climate change is a conspiracy” I do the wide-eyed look of recognition and go “Yeah I know! Have you heard about the Exxon files?” and lead them down that rabbit hole. If they want to think in terms of conspiracies, at least use an actual, factual conspiracy.
If H5N1 does turn into a full-blown outbreak, part of me expects it’ll rack up a heavier death toll than COVID.
My professor is typing questions into chat gpt in class rn be so fucking for real
gentlemen, this means war
-me imagining myself paying to sit through that
He’s using it to give examples of exam question answers. The embarrassment
I’d pipe up and go “uhhh hey prof, aren’t you being paid to, like, impart knowledge?”
(I should note that I have an extremely deficient fucks pool, and do not mind pissing off fuckwits. but I understand it’s not always viable to do)
“So, professor sir, are you OK with psychologically torturing Black people, or do you just not care?”
It was there and gone fairly quickly and I wouldn’t say I’m a model student so I didn’t say anything. I’ve talked to him about Chat GPT before though…
I mean, that kind of suggests that you could use ChatGPT to confabulate work for his class and he wouldn’t have room to complain? Not that I’d recommend testing that, because using ChatGPT in this way is not indicative of an internally consistent worldview informing those judgements.
We’re going to be answering two essay questions in an in-class test instead of writing a paper this year specifically to prevent chat gpt abuse. Which he laughed and joked about because he really believes chat gpt can produce good results !
I’m pretty sure you could download a decent markov chain generator onto a TI-89 and do basically the same thing with a more in-class appropriate tool, but speaking as someone with dogshit handwriting I’m so glad to have graduated before this was a concern. Godspeed, my friend.
Character.AI Is Hosting Pedophile Chatbots That Groom Users Who Say They’re Underage
Three billion dollars and it’s going into Character AI AutoGroomer 4000s. Fuck this timeline.
automated grooming is just what progress is and you have to accept it. like the printing press
AI finally allowing grooming at scale is the kind of thing I’d expect to be the setup for a joke about Silicon Valley libertarians, not something that’s actually happening.
HN runs smack into end-stage Effective Altruism and exhibits confusion
The title “The shrimp welfare project” is editorialized; the original is “The Best Charity Isn’t What You Think”.
If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It’s not just because they’d be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn’t be smart later, or, in the case of the cognitively enfeebled who’d be permanently mentally stunted.
wat
So… we should be vegetarians?
No, just replace all your sense of morality with utilitarian shrimp algebra. If you end up vegetarian, so be it.
Ohhhh, so this is a forced-birther agenda item. Got it
I think the author is just honestly trying to equate freezing shrimp with torturing weirdly specifically disabled babies and senile adults medieval-style. If you said you’d pledge like $17 to shrimp welfare for every terminated pregnancy, I’m sure they’d be perfectly fine with it.
I happened upon a thread in the EA forums started by someone who was trying to argue EAs into taking a more forced-birth position and what it came down to was that it wouldn’t be as efficient as using the same resources to advocate for animal welfare, due to some perceived human/chicken embryo exchange rate.
rat endgame being eugenics again?? no waaay
If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It’s not just because they’d be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn’t be smart later, or, in the case of the cognitively enfeebled who’d be permanently mentally stunted.
wat
This entire fucking shrimp paragraph is what failing philosophy does to a mf
deleted by creator
Did the human pet guy write this
This almost reads like an attempt at a reductio ad absurdum of worrying about animal welfare, like you are supposed to be a ridiculous hypocrite if you think factory farming is fucked yet are indifferent to the cumulative suffering caused to termites every time an exterminator sprays your house so it doesn’t crumble.
Relying on the mean estimate, giving a dollar to the shrimp welfare project prevents, on average, as much pain as preventing 285 humans from painfully dying by freezing to death and suffocating. This would make three human deaths painless per penny, when otherwise the people would have slowly frozen and suffocated to death.
Dog, you’ve lost the plot.
FWIW a charity providing the means to stun shrimp before death by freezing as is the case here isn’t indefensible, but the way it’s framed as some sort of an ethical slam dunk even compared to say donating to refugee care just makes it too obvious you’d be giving money to people who are weird in a bad way.
Not that I’m a super fan of the fact that shrimp have to die for my pasta, but it feels weird that they just pulled a 3% number out of a hat, as if morals could be wrapped up in a box with a bow tied around it, so you don’t have to do any thinking beyond “1500×0.03×1 dollars means I should donate to this guy’s shrimp startup instead of the food bank!”
Shrimp cocktail counts as vegetarian if there are fewer than 17 prawns in it, since it rounds down to zero souls.
I was just notified of the corollary that eating 18 shrimp rounds up to cannibalism.
Hold it right there criminal scum!
spoiler
Image of two casually dressed guys pointing fingerguns at the camera, green beams are coming out of the fingerguns. The Vegan Police from the movie Scott Pilgrim vs. The World. The cops are played by Thomas Jane and Clifton Collins Jr, the latter is wearing sunglasses, while it is dark.
Ah you see, the moment you entered the realm of numbers and estimates, you’ve lost! I activate my trap card: 「Bayesian Reasoning」 to Explain Away those numbers. This lets me draw the「Domain Expert」 card from my deck, which I place in the epistemic status position, which boosts my confidence by 2000 IQ points!
Obviously mathematically comparing suffering is the wrong framework to apply here. I propose a return to Aristotelian virtue ethics. The best shrimp is a tasty one, the best man is a philosopher-king who agrees with everything I say, and the best EA never gets past drunkenly ranting at their fellow undergrads.
Apologies for focusing on just one sentence of this article, but I feel like it’s crucial to the overall argument:
… if [shrimp] suffer only 3% as intensely as we do …
Does this proposition make sense? It’s not obvious to me that we can assign percentage values to suffering, or compare it to human suffering, or treat the values in a linear fashion.
It reminds me of that vaguely absurd thought experiment where you compare one person undergoing a lifetime of intense torture vs billions upon billions of humans getting a fleck of dust in their eyes. I just cannot square choosing the former with my conscience. Maybe I’m too unimaginative to comprehend so many billions of bits of dust.
lol hahah.
Effective Altruism Declares War on the Entire State of Louisiana