

good, I’ll add vetted models to my blocklist


She’s probably spending more
that’s my point: spending $35 to avoid the extra fee isn’t really a problem, because a single person easily spends that on everyday food in a week. In fact it’d be really hard to spend less than that on food.


every website will start blocking VPN IPs, more so than what some already do, which is exactly what these cunts want


$35 of groceries is only 4 to 8 meals, not to mention hygiene and household items. One person would spend much more than that weekly if they’re not dining out.


and it’ll take a few million years for Andromeda to get the news


micro for sensible defaults out of the box, and because I don’t like modal editors.
Bitwarden’s npm distribution pipeline stayed compromised for approximately 19 hours and 334 developers had enough time to pull the malicious package before it was caught.
It was actually about 90 minutes
Everyone running bw in a CI pipeline just handed the attackers whatever else happened to live on that machine.
only if they installed bw in that time window
Otherwise yes, I agree it’d be better if the CLI were written in a non-JS/TS ecosystem. Perhaps Rust or Go. And the criticisms about the list command including secrets are super valid.
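For anyone auditing after the fact: npm records each dependency’s tarball hash as an SRI string in the package-lock.json "integrity" field, so you can recompute it for the artifact you actually installed and compare it against a known-good lockfile. A minimal sketch of how that string is computed (the tarball bytes here are a stand-in, not real package data):

```python
import base64
import hashlib

def sri_sha512(data: bytes) -> str:
    # npm's "integrity" field is an SRI string: the algorithm name plus
    # the base64-encoded raw digest of the package tarball.
    digest = hashlib.sha512(data).digest()
    return "sha512-" + base64.b64encode(digest).decode("ascii")

# Stand-in bytes; in practice you'd read the downloaded .tgz and compare
# the result against the "integrity" entry in a trusted package-lock.json.
tarball = b"not a real tarball"
integrity = sri_sha512(tarball)
```

If the recomputed string differs from the lockfile entry, the tarball you have is not the one the lockfile pinned.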


ah right, and my eyes need to be recreated because they can’t see ultraviolet


Barely. Even with the code and seeds, it’s still a struggle. There are plenty of questions from people running PyTorch and TensorFlow models who can’t reproduce results. Maybe you isolate enough variables that consecutive runs actually produce the same output, but the study is about commercial models, and you’ll never get deterministic output from those.
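Even before framework- or hardware-level sources of nondeterminism, floating-point arithmetic itself is order-sensitive: addition isn’t associative, so a parallel reduction that sums values in a different order can give a different result. A self-contained illustration:

```python
# Floating-point addition is not associative, so the order of a reduction
# changes the result. Parallel frameworks don't guarantee a fixed reduction
# order on GPUs, which is one reason "identical" runs can diverge.
vals = [0.1, 1e16, -1e16, 0.1]

left_to_right = sum(vals)       # one 0.1 is absorbed by 1e16, one survives
reordered = sum(sorted(vals))   # both 0.1s are absorbed by the big values

print(left_to_right, reordered)  # 0.1 0.0
```

Same numbers, two different sums; now multiply that across billions of operations per forward pass.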


You’d expect the same answer each time. It’s the same photo, the same model, the same question. But you won’t get the same answer.
I don’t know what ads show that, but anyone who knows the first thing about LLMs knows you don’t get the same answer twice.
I’d have understood this expectation 5 years ago, when most people weren’t familiar with LLMs, but come on… you don’t need to feed it an image 500 times to see that.
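The variability isn’t mysterious: commercial LLM APIs sample from a temperature-scaled softmax rather than always taking the most likely token. A minimal sketch of that decoding step (simplified; real decoders add top-k/top-p and other machinery):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits.

    temperature == 0 means greedy decoding (always the argmax, fully
    deterministic); temperature > 0 samples from a softmax, so repeated
    calls on the same input can return different tokens.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

rng = random.Random()
logits = [2.0, 1.0, 0.5]
greedy = {sample_token(logits, 0, rng) for _ in range(10)}      # always {0}
sampled = {sample_token(logits, 1.0, rng) for _ in range(100)}  # typically several indices
```

Ask the same question twice at nonzero temperature and you are literally rolling weighted dice at every token.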


Waste of energy. It’s like asking a person to estimate a non-trivial angle. Either use a model trained for that task, or don’t bother.


Exactly, it’s only an improvement until they’re bought and we’re all in the same boat again. We need a federated forge and open standards.


wtf
An unprivileged local user can write 4 controlled bytes into the page cache of any readable file on a Linux system, and use that to gain root.
If your kernel was built between 2017 and the patch — which covers essentially every mainstream Linux distribution — you’re in scope.
how does that only get a CVSS score of 7.8? the impact of this is huge
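The 7.8 falls out of the CVSS v3.1 formula, assuming the usual vector for a local-user-to-root kernel bug (AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H): the Local attack vector and the low-privilege requirement discount exploitability heavily, even with all three impact metrics at High. A quick check of the arithmetic:

```python
import math

# CVSS v3.1 base score, Scope: Unchanged, for the assumed vector
# AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H (typical for local-priv-esc bugs).
AV_LOCAL, AC_LOW, PR_LOW, UI_NONE = 0.55, 0.77, 0.62, 0.85
C = I = A = 0.56  # "High" impact value for all three

iss = 1 - (1 - C) * (1 - I) * (1 - A)
impact = 6.42 * iss  # Scope: Unchanged
exploitability = 8.22 * AV_LOCAL * AC_LOW * PR_LOW * UI_NONE

def roundup(x):
    # CVSS "round up to one decimal place" rule
    return math.ceil(x * 10) / 10

base = roundup(min(impact + exploitability, 10))
print(base)  # 7.8
```

Swap AV:L for AV:N and the same impacts would score 9.8, which is why purely local bugs rarely clear the "critical" bar no matter how bad the outcome is.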


tl;dr clickbaity title for an article that cites xitter and tries to sell their security solutions


yeah, that was cringy
the world’s most-followed person
most followed on his own echo chamber :clap: :clap: :clap:


and again, you end up sacrificing readability to address what, a fraction of a percent in memory use? If that matters in your program, maybe don’t use JS.


Agreed, optimize it. Where it matters. Reducing the number of functions to save space on the stack when the heap has 99% of the data is nonsense.


this sounds like a pretty bad reason to justify ugly code today
any readability gain will greatly outweigh the resource savings in most situations
I’ve been seeing this for a long time now, since they blocked VPN and anonymous access, so I use a combination of cached pages and libredirect if I really want to bother going there.