

Exactly, it’s only an improvement until they’re bought and we’re all in the same boat again. We need a federated forge and open standards.


wtf
An unprivileged local user can write 4 controlled bytes into the page cache of any readable file on a Linux system, and use that to gain root.
If your kernel was built between 2017 and the patch — which covers essentially every mainstream Linux distribution — you’re in scope.
how does that only get a CVSS score of 7.8? the impact of this is huge


tl;dr clickbaity title for an article that cites xitter and tries to sell their security solutions


yeah, that was cringy
the world’s most-followed person
most followed on his own echo chamber :clap: :clap: :clap:


and again, you end up sacrificing readability to address what, a fraction of a percent in memory use? If that matters in your program, maybe don’t use JS.


Agreed, optimize it. Where it matters. Reducing the number of functions to save space on the stack when the heap has 99% of the data is nonsense.


this sounds like a pretty bad reason to justify ugly code today
any readability gain will greatly outweigh the resource savings in most situations


they are open weight and have a whitepaper, that’s already vastly better than whatever openai and anthropic are doing


it is, but it’s initially opt-in, meaning a fresh install won’t have AI features; once you opt in, you can remove them again by removing the snaps
it’s confusing because the paragraphs where they talk about opting in and about removing snaps are different ones, but 26.10 won’t ship with AI features in a fresh install.


why would that only affect “some” countries


at this rate, it will be


I’m interested in setting it up, are you using vs code? Which extension or editor?


watch me go back to debugging like a real engineer: copying and pasting from stack overflow


Users on annual Pro or Pro+ plans will remain on their existing plan with premium request-based pricing until their plan expires, however, model multipliers will increase on June 1 (see table).

holy shit, 9x the previous cost, which was already not great. I was on the fence about cancelling it, but thanks for making up my mind, MS


ok, to start with: if you need a POSIX interface to the filesystem, already have an SSH connection to that server anyway, and don’t need much consistency across multiple clients, SSHFS may do just fine. For a homelab, that is likely the case.
now, if you’re hosting a web server that needs data distributed across drives/nodes, data redundancy, and primarily programmatic access (closer to a CDN’s or a machine learning pipeline’s than to a single user browsing files), then you want an S3-compatible solution. The S3 API makes it easy to plug into your application while still letting you migrate to a different backend later - which is exactly what I’m currently doing for a MinIO deployment at work.
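that migration point is the whole appeal: with an S3-compatible backend, the application code stays the same and only the endpoint and credentials change. A minimal sketch (the endpoint URLs and keys below are made-up placeholders, and the dict is just the kwargs you would hand to something like boto3.client("s3", **cfg)):

```python
# Sketch: swapping S3-compatible backends by changing only connection config.
# All endpoint names and credentials here are illustrative placeholders.

def s3_client_config(endpoint_url, access_key, secret_key, region="us-east-1"):
    """Build the kwargs for an S3 client, e.g. boto3.client("s3", **cfg)."""
    return {
        "endpoint_url": endpoint_url,          # the only backend-specific part
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
        "region_name": region,
    }

# Same bucket/object code on top; only the config differs per backend:
minio_cfg = s3_client_config("https://minio.internal:9000", "minio-key", "minio-secret")
aws_cfg = s3_client_config("https://s3.amazonaws.com", "aws-key", "aws-secret")
```

the rest of the app keeps calling the same put/get/list operations regardless of which config it was handed, which is what makes a MinIO-to-something-else migration mostly a config change.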


SSHFS is a hack and has nothing to do with the proposal of S3-compatible backends
you got a pizza box? Fancy
Waste of energy. It’s like asking a person to estimate a non-trivial angle. Either use a model trained for that task, or don’t bother.