

Isn’t it against the law in many places to charge customers without providing a breakdown of what they’re being charged for?


you just know a company like Microsoft or Apple will eventually try suing an open source project over AI code that’s “too similar” to their proprietary code.
Doubt it. The incentives don’t align: they benefit from open source far more than they are threatened by it. Even the “embrace, extend, extinguish” idea comes from a different era, and it’s likely less profitable than the vendor lock-in and other modern practices actually in place today. Even the copyright argument could easily backfire if they just threw it into a case, given all the questionable AI training.


Even if you’re into AI coding, I never understood the hype around Cursor. In the beginning they were maybe three months ahead of the alternatives. Today you can’t even say that anymore, and they’re still “worth” billions. You can get similar prediction quality from other editors if you know how to use them, paying a fraction of the price.
Cursor also chugs tokens like a 1978 Lincoln Continental, which is how they get marginally better results, so bringing your own API key isn’t even a viable option. The first time I tried it, I asked for a simple one-line edit to a markdown file and it sent out 20k tokens before I could say “AGI is 6 months away”, and it still got the change wrong.
call me a raccoon then


maybe he calls his net worth “cognition”


I miss start menu ads, intrusive Bing searches, Copilot upselling, MSN news, and things I’ll never use but can’t uninstall, like Xbox.
the main reason I started installing more “-bin” variants of packages


I had purchased it several years ago, but this is at least the 3rd concerning headline in the past 3 years. If you’re still on that boat, jump ship.
i like trains


yes, the system will likely use some swap if available even when there’s plenty of free RAM left:
The casual reader may think that with a sufficient amount of memory, swap is unnecessary but this brings us to the second reason. A significant number of the pages referenced by a process early in its life may only be used for initialisation and then never used again. It is better to swap out those pages and create more disk buffers than leave them resident and unused.
Src: https://www.kernel.org/doc/gorman/html/understand/understand014.html
On my recently booted system with 32GB of RAM and half of that free (not even just “available”), I can already see tens of MB of swap in use.
As a rule of thumb, it’s only a concern, or an indication that the system is or was starved of memory, if a significant share of swap is in use. But even then, it might just be pages that were swapped out earlier and left there because the kernel decided to keep them instead of evicting them.
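You can check this yourself; a minimal sketch, assuming a Linux box with swap enabled (the field names come from `/proc/meminfo`):

```shell
# Show overall memory and swap usage in human-readable units.
free -h

# The raw counters behind it (values in kB). A small
# SwapTotal-SwapFree delta while MemAvailable is large is
# normal, not a sign of memory pressure.
grep -E '^(MemTotal|MemAvailable|SwapTotal|SwapFree):' /proc/meminfo
```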


if my system touches SWAP at all, it’s run out of memory
That’s a swap myth. Swap isn’t emergency memory; it creates a reclamation space on disk for anonymous pages (pages that aren’t file-backed) so that the OS can use main memory more efficiently.
The swapping algorithm does take into account the higher cost of putting pages in swap. Touching swap may just mean that a lot of system files are being cached, but that’s reclaimable space, and it doesn’t mean the system is running out of memory.
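That cost trade-off is even tunable; a sketch, assuming Linux’s `vm.swappiness` knob, which biases reclaim between swapping anonymous pages and dropping file cache (it does not switch swap on or off):

```shell
# Read the current reclaim bias (the default is typically 60).
cat /proc/sys/vm/swappiness

# Lower values make the kernel prefer dropping file cache over
# swapping out anonymous pages; 0 biases hard against swapping
# but does not disable swap entirely.
# Temporary change (until reboot), assuming root:
#   sysctl -w vm.swappiness=10
```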


sshhh 🤫


I disagree. What I used to hack together over a weekend when starting a project, I can now do in a couple of hours with AI, because starting a project is where the typing bottleneck is, thanks to all the boilerplate. I can’t type faster than an LLM.
Also, since there are hundreds of similar projects out there and I wouldn’t get to the parts that make mine unique within a weekend anyway, that’s the perfect use case for “vibe coding”.


what are sections, chapters, indices? Who’s the librarian?
we don’t need to take the metaphor all the way


you haven’t seen my frontend code


potentially relevant: paperless recently merged some opt-in LLM features, like chatting with documents and automated title generation based on the extracted OCR text.
same