Eager Eagle
- 19 Posts
- 2.49K Comments
I’ve never worked on a codebase where using an ORM wasn’t better than rolling your own queries. What are people writing that they actually need the marginal performance gains? And even if those gains are worth it, why not just use raw queries in your critical paths only?
Every time I have to write or modify raw SQL, it feels like I’m throwing away all my static checking and increasing the chance of bugs, because I have no idea whether the query matches my schema or whether it’ll blow up at runtime.
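A minimal sketch of that runtime-failure concern, using only Python’s stdlib sqlite3 (the table and column names here are made up for illustration): the raw SQL string is opaque to type checkers and linters, so a typo only surfaces when the statement actually executes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# The SQL string passes every static check; the column typo below
# is only caught when the database parses the statement at runtime.
try:
    conn.execute("SELECT nmae FROM users")  # typo: "nmae"
except sqlite3.OperationalError as e:
    print("runtime error:", e)

rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # [('alice',)]
```

An ORM or a typed query builder moves that same failure to import time or to a type-check pass instead of leaving it for production.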
I did it quietly 4 years ago and only told some closer relatives. It was kind of funny when relatives I wasn’t talking to told my closer ones that we were keeping in touch on WhatsApp, not even aware I hadn’t been on the platform for over 6 months at that point.
Now, years later, having left 2 family groups, politics, and a whole lot of drama behind, I feel it was a great decision, and my only regret is not doing it earlier.
> in order to crack down on AI bots

*unprofitable bots. I’m sure they’ll have no issues allowing bots that align with their interests.
Eager Eagle@lemmy.world to Technology@lemmy.world • French aircraft carrier Charles de Gaulle tracked via Strava activity in OPSEC failure • English
5 · 6 days ago
You can’t compare it to the islands, because the GPS trace is in an area within that green circle, at a different scale. You can only go by the 300 m scale in the bottom right, which looks in the ballpark of an aircraft carrier to me.
Eager Eagle@lemmy.world to Programming@programming.dev • Your thoughts on Code Reviews • English
3 · 6 days ago
The way I see it, for any code review there are going to be different levels of recommendation attached to the comments. When I review, I try to make it clear what’s optional (/ nitpick) and what I’d really like to see fixed before I can approve.
So even making some assumptions, I can’t choose between 4 and 5, because optional and “less optional” changes are often in the same PR.
The only one I haven’t done much of is #3. That one seems better suited to questions about code that was already reviewed, merged, and is likely in production.
Eager Eagle@lemmy.world to Programming@programming.dev • Your thoughts on Code Reviews • English
3 · 6 days ago
As with a lot of things in life, it depends.
I use 1–5 depending on the repo, who made the change, what the change is about, and how involved I am in the project.
Though the “time-frame” idea of #4 is usually replaced by a conversation when it’s a coworker, as that’s more effective.
Q: about #3, do you mean code that is already merged / committed to the default branch?
Eager Eagle@lemmy.world to Privacy@lemmy.ml • out of the loop, what's the problem with signal? • English
116 · 6 days ago
The problem is that you didn’t bring much, and it sounds like you’re trying to spread FUD yourself:
- you didn’t quote the original comment
- you didn’t elaborate on the misinformation or how it could be a problem for Signal
- your questions immediately assume it (whatever it is) is true
Eager Eagle@lemmy.world to Privacy@lemmy.ml • out of the loop, what's the problem with signal? • English
91 · 6 days ago
Why are you making a post instead of replying to a comment?
Eager Eagle@lemmy.world to Technology@lemmy.world • Our commitment to Windows quality • English
11 · 6 days ago
so… some really basic shit that should have been expected in a pre-2010 update + AI
Well done, guys. I guess you gotta start somewhere.
you’re the one comparing it to Linux
Eager Eagle@lemmy.world to Linux@lemmy.ml • Dolphin: order files/folder priority, prioritize `space` over `.` e.g. `1 A` folder above `1.1` • English
1 · 6 days ago
I don’t think you can have both.
Eager Eagle@lemmy.world to Technology@lemmy.world • French aircraft carrier Charles de Gaulle tracked via Strava activity in OPSEC failure • English
7 · 6 days ago
You don’t need that assumption. Your assumption can just be “the person and the vessel (or a point in the vessel, like its center of mass) don’t diverge significantly over time”.
Then, if you treat velocity as a vector and compute the person’s average velocity vector over time, you’ll get a pretty close estimate of the vessel’s velocity vector.
After all, if those two average vectors (vessel’s and person’s) were to differ much, they would end up in different locations.
The average basically zeroes the vector for each lap the person does, so the remainder must be the vessel’s.
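The averaging argument above can be checked with a toy simulation (all numbers here are invented): a runner does laps on a deck drifting east at 10 m/s, and the net displacement over time recovers the drift because each closed lap contributes zero.

```python
import math

SHIP_V = (10.0, 0.0)   # m/s, assumed eastward drift of the vessel
LAP_RADIUS = 100.0     # m, made-up lap geometry
DT = 1.0               # s, sample interval
N = 600                # samples: exactly 10 one-minute laps below

# Runner's position = ship's drift + a closed circular lap.
positions = []
for i in range(N + 1):
    t = i * DT
    angle = 2 * math.pi * (t / 60.0)   # one lap per minute
    x = SHIP_V[0] * t + LAP_RADIUS * math.cos(angle)
    y = SHIP_V[1] * t + LAP_RADIUS * math.sin(angle)
    positions.append((x, y))

# Average velocity = net displacement / elapsed time.
dx = positions[-1][0] - positions[0][0]
dy = positions[-1][1] - positions[0][1]
avg_v = (dx / (N * DT), dy / (N * DT))
print(avg_v)  # ~(10.0, 0.0): the laps cancel out, the ship's drift remains
```

The laps cancel only because the sampling window covers whole laps; over a partial lap the runner’s own motion would leak into the estimate, which is why averaging over a long track works so well.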
Eager Eagle@lemmy.world to Selfhosted@lemmy.world • Local AI Companion for Radarr: Uses Ollama to Understand Your Taste (Vibe, Atmosphere, Mood), Auto-Complete Sagas, Explore Filmographies, and Add Movies Seamlessly • English
6 · 6 days ago
yeah, I think the whole “water” argument really dilutes the case against data centers.
On a serious note, the argument works for areas that already struggle to supply enough water to consumers. Otherwise, we should focus more on the stress on the power grid, and on the domino effect of hardware cost increases rippling through supply chains across many industries. It started with GPUs; now it’s CPUs, storage, networking equipment, and other components.
If these prices are too high for a couple of years, we’ll start seeing generalized price increases as companies need to pass along the costs to consumers.
Eager Eagle@lemmy.world to Selfhosted@lemmy.world • Local AI Companion for Radarr: Uses Ollama to Understand Your Taste (Vibe, Atmosphere, Mood), Auto-Complete Sagas, Explore Filmographies, and Add Movies Seamlessly • English
13 · 6 days ago
It’s not; I read the code. It’s not merely asking the LLM for recommendations, it’s using embeddings to compute scores based on similarity.
It’s a lot closer to traditional natural language processing than to how my dad would use GPT to discuss philosophy.
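A minimal sketch of that kind of embedding-based scoring (the vectors and titles below are invented; a real setup would get embeddings from a model, e.g. via Ollama’s embeddings endpoint): candidates are ranked by cosine similarity to a “taste” vector rather than by asking the LLM to pick.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

taste = [0.9, 0.1, 0.3]  # hypothetical "user taste" embedding
candidates = {
    "Movie A": [0.8, 0.2, 0.4],
    "Movie B": [0.1, 0.9, 0.0],
}

# Rank candidates by similarity to the taste vector.
ranked = sorted(candidates, key=lambda m: cosine(taste, candidates[m]), reverse=True)
print(ranked)  # ['Movie A', 'Movie B']
```

The LLM only supplies the vectors; the actual recommendation is plain vector math, which is the distinction being made above.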
Ok, I’m not suggesting we replace humans with AI, and I despise companies trying to pull off that unsustainable practice.
With that out of the way, I’ll restate that LLMs follow some rules more reliably than humans do today. It’s also easier to give feedback when you don’t have to worry about coming across as a pedantic prick for pointing out the smaller things.
On your point that LLMs are not improving: well, agents and tooling definitely are. 6 months ago I would need to babysit an agent to implement a moderately complex feature touching a handful of files. Nowadays, not as much. It might get some things wrong, but usually because it lacks context rather than ability. Agents can write tests, run them, and iterate until they pass; then I can just look at the diff to make sure the tests and the solution make sense. Again, something that would have failed to yield decent results just last year.
Eager Eagle@lemmy.world to Selfhosted@lemmy.world • Local AI Companion for Radarr: Uses Ollama to Understand Your Taste (Vibe, Atmosphere, Mood), Auto-Complete Sagas, Explore Filmographies, and Add Movies Seamlessly • English
7 · 6 days ago
No, it also doesn’t do that. It gets embeddings from an LLM and uses them to rank candidates.
can’t be worse than that screen
fitting translation for “meta”