I assume the KDE implementation resizes to default when you stop shaking it.
I could totally see someone coding a function that increases the mouse pointer by x% every y mouse shakes, and then neglecting to put in a size cap.
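You can sketch that hypothetical bug (and the fix) in a few lines; to be clear, everything below is made up for illustration and is not KDE's actual code:

```python
# Toy sketch of shake-to-grow cursor logic; all names/numbers invented.
GROWTH_FACTOR = 1.15    # grow the cursor 15% per step ("x%")
SHAKES_PER_STEP = 3     # a step happens every 3 shakes ("y shakes")
MAX_SCALE = 4.0         # the cap that's easy to forget
DEFAULT_SCALE = 1.0

scale = DEFAULT_SCALE
shake_count = 0

def on_shake() -> None:
    """Called each time rapid back-and-forth motion is detected."""
    global scale, shake_count
    shake_count += 1
    if shake_count % SHAKES_PER_STEP == 0:
        # Without the min(), the cursor grows without bound.
        scale = min(scale * GROWTH_FACTOR, MAX_SCALE)

def on_shake_stop() -> None:
    """Shaking stopped: snap back to the default size."""
    global scale, shake_count
    scale = DEFAULT_SCALE
    shake_count = 0
```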
This is kinda old information, but my understanding was that there were 3 issues with daisy-chained UPSes.
The first is that you’re potentially going to cause a ground loop, which is not healthy for the life of anything plugged into those UPSes.
The second is that there’s a potential for voltage droop going from the first UPS to the second, which can make the second UPS flap between mains and battery constantly and screw its batteries up, though I’d be shocked if that was still true for modern high-quality units.
And of course, the first UPS won’t be outputting a proper sine wave when it’s on battery, which means the 2nd UPS in the chain will freak out (though again, maybe modern ones don’t have that limitation).
Yeah, and Windows and OS X both do it as well.
Though there being no upper limit to the size is amusing.
comparative scale of the content involved
PhotoDNA is based on image hashes, plus some magic that works on partial hashes: resizing the image, changing the focal point, or fiddling with the color depth or whatever won’t break a PhotoDNA identification.
But, of course, that means that for PhotoDNA to be useful, the ‘training set’ is literally ‘every CSAM image in existence’, so it’s not like you’re working from a lot less data than an AI model would want or need.
The big safeguard, such as it is, is that you basically only query an API with an image and it tells you if PhotoDNA has it in the database, so there’s no chance of the training data being shared.
Of course, there’s also no reason you can’t do that with an AI model, either, and I’d be shocked if that’s not exactly how they’ve configured it.
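For the curious, the general shape of that kind of fuzzy matching is easy to sketch. PhotoDNA itself is proprietary, so this uses the open-source imagehash library’s perceptual hash as a stand-in, with an invented hash list and threshold; the real service keeps the database server-side and just answers over an API:

```python
# Toy fuzzy-match sketch using the open-source imagehash library as a
# stand-in for PhotoDNA's proprietary algorithm. The hash list and the
# distance threshold are invented for illustration.
from PIL import Image
import imagehash

# Pretend database of perceptual hashes of known-bad images.
KNOWN_HASHES = [imagehash.hex_to_hash("d1d1b1a1c3c3e1f0")]

# Small Hamming distances survive resizes, recompression, minor edits.
MATCH_THRESHOLD = 8

def check_image(path: str) -> bool:
    """Return True if the image is 'close enough' to a known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

print(check_image("upload.jpg"))
```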
The problem I ran into is that every single platform that primarily interacted with Mastodon (the keys, etc.) had the exact same set of problems.
While yes, my Firefish instance had search, what was it searching? Local data only, and once I figured out that Mastodon-style replies didn’t federate to all of someone’s followers, it became pretty clear that it was uh, not very useful.
You can search, but any given server may or may not have access to data you actually want and thus, well, you just plain cannot meaningfully search for shit unless you go to one of the mega instances, or join giant piles of relays and store gigabyte upon gigabyte upon gigabyte of garbage data you do not care about.
The whole thing is kinda garbage for search-based discovery, from its very basic design all the way through to everyone’s implementations.
first time law enforcement are sharing actual csam with a technology company
It’s very much not: PhotoDNA, which is/was the gold standard for content identification, is a collaboration between a whole bunch of LEOs and Microsoft. The end user is only going to get a ‘yes/no idea’ result on a matched hash, but that database was built on real content working with Microsoft.
Disclaimer: below is my experience dealing with this shit from ~2015-2020, so ymmv, take it with some salt, etc.
Law enforcement is rarely the first responder to these issues, either: in the US, at least, reports go to the hosting/service provider first for validation and THEN to NCMEC and LEOs, if the hosting provider confirms what the content is. Even reports that come from NCMEC to the provider aren’t being handled by law enforcement as the first step, usually.
And as for validating reports: that’s done by a human looking at the content, without all the ‘access controls and safeguards’ you think there are, other than a very thin layer of CYA on the part of the company involved. You get a report, and once PhotoDNA says ‘no fucking clue, you figure it out’ (which, IME, was basically 90% of the time), a human is going to look at it, make a determination, and then file a report with NCMEC or whatever if it turns out to be CSAM.
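Roughly, that flow looks like this; every name below is a made-up stand-in for a step in the process, not any real API:

```python
# Hypothetical sketch of the provider-side triage flow described above.
# Every function here is a made-up stand-in for a step, not a real API.
from enum import Enum, auto

class Verdict(Enum):
    KNOWN_MATCH = auto()   # the hash database already knows this image
    NO_MATCH = auto()      # "no fucking clue, you figure it out"

def query_hash_database(image: bytes) -> Verdict:
    """Stand-in for a PhotoDNA-style lookup."""
    return Verdict.NO_MATCH  # ~90% of the time, in my experience

def human_review_confirms(image: bytes) -> bool:
    """Stand-in for the manual review a human ends up doing."""
    return False

def file_ncmec_report(image: bytes) -> None:
    """Stand-in for filing the actual report with NCMEC."""
    print("report filed")

def triage_report(image: bytes) -> None:
    if query_hash_database(image) is Verdict.KNOWN_MATCH:
        file_ncmec_report(image)       # known material: report directly
    elif human_review_confirms(image):
        file_ncmec_report(image)       # new material: human confirmed it
```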
Frankly, after having done that for far too fucking long, if this AI tool can reduce the amount of horrible shit someone doing the reviews has to look at, I’m 100% for it.
CSAM is (grossly) a big business, and the ‘new content’ funnel is fucking enormous, which is why an extremely delayed and reactive thing like PhotoDNA isn’t all that effective: there’s a fuckload of children being abused and a fuckload of abusers escaping being caught, simply because there’s too much shit to look at and handle effectively, and thus any response to anything is super, super slow.
This looks like a solution that means fewer people have to be involved in validation, and it could be damn near instant in responding to suspected material that does need validation. That would do a good job of at least pushing the shit out of easy (ier?) availability and out of more public spaces, which, honestly, is probably the best thing that’s going to be managed unless the countries producing this shit start caring and going after the producers, which I’m not holding my breath on.
For me, it’s full text search.
I tend to want to find an opinion on something very specific, so if I can just toss a phrase or model number or name of something into a search field and get actual non-AI, non-advertisement, non-stupid-shit results, that’d be absolutely ideal.
Like, say, how Google worked 15 years ago.
First: you’d probably be shocked how many pedos have zero opsec and just post/upload shit in the clear.
By which I mean most of them, because those pieces of crap don’t know shit about shit and don’t encrypt anything and just assume crap is private.
And second, yeah, it’ll catch kids generating CSAM, but it’ll catch everyone else too, so that’s probably a fair trade.
Install it and use it?
Their PDS is self hosted, but it does still rely on the central relays (though you COULD host that yourself if you wanted to pay for it, I suppose?).
It’s very centralized, but it’s not that different from what you’d have to do to make Mastodon useful: a small or single-user instance will get zero content, even if you follow a lot of people, unless you also add several relays to work around some of the design decisions the Mastodon team made regarding replies and how federation works for those kinds of things, as well as to populate hashtags and searches and such.
Though really you shouldn’t do any of that, and should just use a good platform for discussion, like a forum or a threadiverse platform. (No, seriously, I absolutely hate “microblog” shit because it’s designed for zingers and hot takes, not actual meaningful conversations.)
15 million Series A financing
Maybe shitty corporate search engines are failing me, but has there been a stated valuation for Bluesky? Googling “Bluesky valuation” or any combination thereof is a problem, since that’s a generic business term, so lol, lmao, search engine worthless.
The $8m seed + $15m Series A may have been for a shockingly small amount of equity, or it could be the whole damn company, but I’m just not seeing it actually posted anywhere.
I gather that’s a meme that’s older than you are?
By Linux ISOs I meant any content you’re torrenting: movies, software, audio, my little pony porn, whatever.
Frankly, it probably means absolutely nothing.
Even when Captain Coffee Cup was the FCC chairman, did you lose the ability to torrent Linux ISOs? Did Usenet stop working?
I wouldn’t expect anything different this time, either.
That’s a wee bit revisionist: Zen/Zen+/Zen2 were not especially performant, and Intel still ran circles around them with Coffee Lake chips, though in fairness that was probably because Zen forced Intel to stuff more cores into them.
Zen3 and newer, though, yeah, Intel has been firmly in 2nd place or 1st place with asterisks.
But the last 18 months have had them fucking up in such a way that if you told me they were doing it on purpose, I wouldn’t really doubt it.
It’s not so much failing to execute well-conceived plans as it is shipping meltingly hot, sub-par-performing chips that turned out to self-immolate, combined with giving up on being their own fab, and THEN torching the relationship with TSMC before they’d even launched their first TSMC-fabbed products.
You could write the story as a malicious, evil CEO wanting to destroy the company, and it’d read much the same as what’s actually happening right now (not that I think Patty G is doing that, mind you).
Yeah but it’s priced the same as a cheap laptop and/or desktop, which of course doesn’t then require you to pay monthly to actually use the stupid thing.
It feels like another ‘Microsoft asked Microsoft what Microsoft management would buy, and came up with this’ product, and less like one that actually has a substantial market, especially when you’re trying to sell a $350 box that then costs you $x a month to actually use as a ‘business solution’.
This would probably be a cool product at $0 with-a-required-contract-with-Azure, but at $350… meh, I suspect it’s a hard sell given the VDI stuff on Azure isn’t cheap.
Amazing what happens when your primary competitor spends 18 months stepping on every rake they can find.
And then, having run out of rakes, they deeply invest in a rake factory so they can keep right on stepping on them.
This’ll probably be a lot more interesting a year from now, given that the product lines for the next ~9 months or so are out and uh, well…
Yeah, it doesn’t appear that PSSR (which I cannot help but pronounce with an added i) is the highest-quality upscaling out there, and that, combined with console gamers never having experienced FSR/FSR2/FSR3’s, uh, specialness, is leading to people being confused about why their faster console looks worse.
Hopefully Sony does something about the less-than-stellar quality in a PSSR2 or something relatively quickly, or they’re going to burn a lot of goodwill around the whole concept, much like how FSR is considered pretty much trash by PC gamers.
Yeah but all Google needs to do is back up a dump truck of cash to Mar A Lago, and he’ll forget all about whatever it was he didn’t like about Google and immediately start tweeting how he’s the bigliest fan of all the very good things Google is doing, so I’m going to skip the breath holding bit.
really effects performance that much
Depending on the exact flags, some workloads will be faster, some will be identical, and some will be slower. Compiler optimization is some dark magic that relies on a ton of factors, and you can’t just assume that going from, say, -O2 to -O3 will provide better performance, since what the optimizations actually do depends on the underlying code… which is why, for the most part, everyone suggests you stop at -O2, since you can start getting unexpected behavior the further up the curve you go.
And we’re talking low single-digit performance improvements at best, not anything that anyone who isn’t running benchmarks 24/7 would ever even notice in real-world performance.
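If you want to know which flag actually wins for your workload, measure it instead of assuming. A throwaway sketch (it assumes gcc is on your PATH, and the toy loop is obviously not representative of Firefox or a game):

```python
# Toy benchmark: compile the same code at -O2 and -O3 and time both.
import os
import subprocess
import tempfile
import time

C_SOURCE = r"""
#include <stdio.h>
int main(void) {
    double acc = 0.0;
    for (long i = 0; i < 200000000L; i++)
        acc += (double)(i % 7);
    printf("%f\n", acc);
    return 0;
}
"""

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "bench.c")
    with open(src, "w") as f:
        f.write(C_SOURCE)
    for flag in ("-O2", "-O3"):
        exe = os.path.join(tmp, "bench" + flag)
        subprocess.run(["gcc", flag, src, "-o", exe], check=True)
        start = time.perf_counter()
        subprocess.run([exe], check=True, stdout=subprocess.DEVNULL)
        print(flag, f"{time.perf_counter() - start:.3f}s")
```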
Disclaimer: there are workloads that are going to show different performance uplifts, but we’re talking Firefox and KDE and games here, per the OP’s comments.
Also, they do default to a different scheduler, which is almost certainly why anyone using it will notice it feels “faster”, but it’s mainlined in the kernel, so you can use it anywhere else, too.
Yeah, I think you’ve made a mistake in thinking that this is going to be usable as generative AI.
I’d bet $5 this is just a fancy machine learning algorithm that takes a submitted image, does machine learning nonsense with it, and returns a ‘there is a high probability this is an illicit image of a child’, and not something you could use to actually generate CSAM with.
You want something that’s capable of assessing the similarity between a submitted image and a group of known-bad images, but that doesn’t mean the dataset is in any way usable for anything other than that one specific task. AI/ML in use cases like this is super broad and was a thing for decades before ‘AI == generative AI’ became what everyone assumes.
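To make the discriminative-vs-generative distinction concrete, here’s a sketch of the kind of model I mean (PyTorch, untrained, with a made-up architecture; purely illustrative, not the actual tool):

```python
# Illustrative classifier: image in, one scalar score out, nothing generative.
# Untrained, with a made-up architecture; purely for showing the shape.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 1),   # a single output: a match score
    nn.Sigmoid(),       # squashed into a 0..1 'probability'
)

image = torch.rand(1, 3, 224, 224)  # stand-in for a submitted image
score = classifier(image).item()
print(f"probability this matches known-bad material: {score:.2f}")
# There is no decoder here: the model maps images to a scalar and can't
# be run 'in reverse' to produce images.
```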
But, in any case: the PhotoDNA database is in one place, and access to it is gated by the merit of, uh, having lots of money?
And of course, any ‘unscrupulous engineer’ who has plans for doing anything with this is probably not a complete idiot, even if they’re a pedo: these systems are going to have shockingly good access controls and logging, and if you’re in the US and the dude takes this database and generates a couple of CSAM images using it, the penalty is, for most people, spending the rest of their life in prison.
Feds don’t fuck around with creation or distribution charges.