• 11 Posts
  • 89 Comments
Joined 1 year ago
Cake day: July 19th, 2023

  • I haven’t done a headcount yet and the election’s not fully tallied, but I think that the Senate still has around 70% support for NATO, and historically we can expect to see a “blue dog” phenomenon in the House as a reaction to Republicans gaining seats. Effectively, both the Democrats and Republicans will function as big tents of two distinct parties, and there is usually tripartisan support (everybody but the far-right Republicans) for imperialism. We may well see votes where the legislators override presidential vetoes to force weapons sales and otherwise fulfill NATO obligations.

    And yes, you read that correctly; Democrats move right as a reaction to Republicans doing well. Go back to bed, America…

  • It’s almost completely ineffective, sorry. It’s certainly not as effective as exfiltrating weights via neighborly means.

    On Glaze and Nightshade, my prior rant hasn’t yet been invalidated, and there’s no upcoming mathematics that tilts the scales in favor of anti-training techniques. In general, scrapers for training sets are now augmented with alignment models, which check how well an image’s tags line up with its actual content (see the sketch after this comment); your example might be rejected as insufficiently normal-cat-like.

    I think that “force-feeding” is probably not the right metaphor. At scale, more effort goes into cleaning and tagging than into scraping; most of that “forced” input is destined to be discarded or retagged.
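
    To make the alignment-model step concrete, here is a minimal sketch of that kind of filter, assuming a CLIP-style image–text similarity check; the checkpoint, the helper name, and the threshold are illustrative assumptions, not details of any real scraping pipeline.

    ```python
    # A rough sketch, not any specific scraper's code: use CLIP image-text
    # similarity as the "alignment model" that decides whether a scraped
    # (image, tag) pair is kept. The checkpoint, helper name, and threshold
    # are assumptions for illustration only.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def passes_alignment_filter(image_path: str, tag: str, threshold: float = 0.25) -> bool:
        """Keep the pair only if the tag plausibly describes the image."""
        image = Image.open(image_path)
        inputs = processor(text=[tag], images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            outputs = model(**inputs)
        # logits_per_image is the image-text cosine similarity scaled by CLIP's
        # learned logit scale (roughly 100 for the OpenAI checkpoints); undo the
        # scale so the score can be compared against a plain-similarity threshold.
        score = outputs.logits_per_image.item() / 100.0
        return score >= threshold

    # A perturbed cat photo tagged "a photo of a cat" can score low here and be
    # dropped or re-tagged long before it ever reaches a training run.
    ```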

  • Hallucinations — which occur when models authoritatively state something that isn’t true (or, in the case of an image or a video, make something that looks…wrong) — are impossible to resolve without new branches of mathematics…

    Finally, honesty. I appreciate that the author understands this, even if they might not have the exact knowledge required to substantiate it. For what it’s worth, the situation is more dire than this; we can’t even describe the new directions required. My fictional-universe theory (FU theory) shows that a knowledge base cannot know whether its facts describe the real world or a fictional world that has a great deal in common with the real world; a toy illustration follows. (Humans don’t want to think about this, because of the implication.)
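
    Here is that toy illustration (the entities and facts are made up for the example; this is not a formal statement of the theory): the same stored facts are satisfied under a real-world reading and a fictional reading, and no query over the facts alone can tell the two readings apart.

    ```python
    # Toy illustration only: a tiny knowledge base of triples, answered
    # identically under a "real world" reading and a "fictional world" reading.
    # The names and facts are invented for the example.
    FACTS = {
        ("London", "located_in", "England"),
        ("Holmes", "lives_in", "London"),
    }

    def kb_answer(query: tuple) -> bool:
        """All the knowledge base can do is look up its own facts."""
        return query in FACTS

    # Reading 1: "London" denotes the real city, and the Holmes fact is a hoax entry.
    # Reading 2: every name denotes an entity inside Doyle's fiction.
    # Both readings make every stored fact true, so every query returns the same
    # answer either way; nothing inside the knowledge base can distinguish them.
    for q in [("London", "located_in", "England"), ("Holmes", "lives_in", "Berlin")]:
        print(q, kb_answer(q))
    ```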