THE SENATE UNANIMOUSLY passed a bipartisan bill to provide recourse to victims of porn deepfakes — or sexually-explicit, non-consensual images created with artificial intelligence.

The legislation, called the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, passed in Congress' upper chamber on Tuesday. The bill has been led by Sens. Dick Durbin (D-Ill.) and Lindsey Graham (R-S.C.), as well as Rep. Alexandria Ocasio-Cortez (D-N.Y.) in the House.

The legislation would amend the Violence Against Women Act (VAWA) to allow people to sue those who produce, distribute, or receive the deepfake pornography, if they “knew or recklessly disregarded” the fact that the victim did not consent to those images.

  • Cosmos7349@lemmy.world
    4 months ago

    That’s where the money is, so yes, that’s where the majority of the work is. But I do think one of the drivers of this is to help protect against more local instances: to create consequences for things like fake revenge porn or distributing deepfakes of classmates/teachers at your school, etc.

    • j4k3@lemmy.world
      4 months ago

      This might make the path to generating slightly harder, but it won’t do anything to stop an intelligent person. I haven’t seen a ton of info from people talking about this stuff, but from exploring on my own, especially with Stable Diffusion 3, diffusion models are very different from LLMs. The safety-alignment filtering happens external to the model, using CLIP; in the case of SD3, two CLIP models and a T5xxl LLM. The alignment filters are done with these and some trickery, and screwing with them can enable all kinds of capabilities. It is just hard to understand the effect of some tricks; for example, SD3 manually swaps an entire layer in the T5. When these mechanisms are defeated, models can generate freely, which essentially means everything is a deepfake. This is open source, so it can never be extinguished. There was a concerted effort to remove the rogue 4chanGPT, which does not have the ChatGPT-derived alignment like all other models, yet the 4chanGPT is still readily available if you know where to look.
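
      As a rough illustration of the point that the text encoders sit outside the diffusion model itself, here is a minimal sketch using the Hugging Face diffusers library. The model ID and attribute names come from the public SD3 pipeline; treat the specifics as assumptions rather than anything from the bill or this thread. It just loads the pipeline and lists the separately bundled components:

      ```python
      # Rough sketch: inspect the separately loaded components of an SD3 pipeline.
      # Assumes `pip install torch diffusers transformers` and access to the
      # gated stabilityai/stable-diffusion-3-medium-diffusers weights.
      import torch
      from diffusers import StableDiffusion3Pipeline

      pipe = StableDiffusion3Pipeline.from_pretrained(
          "stabilityai/stable-diffusion-3-medium-diffusers",
          torch_dtype=torch.float16,
      )

      # Two CLIP text encoders plus a T5 encoder are bundled alongside the
      # diffusion transformer and VAE; they are separate modules, external
      # to the denoiser itself.
      components = {
          "text_encoder (CLIP-L)": pipe.text_encoder,
          "text_encoder_2 (OpenCLIP-G)": pipe.text_encoder_2,
          "text_encoder_3 (T5xxl)": pipe.text_encoder_3,
          "transformer (MMDiT denoiser)": pipe.transformer,
          "vae": pipe.vae,
      }
      for name, module in components.items():
          n_params = sum(p.numel() for p in module.parameters()) / 1e6
          print(f"{name}: {type(module).__name__}, ~{n_params:.0f}M parameters")
      ```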

      This bill just raises the barrier to entry, and in the end it makes such content less familiar and more powerful. In reality, we would be socially stigmatizing it while accepting the new reality, IMO. This is like an arms race. You may not like that the enemy created cannons, but banning the casting of cannons within your realm will do nothing to help you in the end. Everyone in the realm may understandably hate cannons, but you really need everyone to be familiar with casting and making them, and everyone in your realm to learn how to deal with them and what to expect. The last thing you need is a lot of ignorant people on a battlefield bunching up together because they do not understand their opponents.

      These tools are also weapons. Everyone needs to understand what is truly possible, regardless of how unpleasant that may seem. They cannot have a healthy skepticism without familiarity. If they do not have familiarity, they will bunch up on a battlefield facing cannons loaded with grapeshot.

      • Cosmos7349@lemmy.world
        4 months ago

        So I haven’t dug deeply into the actual legislation, so someone correct me if I’m misinformed… but my understanding is that this isn’t necessarily trying to raise the bar for using the technology so much as trying to make clearer legal guidelines for victims to have legal recourse. If we were to relate it to other weapons, it’s like creating the law “it’s illegal to shoot someone with a gun”.

        • j4k3@lemmy.world
          4 months ago

          I have not dug deeply either, but I have noticed that Civitai has shifted its wording and hosting in ways that indicated a change was coming. In practice, the changes will come from the model-hosting sites for open-source tools limiting their liability and not hosting content related to real humans.

          My main concern is the stupid public reacting to some right-wing fake while lacking appropriate skepticism, like expecting detection tools to be magical instead of understanding the full spectrum of what is possible.