Images depicting war-torn Ukraine are being generated by AI services, sold on stock photo websites and used in media coverage of the conflict.

    • SubArcticTundra@lemmy.ml · 6 points · 18 days ago

      The most terrifying thing is that humanity still hasn’t invented a way to stop the development of harmful technology. It’s a race to the bottom. Some might say we’ve managed it with nukes, but I can’t imagine that solution working for AI.

      • hendrik@palaver.p3x.de · 4 points · 18 days ago

        I think there is a fundamental issue with stopping technology: a lot of it is dual-use. You can stab someone with a kitchen knife. Kill someone with an axe. There are legitimate uses for guns… You can use the internet to do evil things. Yet no one wants to cut their steak with a spoon… I think the same applies to AI. It’s massively useful to have machine translation, voice recognition, smartphone cameras, and even smart assistants and chatbots at hand. And I certainly hope they’ll help with some of the big issues of the 21st century. I don’t think you want to outlaw things like that, unless you’re the Amish.

        • superkret@feddit.org · 3 points · 18 days ago

          But what you could do is hold the companies that make AI accountable for its output, the same as with any other software.
          “We don’t know what it does” shouldn’t be an excuse when your AI distributes misinfo, libel, and slander, and you profit from it.

          • hendrik@palaver.p3x.de · 2 points · 18 days ago (edited)

            Yes, that’d be my approach, too. They need to be forced to put in digital watermarks so everyone can check whether an article came from ChatGPT or whether an image is fake. We could easily do this with regulation and hefty fines. More or less robust watermarks are available, and anything would be better than nothing. OpenAI even developed a text watermarking solution; they just don’t activate it. (https://www.theverge.com/2024/8/4/24213268/openai-chatgpt-text-watermark-cheat-detection-tool)
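            (For illustration, a toy sketch of how a statistical text watermark can work. This is the generic “green list” idea, not OpenAI’s actual scheme; the key, threshold, and function names below are made up.)

```python
# Toy "green list" text watermark (illustrative only, not OpenAI's scheme).
# A secret key splits the vocabulary in half; a watermarking generator
# softly prefers "green" words, and a detector counts how green a text is.
import hashlib

SECRET_KEY = b"demo-key"  # hypothetical shared secret, made up for this sketch

def is_green(word: str) -> bool:
    """Keyed hash assigns each word to the green or red half."""
    digest = hashlib.sha256(SECRET_KEY + word.lower().encode("utf-8")).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """~0.5 for ordinary text; noticeably higher for watermarked output."""
    words = text.split()
    return sum(is_green(w) for w in words) / max(len(words), 1)

# A detector would flag a long article whose green fraction is
# statistically improbable, e.g. green_fraction(article) > 0.6.
```

            The point being: detection only needs the key and plain statistics; nobody has to store or look up the generated text itself.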

            Another pet peeve of mine is these “nude” apps that swap faces or generate nude pictures from someone’s photos. There are services out there that happily generate nudes from children’s pictures. I’ve filed a report with some European CSAM program, after that outcry in Spain where some school kid generated unethical images of their classmates. (Just in case the police don’t read the news…) And half a year later, that app was still online. I suppose it still is… I really don’t know why we allow things like that.

            We could hold these companies accountable. And force them to implement some minimal standards.

            • KubeRoot@discuss.tchncs.de · 2 points · 17 days ago

              I don’t think this is a realistic proposal; this is a technological advancement. You might be able to force companies to put invisible steganographic signatures in their services’ output, and maybe provide some method for hashing the output to determine whether a given image was generated by them (see the sketch below)…
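              (As a concrete illustration, here is a minimal least-significant-bit steganography sketch. It is an assumption about how such an invisible signature could work, not any vendor’s real scheme; the “GEN:v1” tag is made up.)

```python
# Minimal LSB steganography sketch (illustrative; not any vendor's scheme).
# A short provenance tag is hidden in the least significant bit of
# successive pixel bytes, invisible to the eye but machine-readable.

def embed_tag(pixels: bytes, tag: bytes) -> bytes:
    """Write the tag's bits into the LSBs of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_tag(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the LSBs."""
    bits = [b & 1 for b in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n * 8 : n * 8 + 8]))
        for n in range(length)
    )

raw = bytes(range(256)) * 4           # stand-in for raw image pixel data
stamped = embed_tag(raw, b"GEN:v1")   # hypothetical provenance tag
assert extract_tag(stamped, 6) == b"GEN:v1"
```

              Which also shows the weakness: re-encoding, resizing, or screenshotting the image rewrites those bits, so a signature like this is trivial to strip.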

              But what’s stopping them from using the underlying model on the side, off the books? They could sell or leak the model to external entities. If those copies generate outputs without any watermarks, the detection systems won’t flag them, potentially lending even more legitimacy to the fakes.

              And, ultimately, nothing’s stopping independent organizations from developing their own models capable of generating such fakes. What good is it to limit the big companies if the technology needed to generate images is already known, and might end up easily reproducible by anybody sooner rather than later?

              That said, individual instances of such illegal/immoral services should be dealt with. It’s horrible, but I believe they are inevitable: Pandora’s box was opened when the technology was created, it was going to happen sooner or later, and we have to deal with the results.

              • hendrik@palaver.p3x.de · 2 points · 17 days ago (edited)

                Yeah, I tried to get that across with my phrasing… I’m not saying we need to change the technology. It’s out there, and it’s too late anyway. Plus, it’s a tool, and tools can be used for various purposes; that’s not the tool’s fault. I’m also not arguing to change how kitchen knives, axes, etc. work, despite their potential to do harm…

                But: it doesn’t need to be 100% watertight, or we can’t do anything. I’m also not keeping my knife collection on the living room table when a toddler is around, but at the same time I don’t need to lock it in a vault… I think we can go 90% of the way, help 90% of people, and that’s better than doing nothing because we strive for total perfection… I keep the bleach and knives somewhere kids can’t reach. And we could say that AI services need to filter images of children (I think the big ones already do) and put invisible watermarks on all AI-generated content. If anyone decides to circumvent that, that’s on them. But at least we’d have solved the majority of the very easy misuse.

                And I mean, that’s already how we do things. A spam filter, for example, isn’t 100% accurate, and we use them nonetheless.

                (And I’m just arguing about service providers. That’s what the majority of people use, and I think those should be forced to do it. But the models themselves should be free. Otherwise, we put a very disruptive technology solely in the hands of a few big companies… And if AI is going to change the world as much as people claim, that’s bound to lead us into some sci-fi dystopia where the world revolves around the interests of a few big corporations… We don’t want that. So AI tech needs to be shaped by more than just Meta and OpenAI. IMO, that means giving the public access to the technology.)

  • eleitl@lemm.ee · 5 points · 17 days ago

    “Seeing is believing” no longer applies. Some will take longer than others to catch up.

  • humanspiral@lemmy.ca · 1 point · 17 days ago

    Since the beginning of Russian President Vladimir Putin’s invasion of Crimea in 2014, he has used disinformation tactics to soften the resolve of occupied populations and to blunt international pressure. Photorealistic AI images have become ammunition in the information war against accountability. It is for this reason that I think it is paramount to be vigilant against the circulation of AI images depicting war or being passed off as evidence of the very real destruction happening around the world.

    Repeating the disinformation that the liberation of Crimea from Ukrainian Nazis, with a 98% referendum in favour of liberation, is Russian disinformation… the world must be protected from mostly Western-generated disinformation AI used to condemn Russia harder. The crackhead bubble is at peak crackhead. Were all of the blogs he listed appealing for more Western financial/weapons help for Russia?