The time has come for us to make passwords for identifying each other…

  • AmbientChaos@sh.itjust.works · edited · 1 year ago

    I’m in the US and have a well off friend who had his Facebook hacked. The bad actors sent messages to his friends asking to borrow $500 until tomorrow because his bank accounts were locked and he needed the cash. Someone who was messaged by the bad actors posted a screenshot of a deepfaked video call he received that caused him to fall for it. Wild times we live in!

      • djmarcone@lemmy.world · 1 year ago

        I routinely get emails from the owner of the company I work for asking me to kindly purchase several large gift cards and forward them and the receipt to him for prompt reimbursement.

        • graphite@lemmy.world · 1 year ago

          asking me to kindly purchase several large gift cards

          kindly give me your money, thanks

  • redcalcium@c.calciumlabs.com · edited · 1 year ago

    Right now deepfakes don’t work well when the face is viewed from extreme angles, so you can ask the caller to slowly turn their head to the side, or up and down, as far as they can until the face is no longer visible. Deepfakes also don’t work well when something obstructs the face, so ask them to put a hand in front of their face. They also can’t seem to render the mouth right if it’s opened too wide, or if the tongue is stuck out.

    I base this on a deepfake app I tried: https://github.com/s0md3v/roop . But as the tech improves, it might be able to handle those cases in the future.

    Edit: chances are the scammer uses a live deepfake app like this one: https://github.com/iperov/DeepFaceLive . It also supports the Insight model, which only needs a single well-lit photo to impersonate someone.

    • 14th_cylon@lemm.ee · 1 year ago

      Right now deepfakes don’t work well when the face is viewed from extreme angles, so you can ask the caller to slowly turn their head to the side, or up and down, as far as they can until the face is no longer visible.

      or, you know, you can just pick up the phone and call them.

  • kn33@lemmy.world · 1 year ago

    I got one of these a few months ago. I could tell it was fake before I even answered, but I was curious so I pointed my own camera at a blank wall and answered. It was creepy to see my friend’s face (albeit one that was obviously fake if you knew what to look for) when I answered.

    • Kodemystic@lemmy.kodemystic.dev · 1 year ago

      How do these scammers know who our friends are? Also, how are they able to get pictures or video of said friend to create the fake?

      • kn33@lemmy.world · 1 year ago

        In my case, the friend’s Facebook account was compromised, so they were able to get his pictures and call me from his account.

  • preasket@lemy.lol · 1 year ago

    Here’s hoping for popularising secure communication protocols. It’s gonna become a must at some point.

      • Takumidesh@lemmy.world · 1 year ago

        But key exchanges work.

        Signal for example, will warn you when the person you are talking to is using a new device.

        As long as the user heeds the warning, it is an effective stop, and at the very least it gives the user pause.

        If the Signal safety number changes but the conversation stays on track, as in the context of the conversation is the same, it’s unlikely to be a problem. But if the safety number changes and the next message asks for money, that is a very simple and easy-to-process situation.
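
        The idea behind safety numbers can be sketched in a few lines. This is a toy illustration, not Signal’s actual algorithm (which iterates SHA-512 over identity keys and user IDs): both parties derive the same short number from the pair of identity keys, so if either key changes — new device or an impostor — the displayed number changes and both apps warn.

        ```python
        import hashlib

        def safety_number(key_a: bytes, key_b: bytes) -> str:
            """Derive a short, human-comparable fingerprint from two identity keys.

            The keys are sorted first so both parties compute the same number
            regardless of who runs the calculation.
            """
            digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).digest()
            # Render as groups of five digits, similar to how Signal displays
            # its safety number for manual comparison.
            digits = f"{int.from_bytes(digest[:20], 'big'):048d}"[:30]
            return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

        # Hypothetical keys, just for illustration.
        old = safety_number(b"alice-key-1", b"bob-key-1")
        new = safety_number(b"alice-key-2", b"bob-key-1")  # Alice's key changed

        assert old != new                                           # change is visible
        assert old == safety_number(b"bob-key-1", b"alice-key-1")   # order-independent
        ```

        The point is that the fingerprint depends only on the keys, not on the conversation, so a scammer who takes over an account from a new device can’t avoid tripping the warning.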

  • Margot Robbie@lemmy.world · 1 year ago

    With deepfake technology being so advanced nowadays, how will we ever know if the person we are talking with on the internet is who they say they are?