If AI deepfake tools can listen to video or audio of a person and then convincingly reproduce that person, what does this entail for trials?

It used to be that audio or video recordings carried strong evidentiary weight, often more than witness testimony, but soon enough near-perfect forgeries could enter the courtroom, just as they are already entering social media (where you're not sworn to tell the truth, though the consequences are real).

I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony.

How will we defend ourselves while still being able to use real video or audio as proof? Or are we just doomed?

  • Call me Lenny/Leni@lemm.ee · 18 days ago

    A camera can only show us what it sees; it doesn't objectively dictate a viewer's interpretation of what it shows. I remember some of us being called down to the principal's office (before the age of footage-based scandals, which if anything point to a shortcoming in the people handing down the rulings, so in awe of the footage, sadly a common occurrence, one adding to the "normal people distaste" I have, and something authorities have made sure I'm no stranger to). The principal might say "we saw you on the camera doing something against the rules," only to be answered with "that's not me, I have an alibi," or "that's not me, I wouldn't wear that jacket," or "that's not me, I can't do that person's accent" (the aforementioned serial slander of me serving as a prime example of where this would be the case).

    As for the process, you might say footage is witness testimony from a machine, and that machines have "just started" to get into the habit of not being very honest with the humans in the court. I remember my first lie.