• Gaywallet (they/it)@beehaw.org (OP) · 2 months ago

    While it may be obvious to you, most people don’t have the data literacy to understand this, let alone use this information to decide where it can or should be implemented and how to counteract the baked-in bias. Unfortunately, as the article mentions, people believe the problem is going away when it is not.

    • leisesprecher@feddit.org · 2 months ago

      The real problem is implicit biases: the kind of discrimination that a reasonable user of a system can’t even see. How are you supposed to know that applicants from “bad” neighborhoods are rejected at a higher rate if the system is presented to you as objective? And since AI models don’t really explain how they arrived at a decision, you can’t even audit them.
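
      A toy sketch of that proxy effect, with entirely synthetic data: the zip-code and skill features, the biased historical hiring bar, and all numbers below are illustrative assumptions, not from any real system. It shows how a model that is never given the protected attribute can still reproduce a neighborhood bias through a correlated proxy feature.

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 20_000

      # Skill is distributed identically across neighborhoods.
      zip_code = rng.integers(0, 10, size=n)   # zips 0-4 = "A", 5-9 = "B"
      skill = rng.normal(0.5, 0.15, size=n)

      # Biased historical labels: past human decisions applied a harsher
      # hiring bar to applicants from neighborhood "B".
      bar = np.where(zip_code < 5, 0.45, 0.60)
      hired = (skill > bar).astype(int)

      # The model never sees "neighborhood" -- only seemingly neutral
      # features: a one-hot zip code and the skill score.
      X = np.column_stack([np.eye(10)[zip_code], skill])
      model = LogisticRegression(max_iter=1000).fit(X, hired)

      # Probe: two identical applicants who differ only by zip code.
      def p_hire(z, s=0.5):
          x = np.column_stack([np.eye(10)[[z]], [[s]]])
          return model.predict_proba(x)[0, 1]

      print(f"P(hire | skill=0.5, zip 2 in A) = {p_hire(2):.2f}")
      print(f"P(hire | skill=0.5, zip 7 in B) = {p_hire(7):.2f}")
      ```

      The model reproduces the historical disparity even though “neighborhood” is never a feature, which is exactly why such a system can look objective from the outside.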

      • ℍ𝕂-𝟞𝟝@sopuli.xyz · 2 months ago

        I have a feeling that’s the point with a lot of these use cases, like RealPage.

        It’s not a criminal act when an AI did it! (Except it is and should be.)