Despite advancements in AI, new research reveals that large language models continue to perpetuate harmful racial biases, particularly against speakers of African American English.
While it may be obvious to you, most people don’t have the data literacy to understand this, let alone use that understanding to decide where these systems can or should be deployed and how to counteract the baked-in bias. Unfortunately, as the article mentions, people believe the problem is going away when it is not.
The real problem is implicit bias: the kind of discrimination that a reasonable user of a system can’t even see. How are you supposed to know that applicants from “bad” neighborhoods are rejected at a higher rate if the system is presented to you as objective? And since AI models don’t really explain how they arrived at a decision, you can’t even audit their reasoning.
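What you can audit from the outside is the outcomes, if decisions get logged by group. Rough sketch below; the group labels are made up, and the ~0.8 cutoff is the common “four-fifths rule” benchmark, not something from the article:

```python
# Minimal sketch of a disparate-impact audit: compare acceptance rates
# across groups in a black-box system's logged decisions.
from collections import defaultdict

def rejection_rates(decisions):
    """decisions: iterable of (group, accepted: bool) pairs."""
    totals, rejected = defaultdict(int), defaultdict(int)
    for group, accepted in decisions:
        totals[group] += 1
        if not accepted:
            rejected[group] += 1
    return {g: rejected[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's acceptance rate to the reference group's."""
    rates = rejection_rates(decisions)
    ref_accept = 1 - rates[reference_group]
    return {g: (1 - r) / ref_accept for g, r in rates.items()}

# Hypothetical log of (neighborhood, decision) pairs; in practice you'd
# flag any group whose ratio falls below ~0.8.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True)]
print(disparate_impact(decisions, reference_group="A"))  # {'A': 1.0, 'B': 0.5}
```

None of that tells you *why* the model rejects people, but it at least makes the pattern visible instead of hiding behind “the algorithm is objective.”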
We’re like teenaged trailer trash parents who just gave birth to a genius at the trailer park where we’re all dysfunctional alcoholics and meth addicts …
… now we’re acting surprised that our genius baby talks like an idiot after listening to us for ten years.
“Wow Johnson, no matter how much biased data we feed this thing it just keeps repeating biases from human society.”
Sample input from a systematically racist society (the entire world), get systematically racist output.
No shit. Fix society or “tune” your model, whatever that entails…
Obviously only one of these is feasible from a developer perspective.
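And the developer-side option usually starts with just measuring the gap on matched inputs. A minimal sketch, where `score_text` is a made-up placeholder for whatever scoring call your model actually exposes and the sentence pair is purely illustrative:

```python
# Sketch: quantify how a model scores matched pairs of sentences, one in
# Standard American English (SAE) and one in African American English (AAE).
from statistics import mean

def score_text(text: str) -> float:
    """Placeholder scorer in [-1, 1]; wire this to the model under test."""
    return 0.0  # replace with a real model call

def dialect_gap(pairs):
    """pairs: list of (sae_text, aae_text) with matched meaning."""
    gaps = [score_text(sae) - score_text(aae) for sae, aae in pairs]
    return mean(gaps)  # > 0 means the model scores the SAE phrasing higher

pairs = [
    ("He is so smart, he will figure it out.",
     "He so smart, he gon figure it out."),
]
print(dialect_gap(pairs))  # 0.0 with the placeholder scorer
```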
I have a feeling that’s the point with a lot of their use cases, like RealPage.
It’s not a criminal act when an AI did it! (Except it is and should be.)
“It’s not redlining when an algorithm does it!”
This is the thing I keep pointing out about AI.