I’ll focus specifically on the image generation bias.
Imagine for a moment that you “trained” a model to output tools, based on the following four pics:
Green wrench, blue wrench, red hammer, red screwdriver.
Red is the most common colour across those pics, so the model “assumes” [NB: metaphor] that the typical tool is red. Wrenches are the most common type, so it “assumes” that the typical tool is a wrench.
So once you ask it “I want a picture of a tool”, here’s what it’ll show you:
A red wrench. All the fucking time.
That’s a flaw of the technology - it doesn’t just reflect whatever bias is in its training data set, it exacerbates it. A slim majority in the input becomes near-certainty in the output.
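To make the toy example concrete, here’s a minimal Python sketch of that behaviour. It’s not how a real image generator works - it’s the caricature: a “model” that just memorises the most common value of each attribute and outputs it every single time. The data is the four made-up pics from above.

```python
# Toy sketch, NOT a real generator: memorise the most common value of each
# attribute in the training pics, then output that combination forever.
from collections import Counter

training_pics = [
    {"colour": "green", "tool": "wrench"},
    {"colour": "blue",  "tool": "wrench"},
    {"colour": "red",   "tool": "hammer"},
    {"colour": "red",   "tool": "screwdriver"},
]

def train(pics):
    # For each attribute, keep only the most common value (the mode).
    attrs = {}
    for key in pics[0]:
        counts = Counter(pic[key] for pic in pics)
        attrs[key] = counts.most_common(1)[0][0]
    return attrs

def generate(model):
    # Every request gets the same answer: the memorised modes.
    return f"a {model['colour']} {model['tool']}"

model = train(training_pics)

# Red was only 2 of 4 colours and wrench only 2 of 4 tools,
# yet the output is 100% red wrenches:
for _ in range(3):
    print(generate(model))   # -> "a red wrench", every time
```

That’s the exaggeration in miniature: a 50% share in the training data turns into a 100% share in the output.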
Now, instead of training the model on tools, train it on images of people doing various activities. People here should quickly get what would happen - “engineer” goes from “mostly men” to “always a man”, “primary teacher” goes from “mostly women” to “always a woman”, and the same shit happens with skin colour, nose shape, clothing, and everything else.
Could this be solved? Yes; you’d need to pay extra attention to the pictures of people you’re “training” the image generator with and constantly check for biases. That’s considerably slower, but it would stop image generators from perpetuating stereotypes.
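That “pay extra attention” step is basically an audit of the training set before you feed it in. A rough sketch of what that could look like, with made-up attribute names and a made-up skew threshold, purely for illustration:

```python
# Toy audit sketch: count how often each value of an attribute appears in the
# training pics and flag anything over a chosen share, so it can be rebalanced
# or reweighted before training. Attribute names and threshold are invented.
from collections import Counter

def audit(pics, attribute, max_share=0.6):
    counts = Counter(pic[attribute] for pic in pics)
    total = sum(counts.values())
    shares = {value: count / total for value, count in counts.items()}
    skewed = {value: share for value, share in shares.items() if share > max_share}
    return shares, skewed

training_pics = [
    {"occupation": "engineer", "gender": "man"},
    {"occupation": "engineer", "gender": "man"},
    {"occupation": "engineer", "gender": "man"},
    {"occupation": "engineer", "gender": "woman"},
]

shares, skewed = audit(training_pics, "gender")
print(shares)   # {'man': 0.75, 'woman': 0.25}
print(skewed)   # {'man': 0.75} -> over the threshold, so fix the data first
```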
However, big corporations give no flying fucks about social harm, even if they really want suckers like you and me to believe otherwise.