• kbal@fedia.io
    4 months ago

    Yeah, no doubt they will push to give the things built atop the shaky foundation of LLMs as much responsibility and access to credentials as they think they can get away with. Making the models trustworthy for such purposes has been the goal since DeepMind set off in that direction with such optimism. There are a lot of people eager to get there, and a lot of other people eager to give us the impression right now that they will get there soon. That in itself is one more reason they react with some alarm when the products are easily provoked into producing garbage.

    I’m sure it will go wrong in many interesting ways. Seems to me there are risks they haven’t begun to think about. There’s a lot of focus on preventing the models from producing output that’s obviously morally offensive, but very little thought given to the idea that output entirely within the bounds of what’s considered acceptable might end up accidentally calibrated to reinforce and perpetuate the existing prejudices and misconceptions the machines have learned from us.