• Swedneck@discuss.tchncs.de · 3 months ago

    To be fair, back then Google just showed you what you searched for; I'm not as happy about people googling stuff these days. With AI we already know that it tends to make shit up, and it may well get worse as models start being trained on their own output.

    • Echo Dot@feddit.uk · 3 months ago

      Actually, hallucinations have gone down as AI training has improved, mostly through things like prompting models to provide evidence. When you prompt them to cite evidence, they're far less likely to hallucinate in the first place.

      The problem really comes down to how the older models were trained. Their training data consisted of questions paired with answers; nowhere in the data set was there a question whose answer was “I'm sorry, I do not know,” so the model was unintentionally taught that it is never acceptable to leave a question unanswered. More modern models have been trained better and taught that declining to answer is acceptable. They can also perform internet searches now, so, like a human, they can go look up information they recognize is missing from their training data.
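
      A toy sketch of the idea (entirely hypothetical data, not any real training set): instruction-tuning pairs where every question has a confident answer teach the model to always answer, while including explicit refusal examples teaches it that "I do not know" is a valid completion.

      ```python
      # Hypothetical instruction-tuning pairs. Older datasets looked like
      # the first two entries: every question gets a confident answer.
      # Adding refusal entries like the third teaches the model that
      # declining to answer is an acceptable response.
      dataset = [
          {"prompt": "What is the capital of France?",
           "response": "Paris."},
          {"prompt": "Who wrote Hamlet?",
           "response": "William Shakespeare."},
          {"prompt": "What will this stock be worth tomorrow?",
           "response": "I'm sorry, I do not know."},
      ]

      # Count how many examples model a refusal.
      refusals = [ex for ex in dataset
                  if ex["response"].startswith("I'm sorry")]
      print(len(refusals))  # → 1
      ```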

      That being said, Google’s AI is an idiot.