• ssfckdt@lemmy.blahaj.zone · 18 hours ago

    I mean, I think intelligence requires the ability to integrate new information into one’s knowledge base. LLMs can’t do that; they have to be trained on a fixed corpus.

    Also, LLMs have a pretty shit-tastic track record of differentiating correct data from bullshit, which is a pretty essential facet of intelligence IMO.

    • JohnEdwa@sopuli.xyz · 18 hours ago

      LLMs have a perfect track record of doing exactly what they were designed to do: take an input and produce a plausible output that looks like it was written by a human. They just completely lack the part in the middle that actually understands the input and makes sure the output is factually correct, because if they had that, they wouldn’t be LLMs any more; they’d be AGI.
      The “artificial” in AI also carries the sense of “fake”: something that looks and feels intelligent, but actually isn’t.
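
      To make the “plausible, not necessarily true” point concrete, here’s a minimal toy sketch (hypothetical hand-made bigram counts, not a real model or any real LLM’s internals) where the generator’s only job is to pick a statistically likely next word. Nothing in it checks whether the result is factually correct.

      ```python
      import random

      # Toy "language model": hand-made bigram counts (hypothetical data).
      # The only objective is "which word is likely to come next",
      # not "which continuation is true".
      bigram_counts = {
          "the capital": {"of": 5},
          "capital of": {"australia": 5},
          "of australia": {"is": 5},
          "australia is": {"sydney": 4, "canberra": 1},  # plausible outweighs correct
      }

      def next_word(context):
          options = bigram_counts.get(context, {})
          if not options:
              return None
          words, weights = zip(*options.items())
          return random.choices(words, weights=weights)[0]

      def generate(prompt, steps=4):
          words = prompt.lower().split()
          for _ in range(steps):
              w = next_word(" ".join(words[-2:]))
              if w is None:
                  break
              words.append(w)
          return " ".join(words)

      # Usually prints "the capital of australia is sydney" - fluent, human-looking, wrong.
      print(generate("The capital"))
      ```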