Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds::ChatGPT went from answering a simple math problem correctly 98% of the time to just 2% over the course of a few months.

  • Silinde@lemmy.world
    1 year ago

    > LLMs act nothing like our brains and are not neural networks

    Err, yes, they are. You don’t even need to read a paper on the subject; just go straight to the Wikipedia page and it’s right there in the first line. The ‘T’ in GPT literally stands for Transformer, and you’re highly unlikely to find a Transformer model that doesn’t use an ANN at its core.
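    To make the point concrete, here is a minimal, illustrative sketch of one Transformer block in plain NumPy (sizes and weights are made up for demonstration): it is nothing but learned weight matrices, matrix multiplies, and nonlinearities, i.e. a standard artificial neural network.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_model = 8   # embedding size (hypothetical, for illustration)
    seq_len = 4   # number of tokens

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    # Learned weight matrices -- exactly the kind of parameters an ANN trains.
    W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
    W1 = rng.standard_normal((d_model, 4 * d_model))
    W2 = rng.standard_normal((4 * d_model, d_model))

    def transformer_block(x):
        # Self-attention: queries, keys, and values are linear layers.
        q, k, v = x @ W_q, x @ W_k, x @ W_v
        attn = softmax(q @ k.T / np.sqrt(d_model)) @ v
        x = x + attn                        # residual connection
        # Position-wise feed-forward: a classic two-layer ReLU network.
        x = x + np.maximum(0, x @ W1) @ W2  # residual connection
        return x

    tokens = rng.standard_normal((seq_len, d_model))
    out = transformer_block(tokens)
    print(out.shape)  # (4, 8): one d_model-sized vector per token
    ```

    Real GPT models stack dozens of these blocks with many more parameters, plus layer normalization and multiple attention heads, but the building blocks are the same.
    
    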

    Please don’t turn this place into Reddit by spreading misinformation.