Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds

  • Dojan@lemmy.world · 1 year ago

    It’s a language model doing text prediction. It doesn’t do any counting or reasoning about the preceding text; it just completes it with whatever continuation looks most likely.

    So if enough of the internet had said 1+1=12, it would repeat it in kind.
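
    A toy sketch of that "repeat what the data said" idea (made-up corpus and function names, nothing to do with how the real model works internally, which is learned token probabilities rather than literal string matching):

    ```python
    # Toy "predictor" that just echoes the most common continuation it saw in training.
    from collections import Counter

    training_text = ["1+1=2"] * 10 + ["1+1=12"] * 90  # pretend most of the internet says 1+1=12

    def predict(prompt, corpus):
        # Count every continuation of the prompt in the corpus and return the most frequent one.
        continuations = Counter(line[len(prompt):] for line in corpus if line.startswith(prompt))
        return continuations.most_common(1)[0][0]

    print(predict("1+1=", training_text))  # prints "12", because that's what the corpus mostly said
    ```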

    • kromem@lemmy.world · 1 year ago

      Not quite.

      Legal Othello board moves by themselves don’t say anything about the board size or rules.

      And yet when Harvard/MIT researchers fed them into a toy GPT model, they found that the network that had become best at outputting legal moves had built an internal representation of the board state and rules.
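
      Roughly how that kind of claim gets tested: train a small "probe" classifier to read the board state back out of the model's hidden activations. A minimal sketch of the idea, with random placeholder arrays standing in for the real activations and board labels:

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      hidden_states = rng.normal(size=(1000, 512))   # placeholder for per-move GPT activations
      square_state = rng.integers(0, 3, size=1000)   # placeholder labels: 0=empty, 1=black, 2=white

      # Fit a probe on most of the data, test on the rest.
      probe = LogisticRegression(max_iter=1000).fit(hidden_states[:800], square_state[:800])
      print("probe accuracy:", probe.score(hidden_states[800:], square_state[800:]))
      # With random placeholders this hovers around chance; the finding was that on the
      # real activations a probe like this recovers the board far better than chance.
      ```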

      Too many people commenting on this topic as armchair experts are confusing training with what results from the training.

      Training on text completion doesn’t mean the end result can’t understand the factors that fed into generating that text in the first place, and a fair bit of research so far suggests that, to some degree, it does.