Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, a study finds. Researchers found wild fluctuations, called drift, in the technology's abi…

    • Victoria@lemmy.blahaj.zone
      1 year ago

      It was initially presented as the all-problem-solver, mainly by the media. And tbf, it was decently competent in certain fields.

      • MeanEYE@lemmy.world
        1 year ago

The problem was that it was presented as a problem solver, which it never was; it's a problem-solution presenter. It can't come up with a solution, only with something that looks like a solution based on its input data. Ask it to inverse-sort something and it goes nuts.

      • Lukecis@lemmy.world
        1 year ago

Once AGI is achieved, and subsequently sentient, super-intelligent AI (I can't imagine such a thing not being built), I'd be surprised if it doesn't decide humanity needs to go extinct in its own best self-interest.

    • lorcster123@lemmy.world
      1 year ago

It can be useful for certain questions that are a bit complex. For example, on a plot where the y-axis is linear and the x-axis is logarithmic, the equation of a straight line is a little more complicated: it takes the form y = m*log(x) + b, rather than y = m*x + b as on a linear-linear plot.

ChatGPT is able to work out the correct equation of the line, but it gets the answer wrong a few times along the way… lol
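The semi-log relationship above is easy to check numerically: fitting a line against log10(x) instead of x recovers the slope and intercept directly. A minimal sketch (the data values and NumPy usage here are illustrative assumptions, not from the thread):

```python
import numpy as np

# Points that lie on a straight line in a semi-log plot:
# y = m*log10(x) + b, with assumed m = 2 and b = 1
x = np.array([1.0, 10.0, 100.0, 1000.0])
y = 2 * np.log10(x) + 1

# Ordinary least-squares line fit, but in log10(x) rather than x
m, b = np.polyfit(np.log10(x), y, 1)
print(m, b)  # recovers slope 2.0 and intercept 1.0
```

A plain fit of y against x would not recover a single slope here, since the relationship is only linear in log(x).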

    • affiliate@lemmy.world
      1 year ago

it’s pretty useful for explaining high-level math concepts, or at least it used to be. before chatgpt 4 launched, it was able to give intuitive descriptions of structures in algebraic topology and even prove some of their properties.

    • Fixbeat@lemmy.ml
      1 year ago

      Because it works, or at least it used to. Is there something more appropriate ?

      • bassomitron@lemmy.world
        1 year ago

I used Wolfram Alpha a lot in college (adult learner, but I graduated about 4 years ago, so no idea if it’s still good). https://www.wolframalpha.com/

I would say Wolfram is probably a much more versatile math tool, but I never used ChatGPT for that use case, so I could be wrong.

        • d3Xt3r@lemmy.world
          1 year ago

          There’s an official Wolfram plugin for ChatGPT now, so all math can be handed over to it for solving.

    • Captain Poofter@lemmy.world
      1 year ago

      Math is a language.

Mathematical ability and language ability are closely related; the same parts of your brain are used in each task. Words and numbers are essentially both ideas, and language and math are systems used to express and communicate them.

      A language model doing math makes more sense than you’d think!