- cross-posted to:
- technology@lemmy.ml
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi…::ChatGPT went from answering a simple math problem correctly 98% of the time to just 2%, over the course of a few months.
My personal pet theory is that a lot of people were doing work that involved getting multiple LLMs to talk to each other. When those conversations were then fed back into the RL loop, we started seeing degradation similar to what’s been in the news recently with regard to image generation models. I believe this is the paper that got everybody talking about it: https://arxiv.org/pdf/2307.01850.pdf
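The degradation that paper describes (models trained on their own outputs drifting away from the real data distribution) can be illustrated with a toy sketch. This is not the paper's method, just a minimal, hypothetical demonstration: repeatedly fit a Gaussian to samples drawn from the previous fit, and watch the spread of the data collapse over generations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data from a standard normal distribution.
data = rng.normal(0.0, 1.0, size=100)
stds = [data.std()]

# Each generation: fit a Gaussian to the current data (mean + std),
# then replace the data entirely with samples from that fit.
for _ in range(500):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma, size=100)
    stds.append(data.std())

print(f"std after generation 0: {stds[0]:.3f}")
print(f"std after generation 500: {stds[-1]:.3f}")
```

Because each fit slightly underestimates the variance and sampling noise compounds multiplicatively, the distribution's spread shrinks generation after generation, a crude analogue of the diversity loss ("model collapse") seen when generative models are trained on synthetic data.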
Is this peer-reviewed? There’s a line in the discussion that seems relatively unprofessional, telling people to join a 12-step program if they like using artificial training data.
arXiv papers are not peer reviewed; it’s a preprint server.
Thank you
I think arXiv has no rule requiring a paper to be peer reviewed before uploading.
deleted by creator
Not affiliated with the paper in any way. Have just been following the news around it.