• froztbyte@awful.systems
    4 days ago

    “despite the many people who have shown time and time and time again that it definitely does not do fine detail well and will often present shit that just 10000% was not in the source material, I still believe that it is right all the time and gives me perfectly clean code. it is them, not I, that are the rubes”

    • Soyweiser@awful.systems
      4 days ago

      The problem with stuff like this is not knowing when you don’t know. People who hadn’t read the books SSC Scott was reviewing didn’t know he had missed the points (or hadn’t read the book at all) until people pointed it out in the comments. But the reviews stay up.

      Anyway, this stuff always feels like a huge motte-and-bailey, where we go from ‘it has some uses’ to ‘it has some uses if you are a domain expert who checks the output diligently’ and back to ‘some general use’.

    • pipes@sh.itjust.works
      4 days ago

      Ahah, I’m totally with you. I just personally know people who love it because they never learned how to use a search engine. And these generalist generative AIs are basically trained on a gobbled-up internet, while also generating so many dangerous mistakes; I’ve read enough horror stories.

      I’m in science and I’m not interested in ChatGPT; I wouldn’t trust it with a pancake recipe. Even if it were useful to me, I wouldn’t trust the vendor lock-in or the enshittification that’s gonna come after I get dependent on a tool in the cloud.

      A local LLM on cheap or widely available hardware with reproducible input/output? Then I’m interested.