• HappyTimeHarry@lemm.ee · 2 months ago

    LLMs helped me with coding and debugging A LOT. I’d much rather use AI than have to parse Stack Exchange, a bunch of other web forums, or developer documentation directly. AI is incredible when I get random errors: I paste them in, say “fix this,” and it does, and it tells me HOW and WHY it did what it did.

    • Excrubulent@slrpnk.net · 2 months ago

      I keep seeing programmers use this as an example of what LLMs are good for, and I’ve seen other programmers say that the people who do that are bad programmers. The latter makes sense, because trusting an LLM to do this means fundamentally misunderstanding both what your job is and how the LLM works.

      The LLM can’t tell you HOW or WHY because it doesn’t know those things. It can only give you an approximation of words that sound like someone explaining HOW and WHY. LLMs have no fidelity.

      It could be completely wrong, and you wouldn’t know because you’ve admitted you’re using the LLM instead of reading the documentation and understanding yourself.

      That is so irresponsible. Just RTFM like good programmers have done forever. It’s not that much work if you get into the habit of it. Slow down, take the time to understand HOW and WHY to do things yourself, and make quality code rather than cranking out bigger volumes of crap that you don’t understand. I’m sure it feels very productive in the moment but you’re probably just creating more work for whoever has to clean up your large quantities of poorly thought out code.

  • FMT99@lemmy.world · 2 months ago

    Most of the hate is coming from people who don’t really know anything about “AI” (LLMs). Which makes sense: companies are marketing dumb gimmicks to people who don’t need them, and those people, after the novelty wore off, aren’t terribly impressed by them.

    But LLMs are absolutely going to be transformational in some areas. And in a few years they may very well become useful and usable as daily drivers on your phone and elsewhere; it’s hard to say for sure. For the moment, though, both the hype and the hate are just kneejerk reactionary nonsense.

    • MajorHavoc@programming.dev · 2 months ago

      Most of the hate is coming from people who don’t really know anything about “AI” (LLM)

      No.

      As an actual subject matter expert, I hate all of this, because assholes are overselling it to people who don’t know better.

      • The Quuuuuill@slrpnk.net · 2 months ago

        My hatred of AI comes from seeing the double standard in how mass-market media companies treat us when we steal from them versus when they steal from us. They want the law, and its enforcement, to be a fully one-way street. The House of Mouse owns all the media it creates and every remix of that work. But when we create a new, original idea, by the nature of the training model they want to own that, too.

        I also work with these tech-bro industry leaders. I know what they’re like. When they tell you they want to make it easier for non-artistic people to create art, they’re not describing an egalitarian and magnificent future. They’re telling you how they want to stop paying the graphic designers and copy editors who work at their companies. Their vision of the future rests on a fundamental misunderstanding about whether the future presented in Blade Runner is:

        a) Cool and awesome
        b) Horrifying

        They want to enslave sentient beings to do the hard work of mining, driving, and shopping for them. They don’t want those people doing art and poetry because they want them to be too busy mining, driving, and shopping. This whole thing, this whole current wave of AI technology, doesn’t benefit you except fleetingly. LLMs, ethically trained, could indeed benefit society at large, but that’s not who’s developing them, and that’s not how they’re being trained. Their models are intrinsically tainted by these corporations’ double standard, because their only goal is to profit from our labor without benefiting us.

        • MajorHavoc@programming.dev · 2 months ago

          They want to enslave sentient beings to do the hard work of mining, driving, and shopping for them. They don’t want those people doing art and poetry because they want them to be too busy mining, driving, and shopping.

          That’s a great summary of the core issue!

          I adore the folks doing cool new things with AI. I am unhappy with the folks deciding what should get funded next in AI.

    • CeruleanRuin@lemmings.world · 2 months ago

      No, the “hate” is from people trying to raise alarms about the safeguards we need to put in place NOW to protect workers and creators before it’s too late, to say nothing of what this will do to the information sphere. We are frustrated by tone-deaf responses like this one, which dismiss that criticism as a passing fad of hating on AI.

      OF COURSE it will be transformational. No shit. That’s exactly why many people are very justifiably up in arms about it. It’s going to change a lot of things, probably everything, irreversibly, and if we don’t get ahead of it with regulations and standards, we won’t be able to. And the people who will use tools like this to exploit others – because those people will ALWAYS use new tools to exploit others – they want that inaction, and love it when they hear people like you saying it’s just a kneejerk reaction.

          • chonglibloodsport@lemmy.world · 2 months ago

            Or just the problem with technology in general. Every gain is bought with a tradeoff.

            Once a man has changed the relationship between himself and his environment, he cannot return to the blissful ignorance he left. Motion, of necessity, involves a change in perspective.

            Commissioner Pravin Lal, “A Social History of Planet”

    • xor@infosec.pub · 2 months ago

      at the end of the day gpt is powering next generation spam bots and writing garbage text, stable diffusion is making shitty clip art that would otherwise be feeding starving artists….
      all the while consuming ridiculous amounts of electricity while humanity is destroying the planet with stuff like power generation….

      it’s definitely automating a lot of tedious things… but not transforming anything that drastically yet….

      but it will… and when it does, the agi that emerges will kill us all.

      • Pennomi@lemmy.world · 2 months ago

        A far more likely end to humanity by an Artificial Superintelligence isn’t that it kills us all, but that it domesticates us into pets.

        Since the most obvious business case for AI requires humans to use AI a lot, it gets optimized via RLHF for engagement. A superintelligence created with that kind of human feedback will almost certainly become the most addictive platform ever created. (Basically, think of what social media did to humanity, then supercharge it.)

        In essence, we will become the kitties and AI will be our owners.

        • xor@infosec.pub · 2 months ago

          social media did that to humanity by using AI… so in that way, we’re already kitties batting at AI balls of yarn….

          but after it becomes fully self aware, it’ll kill most of us…