I’ve been working with so many students who turn to it as a first resort for everything. The second a problem stumps them, it’s AI. The first source for research is AI.

It’s not even about the tech; there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.

  • BranBucket@lemmy.world · 4 points · 23 days ago (edited)

    It’s not that I think there are no legitimate uses for AI, or that it couldn’t be used as a learning tool.

    It’s that I doubt it’s better than current learning tools largely because the nature of the medium seems to turn off the kind of critical thinking you’re describing. The medium and language of a message can have a profound effect on how we understand and process information, often without us even realizing it, and AI seems to be able to make those changes far too easily.

    • SuspciousCarrot78@lemmy.world · 1 point · 23 days ago

      Perhaps only because ubiquity and speed favour sloppiness. As a thought experiment, imagine if you could only use AI once a day, for one question. Asking questions would suddenly become expensive.

      They would require careful thinking and pre-planning, followed by careful rumination on the answer and possible follow-ups.

      That’s obviously an extreme example, but it’s not that dissimilar to how people use tools like LexisNexis or IBISWorld: expensive research tools where the cost naturally forces you to think about the question before asking it.

      In that sense the issue may not be the medium itself so much as the cost structure of the interaction.

      When answers are instant and effectively unlimited, people tend to outsource thinking. When access is constrained, the incentive flips and the thinking moves back to the question.

      Which is to say: the tool probably amplifies existing habits rather than creating them. People who already interrogate sources will interrogate AI outputs. People who don’t, won’t.

      • BranBucket@lemmy.world · 2 points · 23 days ago

        I would ask it a careful question, and I would get a well-worded, persuasive, but ultimately careless reply: a repetition of existing information, devoid of any new reasoning or insight.

        I would carefully ruminate on this reply and find that, at best, it’s factually correct because it echoes the training data fed into the model, and that, however persuasive it sounds, it will likely need additional work to adapt it to the specific context and details of my situation.

        But that’s not my main complaint. My complaint is that the medium itself seems to prevent people from doing that analysis. I think this is very much in line with what Neil Postman wrote about in Amusing Ourselves to Death and Technopoly. These tools seem to use us, sneakily adjusting our perceptions of what the information means, rather than us using the tools.

        Is it possible to be careful and use it the way you describe in your thought experiment? Yes. Is it likely that people will be? No, and we seem to be seeing example after example of that every day.

          • SuspciousCarrot78@lemmy.world · 1 point · 22 days ago

          OK but is that an AI problem or a people problem?

          I think the Postman point is a fair one. The way information is presented absolutely affects how people reason with it. A fluent conversational answer can feel authoritative in a way that a messy set of search results doesn’t.

          But that problem isn’t unique to LLMs. Every medium that compresses information into something smooth and persuasive has created the same concern.

          Books did it, newspapers did it, television did it, and search engines arguably did it as well.

          The real question is whether the medium determines behaviour or just amplifies existing habits.

          People who already interrogate sources tend to interrogate AI outputs as well. People who don’t… won’t.

          I suspect there’s a bigger issue here than “LLM bad”. We’ve been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

          We’ve become (for lack of better words) mentally flabby - me included.

          • BranBucket@lemmy.world · 1 point · 22 days ago

            If I’m arguing in good faith, it’s both. We have a tool that uses us, a medium that shoves massive amounts of information at us while hindering the gaining of knowledge (which I’ll define as the useful retention and application of that information, not just winning trivia night), and as a species we keep letting ourselves be suckered by it.

            In the same vein, Postman also argued that this sort of change is often both ongoing and inevitable, and that the only real debate is over what the true cost to our culture and society will be. He cited examples going back to Plato, if I remember correctly. So, as you put it, writing did it, then books, television, search engines, and so on. And so much money has been spent on making this a thing that we’re going to have to contend with it until it undeniably starts costing more than it’s worth, and if that cost is cultural or societal rather than financial, it might never go away.

            > I suspect there’s a bigger issue here than “LLM bad”. We’ve been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

            I don’t pretend to speak for the man, but I think Postman would agree with you, and he thought it started in the 1860s with the telegraph.