• EatATaco@lemm.ee · 7 months ago

    Why is that a criticism? This is how it works for humans too: we study, we learn the stuff, and then try to recall it during tests. We’ve been trained on the data too; neither a human nor an AI would be able to do well on the test without learning it first.

    This is part of what makes AI so “scary”: it can basically know so much.

      • EatATaco@lemm.ee · 7 months ago

        I guess it comes down to a philosophical question as to what “know” actually means.

        But from my perspective, it certainly knows some things. It knows how to determine what I’m asking, and it clearly knows how to formulate a response by stitching together information. Is it perfect? No. But neither are humans; we mistakenly believe we know things all the time, and miscommunications are quite common.

        But this is why I asked the follow-up question… what’s the effective difference? Don’t get me wrong, they clearly have a lot of flaws right now. But my 8-year-old has a lot of flaws too, and I assume both will get better with age.

        • flere-imsaho@awful.systems · 7 months ago

          i guess it comes down to a philosophical question

          no, it doesn’t, and it’s not a philosophical question (and neither is this a question of philosophy).

          the software simply has no cognitive capabilities.

          • EatATaco@lemm.ee · 7 months ago

            I’m not sure I agree, but then it goes to my second question:

            What’s the effective difference?

            • braxy29@lemmy.world · 7 months ago

              don’t know why you got downvoted; an LLM is essentially a Chinese room, and whether such a room “knows” anything is still the question.

            • flere-imsaho@awful.systems · 7 months ago

              (…) perception, attention, thought, imagination, intelligence, comprehension, the formation of knowledge, memory and working memory, judgment and evaluation, reasoning and computation, problem-solving and decision-making (…)

        • froztbyte@awful.systems · 7 months ago

          nearly every word of your post demonstrates a comprehensively thorough lack of understanding of how this shit works

          it also demonstrates why you’re lost about the “effective difference”

          I don’t mean this aggressively, but you really don’t have any concrete idea of wtf you’re talking about, and it shows

          • Soyweiser@awful.systems · 7 months ago

            The dehumanization that happens just because people think LLMs are impressive (they are, just not that impressive) is insane.

            • ebu@awful.systems · 7 months ago

              need to be able to think LLMs are impressive, probably

              surely tech will save us all, right?

        • YouKnowWhoTheFuckIAM@awful.systems · 7 months ago

          Yeah, it’s a philosophical question, which means you need a philosophical answer. Spitballing won’t help you figure shit out a priori because it turns out that learning how to think a priori effectively takes years of hard graft and is called “studying philosophy”. You should be asking people like me what “know” means in this context and what distinguishes memory in human beings from “memory” in an LLM (a great deal, as it happens!)

    • exanime@lemmy.today · 7 months ago

      Because a machine that “forgets” stuff it reads seems rather useless… considering it was a multiple-choice-style exam and, as a machine, ChatGPT had the book entirely memorized, it should have scored perfectly almost all the time.

      • EatATaco@lemm.ee · 7 months ago

        ChatGPT had the book entirely memorized

        I feel like this exposes a fundamental misunderstanding of how LLMs are trained.
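
        To make that concrete: an LLM is trained to predict the next token, and what it retains are weights that encode statistical patterns, not a retrievable copy of the training text. Here’s a minimal, purely illustrative sketch of that training objective (a toy bigram model in PyTorch, nothing like a production LLM):

```python
# Toy sketch of next-token-prediction training -- purely illustrative,
# nothing like a real LLM in scale or architecture.
import torch
import torch.nn as nn

text = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(text))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in text])

# A tiny bigram model: embed the current token, predict a distribution
# over the next one. Whatever is "learned" ends up in these weights.
model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(ids[:-1])         # predictions for each position
    loss = loss_fn(logits, ids[1:])  # compared against the actual next tokens
    opt.zero_grad()
    loss.backward()
    opt.step()

# No verbatim copy of the text is stored anywhere; the model only holds
# weights that make the observed continuations more probable, which is
# why its recall is statistical rather than a perfect lookup.
```

        “Was trained on the book” and “has the book memorized” are very different claims.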

      • booly@sh.itjust.works · 7 months ago

        ChatGPT had the book entirely memorized, it should have scored perfectly almost all the time.

        These multiple-choice questions aren’t simple recall of learned facts. They require applying abstract concepts to new facts, with a lot of red herrings. Here’s a real question:

        A father lived with his son, who was an alcoholic. When drunk, the son often became violent and physically abused his father. As a result, the father always lived in fear. One night, the father heard his son on the front stoop making loud obscene remarks. The father was certain that his son was drunk and was terrified that he would be physically beaten again. In his fear, he bolted the front door and took out a revolver. When the son discovered that the door was bolted, he kicked it down. As the son burst through the front door, his father shot him four times in the chest, killing him. In fact, the son was not under the influence of alcohol or any drug and did not intend to harm his father.

        At trial, the father presented the above facts and asked the judge to instruct the jury on self-defense.

        How should the judge instruct the jury with respect to self-defense?

        (A) Give the self-defense instruction, because it expresses the defense’s theory of the case.

        (B) Give the self-defense instruction, because the evidence is sufficient to raise the defense.

        (C) Deny the self-defense instruction, because the father was not in imminent danger from his son.

        (D) Deny the self-defense instruction, because the father used excessive force.

        Studying for the bar exam starts with memorizing a bunch of rules, but actually getting out and applying them is a separate skill.

    • Soyweiser@awful.systems · 7 months ago

      Don’t anthropomorphise. There is quite the difference between a human and an advanced lookup table.

      • EatATaco@lemm.ee · 7 months ago

        I absolutely agree. However, if you think LLMs are just fancy LUTs, then I strongly disagree. Unless, of course, we are also just fancy LUTs.
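
        As a toy illustration of why I draw that distinction (my own framing, not anything specific to how LLMs are actually built): a literal lookup table can only answer queries it has already stored, while even a trivially simple fitted model can generalize to inputs it has never seen.

```python
# Lookup table vs. parametric model -- a deliberately trivial contrast.
import numpy as np

# "Experience": a few (x, y) pairs sampled from y = 2x + 1.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2 * xs + 1

# A literal lookup table only answers for inputs it has seen verbatim.
table = dict(zip(xs.tolist(), ys.tolist()))
print(table.get(1.5))   # None -- 1.5 was never stored

# A fitted model compresses the pattern into parameters and generalizes.
w, b = np.polyfit(xs, ys, 1)
print(w * 1.5 + b)      # ~4.0 -- a sensible answer for an unseen input
```

        Whether that kind of generalization counts as “knowing” anything is, of course, exactly the argument here.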

        • captainlezbian@lemmy.world · 7 months ago

          Ever meet an AI researcher with a background in biology? I’ve discussed this stuff with one. She disagrees with Turing about machines thinking, including when AI is in the picture. They process information very differently from how biology does.

          • EatATaco@lemm.ee · 7 months ago

            This is a vague non-answer, although I agree it’s done very differently, because our process is biological and AI is not.

            But as I asked elsewhere, what’s the effective difference?

      • Phoenixz@lemmy.ca · 7 months ago

        Well… I do agree with you, but human brains are basically big prediction engines that use lookup tables (experience) to navigate life. Obviously that’s a huge simplification, and LLMs are nowhere near humans, but they are quite a step in that direction.