• Durotar@lemmy.ml · 11 months ago

    I think you should educate yourself before arguing. LLMs are not what you say they are. They are huge math formulas with many variables; they can’t think and they can’t apply logic.

    • bioemerl@kbin.social · 11 months ago

      They are huge math formulas with many variables, they can’t think, they can’t apply logic.

      And you’re a bunch of cells. Neurons can’t apply logic either, until you get a few billion of them organized in a certain way.

      You tell me to educate myself, but you assert the most bare-bones understanding of what an LLM is. “It’s a big math function” is hilariously reductive. Our entire universe and everything within it can be represented by a big math function.

      Like seriously. A big math function can’t apply logic? That’s like half of what math is.
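      To make that concrete (a quick sketch of my own, not from any paper): NAND is a single line of arithmetic, and every boolean function can be built by composing NANDs. Here is XOR falling out of nothing but multiplication and subtraction:

      ```python
      # NAND on {0, 1} is pure arithmetic: NAND(a, b) = 1 - a*b.
      # NAND is universal, so every boolean function is a composition of these.

      def nand(a: int, b: int) -> int:
          return 1 - a * b  # no "if", no branching, just math

      def xor(a: int, b: int) -> int:
          # The standard 4-NAND construction of XOR.
          n = nand(a, b)
          return nand(nand(a, n), nand(b, n))

      for a in (0, 1):
          for b in (0, 1):
              print(a, b, "->", xor(a, b))  # truth table: 0, 1, 1, 0
      ```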

      An LLM is a big series of functions tuned to coordinate with one another, and such compositions can approximate essentially any computation. These functions are special because they can be trained, within a human-scale span of time, to find a solution to basically any problem.
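      As a sketch of what I mean (sizes and names are arbitrary illustration, not any real model): two tiny functions stacked on top of each other and tuned by gradient descent learn the same XOR from above, except this time nobody hand-builds it; the composition finds it from data, something a single linear function provably cannot do.

      ```python
      import numpy as np

      # Two composed functions ("layers"): each is a matrix multiply plus a
      # nonlinearity. Training nudges every variable so the composition as a
      # whole fits the target. Shapes here are illustrative only.
      rng = np.random.default_rng(0)
      W1 = rng.normal(size=(2, 8))
      W2 = rng.normal(size=(8, 1))

      def forward(x):
          h = np.tanh(x @ W1)        # function 1
          return np.tanh(h @ W2), h  # function 2, composed on function 1

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR targets

      for _ in range(5000):
          out, h = forward(X)
          err = out - y                    # how wrong each prediction is
          d_out = err * (1 - out**2)       # tanh'(z) = 1 - tanh(z)^2
          d_h = (d_out @ W2.T) * (1 - h**2)
          W2 -= 0.1 * h.T @ d_out          # nudge every variable downhill
          W1 -= 0.1 * X.T @ d_h

      print(forward(X)[0].round(2))  # approaches [[0], [1], [1], [0]]
      ```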

      That trainability means we can throw data at a few billion of these artificial neurons and over time they will learn to produce an accurate prediction of the next word for a given situation. What’s that mean?

      That means that if you invent a simple game and throw transcripts of that game into an LLM for a few thousand cycles of training, you can actually go into the LLM, probe its internal activations, and find a rough representation of the game board that is being used to predict the next move.

      It isn’t just memorizing or reproducing; it has recreated the logic required to predict the next move, and in doing so it has actually learned the problem space the way a person would.
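      For the skeptical, here is roughly what “go into the LLM and find the board” looks like in code. This is a sketch with stand-in arrays: in the real experiment the hidden vectors come from a trained next-move model, not from a random matrix as below.

      ```python
      import numpy as np

      # The probing idea: save the model's hidden activation for many game
      # positions alongside the true board state, then ask whether a simple
      # *linear* readout can recover the board from the activations.
      rng = np.random.default_rng(0)

      n, d_hidden = 5000, 64
      board = rng.integers(0, 3, size=(n, 9))  # 9 cells: 0 empty, 1 X, 2 O
      # Stand-in for a trained model's hidden states (hypothetical encoder):
      encoder = rng.normal(size=(9, d_hidden))
      hidden = board @ encoder + 0.1 * rng.normal(size=(n, d_hidden))

      # Fit one linear readout per cell on 4000 positions, test on the rest.
      train, test = slice(0, 4000), slice(4000, n)
      W, *_ = np.linalg.lstsq(hidden[train], board[train].astype(float),
                              rcond=None)
      pred = np.clip(np.rint(hidden[test] @ W), 0, 2)
      print("probe accuracy:", (pred == board[test]).mean())
      ```

      If the probe reads the board out of the activations far better than chance, the network is carrying a representation of the game state, not just surface statistics of the move text.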

      The big-name LLMs are of course a lot more complicated, because they are trying to learn literally the sum of all human knowledge we have thrown onto the internet.

      But rest assured, the output of these large LLMs contains real understanding and prediction. It won’t hold across all domains and problem spaces, but there is real knowledge and logic being applied.

      Now, an LLM doesn’t operate on the same level humans do. It’s not a continually thinking, “experiencing” entity. But you’re making a capital-B Big mistake if you assume for even a moment that because it doesn’t think like a human, it doesn’t think or have understanding at all.

        • Durotar@lemmy.ml · 11 months ago

        You’re being manipulative. I never said that we aren’t a bunch of cells or that our universe can’t be represented by a math function. You think you were having your “I’m very smart” moment, but in reality you changed the subject of the argument because you couldn’t win it. None of what you said changes the fact that current LLMs can’t think and can’t apply logic. This has been shown by many researchers.

            • Durotar@lemmy.ml · 11 months ago

            OpenAI and other companies working on LLMs: we are not sure how exactly this works

            Neuroscientists: we are not sure how exactly our brains work

            bioemerl: I KNOW HOW ALL THIS WORKS, AND IF YOU DO NOT AGREE YOU ARE EITHER A TROLL OR JUST STUPID

            Man, try being less ignorant and arrogant.