• Scubus@sh.itjust.works
    4 months ago

    LLMs have two phases: the training phase and the deployment phase. During deployment, the model is incapable of taking in or “learning” new information. You can tell it things and it may remember them for a short time, but that data is never incorporated into its weights and biases, so it behaves more like short-term memory.
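    A toy sketch of that distinction (purely illustrative, not a real LLM): the weights are fixed at deployment, and the only thing that changes is the context window, which is wiped between sessions.

```python
class DeployedModel:
    """Toy stand-in for a deployed LLM: frozen weights, transient context."""

    def __init__(self, weights):
        self.weights = tuple(weights)  # frozen: no training step exists here
        self.context = []              # "short-term memory" = the prompt so far

    def chat(self, message):
        self.context.append(message)   # remembered only within this session
        return len(self.context)       # stand-in for a real forward pass

    def new_session(self):
        self.context = []              # everything you "told" it is gone


model = DeployedModel(weights=[0.1, 0.2])
model.chat("my name is Ada")
model.new_session()

assert model.context == []             # the "memory" did not survive
assert model.weights == (0.1, 0.2)     # and the weights never changed
```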

    It can only learn during the training phase, generally when it is pitted against another AI designed to find its flaws, and mutated based on its overall fitness level. In other words, it has to mutate to learn. Shut off mutation, and it simply doesn’t learn.
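    The mutate-to-learn idea can be sketched as a toy hill climber over a single hypothetical "weight" (note this is an illustration of the comment's evolutionary framing; real LLMs are actually trained with gradient descent rather than mutation, but the keep-what-scores-better loop has the same shape):

```python
import random

random.seed(0)

def fitness(w):
    # Hypothetical objective: how close the weight is to a target of 1.0.
    return -abs(w - 1.0)

w = 0.0
for _ in range(500):
    candidate = w + random.gauss(0, 0.1)  # mutate
    if fitness(candidate) > fitness(w):   # keep only improvements
        w = candidate

# w ends far closer to the target than where it started; delete the
# mutation step and w stays at 0.0 forever, i.e. no learning.
assert abs(w - 1.0) < abs(0.0 - 1.0)
```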

    It seems likely to me that any LLM sent out in deployment would therefore be incapable of sentience, if sentience involves reacting in novel ways to new experiences. A deployed model will always behave the way its neural network was trained to behave.

    Tl;Dr: you can’t ask ChatGPT to print out its training data. Even if you ask it multiple times, it was designed not to do that. That sort of limiting factor prevents it from learning, and therefore from being sentient.

    • UraniumBlazer@lemm.ee
      4 months ago

      Correct. So basically, you are talking about it adjusting its own weights while talking to you. It does this in training but not in deployment. The reason it doesn’t do this in deployment is to prevent bad training data from worsening the quality of the model; all data needs to be vetted before training.

      However, if you look at the training phase, it does exactly what you said. So in short, it doesn’t adjust its weights in production not because it can’t, but because WE have prevented it from doing so.
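      That "prevented, not incapable" point can be made concrete with a toy one-parameter gradient step behind an explicit switch (a hypothetical setup; real frameworks do the same thing by, e.g., disabling gradient tracking at inference). The update rule still exists at deployment; it is simply never invoked:

```python
def loss_grad(w, x, y):
    # d/dw of (w*x - y)^2 for a one-parameter "model"
    return 2 * (w * x - y) * x

def step(w, x, y, lr=0.1, training=False):
    if not training:
        return w                  # deployment: the capability is switched off
    return w - lr * loss_grad(w, x, y)

w = 0.0
w_train = step(w, x=1.0, y=2.0, training=True)    # weight moves toward the target
w_deploy = step(w, x=1.0, y=2.0, training=False)  # weight stays frozen

assert w_train != 0.0
assert w_deploy == 0.0
```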

      Now, about needing to learn and “mutate” to be sentient in deployment: I don’t think that this is necessary for sentience. Take Alzheimer’s patients. They remember things from decades ago while forgetting recent events. Are they not sentient? An Alzheimer’s patient wouldn’t be able to take up a new skill (which requires adjusting of neural weights). It still doesn’t make them non-sentient, does it?

      • Scubus@sh.itjust.works
        4 months ago

        That’s a tough one. Honestly, and I’m probably going to receive hate for this, but my gut instinct would be that no, they are not sentient in the traditional sense of the word. If you harm them and they can’t remember it a moment later, are they really living? Or are they just an echo of the past?

        • UraniumBlazer@lemm.ee
          4 months ago

          This just shows that we have different definitions of sentience. I define sentience as the ability to be self-aware and to link senses of external stimuli to the self. Your definition involves short-term memory and weight adjustment as well.

          However, there is no consensus on the definition of sentience yet, for a variety of reasons. Hence, neither of our definitions is “wrong”. At least not yet.