• dustyData@lemmy.world

    Because humans can do introspection: we think and reflect about our own knowledge, measuring it against the perceived expertise and knowledge of other humans. There is nothing in LLMs capable of doing this. An LLM cannot assess its own state, and even if it could, it would have nothing to contrast that state with. You cannot develop the concept of ignorance without an other to interact and compare with.