Wondering if modern LLMs like GPT4, Claude Sonnet, and llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • lime! (9 upvotes, 4 hours ago)

    i think the first question to ask of this graph is: if “human intelligence” is 10, what is 9? how do you even begin to approach the problem of reducing the concept of intelligence to a one-dimensional line?

    the same applies to the y-axis here. how is something “more” or “less” of a word predictor? LLMs are word predictors. that is their entire point. so are markov chains. are LLMs better word predictors than markov chains? yes, undoubtedly. are they more of a word predictor? um…
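
    for illustration, here's a minimal sketch of a bigram markov chain word predictor (the toy corpus and names are made up for the example). structurally it has the same job description as an LLM, predict the next token from context, just with a far cruder model of that context:

    ```python
    # a minimal sketch: a bigram markov chain "next word predictor"
    # trained on a hypothetical toy corpus.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # count bigram transitions: word -> list of observed next words
    transitions = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        transitions[current].append(nxt)

    def predict_next(word):
        """sample a next word from the observed transitions, or give up."""
        candidates = transitions.get(word)
        return random.choice(candidates) if candidates else None

    print(predict_next("the"))  # e.g. "cat", "mat", or "fish"
    ```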


    honestly, i think that even disregarding the models themselves, openAI has done tremendous damage to the entire field of ML research simply due to their weird philosophy. the e/acc stuff makes them look like a cult, but it matches the normie understanding of what AI is “supposed” to be, and so it makes it really hard to talk about the actual capabilities of ML systems. i prefer to use the term “applied statistics” when giving intros to AI now, because the mind-well is already well and truly poisoned.