Predictions about the potential impacts of generative AI may be hugely overblown because of "many serious, unsolved problems" with the technology, according to Gary Marcus, one of the field's leading voices.
Let me expand a little bit.
Ultimately these models come down to predicting the next token in a sequence. Tokens for a language model can be words, characters, or, most commonly, chunks of characters. For example, the word “Lemmy” might be split into “lem” + “my”.
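If you want to see real tokenization, OpenAI's open-source tiktoken library exposes the byte-pair encodings the GPT models use. Here's a minimal sketch; the exact splits depend on which vocabulary you load, so “Lemmy” may come out differently than in the example above:

```python
# Peek at real subword tokenization using OpenAI's open-source
# tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("gpt2")  # GPT-2's byte-pair-encoding vocabulary

for text in ["Lemmy", "my favorite website is"]:
    ids = enc.encode(text)                    # text -> list of token ids
    pieces = [enc.decode([i]) for i in ids]   # each id back to its chunk
    print(f"{text!r} -> {pieces}")
```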
So let’s give our model the prompt “my favorite website is”
It will then predict the most likely next token and append it to the input, building up a cohesive answer one token at a time. This is where the T in GPT (Transformer) comes in: the transformer takes the text so far and outputs a vector of probabilities, one for each token in its vocabulary. Step by step, the loop looks like this (there's a toy sketch of it in code after the walkthrough):
“My favorite website is”
"My favorite website is "
“My favorite website is lem”
“My favorite website is lemmy”
“My favorite website is lemmy.”
“My favorite website is lemmy.org”
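Here's a toy sketch of that loop in Python. The “model” is a stand-in with hand-invented probabilities (a real transformer computes them from billions of learned weights), and it greedily takes the single likeliest token at each step:

```python
import numpy as np

# Tiny toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = [" ", "lem", "my", ".", "org", "<end>"]

def fake_model(text: str) -> np.ndarray:
    """Stand-in for a transformer: return one probability per token in
    VOCAB. These numbers are invented purely for illustration."""
    table = {
        "my favorite website is":        [0.90, 0.02, 0.02, 0.02, 0.02, 0.02],
        "my favorite website is ":       [0.02, 0.90, 0.02, 0.02, 0.02, 0.02],
        "my favorite website is lem":    [0.02, 0.02, 0.90, 0.02, 0.02, 0.02],
        "my favorite website is lemmy":  [0.02, 0.02, 0.02, 0.90, 0.02, 0.02],
        "my favorite website is lemmy.": [0.02, 0.02, 0.02, 0.02, 0.90, 0.02],
    }
    return np.array(table.get(text, [0.0, 0.0, 0.0, 0.0, 0.0, 1.0]))

text = "my favorite website is"
while True:
    probs = fake_model(text)               # the vector of probabilities
    token = VOCAB[int(np.argmax(probs))]   # greedy: take the likeliest
    if token == "<end>":
        break
    text += token
    print(repr(text))
```

Running it prints exactly the sequence above, ending at “lemmy.org”.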
Woah, what happened there? That’s not (currently) a real website. Finding out exactly why the last token was “org”, which resulted in hallucinating a fictitious website, is basically impossible. The model might not have been trained long enough, it might have been trained too long, there might be too little training data around those tokens, the training data might be polluted, etc. These models are massive, so determining why it’s incorrect in this particular case is tough.
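Part of what makes this slippery is that deployed models usually don't even take the single likeliest token; they sample from the probability vector. With invented numbers, suppose that after “lemmy.” the model spreads its probability across a few candidates; run this a few times and the pick changes:

```python
import numpy as np

rng = np.random.default_rng()

# Invented numbers: suppose after "...lemmy." the model spreads its
# probability across these candidate next tokens.
tokens = ["org", "com", "net", "<end>"]
probs = [0.40, 0.35, 0.15, 0.10]

# Chat models are usually run with sampling, not argmax, so "org"
# only wins some of the time.
for _ in range(5):
    print(rng.choice(tokens, p=probs))
```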
But fundamentally, it made up the first half too; we just happen to like that output. Tomorrow someone might register lemmy.org, and then it’s not a hallucination anymore.