- cross-posted to:
- technology@lemmy.ml
- technology@beehaw.org
- technology@hexbear.net
A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.
The Paper (No “Zero-Shot” Without Exponential Data): https://arxiv.org/abs/2404.04125
deleted by creator
Didn’t read your comment but you’re dumb
On the other hand, if we move away from larger and larger models trained on as much data as they can gather, toward less generic and more specific high-quality datasets, I have a feeling there's still a lot to gain. But quality over quantity takes a lot more effort to maintain.
The video is more about the diminishing returns from increasing the size of the training set. Accuracy follows a roughly logarithmic curve, so at some point just "adding more data" won't do much: the cost of gathering and training on it grows much faster than the gain in accuracy.
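To make the "logarithmic curve" point concrete, here's a tiny Python sketch. The constants `A` and `B` are made up for illustration (they're not the paper's measured fit); the shape is what matters: if accuracy grows like `A * log(N)`, then each extra point of accuracy costs exponentially more data.

```python
import math

# Hypothetical log-linear scaling constants (assumptions for illustration only,
# not values from the paper).
A = 0.03  # slope of the log-linear fit
B = 0.20  # offset

def accuracy(n):
    """Assumed scaling curve: accuracy = A * ln(n) + B."""
    return A * math.log(n) + B

def examples_needed(target):
    """Invert the curve: data required grows exponentially with the target accuracy."""
    return math.exp((target - B) / A)

# Multiplying the data by 10 buys the same small, fixed bump each time...
for n in (10**6, 10**7, 10**8, 10**9):
    print(f"{n:>13,d} examples -> accuracy ~{accuracy(n):.2f}")

# ...so each further increment of accuracy costs exponentially more examples.
for target in (0.80, 0.85, 0.90):
    print(f"accuracy {target:.2f} needs ~{examples_needed(target):.2e} examples")
```

Under that assumed curve, every 10x increase in data adds the same small amount of accuracy, which is exactly the "exponential data for linear gains" problem the paper describes.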