- cross-posted to:
- technology@lemmy.ml
- technology@beehaw.org
- technology@hexbear.net
A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.
The Paper (No “Zero-Shot” Without Exponential Data): https://arxiv.org/abs/2404.04125
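The paper's title points at the core problem: if downstream zero-shot performance grows only logarithmically with the amount of relevant training data, then each fixed improvement costs exponentially more data. A minimal sketch of that relationship (the constants and the `acc` function here are purely illustrative, not taken from the paper):

```python
import math

# Hypothetical log-linear scaling: accuracy = a * log10(n) + b,
# where n is the number of relevant training examples.
# a and b are made-up constants chosen for readability.
a, b = 0.1, 0.2

def acc(n):
    """Illustrative accuracy as a function of data volume n."""
    return a * math.log10(n) + b

# Under this curve, every extra 0.1 of accuracy costs 10x more data:
for n in [1e4, 1e5, 1e6, 1e7]:
    print(f"{n:>12,.0f} examples -> accuracy {acc(n):.2f}")
```

The point of the sketch is the shape, not the numbers: a logarithmic curve means "just add more data" hits diminishing returns quickly.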
On the other hand, if we move from ever-larger models trained on as much data as they can gather to less generic, more task-specific, high-quality datasets, I have a feeling there's still a lot to gain. But quality over quantity takes a lot more effort to maintain.