Coffeezilla asks: “Is the LAM a Scam? Down the rabbit hole we go.”
Pretty much everything AI is a scam. I mean, it has its uses, but it isn’t exactly as claimed yet. Pretty much every non-phone AI gadget I’ve seen so far definitely is a scam.
If you think that “pretty much everything AI is a scam”, then you’re either setting your expectations way too high, or you’re only looking at startups trying to get the attention of investors.
There are plenty of AI models out there today that are open source and can be used for a number of purposes: generating images (Stable Diffusion), transcribing audio (Whisper), audio generation, object detection, upscaling, downscaling, etc.
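To give a sense of how usable these already are, here’s a rough sketch of transcribing audio with the open-source Whisper model in Python (just an illustration; the model size and file name are placeholders I picked):

```python
# Minimal sketch of local audio transcription with the open-source
# openai-whisper package. "interview.mp3" is a placeholder file name.
import whisper

model = whisper.load_model("base")          # "base" is one of the smaller, faster model sizes
result = model.transcribe("interview.mp3")  # runs language detection + transcription locally
print(result["text"])                       # the transcribed text
```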
Part of the problem might be with how you define AI… It’s way more broad of a term than what I think you’re trying to convey.
I think it’s becoming fair to label a lot of commercial AI “scams” at this point, considering the huge gulf between the hype and the end results.
Open source projects are different due to their lack of commercialisation.
Sure, but don’t let that feed into the sentiment that AI = scams. It’s way too broad of a term that covers a ton of different applications (that already work) to be used in that way.
And there are plenty of popular commercial AI products out there that work as well, so trying to say that “pretty much everything that’s commercial AI is a scam” is also inaccurate.
We have:
Suno’s music generation
NVidia’s upscaling
Midjourney’s Image Generation
OpenAI’s ChatGPT
Etc.

So instead of trying to tear down everything and anything “AI”, we should probably just point out that startups using a lot of buzzwords (like “AI”) should be treated with a healthy dose of skepticism, until they can prove their product in a live environment.
Machine translation has been used by large organizations for years. Anyone saying AI is a scam doesn’t realize it’s been around, and useful, for quite a while.
The issue is AI is just too broad of a term. It’s also not a magic bullet and comes with its own problems so it’s not even the best tool for the job many times.
It’s not too broad imo, there are just a lot of different types. So saying “AI is a scam” is incorrect, because there are types of AI that are enterprise level and are being used right now by all the largest companies in the world to save time and money, like machine translation.
So when I hear someone say it’s a scam, it just tells me they aren’t very familiar with AI and only have experience with it in very limited forms and settings.
I mean, LLaMA is open source and it’s made by Facebook for profit, so there are grey areas. Imo though, any service that claims to be anything more than a fancy wrapper for OpenAI, Anthropic, etc. API calls is possibly a scam. Especially if they’re trying to sell you hardware, or the service costs more than like $10/month; LLM API calls are obscenely cheap. I use a local frontend as an AI assistant that works by making API calls through a service called OpenRouter (basically a unified service that makes API calls to all the major cloud LLM providers for you). I put like $5 in it 3 or 4 months ago and it still hasn’t run out.
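To show how thin that “fancy wrapper” layer really is, here’s a sketch of a single chat completion routed through OpenRouter’s OpenAI-compatible API (the API key, model slug, and prompt here are placeholders, not anything specific to my setup):

```python
# Rough sketch of one chat completion via OpenRouter, using the standard
# OpenAI Python client pointed at OpenRouter's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter endpoint
    api_key="YOUR_OPENROUTER_KEY",            # placeholder; use your own key
)

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",      # any model OpenRouter exposes
    messages=[{"role": "user", "content": "Draft a polite follow-up email."}],
)
print(response.choices[0].message.content)
```

That one call is essentially all most of these “AI services” are doing under the hood.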
I find there’s 4 kinds of folks talking about AI.
There’s folks who think it’s as amazing as all the tech firms tell us:
- And we’re all gonna die
Or
- And life will be amazing
Then there’s folks who think AI is hype whack bananas
- And think it’s a scam.
And lastly,
- The folks who see that we’ve already changed life as we know it with AI. That there’s still massive potential, but folks in categories 1 and 2 (and 3) are all kinda nuts.
4 gang.
There’s a 5th type - those of us who understand that the technology itself isn’t a scam and has valid uses (even if many “AI” startups actually are scams), but think there isn’t that much potential left with current methods due to the extreme amount of data and energy required (which seems to be supported by some research lately, but only time will tell).
That’s in line with type 4. I guess it needs subtypes like 1 and 2 or whatever.
I agree with your view there, but also believe that it won’t take much to get from where we are to where we begin to have novel solutions/approaches to things like quantum computation, superconductors, cold fusion, nuclear fusion, and so on.
Should that happen, I would disagree. Until some other stronghold gives way, I agree.
Quantum computers seem most plausible.
I’m in whatever subset of 4 believes advancements in AI are necessary almost to the point of being an ethical obligation at this point. Transhuman or bust.
4 gang. Or, more accurately (from what I see), 4.1 gang.
I don’t think you realize just how widely used some of those other models are… For instance, in gaming, DLSS is supported on every Nvidia GPU from the 20 series up.
DLSS uses multiple machine learning models for things like predicting object/pixel movement, generating new frames between the ones you would normally be able to achieve, and then upscaling that image. That’s also why you want to download the latest drivers, since those models are better trained for more recently released games.
I wouldn’t call that a “small fraction” by any means.
But maybe you’re referring to the amount of focus the news has on LLMs like ChatGPT?
This is because dedicated consumer AI hardware is a dumb idea. If it’s powerful enough to run a model locally, you should be able to use it for other things (like, say, as a phone or PC) and if it’s sending all its API requests to the cloud, then it has no business being anything but a smartphone app or website.
I can’t agree with that. ASICs can specialize to do one thing at lightning speeds, and fail to do even the most basic of anything else. It’s like claiming your GPU is super powerful so it should be able to run your PC without a CPU.
That’s fair, dedicated ASICs for AI acceleration are totally a valid consumer product, but I meant more along the lines of independent devices (like Rabbit R1 and the AI Pin), not components you can add to an existing device. I should have been more clear.
It’s very much fake it till you make it.
Just go all out and gamble that in 5 years the technology will be there to actually make it all function like you dreamt it would. And by then you’re the de facto name within that space and can take advantage of that.
Go to bed Musk.
No, Google and Amazon were actually well run businesses with sensible business plans to meet needs in the market and did it well.
Sure, but competition is way higher now in the upstart/emergent tech market.
I use ChatGPT occasionally. It’s not a scam; it’s useful for what I need it to do. I’m just not fooled by the notion that these LLMs know factual data or can do much more than generate text. If you accept that, LLMs are pretty darn useful.
Investments in AI are in the billions. With that kind of money flying around, it’s going to attract a lot of snake oil salesmen. It didn’t help that for the general public and investors, any sufficiently advanced technology is indistinguishable from magic, and LLMs reached that point for many.
Just keep the hype cycle in mind. It’ll all go downhill after the peak of inflated expectations. With AI, it always does.
Clueless investors will never stop enabling obvious tech start-up scams.
It’s gambling. They’re gambling addicts. They all want to be on ground zero for the next Google or Amazon.
Sounds like a mnemonic for something.
Artificial, yes. Intelligent, no.
What else do you expect from a company that first started with NFTs?
Of course they are going to scam and gaslight you.
They clearly don’t want you to know that, given that they conveniently renamed their company and announced they don’t want anything to do with crypto right before the Rabbit announcement went live.
BUT THE LAM! People reported on the “large action model” like it was real. It always sounded like bullshit in this case, even if they were selling ideas they feel are obvious and inevitable.
I dunno. It sounds like a somewhat feasible thing that could be kinda useful if done right. It just doesn’t actually exist, which is the problem here. It doesn’t sound too crazy, which is why people bought this thing. The part I struggle with conceptually is that a LAM would essentially weaponize bots, the same thing all these stupid captchas are meant to stop. Also, it would drive users away from websites and therefore away from ads. This would be all-out war, and the money (i.e., websites with ad revenue) would ultimately win, unfortunately.
I agree that it’s logical to do.
And here we are now with Microsoft’s Recall thing. And rumors about Apple giving Siri the ability to see what’s on your screen.
It actually seems inevitable. But Rabbit was spinning tales.
The scammer even looks high AF every time he does an interview. This guy is a fucking joke.
Not sure I agree with high AF, but he definitely sounds like he’s lying or making up bullshit all the time.
I am not surprised that it’s just ChatGPT in a box lol, not at all.
“Fraud” is the correct term here.
Why do his eyebrows (the Rabbit guy’s) look so fake? Is he wearing makeup?