Reminder: This post is from the Community Actual Discussion. You’re encouraged to use voting for elevating constructive, or lowering unproductive, posts and comments here. When disagreeing, replies detailing your views are appreciated. For other rules, please see this pinned thread. Thanks!

PREFACE:

These dumb chat “A.I.” programs are… not A.I., and even the people selling them recognize that.

THE CRUX:

We don’t have real A.I. - we have generative models trained on massive amounts of data, which in effect compress that data down into a trained model that can then be run to regenerate answers based on what it was trained on. The compression is lossy, because the model itself is far too small to contain the whole of the information it ingests. As a result, it makes things up along the way to fill in the blanks. You can see this in how chat models like ChatGPT will confidently give you incorrect information. Researchers call this “hallucinating”.
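A toy way to see the “fluent but made-up” point: even a dirt-simple Markov chain - which is emphatically NOT how modern LLMs work, it’s just a crude stand-in for “statistical next-token prediction” - will generate grammatical-looking word sequences that were never in its training data. It has no understanding, only counts. Sketch below, with a made-up miniature corpus:

```python
import random
from collections import defaultdict

random.seed(0)

# Tiny made-up "training data".
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# "Training": record which word follows which - a lossy summary of the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, length=8):
    """Walk the chain, picking each next word by its observed frequency."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Fluent-looking output, possibly a "sentence" that nobody ever wrote.
print(generate("the"))
```

Every word it emits came from the training data, yet the combinations can be novel and wrong - that’s the blank-filling behavior, just at a microscopic scale.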

The model doesn’t actually have any core understanding of the material it ingests - it can’t, since it isn’t actually an artificial intelligence. It can infer what things should look like, and it can now do so well enough to start fooling humans into thinking it knows what it’s doing. We’re in the ‘uncanny valley’ of generative language and code models. So that’s one problem: it makes things up without understanding them, and it can’t reliably reproduce correct answers - only things that kinda look correct.

It’s absolutely infuriating to people who actually understand the technology that we’ve taken to calling it “A.I.” at all. It’s a stupid techbro marketing stunt, and unfortunately for all of us it has stuck. Now we all have to call it A.I., and only those of us with the right tech background will understand just how misleading that label is.

The output is still garbage, but it’s dangerously believable garbage.

Remember all those shitty chat bots that circulated around for a while? This is just that, but way more complex and easier to mistake for real intelligence. Now imagine, if you will, an internet full of such chat bots, all set up by techbros and lazy hacks trying to cash in on the sudden easy ability to generate ‘content’ that can get past regular spam filters, at a rate so fast that no human team can keep up with checking it all. They’re pulling this stuff down from the internet en masse to train their buggy models, then submitting it back to places that are indexed online, where the next set of buggy models can ingest it - an infinite Ouroboros of shit. Next thing you know, you can’t trust a damn thing you read anywhere, because it’s all garbage generated from other people’s garbage, and companies like IBM and Microsoft are even getting in on it.

And because the models learn from statistical trends and averages over a large set of data? Guess what? This huge flood of new “A.I.”-generated data is now the norm, and as such it takes precedence over human-generated data, which by its natural limitations cannot keep up with the speed at which the A.I.-generated data is flooding the internet.
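The feedback loop above can be sketched numerically. This is a deliberately crude toy (real model collapse is more subtle, and real models are not Gaussians): fit a simple model to some data, generate “synthetic” data from the fit, then retrain the next generation only on that synthetic output. Each maximum-likelihood refit slightly underestimates the spread, and sampling noise compounds across generations, so the diversity of the data quietly collapses:

```python
import random
import statistics

random.seed(0)

# Generation 0: "human" data - samples from a Gaussian with stdev 10.
data = [random.gauss(0, 10) for _ in range(25)]

stdevs = []
for _ in range(500):
    # "Train" a model: maximum-likelihood fit of mean and spread.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # biased slightly low - the error compounds
    stdevs.append(sigma)
    # The next generation trains ONLY on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(25)]

print(f"spread: generation 0 = {stdevs[0]:.2f}, generation 499 = {stdevs[-1]:.6f}")
```

The rare, interesting tails of the original data are the first thing to vanish - which is the toy version of what happens when models are trained on the flood of model output described above.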

That’s basically what’s happening now, because the average person making decisions about how to leverage this new, lucrative technology for profit doesn’t understand (or care to understand) how it works or why it’s a bad idea. All they see is the short-term dollar signs from getting a leg up on the competition by churning out huge quantities of shit faster and cheaper than any human can, in a market where increasingly only quantity matters, not quality.

It’s already replacing journalists and authors, as newspapers and publishing houses get backed up with a flood of “A.I.”-generated submissions from people trying to cash in on it. A huge amount of recent content on the internet is entirely made up, imagined by these models, and very difficult to tell apart from actual researched information by real, knowledgeable experts. Throw that into the mix with the already problematic ecosystem of disinformation from entities like Cambridge Analytica - and these models are even writing children’s books meant to help human children learn to read - and the future is very bleak indeed.

THINGS I HAVEN’T SPOKEN ABOUT (or only alluded to):

  • The massive power usage
  • Putting it into software that absolutely does not need it
  • “Necromancing” dead people for clicks
  • Making search nigh-unusable
  • Further reducing the value of actual writers
  • Mass layoffs because the idiots in charge think the tech can replace people (Spoiler - no, it can’t)
  • You know those shitty auto-generated “Radiant AI” quests in Skyrim that everyone hated? You know how whenever there’s a randomly generated room in a game how you can tell just by looking at it that it wasn’t designed with any semblance of thought? Like that but they want to use it for everything in games now.

Some Sources:

A ‘Shocking’ Amount of the Web Is Already AI-Translated Trash, Scientists Determine

How Bad Are Search Results?

  • Ace T'Ken@lemmy.caOPM · 5 months ago

    I don’t know about that, but that’s certainly a good opening statement.

    I think one of the few things that would show a “true A.I.” to me would be it doing something it was never programmed to do (or in effect, not being bound by base code).

    Artificial means “created” to me. Any actual intelligence from a machine would be Artificial Intelligence to my mind. What we have currently is simply artificial marketing buzz because that’s what was taught to business grads two years ago.

    Source - recent business school textbooks, which constantly refresh their “next big thing” every two years. Several previous examples: housing market ownership, then VR, then metaverses, then crypto, then NFTs, and most recently A.I. Did you ever wonder why every company comes out with these initiatives at the same time? That’s why.

    I’m not legally allowed to link textbooks, but you can find them without issue as every business school text has these in them. Don’t take my word for it, go look.