I just tried out Gemini.
I asked it several questions of the form ‘are there any things in category x which are also in category y?’
It would often confidently reply ‘No, here’s a summary of things that meet all your conditions to fall into category x, but sadly none also fall into category y’.
Then I would reply, ‘wait, you don’t know about thing gamma, which does fall into both x and y?’
To which it would reply ‘Wow, you’re right! It turns out gamma does fall into x and y’ and then give a bit of a description of how/why that is the case.
After that, I would say ‘… so you… lied to me. ok. well anyway, please further describe thing gamma that you previously said you did not know about, but now say that you do know about.’
And that is where it gets … fun?
It always starts with an apology template.
Then, if it’s some kind of topic that it has almost certainly been manually dissuaded from talking about, it lies again and says ‘actually, I do not know about thing gamma, even though I just told you I did’.
If it is not a topic that it has been manually dissuaded from talking about, it does the apology template and then also further summarizes thing gamma.
…
I asked it ‘do you write code?’ and it gave a moderately lengthy explanation of how it is composed of code, but does not write its own code.
Cool, not really what I asked. Then the command: ‘write an implementation of bogo sort in python 3.’
… and then it does that.
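(For context: bogosort is the joke ‘shuffle until it happens to be sorted’ algorithm. A minimal Python 3 sketch of the kind of thing it produced, my own illustration rather than Gemini’s actual output, looks like this:)

```python
import random

def bogosort(items):
    # Keep shuffling until the list happens to be sorted.
    # Expected number of shuffles is O(n * n!), which is the whole joke.
    items = list(items)
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)
    return items

print(bogosort([3, 1, 2]))  # keep the input tiny, or this may never finish
```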
…
Awesome. Hooray. Billions and billions of dollars for a shitty way to reformat web search results into a conversational form, which is very often confidently wrong and misleading.
Idk why we have to keep re-hashing this debate about whether AI is a trustworthy source or summarizer of information when it’s clear that it isn’t - at least not often enough to justify this level of attention.
It’s not as valuable as the marketing suggests, but it does have some applications where it may be helpful, especially if you make a conscious effort to direct it well. It’s better understood as a mild curiosity and a proof of concept for transformer-based machine learning, something that might eventually lead to more profound results down the road, but certainly not as it exists now.
What is really un-compelling, though, is the constant stream of anecdotes about how easy it is to fool into errors. It’s like listening to an adult brag about tricking a kid into thinking chocolate milk comes from brown cows. It makes it seem like there’s some marketing battle being fought over public perception of its value as a product, completely detached from how anyone actually uses or understands it as a novel piece of software.
Probably it keeps getting rehashed because people who actually understand how computers work are extremely angry and horrified that basically every idiot executive believes the hype, asks their underlings to implement it, and then blames them for doing what they were asked to do when the idea turns out to be really, unimaginably stupid. The idiot executive gets a golden parachute, and the software person gets fired.
That, and/or the widespread proliferation of this bullshit is making stupid people more stupid, and just making more people stupid in general.
Or how all the money and energy spent on this is actively murdering the environment and dooming the vast majority of our species, when it could be put toward building affordable housing or renovating crumbling infrastructure.
Don’t worry, if we keep throwing exponentially increasing amounts of effort at the thing with exponentially diminishing returns, eventually it’ll become God!
Then why are we talking about someone getting it to spew inaccuracies in order to prove a point, rather than the decision of marketing execs to proliferate its use for a million pointless implementations nobody wants at the expense of far higher energy usage?
Most people already know and understand that it’s bad at most of what execs are trying to push it as; it’s not a public-perception issue. We should be talking about how energy-expensive it is, and about curbing its use on tasks where it isn’t anything more than an annoying gimmick. At this point, it’s not that people don’t understand its limitations, it’s that they don’t understand how much energy it’s costing and how it’s being shoved into everything we use without our noticing.
Somebody hopping onto OpenAI or Gemini to get help with a specific topic or task isn’t the problem. Why are we trading personal anecdotes about sporadic personal usage when the problem is systemic, not individualized?
people who actually understand how computers work
Bit idea for moderators: there should be a site- or community-wide auto-mod rule that replaces this phrase with ‘eat all their vegetables’ or something else that is just as un-serious and infantilizing as ‘understand how computers work’.
Your original comment is posted under mine.
I am going to assume you are responding to that.
… I wasn’t trying to trick it.
I was trying to use it.
This is relevant to my more recent reply to you… because it is an anecdotal example of how broadly useless this technology is.
…
I wasn’t aware the purpose of this joke meme thread was to act as a policy workshop to determine an actionable media campaign aimed at generating mass awareness of the economic downsides of LLMs, which wouldn’t fucking work anyway because LLMs are being pushed by a class of wealthy people who do not fucking care what the masses think, and have essentially zero reason at all to change their course of action.
What, we’re going to boycott the entire tech industry?
Vote them out of office?
These people are on video, on record saying basically, ‘eh, we’re not gonna save the climate, not happening, might as well burn it all down even harder, even faster, for a tiny percentage chance it magically figures out how to fix everything afterward’.
…
And yes, I very intentionally used the phrase ‘understand how computers actually work’ to infantilize and demean corporate executives.
Because they are narcissistic, privileged sociopaths who are almost never qualified, who almost always make idiotic decisions that benefit only themselves and an ever-shrinking number of people at the expense of the vast majority who know more and work harder than they do, and who often respond like children having temper tantrums when they are justly criticized.
Again, in the context of a joke meme thread.
Please get off your high horse, or at least ride it over to a trough of water if you want a reasonable place to try to convince it to drink in the manner in which you prefer.
Cool, not really what I asked. Then the command: ‘write an implementation of bogo sort in python 3.’
… and then it does that.
Alright, but… it did the thing. That’s a feature older search engines couldn’t reliably perform. The output is wonky and the conversational style is misleading. But it’s not materially worse than sifting through wrong answers on StackExchange or digging through a stack of physical textbooks looking for Python 3 Bogo Sort IRL.
I agree AI has annoying flaws and flubs. And it does appear we’re spending vast resources doing what a marginal improvement to Google five years ago could have done better. But this is better than previous implementations of search, because it gives you discrete applicable answers rather than a collection of dubiously associated web links.
But this is better than previous implementations of search, because it gives you discrete applicable answers rather than a collection of dubiously associated web links.
Except for when you ask it to determine if a thing exists by describing its properties, and then it says no such thing exists while providing a discrete response explaining in detail how there are things that have some, but not all of those properties…
… And then when you ask it specifically about a thing you already know about that has all those properties, it tells you about how it does exist and describes it in detail.
What is the point of a ‘conversational search engine’ if it cannot help you find information unless you already know about said information?!
The whole, entire point of putting it into a conversational format is to trick people into thinking they are talking to an expert, an archivist with encyclopaedic knowledge, who will give them accurate answers.
Yet it gatekeeps information that it does have access to, simply omitting it.
The old format of providing a bunch of likely-related links to a query is much more reminiscent of doing actual research: it gives no impression that you will immediately find exactly what you want, and it is clearly a tool to aid you in your research process.
This is only an improvement if you want to further unteach people how to do actual research and critical thinking.
copilot did the same with basic math. just to test it I said “let’s say I have a 10x6 rectangle. what number would I have to divide width and height by, in order to end up with a rectangle that’s half the area?”
it said “in order to make it half, you should divide them by 2. so [pointlessly lengthy steps explaining the divisions]”
I said “but that would make the area 5x3 = 15 units which is not half the area of 60”
it said “you’re right! in order to … [fixing the answer to √2 using approximation]”
I don’t know if I said it then, or after some other fucking nonsense, but when I said “you’re useless” it had the fucking audacity to take offense and end the conversation!
like fuck off, you don’t get to have fake pride when you don’t even have the basic fake intelligence you advertise in your own description.
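(For the record, the corrected answer really is right: dividing both width and height by k scales the area by 1/k², so halving the area requires k = √2 ≈ 1.414, not 2. A quick sanity check in Python, my own illustration rather than Copilot’s output:)

```python
import math

w, h = 10, 6
k = math.sqrt(2)          # area scales by 1/k^2, so k = sqrt(2) halves it
print((w / k) * (h / k))  # ~30.0, half of the original 10 * 6 = 60
```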
It’s a perfect encapsulation of the corpo mindset:
Whatever I do is profound, meaningful, with endless possibilities for future greatness…
… even though I’m just talking out of my ass 99% of the time…
… and if you have the audacity, the nerve, to have a completely normal reaction when you determine that that is what I am doing, pshaw, how uncouth, I won’t stand for your abuse!
…
They’ve done it. They’ve made a talking (not thinking) machine in their own image.
And it was not good.
You start a conversation you can’t even finish it
You’re talkin’ a lot, but you’re not sayin’ anything
When I have nothing to say, my lips are sealed
Say something once, why say it again?
And then more money gets spent on adding that additional garbage filter to the beginning and the end of the process, which certainly won’t improve the results.