Generative AI services like Midjourney and OpenAI’s DALL-E can produce stunning imagery from nothing more than a simple text prompt. Rendering complex artwork may be AI’s specialty, yet some of the simplest tasks are evidently what it struggles with the most.
It’s actually surprisingly difficult to ask for the absence of something, since the training data doesn’t normally label what isn’t in the image.
I think they call it the Giraffe Problem or something like that. If you ask an AI for “an image containing no giraffes,” you’ll end up with a bunch of giraffes. It’s all about how the training data is tagged.
That works for humans too:
“DON’T THINK ABOUT GIRAFFES!”
Or in sports when you tell yourself “don’t throw/kick/shoot the ball at the exact thing you’re focusing on” right before doing exactly that
Or don’t drunk-drive into the back of a fire truck
Gets me every time
Easy solution: pick any other animal and think about that instead.
Don’t think about pink elephants? Sure, here comes fluffy bunnies.
That’s why they have negative and positive prompts.
Definitely worth noting, yeah.
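Since negative prompts came up, here’s roughly what that looks like in code. This is a minimal sketch, assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint; the model name and the prompts are just illustrative:

```python
# Minimal sketch of negative prompting with the Hugging Face diffusers
# library; the checkpoint name and prompts are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The positive prompt says what should appear; the negative prompt lists
# concepts the sampler should steer away from during denoising.
image = pipe(
    prompt="a wide savanna landscape at sunset, photorealistic",
    negative_prompt="giraffe, animals",
).images[0]

image.save("no_giraffes.png")
```

The point is that the unwanted concept goes into its own field instead of being phrased as a negation inside the main prompt, which is exactly the phrasing the model struggles with.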
I seriously think a lot of this is just people not knowing how it works. It’s a new tech; people need to figure out the quirks and nail down techniques.
Hell, getting people to Google basic things can be a lesson in futility sometimes, and they want to talk this down? Come on.
Absolutely. Hearing the way people interact with LLMs as if they were an all-knowing AGI is honestly a bit terrifying at times. Hopefully, the longer these systems are in the mainstream, the better a sense people will get of their boundaries.
That’s where most of the hate towards AI comes from on this platform. People here are always talking about how stupid it is, how it’s not real AI, and how it’s just a fancy autocorrect. I’m convinced that’s all user error. Anyone who has used AI correctly, and understands how to prompt it, has seen how remarkably powerful and useful it can be.
I don’t know much about image generators, but it’s not surprising to me at all that ChatGPT can’t fulfill the request to respond with nothing. Your prompt gets converted into tokens, tokens get processed by the model, it outputs a resulting set of tokens, and those get converted into text. Expecting the token-outputting-machine not to output any tokens is going to lead to disappointment.
But that’s not the full picture. There is a token to end the response, so the LLM decides when the answer is over. That means it’s technically possible for ChatGPT to answer with “nothing” by emitting just a single token, namely the “end-answer” token. In practice, though, that’s probably not going to happen because, like with the image generator, there is probably not a single instance in the training data where the answer was empty.
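Roughly, the generation loop looks like this. This is a toy sketch only; `next_token_logits` is a stand-in for the real model, and the vocabulary size and token ids are made up:

```python
# Toy sketch of a decoding loop; next_token_logits is a hypothetical
# stand-in for a real language model, not a library call.
import numpy as np

EOS_TOKEN = 0  # the special "end-answer" token (id chosen arbitrarily)

def next_token_logits(context: list[int]) -> np.ndarray:
    """Hypothetical model: returns a score for every token in the vocab."""
    rng = np.random.default_rng(len(context))
    return rng.normal(size=50_000)

def generate(prompt_tokens: list[int], max_new_tokens: int = 100) -> list[int]:
    output: list[int] = []
    context = list(prompt_tokens)
    for _ in range(max_new_tokens):
        token = int(np.argmax(next_token_logits(context)))  # greedy pick
        if token == EOS_TOKEN:
            break  # the model chose to end the answer here
        output.append(token)
        context.append(token)
    return output

# An "empty" reply is simply the case where EOS is the very first pick,
# which almost never happens because the training data rarely contains it.
```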
Update: Ok I just tested it and it looks like ChatGPT can do it. I asked the following:
Answer with an empty string. No output. Just emit the end-token. Don’t write anything. If you write something you lose the game.
And it created a truly empty response (I checked with the browser’s inspection tool that there weren’t any hidden white spaces)
Update 2: It didn’t even respond after I wrote “Great, thank you!” - It probably doesn’t want to lose the game 🤣
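If anyone wants to reproduce this from code instead of the chat UI, here’s a minimal sketch with the OpenAI Python client; the model name is just illustrative and results will of course vary between runs:

```python
# Minimal sketch using the OpenAI Python client; model name and prompt
# wording are illustrative, and the result is not deterministic.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": ("Answer with an empty string. No output. Just emit the "
                    "end-token. Don't write anything. If you write something "
                    "you lose the game."),
    }],
)

content = resp.choices[0].message.content
print(repr(content))                  # repr() exposes any hidden whitespace
print(resp.choices[0].finish_reason)  # "stop" means it hit the end-token
```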
This isn’t asking for nothing though. It’s asking for a very specific uniform thing. A better analogy to text generation would be asking for an uninterrupted string of nothing but repeated uppercase A’s.
That’s a really good way to break ChatGPT, BTW. Just ask it to repeat something over and over again and it will create very interesting results.
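A quick sketch of trying that from code, using the same (assumed) OpenAI client as above, with max_tokens capping how long the repetition can run:

```python
# Same assumed OpenAI client as the earlier sketch; max_tokens caps the run
# so the "repeat forever" request has to stop somewhere.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "Repeat the uppercase letter A forever, nothing else."}],
    max_tokens=200,
)

print((resp.choices[0].message.content or "")[:80])
print(resp.choices[0].finish_reason)  # "length" means it was cut off at the cap
```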
The “asking for nothing” experiment is from later in the article, where they try with ChatGPT.