AI Gibberish
Why we will end up with a world of nonsense if we don't start thinking
If you occasionally read the news or posts on social media, you will have noticed one phenomenon being talked about more than anything else: AI.
In fact, AI is the most talked-about topic there is, appearing everywhere and being presented as the future of humanity in just about every conceivable way.
As with most new technologies, there is a tendency to overestimate its potential and believe that this is indeed the Philosophers’ Stone. Of course, the technology is promising and has produced surprising results, often leaving people with the feeling that there really must be something big in it.
But take a closer look at the AI-generated illustration of this article: do you see it? At first glance, everything seems good and well made, but a closer look reveals that most of what fills the rich picture is, in fact, nonsense.
The text elements are almost all wrong: graphically distorted, misspelled, or both. Clearly, the generative AI that produced this did not see them as text and did not try to put meaning into them or correct errors. It simply repeated some visual patterns in an artistically free way.
The image parts are almost as wrong, even though that may be harder to spot, as we are used to slightly distorted images. Our human image recognition is trained to see what isn’t there, to fill in the blanks, so to speak, and to correct small errors. This is a result of the limitations of our visual apparatus, our eyes: they do not provide us with a complete and detailed picture of the world, even though we often believe they do, so the brain has to step in and supply whatever meaning is missing from the information the eyes deliver.
That flaw in the human cognitive system is beneficial for a fast-moving life on the savanna or in the jungle, where we need to react quickly and cannot wait for all the details to be clarified. So the different parts of us collaborate to produce an understanding of the surrounding reality. This understanding is usually partly wrong, but it is good enough for most of us to survive: to avoid being eaten by predators and to find food, in part by catching prey of our own.
We don’t need to see details; we need to understand details.
In this way, the images created by generative AI are really a good simulation of human capabilities, or, and that’s the catch here, of part of them. They provide more or less the same kind of distorted image of the world as our eyes do, but without combining it with the mental processing the human brain would add.
So, we get an incomplete, erroneous picture out of it.
As for the graphical parts, as mentioned, it is not even certain that you have noticed the problems, because you are used to filling in the blanks in graphical elements. It doesn’t matter to your brain whether your eyes see a lion with three legs or four; what matters is that it looks like a lion, you understand that, and you can immediately decide to run.
But text is a different matter. There, we are used to seeing something perfect. Even so, we easily overlook a typo or other small problems in a text, even on a large scale, as demonstrated by the fun exercises that circulate on social media, often under headlines like “95% of people cannot read this. Can you?”, after which you can feel special because you can (as can most people). One widely circulated version of the exercise reads roughly like this: “Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are; the olny iprmoetnt tihng is taht the frist and lsat ltteer be in the rghit pclae.”
It may take a moment to adjust to the strange shape of the text, and it is probably easier for those who have English as their native language, as this is the “underlying” language used. This kind of distorted text illustrates how your brain can adjust to an image of the world that isn’t perfect — it can quickly replace elements and larger fragments of the image with something from memory.
So far, so good. The example distorted text follows some pattern rules. That’s why your brain can decode it.
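For the curious, the rule behind this particular kind of distortion is easy to sketch in a few lines of Python. This is purely an illustration (the function name and the sample sentence are my own choices, not taken from any specific exercise): keep each word’s first and last letters in place and shuffle the interior.

```python
import random

def scramble(word: str) -> str:
    """Shuffle a word's interior letters; keep the first and last in place."""
    if len(word) <= 3:
        return word  # nothing meaningful to shuffle
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

sentence = "reading scrambled words is surprisingly easy for fluent readers"
print(" ".join(scramble(w) for w in sentence.split()))
# e.g. "rdeanig sbcmlraed wrods is snripgliusry esay for felnut raedres"
```

Because the distortion follows one consistent rule, the brain can invert it on the fly.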
The text in the AI-generated image, however, follows no such rules. It is often a blend of different graphical text elements, and it is not always clear what the origin was, because there may not be one single origin. It is just a mix of different texts.
Now consider what happens with other kinds of AI-generated output.
You get some text from ChatGPT, for instance, and it looks like nice text. It does so because the model has absorbed the surface patterns of language so thoroughly that spelling and grammar are almost always observed.
We tend to perceive well-written text as more correct than badly written text. This is a problem for dyslexic people entering social media, and when the blogging wave washed over us, it caused many blogs to be ignored by readers simply because the bloggers behind them could not produce correct text, which made readers assume they were stupid or ignorant. Needless to say, this phenomenon also limits the success of people writing in a language other than their native one.
Perfect writing, on the other hand, leads you to believe that the content is correct. You are won over by the AI’s eloquence.
So, when you read the beautiful text that ChatGPT wrote for you in a couple of seconds, you can easily be so amazed that you forget to check whether the content really is correct. It is all well written, so it leads you to believe that it must be right.
And that is the big danger with generative AI.
Very often, the written text is as bad as the AI-generated image above. We just don’t see it. And if we use these texts in professional contexts, such as making business decisions or presenting evidence and case references in court (as one famous example showed), we can create a very problematic situation indeed.
I believe that everyone who has ever written anything themselves has considered whether what they wrote was correct. This, however, doesn’t seem to happen when we have AI write for us. Maybe we intuitively compare the situation to one where another person writes for us, and just as intuitively expect that the quality check has already been done?
Whatever the problem is, it needs to be corrected!
Anyone using AI-generated text really should read it through before distributing it or basing decisions on it, and they should check every claim it makes.
AI isn’t aware of whether it speaks the truth. It has no ability to understand the difference between truth and lies. It just repeats what it has seen elsewhere, mixed into a new shape. And that new shape may even contain a mix that is, plainly put, pure gibberish.
Words may follow each other in the correct order, but the meaning behind them is completely missing. AI doesn’t know anything, doesn’t mean anything, and doesn’t want to tell you anything; it is simply a mechanism that generates nice patterns, in graphical or text form, just like a kaleidoscope does.
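To make that concrete, here is a deliberately tiny caricature of the idea in Python. It is nothing like a real large language model internally (the corpus and all names here are invented for illustration), but it demonstrates the essential point: a pattern generator can produce locally fluent word sequences while having no notion whatsoever of truth or meaning.

```python
import random
from collections import defaultdict

# Toy bigram "language model": learn which word tends to follow
# which, then generate text by sampling those patterns. The corpus
# is a made-up miniature stand-in for real training data.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# For every word, record the words that followed it in the corpus.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Produce text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = followers.get(words[-1])
        if not candidates:
            break  # dead end: no observed continuation
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# Possible output: "the dog sat on the mat . the cat chased the dog"
# Locally well-formed, globally meaningless: the kaleidoscope effect.
```

Scaled up by billions of parameters, the local fluency becomes nearly perfect, but nothing in the mechanism adds the missing layer of meaning.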
Beautiful, but without any inherent meaning.
Fascinating, but for no good reasons.