Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.
I had lengthy and intricate conversations with ChatGPT about philosophy and religious concepts. It allowed me to playfully peek into Spinoza’s worldview, with a few errors.
I have no problem accepting that it is form, but I cannot deny that it conveys meaning as if it understood.
The article is very opinionated and dismissive in that regard. It even goes so far as to predict what future research and engineering cannot achieve, which makes it untrustworthy.
We cannot even pin down what we mean by intelligence and meaning. Despite being far too long, the article never mentions emergent capabilities or quotes any of the many contrary scientific views.
Apart from the unnecessarily long anecdotes about autistic and disabled people, did anybody learn anything from this article? It feels like an uncritical parroting of what people like to think anyway, so they can feel superior and secure.
LLMs are definitely not intelligent. If you understand how they work, you’ll realise why that is. LLMs reflect the intelligence in the work they are trained on. No more, no less.
That’s especially fun when you ask the same question in two different languages and get different results or even just gibberish in the other, usually non-English language. It clearly has more training data in English than it does for some other languages.
That very much depends on what you define as “intelligent”. We lack a clear definition.
I agree: These early generations of specific AIs are clearly not on the same level as human intelligence.
And still, we can already have more intelligent conversations with them than with most humans.
It’s not a fair comparison, though. It’s as if we compared the language region of a toddler’s brain with the complete brain of an adult. Let’s see what the next few years bring.
I’m not making that point, just mentioning that it can be made on an academic level: there’s a paper about the surprising emergent capabilities of GPT-4, titled “Sparks of AGI”.
That might seem plausible until you read deeply into the latest cognitive science. Nowadays, a growing consensus is forming around the “predictive coding” theory of cognition, whose core idea is that human cognition also works by minimizing prediction error. We have models in our brains that reflect the input we’ve been trained on. I think anyone who understands both human cognition and LLMs cannot yet confidently say that LLMs are or are not intelligent.
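To make the parallel concrete, here is a minimal toy sketch (my own illustration, not from any paper mentioned in this thread) of what “learning by minimizing prediction error” means: a bigram next-word model whose only training signal is the cross-entropy between its predictions and the words that actually come next.

```python
import math

# Tiny corpus; the vocabulary and counts are just illustrative.
corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))

# logits[prev][nxt]: unnormalized score for predicting `nxt` after `prev`.
logits = {p: {n: 0.0 for n in vocab} for p in vocab}

def probs(prev):
    # Softmax over the scores for the word following `prev`.
    z = sum(math.exp(v) for v in logits[prev].values())
    return {n: math.exp(v) / z for n, v in logits[prev].items()}

def loss():
    # Average negative log-likelihood of the next word: the "prediction error".
    pairs = list(zip(corpus, corpus[1:]))
    return -sum(math.log(probs(p)[n]) for p, n in pairs) / len(pairs)

lr = 0.5
for _ in range(200):
    for prev, nxt in zip(corpus, corpus[1:]):
        p = probs(prev)
        for w in vocab:
            # Gradient of cross-entropy w.r.t. each logit: p(w) - 1[w == nxt].
            logits[prev][w] -= lr * (p[w] - (1.0 if w == nxt else 0.0))

print(round(loss(), 3))                       # prediction error after training
print(max(probs("the"), key=probs("the").get))  # most expected word after "the"
```

Before training, the model's prediction error is that of a uniform guess (log of the vocabulary size, about 1.79 here); after training it drops toward the corpus statistics, and the model expects “cat” after “the”. Whether this kind of error minimization amounts to intelligence is exactly the open question in the comment above.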
I’ve read a few texts from the same source and they come across as quite childish.
It felt like reading essays from very young children: there is some degree of coherence, and some information is there, but they lack any real advancement of the subject.