Google’s Powerful Artificial Intelligence Spotlights a Human Cognitive Glitch

It is common for people to mistake fluent speech for fluent thought.

People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to accept. How will people navigate this relatively uncharted territory? Because of a persistent tendency to equate fluent expression with fluent thought, it is natural, but potentially misleading, to assume that if an artificial intelligence model can express itself fluently, it must also think and feel the way humans do.

Seen in that light, it is perhaps unsurprising that a former Google engineer recently claimed that Google's AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. The episode, and the media coverage that followed, prompted a number of rightly skeptical articles and blog posts questioning the claim that computational models of human language are sentient, meaning capable of thinking, feeling and experiencing.

The question of what it would mean for an AI model to be sentient is genuinely complicated (see, for example, our colleague's perspective), and our goal in this article is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of assuming that an entity that can use language fluently must be sentient, conscious or intelligent.

Using AI to generate human-like language

Text generated by models such as Google's LaMDA can be hard to distinguish from text written by humans. This impressive achievement is the result of a decades-long effort to build models that generate grammatical, meaningful language.

Early versions, known as n-gram models, dating back to at least the 1950s, simply counted occurrences of specific phrases and used them to estimate which words were likely to occur in a given context. For example, “peanut butter and jelly” is a far more likely phrase than “peanut butter and pineapples.” If you read enough English text, you will see the phrase “peanut butter and jelly” again and again but may never see the phrase “peanut butter and pineapples.”
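
As a rough illustration of how such a counting model works, here is a minimal sketch in Python; the tiny corpus, the trigram window and the function name are our own invention, standing in for the vastly larger text collections real n-gram models were built from.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "enough English material".
corpus = (
    "peanut butter and jelly sandwiches are a classic lunch . "
    "she packed peanut butter and jelly again today . "
    "he spread peanut butter and banana on his toast ."
).split()

# Count how often each word follows a given two-word context (a trigram model).
counts = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    counts[(w1, w2)][w3] += 1

def next_word_probabilities(w1, w2):
    """Estimate P(next word | previous two words) from raw counts."""
    followers = counts[(w1, w2)]
    total = sum(followers.values())
    return {word: n / total for word, n in followers.items()}

print(next_word_probabilities("butter", "and"))
# Roughly {'jelly': 0.67, 'banana': 0.33} -- "pineapples" never appears at all.
```

With enough text, counts like these are all an n-gram model needs in order to prefer “jelly” over “pineapples” after “peanut butter and.”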

Today's models, which are sets of data and rules that approximate human language, differ from these early attempts in several important ways. First, they are trained on essentially the entire internet. Second, they can learn relationships between words that are far apart in a text, not just between neighboring words. Third, they are tuned by an enormous number of internal "knobs", so many that even the engineers who design them struggle to understand why the models generate one sequence of words rather than another.

The models' task, however, remains the same as it was in the 1950s: determine which word is likely to come next. Today, they are so good at this task that almost all of the sentences they generate seem fluent and grammatical.
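
To make "predict the next word" concrete, here is a minimal sketch using the publicly available GPT-2 model through the Hugging Face transformers library; GPT-2 is our own choice here, a small, downloadable stand-in for much larger systems such as LaMDA or GPT-3, which are not open to this kind of inspection.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small public model; larger systems work on the same principle.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Peanut butter and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary item at every position

# Convert the scores at the final position into a probability distribution
# over possible next tokens, then show the five most likely candidates.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")
```

Everything such a model "says" is ultimately assembled from probability distributions like the one this snippet prints.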

Peanut butter and pineapples?

We asked GPT-3, a large language model, to complete the sentence “Peanut butter and pineapples___.” It said: “Peanut butter and pineapples are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.” If a person said this, one might infer that they had tasted peanut butter and pineapple together, formed an opinion and shared it with the reader.

But how did GPT-3 come up with this paragraph? By generating a word that fit the context we provided. Then another. Then another. The model never saw, touched or tasted a pineapple; it simply processed all the texts on the internet that mention pineapples. And yet reading this paragraph can lead the human mind, even the mind of a Google engineer, to picture GPT-3 as a sentient being that can reason about peanut butter and pineapple dishes.
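
That word-by-word process can be made visible by repeating the next-token prediction shown earlier in a loop. The sketch below again uses GPT-2 as a stand-in (we are not reproducing GPT-3's actual interface) and greedily appends the single most probable token at each step.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Start from the prompt and repeatedly append the most probable next token.
ids = tokenizer("Peanut butter and pineapples", return_tensors="pt").input_ids
for _ in range(15):
    with torch.no_grad():
        logits = model(ids).logits
    next_id = logits[0, -1].argmax()            # greedy choice: the top-scoring token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```

At no point does anything in this loop taste, see or remember a pineapple; each step simply picks the most probable continuation given the words so far.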

The human brain is wired to infer meaning from words. Every time you have a conversation, your mind automatically builds a mental model of your conversation partner. You then use the words they say to fill in that model with the person's goals, feelings and beliefs.

The jump from words to mental model is effortless, and it is triggered every time you receive a fully formed sentence. This cognitive process saves you a great deal of time and effort in everyday life, greatly easing your social interactions.

In the case of AI systems, however, it misfires, building a mental model out of thin air.

A little more probing reveals how serious this misfire can be. Consider the prompt: “Peanut butter and feathers taste great together because___.” GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.”

The text in this case is just as fluent as in our pineapple example, but this time the model is saying something decidedly less sensible. One begins to suspect that GPT-3 has never actually tasted peanut butter and feathers.

Giving intelligence to machines while denying it to humans

A sad irony is that the same cognitive bias that leads people to ascribe humanity to GPT-3 can lead them to treat actual humans in inhumane ways. Sociocultural linguistics, the study of language in its social and cultural context, shows that assuming an overly tight link between fluent speech and fluent thinking can lead to bias against people who speak differently.

People who speak with a foreign accent, for example, are often perceived as less intelligent and are less likely to be hired for jobs for which they are qualified. Similar biases exist against speakers of dialects that are not considered prestigious, such as Southern English in the United States, against deaf people who use sign languages, and against people with speech impediments such as stuttering.

These biases are deeply harmful, often lead to racist and sexist assumptions, and have been shown again and again to be unfounded.

Fluent language alone does not imply humanity

Will AI ever become sentient? This is a difficult question that philosophers have debated for decades. What researchers do agree on, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.

This article was first published in The Conversation.
