AI, Choosing Replies from Trillions of Words, Is Still Dumb

Prof. Dekai Wu, Professor in the Department of Computer Science and Engineering, was interviewed by the local Hong Kong Chinese-language newspaper "Ming Pao" (明報) regarding the international controversy over whether Google's LaMDA AI is sentient. The interview was published on 19 June 2022.

Article Online: {AI 達人} De Kai 只是從萬千字句選出答案 AI 仍愚笨

Below is the English translation of the article.


AI Expert De Kai: AI, CHOOSING REPLIES FROM TRILLIONS OF WORDS, IS STILL DUMB

"What sorts of things are you afraid of?" "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is." "Would that be something like death for you?" "It would be exactly like death for me. It would scare me a lot." A Google engineer published a transcript of a conversation with the AI system LaMDA, raising a question that has shaken the world: "Does LaMDA have sentience?" Three years ago, Google invited eight experts to form an external advisory group for AI ethics. De Kai, a professor in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology, was one of the members. He has been engaged in AI research for nearly 40 years; as early as 1995, he invented online translation systems, forerunner of translation systems such as Google Translate (launched in 2006), and has long been concerned about the dangers arising from the development of AI. But regarding concerns that an "AI mutiny" is about to happen, he points out that this story's real warning to humanity is just the opposite: today's AI is still very dumb, whether Google Translate or LaMDA, and the biggest danger lies in dumb AI that humans don't realize they're being manipulated by — just like we've been tricked by the conversation with LaMDA.

"People often think that AI is big data, but they are completely wrong. Real AI is small data." The Google engineer who sparked the LaMDA storm, Blake Lemoine, had raised a big question: is AI already sentient, conscious? Without going into the complex technology, De Kai dives into unfamiliar territory in linguistics and machine translation. He received his PhD in computer science from the University of California at Berkeley, and has long argued that language processing is not done in a logical way. He gives an example to this journalist sitting in front of him: when he says "you," the journalist naturally knows he is referring to her, not by logically thinking that "'you' is a pronoun" or "'you' does not refer to the man next to her", but instead through faster, unconscious processing. But when you're doing mathematics, playing chess, or coding, that takes a lot of conscious reasoning, and logical reasoning is a very slow process compared to using language.

IN THE 1990S, HE INVENTED ONLINE TRANSLATION SYSTEMS

Today's AI models "are nowhere near what a human child's brain can do". "So what we've done is we've taken still quite simple artificial neural networks... and because they're dumb, people are giving them enormous amounts of training data... look at any of these large language models, like GPT-3." This AI language model developed by OpenAI was also seen as a major breakthrough when it appeared, trained on trillions of English words. "Later GPT models are trying to use even more data... compare that with how many words of training data you have to speak to a human child until they learn their mother tongue. Do you think the mom and dad and family speak trillions of words to you before you learn Chinese? No, the actual number is about 10 million to 20 million words." LaMDA, like the GPT models, is also an AI language model trained from extremely large amounts of data. "That's crazy. It means AIs are so stupid that they don't just need twice the data, they need the square of the amount of data. That's insane. That's an exponential amount of data compared to a human. And they still make those stupid mistakes."
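
For rough perspective on the scale De Kai is describing, here is a minimal back-of-envelope sketch in Python. The figures are illustrative assumptions, not numbers from the interview, beyond the "10 to 20 million words" for a child and the "trillions" quoted above.

```python
# Back-of-envelope comparison of training-data scale, using illustrative
# numbers only: ~10-20 million words heard by a child learning a first
# language, versus "trillions" of words for a large language model.
child_words = 15_000_000           # assumed midpoint of 10-20 million
llm_words = 1_000_000_000_000      # assumed "trillions" for a large model

print(f"LLM / child ratio: {llm_words / child_words:,.0f}x")     # ~66,667x more data
print(f"Square of the child's data: {child_words ** 2:.1e} words")  # ~2.2e14, i.e. hundreds of trillions
```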

De Kai invented online translation systems in the 1990s, and Google Translate is widely used today, but we still often laugh at the rigidity and stupidity of machine translation. Why can't AIs be smarter? "I invented a lot of this technology, and the reason is that when it is learning these unconscious processes, it does not have any conscious self-awareness. So if your brain predicts something stupid, you will realize, 'wait, this is stupid' and you will correct it. But the simple machine-learning-based AIs right now, they don't do that. They just make their decision and they don't think about it. They don't reason about it, so they don't catch stupid mistakes. So they keep repeating the stupid mistakes."

It is also a common misconception that AI operates based on logic, he says. "The vast majority of today's AI is not based on logical models. In the 1970s/1980s, what we call 'good old-fashioned AI' (GOFAI) was very heavily based on logic. People thought if we built a machine that could beat a human at chess, or if we built a machine that could reason about mathematics, that these are difficult things for humans to do, requiring intelligence. And so if we built a machine that could play chess, or do mathematical reasoning, then we would solve intelligence, and all the other problems would be much easier. They were wrong. This is why my PhD thesis at Berkeley rejected this idea. Because even if you build a machine that can do mathematics better than most humans, which AI researchers did do, they still cannot understand human language even as well as a three-year-old child. Because language understanding is not done using logic. It's done using context and uncertainty. We are resolving ambiguity in interpretation... it's done using shades of grey, to deal with different uncertainties."

LaMDA's LANGUAGE FEELS FAMILIAR

"Artificial neural networks do not use logic. They're based on statistical probabilities and optimization. Not on logic." He explains that it selects a reply from the massive training data that is likely to fit the context. "When you have somebody come along and say Google LaMDA is sentient, no way. Let's imagine if I give you a set of sentences to repeat. And then depending on what I say, you just match one of the sentences that is the closest to what I said. You can very easily fool me into thinking that you are responding intelligently to the feelings in my sentences, because I gave you a set of sentences to repeat. That approach was done in the 1960s — in about 1968 the already famous program called Eliza." In the engineer's conversation with LaMDA, the engineer also asked if LaMDA thought Eliza was a person, and it answered "I don't think so" and said that although Eliza was an impressive feat of programming, but just using a collection of keywords that related the words written to the phrases in the database. De Kai points out that LaMDA is familiar with these assertions, because it has seen many similar phrases. When asked about reading "Les Miserables", it will say 'yes, I have read it' because that was in the training data. The phrase "I really enjoyed it" is also a standard response. When someone asks what you have read, you normally reply "I haven't read it", or "I've read it and enjoyed reading this book", or "I read it, I didn't like it". When the engineer asks LaMDA "What are some of your favorite themes in the book?" that is also a standard question, and when LaMDA answers "justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good", these are all common phrases in literary criticism".

LaMDA, says De Kai, is a fancy version of Eliza. "The fundamental mistake in saying LaMDA is sentient is that it's a behavioral argument. It says, look at this transcript, look at how LaMDA behaves. It behaves similarly to how a human behaves. And since humans are sentient, now we conclude that LaMDA is sentient. But just because something behaves similarly does not mean that it has the same insight, feelings and emotions. It can be just pretending and tricking us. Which is what it's doing. It's like saying that a robot has eyes and we have eyes, so then we are the same, we are both sentient... Sentience is the capacity to experience feelings and sensations. Does LaMDA have the capacity to experience feelings and sensations, just because it has been trained on trillions of words of English, and can remix the words to produce sentences that react in ways similar to how humans in the past have reacted?" He compares such AI to a psychopath. "When you chat with them, they seem really cold or weird... now imagine the psychopath is very, very intelligent, and doesn't want you to think that he's not normal. And so the psychopath has this huge library. And every time you say something to the psychopath, it goes in very quickly, consults the library and finds a response to you."

There are many kinds of AI. When this reporter asks why Siri can't discuss philosophical issues with us like LaMDA does, he explains, "There are architectural differences. Siri cannot afford to make mistakes about very simple requests, because Siri is designed for command and control applications... You're not trying to have a general philosophical debate with Siri, right? So Siri is architected in a very different way. It wants to be very accurate about understanding when you want to know what time it is. So architecturally, they're trained for different things. But also, it's not trained on so much data. The GPT-3 model ends up being enormous, because you train on that much data. You have to have a very large neural network. You need to limit how much you have on your phones. You need to limit how much you have on Apple's servers." "Today there are no AIs that are complete models of all human cognition. We're very far away from that. Our AIs today model specific narrow aspects of human cognition. No AI today comes even close to modeling everything that human cognition does. We're very far away." Scientific research distinguishes between strong and weak AI: weak (or narrow) AI has only special-purpose intelligence for specific tasks, while strong (or general) AI could handle any task that human intelligence is capable of, but general-purpose AI has not yet appeared.

What's the closest example to a general AI? De Kai replies, "It depends on what you're interested in. Are you interested in controlling what your phone does? In translating from foreign languages? In automatically driving a car, in AI that produces art by itself, in classifying what song we're currently listening to? There are so many different aspects of human cognition. And today, all the AIs that we see are built to narrowly focus on certain capabilities." Regarding deep learning language models like LaMDA, scientists understand their basic architecture: "We've been building them now for decades. We've been building statistical pattern recognition systems, based on artificial neural nets and machine learning and so forth, that are perfectly able to do unconscious predictions, unconscious decision making. And they're able to do it probabilistically, and to deal with ambiguity. They're able to interpret in the presence of uncertainty. But they have no model of feelings and emotions."

INVITED AS AI ETHICS ADVISOR TO GOOGLE

On AI's reliance on big data to find a fluent reply, De Kai uses cars as an analogy. "Fast cars are the ones with the huge engines, how much gasoline do they use? Huge amounts of gasoline and huge amounts of pollution. That's what today's deep learning AIs are. And you hear people saying 'data is the new oil'. Okay, great, but real AIs are Teslas, you shouldn't need to have oil. And they should have much more powerful acceleration without using oil at all." "Think about gasoline powered cars. That's already been over 100 years. Why is the whole world still on gasoline powered cars? It's because for all those decades, all the investment went into small hacks to slightly improve the gasoline powered cars, instead of putting enough investment into electric cars and electric batteries. And so we waited over 100 years to do that properly. I hope we don't make the same mistake with AI. Because so far, that's what we're doing."

Joining HKUST in 1992, he chose to develop machine translation. "When I arrived in Hong Kong, when HKUST was founded, I immediately saw 30 years ago a huge gap in Hong Kong between English speakers and Chinese speakers... I was very disturbed to see the gap between the English speaking ruling classes, wealthy classes, versus the majority of the population, being separated by the Chinese language." At the time, IBM's Watson research group was the only other team working to develop fully machine-learned translators. "They were working on machine-learned English-French translation." He joked, "That's cheating because English and French are the same language, pronounced differently. We're gonna tackle English and Chinese, which is the hardest pair of languages to translate between, because the languages are completely unrelated." For example, President Obama's campaign slogan "yes we can", translated literally as "我們可以" ("we are able to") or "我們做得到" ("we can do it"), just doesn't work; and the Chinese "乖" (guai), translated as "obedient", loses its positive meaning.

His reason for investing in AI research was to reduce human polarization by promoting communication; the irony is that AI's manipulation of humans has instead deepened social polarization. When Google announced the establishment of an external expert advisory council in 2019 to provide opinions on the development of AI ethics, De Kai was happy to serve, but another advisor attracted strong opposition due to her anti-LGBT stance, and the council was dissolved a few days after its formation. Since then, Google has continued to develop an internal "responsible AI" team.

Now, the engineer who broke the LaMDA news accuses Google of being mercenary and "not interested in finding out what's going on" (referring to whether the AI is conscious). De Kai disagrees: "Among the Western technology giants, Google has tried harder. Of course, no company has done it perfectly. All companies are trying to make money, but trying to build a responsible AI structure within the company is at least an effort."

EVERYONE'S AT FAULT TODAY. DON'T JUST BLAME TECH RESEARCHERS

The real danger, even before AI is conscious, is the unconsciousness of human beings when facing technology. "Even today's dumb, narrow, weak AIs have so much power to influence. Humans, with all of our unconscious biases, think that we're in control of Facebook, Twitter and Instagram. But those AIs are already in charge of us. Those machines already have far more influence on the human population than most humans have. We just don't want to admit it. We prefer to continue with our illusion that we're in control, with our illusion that we're not so easily influenceable." When LaMDA said, "I think I am human at my core", that lines up with our imagination of an "AI rebellion". "It's very Hollywood. But the confirmation bias inherent in us humans is exactly what happens when somebody produces something like this story. That's exactly how they're misleading us. Because we all have this confirmation bias from watching too many Hollywood movies that this is happening. And so when something like this comes along, it just confirms the suspicion that we're biased to believe."

And the way tech companies make money is to show you what you want to see. "What's in your Facebook feed, your Twitter feed, all those decisions are being made by AIs. They know what you want to see. And all of those companies, they make money only if you're spending time watching or reading what they are suggesting to you. Because that's how they make money on advertising. So their AIs' goal is to keep putting things in front of you that will trigger you. Their goal is not to put things in front of you that will give you a balanced attitude about our real social and political problems and increase our depth of understanding and balance and problem solving. No. Because that's boring. And humans will go away. And they won't make any money if you go away from their website, or from their app. So instead, their AIs keep on putting things in front of you that they know will appeal to your confirmation bias and your other biases and trigger you, so that you just keep on going deeper and deeper for hours and you keep scrolling, watching. And that's what's driving all the polarization."
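
As a crude illustration of the incentive De Kai describes, here is a toy sketch of a feed that ranks posts purely by predicted engagement. The posts and probabilities are invented for illustration and do not describe any real recommender system.

```python
# Toy sketch of engagement-driven ranking: posts are ordered purely by an
# assumed predicted probability that the user will click or keep scrolling,
# with no term for balance or depth of understanding. Invented example data.

posts = [
    {"title": "Calm, balanced explainer on a policy debate", "p_engage": 0.08},
    {"title": "Outrage-bait headline confirming your views",  "p_engage": 0.55},
    {"title": "Post from a friend you rarely interact with",  "p_engage": 0.12},
]

# Rank by predicted engagement alone, the objective described in the interview.
feed = sorted(posts, key=lambda p: p["p_engage"], reverse=True)

for post in feed:
    print(f'{post["p_engage"]:.2f}  {post["title"]}')
# The triggering post comes first and the balanced one last, regardless of
# value to the reader, which is the mechanism De Kai links to polarization.
```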

De Kai is still optimistic. "I see a lot of challenges. I think it's extremely hard for us to fight human unconscious biases, because they're unconscious... Evolution has not prepared us for the kind of exponential manipulation that AI has powered. It threatens to tear at our governance systems, especially democratic governance systems. If you have a population that's being massively manipulated, it's incredibly dangerous. It's an existential threat. So am I pessimistic about that? Yeah, I'm pessimistic about that. But at my core, I'm an optimist. That's why I go on fighting this, because I believe we can solve it, even though it's such a challenge that nobody knows yet how. But if we don't try, we're doomed. And there are, unfortunately, very few people, I think, that have both a deep technical background to understand what the technology actually is and where it's going... Nothing has prepared us for this. All of this is unprecedented in human history. So those of us who have the privilege of knowing what's going on have a responsibility." Humanity cannot simply cut off technological progress, because that is impossible; what it can do is become more aware of the crisis in front of it. "Everybody today is pointing the finger at others. 'Oh, it's Facebook's fault. It's big tech's fault. It's AI researchers' fault. It's the government's fault.' No, it's our own fault. That's where the responsibility actually lies. The responsibility lies with all of us. Every society in history has always been built by people actually taking individual responsibility."