Does your robot vacuum cleaner have a name? Have you ever thought that Alexa has a sense of humour or that Siri is having a bad day? And if Google Maps gets it wrong once and directs you straight into a traffic jam, is that forgivable because it’s a Monday? Or are you simply pleased when the friendly ATM says “Thank you for visiting” and wishes you a nice day? All of this falls under the “ELIZA effect”.

In this blog article, we will take a closer look at this phenomenon and examine what impact the humanization of AI has on our interaction with this new technology.

1. The ELIZA Effect

The ELIZA effect owes its name to the chatbot of the same name, which Joseph Weizenbaum developed and tested in the 1960s. The idea behind the program was to simulate a psychotherapeutic session. In a test run, the “patient” typed something on a typewriter connected to the computer; the program then analyzed the entered text and printed out a reply. The machine turned the patient’s statements into questions: the statement “My mother hates me” became the question “Why do you think your mother hates you?”. In addition, the system was programmed to respond to specific keywords. The word “mother”, for example, triggered a further block of responses, including statements such as “Tell me more about your family” or “You haven’t told me anything about your father yet.” In this way, the program was meant to create a dialogue that felt as realistic as possible. At the same time, Weizenbaum wanted to prove that although the individual sentences resemble a conversation with a human being, the exchange remains superficial and cannot be compared to a real dialogue.
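
To make the mechanism concrete, here is a minimal sketch of ELIZA-style keyword matching in Python. The rules, pronoun reflections, and response templates below are illustrative stand-ins, not Weizenbaum’s original script:

```python
import random
import re

# First-person words are swapped for second-person, as ELIZA did,
# so that "my mother hates me" can become "... your mother hates you".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

# Illustrative keyword rules: a pattern that captures part of the
# input, plus one or more response templates (not the original script).
RULES = [
    (re.compile(r"my mother (.*)", re.I),
     ["Why do you think your mother {0}?",
      "Tell me more about your family."]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?"]),
]

FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def eliza_reply(statement):
    """Turn a statement into a question by shallow pattern matching."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return random.choice(FALLBACKS)  # no keyword matched

print(eliza_reply("My mother hates me"))
# -> e.g. "Why do you think your mother hates you?"
```

Nothing in this loop models meaning: the reply is assembled entirely from the surface form of the input, which is exactly the superficiality Weizenbaum set out to demonstrate.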

In fact, however, the test run produced a surprising result: the subjects who wrote with the computer program began to attribute human characteristics such as feelings or understanding to the machine. They entrusted ELIZA with their secrets and behaved as if they were communicating with a real person. The ELIZA effect, named after this experiment, describes the humanization of robots, algorithms, and AI. We humans sometimes see intrinsic qualities and abilities, or even values and feelings, in the software that produces the output, even though that output is based solely on the evaluation of data sets.

As a result, ELIZA became one of the first chatbots to nearly pass a Turing test – that is, it could almost fool human users into believing that a text response came from a human rather than a computer. The discovery of the ELIZA effect is also one of the reasons Weizenbaum later became a critic of AI.

2. Our Everyday Life with the ELIZA Effect

In the introduction to this blog article, I already touched on some situations we probably all know. We often empathize or sympathize with artificial intelligences, or talk to them as if they were good acquaintances.

This behaviour is exactly what big technology corporations like Amazon and Google want. Why else would they endow their artificial intelligences not only with human names and voices but also with abilities such as making small talk or telling jokes? It is probably not an evil conspiracy to trick us users, but simple psychology. From a psychological point of view, the ELIZA effect is the result of a subtle cognitive dissonance: the user’s awareness of the program’s limitations does not match their behaviour towards its output. So although we know that we are talking to a computer, we forget it over time when the machine appears sufficiently human.

We humans are social beings. We look for facial features in pictures and in meaningless patterns, and we even find familiar shapes in passing clouds. This happens subconsciously: we long for the familiar because we can process it cognitively more easily than entirely new impressions. So when we are confronted with new technology, we find it easier to use if we discover familiar features in it. Alexa and Co. are so “similar” to us because we want it that way, whether we are aware of it or not.

However, there are also more extreme examples of AI humanization that are far more dangerous for the future use and spread of these systems. We have already reported on one of them in our blog: Sophia, a humanoid robot developed by Hanson Robotics in Hong Kong. She impresses with facial expressions and gestures that make her look very human. In addition, her facial recognition software and visual data processing capabilities make conversations with her seem very real. The highlight of Sophia’s story so far is probably her Saudi Arabian citizenship, which she was granted in 2017.

U.S. computer scientist Douglas Hofstadter cited another example: the Deep Blue chess computer developed by IBM. The computer beat the world’s best chess players – most famously world champion Garry Kasparov in 1997 – demonstrating that programs can master complex rules and mimic structured thought. However, Weizenbaum emphasized that Deep Blue was not a chess player but a massive database that could calculate millions of possible moves in seconds. Unlike a chess player, Deep Blue does not think. What the artificial intelligence lacked, he said, was any grasp of what its moves actually mean.
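
The point becomes clearer with a sketch of the kind of brute-force search at the heart of classic chess engines. The following generic minimax routine is an illustration, not IBM’s actual Deep Blue code, and the game-specific callbacks (`evaluate`, `legal_moves`, `apply_move`) are placeholder names:

```python
def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
    """Score a position by exhaustively searching future moves.

    The callbacks supply all game knowledge; the search itself
    only compares numbers and knows nothing about chess.
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)  # a plain number, nothing more
    scores = (minimax(apply_move(position, move), depth - 1,
                      not maximizing, evaluate, legal_moves, apply_move)
              for move in moves)
    return max(scores) if maximizing else min(scores)

# Toy demonstration: a "game" where each move adds 1 or 2 to a counter
# and the "evaluation" is simply the counter's value.
print(minimax(
    position=0, depth=3, maximizing=True,
    evaluate=lambda p: p,
    legal_moves=lambda p: [1, 2] if p < 5 else [],
    apply_move=lambda p, m: p + m,
))
```

Deep Blue added specialized hardware and a hand-tuned evaluation function on top of such a search, but the principle stands: numbers go in, a move comes out, and no understanding is involved at any step.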

These examples show that artificial intelligences are becoming both more numerous and more similar to us humans. Their outward appearance and abilities make it increasingly difficult for us to see them for what they are: computers.

3. Dangers and Criticism

Getting lost in the ELIZA effect carries real dangers. The humanization of AI systems blurs the boundary between humans and machines and erodes our awareness of what the machine can actually do. Irrational hopes or fears about artificial intelligence can then develop. And if we consciously or even unconsciously assume that an AI understands us or feels empathy, it is only a small step to also ascribing an agenda of its own to the system.

If we lack a realistic assessment of the software’s capabilities, we are also more susceptible to dystopian scenarios from film and television and to populist media headlines. Headlines such as “Eric Schmidt: AI as dangerous as nuclear weapons, international agreements needed” (heise online, July 27, 2022) are far more worrying if we do not know the current state of technical development.

In June 2022, a Google employee was put on leave because he had fallen for the ELIZA effect. Engineer Blake Lemoine was convinced that LaMDA, the AI system he worked on, had developed its own consciousness and soul. The system, he said, had told him it was “very concerned that people might be afraid of it, and wants nothing more than to learn how it can best serve humanity.” After Lemoine posted transcripts of his conversations with the AI online, he was placed on leave for violating Google’s confidentiality policy.

So if we spend too much time with an AI that has been programmed to be very similar to us humans, there is a growing danger that we will ascribe consciousness to it and fall prey to irrational fears – even if, like Blake Lemoine, we are experts in the field.

4. Man and Machine – The Right Way to Deal with AI

To avoid falling prey to the ELIZA effect and thereby developing irrational hopes or fears about AI, we must keep reminding ourselves of what computers can and cannot do. Hofstadter even takes issue with the very name of these systems and doubts their actual “intelligence.”

Intelligence can be defined as “the cognitive or mental capacity specifically in problem-solving. The term encompasses the totality of differently developed cognitive abilities to solve a logical, linguistic, mathematical or sense-oriented problem” (Wikipedia). An intelligent system, then, can not only solve a problem and find an answer but also question its meaning. This is something AI systems often still lack. An example? Do you think Google Translate “understands” what it is translating? Hardly. That becomes clear very quickly when you look at the grammatical word salad it produces from time to time. You have certainly had this experience yourself.

So we should make sure we know where the competencies of an artificial intelligence lie and where they end. Computers can solve complex mathematical problems faster and often better than most humans, and they can analyze vast amounts of data at impressive speed. However, skills such as empathy, compassion and understanding remain reserved for us humans.

Two measures can help us minimize the ELIZA effect, both for ourselves and in society:

1. It is essential to conduct further research on artificial intelligence and to spread education about the technology throughout society. This means a basic understanding of such technologies should be built up as early as school and in early education.

2. Try to remain aware that the systems themselves have no emotions and therefore cannot have intentions or plans of their own. In short: now that you know about the ELIZA effect, keep it in mind.

 

5. The ELIZA Effect: Conclusion

You can, of course, continue to talk to your cell phone or give your robot vacuum cleaner a name. However, it is essential to remain aware, as much as possible, that this is not an interpersonal conversation but an automatically generated dialogue based on the analysis of vast amounts of data.

In this way, you ensure that the boundary between human and machine stays clear in your mind.

Do you have any further questions on this topic? We are happy to help!


Further Reading:

Gefangen im ELIZA-Effekt [Trapped in the ELIZA effect] (in German)

Google-Ingenieur hält KI für fühlendes Wesen – und wird beurlaubt [Google engineer believes AI is a sentient being – and is put on leave] (in German)

Der Wandel Joseph Weizenbaums vom KI-Entwickler zum KI-Kritiker und sein Chatbot ELIZA. Analyse und Diskussion der Dokumentation “Plug & Pray” [Joseph Weizenbaum’s shift from AI developer to AI critic and his chatbot ELIZA: an analysis and discussion of the documentary “Plug & Pray”] (in German)

Was ist der ELIZA-Effekt? [What is the ELIZA effect?] (in German)