Advanced AI can change our relationship with technology. How can we mitigate the risks?

October 30, 2023

OpenAI recently announced the addition of new 'see, hear and speak' capabilities to ChatGPT. With them, the model becomes a multimodal AI: no longer limited to processing and generating text, it can also 'see' images, 'listen' to audio, and speak.

With this evolution, ChatGPT moves towards a more capable and versatile system, expanding its potential applications through image recognition, audio recognition, and audio and image generation with DALL-E, also recently incorporated.

The best use of Artificial Intelligence is the one we make when we are aware of its potential and also of its limitations.

The 'Eliza effect' and its modern manifestation

This evolution of ChatGPT also has an impact on how people perceive and relate to this technology, which can provoke or intensify the Eliza effect.

The Eliza effect refers to the phenomenon of attributing human capabilities to a machine, even when we know we are interacting with a computer.

This phenomenon was documented following the Eliza computer program developed by Joseph Weizenbaum at MIT in 1966. Eliza, the program that gives the effect its name, was designed to parody a therapist's side of a conversation, using preconfigured questions to maintain a basic dialogue with the user.

Advanced AI models foster the Eliza effect, as their capabilities make them appear more human-like.

Many people who interacted with Eliza at the time came to believe that the program somehow understood their problems and emotions, even though the program actually just followed a set of predefined rules (IF, THEN, PRINT…) to generate responses and continue the conversation.

So even though Eliza was simply a text-based conversational program with simple rules, users often interacted with it as if it were a human therapist, confiding in it intimate thoughts and emotions. As Weizenbaum wrote at the time, Eliza "induced powerful delusional thinking in quite normal people".
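To see how little machinery was behind that illusion, here is a minimal sketch of the kind of keyword-and-template rules Eliza relied on. The patterns and responses below are illustrative inventions in the spirit of Weizenbaum's program, not his original script:

```python
import random
import re

# Illustrative keyword -> response-template rules, in the spirit of
# Eliza's script (not Weizenbaum's original rule set).
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)", ["Why do you say you are {0}?"]),
    (r"\b(mother|father|family)\b", ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "Can you elaborate on that?"]

def respond(text: str) -> str:
    """Return a reply using the first matching rule; otherwise fall back
    to a generic prompt. Pure IF/THEN logic -- no understanding at all."""
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            # Echo the user's own words back inside a canned template.
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)
```

A program this simple can still feel attentive, because it mirrors the user's own words back as questions, which is precisely the mechanism behind the Eliza effect.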

“Her”: an extreme case of the Eliza effect and a reflection on human-AI interaction

Spike Jonze's film 'Her' (2013) presents a scenario in which a man falls in love with an advanced Artificial Intelligence digital assistant, known as Samantha.

This relationship transcends the traditional boundaries of human-machine interaction and addresses issues such as love, relationships, and loneliness. The story shows how a person can attribute human qualities to a machine, even when fully aware of its artificial nature.

In “Her”, the Eliza effect is magnified precisely because of Samantha's advanced capabilities. Not only is she an effective assistant, but she also demonstrates emotions, learns, and forms personal connections with the protagonist, who comes to consider her a life companion. It is an extreme case of the Eliza effect.

Recognizing and understanding the Eliza phenomenon is critical to the proper development of AI-based technologies.

"Her", however, serves as a preview of the implications that advanced AI can have on human-machine interaction. The film depicts an AI so realistic that it challenges our current concept of what a 'normal' relationship is: it shows how the lines between humans and machines can blur when AI systems advance to a point where they can replicate or even surpass certain human capabilities.

Measures to mitigate the Eliza effect

As in the movie 'Her', the Eliza effect raises important questions: for example, whether it is ethical for an AI to induce emotions in a human being, especially if those emotions can be misleading or harmful to the person.

This can happen with virtual assistants such as Aura or Siri, and particularly in the case of children, who may form emotional attachments to interactive toys that use basic AI models.

The user experience of an AI plays a major role in human perception, and therefore in the Eliza effect.

It is essential to take care of the user experience and adopt a responsible approach to AI design in order to mitigate the Eliza effect, helping to ensure that:

  1. The AI model is accessible and easy to use, without giving rise to misunderstandings about its artificial nature.
  2. The user understands how the model works and what its capabilities and limitations are.

A proper overall design of AI-based technologies promotes a more informed, aware, and safe interaction with this technology. To achieve this, it is necessary to consider aspects such as:

  • Be transparent in design and operation, so that users are aware that they are interacting with an AI and not with a human being.
  • Set clear boundaries about AI capabilities and limitations to help users understand at all times that they are interacting with a computer program.
  • Provide consistent and realistic responses that help users keep in mind that they are interacting with a machine.
  • Educate users about the capabilities and limitations of AI, and provide information about how it works, what kind of data it uses to generate responses, and how those responses should be interpreted.
  • Design user-centric interfaces that minimize the chances of users attributing cognitive or emotional capabilities to the AI.
  • Regularly review and evaluate AI models to detect whether they foster the Eliza effect.
  • Protect user privacy to mitigate the security consequences of the Eliza effect.
