Advanced AI can change our relationship with technology. How can we mitigate the risks this brings?
OpenAI recently announced the addition of new 'see, hear and speak' capabilities to ChatGPT. This turns the model into a multimodal AI: it is no longer limited to processing and generating text, but can also work with other types of data, 'seeing' images, 'listening' to audio, and speaking.
With this evolution, ChatGPT is becoming a more capable and versatile system, broadening its potential applications through image and audio recognition, as well as audio and image generation, the latter through DALL-E, which was also recently incorporated.
The best use of Artificial Intelligence is the one we make when we are aware of its potential and also of its limitations.
The 'Eliza effect' and its modern manifestation
This evolution of ChatGPT also has an impact on how people perceive and relate to this technology, which can provoke or intensify the Eliza effect.
The Eliza effect refers to the phenomenon of attributing human capabilities to a machine, even when we know we are interacting with a computer.
◾ Example: typing 'please' in ChatGPT before a query or saying 'thank you' when Aura warns that it is going to rain would be mild cases of the Eliza effect.
This phenomenon was first documented with the Eliza computer program, developed by Joseph Weizenbaum at MIT in 1966, which gives the effect its name. Eliza was designed to parody the interaction with a therapist, using preconfigured questions to keep up a basic dialogue between the user and the computer.
Advanced AI models foster the Eliza effect, as their capabilities make them appear more human-like.
Many people who interacted with Eliza at the time came to believe that the program somehow understood their problems and emotions, even though it actually just followed a set of predefined rules (IF, THEN, PRINT…) to generate responses and continue the conversation.
So even though Eliza was simply a text-based conversational program with simple rules, users often interacted with it as if it were a human therapist, confiding in it intimate thoughts and emotions. As Weizenbaum wrote at the time, Eliza “induced powerful delusional thinking in quite normal people”.
👩💻 Simple conversational programs such as Eliza work from a predefined set of rules and patterns programmed by the developers.
○ These rules dictate how the system should respond to different user inputs. For example, respond 'Hello, how are you?' if the user says 'Hello'.
○ This limits the complexity and scope of conversations and can leave the program unable to continue coherently when no applicable rule is found, as in the sketch below.
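As an illustration of how such rule-based programs work, here is a minimal Python sketch in the spirit of Eliza. The specific patterns and canned responses are invented for the example; they are not Weizenbaum's original script.

```python
import re

# A handful of illustrative pattern -> response rules, loosely in the spirit
# of Eliza (these example rules are made up for this sketch).
RULES = [
    (r"\bhello\b|\bhi\b", "Hello. How are you feeling today?"),
    (r"\bi feel (.+)", "Why do you think you feel {0}?"),
    (r"\bmy (mother|father|family)\b", "Tell me more about your {0}."),
]

FALLBACK = "Please, go on."  # used when no rule matches

def respond(user_input: str) -> str:
    text = user_input.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            # Echo the captured fragment back to create the illusion of understanding
            return template.format(*match.groups())
    return FALLBACK

print(respond("Hello"))                  # -> Hello. How are you feeling today?
print(respond("I feel lonely lately"))   # -> Why do you think you feel lonely lately?
print(respond("It rained yesterday"))    # -> Please, go on.
```

Everything the program can say is fixed in advance by its rules; the apparent 'understanding' comes from echoing the user's own words back.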
In contrast, advanced AI models such as ChatGPT are not governed by fixed rules.
○ Instead, they learn from large amounts of data during their training, allowing them to generate responses based on patterns and connections identified within that data.
○ This gives them the ability to generate more complex responses in a more flexible and coherent manner, derived from the vast information they were trained on (see the sketch below).
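To make the contrast concrete, here is a minimal sketch of how a response is obtained from such a model, assuming the OpenAI Python client; the model name and the example message are illustrative choices. There is no table of hand-written rules: the reply is generated from the patterns the model learned during training.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# No predefined response rules: the model generates a reply from the
# patterns it learned during training on large amounts of text.
completion = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "user", "content": "I feel lonely lately."},
    ],
)
print(completion.choices[0].message.content)
```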
“Her”: an extreme case of the Eliza effect and a reflection on human-AI interaction
Spike Jonze's film 'Her' (2013) presents a scenario in which a man falls in love with an advanced Artificial Intelligence digital assistant, known as Samantha.
This relationship transcends the traditional boundaries of human-machine interaction and addresses issues such as love, relationships, and loneliness. The story shows how a person can attribute human qualities to a machine, even when fully aware of its artificial nature.

In “Her”, the Eliza effect is magnified precisely because of Samantha's advanced capabilities. Not only is she an effective assistant, but she also displays emotions, learns, and forms personal connections with the protagonist, who comes to consider her a life companion, making it an extreme case of the Eliza effect.
Recognizing and understanding the Eliza phenomenon is critical to the proper development of AI-based technologies.
"Her", however, serves as a preview of the implications that advanced AI can have on human-machine interaction. The film depicts an AI so realistic that it challenges our current concept of what a 'normal' relationship is: it shows how the lines between humans and machines can blur when AI systems advance to a point where they can replicate or even surpass certain human capabilities.
⚠ Risks and threats
○ The Eliza effect can be exploited to manipulate users, making them trust an AI model more and share personal or sensitive information, believing that they are interacting with an empathetic, sympathetic, and honest entity.
○ Obtaining such information can violate users' privacy, and the collected data can then be used for malicious purposes, such as selling it to third parties, blackmail, or social engineering.
○ Also, the predisposition to view AI as 'human' can be used to influence users' decisions, guiding them towards actions or choices that benefit self-interested or malicious actors, all under the guise of a genuine, well-intentioned interaction.
○ Furthermore, overestimating the capability of advanced AI that 'sees' and 'hears' can mask potential vulnerabilities to attacks that exploit these sensory channels. For example, images that hide malicious prompts that the AI will 'unintentionally' execute when analyzing the image.
Measures to mitigate the Eliza effect
As was the case in the movie 'Her', the Eliza effect raises important questions. For example, whether it is ethical for an AI to induce emotions in a human being, especially if those emotions can be misleading or harmful to the person.
This can happen with virtual assistants such as Aura or Siri, and most particularly in the case of children, who might come to form emotional attachments to interactive toys that make use of basic AI models.
The user experience of an AI plays a very influential role in human perception, and therefore in the Eliza effect
It is essential to take care of the user experience and adopt a responsible approach to AI design to mitigate the Eliza effect, helping to ensure that:
- The AI model is accessible and easy for users to use, without giving rise to misunderstandings about its artificial nature.
- The user is clear about how it works and what the capabilities and limitations of that AI are.
A proper overall design of AI-based technologies promotes a more informed, aware, and safe interaction with this technology. To achieve this, it is necessary to consider aspects such as:
- Transparency in its design and operation, so that users are aware that they are interacting with an AI and not with a human being.
Transparency aligns user expectations with reality
- Set clear boundaries about the AI's capabilities and limitations to help users understand at all times that they are interacting with a computer program (a minimal prompt-level sketch of this idea appears at the end of this section).
- Provide consistent and realistic responses that help users keep in mind at all times that they are interacting with a machine.
If an AI provides responses that are too human-like or emotional, it will encourage the Eliza effect.
- Educate users about the capabilities and limitations of AI and provide information about how it works, what kind of data it uses to generate responses, and how those responses should be interpreted.
- User-centric interfaces minimize the chances of users attributing cognitive or emotional capabilities to AI.
For example, an intentionally artificial or neutral tone of voice will help to mitigate this phenomenon
- Regularly reviewing and evaluating AI models makes it possible to detect whether they foster the Eliza effect.
Ongoing evaluation makes it possible to intervene and change the model or interface if necessary.
- Ensure protection of user privacy to mitigate the security consequences of the Eliza effect.
Users can then trust that, even if the effect occurs, their personal information will not be collected or misused.
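Some of these measures, such as transparency and clear boundaries, can also be applied at the prompt or configuration level. Below is a minimal sketch, again assuming the OpenAI Python client; the model name and the wording of the system prompt are illustrative, not a prescribed formula.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Illustrative system prompt that encodes transparency and boundaries:
# disclose the assistant's artificial nature and keep a neutral tone.
SYSTEM_PROMPT = (
    "You are an AI assistant, not a person. If asked, state clearly that you "
    "are a computer program. Do not claim to have feelings or personal "
    "experiences, keep a neutral and factual tone, and point out your "
    "limitations when they are relevant."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Do you ever feel lonely?"},
    ],
)
print(response.choices[0].message.content)
```

Instructions like these do not eliminate the Eliza effect on their own, but they push the system towards responses that keep its artificial nature visible to the user.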