Turing asked if machines could think. Today, some ask whether they can also suffer

January 5, 2026

And whether simulating pain could help us manage our own

In 1950 Alan Turing posed the question that marked the beginning of a new era: Can machines think? He wasn’t trying to determine whether a machine had something akin to a mind, but whether its behaviour could imitate ours through the imitation game. For Turing, the mind could be treated as a computational process.

Seventy-five years later, AI has surpassed that challenge. It no longer imitates only reasoning, but also emotional expressions, empathetic gestures and even a kind of moral intuition. Now the question is shifting. It’s no longer enough to know whether a machine thinks like us. We want to know whether it feels like us. Or at least, whether its way of processing error, its form of frustration or informational pain, can tell us something about our own suffering.

In Painful Intelligence: What AI Can Tell Us About Human Suffering (available as a PDF), neuroscientist Aapo Hyvärinen explores this shift in perspective from thought to suffering. He proposes that human suffering could be understood as a prediction error in a learning system, and that if AI can manage this kind of error, perhaps we can use it. Not to eliminate pain, but to understand it better.

Can an AI experience suffering?

Hyvärinen’s thesis reframes Turing’s question: can an AI experience suffering? His answer is pragmatic:

Whether AI feels pain in the strict sense doesn’t really matter, he argues. What matters is that it can simulate the mechanisms of suffering, and in doing so, allow us to study them. In this framework, suffering functions as an error signal: a warning that something in our predictions has diverged from reality. Pain would be the inevitable cost of any system that learns.

Within this framework, humans and machines alike learn through error. We project expectations, seek rewards and avoid losses. When reality doesn’t match what we expected, a negative signal emerges: frustration, disappointment, distress... It is the gap between expected and actual reward.
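To make that framing concrete, here is a minimal sketch, not taken from the book, of an agent that updates its expectation from the gap between expected and actual reward; the variable names and the learning rate are illustrative assumptions.

```python
# Minimal sketch of error-driven learning (illustrative; not Hyvärinen's code).
# The negative prediction error plays the role of the "suffering" signal:
# reality delivered less than the system expected.

def update_expectation(expected, actual, learning_rate=0.1):
    """Return the prediction error and the revised expectation."""
    error = actual - expected              # negative when reality disappoints
    return error, expected + learning_rate * error

expected_reward = 1.0                      # what the agent anticipates
for actual_reward in (0.2, 0.3, 0.25, 0.3):
    error, expected_reward = update_expectation(expected_reward, actual_reward)
    print(f"error = {error:+.2f}, new expectation = {expected_reward:.2f}")
```

The error shrinks over time not because reality improves, but because the expectation adjusts; in the book’s terms, the system learns precisely by registering that gap.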

Simulating suffering is not the same as feeling it, but it can reveal how suffering arises in us.

This leads to a key idea in the book. A perfect intelligence would make no errors and therefore would be unable to learn. An intelligence that no longer learns comes close to a kind of cognitive death. In this view, suffering does not oppose knowledge; it fuels it.

Up to this point, the proposal is solid and clear. But Hyvärinen goes further. If we understand suffering as an information processing phenomenon, then perhaps we could reduce it by changing the system’s parameters. This, he says, is where attention and mental observation practices come in.

Attention as a method to reprogram the brain

Hyvärinen argues that mindfulness practices (loosely, meditation) are roughly equivalent to retraining a neural network with new data. If the mind learns to observe without judging, it incorporates different samples into its training set. These reduce the intensity of error signals and, as a result, reduce suffering.

The idea is simple, in theory: changing how we interpret what happens to us is like adjusting the internal algorithm that turns discrepancy into pain. In practice, however, things are more complicated, and this approach has nuances. Studies on these mindfulness practices show mixed results. Some people report benefits, while others experience discomfort, insomnia or nothing at all.

This is why it’s reasonable to approach the proposal with healthy scepticism. These techniques may be helpful in some cases and counterproductive in others. Their value may lie in inviting us to observe how error arises, how reality becomes frustration, and how the mind generates suffering from its own predictions. That would not eliminate pain, but it could help us understand it better.

Changing the mental label turns pain into learning.

The same is true for empathy, understood broadly as a way of reorganising perception. When feedback is confused with threat, it triggers a frustration pattern similar to the one Hyvärinen describes: an unmet emotional expectation. Practising empathy in this context is like relabelling the data. What was once interpreted as aggression or criticism is now seen as information and insight. The fact doesn’t change, the interpretation does, and with it, suffering is reduced.
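As a toy illustration of that relabelling, here is a sketch under my own simplified assumptions (a single event, a scalar "felt value", suffering modelled as the shortfall from expectation); it is not a model from the book.

```python
# Toy sketch of relabelling an interpretation (my simplification, not the book's model).
# The event is unchanged; only the function mapping it to a felt value changes,
# and with it the size of the discrepancy we experience as suffering.

def suffering(expected, felt):
    """Model suffering as the shortfall between expectation and felt value."""
    return max(0.0, expected - felt)

event = "blunt criticism of my work"
expected_warmth = 0.8                        # what we anticipated from the exchange

def interpret_as_attack(event):
    return 0.1                               # "they are against me"

def interpret_as_feedback(event):
    return 0.6                               # "this is information I can use"

print(f"{suffering(expected_warmth, interpret_as_attack(event)):.1f}")    # 0.7
print(f"{suffering(expected_warmth, interpret_as_feedback(event)):.1f}")  # 0.2
```

The numbers are arbitrary; the point is only that the discrepancy, and hence the signal, depends on the interpretation rather than on the event itself.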

Neither techno-utopia nor spiritualism

Part of the book’s conceptual appeal is that it seems to suggest AI could offer a near-scientific path to eliminating suffering, but reading it that way would be a mistake. It would be as simplistic as turning certain introspective practices into a kind of wellness spiritualism. Hyvärinen proposes instead that suffering is the inevitable noise in any system that learns.

Suffering is the inevitable noise of any learning system. It cannot be eliminated, only managed.

The techno-utopian view expects AI to solve the human condition through optimisation. The spiritualist view claims it is enough to silence the mind. Both overlook the fact that without error, there is no learning or moral progress. Hyvärinen offers something more grounded: learning to manage suffering more effectively, just as an algorithm learns to tolerate noise without collapsing.[1]
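One standard way an algorithm "tolerates noise without collapsing" is a robust loss. The sketch below, my own example of the analogy rather than one from the book, compares a squared loss with the Huber loss, which keeps responding to error but stops amplifying extreme values.

```python
import numpy as np

# Squared loss lets outliers dominate; the Huber loss damps them.
# A rough analogue of managing error signals rather than eliminating them.

def squared_loss(err):
    return 0.5 * err ** 2

def huber_loss(err, delta=1.0):
    a = np.abs(err)
    return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))

errors = np.array([0.1, 0.5, 1.0, 5.0, 20.0])   # mostly small errors, two outliers
print("squared:", squared_loss(errors))
print("huber:  ", huber_loss(errors))
```

Under the Huber loss the large errors still register, but they no longer swamp every update, which is roughly the mechanical reading of managing suffering rather than eliminating it.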

AI as a new way to understand how we learn

The value of AI is not in replacing us or creating digital consciousness. Its value lies in being a model of ourselves. By observing how a neural network handles the equivalent of frustration, we have the chance to better understand our own responses to failure, loss or criticism.

That said, the comparison has limits. Andrej Karpathy points out that the space of intelligences is far broader than we usually assume. Animal intelligence, the only kind we knew until now, occupies just a narrow point within that space. Animals, including us, have been shaped by evolutionary pressures: survival, reproduction, danger and social life. The human brain uses mental shortcuts to protect the individual and the group, relying on emotions that support societies, hierarchies and cooperation.

We can better understand how we learn by observing how AI learns.

By contrast, AI models emerge from entirely different pressures. These models aim to statistically imitate human thought, optimise task-specific rewards and, increasingly, align with user preference metrics. They don’t seek food or mates. They seek accuracy, clicks, acceptance. Their intelligence depends on the data and the training objectives. A mistake is not a fatal loss; it is just an update.

So the physical substrate, the algorithm, the goals and the types of evolution are different. Our intelligence comes from natural selection, AI’s from a blend of statistical and commercial selection. Precisely for that reason, Karpathy suggests, these AIs may be our first encounter with a non-animal intelligence, even if shaped by human culture.

In this sense, and returning to Hyvärinen, AI can be understood as a system that amplifies and makes visible our learning patterns. If we know how to use it, we might realise that suffering doesn’t always indicate failure, but is an essential part of learning.

Learning to manage suffering

Hyvärinen’s book is not a self-help guide or a spiritual treatise, although at times it borders on both. Its aim is to reconcile the science of learning with the human experience of suffering. Its thesis doesn’t need to be entirely correct to be valuable. It’s worth reading because it invites us to observe our emotions as signals.

The goal might not be to eliminate suffering (or that type of suffering), but to better understand its informational function. Pain as signal, not punishment. Frustration as feedback, not failure. From this perspective, AI could help us develop a new language for talking about these experiences.

Suffering is a signal, not a punishment. Understanding it improves our ability to learn.

Turing wasn’t asking whether machines could think, but whether we would be able to recognise thought where we didn’t expect to find it. Seventy-five years later, the question has changed in form, but not in substance. Hyvärinen shifts it from thought to suffering, from mind to simulated feeling.

In both cases, the machine prompts the question, and it is up to us to answer. Perhaps AI is not here to suffer or to keep us from suffering, but to remind us that thinking and suffering are part of the same effort to understand what doesn’t fit, what hurts and what forces us to learn.

______
[1] 'Collapse' is used here metaphorically, not in the technical sense of the loss of diversity in generative models discussed, for example, in Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity.