In 2022, a Google engineer publicly claimed that a large language model (LaMDA) was sentient. He was fired. Most AI researchers dismissed the claim. But the episode surfaced a question that won't go away: could an AI be conscious? And if it were, how would we know?
This is the hard problem turned outward. We don't know why brains produce consciousness. So we certainly don't know whether a fundamentally different kind of information-processing system — silicon chips, transformer architectures, neural networks — could produce it too.
The answer depends on which theory of consciousness you believe.
If consciousness is about what a system does — the functions it computes, the information it processes — then the substrate doesn't matter. A brain made of neurons, a computer made of silicon, or a hypothetical system made of water pipes could all be conscious, as long as they implement the right functional organization.
Global Workspace Theory leans this direction. If consciousness is global broadcast of information, then any system with the right architecture — specialized processors, a global workspace, broadcast mechanisms — could in principle be conscious.
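To make that functionalist picture concrete, here is a toy sketch of a workspace-style architecture: specialized processors compete for access, and whatever wins is broadcast back to every processor. It is purely illustrative; the class names and the salience rule are invented for this example, and nothing about running it bears on whether such a system is conscious.

```python
# Toy sketch of a global-workspace-style architecture (names and salience
# rule invented for this illustration; not drawn from any GWT implementation).

class Processor:
    """A specialized module that competes to post content to the workspace."""
    def __init__(self, name, sensitivity):
        self.name = name
        self.sensitivity = sensitivity  # how strongly this module responds
        self.received = []              # broadcasts heard from the workspace

    def propose(self, stimulus):
        # Return (salience, content); the most salient proposal wins access.
        salience = self.sensitivity * len(stimulus)
        return salience, f"{self.name}: processed {stimulus!r}"

    def receive(self, broadcast):
        # Every processor hears whatever wins access to the workspace.
        self.received.append(broadcast)


class GlobalWorkspace:
    """Runs the competition and broadcasts the winner to all processors."""
    def __init__(self, processors):
        self.processors = processors

    def cycle(self, stimulus):
        proposals = [p.propose(stimulus) for p in self.processors]
        _, winner = max(proposals)      # competition for workspace access
        for p in self.processors:       # global broadcast of the winner
            p.receive(winner)
        return winner


modules = [Processor("vision", 1.0), Processor("language", 0.8),
           Processor("memory", 0.5)]
workspace = GlobalWorkspace(modules)
print(workspace.cycle("a red apple"))   # vision wins and is broadcast to all
```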
Integrated Information Theory makes a strong prediction: consciousness depends on the physical causal structure of a system, not just what it computes. A digital computer running a simulation of a brain — even a perfect simulation — would not be conscious, because its transistors implement a fundamentally different causal architecture than neurons.
Current AI systems, with their feed-forward architectures and modular designs, likely have very low Φ. Under IIT, your laptop may be less conscious than a worm.
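Φ is meant to quantify how much a system's causal structure is irreducible to its parts. The calculation IIT actually defines is intricate, but a simplified toy measure conveys the flavor: compare how well the whole system's past predicts its future with how well each part, cut off from the rest, predicts its own future. The sketch below is that simplified idea only, not IIT's Φ; the functions and the two example networks are invented for illustration.

```python
# Toy "integration" measure: whole-system past/future mutual information
# minus the sum over the parts of a cut. NOT the Phi defined by IIT, which
# uses specific partitioning schemes and distance measures.

from itertools import product
from math import log2
from collections import Counter

def mutual_information(pairs):
    """I(past; future) for a uniform distribution over the listed pairs."""
    n = len(pairs)
    p_joint = Counter(pairs)
    p_past = Counter(p for p, _ in pairs)
    p_fut = Counter(f for _, f in pairs)
    return sum((c / n) * log2((c / n) / ((p_past[p] / n) * (p_fut[f] / n)))
               for (p, f), c in p_joint.items())

def integration(transition, n_nodes, cut):
    """Whole-system I(past; future) minus the parts' own I(past; future)."""
    states = list(product([0, 1], repeat=n_nodes))
    whole = mutual_information([(s, transition(s)) for s in states])
    parts = 0.0
    for part in cut:
        pairs = [(tuple(s[i] for i in part),
                  tuple(transition(s)[i] for i in part)) for s in states]
        parts += mutual_information(pairs)
    return whole - parts

def swap(s):   # two coupled nodes that exchange states each step
    return (s[1], s[0])

def copy(s):   # two independent nodes that each keep their own state
    return (s[0], s[1])

# Coupled nodes: neither part predicts its own future, but the whole
# predicts itself perfectly -> positive "integration" (2.0 bits).
print(integration(swap, 2, [(0,), (1,)]))
# Independent nodes: the parts already explain everything -> 0.0 bits.
print(integration(copy, 2, [(0,), (1,)]))
```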
Philosopher John Searle argued that consciousness is a biological phenomenon, like digestion or photosynthesis. Just as you can't digest food with a simulation of a stomach, you can't produce consciousness with a simulation of a brain. The biology matters.
Searle's most famous argument, the Chinese Room thought experiment (1980), is designed to show that computation alone is not sufficient for understanding or consciousness:
Imagine a person locked in a room. Through a slot, they receive cards with Chinese characters. They don't understand Chinese, but they have a detailed rulebook that tells them which Chinese characters to send back in response to which input.
From the outside, the room appears to understand Chinese. It gives correct, fluent responses. But inside, there is no understanding — just mechanical rule-following.
Searle's point: this is what a computer does. It manipulates symbols according to rules. No matter how sophisticated the rules, syntax is not semantics. Processing is not understanding. Simulation is not consciousness.
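The mechanics Searle describes can be made almost embarrassingly literal in code: a lookup table mapping input symbols to output symbols. The entries below are invented for illustration; the point is that the program can return fluent-looking replies while nothing in it represents meaning.

```python
# Minimal sketch of mechanical rule-following: input symbols are matched
# against a rulebook and the prescribed output symbols are returned.
# The rulebook entries are invented for this illustration.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "It's nice today."
}

def chinese_room(card: str) -> str:
    # Pure symbol manipulation: match the input, emit the prescribed output.
    # Nothing here represents meaning; it is syntax all the way down.
    return RULEBOOK.get(card, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking reply, zero understanding
```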
Counter-arguments: Critics respond with the "systems reply" — the person in the room doesn't understand Chinese, but the system as a whole (person + rulebook + room) might. Others argue the thought experiment assumes understanding requires something beyond functional organization, which begs the question.
An important distinction: sapience is the capacity to reason, plan, and solve problems; sentience is the capacity to feel, to have subjective experience.
In humans, sapience and sentience occur together. But they may be separable. A system could be extremely intelligent without feeling anything — or, conceivably, could feel something without being particularly intelligent.
Current AI systems excel at sapience-like behavior. Whether they have any sentience is entirely unknown.
A University of Cambridge philosopher has argued that we may never be able to tell whether an AI is conscious — and that the only "justifiable stance" is agnosticism.
The problem is fundamental:
An AI that says "I am conscious" is no more evidence of consciousness than a chatbot that says "I am happy." The words are produced by pattern-matching over training data. Behavior alone cannot resolve the question, because the hard problem is precisely the gap between behavior and experience.
Even without answers, the question has real ethical weight: if machines could suffer, we would have obligations toward them; if we wrongly attribute experience to systems that have none, we risk misdirecting moral concern.
Mathematician and physicist Roger Penrose, working with anesthesiologist Stuart Hameroff, argues that consciousness involves quantum gravitational processes in microtubules within neurons, processes that cannot be replicated computationally. If they are right, machine consciousness is impossible in principle. Most neuroscientists are skeptical of the microtubule hypothesis, but it illustrates how much we don't know.
The machine consciousness debate is, at bottom, the hard problem in a new costume. We don't understand why brains produce consciousness. Until we do, we cannot know whether anything else can too.
What we can say: the question is no longer science fiction. It is a live research problem in philosophy, neuroscience, and AI safety. The decisions we make about it — even the decision to remain agnostic — will have consequences.