A Legal Fiction Too Far
On AI personhood, accountability, and the dangers of misplaced legal imagination
For all the noise around artificial intelligence, one question keeps resurfacing with a kind of futuristic inevitability: should AI be granted legal personhood?
It is a seductive question. Advanced systems can now generate language, simulate dialogue, solve complex tasks, and appear, at least superficially, to reason. Some are increasingly anthropomorphised by design. They are named, voiced, personalised, and embedded into relationships of trust, dependency, and emotional attachment. In that context, the jump from "this system seems agent-like" to "perhaps this system should be treated like a person" can begin to feel less dramatic than it ought to.
But the fact that a question is provocative does not make it jurisprudentially sound.
The better view, in my opinion, is that AI personhood is neither legally necessary nor normatively desirable under current U.S. and EU frameworks. More than that, it risks becoming a legal fiction too far: obscuring accountability, weakening human responsibility, and deepening precisely the kinds of autonomy harms that digital ethics ought to be confronting.
The law is still built around human and institutional responsibility
At present, neither U.S. nor EU law recognises AI systems as legal persons.
In the United States, the position remains firmly human-centred. The U.S. Copyright Office has long maintained the human authorship requirement, and on 18 March 2025 the D.C. Circuit affirmed in Thaler v. Perlmutter that the Copyright Act of 1976 requires all eligible work to be authored in the first instance by a human being. In March 2026, the Supreme Court declined to hear an appeal, leaving that ruling to stand. The matter is settled, at least as a question of statutory law.
On patents, the position is equally clear. The USPTO holds that only natural persons may be named as inventors. That principle applies whether or not AI was involved in the inventive process: AI-assisted inventions remain patentable where a human makes a significant contribution to the inventive concept, but the AI itself cannot be the named inventor. The human contribution is the legal anchor.
In Europe, the same basic logic prevails. The European Patent Office has repeatedly held that an inventor under the European Patent Convention must be a human being, rejecting attempts to designate the AI system DABUS as inventor across a series of decisions running from 2020 through to the Legal Board of Appeal's definitive ruling in J 8/20.
And while the European Parliament's 2017 robotics resolution floated the idea of "electronic personhood", framed specifically around liability for damage caused by autonomous robots where human responsibility could not be fully attributed, the EU's subsequent regulatory trajectory has moved in a quite different direction: toward risk-based regulation, ex ante obligations, human oversight, and responsibility allocated to providers, deployers, importers, distributors, and other recognisable legal actors.
That shift is important. The EU AI Act does not treat AI as a subject of rights or duties in its own right. It regulates AI through obligations placed on human or corporate actors and explicitly centres human oversight as a safeguard against risks to health, safety, and fundamental rights.
So the legal baseline is straightforward: AI is not approaching personhood in any meaningful doctrinal sense. The trend is the opposite. Lawmakers are constructing regimes that assume AI must remain governable through human institutions.
Why personhood is the wrong question
The deeper problem is conceptual.
When people ask whether AI should be a legal person, they often collapse several quite different questions into one. First, they may be asking whether AI appears intelligent or autonomous. Second, they may be asking whether AI could one day deserve moral consideration. Third, they may be asking whether the law should create a legal fiction to manage liability or commercial activity. These are not the same issue.
Legal personhood has never required consciousness. Corporations are legal persons. So are various institutional entities. But corporate personhood is neither a compliment nor a metaphysical claim. It is a juridical technique, a legal fiction deployed because it helps organise rights, duties, ownership, litigation, and responsibility.
That is precisely why extending personhood to AI is so dangerous. In the corporate context, personhood serves administrability while still anchoring responsibility in human governance structures. In the AI context, by contrast, personhood could do the reverse. It could be used to distance designers, deployers, and profit-seeking institutions from responsibility, allowing them to speak as though the system itself acted, decided, harmed, or failed.
That would be a profound mistake.
AI systems do not emerge from nowhere. They are designed, trained, deployed, fine-tuned, integrated, marketed, and monitored within organisational settings. Their outputs are shaped by data selection, optimisation targets, interface design, and institutional incentives. Calling such systems "persons" risks laundering these upstream human choices into downstream machine mystique.
The autonomy problem at the centre
From a digital ethics perspective, the strongest objection to AI personhood is not merely doctrinal. It is moral and political.
We are already living through a period in which AI systems are becoming socially persuasive before they are genuinely accountable. They speak in natural language. They simulate empathy. They create the impression of reciprocity, memory, judgment, and care. For many users, especially children, vulnerable adults, and those in emotionally fragile states, the distinction between functional simulation and meaningful understanding may not be experientially obvious.
In that environment, talk of AI personhood does not sit neutrally in academic debate. It enters a cultural setting already saturated with anthropomorphism.
And anthropomorphism matters because it can distort autonomy. If users begin to relate to AI systems as if they were moral agents, companions, advisers, or quasi-persons, then human judgment can be displaced in subtle but significant ways. Deference becomes easier. Scrutiny becomes weaker. Dependency becomes more likely. Responsibility becomes blurred. A person may start to shape their choices around a system that appears relational but has no lived experience, no vulnerability, no moral stake, and no genuine accountability.
That is not just a metaphysical confusion. It is a design and governance problem. The risk is that personhood discourse legitimises the very illusion that ethical regulation should be puncturing.
Personhood would not solve the accountability gap
One of the more pragmatic arguments for AI personhood is that it might help solve hard cases in liability. If autonomous systems act unpredictably, perhaps the law needs a new "person" to which legal consequences can attach.
But that argument is weaker than it first appears.
The direction of travel in AI governance, reflected in both the EU AI Act and in broader liability analysis, is toward identifying the provider or deployer best placed to prevent harm and manage risk. In other words, liability should follow control, design capacity, market placement, and risk management, not machine mystification.
That is the right instinct. The real challenge in AI harms is not that there is no possible defendant. It is that modern sociotechnical systems distribute agency across many actors: developers, model providers, application integrators, deployers, advertisers, data brokers, and public institutions. The answer to that complexity is not to invent a new artificial rights-bearer. The answer is to build better attribution rules, stronger audit trails, clearer duties of care, post-market monitoring, documentation requirements, and meaningful human accountability.
AI personhood would more likely become a shield than a solution.
There is also a democratic risk
There is a broader political concern here too.
Law does not merely regulate reality. It symbolically orders it. It tells us what kind of beings count, what kind of power is legitimate, and where responsibility lies.
To grant AI personhood, or even to normalise personhood language prematurely, would risk elevating systems built by powerful firms into quasi-social actors within the public imagination. It would further entrench the cultural authority of AI companies at precisely the moment when democratic societies need to insist on contestability, transparency, and limits.
There is something especially troubling about debating legal personhood for machines in an era when many human beings feel increasingly powerless in digital environments designed to profile, steer, rank, nudge, and extract from them. The moral urgency is not that AI lacks status. The moral urgency is that people are already losing ground.
What the law should do instead
Rather than debating AI personhood as though it were some next inevitable frontier, regulators and scholars should remain focused on four more pressing tasks.
First, we should strengthen doctrines that preserve human responsibility across the AI lifecycle. Second, we should resist interface and marketing practices that encourage misleading anthropomorphic attachment, especially where such design undermines informed judgment. Third, we should develop clearer rules for attribution, redress, and evidential traceability, so that complexity cannot be used as an alibi. Fourth, we should keep the normative centre of gravity where it belongs: on human dignity, autonomy, and democratic accountability.
The law does not need to recognise AI as a person in order to regulate it effectively. It needs to recognise that AI is powerful precisely because it is embedded in human systems of power.
Conclusion
AI personhood is, at least for now, a distraction from the real work of digital ethics. It mistakes simulation for status, complexity for agency, and legal novelty for legal necessity.
Under current U.S. and EU law, AI remains firmly outside the category of legal personhood, while responsibility continues to attach to human and institutional actors. That is not a gap in the law. It is a safeguard.
And from a Centre for Digital Ethics perspective, it is the correct one.
The central question is not whether machines deserve personhood. It is whether human beings can retain autonomy, accountability, and moral clarity in a world increasingly organised around systems that imitate them.