Synthetic Labour
Human in the loop, or human in the line of fire?
Background
A new workplace bargain is emerging.
Employees are being told to use AI to increase productivity, often without meaningful training, clear governance, or a realistic explanation of how these systems work. At the same time, they are told that they remain responsible for checking AI outputs, correcting mistakes, protecting confidentiality, avoiding bias, and ensuring that the final work product is accurate.
This is difficult enough. But the contradiction runs deeper.
Workers are being asked to supervise systems that are marketed as more capable than they are, while still being held responsible when those systems fail. They are expected to trust the machine, defer to the machine, accelerate their work through the machine, and then somehow stand above the machine as its final guarantor.
That is not empowerment. It is risk transfer.
Madeleine Clare Elish’s concept of the “moral crumple zone” helps explain why this is so dangerous. Elish uses the term to describe situations where responsibility for an automated system’s failure is misattributed to a human actor who had limited control over the system. In her metaphor, the human becomes like a crumple zone in a car: the part that absorbs the impact. But unlike a physical crumple zone, which protects the human, the moral crumple zone protects the technological system at the expense of the nearest human operator.
This is exactly the risk now emerging in AI-mediated workplaces.
When AI works, the organisation claims productivity, innovation, and efficiency.
When AI fails, the worker may be blamed for not checking properly.
The new workplace double bind
The modern AI worker is placed in an impossible position.
They are told:
- Use AI because it is faster than you.
- Check AI because it cannot be trusted.
- Rely on AI because productivity now demands it.
- Do not rely on AI too much because you remain accountable.
- Train the system through your work.
- Accept that the system may eventually reduce the need for your role.
This is the workplace version of the moral crumple zone.
The human remains “in the loop”, but often only for the purposes of blame. They may not have sufficient technical knowledge, sufficient time, sufficient authority, or sufficient insight into the system to exercise meaningful control.
Elish’s central point is that automated systems disturb the relationship between control and responsibility. Control becomes distributed across designers, developers, managers, deployers, data systems, interfaces, organisational incentives, and users. Yet responsibility often collapses back onto the visible human operator closest to the failure.
In the workplace, that visible human is the employee.
The myth of meaningful human oversight
Employers often reassure themselves that AI use remains safe because there is a “human in the loop”. But that phrase can conceal more than it reveals.
A human being cannot meaningfully oversee a system merely by being positioned downstream of it. Oversight requires understanding, time, authority, training, and the practical ability to intervene.
Without those conditions, “human oversight” becomes decorative. Worse, it becomes a liability shield.
The worker is not truly empowered to control the system. They are simply placed where blame can land.
This is particularly acute where AI tools are introduced into professional, administrative, legal, medical, educational, financial, or public-sector work. In those contexts, the human worker may be responsible for serious consequences, but may not understand the model’s training data, limitations, error patterns, confidence levels, hallucination risks, or embedded assumptions.
This is not a genuine human-machine partnership. It is a hierarchy in which the machine accelerates the work and the human absorbs the consequences.
Automation, deskilling, and the handoff problem
Elish’s analysis of Air France Flight 447 is especially useful here. In that case, automation had taken over much of the ordinary work of flying. When the system failed, human pilots were expected to intervene under precisely the conditions in which intervention was most difficult: stress, confusion, compressed time, degraded situational awareness, and reduced manual practice.
The lesson for workplace AI is obvious.
Automation does not simply remove tasks. It changes human capability. It can deskill workers, reduce situational awareness, and make them dependent on systems whose failures they are then expected to correct.
The same handoff problem now arises in ordinary office work.
A worker may use AI to draft, summarise, code, analyse, classify, translate, assess, screen, recommend, or decide. Over time, they may lose the habit of doing parts of that work unaided. But when the AI output is wrong, incomplete, fabricated, biased, or misleading, the worker is expected to detect and repair the failure.
The human is asked to intervene at the hardest point: after the system has already shaped the work, compressed the context, and produced an apparently plausible result.
That is a recipe for error. It is also a recipe for burnout.
The psychological hazard
The psychological hazard of workplace AI is not only that workers may be replaced. It is that they may be made responsible for the very process that replaces them, while still being treated as the weak link in the system.
Workers face several overlapping pressures.
First, cognitive overload. They must complete the original task, operate the AI system, evaluate the output, detect errors, manage confidentiality, consider bias, and preserve professional standards.
Second, speed pressure. Once AI is introduced, the expected pace of work often increases. The supposed efficiency gain may not be returned to workers as rest, autonomy, or breathing space. It may be captured as higher output.
Third, status anxiety. Workers must prove they are adaptable and “AI-literate” while also fearing that the more effectively they use AI, the more they demonstrate that their role can be automated.
Fourth, responsibility anxiety. Workers may know that the system is unreliable but also know that failure to use it could be treated as inefficiency. They are trapped between over-reliance and under-adoption.
Fifth, surveillance pressure. Where keystrokes, clicks, screen activity, edits, or workflows are monitored to train AI systems, the worker’s ordinary conduct becomes performative. Every pause, hesitation, correction, and workaround becomes data.
The workplace then becomes a psychologically hostile environment: not necessarily because anyone is shouting, but because the worker is continuously observed, continuously accelerated, and continuously exposed to blame.
The moral hazard
There is also a moral injury dimension.
Workers may be required to use AI systems that they do not trust. They may be asked to apply machine-generated recommendations to clients, customers, patients, students, citizens, or colleagues. They may be asked to sign off on work they did not fully author and cannot fully verify.
Worse still, they may be required to train the systems that will eventually replace them.
This is not ordinary technological change. It is a form of synthetic labour extraction.
The worker’s judgement, skill, tacit knowledge, corrections, habits, and professional instincts are converted into training data. Their labour does not merely produce the employer’s immediate output. It also improves the machine.
The worker therefore performs two jobs at once:
- the visible job they are paid to do; and
- the hidden job of training the system that may later devalue or replace them.
This has a distinct moral character. It asks workers to participate in the erosion of their own bargaining power while calling the process innovation.
The legal hazard
The legal implications are serious.
The first issue is training. In the EU, the AI Act requires providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy among staff and others dealing with AI systems on their behalf. That matters because an instruction to “use AI” is not enough. AI literacy must be role-specific, contextual, and proportionate to the risks of the system.
The second issue is workplace surveillance. Monitoring keystrokes, screenshots, clicks, edits, and work patterns for AI training engages data protection principles of transparency, fairness, necessity, proportionality, purpose limitation, and data minimisation. Workers do not lose their privacy simply because they are at work.
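To make the data minimisation point concrete, here is a minimal sketch, assuming a hypothetical monitoring pipeline written in Python. Everything in it, including the event structure and function name, is our own illustration rather than a description of any real product. The point is simply that aggregation happens before anything is stored, so the intrusive raw record never persists.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical raw event: what an intrusive keystroke logger would capture.
@dataclass
class KeystrokeEvent:
    worker_id: str
    timestamp: float
    key: str  # retaining the actual key pressed is the intrusive part

def minimise(events: list[KeystrokeEvent]) -> dict[str, int]:
    """Keep only a coarse per-worker activity count. The keys and
    timestamps are discarded before storage, so the persisted data
    cannot reconstruct what was typed or when."""
    return dict(Counter(event.worker_id for event in events))
```

The design choice matters: minimisation built in at the point of collection is a safeguard; redaction applied later, after the raw data has already been stored, is not.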
The third issue is performance management. If AI adoption leads to increased productivity targets, employers may face disputes over whether those targets are reasonable, discriminatory, unsafe, or based on flawed assumptions about what AI can reliably do.
The fourth issue is liability allocation. If a worker signs off on AI-generated work that later causes harm, who is responsible? The worker? The manager who required the tool? The employer who deployed it? The vendor who designed it? The compliance team that approved it? The executive team that demanded productivity gains?
Elish’s framework shows why it is dangerous to answer this question by pointing only to the final human reviewer. That person may be the moral crumple zone, not the true locus of control.
Where future employment litigation may arise
Future workplace AI litigation is likely to arise in several areas.
Failure to train
Employees may argue that they were required to use AI without adequate instruction, role-specific guidance, supervision, or time to verify outputs.
Unfair discipline
Workers may challenge warnings, dismissals, or negative reviews based on AI-related errors where the employer failed to provide adequate safeguards.
Constructive dismissal and workplace stress
Where AI deployment increases workload, surveillance, anxiety, or impossible accountability burdens, workers may argue that the employer breached duties of care.
Data protection complaints
Keystroke logging, screen monitoring, behavioural analytics, productivity inference, and AI training based on employee activity may generate claims around transparency, lawful basis, proportionality, and excessive monitoring.
Discrimination claims
AI-driven productivity expectations may disadvantage disabled workers, neurodivergent workers, older workers, carers, workers with language differences, or workers whose roles require slower reflective judgement.
Professional liability disputes
Professionals may argue that employers pressured them to use tools that were not fit for purpose, while still expecting them to carry personal professional responsibility.
Whistleblowing and retaliation
Workers who object to unsafe, misleading, discriminatory, or unlawful AI deployment may need protection from retaliation.
The common thread is the same: responsibility without meaningful control.
Autonomy and responsibility
For the Centre for Digital Ethics, the core issue is autonomy.
AI at work may support autonomy where it reduces drudgery, assists judgement, improves accessibility, and gives workers more control over their time and attention.
But AI undermines autonomy where it is imposed without understanding, used to intensify labour, deployed to monitor behaviour, or designed to extract human knowledge into systems that workers cannot contest.
The moral crumple zone helps sharpen this point. The autonomy harm is not only that the worker is watched or accelerated. It is that the worker may be treated as the author of an outcome they did not meaningfully control.
That is a deep misrecognition of agency.
A person is harmed when they are made answerable for a system they cannot understand, cannot challenge, cannot override, and cannot refuse.
This is why “human in the loop” must never be treated as a sufficient safeguard on its own. The real question is whether the human has substantive autonomy:
- Do they understand the system?
- Can they contest the output?
- Can they override it?
- Do they have enough time to intervene?
- Are they trained for likely failure modes?
- Are responsibility and control aligned?
- Are they protected from unfair blame?
If the answer to any of these questions is no, then the worker is not an overseer. They may become a liability sponge.
Workers’ Rights
The CDE calls for a dedicated AI at Work Rights Framework.
1. A right to AI literacy before AI use is mandated
No worker should be required to use an AI system without adequate, role-specific training.
AI literacy should not be reduced to a generic compliance module. It must be tied to the actual work being performed.
2. A right to meaningful human oversight
Employers should not be permitted to rely on “human oversight” unless the worker has real power to oversee.
Without sufficient time, knowledge, or authority, oversight is symbolic.
3. A right against moral crumple zoning
Workers should have a legal protection against being blamed, disciplined, dismissed, or made professionally liable for AI-mediated failures where they lacked meaningful control.
Responsibility should track actual agency, not mere proximity to the output.
4. A right to refuse unsafe AI use
Workers should be able to refuse to use AI systems where they reasonably believe the system creates legal, ethical, professional, safety, confidentiality, or discrimination risks.
This should be treated as a workplace protection, not insubordination.
5. A right to know when work activity is used to train AI
Workers should receive clear notice where their work, keystrokes, communications, edits, corrections, screenshots, behavioural patterns, or workflow activity are used to train, fine-tune, test, or evaluate AI systems.
6. A right to opt out of behavioural training datasets
Where AI training depends on granular behavioural monitoring, workers should have a meaningful opt-out unless the employer can show a compelling, proportionate, and independently assessed justification.
7. A right to non-replacement transparency
Where employees are asked to train systems that may automate their tasks, employers should be required to disclose that possibility.
Workers should not be conscripted into training their replacements in the name of productivity.
8. A right to collective consultation
AI deployment should trigger consultation duties where it materially affects workload, monitoring, job design, performance assessment, deskilling, redundancy risk, or professional responsibility.
This is particularly important because AI changes not only tools, but power.
9. A right to worker AI impact assessments
Before deploying workplace AI systems, employers should conduct a Worker AI Impact Assessment addressing:
- workload and speed expectations;
- monitoring and the use of worker activity as training data;
- deskilling and the handoff problem;
- the alignment of responsibility with actual control;
- training and AI literacy needs;
- psychological hazards; and
- redundancy and replacement risk.
This assessment should examine the whole human-machine system, not merely the technical tool.
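By way of illustration only, such an assessment can be expressed as a template in which deployment is blocked until every question has been answered in the affirmative. This is a minimal sketch in Python; every field name is an assumption of ours, restating concerns already identified in this paper rather than any existing standard.

```python
from dataclasses import dataclass, fields

@dataclass
class WorkerAIImpactAssessment:
    # None means "not yet assessed"; each field restates a concern above.
    role_specific_training_provided: bool | None = None        # right 1
    oversight_has_time_knowledge_authority: bool | None = None # right 2
    responsibility_tracks_actual_control: bool | None = None   # right 3
    training_data_use_disclosed: bool | None = None            # rights 5 and 6
    deskilling_and_handoff_addressed: bool | None = None
    redundancy_risk_disclosed: bool | None = None              # right 7
    workers_collectively_consulted: bool | None = None         # right 8

    def deployment_may_proceed(self) -> bool:
        """Block deployment while any question is unassessed or failed."""
        return all(getattr(self, f.name) is True for f in fields(self))
```

The structural point is that the default answer is "not yet assessed", not "fine": the burden of completing the assessment sits with the deployer, before deployment, rather than with the worker afterwards.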
10. A right to contest AI-mediated decisions
Workers should have the right to challenge AI-influenced decisions concerning recruitment, promotion, allocation of work, productivity scoring, pay, discipline, redundancy, or performance review.
This right should apply not only to fully automated decisions, but also to hybrid systems where a manager formally decides but the AI materially shapes the decision or outcome.
CDE position
The future of work cannot be built on a contradiction.
Employers cannot claim that AI is powerful enough to transform productivity but harmless enough not to require serious governance. They cannot say that AI is too useful to ignore but too unreliable to accept responsibility for. They cannot ask workers to supervise systems they do not understand, then blame them when those systems fail. And they cannot extract workers’ behavioural knowledge to train replacement systems while presenting that extraction as ordinary workplace efficiency.
This amounts to what might be termed AI-mediated responsibility displacement: a pattern in which workers are required to use, supervise, correct, or train AI systems while responsibility is imposed on them without meaningful control.
It is closely related to Elish’s moral crumple zone. But in the workplace context, it has a specific labour dimension. The worker does not merely absorb blame after an automated failure. The worker may also be monitored, accelerated, deskilled, and converted into training data.
That is why Synthetic Labour is the right term.
It captures the hidden human work and sacrifice beneath the appearance of automation.
Conclusion
The central question is not whether AI can make workers more productive.
The question is who benefits from that productivity, who bears the risk, and who absorbs the blame when the system fails.
If AI reduces drudgery, supports judgement, and gives workers more control over their working lives, it can be genuinely emancipatory.
But if AI is imposed without training, used to intensify work, deployed as surveillance, and structured so that the worker remains responsible for machine failure, then it becomes a new form of labour extraction.
The worker becomes the human residue of automation: still needed, still watched, still blamed, but increasingly denied meaningful agency.
That is not innovation.
That is synthetic labour.