
Author
Malik James-Williams
Key Concepts
- ai
- labour
- future-of-work
AI feels new. The labour pattern isn't.
Every few decades, a technology arrives that triggers the same prediction: this one will finally make people obsolete. The technology changes, but the mistake doesn't. We confuse the tool with the work.
When spreadsheet software arrived in the early 1980s, it genuinely eroded the value of manual bookkeeping, but accounting didn't disappear. It changed shape. The spreadsheet took over the calculation layer, while financial logic, anomaly detection, and judgment remained firmly in human hands. The advantage moved toward people who understood the work well enough to direct it, evaluate it, and step in when the tool failed.
AI is following the same pattern, but faster, broader, and reaching deeper into knowledge work than anything before it. In the US, 40% of employees reported using AI at work in 2025, up from 20% in 2023 [2], and globally, one in four workers is in an occupation with some exposure to generative AI [3]. But the same ILO research also points out that most occupations still involve tasks that require human input, making job transformation more likely than wholesale replacement [4].
The real question isn't whether AI changes work. It obviously does. The question is what happens to the people who haven't had enough time at work to understand what they're losing.
The bottom of the ladder is disappearing
A Stanford study using ADP payroll data found that workers aged 22 to 25 in highly AI-exposed occupations experienced a 13% relative decline in employment, while experienced workers in those same occupations remained stable [6]. The declines were concentrated in occupations where AI was used to automate rather than augment work.
That finding deserves more attention than it's getting, because entry-level work has always been how people build the judgment that makes them valuable later. Junior analysts learn what a good analysis looks like by doing hundreds of mediocre ones. Junior writers develop voice by producing a lot of bad copy first. That process looks like busywork from the outside, but it's actually the apprenticeship, and it's how every industry has historically built its pipeline of experienced workers.
If AI eats the bottom of the ladder, we don't just lose entry-level jobs. We lose the mechanism that produces the experienced workers that everyone says AI can't replace. The same companies celebrating that AI can now handle junior-level work are the ones who will be wondering, in five years, why they can't find anyone with the judgment to oversee it. I don't think most organisations have even started thinking about this, let alone have an answer for how to replace that apprenticeship at scale.
IMF analysis of millions of online job postings found that roughly one in ten listings in advanced economies now requires at least one new skill, with AI-related skills tending to raise wages while also deepening labour market polarisation [7]. The gains aren't shared automatically. They accrue to firms and workers who already hold scarce context, decision rights, and higher-value judgment, which means the people best positioned to benefit from AI are the ones who already completed the apprenticeship that AI is now eroding for the next generation.
The execution layer is what's being automated
AI is eating into drafting, summarising, classifying, and generating first-pass output. If your value sits mostly in doing those things at a basic level, you're already in trouble, and that part isn't hypothetical. But most jobs are bundles of tasks rather than a single indivisible thing, and AI tends to attack the repetitive, low-discretion, document-heavy parts first. The near-term pattern looks more like role redesign than full role deletion.
The layer above execution (judgment, context, accountability, and evaluation) is harder to automate than the hype suggests, and it becomes more valuable as generated output gets cheaper. When a model can produce ten plausible answers in seconds, the scarce skill is knowing which one is right, which one is dangerous, and which one only looks right if you don't understand the domain.
I've watched models produce confident financial analyses that missed basic commercial context, the kind of miss an experienced analyst would catch in seconds. The model wasn't broken. It just didn't know what it didn't know, and the person reviewing it didn't have enough domain experience to see the gap.
What survives when the tool gets better
In one system I helped build, we had AI generating client-facing summaries in seconds. The model was good enough to be useful, and the output sounded authoritative. The problem was everything around it. Reviewers didn't have enough domain knowledge to catch when a summary was confidently wrong. There was no feedback loop to surface errors before they reached the client. A summary that was subtly incorrect went out the door looking identical to one that was right.
The fix wasn't a better model. It was a review process designed by someone who understood the work well enough to know where it would fail.
That experience reinforced three capabilities I've seen hold their value through every wave of tool improvement:
Domain knowledge. Enough subject-matter depth to catch what the tool gets wrong. An AI can draft a contract. A lawyer who understands the commercial context knows whether it actually protects the client.
Evaluation. More output doesn't reduce the need for judgment. It multiplies it. Someone has to decide whether the output is correct, useful, and fit for purpose, and the cheaper the output gets, the more this matters.
System design. If a workflow can't catch bad output before it reaches a customer, client, or decision-maker, it's not a serious system.
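That last point is concrete enough to sketch. Below is a minimal, hypothetical review gate in Python: generated output is never released directly, it has to pass domain checks first, and failures are recorded so the workflow has a feedback loop. Every name and check here is an illustrative assumption, not the actual system described above.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewGate:
    """A workflow step that blocks generated output failing domain checks."""
    checks: list                                    # each: callable(text) -> error message or None
    rejections: list = field(default_factory=list)  # feedback loop: surfaced errors

    def release(self, draft: str):
        # Run every check; collect any error messages.
        errors = [msg for check in self.checks if (msg := check(draft))]
        if errors:
            # Record the failure instead of shipping it silently.
            self.rejections.append((draft, errors))
            return None
        return draft

# Example domain check (hypothetical): a summary quoting a figure
# must name its source, or it doesn't go out the door.
def figure_needs_source(text: str):
    if any(ch.isdigit() for ch in text) and "source:" not in text.lower():
        return "contains a figure with no cited source"
    return None

gate = ReviewGate(checks=[figure_needs_source])
assert gate.release("Revenue grew 12% last quarter.") is None            # blocked
assert gate.release("Revenue grew 12% (source: Q3 filing).") is not None # released
```

The design choice the sketch is making: rejections are data, not exceptions, so someone with domain knowledge can review what the gate caught and tighten the checks over time.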
The data doesn't support the apocalypse either
It's worth being honest about what the aggregate data actually shows, because the picture is more nuanced than either the doom narrative or the everything-will-be-fine narrative suggests.
In an OECD survey of SMEs using generative AI, 83% said it had no effect on overall staffing needs, with only 9% reporting decreased needs and 6% reporting increased ones [5]. Twice as many said generative AI increased skill requirements as said it decreased them. The WEF projects that 22% of jobs will be disrupted by 2030, but with a net gain of 78 million jobs [8].
Work is changing shape rather than disappearing. The historical pattern is holding. But the distribution of who benefits and who gets displaced is uneven in ways that should concern anyone thinking seriously about this, because the displacement is concentrating at the bottom of the experience curve, exactly where the people with the least ability to adapt are standing.
What this means in practice
If you're worried about AI taking your job, basic tool literacy matters, but I wouldn't mistake it for a durable advantage. The more mature these tools get, the less valuable interface fluency becomes on its own. The work that holds its value is the work above the interface: judgment, evaluation, domain expertise, and the ability to design systems that don't quietly fail.
If you lead a team or an organisation, I think the question worth asking isn't which roles AI can replace but which parts of each role are routine execution and which parts require judgment you still need to build, supervise, and compound. And more urgently, how do you plan to develop that judgment in new hires when the traditional path for building it is being automated away?
The workers most at risk aren't necessarily the ones using AI least. They're the ones whose value sits mainly in narrow, repeatable execution. The workers with the best odds understand what good looks like, why it matters, and how to build a system around it. That was true before AI. The difference now is speed, and the fact that we're quietly dismantling the way people learn to do the work well before they've had the chance to learn it at all.
References
- [1] Pew Research Center — "U.S. newsroom employment has fallen 26% since 2008" (Mason Walker, 2021) https://www.pewresearch.org/short-reads/2021/07/13/u-s-newsroom-employment-has-fallen-26-since-2008/
- [2] Anthropic Economic Index — September 2025 Report (40% figure via Gallup survey) https://www.anthropic.com/research/anthropic-economic-index-september-2025-report
- [3] ILO — "Generative AI and Jobs: A 2025 Update" https://www.ilo.org/publications/generative-ai-and-jobs-2025-update
- [4] ILO — "Generative AI and Jobs: A Refined Global Index of Occupational Exposure" https://www.ilo.org/publications/generative-ai-and-jobs-refined-global-index-occupational-exposure
- [5] OECD — "Generative AI and the SME Workforce" (Nov 2025) https://www.oecd.org/en/publications/generative-ai-and-the-sme-workforce_2d08b99d-en.html
- [6] Stanford DEL / Brynjolfsson, Chandar, Chen — "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Generative AI" (Nov 2025) https://digitaleconomy.stanford.edu/app/uploads/2025/11/CanariesintheCoalMine_Nov25.pdf
- [7] IMF — "Bridging Skill Gaps for the Future" (Jan 2026) https://www.imf.org/en/publications/staff-discussion-notes/issues/2026/01/09/bridging-skill-gaps-for-the-future-new-jobs-creation-in-the-ai-age-572136
- [8] WEF — Future of Jobs Report 2025 https://www.weforum.org/publications/the-future-of-jobs-report-2025/