
Author
Malik James-Williams
Prompting Is Not the Moat
Prompt engineering courses are booming, but the skill they teach is depreciating fast. The durable value in AI sits above the interface.
There are now courses, bootcamps, and certifications built around the promise that learning to talk to AI properly will keep you valuable. Basic AI literacy genuinely matters, but the specific skill being marketed has one of the shortest shelf lives in the industry right now.
On Indeed, searches for "prompt engineer" spiked to 144 per million job queries in April 2023 [11]. By 2025, they'd fallen to 20-30 per million. The role didn't disappear because the skill became useless. It disappeared because the skill became table stakes, the kind of baseline competency that gets absorbed into every role rather than sustaining a dedicated one.
The melting ice cube
This pattern has precedent. Before online legal research became the standard, access to case law was itself a significant advantage. Lexis and Westlaw made that layer dramatically cheaper. Lawyers didn't become obsolete: the interface improved while the judgment behind it remained scarce. Finding the relevant case became trivial; knowing whether it actually mattered to the client stayed just as hard.
AI is broader than legal research, spanning writing, coding, analysis, planning, and a significant share of knowledge work, but the same dynamic applies. The better the tool gets at understanding what you mean, the less value there is in mastering its awkwardness.
You can see this in how AI products are evolving. OpenAI's 2025 developer roundup described a shift from "prompting step-by-step" toward delegating work to agents as models improved at planning, tool use, and longer-horizon tasks [1]. Anthropic has moved in a similar direction, from prompt engineering toward what they call context engineering, where the hard problem is shaping the full environment around the model rather than just wording the prompt more carefully [5]. That's what better tools are supposed to do: push the skill requirement up a layer.
Prompting still matters
OpenAI and Anthropic both publish detailed guidance on prompting because prompt quality still affects output quality, steerability, and consistency [2, 5]. I don't think prompt engineering is worthless, and I don't think the people teaching it are wrong when they say it helps. But both companies point to an important truth: not every failure should be solved with a better prompt.
OpenAI now treats evals as essential infrastructure for understanding how your application actually performs [3, 4]. Anthropic notes that some failures are better addressed by choosing a different model, simplifying the task, or improving the surrounding setup rather than endlessly iterating on phrasing [5]. Prompt engineering is a real skill, but the question is how long the current version of it remains the bottleneck, and from what I've seen, that window is closing faster than the training market has acknowledged.
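The eval idea is concrete enough to sketch. A minimal harness might look like the following; the `run_model` function, the test cases, and the exact-match grader are all hypothetical stand-ins for whatever your application actually calls and measures:

```python
# Minimal eval harness sketch. `run_model` stands in for the model call
# your application makes; the cases and grader are purely illustrative.

def run_model(prompt: str) -> str:
    # Placeholder: a real system would call a model API here.
    return "4" if "2 + 2" in prompt else "unknown"

CASES = [
    {"prompt": "What is 2 + 2? Answer with a number only.", "expected": "4"},
    {"prompt": "What is the capital of France? One word.", "expected": "Paris"},
]

def exact_match(output: str, expected: str) -> bool:
    # Simplest possible grader; real evals often need fuzzier scoring.
    return output.strip().lower() == expected.strip().lower()

def run_evals(cases) -> float:
    results = [exact_match(run_model(c["prompt"]), c["expected"]) for c in cases]
    return sum(results) / len(results)  # pass rate over the suite

if __name__ == "__main__":
    print(f"pass rate: {run_evals(CASES):.0%}")
```

The point is not the grader but the habit: a versioned suite of cases you re-run on every prompt, model, or pipeline change, so "did it get better?" is a measurement rather than a feeling.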
Where the bottleneck actually lives
In production systems, the bottleneck is rarely "we need a better prompt." It's usually something more structural. Does anyone know whether this system is actually working? Does anyone know when it fails? Is there someone accountable when fluent output turns out to be wrong? Does anyone reviewing the output understand the domain well enough to spot what the model confidently gets wrong?
I've watched teams spend months improving prompts for systems that had no meaningful monitoring, weak review processes, and nothing in place to catch drift or silent failures. The prompts got better. The system didn't become trustworthy. The gap between those two things is where I think the real skill shortage lives, and it's a gap that prompt engineering courses are almost entirely silent on.
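Even basic monitoring closes part of that gap. As a sketch, and with every name here invented for illustration, a rolling failure-rate check over human review outcomes is enough to catch the drift and silent failures described above:

```python
# Sketch of a lightweight production check: track a rolling failure rate
# over reviewed outputs and flag when it drifts past a threshold.
# Class and parameter names are illustrative, not from any real library.
from collections import deque

class OutputMonitor:
    def __init__(self, window: int = 100, alert_threshold: float = 0.1):
        # Keep only the most recent `window` review outcomes.
        self.recent = deque(maxlen=window)
        self.alert_threshold = alert_threshold

    def record(self, passed_review: bool) -> None:
        self.recent.append(passed_review)

    def failure_rate(self) -> float:
        if not self.recent:
            return 0.0
        return 1 - sum(self.recent) / len(self.recent)

    def should_alert(self) -> bool:
        # Require a reasonably full window before alerting, to avoid
        # firing on the first few bad samples.
        return len(self.recent) >= 20 and self.failure_rate() > self.alert_threshold
```

Twenty lines of plumbing like this answers the question most prompt iteration never does: is the system getting worse, and would anyone notice?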
Anthropic's context engineering framing is useful because it names this problem directly [5]. The hard part is increasingly how you structure the information, memory, tools, constraints, and instructions around the model so it can succeed in a real environment, and that's architecture work rather than copywriting. NIST's recent work on deployed AI systems found broad agreement that post-deployment monitoring is necessary while acknowledging that methods and terminology remain immature [9]. Everyone building serious AI systems knows monitoring matters.
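The architecture-not-copywriting distinction can be made concrete. In a context-engineered system the model's input is assembled from structured parts rather than hand-written; this sketch uses invented names and a crude character budget purely to show the shape:

```python
# Sketch of context assembly: the model's input is built from structured
# parts (rules, retrieved documents, tool results, the user request)
# under an explicit budget. All names and limits here are hypothetical.

def build_context(system_rules: str, documents: list[str],
                  tool_results: str, user_request: str,
                  max_doc_chars: int = 2000) -> str:
    # Truncate retrieved material so the budget goes to what matters most.
    doc_block = "\n---\n".join(documents)[:max_doc_chars]
    parts = [
        f"Rules:\n{system_rules}",
        f"Reference material:\n{doc_block}",
        f"Tool results:\n{tool_results}",
        f"Request:\n{user_request}",
    ]
    return "\n\n".join(parts)
```

The wording of any one part matters less than the decisions this function embodies: what gets retrieved, what gets truncated, and in what order the model sees it. Those are system design choices, and they survive model upgrades in a way that phrasing tricks don't.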
What the hiring market is actually paying for
The labour story is covered in more depth in Part 1 of this series, but the skill signal here is worth pulling out separately.
LinkedIn's 2026 Jobs on the Rise data lists AI Engineer as the fastest-growing role [10]. The roles gaining ground treat AI as a component within a larger system, emphasising integration, evaluation, and orchestration rather than prompt crafting. Meanwhile, the Indeed data tells the other side of the same story, with the prompt engineer category dissolving as the skill gets absorbed into broader roles [11].
The market is already pricing in the shift from interface skill to systems skill. The question is whether the training market catches up before people invest significant time and money learning the layer with the shortest half-life.
The gap that's wide open right now
If you're spending serious time trying to become good at AI, ask yourself, "Which layer am I investing in?"
Getting better at interface techniques tied to the current model generation is useful today and will depreciate with every model improvement. Getting better at evaluation, context design, workflow architecture, monitoring, model selection, and domain-specific judgment is a fundamentally different kind of investment because those skills compound rather than erode as the tools improve.
Right now, the market is flooded with courses teaching prompt technique and almost silent on how to evaluate output, design reliable workflows, or build systems that fail gracefully. The skill with the shortest shelf life is attracting the most investment, while the skills that compound are left to on-the-job discovery. That connects directly to the apprenticeship problem from Part 1: the people most likely to invest in prompt engineering courses are often the same early-career workers who most need the deeper, systems-level skills that nobody is teaching them yet.
That gap won't last forever, but it's wide open right now, and I think the people who close it first will be the ones building the next layer of AI infrastructure rather than optimising for the current one.
References
1. OpenAI — "OpenAI for Developers in 2025" https://developers.openai.com/blog/openai-for-developers-2025/
2. OpenAI — "Prompt engineering" guide https://developers.openai.com/api/docs/guides/prompt-engineering/
3. OpenAI — "Working with evals" https://developers.openai.com/api/docs/guides/evals/
4. OpenAI — "Evaluation best practices" https://developers.openai.com/api/docs/guides/evaluation-best-practices/
5. Anthropic — "Effective context engineering for AI agents" https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
6. Anthropic — "Anthropic Economic Index: September 2025 Report" https://www.anthropic.com/research/anthropic-economic-index-september-2025-report
7. Indeed Hiring Lab — "AI at Work Report 2025" https://www.hiringlab.org/2025/09/23/ai-at-work-report-2025-how-genai-is-rewiring-the-dna-of-jobs/
8. OECD — "Generative AI and the SME Workforce" https://www.oecd.org/en/publications/generative-ai-and-the-sme-workforce_2d08b99d-en.html
9. NIST — "Challenges to the monitoring of deployed AI systems" (NIST.AI.800-4) https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.800-4.pdf
10. LinkedIn — "Jobs on the Rise 2026" https://www.linkedin.com/pulse/linkedin-jobs-rise-2026-25-fastest-growing-roles-us-linkedin-news-dlb1c
11. Fortune — Indeed data on prompt engineer search collapse https://fortune.com/2025/05/07/prompt-engineering-200k-six-figure-role-now-obsolete-thanks-to-ai/