The Myth of the Jagged Frontier: Why AI Is No Longer Limited (and How It Changes Everything)

AI isn't jagged anymore. This is the thought that keeps me up at night.
For three years, the "Jagged Frontier" has been the organizing frame for AI strategy. The idea was simple: AI is incredible at some things and terrible at others. We assumed this was an inherent property of machine intelligence.
We were wrong.
Jaggedness was never a property of the AI. It was an artifact of how we were asking it to work.
The 2022 Paradigm Is Dead
When we asked ChatGPT to solve a problem in a single turn, we were, in effect, asking a brilliant analyst to crack a complex issue in 30 seconds with no notes, no colleagues, and no ability to retry.
Of course the results were jagged. A single error midway through would propagate until the end, and the model had no mechanism to stop and say, "Wait, this doesn't make sense."
Now we are entering the era of inference-time compute. Models like o1 or the latest GPT iterations don't just output text; they think. They spend tokens to self-correct, verify, and pivot before committing to an answer.
The frontier is smoothing out.
The Harness: The Secret of Agents
The biggest shift isn't happening inside the model; it's happening in the scaffolding we build around it. I call this the "Harness."
A harness is the infrastructure that allows an AI agent to operate over long horizons: task files, persistent memory, and the ability to use tools in a loop.
When you put an agent in the right harness, "jagged" problems disappear. Suddenly, the model isn't just writing code; it's building an entire web browser from scratch.
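To make this concrete, here is a minimal sketch of what a harness loop might look like. Everything in it is illustrative: llm_call is a placeholder for a model provider, and the tool set and memory format are my own assumptions, not any vendor's actual API.

```python
# Minimal harness sketch: a task, persistent memory, and a tool loop.
# llm_call is a hypothetical placeholder; wire it to a real model provider.

def llm_call(history: str) -> dict:
    """Return the model's next action as JSON, e.g.
    {"tool": "read_file", "args": {"path": "notes.md"}}
    or {"tool": "finish", "args": {"answer": "..."}}."""
    raise NotImplementedError("connect this to your model of choice")

TOOLS = {
    "read_file": lambda path: open(path).read(),
    "write_file": lambda path, text: f"wrote {open(path, 'w').write(text)} chars",
}

def run_agent(task: str, max_steps: int = 50) -> str:
    memory = [f"TASK: {task}"]                # persistent scratchpad across steps
    for _ in range(max_steps):
        action = llm_call("\n".join(memory))  # the model sees the full history
        if action["tool"] == "finish":
            return action["args"]["answer"]
        result = TOOLS[action["tool"]](**action["args"])  # run the chosen tool
        memory.append(f"{action['tool']} -> {result}")    # feed the result back
    return "step budget exhausted"
```

The key point is that the loop, not the model, is what makes long horizons possible: a wrong step becomes an observation the model can react to, instead of a silent error that propagates to the end.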
The Cursor Case: When Code Meets Research Math
The ultimate proof came from Cursor. They took a harness designed for coding and pointed it at a research-grade mathematics problem—one that had never been published, meaning it wasn't in the training data.
The agent ran for four days. No hints, no human nudges.
It didn't just solve the problem; it found a solution superior to the original human-written one from Stanford and MIT academics.
This is a watershed moment. It suggests that the architecture of agents—decomposition, parallelization, verification, and iteration—is generalizable. It works for anything that is verifiable.
Organizational Intelligence: Machines Learning to Manage
Look at the multi-agent structures Anthropic, Google, and OpenAI have independently built. They look exactly like human teams:
- Planner: Decomposes the problem into sub-tasks.
- Worker: Executes specific tasks in isolation.
- Judge: Verifies the output and decides whether to proceed or restart.
This isn't just AI. This is management.
We are replicating the organizational intelligence humans have used for centuries and embedding it into autonomous systems.
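As a rough sketch (the function names and retry policy here are my own assumptions, not any lab's published design), the control flow looks like this. Each role would be a separate model call with its own prompt; trivial stand-ins keep the example runnable:

```python
# Toy planner/worker/judge loop. In a real system each role is a separate
# model call; here they are stand-ins so the control flow runs end to end.

def plan(task: str) -> list[str]:
    """Planner: decompose the task into sub-tasks (stand-in: split on ';')."""
    return [s.strip() for s in task.split(";")]

def work(subtask: str) -> str:
    """Worker: execute one sub-task in isolation (stand-in: echo completion)."""
    return f"done: {subtask}"

def judge(subtask: str, output: str) -> bool:
    """Judge: verify the output and decide whether to proceed or retry."""
    return subtask in output  # stand-in check: the output covers the sub-task

def orchestrate(task: str, max_retries: int = 3) -> list[str]:
    results: list[str] = []
    for subtask in plan(task):
        for _ in range(max_retries):
            output = work(subtask)
            if judge(subtask, output):    # verified: keep it and move on
                results.append(output)
                break
        else:                             # retries exhausted: escalate
            raise RuntimeError(f"unverifiable sub-task: {subtask!r}")
    return results

print(orchestrate("draft the outline; write section one; check citations"))
```

Swap the stand-ins for model calls and you have the basic skeleton these systems share.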
What This Means for Your Work
If AI is becoming "smooth" in execution, your value shifts entirely.
I call this the "Sniff Check."
Your skill will no longer be doing the work, but knowing whether the work is correct, maintainable, and aligned. These are the "Meta-Skills."
A product manager won't write a PRD; they will sniff-check the PRD the agent generated. A marketer won't design a campaign; they will sniff-check the strategy the agents executed.
The Question You Must Ask
The question is no longer "Can AI do my job?"
The question is: "Can my work be decomposed into verifiable sub-problems?"
The answer is yes far more often than we are comfortable admitting.
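A practical test, with a hypothetical example: a sub-problem is delegable when you can write down a check for it. Here the criteria for a PRD section are invented for illustration; the point is the shape of the check, not its contents:

```python
# Hypothetical verifier for one sub-problem: is this PRD section "done"?
# The criteria are illustrative; real checks would come from your own standards.

def verify_prd_section(text: str) -> bool:
    lowered = text.lower()
    has_metric = any(k in lowered for k in ("kpi", "metric", "target"))
    mentions_user = "user" in lowered
    long_enough = len(text.split()) >= 50
    return has_metric and mentions_user and long_enough
```

If you can write that function, or describe it precisely enough for a judge model to apply, an agent can iterate against it.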
The winning organizations won't be those waiting for the "next model," but those building the harnesses that allow current intelligence to scale.
The world is no longer jagged. It's waiting for you to smooth a path through it.
What complex task in your business are you still afraid to delegate to an agent, simply because you believe the agent is "not smart enough" for it?