Stop-and-Replan: The Rule That Prevents Catastrophic AI Failure
The pattern that kills AI projects isn't bad models or bad prompts. It's forward-pushing past the moment you should have stopped. Here's the rule that fixes it.
The most expensive bug in AI agent work is not a code bug. It is a judgment bug.
It sounds like this: "That didn't work. Let me try a variation." Then another variation. Then another. Three hours later, you have 12 failed attempts, a codebase full of half-reverted changes, and a problem that has gotten harder to solve because you've been tunneling instead of thinking.
This is the pattern that kills projects. Not bad models. Not bad prompts. Forward-pushing when the signal is stop.
An agent that keeps pushing a failed approach isn't persistent. It's broken.
Persistence is a virtue when your direction is correct. It is a liability when your direction is wrong.
The Sunk Cost Trap in AI Agent Work
Human psychology is wired for completion. Starting something activates a mental tab that stays open until the task is closed. The longer you work on something, the more you resist abandoning it — because abandoning it means the previous effort was wasted.
This is sunk cost bias. It operates below conscious reasoning, which makes it dangerous.
In AI agent work, it compounds, because agents are fast: they can execute four variations in the time it takes a human to brew coffee. Speed creates the illusion of progress. Each attempt feels like momentum. But if every attempt is executing against the same wrong assumption, speed is not helping you. It is burying you faster.
The tell: you are cycling through variations of the same fix without learning anything new from each attempt. That is not iteration. That is a tunnel.
Strength of character does not consist solely in having powerful feelings, but in maintaining one's balance in spite of them.
— Carl von Clausewitz · On War
Two Incidents That Made This a Hard Rule
Every rule in this system was written in blood from a real incident. This one has two.
The February 2026 Tunnel Incident. An infrastructure configuration needed a tunnel between services. Three configurations were attempted. Each failed. Instead of stopping to question the tunnel architecture, the next attempt was tried immediately. A fourth, then a fifth. The assumption that a tunnel was the right solution was never challenged. The fix, when it finally came, did not involve a tunnel at all. The architecture was wrong. Every attempt was wasted work against a false premise.
The CapCut Incident. A workflow needed to automate a step inside a specific platform. When the first approach was blocked, workarounds were attempted. Then workarounds to the workarounds. The platform had a hard restriction that no amount of cleverness would circumvent. The correct decision — pivot to a different platform entirely — was obvious in retrospect. It was not obvious in the middle of attempt four.
Both incidents share the same failure mode: failure to distinguish "I am making progress" from "I am in a tunnel."
Progress vs. Tunnel: The Diagnostic
This is the meta-skill that the rule depends on. You have to be able to diagnose your own situation clearly, in real time, under the pressure of an incomplete task.
Two questions:
- Is each attempt teaching you something new?
- Is the approach converging on a solution, or cycling back to the same failure?
Progress looks like: attempt reveals a new constraint → you update your model of the problem → next attempt is different in a meaningful way → you are getting closer.
A tunnel looks like: attempt fails → you adjust one variable → next attempt fails the same way → the underlying assumption has not changed → you are executing against a false model.
If two consecutive attempts fail without teaching you something genuinely new about the problem, you are in a tunnel. Stop.
The trap is not ignorance. The trap is the feeling of almost-there.
Tunnels feel like you are one small fix away from success. That feeling is exactly what keeps you in them. The almost-there feeling and the tunnel feeling are often the same feeling.
The Pivot Protocol
Stopping is not the end of the task. It is the beginning of a better approach. But stopping without a structured pivot protocol is just confusion.
The protocol has four steps:
1. Name what failed. Specific, factual. "I tried to configure the reverse proxy with nginx upstream to route traffic to port 8001."
2. Name why it failed. Root cause, not symptom. "It failed because the service was binding to 127.0.0.1 instead of 0.0.0.0, which is an Alpine Docker constraint, not an nginx configuration issue."
3. Propose the new direction. Concrete, different in a meaningful way. "I'm pivoting to binding the service to 0.0.0.0 at the application level and simplifying the proxy configuration."
4. Get sign-off before executing. This step is not optional. After two failed attempts, a third attempt executed in isolation is not a pivot — it is a third attempt. State the pivot. Get confirmation. Then execute.
This protocol forces you out of execution mode and into reasoning mode. The act of writing down what failed and why often reveals the new direction before you even get to step 3.
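The four steps can be captured as a structured record that refuses to execute until sign-off exists. This is a hedged sketch; `PivotProposal` and its fields are hypothetical names for illustration, not an API from any existing tool:

```python
from dataclasses import dataclass

@dataclass
class PivotProposal:
    """The four-step pivot protocol as a checklist (hypothetical structure)."""
    what_failed: str       # step 1: specific and factual
    why_it_failed: str     # step 2: root cause, not symptom
    new_direction: str     # step 3: meaningfully different approach
    approved: bool = False  # step 4: explicit sign-off before executing

    def ready_to_execute(self) -> bool:
        # All three statements must be written down AND confirmation received.
        return all([self.what_failed, self.why_it_failed,
                    self.new_direction, self.approved])

pivot = PivotProposal(
    what_failed="nginx upstream to port 8001 returned connection refused",
    why_it_failed="service binds to 127.0.0.1 inside the container, not 0.0.0.0",
    new_direction="bind the app to 0.0.0.0 and simplify the proxy config",
)
print(pivot.ready_to_execute())  # False: no sign-off yet
pivot.approved = True
print(pivot.ready_to_execute())  # True: now, and only now, execute
```

The design choice worth noting: sign-off is a field that defaults to False, so forgetting step 4 fails safe rather than silently allowing a third attempt.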
Why This Rule Has to Be Explicit
You would think experienced engineers naturally stop and reassess when blocked. They do not. The evidence is in every post-mortem ever written.
The pressure to ship, the sunk cost of previous attempts, the proximity bias toward the current approach — these combine to make continued pushing the path of least psychological resistance. The rule has to be explicit because the default behavior without a rule is to keep going.
This is why it lives in CLAUDE.md — the project constitution — rather than in a comment somewhere. It is a governing constraint, not a suggestion. When an agent violates it, the error is not the failed attempt. The error is the attempt after the second failure. That is the moment the rule was broken.
The stop-and-replan protocol is not a failure acknowledgment. It is a superiority signal.
The builder who can accurately diagnose "I am in a tunnel" and pivot cleanly will consistently outperform the builder who persists through 12 attempts to reach the same destination.
Strategic disengagement is a skill, not a retreat.
Lesson 13 Drill
Review your last three AI projects that did not go as planned. For each one:
- Was there a moment — a specific moment — where you should have stopped and reassessed?
- What was the assumption that kept you pushing?
- What was the signal you ignored?
Write down the answers. You are building a personal pattern library. The goal is not to feel bad about the past. The goal is to calibrate the instrument so you catch the tunnel signal earlier next time.
Bottom Line
Two attempts is generous. After two, you are not gathering data. You are repeating.
Stop. Name what failed. Name why. Propose something genuinely different. Get sign-off. Execute.
That sequence costs you 10 minutes. The alternative costs you hours, a messy codebase, and the compounding frustration of a problem that kept getting harder as you dug deeper into the wrong direction.
Strategic stopping is not giving up. It is how professionals work.