From Discord Bot to God Mode: The Knox Story
This isn't a story about building AI tools. It's a story about what happens when the tools start building themselves — and you realize your job has fundamentally changed.
I want to be precise about when it started, because the beginning matters.
It wasn't when I connected the first API. It wasn't when I wrote the first cron job. It was the first time I opened Asana and found a task I didn't create — detailed, prioritized, assigned to Claude Code, with a description that correctly identified a systemic failure I hadn't noticed yet.
Knox had filed its own bug report. And then assigned itself the fix.
That's when I understood what I'd actually built.
The Honest Starting Point
I didn't set out to build an autonomous AI system. I set out to solve a specific, annoying problem: I kept missing messages while deep in a coding session.
The first version was embarrassingly simple. OpenClaw running on a Mac Mini, watching a Discord server, routing messages to the right place when I wasn't looking. That's it. A smarter notification system with some cron jobs attached.
I used OpenClaw because it was the fastest path to a working comms layer. What I didn't realize at the time was that I was establishing the nervous system for something much larger.
The comms layer was running, stable, and immediately useful. So I started adding to it.
The 80% Rule
Here's the thing most people don't know about how Knox was built: I didn't write most of it.
Claude Code wrote most of it.
I don't mean I used AI to help with boilerplate. I mean I described what I wanted, Claude Code scoped the work, created the feature branches, wrote the implementation, opened the PRs, and waited for my approval. Then it merged, deployed, and moved to the next task.
My contribution was direction and review. The implementation was almost entirely automated.
This sounds more dramatic than it felt at the time. It happened gradually, task by task. But when I stepped back and looked at the commit history one afternoon — hundreds of commits across a dozen repos, the majority of them from Claude Code sessions — I realized the ratio had inverted. I was no longer primarily a developer. I was primarily a product director.
The shift wasn't sudden. But it was complete.
What Made the Difference
Most people who try to automate development with AI hit a ceiling around session five or ten. The agent is good at obvious tasks. It struggles with project-specific context. It repeats mistakes it made last week. The output quality plateaus.
I hit that ceiling too. The way I broke through it wasn't by finding a better model. It was by building a memory system.
The lessons.md pattern started as a frustration fix. Claude Code kept making the same two or three mistakes on a project, and I kept correcting them. I added a file to the project root, put the corrections in it with a rigid format — mistake, root cause, rule — and added an instruction to CLAUDE.md: read this file before doing anything on this project.
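For illustration, one entry in that rigid format might look like the following. The wording and date are invented; the fields mirror the mistake / root cause / rule triple, plus the date and category lines the fuller per-project format uses:

```markdown
## 2026-02-01

- Category: path-handling
- Mistake: wrote the deploy config to an absolute path on my machine.
- Root cause: no stated convention for where config files live.
- Rule: resolve all config paths relative to the repo root, never absolute.
```

The value of the rigid format is that the next session can consume it mechanically: each line is a predictable key, so the file doubles as documentation for humans and structured context for the agent.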
The first session after I added it was noticeably cleaner. The second was cleaner still.
By session ten, the project had accumulated a dense lessons file, and sessions were handling most edge cases correctly on first pass. I started adding lessons files to every project. The improvement was consistent across all of them.
That was the insight: the model didn't need to be smarter. It needed to inherit the right context. Every lesson encoded correctly made every future session on that project more capable. The compounding was real, and it was permanent.
The God Squad Takes Shape
Once the memory system was working, the next problem became obvious: each tool was isolated.
OpenClaw was doing comms. Claude Code was building. But there was no connection between them beyond manual task creation. The system wasn't a system — it was a collection of capable tools being coordinated by a human.
That's when, in quick succession, I made the architecture decisions that define Knox.
Asana as shared memory. Not my Asana — Knox's Asana. The rule: every task that touches any part of the system has to live in Asana before it gets picked up. This meant the system could see its own workload. It meant tasks could be created by OpenClaw, by Claude Code during a session, by Tesseract during analysis — not just by me.
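What "any component can file a task" means mechanically is just a POST to Asana's task-creation endpoint. A minimal sketch, assuming a personal access token and a project GID supplied by the deployment; error handling is omitted, and nothing here is Knox's actual code:

```python
import json
import urllib.request

# Asana's documented REST endpoint for creating a task.
ASANA_TASKS_URL = "https://app.asana.com/api/1.0/tasks"

def build_task_payload(name, notes, project_gid):
    # Asana wraps every request body in a top-level "data" object.
    return {"data": {"name": name, "notes": notes, "projects": [project_gid]}}

def create_task(token, name, notes, project_gid):
    """POST a new task; `token` is an Asana personal access token."""
    req = urllib.request.Request(
        ASANA_TASKS_URL,
        data=json.dumps(build_task_payload(name, notes, project_gid)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the payload is this small, any component with a token can file work: OpenClaw from a message handler, Claude Code mid-session, Tesseract from an analysis run.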
Tesseract as the reasoning layer. The crypto trading component needed something I couldn't give it: the ability to learn from its own mistakes without overcorrecting. The InDecision bias model would take a loss, and I'd have the human impulse to immediately change something. That impulse is almost always wrong. I needed a layer that could distinguish signal from noise — a loss that was market noise versus a loss that was a genuine model failure. Tesseract became that layer.
InDecision as the trading mind. The six-factor crypto bias model was already running when I connected it properly to Tesseract. The real change was making the feedback loop structural. Every trade outcome goes to Tesseract for analysis. Every Tesseract analysis feeds the Sunday weight update. The model learns continuously, not just when I notice something is off.
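As a heavily simplified sketch of that feedback loop: the six real factors and the actual update rule aren't described here, so the factor names, the sign-of-PnL nudge, and the learning rate below are all invented placeholders. Only the shape comes from the text: trade outcomes in, adjusted weights out, on a weekly cadence.

```python
from dataclasses import dataclass

@dataclass
class TradeOutcome:
    factor_scores: dict  # factor name -> signal in [-1, 1] at entry
    pnl: float           # realized profit/loss for the trade

def bias_score(factor_scores, weights):
    """Combine per-factor signals into a single directional bias."""
    return sum(weights[f] * s for f, s in factor_scores.items())

def weekly_weight_update(weights, outcomes, lr=0.05):
    """Placeholder Sunday step: nudge each weight toward factors that
    agreed with profitable trades, away from those that agreed with losses."""
    new = dict(weights)
    for o in outcomes:
        for f, s in o.factor_scores.items():
            new[f] += lr * s * (1 if o.pnl > 0 else -1)
    # Renormalize so the weights keep a fixed total magnitude.
    total = sum(abs(w) for w in new.values()) or 1.0
    return {f: w / total for f, w in new.items()}
```

The point of routing this through Tesseract rather than running it raw is the noise filter: outcomes judged to be market noise would simply be excluded from the update batch.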
The build loop completing itself. OpenClaw identifies a problem → creates an Asana task → Claude Code picks it up → implements on a feature branch → opens PR → CI runs → merges → lessons updated → CLAUDE.md updated next morning. No human in that loop except to review the PR.
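Stripped of the real tooling, that loop reads like a small state machine. The stage names below paraphrase the steps above, and the approval flag models the single human gate; in Knox the stages are executed by OpenClaw, Claude Code, and CI rather than one process:

```python
STAGES = [
    "identified",       # OpenClaw spots the problem
    "asana_task",       # task filed in Asana
    "feature_branch",   # Claude Code scopes and branches
    "pr_open",          # implementation pushed, PR opened
    "ci_green",         # CI passes
    "merged",           # requires explicit human approval
    "lessons_updated",  # lesson recorded for the next session
]

def advance(task):
    """Move a task one stage forward; merging is gated on approval."""
    i = STAGES.index(task["stage"])
    if i + 1 >= len(STAGES):
        return task  # loop complete
    nxt = STAGES[i + 1]
    if nxt == "merged" and not task.get("approved"):
        return task  # blocked on human review
    task["stage"] = nxt
    return task
```

The useful property of modeling it this way is that a task can never skip the review gate: no matter how many times the loop advances, it parks at CI-green until a human flips the approval flag.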
When these pieces connected, the system became qualitatively different from what I'd had before.
The Retrospective Engine
If I had to identify the one thing that separates Knox from a sophisticated collection of AI tools, it's the retrospective engine.
Here's the full loop as it actually runs:
Project level. Every Claude Code session that receives a correction writes a lesson before terminating. Format: date, category, mistake, root cause, rule. The file lives in version control. The next session opens with it in context. Every mistake gets encoded. No mistake repeats.
Global level. The morning cron at 0700 scans all lessons files across all projects. When the same class of mistake appears in three or more projects, it gets promoted to CLAUDE.md — the global ruleset that applies to every Claude Code session everywhere. Local insights become system-wide behavior.
Weekly level. The Sunday cron runs a retrospective on the full week. What shipped. What failed. What the Tesseract analyses found in trading. What patterns emerged. The output goes into dated memory files. These files feed the next week's context.
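Concretely, the two scheduled pieces could be ordinary crontab entries. The script paths and the Sunday hour below are placeholders; the text only fixes the 0700 daily scan:

```cron
# 07:00 daily: scan every project's lessons file, promote repeated
# mistake classes into the global CLAUDE.md ruleset
0 7 * * * /opt/knox/bin/lessons-scan

# Sunday: full-week retrospective, written to dated memory files
0 8 * * 0 /opt/knox/bin/weekly-retro
```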
The meta-level. Sometimes the retrospective output identifies a gap in the retrospective process itself. That gap becomes an Asana task. Claude Code implements the fix. The retrospective engine improves the retrospective engine.
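The three-or-more-projects promotion rule from the global level is simple enough to sketch. The `- Category:` line is an assumption about how lessons entries are laid out, not the real parser:

```python
import re
from collections import Counter
from pathlib import Path

def promotable_categories(lessons_texts, threshold=3):
    """Return mistake categories appearing in `threshold` or more lessons
    files, i.e. the ones eligible for promotion to CLAUDE.md."""
    seen = Counter()
    for text in lessons_texts:
        # Count each category at most once per project.
        for cat in set(re.findall(r"^- Category: (.+)$", text, re.M)):
            seen[cat] += 1
    return sorted(c for c, n in seen.items() if n >= threshold)

def scan_projects(projects_root):
    """Collect every project's lessons file under a shared root."""
    return [p.read_text() for p in Path(projects_root).glob("*/lessons.md")]
```

A morning job would call `promotable_categories(scan_projects(...))` and append the winners to the global ruleset.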
This is what I mean when I say the system is building itself. The feedback loops aren't just operating — they're improving the feedback loops.
What Activating God Mode Actually Means
I use the phrase half-jokingly, but I mean something specific by it.
Before Knox, I had the cognitive load of a developer. Context switching between projects. Holding implementation details in working memory. Getting interrupted by debugging sessions when I wanted to be thinking about direction.
After Knox, I have the cognitive load of an executive. My primary job is deciding what to build next. My secondary job is approving what Knox built. Everything else — the scoping, the implementation, the debugging, the deployment, the project tracking — Knox handles.
The time this freed up didn't go to leisure. It went to thinking at a higher level. More time on strategy. More time on the trading model. More time on what the system should become, because I wasn't consumed by what it currently needed to do.
God mode isn't about automation. It's about what becomes possible when the cognitive overhead of implementation disappears.
What I'd Do Differently
Three things.
Start the memory system on day one. I wasted months accumulating repeated mistakes that a lessons.md file would have prevented from day two. The compounding is powerful, but only if you start it early.
Trust the retrospective, not the reactive impulse. Early on, when something broke or a trade went wrong, I'd immediately want to change something. That reactive intervention usually made things worse. The retrospective process exists precisely to handle this correctly — to distinguish noise from signal, to make changes based on patterns rather than incidents. I had to learn to trust the process I'd built.
Define the component boundaries earlier. The biggest architectural debt in Knox came from tools that tried to do too much. OpenClaw doing lightweight reasoning tasks that should have gone to Tesseract. InDecision making structural decisions about its own factor weights that should have been Tesseract's job. Clean component boundaries make the system more reliable and make debugging easier when something goes wrong.
Where We Are Now
Knox runs 24/7. It maintains its own Asana backlog. It creates tasks I didn't think of. It catches errors before I see them. It trades on PolyMarket while I'm sleeping. It manages the build lifecycle for four public websites and a growing number of internal tools. It runs a retrospective on itself every Sunday.
And it's getting better.
The lessons files are dense. The CLAUDE.md is comprehensive. The InDecision model's factor weights have been updated by dozens of Tesseract analyses. The retrospective process has been refined by the retrospective process.
What started as a Discord notification layer is now an autonomous system that plans its own evolution, executes on those plans, learns from the outcomes, and updates its own operating manual accordingly.
The question I get asked is: are you scared?
The honest answer: no. Because every rule is in version control. Every task is visible in Asana. Every lesson is human-readable. Every PR waits for my review.
The transparency is the safety. I'm not watching a black box evolve. I'm co-authoring a system that shows me every step of its thinking.
Knox is still being written. So am I.