The Signals Were Real: InDecision Framework Hits 93% Win Rate in Live Markets
The bot was cycling every 2 minutes — its own watchdog killing it every 129 seconds. The signals inside were perfect: 86–100/100, 92% accuracy, calling direction while the market priced uncertainty at 50/50. One coding session fixed the infrastructure. The rest is on-chain.

The bot was killing itself every two minutes.
Not metaphorically. Literally: it would start, begin evaluating markets, and then its own self-healing watchdog would fire a SIGTERM 129 seconds into the loop. The wrapper catches the exit, waits 10 seconds, restarts. Repeat, indefinitely. Every cycle the InDecision Framework was scoring XRP 86/100 BULLISH, ETH 89/100 STRONG UP, BTC calling clear direction while the market priced the same move at 50/50 — and the bot would die before placing a single order.
The infrastructure was eating itself. The signals were never wrong.
One coding session. Three precision fixes. By end of day: 55 trades, 51 wins, +$378.64, 92% session win rate, 90.2% rolling over 7 days.
The signals were real. They just needed the infrastructure to be as precise as they were.
What InDecision Actually Is
The name is deliberately counterintuitive. InDecision doesn't mean uncertain — it means precisely calibrated about uncertainty. It's a 6-factor scoring engine built to answer one question: does this market have a conviction gap right now?
Most markets are fairly priced most of the time. The edge isn't being smarter than the market. The edge is identifying moments when the market's own pricing reflects less conviction than the underlying data warrants — and positioning on the right side before the price catches up.
The framework runs its scoring modules against real-time data, assigning weight to both the bull and bear cases simultaneously.
Each factor contributes to either the bull accumulator, the bear accumulator, or both — weighted by direction strength. The spread between accumulators becomes the conviction score. High spread means clear directional edge. Low spread means the data is ambiguous. The framework's output is binary in label but continuous in precision: it tells you how much the competing forces disagree, not just which side is louder.
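The accumulator-and-spread idea can be sketched in a few lines. This is a minimal illustration, not the framework's actual code: the function name and the factor representation are mine; only the mechanics (two accumulators, spread as conviction, binary label over a continuous score) come from the description above.

```python
# Illustrative sketch of the bull/bear accumulator spread.
# Each factor is a (bull_weight, bear_weight) pair; both can be nonzero.
def conviction(factors):
    bull = sum(b for b, _ in factors)
    bear = sum(r for _, r in factors)
    total = (bull + bear) or 1.0
    spread = abs(bull - bear) / total   # 0 = ambiguous, 1 = one-sided
    if bull > bear:
        label = "BULLISH"
    elif bear > bull:
        label = "BEARISH"
    else:
        label = "NEUTRAL"
    # Label is binary; the spread carries the continuous precision.
    return label, round(spread * 100, 1)

# One-sided factor reads produce a high spread; a split read collapses it.
strong = conviction([(0.9, 0.1), (0.8, 0.2), (0.7, 0.1)])
split = conviction([(0.5, 0.5), (0.4, 0.6), (0.6, 0.4)])
```

The point of the shape: two frameworks can agree on the label while disagreeing wildly on the spread, and only the spread tells you whether there is an edge worth trading.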
This is the difference between InDecision and most signal frameworks. It doesn't just output a direction. It outputs a confidence architecture.
The Dual-Feed Architecture
InDecision doesn't run a single engine. It runs two independent analysis pipelines calibrated for different market timeframes.
IntraBiasFeed fires every 5 minutes, running the IntraCaseAggregator. Optimized for sub-hourly signals: RSI divergence, MACD cross velocity, Bollinger Band compression. When the bot evaluates a 5m or 15m market window, this feed provides the directional context — real-time, fresh, calibrated for short-window binary markets.
DailyBiasFeed runs every 4 hours via the DualCaseAggregator. Pattern-focused, volume-weighted, timeframe-aligned against daily structure. It's the macro lens — the trend that 15m noise either confirms or contradicts. When the daily feed and the intraday feed agree on direction, the InDecision score fed to PolyEdge can jump 20+ points. When they conflict, the system stays NEUTRAL by design.
The dual-feed architecture was built for one purpose: never trade from stale context. A 4-hour-old signal injected into a 5-minute window is noise. The intraday feed eliminates that category of error entirely.
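The freshness-and-agreement rule reduces to a small decision function. A sketch under stated assumptions: the intervals (5 minutes intraday, 4 hours daily) come from the article, but `BiasReading`, `combined_bias`, and the exact stale-daily fallback are illustrative names and choices, not the bot's actual API.

```python
import time
from dataclasses import dataclass

INTRADAY_MAX_AGE = 5 * 60        # IntraBiasFeed cadence: 5 minutes
DAILY_MAX_AGE = 4 * 60 * 60      # DailyBiasFeed cadence: 4 hours

@dataclass
class BiasReading:
    direction: str   # "BULLISH" / "BEARISH" / "NEUTRAL"
    ts: float        # unix time the feed produced this reading

def combined_bias(intraday, daily, now=None):
    now = time.time() if now is None else now
    # Never trade from stale context: an expired intraday read is discarded.
    if now - intraday.ts > INTRADAY_MAX_AGE:
        return "NEUTRAL"
    if now - daily.ts > DAILY_MAX_AGE:
        return intraday.direction    # assumed fallback: fresh intraday only
    if intraday.direction == daily.direction:
        return intraday.direction    # feeds agree: full directional signal
    return "NEUTRAL"                 # feeds conflict: stand down by design
```

The key property is that disagreement and staleness both degrade to NEUTRAL rather than to a guess — the same self-correction described above.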
How PolyEdge Uses InDecision
InDecision isn't a tiebreaker in the PolyEdge system. It's the backbone — the factor with the highest possible score impact (-10 to +25 points depending on alignment and conviction) in a 0-to-100 scoring framework.
When InDecision is BULLISH with moderate-to-strong conviction, it injects +15 to +25 points into the PolyEdge score. When the market is NEUTRAL (spread under 10%), the injection is zero. When InDecision is BEARISH and PolyEdge wants to go UP, the score takes a -10 point hit. The system is designed to disagree with itself when the data conflicts. That self-correction is the entire thesis.
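The injection logic above maps to a small function. The point ranges (-10, 0, +15 to +25) and the under-10% NEUTRAL band come from the article; the linear scaling within the +15..+25 band and all names are my assumptions for illustration.

```python
# Hedged sketch of the InDecision -> PolyEdge score injection.
def indecision_injection(label, spread_pct, polyedge_direction):
    """Points to add to the 0-100 PolyEdge score. Assumed mapping."""
    if spread_pct < 10:                    # NEUTRAL band: no injection
        return 0
    aligned = (label == "BULLISH") == (polyedge_direction == "UP")
    if not aligned:
        return -10                         # conflict penalty
    # Scale +15..+25 with conviction strength (linear mapping assumed).
    strength = min(spread_pct, 100) / 100
    return round(15 + 10 * strength)
```

Note the asymmetry by design: alignment can add up to 25 points, but conflict only subtracts 10 — the penalty exists to veto marginal setups, not to flip the direction.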
Today's Session: Three Fixes That Changed Everything
By this point in the project's life, the analytical engine was mature. The issue today was infrastructure — the kind of problem that only surfaces when a system scales past its original assumptions.
Fix 1: Break-Even Categorization
The first fix was subtle but changed the integrity of every metric in the system.
In Polymarket binary markets, you can be directionally correct and still lose money after fees. Buy a UP token at 87¢, it resolves UP, your gross win is 13¢, fees are 14¢ — you lost money on a correct prediction. The database records this as outcome='win' because the direction was right. But every stats calculation was counting it in the win rate numerator.
The fix: break-even trades are their own category. A break-even is outcome='win' AND pnl_net ≤ 0. Win rate now calculates as profitable_wins / (profitable_wins + true_losses). Break-evens excluded from both numerator and denominator.
Why this matters for strategy: A growing break-even rate is a signal that the bot is entering positions at market extremes — buying UP tokens already priced at 85¢+ where the fee floor eliminates all margin. The distinction between "correct direction, wrong entry price" and "wrong direction" is analytically important. They require different fixes.
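The corrected metric is easy to state in code. A sketch assuming a minimal trade record: the `outcome` and `pnl_net` field names follow the article's schema, everything else (function name, dict records) is illustrative.

```python
# Break-evens are their own category: outcome='win' AND pnl_net <= 0.
def session_stats(trades):
    """trades: iterable of dicts with 'outcome' ('win'/'loss') and 'pnl_net'."""
    profitable_wins = sum(1 for t in trades
                          if t["outcome"] == "win" and t["pnl_net"] > 0)
    break_evens = sum(1 for t in trades
                      if t["outcome"] == "win" and t["pnl_net"] <= 0)
    true_losses = sum(1 for t in trades if t["outcome"] == "loss")
    denom = profitable_wins + true_losses
    # Break-evens are excluded from BOTH numerator and denominator.
    win_rate = profitable_wins / denom if denom else 0.0
    return {"win_rate": win_rate, "break_evens": break_evens,
            "true_losses": true_losses}

# The 87-cent example: directionally right, net negative after fees.
trades = [
    {"outcome": "win", "pnl_net": 0.13 - 0.14},  # correct call, fee loss
    {"outcome": "win", "pnl_net": 0.40},
    {"outcome": "loss", "pnl_net": -0.87},
]
```

Run on that sample, the old metric would report 2/3 wins; the corrected one reports 1 profitable win against 1 true loss, with the fee-eaten trade tracked separately as a break-even.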
The previous win rate was inflated. The real win rate is 92%. That number is now trustworthy.
Fix 2: Kelly Bet Sizing
The Kelly criterion — the mathematically optimal bet sizing formula derived from information theory — was supposed to be live. It wasn't. A dynamic post-processing block in the execution path was running after Kelly's output and overwriting it with a flat conviction multiplier.
With Kelly active and the correct bankroll wired to live wallet balance, position sizes now scale proportionally to edge strength. Strong signals bet more. Moderate signals bet less. The system allocates capital the way every quantitative trader knows it should be allocated — not uniformly.
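For a binary-payout market, the classic Kelly formula is f* = (bp - q) / b, where p is the model's win probability, q = 1 - p, and b is the net odds. A sketch under stated assumptions: the bot's actual sizing path isn't shown here, so the fractional cap and all names are illustrative; only the use of Kelly against live bankroll comes from the article.

```python
# Kelly fraction for a binary token that pays $1 on a win.
def kelly_fraction(p_win, price):
    """p_win: model win probability; price: token cost in dollars (0-1).
    Net odds b = (1 - price) / price for a $1-payout binary token."""
    b = (1 - price) / price
    q = 1.0 - p_win
    f = (b * p_win - q) / b       # classic Kelly: f* = (bp - q) / b
    return max(f, 0.0)            # negative edge -> size zero, never short

def position_size(bankroll, p_win, price, cap=0.10):
    # Fractional-Kelly cap (assumed) keeps variance sane; full Kelly is
    # aggressive when p_win is itself an estimate.
    return bankroll * min(kelly_fraction(p_win, price), cap)
```

This is what "strong signals bet more" means mechanically: the fraction grows with the gap between the model's probability and the market's price, and goes to zero the moment the edge does.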
Fix 3: The Watchdog Architecture
This is the story the logs tell best.
At 21:34:41, the Coinbase price feed went down. The bot correctly switched to Binance WebSocket fallback — exactly as designed. What wasn't designed for was what that failover did to the evaluation loop timing.
The EventLoopWatchdog is a daemon thread that monitors the asyncio event loop. If beat() isn't called within 120 seconds, it assumes the loop is hung and sends SIGTERM. The wrapper catches the exit, waits 10 seconds, restarts. Clean self-healing architecture — the design was correct. The implementation had one flaw.
beat() was called once per outer loop iteration. Then the inner loop evaluated 16 markets sequentially. Each evaluation calls the TA engine (10-second Binance timeout) and the pattern engine (also 10 seconds via shared fetch_candles). During the Coinbase→Binance failover, REST API responses were slower. 16 markets × up to 20 seconds each = up to 320 seconds with no heartbeat.
The watchdog fired at 129 seconds. Correct by spec. Wrong by intent.
The fix: move self._watchdog.beat() inside the for market in active_markets: loop, right before await self._evaluate_market(market). The timeout didn't change. The semantics did. 120 seconds now means a single market evaluation should never take 120 seconds — which is the correct invariant. The previous semantics were 16 market evaluations should collectively never take 120 seconds — which is mathematically impossible with 16 markets and 10-second API timeouts.
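The before/after placement is a one-line move, but the invariant change is worth seeing in code. A sketch: `EventLoopWatchdog` and `beat()` mirror the names in the article, while the loop shape, timeout plumbing, and `is_hung()` are assumed for illustration (the real watchdog sends SIGTERM from a daemon thread).

```python
import asyncio
import time

class EventLoopWatchdog:
    """Simplified stand-in: records heartbeats, reports a hang."""
    TIMEOUT = 120.0

    def __init__(self):
        self._last_beat = time.monotonic()

    def beat(self):
        self._last_beat = time.monotonic()

    def is_hung(self):
        return time.monotonic() - self._last_beat > self.TIMEOUT

async def run_cycle(watchdog, active_markets, evaluate_market):
    # BEFORE (buggy): one beat() per outer iteration, then 16 sequential
    # evaluations at up to ~20 s each could exceed 120 s with no heartbeat.
    # AFTER (fixed): beat() before *every* evaluation, so 120 s now means
    # "a single evaluation never takes 120 s" -- a satisfiable invariant.
    for market in active_markets:
        watchdog.beat()
        await evaluate_market(market)
```

Same timeout, same watchdog, different semantics: the heartbeat granularity now matches the granularity of the work being monitored.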
No watchdog fires since 22:28. The bot has been running clean for hours.
The Numbers
These aren't backtests. These are live trades, real USDC, on-chain settlements on Polygon.
The by-conviction breakdown is the proof the framework works as designed:
- Strong conviction (score ≥ 90): 42 trades — 39 wins — 92.8% win rate
- Moderate conviction (score 80–89): 11 trades — 10 wins — 90.9% win rate
- Below threshold: skipped — the framework doesn't manufacture edge
The filtering is the product. The system doesn't trade uncertainty. It waits for a measurable conviction gap, takes it, closes it.
The real tell: on the 7-day view, the InDecision intraday feed is calling BULLISH or NEUTRAL_BULLISH across SOL, XRP, AVAX, DOGE, and LINK simultaneously. Strong spread across multiple correlated assets in the same direction is a regime signal, not noise. The framework reads that as a structural edge window — and the results confirm it.
Why This Architecture Works
There's a thesis embedded in this system that most retail traders never reach because they're focused on the wrong layer.
InDecision wasn't built to predict price. It was built to measure the gap between what the data suggests and what the market is pricing. These are different problems. Price prediction is hard — you're competing against every participant, human and machine, simultaneously. Conviction gap measurement is harder to commoditize because it requires a multi-factor, real-time evaluation architecture that most participants don't have and won't build.
The DualCaseAggregator — the engine powering the daily feed — was itself a critical fix from a week ago. Before it, when BTC was showing 22.7% conviction, the bot had a conviction drought: no signals strong enough to trade even when price structure was clear. The fix implemented a dual competing case model that forces the engine to quantify both the bull and bear cases simultaneously, then compute their divergence. BTC went from 22.7% → 50.5% conviction immediately. Not because the market changed. Because the measurement got more precise.
That's the through-line of today's session. Break-even categorization made the win rate metric more precise. Kelly sizing made the capital allocation more precise. The watchdog fix made the self-healing more precise. None of these touched the InDecision scoring engine itself — because the analytical engine was already right.
The infrastructure needed to be as precise as the signals it was carrying.
What This Project Is
This is personal use infrastructure. I'm not selling signals. I'm not running a fund. There's no reason to monetize something that prints while I'm working, sleeping, and building the other things I care about.
What's interesting about this project isn't the P&L. It's that the analytical frameworks driving it — multi-factor conviction scoring, dual competing models, self-healing daemon architecture, precision break-even categorization — are all directly applicable to how I think about engineering systems, team dynamics, and competitive intelligence.
The InDecision Framework started as a mental model for reading market structure. It became a codified scoring engine. It's becoming something else.
The signals were real from the beginning. They just needed the infrastructure to match their precision.