The shape of normal. Most games are decided by single digits. A handful are decided by much more. Here's the histogram.
Every distribution has tails. These are the tails.
Top 5 and bottom 5 each season by margin differential per game; the rest hidden by default. Wins green, losses red, read left to right. Older seasons collapsed.
Ranked by surprise. The algorithmic detector found these patterns by scanning every game in the dataset for streaks, blowouts, nail-biters, and statistical anomalies.
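A minimal sketch of what a scan like this could look like. The thresholds and the "surprise" scores here are illustrative assumptions, not the detector's actual cutoffs:

```python
from typing import Dict, List

# Illustrative thresholds -- the real detector's cutoffs aren't specified.
BLOWOUT_MARGIN = 25
NAIL_BITER_MARGIN = 2
STREAK_MIN = 8

def scan_games(games: List[Dict]) -> List[Dict]:
    """Flag blowouts, nail-biters, and win streaks in a game list.

    Each game is assumed to be a dict like
    {"team": str, "won": bool, "margin": int}, in chronological order.
    """
    findings = []
    streak = {}  # team -> current consecutive wins
    for g in games:
        margin = abs(g["margin"])
        if margin >= BLOWOUT_MARGIN:
            findings.append({"type": "blowout", "game": g, "surprise": margin})
        elif margin <= NAIL_BITER_MARGIN:
            findings.append({"type": "nail-biter", "game": g,
                             "surprise": NAIL_BITER_MARGIN - margin + 1})
        t = g["team"]
        streak[t] = streak.get(t, 0) + 1 if g["won"] else 0
        if streak[t] == STREAK_MIN:
            findings.append({"type": "streak", "game": g, "surprise": STREAK_MIN})
    # "Ranked by surprise": most anomalous first.
    findings.sort(key=lambda f: f["surprise"], reverse=True)
    return findings
```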
The v2 ratchet model, validated against a held-out test set, applied to upcoming games. Track record updates as games complete. Picks are reasoned, not vibes. Confidence is honest.
The v4-spread model predicts expected margin and compares it against the bookmaker's line. When the model disagrees with the spread by a meaningful amount, that's an edge signal. This is experimental — no backtesting, no proven edge. Track record accumulates live.
The ratchet loop: hypothesize, test, keep or revert. Each iteration is a new rule added on top of the previous. Every version is scored on a TEST set the model has never seen — games from 2024-25 and 2025-26. No future leakage, no overfitting. Bootstrap confidence intervals, not point estimates.
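A bootstrap interval on test-set accuracy can be sketched in a dozen lines; the resample count and seed below are arbitrary choices, not the pipeline's:

```python
import random

def bootstrap_ci(outcomes: list, n_resamples: int = 2000,
                 alpha: float = 0.05, seed: int = 0):
    """Percentile bootstrap CI for accuracy on a held-out test set.

    outcomes: list of 0/1 flags, one per test game (1 = model was right).
    Resamples with replacement and reads off the empirical percentiles.
    """
    rng = random.Random(seed)
    n = len(outcomes)
    stats = sorted(
        sum(rng.choices(outcomes, k=n)) / n
        for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

The point of the interval over a point estimate: 70 right out of 100 reads very differently from 700 out of 1000, and the bootstrap width captures that.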
When we say a team has a 70% chance to win, do they actually win 70% of the time? This chart grades the model's honesty. Dots on the diagonal mean the confidence is earned. Above the line we're sandbagging; below it we're bluffing.
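The chart's bins can be computed with standard reliability-diagram bookkeeping. A minimal sketch, assuming ten equal-width bins:

```python
def reliability_bins(preds, outcomes, n_bins=10):
    """Bucket predictions by confidence and compare predicted vs actual.

    preds: predicted win probabilities in [0, 1].
    outcomes: 1 if that pick actually won, else 0.
    Returns (avg_predicted, actual_rate, count) per non-empty bin.
    Dots sit on the diagonal when avg_predicted ~= actual_rate.
    """
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # p == 1.0 goes in the top bin
        bins[i].append((p, y))
    out = []
    for b in bins:
        if not b:
            continue
        avg_p = sum(p for p, _ in b) / len(b)
        rate = sum(y for _, y in b) / len(b)
        out.append((avg_p, rate, len(b)))
    return out
```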
Six leagues, hundreds of athletes, the leaders in every category we track. Volume scorers, efficiency monsters, defensive anchors, the overlooked specialists. Each row is a real player having a real season.