Hockey is the hardest major sport to predict. Here's why 62% is better than it sounds, how we measure it honestly, and where the model is strongest.
TL;DR
A coin flip gets you 50%. Vegas closing lines hit around 58-60%. The best public hockey models land in the 59-65% range. PuckCast's 62% was measured on 5,002 games the model never saw during training (walk-forward validation). Confidence grades let you filter for the strongest picks, where accuracy climbs well above that 62% average.
If you could predict NHL games at 70%, you'd be the best hockey forecaster who ever lived. That kind of ceiling simply doesn't exist in hockey the way it does in other sports. Here's why.
The puck is small, fast, and bounces off sticks, skates, and bodies in ways that are genuinely unpredictable. A deflection off a shin pad can be the difference between a win and a loss.
Most NHL games produce 5-7 combined goals. With so few scoring events, a single lucky bounce has an outsized impact on the final result. Compare that to basketball, where teams combine for 200+ points and no single possession decides the game.
A goalie having an off night can tank a team that dominates possession. Save percentage swings game to game in ways that are extremely difficult to forecast in advance.
The salary cap, draft lottery, and revenue sharing are designed to keep the league competitive. In any given NHL game, the underdog has a real shot. That is the point.
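The low-scoring argument is easy to check with a quick simulation. The sketch below models a game as two independent Poisson goal totals; the 3.2 and 2.8 goals-per-game rates are made-up numbers chosen to represent a clearly better team against a clearly weaker one, not real NHL figures. Even with that gap on paper, the better team loses a large share of games.

```python
import math
import random

random.seed(42)

def poisson(lam: float) -> int:
    """Draw a Poisson sample via Knuth's multiply-uniforms method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_game(rate_a: float, rate_b: float) -> bool:
    """One game as two independent Poisson goal totals; True if A wins.

    Ties go to a 50/50 'overtime' coin flip, a crude stand-in for how
    noisy overtime and shootout results are in practice.
    """
    a, b = poisson(rate_a), poisson(rate_b)
    if a == b:
        return random.random() < 0.5
    return a > b

# Hypothetical scoring rates: team A averages 3.2 goals per game,
# team B averages 2.8 -- a meaningful edge on paper.
N = 100_000
wins = sum(simulate_game(3.2, 2.8) for _ in range(N))
print(f"better team's win rate: {wins / N:.1%}")
```

The simulated win rate lands in the mid-to-high 50s, not the 70s. When goals are rare events, even a real talent gap only moves the needle so far.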
Context matters. 62% sounds modest until you see where it sits relative to other benchmarks.
50% (coin flip): random guessing, zero skill.
Pick the home team every time: the naive baseline. Home-ice advantage exists, but it is not enough.
58-60% (Vegas closing lines): the market consensus, backed by millions of dollars in action.
62% (PuckCast, walk-forward): 5,002 games across 4 temporal folds, 16 seasons of training data.
65%: the realistic ceiling for hockey prediction. Beyond 65% is uncharted territory.
Any model can memorize the past. The real test is whether it works on games it has never seen. That is the difference between in-sample accuracy (testing on your training data) and walk-forward accuracy (testing on future data only).
PuckCast uses 4-fold temporal cross-validation. The model trains on earlier seasons and predicts later ones. It never peeks at the future. The 62% number comes entirely from these held-out predictions across 5,002 games.
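The fold structure described above can be sketched in a few lines. This is an illustration of the idea only: games are assumed ordered by date, and the fold boundaries here are invented for the example, since PuckCast's exact splits are not public.

```python
def temporal_folds(n_games: int, n_folds: int = 4):
    """Yield (train, test) index lists for walk-forward validation.

    Each fold trains on every game before a cutoff and tests on the
    next block, so the model never peeks at the future. A sketch of
    the fold structure, not PuckCast's exact setup.
    """
    block = n_games // (n_folds + 1)  # the first block is training-only
    for k in range(1, n_folds + 1):
        end = n_games if k == n_folds else (k + 1) * block
        yield list(range(k * block)), list(range(k * block, end))

for fold, (train, test) in enumerate(temporal_folds(5002), start=1):
    print(f"fold {fold}: train on {len(train)} games, "
          f"test on the next {len(test)}")
```

The key property is visible in the output: every fold's training set ends before its test set begins, so no future information leaks into any prediction.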
Why this matters
Transparency is a core value here, so let's be direct: live performance does not always match backtest numbers. PuckCast's walk-forward accuracy is 62% across 5,002 historical games. Live accuracy during the 2025-26 season sits at 57.4% overall.
That gap is normal. Every model faces it. Live predictions deal with injury news breaking mid-game, lineup changes announced after the model runs, and the inherent variance of small sample sizes within a single season. The good news: recent performance has been strong, with the model hitting 71% over the last 30 days as of early March 2026. Performance fluctuates, and we track it publicly on the performance page.
62.0%: walk-forward (5,002 games)
57.4%: 2025-26 live (overall)
71%: last 30 days (Mar 2026)
The 62% number is an average across all games. But the model knows when it has a strong read and when it is basically guessing. That is where confidence grades come in.
A+ / A: The model sees a clear edge. Strong team advantages, favorable matchups, and aligned underlying metrics. These games hit at well above the 62% average.
B+ / B: Solid lean but not a slam dunk. One or two factors might be working against the pick. Still above-average accuracy.
C+ / C: Marginal edge. The model has a preference but the signal is weak. Close to the overall average hit rate.
D: Toss-up territory. The model barely favors one side. These games are close to a coin flip, and the confidence reflects that.
If you only follow A+ and A picks, your effective accuracy is meaningfully higher than 62%. The tradeoff is volume: fewer games qualify. Confidence grades let you decide where on the accuracy-vs-volume spectrum you want to sit.
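The accuracy-vs-volume tradeoff is just a weighted average. The per-grade hit rates and game counts below are hypothetical numbers invented for illustration; PuckCast's actual per-grade figures live on its performance page.

```python
# Hypothetical (made-up) per-grade hit rates and game counts.
GRADES = {
    "A+": (0.71, 300), "A": (0.68, 500),
    "B+": (0.64, 800), "B": (0.62, 900),
    "C+": (0.60, 1000), "C": (0.58, 900),
    "D":  (0.52, 602),
}

def blended_accuracy(picked_grades):
    """Accuracy and volume if you only follow picks in `picked_grades`."""
    hits = sum(acc * n for g, (acc, n) in GRADES.items() if g in picked_grades)
    games = sum(n for g, (_, n) in GRADES.items() if g in picked_grades)
    return hits / games, games

acc, n = blended_accuracy({"A+", "A"})
print(f"A+/A only: {acc:.1%} over {n} games")
acc, n = blended_accuracy(set(GRADES))
print(f"all grades: {acc:.1%} over {n} games")
```

Restricting to the top grades buys a higher hit rate at the cost of far fewer qualifying games, which is exactly the spectrum the grades let you choose along.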
Most public hockey prediction models do not publish walk-forward accuracy numbers. The ones that do tend to report in the 55-63% range. Some claim higher, but without disclosure of their validation methodology, those numbers are difficult to verify.
PuckCast uses an ensemble of logistic regression (70%) and histogram-based gradient boosting (30%) trained on 154 features across 16 NHL seasons. The ensemble approach balances stability with the ability to capture non-linear patterns.
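At prediction time, a weighted ensemble like this reduces to blending the two sub-models' probabilities. The 70/30 weights are the ones stated above; the input probabilities in the example are made up, and the function name is illustrative rather than anything from PuckCast's codebase.

```python
def blend_probability(p_logistic: float, p_boosting: float) -> float:
    """Combine two sub-model outputs with the stated 70/30 weights.

    Each input is a home-win probability from one sub-model. The
    weighting favors the logistic model's stability while keeping
    some of the boosting model's non-linear signal.
    """
    return 0.70 * p_logistic + 0.30 * p_boosting

# Made-up example: logistic regression likes the home team (0.61),
# gradient boosting is cooler on it (0.54).
p = blend_probability(0.61, 0.54)
print(f"ensemble home-win probability: {p:.3f}")  # 0.589
```

The blended estimate sits between the two inputs, pulled toward whichever model carries more weight.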
Is 62% accuracy actually good for NHL predictions?
Yes. Hockey is the most random of the four major North American sports. Vegas closing lines sit around 58-60%, and the best public models top out in the 59-65% range. PuckCast's 62% walk-forward accuracy places it solidly in that top tier, especially because it was measured on 5,002 games the model had never seen during training.
What is walk-forward validation?
Walk-forward validation trains the model only on past data and tests it on future games, mimicking real-world usage. This is the honest way to measure accuracy. In-sample accuracy (testing on the same data you trained on) is always inflated and misleading. PuckCast's 62% number comes from 4-fold temporal cross-validation across 5,002 games.
Why is hockey so hard to predict?
Three main reasons. Goals are rare events (typically 5-7 per game combined), so a single lucky bounce can flip the outcome. The puck is small, fast, and deflects unpredictably off sticks, skates, and bodies. And goalie performance varies significantly game to game in ways that are difficult to model in advance.
How should I use confidence grades?
Not all predictions are created equal. PuckCast assigns confidence grades (A+ through D) based on how much edge the model sees. A+ and A games historically hit at a significantly higher rate than the 62% average, while D games sit closer to a coin flip. By focusing on high-confidence picks, you can filter for the model's strongest signals.