AI and Prediction: What the College Football Playoffs Tell Us About the Limits of AI and Data
Alabama missing out on the College Football Playoff (CFP) is a big deal in the college football world. Mark Consuelos quipped on Live with Kelly & Mark, “Then don’t lose three games.” While the sentiment seems simple, the truth is far more complicated. College football is not just about tallying wins and losses; it’s about interpreting a tapestry of data, historical precedent, and human judgment. To understand why the CFP is so complex, let’s break down the issues that make selecting teams anything but straightforward.
AI and Prediction: Insufficient Data
Football games generate a flood of statistics: yardage, completion percentages, time of possession, and more. But these numbers only tell part of the story. The data needed to definitively compare the strengths and weaknesses of teams across conferences, game conditions, and player dynamics simply does not exist.
Teams are more than their resumes or their stats. Early-season rankings rely on past performance, the perceived strength of the coaching staff, and subjective notions of player potential. However, football is unpredictable. Injuries, weather, and momentum all introduce volatility that cannot be fully captured in numbers. Models rely on averages, but football thrives on the exceptional—the single play or moment that disrupts expectations. These qualitative factors, while often discussed by commentators, rarely make their way into the quantitative models used to determine playoff berths.
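The gap between averages and the exceptional is easy to see with a toy example. The sketch below uses made-up point margins for two hypothetical teams (the numbers are purely illustrative, not real results): both average the same scoring margin, so an average-based rating treats them as equals, yet one goes undefeated while the other loses most of its games.

```python
import statistics

# Hypothetical point margins over eight games (assumed data, not real results).
# Both teams average a +7 margin, but their variance is very different.
steady_team = [7, 6, 8, 7, 7, 6, 8, 7]            # narrow win every week
volatile_team = [30, -4, 28, -3, 27, -5, -6, -11]  # blowouts mixed with upsets

avg_steady = statistics.mean(steady_team)
avg_volatile = statistics.mean(volatile_team)

wins_steady = sum(1 for m in steady_team if m > 0)
wins_volatile = sum(1 for m in volatile_team if m > 0)

print(f"steady:   avg margin {avg_steady:.1f}, record {wins_steady}-{8 - wins_steady}")
print(f"volatile: avg margin {avg_volatile:.1f}, record {wins_volatile}-{8 - wins_volatile}")
```

An average-based metric scores both teams identically at +7, while the records (8-0 versus 3-5) tell a completely different story. Any model built on season-level averages quietly discards exactly the volatility that decides games.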
AI and Prediction: Incorrect Models
The CFP Selection Committee uses advanced models to inform their decisions, but even these have inherent flaws. Traditional analytics depend on historical data to make predictions, which inherently introduces bias toward certain teams or conferences. AI-driven models, while more sophisticated, are still limited by the quality of the inputs and the assumptions embedded in their design.
For example, models may overemphasize strength of schedule without accounting for the timing of critical injuries or underestimate the significance of a close loss to a top-tier opponent. These blind spots can lead to rankings that don’t align with on-field performance or intuitive expectations.
Moreover, the human element—subjective evaluations by the committee—is supposed to fill these gaps, but human biases and inconsistencies often exacerbate the problem. The interplay between flawed data, imperfect models, and human judgment creates a system where no one fully trusts the outcomes, even when they agree with them.
AI and Prediction: Technology Isn’t Capable of Modeling the Solution Space
Even the most advanced technology struggles to make sense of a college football season. With dozens of teams and thousands of games played under wildly different circumstances, the solution space is enormous. AI may be able to simulate millions of scenarios, but that doesn’t mean it can predict the future with certainty.
For instance, a generative AI model might accurately predict that a team like Alabama could struggle if its offensive line underperforms in critical games. But if Alabama’s kicker misses a crucial field goal, does that mean the AI was right or just lucky? And how do we separate valid predictions from the noise of random chance? The vast number of variables in football—many of them qualitative and context-dependent—makes it nearly impossible to create a perfect model.
AI is excellent at identifying patterns in data, but outliers often define college football outcomes. A single dropped pass, a referee’s controversial call, a freak snowstorm, or an unexpected injury can swing a game and, by extension, a season. Technology, for all its power, is not yet capable of integrating these human and situational factors into its predictions.
Seeing the Whole Field
Alabama’s absence from the CFP highlights the complexity of selecting the “best” teams in a sport that thrives on unpredictability. People watch sports precisely because they are the ultimate unscripted event. The controversy over playoff selections underscores the limits of data, models, and technology in providing definitive answers. Fans may grumble about the process, but the truth is that college football’s chaos is part of its charm.
As the playoff expands in the coming years, the system will face even greater challenges. More teams mean more variables, more opportunities for debate, and more reliance on technology to sort through the chaos. Whether the process improves or simply shifts the arguments remains to be seen. Better reasoning, more variables, and additional data won’t overcome the underlying uncertainty. By extension, all organizations should be cautious about overreliance on AI’s oracular capabilities when forecasting complex situations characterized by uncertainty.
For now, though, one thing is clear: the road to the CFP is anything but simple, and the reasons behind a team’s inclusion, or exclusion, go far beyond wins and losses. Unless, of course (and here I’m showing a bias attributable to the large portion of my mid-2000s income that went to the University of Oregon for my daughter’s tuition), all the games were wins. (Go Ducks!)
For more serious insights on AI, click here.
All images created with DALL-E 2 from OpenAI, using prompts written by the author.