Alan Schwarz of The New York Times published an interesting piece in today's edition about the rise in popularity of simulation software among geeky baseball fans and front office types.
These programs use the so-called "Monte Carlo method" to estimate the impact of various scenarios on performance. Essentially, they run enormous numbers of randomized simulations and average the results, letting the random variation wash out until a stable estimate emerges. For example, simulations can predict the impact of inserting a new player into a team's lineup or evaluate the effect of weather conditions on performance, and do so with impressive accuracy.
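To make the idea concrete, here's a minimal sketch of a Monte Carlo game simulation in Python. The model is deliberately a toy: each team scores at most one run per inning with a fixed probability, and all the numbers are made up for illustration, not drawn from the simulators the article describes.

```python
import random

def estimate_win_prob(p_home, p_away, innings=9, trials=50_000, seed=1):
    """Monte Carlo estimate of the home team's win probability.

    Toy model (illustrative only): in each inning, a team scores
    0 or 1 run with a fixed per-inning probability. Running many
    trials and averaging lets the randomness wash out.
    """
    rng = random.Random(seed)
    home_wins = 0
    for _ in range(trials):
        home = sum(rng.random() < p_home for _ in range(innings))
        away = sum(rng.random() < p_away for _ in range(innings))
        # crude extra-innings tiebreaker: play one inning at a time
        while home == away:
            home += rng.random() < p_home
            away += rng.random() < p_away
        home_wins += home > away
    return home_wins / trials
```

Comparing two runs with different parameters is the same trick the real simulators use to evaluate, say, a lineup change: tweak one input, re-run the trials, and see how the estimate moves.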
Sounds pretty cool, but how effective would such simulations be for college football? Unfortunately, Homerism is skeptical about their applicability.
For the most part, there's a uniformity to baseball that you won't find in college football. Football teams run a wide variety of offensive and defensive sets and schemes, whereas there are only so many ways a baseball team can play a meaningfully different style than its opponent. Likewise, football teams exert greater influence over the course of a game through their in-game strategic decisions. Also, every single football play involves all 22 players on the field. In contrast, most baseball plays are influenced by three or four players.
Consequently, it seems as though the sheer number of variables that determine the outcome of a football game would undermine the reliability of Monte Carlo estimates when it comes to college football. However, seeing as Homerism is far from an expert in this field, the thoughts of my more-knowledgeable readers would, as always, be appreciated.