Instead of over-analyzing last night's game, I want to ask a question: how did the analysts, specifically Baseball Prospectus (BP), get the White Sox so wrong this year? Now, they have an answer, "luck", but that just doesn't wash given the incredible gap between what they said in their book this year and what actually happened. I think I have the answer, but explaining it takes some background.
First, you have to understand a little of the history of sabermetrics. Sabermetrics is a term coined by Bill James to encompass the whole idea of studying questions about baseball using statistics. James didn't invent the concept; it's as old as baseball itself and Henry Chadwick keeping score in the 19th century. Many systems have been proposed over the years, from Earnshaw Cook's DX to Tom Boswell's Total Average to Pete Palmer's Linear Weights (LWTS) to Bill James' many metrics to BP's collection of formulas.
With that said, here's why BP screwed up:
Attitude. Bill James wrote books that were essentially academic papers. He would pose a question, describe the framework used to address it in full (including formulas), attempt to answer it, and then open the floor (conceptually) to objections and review. BP doesn't really do this; their formulas are largely proprietary (because they make money selling information to many sources), and they don't really brook review of their methods or conclusions. BP's attitude is more Papal than academic, an attitude that leads to error through the age-old vice of hubris. The White Sox don't fit BP's concept of how to run a baseball team, so they must be bad. When their own formulas showed a better outcome than their guts suggested, they went with their guts; when their formulas emitted transparently bizarre predictions that happened to fit their assumptions, they stuck by their formulas.
Appearance of Conflict of Interest. Further, because the White Sox don't fit their concept (and presumably don't buy their private analysis products), BP trashed them in public, which is analogous to the brokerage scandals of a few years back (though, admittedly, not harmful to the public in any way). The teams BP singles out for praise are, predictably, the teams that employ their authors and friends. BP is largely a Chicago-born institution, and the fact that the Chicago teams ignore them could be a cause for simple spite. Further, success by any organization that doesn't follow the basic BP program could obviously be perceived by the public as undermining the credibility of their otherwise entertaining and shrewd product.
Methodology. Plainly put, much of BP's work derives from Pete Palmer's, and Palmer's methodology was, I believe, flawed by the basic, mistaken assumption that baseball is close enough to a linear process to be analyzed as such. James noticed early that baseball isn't linear; his formulas aren't linear, and his results were better. BP drank the Total Baseball Kool-Aid -- or we think they did; we'll never know, because they don't really let us see their process. One example is the focus on replacement players, who are essentially strawmen; the goal is not to collect players better than AAA players, the goal is to win as often as possible.
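To make the linearity point concrete, here's a minimal sketch in Python contrasting the two public formulas: a Palmer-style Linear Weights estimator (the coefficients below are the commonly cited approximations; exact values vary by edition) against Bill James' basic Runs Created. This illustrates the published formulas only -- it is not a reconstruction of BP's unpublished methods.

```python
# Palmer-style Linear Weights: every event has a fixed run value,
# independent of everything else the team does. Coefficients are
# approximate published values.
def linear_weights(singles, doubles, triples, hr, bb, outs):
    return (0.47 * singles + 0.78 * doubles + 1.09 * triples
            + 1.40 * hr + 0.33 * bb - 0.25 * outs)

# Bill James' basic Runs Created: RC = (H + BB) * TB / (AB + BB).
# On-base events and power multiply each other, so the model is
# nonlinear in the raw event counts.
def runs_created(h, bb, tb, ab):
    return (h + bb) * tb / (ab + bb)

base     = runs_created(h=1400, bb=500, tb=2200, ab=5500)  # ~696.7
more_obp = runs_created(h=1450, bb=550, tb=2250, ab=5500)  # ~743.8
more_pow = runs_created(h=1400, bb=500, tb=2400, ab=5500)  # ~760.0
both     = runs_created(h=1450, bb=550, tb=2450, ab=5500)  # ~809.9

# The interaction term is the point: for Runs Created,
# (both - base) > (more_obp - base) + (more_pow - base),
# while a linear model forces those two effects to add exactly.
```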
Observability. Baseball has dozens of individual statistics that are collected at every level, then analyzed and over-analyzed. It also has dozens of events that happen on the field every game that go unrecorded because there's no statistic to cover them, and no realistic way to distinguish intent. When a batter grounds to second to advance a runner to third base, no statistic is really kept, and if one were, no framework would exist to evaluate it. BP's approach to this problem is the age-old one: assume that if we can't measure it, it must all even out in the end.
So, because they couldn't understand the Carlos Lee trade within their analytical framework, couldn't account for the possibility that Jose Contreras' failures in Yankee Stadium were correctable, and so on and so forth, BP allocated 82-odd wins to the White Sox, and Joe Sheehan predicted a 90-loss season for a team that has now won 107 games and counting and holds a tenuous one-game lead in the World Series. When confronted time and time again with the fact that something in the system went wrong enough to produce that grade of mistake, the responses amount to: (1) they were lucky, (2) they were lucky, (3) their luck will run out, and (4) they are really lucky. That excuse made the grade in May, BP, but now we're in late October and you need better ones.
The real answer is that you can't predict run prevention, because your defensive/fielding analysis is just as bad as everyone else's. For a business that makes money on the promise that it can predict the future, that is what you call a severe downward indicator.