Nate Silver's 538 blog has a clear exposition of why advanced baseball statistics say that 20-year-old rookie phenom Mike Trout of the Los Angeles Angels should win the American League Most Valuable Player award instead of the favorite, Miguel Cabrera of the Detroit Tigers. Cabrera is the first ballplayer since Yaz in 1967 to win the Triple Crown of batting average, homers, and RBIs.
Sabermetricians often preen when sportswriters give the MVP award to the wrong guy, and there have indeed been some dumb votes in the past.
But it's not as if nobody would have noticed that Trout was having an amazing season without advanced statistics. Trout absolutely electrified the fans who watched him, pulling off all sorts of crazy feats, like stealing four home runs from batters by leaping to catch balls over the fence. Cabrera, in contrast, had certain unglamorous deficiencies that dedicated Tigers fans would notice, such as a tendency to ground into double plays because he's a slow runner who hits the ball hard.
It's not that hard to tell who's the most valuable player on a team if you watch the team every day. As Yogi Berra said, "You can observe a lot just by watching." But, when trying to pick the most valuable player in the entire league, sportswriters tended in the past to look to simplifying statistics, most notoriously the highly contextual runs-batted-in counter.
Still, picking between Trout and Cabrera is a complicated question. The difficulty raises questions about everybody's favorite panacea for fixing the schools: value-added testing of teachers.
As I've pointed out before, although the world has largely come around to the theory I propounded in the 1990s that teachers should be measured on "value added," almost nobody, including me, has much specific knowledge of how that ought to work in practice. And we're probably not going to get there all that soon, either.
It has taken quite a few generations for baseball statistics to reach the level of sophistication on display in Silver's posting.
And yet, there are still baseball conundrums that aren't clearly resolvable with statistics. For example, Silver deducts runs from Cabrera's overall performance for being a lousy third baseman. But Cabrera's fans write in to point out that he is a natural first baseman who volunteered during the offseason to lose weight and take lots of grounders so he could play third this season, which let the Tigers sign the even fatter Prince Fielder to play first.
Playing a defensive position you are not cut out for can take a psychological toll on your hitting. An old-time example is Pedro Guerrero's slump in the first two months of 1985, brought on by the Dodgers' insistence that he continue to humiliate himself at third base. When they finally let him move to his natural left field, he responded to his liberation with one of the great hitting months in history, tying the National League record for homers in a single month.
(I have a general theory that sabermetric logic points baseball in an ugly direction: kind of like semi-pro slowpitch softball. Don't worry much about defense or baserunning, just get a bunch of hulks who can hit homers or draw walks. But I don't think this kind of reductionism works in practice, for a psychological reason: put too many Dr. Strangegloves on the field at once and it cuts the heart out of a team. Sure, statistics might say that fielding isn't really that important, but botching too many balls embarrasses teams and causes bad feelings in the dugout. Errors depress teams, like they depressed Guerrero.)
Also, it's not clear how to account for the fact that Mike Trout spent the Angels' first 22 games in the minors (during which they went only 8-14). As soon as he came up he jumpstarted the Angels' offense.
It's easy to say that if he'd played the whole season instead of just 139 games, he would have scored even more than 129 runs and hit even more than 30 homers.
No doubt about it.
On the other hand, he likely would have regressed somewhat more toward the mean in on-base and slugging percentages — after a superhuman start, his averages were slipping during the last two months of the season.
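The back-of-the-envelope extrapolation above can be sketched in a few lines. The 162-game season length and Trout's 139-game totals (129 runs, 30 homers) come from the text; the constant per-game rate is my own illustrative assumption, and, as just noted, it ignores regression toward the mean, so treat the results as an upper bound rather than a forecast.

```python
# Naive full-season projection of counting stats, assuming a constant
# per-game rate. No regression-toward-the-mean adjustment is applied.

SEASON_GAMES = 162
GAMES_PLAYED = 139  # Trout played 139 of the Angels' 162 games in 2012

def project(stat: float, games_played: int = GAMES_PLAYED,
            season_games: int = SEASON_GAMES) -> float:
    """Scale a counting stat from games_played up to a full season."""
    return stat * season_games / games_played

print(round(project(129)))  # runs scored: 150
print(round(project(30)))   # home runs: 35
```

Even this crude scaling shows why "he would have scored even more" is a safe claim; the hard part, which no linear projection captures, is how much his rate stats would have cooled off over a full season.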
I suspect that it will turn out that there are all sorts of analogies to advanced teacher evaluations. If you, say, offer a $10,000 bonus for the school's Most Valuable Teacher based on value-added test scores, you will start to see all sorts of complex but not uninteresting arguments from interested parties for why more factors need to be taken into consideration.