An eclectic essayist is necessarily a dilettante, which is not in itself a bad thing. But Gladwell frequently holds forth about statistics and psychology, and his lack of technical grounding in these subjects can be jarring. He provides misleading definitions of “homology,” “sagittal plane” and “power law” and quotes an expert speaking about an “igon value” (that’s eigenvalue, a basic concept in linear algebra). In the spirit of Gladwell, who likes to give portentous names to his aperçus, I will call this the Igon Value Problem: when a writer’s education on a topic consists in interviewing an expert, he is apt to offer generalizations that are banal, obtuse or flat wrong.
The banalities come from a gimmick that can be called the Straw We. First Gladwell disarmingly includes himself and the reader in a dubious consensus — for example, that “we” believe that jailing an executive will end corporate malfeasance, or that geniuses are invariably self-made prodigies, or that eliminating a risk can make a system 100 percent safe. He then knocks it down with an ambiguous observation, such as that “risks are not easily manageable, accidents are not easily preventable.” As a generic statement, this is true but trite: of course many things can go wrong in a complex system, and of course people sometimes trade off safety for cost and convenience (we don’t drive to work wearing crash helmets in Mack trucks at 10 miles per hour). But as a more substantive claim that accident investigations are meaningless “rituals of reassurance” with no effect on safety, or that people have a “fundamental tendency to compensate for lower risks in one area by taking greater risks in another,” it is demonstrably false.
The problem with Gladwell’s generalizations about prediction is that he never zeroes in on the essence of a statistical problem and instead overinterprets some of its trappings. For example, in many cases of uncertainty, a decision maker has to act on an observation that may be either a signal from a target or noise from a distractor (a blip on a screen may be a missile or static; a blob on an X-ray may be a tumor or a harmless thickening). Improving the ability of your detection technology to discriminate signals from noise is always a good thing, because it lowers the chance you’ll mistake a target for a distractor or vice versa. But given the technology you have, there is an optimal threshold for a decision, which depends on the relative costs of missing a target and issuing a false alarm. By failing to identify this trade-off, Gladwell bamboozles his readers with pseudoparadoxes about the limitations of pictures and the downside of precise information.
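The trade-off described here is the core of signal detection theory, and it can be made concrete with a toy calculation. The sketch below (my own illustration, not anything from Gladwell's book; all numbers are arbitrary) models the noise and signal readings as two Gaussians and shows that the cost-minimizing cutoff follows from the standard likelihood-ratio criterion — shift the relative costs of a miss and a false alarm, and the optimal threshold shifts with them:

```python
import math
from statistics import NormalDist

# Hypothetical setup: a reading x is noise ~ N(0,1) or signal ~ N(2,1).
noise = NormalDist(0.0, 1.0)
signal = NormalDist(2.0, 1.0)
p_signal = 0.1          # prior probability the blip is a real target
cost_miss = 10.0        # cost of missing a target (e.g., a missile)
cost_false_alarm = 1.0  # cost of acting on a distractor (e.g., static)

# Standard optimal criterion: call "target" when the likelihood ratio
# p(x|signal)/p(x|noise) exceeds beta = (C_fa * p_noise) / (C_miss * p_signal).
beta = (cost_false_alarm * (1 - p_signal)) / (cost_miss * p_signal)

# For equal-variance Gaussians this reduces to a single cutoff on x:
# ln LR(x) = d'*x - d'^2/2, so x* = d'/2 + ln(beta)/d'.
d_prime = signal.mean - noise.mean
x_star = d_prime / 2 + math.log(beta) / d_prime

def expected_cost(cutoff):
    miss = signal.cdf(cutoff)            # P(x < cutoff | signal)
    false_alarm = 1 - noise.cdf(cutoff)  # P(x >= cutoff | noise)
    return (p_signal * cost_miss * miss
            + (1 - p_signal) * cost_false_alarm * false_alarm)

# Brute-force check that the analytic cutoff minimizes expected cost.
grid = [i / 1000 for i in range(-2000, 4000)]
best = min(grid, key=expected_cost)
print(f"analytic cutoff {x_star:.3f}, numeric minimum {best:.3f}")
```

Better technology widens the gap between the two Gaussians (a larger d′), which helps at any threshold; but for a fixed d′, where you put the cutoff is purely a question of which error is costlier.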
Another example of an inherent trade-off in decision-making is the one that pits the accuracy of predictive information against the cost and complexity of acquiring it. Gladwell notes that I.Q. scores, teaching certificates and performance in college athletics are imperfect predictors of professional success. This sets up a “we” who is “used to dealing with prediction problems by going back and looking for better predictors.” Instead, Gladwell argues, “teaching should be open to anyone with a pulse and a college degree — and teachers should be judged after they have started their jobs, not before.”
But this “solution” misses the whole point of assessment, which is not clairvoyance but cost-effectiveness. To hire teachers indiscriminately and judge them on the job is an example of “going back and looking for better predictors”: the first year of a career is being used to predict the remainder. It’s simply the predictor that’s most expensive (in dollars and poorly taught students) along the accuracy-cost trade-off. Nor does the absurdity of this solution for professional athletics (should every college quarterback play in the N.F.L.?) give Gladwell doubts about his misleading analogy between hiring teachers (where the goal is to weed out the bottom 15 percent) and drafting quarterbacks (where the goal is to discover the sliver of a percentage point at the top).
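The asymmetry between those two selection goals can be seen in a toy simulation (again my own illustration, not data from the book; the correlation of 0.5 between a pre-hire predictor and on-the-job quality is an arbitrary, plausible-sounding choice). The same modestly informative predictor is quite useful for flagging the bottom 15 percent but nearly useless for pinpointing the top tenth of a percent:

```python
import random

random.seed(0)
N = 200_000
r = 0.5  # assumed predictor-quality correlation (illustrative only)

# Draw (quality, predictor) pairs from a bivariate normal with correlation r.
quality, predictor = [], []
for _ in range(N):
    q = random.gauss(0, 1)
    p = r * q + (1 - r ** 2) ** 0.5 * random.gauss(0, 1)
    quality.append(q)
    predictor.append(p)

by_pred = sorted(range(N), key=lambda i: predictor[i])
by_qual = sorted(range(N), key=lambda i: quality[i])

def hit_rate(picked, truly):
    """Fraction of those picked by the predictor who truly belong."""
    return len(picked & truly) / len(picked)

# Goal 1 (teachers): weed out the bottom 15 percent.
k15 = int(0.15 * N)
rate_bottom = hit_rate(set(by_pred[:k15]), set(by_qual[:k15]))

# Goal 2 (quarterbacks): find the top sliver of a percentage point.
k01 = int(0.001 * N)
rate_top = hit_rate(set(by_pred[-k01:]), set(by_qual[-k01:]))

print(f"bottom-15% hit rate: {rate_bottom:.2f}")
print(f"top-0.1% hit rate:   {rate_top:.2f}")
```

In this run the predictor catches a large share of the genuinely weak candidates but only a few percent of the genuine stars, which is why an analogy between hiring teachers and drafting quarterbacks is misleading even when the predictors are equally (im)perfect.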
The common thread in Gladwell’s writing is a kind of populism, which seeks to undermine the ideals of talent, intelligence and analytical prowess in favor of luck, opportunity, experience and intuition. For an apolitical writer like Gladwell, this has the advantage of appealing both to the Horatio Alger right and to the egalitarian left. Unfortunately he wildly overstates his empirical case. It is simply not true that a quarterback’s rank in the draft is uncorrelated with his success in the pros, that cognitive skills don’t predict a teacher’s effectiveness, that intelligence scores are poorly related to job performance or (the major claim in “Outliers”) that above a minimum I.Q. of 120, higher intelligence does not bring greater intellectual achievements.
The reasoning in “Outliers,” which consists of cherry-picked anecdotes, post-hoc sophistry and false dichotomies, had me gnawing on my Kindle. Fortunately for “What the Dog Saw,” the essay format is a better showcase for Gladwell’s talents, because the constraints of length and editors yield a higher ratio of fact to fancy. Readers have much to learn from Gladwell the journalist and essayist. But when it comes to Gladwell the social scientist, they should watch out for those igon values.