Wall Street Quants And The Inherent Failures Of Risk Management
10/05/2008

Obviously, there has been a gigantic failure by Wall Street rocket scientists at "risk management." This isn't my area of expertise, but I think I can point out a basic mistake. Going back decades to Burton Malkiel's book A Random Walk Down Wall Street, sophisticated financial thinking (e.g., the "efficient markets theory" of the 1970s) has been dominated by the concept of randomness: events are distributed on a bell curve-shaped probability distribution.

To take a simplistic example, people tend to default on mortgages when they have bad luck: dad gets cancer and dies, and then mom gets depressed and loses her job. The bank forecloses. If you hold 1,000 mortgages, that kind of bad luck happens to, say, 15 each year. Of course, your 1,000 borrowers might have worse luck than normal. Say you study millions of mortgages and determine the standard deviation is 5 per 1,000. So, for about 99.7% of bundles of 1,000 mortgages (within three standard deviations of the mean), the number of defaults in a year will range from 0 to 30.
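A back-of-the-envelope sketch of that bell-curve arithmetic, using only the illustrative numbers above (a mean of 15 defaults and a standard deviation of 5 per 1,000 mortgages), not real data:

```python
# A minimal sketch of the bell-curve default arithmetic described above.
# The figures (mean of 15 defaults per 1,000 mortgages, standard deviation
# of 5) are the article's illustrative numbers, not real data.

mean_defaults = 15   # expected defaults per 1,000 mortgages per year
std_dev = 5          # standard deviation estimated from "millions of mortgages"

# Roughly 99.7% of outcomes fall within three standard deviations of the mean.
low = mean_defaults - 3 * std_dev   # 15 - 15 = 0
high = mean_defaults + 3 * std_dev  # 15 + 15 = 30

print(f"~99.7% of 1,000-mortgage pools should see {low} to {high} defaults per year")
```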

Here's the problem: human life really isn't all that random. That's because human beings respond to incentives. If you treat human beings as if they are just mindless probabilistic events, whose risks you can diversify away by dealing with large numbers of them at a time, they will outsmart you. They will put down inflated incomes on their mortgage applications. They will claim to be owner-occupiers when they are just speculators who will rent out the property to Section 8 tenants when they get into a cash flow bind. They will bribe appraisers to report a higher than actual value.

Another common pattern in life is that things build to a climax of the greatest risk and reward, where predictability is at a minimum. The events that we are most interested in are those that are hardest to predict. We know exactly when the sun will set on December 21, 2008. That is a hugely important fact, but it's not a very interesting one to us because it has already been taken into account. We're more interested in things like who will win the World Series or be elected President or whether the stock market will go up or down ... because those are so hard to predict.

Let's look at a sports example of risk vs. reward. Say you are an Olympic boxer, one of 32 contenders in your weight class. You have a particular power punch that you are fond of, one that requires you to drop your defenses for a fraction of a second as you wind up to deliver it. In the first round, against a boxer from Sikkim, you throw it seven times with no bad results for you. In the second round, against the Ghanaian fighter, you use it five times with no ill effects. In the third round against the Slovenian fighter you throw it six times and suffer one glancing blow. In the semifinal round against the Korean boxer, you throw it seven times and suffer two glancing blows.

Okay, so, in the first four matches, you've thrown it 25 times and suffered three glancing blows. Only a 12% problem rate, and those problems aren't that bad: just glancing blows. You run 1,000 Monte Carlo simulations, and using that punch pays off in 973 of them. You like those odds!
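Here is a hedged sketch of what such a naive Monte Carlo might look like. The per-bout details (six throws per bout, every counter assumed to be merely a glancing blow, and "paying off" defined as taking fewer than three counters) are assumptions for illustration, not anything specified above; with them, the simulation lands in the same neighborhood as the 973-out-of-1,000 figure.

```python
# A hedged sketch of the kind of naive Monte Carlo described in the text.
# Assumptions (mine, not the article's): you throw the power punch 6 times
# per bout, each throw independently draws a counter 12% of the time, every
# counter is only a "glancing blow" (because that is all the data has shown),
# and the punch "pays off" unless you absorb 3 or more counters in one bout.
import random

random.seed(1)  # arbitrary seed for reproducibility

N_SIMS = 1_000
PUNCHES_PER_BOUT = 6
P_COUNTER = 3 / 25  # 3 glancing blows in 25 throws = 12%

payoffs = 0
for _ in range(N_SIMS):
    counters = sum(random.random() < P_COUNTER for _ in range(PUNCHES_PER_BOUT))
    if counters < 3:          # nothing worse than a couple of glancing blows
        payoffs += 1

print(f"Punch pays off in {payoffs} of {N_SIMS} simulated bouts")
# The flaw, of course, is that the model only contains opponents you have
# already faced; it has no term for the Cuban.
```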

Now you are in the final against the Cuban, who is the World Champion and defending Olympic gold medalist. You immediately rear back to throw your power punch ... and wake up in the infirmary with your silver medal on the bedside table.

What happened?

Non-randomness. The whole tournament was set up to pit the two best boxers against each other in the final round. The Cuban, who might be the professional champion of the world if he were allowed out of Castro's paradise, is just plain better than anybody you fought before. In hindsight, you can see a trend in the data, but you simply couldn't predict from it how hard you'd get hit.

A lot of things in real life work out roughly along the same lines as in organized tournaments, building to a climax. First, Hitler conquers Czechoslovakia, then Poland, then Denmark and Norway. So, feeling lucky, he invades France. Then in 1941, with all that positive data on the high rewards and low risks involved in starting wars available to him, he invades the Soviet Union and declares war on the United States. Notice a pattern?

In retrospect, things tend to evolve toward maximum unpredictability. The pre-1914 alliance system of Europe, because the weaker side at any point had the incentive to offer more to neutrals to join it, tended to evolve toward two evenly matched blocs.

This doesn't mean that this has to happen. There are lots of periods without that kind of disastrous evolution. But things like World Wars or financial crashes are what catch our eye in hindsight, and with good reason.

This suggests that there is no way to avoid disasters permanently. That's no doubt true. But we can make them rarer and less catastrophic just by being less stupid. Consider two economies, both of which either grow 5% per year or shrink 5% per year. The first economy is more bubble-prone, so it grows for seven years, then shrinks for three years. The second economy grows for ten years, then shrinks for two years. Over the course of sixty years (six cycles for the first economy, five for the second), the second economy will end up more than twice as big, roughly 2.2 times the size of the first.
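The compounding arithmetic is easy to check. A quick sketch, using only the stylized growth rates and cycle lengths given above:

```python
# A quick check of the compounding arithmetic above, using the article's
# stylized numbers: both economies grow 5% in good years and shrink 5% in
# bad years; the first runs 7-up/3-down cycles, the second 10-up/2-down.

def size_after(years_up: int, years_down: int, cycles: int) -> float:
    """Relative size after the given number of boom/bust cycles, starting at 1.0."""
    per_cycle = (1.05 ** years_up) * (0.95 ** years_down)
    return per_cycle ** cycles

bubble_prone = size_after(7, 3, 6)    # six 10-year cycles = 60 years
steadier     = size_after(10, 2, 5)   # five 12-year cycles = 60 years

print(f"Bubble-prone economy: {bubble_prone:.2f}x its starting size")   # ~3.1x
print(f"Steadier economy:     {steadier:.2f}x its starting size")       # ~6.9x
print(f"Ratio: {steadier / bubble_prone:.2f}x")                         # ~2.2x
```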

The Albanian economy collapsed in 1997-98 because the population, introduced to capitalism less than a decade earlier, became entranced by simple pyramid schemes. That was so stupid that it probably won't happen again in Albania for a long time.

Americans don't fall for simple pyramid schemes. We need more complicated scams.
