To continue some of the themes raised in The Black Swan, I'll discuss Taleb's attack on financial models based on the normal distribution, the misuse of which he describes as the Great Intellectual Fraud.
I considered entitling this "Am I a fraud?", as the book contends that those who use such models are either idiots unaware of the problem, or frauds who are aware of it but content to sell their services based on an idea they know to be wrong. On the face of it I fall into the second category. I came out of a physics PhD on power laws in physical systems, I am aware of what they imply, and I know that financial time series typically have power-law tails (Mandelbrot's work on this is cited in my thesis). Despite this I still work day in, day out with normal-based models.
My first line of defence would be that these are really the best models we have. However, Taleb's contention is that financial models based on the normal distribution are so badly flawed that not only do they miss the really significant risks (the big ones) by assuming them away, they go further and actually create them. Once we have constructed a model to reflect the risk, people trade and protect themselves on the basis that the model's predictions actually reflect reality and that large, uncorrelated moves are exceptionally rare. This leads them to take on additional risk, and leaves them worse exposed when exceptionally large moves do occur.
There are numerous recent examples he would cite as instances of this: the subprime collapse in the US, where we had statements that 1-in-10,000-year events occurred every day for three days; the collapse of Amaranth; and of course LTCM. In all of these cases people believed that their models captured all the risks they faced, and that they could therefore trade with a good idea of the worst results that could occur. When something happened outside their model, they blew up.
Taleb's contention is that we know markets don't have normally distributed outcomes. While there are studies which have looked through series of data to confirm normality, this is an essentially worthless activity. As a follower of Popper, his view is that theories are conjectures, which we must reject once they are falsified. Induction will never show us the real distribution; falsification, however, can show us which models are not acceptable. I accept and agree with this.
In the case of the normal distribution and related distributions, we need only look at the massive overabundance of extreme events. It is typical to see "5 sigma" events – which should occur roughly once every 10,000 years – about once every 3-5 years. Knowing this, it is pointless to claim that a normal distribution describes market moves. To claim that you can find periods where the market is normal is – to paraphrase Taleb – like having breakfast with OJ Simpson and concluding that, because he didn't kill anyone while you were there, he isn't a killer. Absence of evidence is not evidence of absence.
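The mismatch is easy to quantify. Here is a back-of-envelope sketch (my own illustration, not from the book), assuming roughly 252 trading days per year:

```python
import math

# One-sided probability of a daily move beyond 5 standard deviations
# under a normal model: P(Z > 5) = 0.5 * erfc(5 / sqrt(2))
p = 0.5 * math.erfc(5 / math.sqrt(2))

# Expected waiting time between such events, at ~252 trading days a year
days_between = 1 / p
years_between = days_between / 252

print(f"P(5-sigma day) = {p:.2e}, i.e. once every {years_between:,.0f} years")
```

Under the normal model a 5-sigma day should show up around once in ten thousand years or more; seeing several per decade in real data is exactly the falsification Taleb points to.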
The question I ask is: do most practitioners actually believe that their models are an accurate description of reality? Or are they trying to capture what risks they can in a normal model, while still relying on things such as stress tests to capture exposure to the more extreme Black Swans? Certainly in my experience the latter is what occurs. The quants I have experience with were physicists and mathematicians gone bad, not the MBA graduates Taleb refers to. Perhaps those people do believe in normal markets.
Anyway, on to whether what I do is essentially fraudulent. My answer is no, but there is a danger it is taken the wrong way. First, consider Value-at-Risk (VAR). We calculate VAR at a 99% quantile, trying to estimate the size of a 1-in-100-day loss. Now while I can never backtest this model to prove it is correct, we can still test it, and historically it has produced pretty close to the correct number of breaches (around 2% of outcomes falling above the 99% quantile or below the 1% quantile), which is the critical property we are trying to check. Certainly it hasn't been falsified on this criterion. Now how do I reconcile this with my belief that such models are ultimately flawed because they fail to predict large events? Quite simply, we don't need to predict large events to calculate an accurate VAR. A 1-in-100 event is not a Black Swan by any means, and even with limited non-normality in our model we can account for this type of event.
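The breach test described above can be sketched in a few lines. This is a toy illustration with synthetic, normally distributed returns (so it passes by construction); the function names and parameters are my own, not any standard library's:

```python
import math
import random

def parametric_var99(returns):
    # 99% one-day parametric VAR under a normal assumption: mu - 2.326 * sigma
    n = len(returns)
    mu = sum(returns) / n
    sigma = math.sqrt(sum((r - mu) ** 2 for r in returns) / (n - 1))
    return mu - 2.326 * sigma

def breach_rate(returns, var):
    # Fraction of days whose return falls below the VAR threshold
    return sum(1 for r in returns if r < var) / len(returns)

random.seed(42)
# Hypothetical daily returns: mean 0, 1% daily volatility
rets = [random.gauss(0.0, 0.01) for _ in range(5000)]

var99 = parametric_var99(rets)
rate = breach_rate(rets, var99)
print(f"99% VAR: {var99:.4f}, breach rate: {rate:.2%}")
```

On real data the same check applies: if the observed breach rate sits near 1% per tail, the model is doing its 1-in-100 job, whatever it gets wrong further out.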
If however we were to test, say, the most extreme moves in our data set, we would find moves that are really not accounted for by the model, e.g. multiple 5-sigma events. So while we may not capture the big moves correctly, we don't need to for VAR purposes; we only need the 1-in-100 level. Thus I can deliver a VAR number with a straight face and a feeling of honesty. I believe we are giving a good estimate of the 1-in-100 risk.
In my opinion the real question is not whether our VAR number is accurate, but whether this estimate is really useful and whether it is being used correctly. If I accept that there are fat tails (power laws), then the size of, say, a 1-in-1000 move doesn't scale up from the 1-in-100 move the way our model predicts. More importantly, should I be provisioning based on a multiple of the 1-in-100 VAR when I know it doesn't scale to larger moves?
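To see how differently the tails scale, compare the ratio of the 1-in-1000 move to the 1-in-100 move under a normal model and under a power-law tail. The tail index of 3 below is a hypothetical choice for illustration (values around 3 are often quoted for equity returns, but this is an assumption, not a measurement):

```python
from statistics import NormalDist

# Ratio of the 1-in-1000 quantile to the 1-in-100 quantile
z = NormalDist().inv_cdf
normal_ratio = z(0.999) / z(0.99)           # normal model: moves grow slowly

# Under a Pareto-type tail, the quantile at exceedance probability p
# scales like p**(-1/alpha), so the ratio is (1000/100)**(1/alpha)
alpha = 3.0                                  # hypothetical tail index
pareto_ratio = (1000 / 100) ** (1 / alpha)

print(f"normal: {normal_ratio:.2f}x larger, power law: {pareto_ratio:.2f}x larger")
```

The normal model says the 1-in-1000 move is only about a third bigger than the 1-in-100 move; a power-law tail says it is more than twice as big. Provisioning by a fixed multiple of VAR implicitly picks the first answer.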
One safeguard against this is that, for more extreme moves, we don't in general calculate exposure from a VAR model at all, but rather stress test various combinations of extreme moves. Comparing these stress results against a multiple of VAR shows how much worse an extreme scenario can be than a normal model would suggest, and gives us an extreme bound on what may happen in such situations.
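Stress testing of this kind can be sketched very simply: apply a handful of hand-picked shock scenarios to the book and look at the resulting P&L. The positions, scenario names, and shock sizes below are all hypothetical:

```python
# Hypothetical book: notional exposure by asset class
positions = {"equities": 1_000_000, "bonds": 2_000_000}

# Hand-picked combinations of extreme moves (fractional shocks)
scenarios = {
    "1987-style crash": {"equities": -0.20, "bonds": 0.02},
    "rate shock":       {"equities": -0.05, "bonds": -0.10},
}

# P&L per scenario: sum of exposure times shock across the book
results = {
    name: sum(positions[a] * shocks.get(a, 0.0) for a in positions)
    for name, shocks in scenarios.items()
}

for name, pnl in results.items():
    print(f"{name}: P&L {pnl:+,.0f}")
```

Unlike VAR, no probability is attached to these numbers; they are simply bounds on what a chosen extreme combination would cost, which is exactly why they complement rather than replace the model.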
So when Taleb says that taking VAR or volatility (sigma) or some other normal-distribution-derived number and calling it your risk is wrong, I agree with him. It's plainly inadequate. However, when he calls it worthless or even dangerously misleading, I tend not to agree. Taleb seems to fall for one of his own fallacies here: absence of evidence is not evidence of absence. He explains at length in one of his narratives how a legislator who had introduced tighter security before 9/11 could have prevented the disaster, but since no-one would have known this, there would have been no reward or glory for the person. Possibly they would even have been reviled for unnecessary security restrictions, given we never saw what was averted. How does Taleb know that VAR hasn't stopped numerous dangerous build-ups of risk, and so averted disasters? Certainly we know that in the case of the NAB options trading debacle, if they had paid attention to their VAR models they would have seen the problem building up over time quite clearly.
So in one respect I agree with Taleb: anyone who thinks their VAR model encompasses all risk is extremely naive and foolish. However, I disagree that the use of such models is useless or dangerous. They must be used with their flaws in mind, and in my experience they mostly are.