Ignoring false negatives

14 November, 2007

I seem to be stuck on the exciting topic of VAR and backtesting models. Anyhow, a friend pointed out this comment in an article in The Economist.

UBS’s investment-banking division lost SFr4.2 billion ($3.6 billion) in the third quarter. The bank’s value-at-risk, the amount it stands to lose on a really bad day, has shot up … On 16 days during the quarter its trading losses exceeded the worst forecast by its value-at-risk model on the preceding day. It had not experienced a single such day since the market turbulence of 1998.

What is staggering is the last part: they had not had a single backtesting breach from 1998 to 2007. I wasn't convinced The Economist had got it right, so I checked the UBS third quarter report, which states it more explicitly.

… When backtesting revenues are negative and greater than the previous day’s VaR, a “backtesting exception” occurs.
In third quarter we suffered our first backtesting exceptions – 16 in total – since 1998. Given market conditions in the period, the occurrence of backtesting exceptions is not surprising. Moves in some key risk factors were very large, well beyond the maximum level expected statistically with 99% confidence.

Now it seems that they are trying to reassure people that the model is OK even though it breached 16 times in one quarter (i.e. around 25% of trading days), because they hadn't breached at all in the period 1998-2007. Anyone who actually thinks about it will realise that the probability of no breaches in 9 years (around 2,250 trading days) is very, very low and should be an indicator of something wrong with their model. The probability of running a correct 99% VAR model and getting no breaches over a 9 year stretch is of the order of 1 in 1,000,000,000. While I accept that markets cluster their extreme moves and this is difficult to account for, it's not as if there have been no extreme events in the period. September 11th 2001, to name one.
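To make that number concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a correctly calibrated 99% one-day VAR and, unrealistically, independent trading days; the 250-days-a-year figure is just the usual convention.

# Chance of seeing zero VaR exceptions over ~9 years, given a correct
# 99% one-day VaR and (unrealistically) independent trading days.
p_exception = 0.01      # a correct 99% VaR should be breached on ~1% of days
days = 9 * 250          # roughly 2,250 trading days, 1998-2007

p_no_breach = (1 - p_exception) ** days
print(f"P(no exceptions in {days} days) = {p_no_breach:.1e}")
# prints ~1.5e-10, i.e. of the order of one in a few billion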

They have perhaps fallen into the error of taking comfort from having a "conservative" model, on the grounds that they will always be allocating more capital rather than less. This is totally misleading. You construct a model to try to predict a certain level of confidence. If you can't do that even approximately then you really need to look at your model. If you want to be conservative, reserve a greater multiple of the VAR as capital. Using a model that doesn't actually reflect reality is never going to give you any sort of confidence in the model's integrity. A false negative result (too few breaches) is just as much an error in your model as a false positive one.
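One standard way of making that symmetric view of breaches precise is Kupiec's proportion-of-failures test, which rejects a VAR model for producing too few exceptions just as readily as for producing too many. The post itself doesn't mention it, so treat this as an illustrative sketch rather than how UBS (or anyone else) actually backtests:

from math import log

def kupiec_pof(exceptions, days, p=0.01):
    """Kupiec proportion-of-failures likelihood-ratio statistic.
    Under a correctly calibrated VaR model it is asymptotically
    chi-squared with 1 degree of freedom (95% critical value ~3.84)."""
    x, T = exceptions, days
    phat = x / T
    # log-likelihood of the observed breach count under the claimed coverage p
    ll_claimed = (T - x) * log(1 - p) + (x * log(p) if x > 0 else 0.0)
    # log-likelihood under the observed breach frequency phat
    ll_observed = ((T - x) * log(1 - phat) if x < T else 0.0) \
                  + (x * log(phat) if x > 0 else 0.0)
    return -2.0 * (ll_claimed - ll_observed)

print(kupiec_pof(exceptions=0, days=2250))   # ~45, far above 3.84: rejected
print(kupiec_pof(exceptions=16, days=63))    # 16 breaches in one quarter: also rejected

The point is simply that the test is two-sided in spirit: nine years with no breaches at all fails it about as spectacularly as sixteen breaches in a single quarter does.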


The invalidity of risk models (or do I trust my VAR calculations)

10 November, 2007

To continue some of the stuff raised in The Black Swan, I'll discuss Taleb's attack on financial models based on the normal distribution, the misuse of which he describes as the Great Intellectual Fraud.

I considered entitling this "Am I a fraud?", as the contention raised in the book is that those who use such models are either idiots unaware of the problem, or frauds who are aware of the problem but content to sell their services based upon an idea they know to be wrong. On the surface of it I fall into the second category. I came out of a physics PhD on power laws in physical systems, am aware of what they imply, and know that financial time series typically have power-law tails (Mandelbrot's work on this was cited in my thesis). Despite this I still work day in, day out with normal-based models.

My first line of defence would be that these are really the best we have. However, Taleb's contention is that financial models based on the normal distribution are so badly flawed that not only do they miss the really significant risks (the big ones) by assuming them away, they go further and actually create them. Once we have constructed a model to reflect the risk, people trade and protect themselves on the basis that the model's predictions actually reflect reality and that large, uncorrelated moves are exceptionally rare. This leads them into taking on additional risk, and leaves them worse exposed in the case of exceptionally large moves.
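As a rough illustration of the scale of the underestimation, here is a small sketch comparing the probability of a 5-standard-deviation daily move under a normal distribution with the same probability under a fat-tailed Student-t with 3 degrees of freedom, rescaled to the same variance. The choice of 3 degrees of freedom is purely illustrative, not fitted to any market data:

from scipy.stats import norm, t

move = 5.0                        # a "5 sigma" move, in units of daily standard deviations

p_normal = norm.sf(move)          # upper-tail probability under the normal
nu = 3                            # illustrative degrees of freedom for the fat-tailed case
scale = (nu / (nu - 2)) ** -0.5   # rescale so the t distribution also has unit variance
p_fat = t.sf(move / scale, df=nu)

print(f"normal    : {p_normal:.1e}")           # ~2.9e-07, roughly one day in 14,000 years
print(f"student-t : {p_fat:.1e}")              # ~1.6e-03, roughly one day every 2-3 years
print(f"ratio     : {p_fat / p_normal:.0f}")   # thousands of times more likely

Either distribution can be made to match the bulk of the data reasonably well; the difference only shows up in the tails, which is exactly where it matters.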


Amaranth: Was it a failure of risk management or a failure caused by risk management?

18 October, 2006

In September the US hedge fund Amaranth collapsed, losing $6.5 billion over a couple of weeks when its positions in natural gas futures turned bad. With prices falling, Amaranth was unable to offload its positions and the losses continued to mount.

While it didn't cause the sort of near disaster that the LTCM collapse triggered, it did wipe out almost the entire value of the fund, which has now been taken over. Here is how the CEO of Amaranth described it:

In September 2006, a series of unusual and unpredictable market events caused the Funds’ natural gas positions (including spreads) to incur dramatic losses while the markets provided no economically viable means of exiting those positions. Despite all of our efforts, we were unable to close out the exposures in the public markets.

Market conditions deteriorated rapidly during the week of September 11. Material losses began early in the week, and we accelerated our efforts to reduce our exposures. On Thursday, September 14, the Funds experienced roughly $560 million in trading losses on their natural gas positions. We continued to attempt to reduce our natural gas exposures, while also selling other positions to raise cash in order to meet margin calls. As news of our losses began to sweep through the markets, our already limited access to market liquidity quickly dissipated.

The fund lost an average of $420 million per day for the first 14 trading days of September, totaling a final loss of around $6 billion.


Taleb on Randomness

24 July, 2006

One of the things I originally intended to blog about, as you can see from my early posts, was some of the issues related to complex systems and the unpredictability associated with non-normal distributions. Possibly because my ideas were too badly formulated, I've made few comments on this; I wanted more time to consider rather than post uninformed crap.

Anyway, I have been reading some of the work of Nassim Nicholas Taleb, an ex-derivatives trader who runs a hedge fund, is a fan of Karl Popper, and some years ago published a book called Fooled By Randomness on the errors people make in the face of randomness, with another one apparently on the way. He is now an academic. I have the book on order and will let people know what I think in due course. Anyhow, it seems that a major theme of Taleb's work is the idea that the widespread use of the normal distribution in finance (and other areas, but he's a finance guy so this is his focus) leads people to persistently underestimate the possibility of rare events. This leads to his attacks on the whole idea of calculating risk measures such as VAR, not to mention option pricing.

His belief in this idea is strong enough that, according to this New Yorker article, he runs a hedge fund whose main strategy is to systematically buy options (never sell) on the basis that the market persistently undervalues the chance of big moves. Rather than trying to make money in "normal" market conditions and then occasionally taking a hit when Russia defaults on its debt or a major terrorist attack occurs etc., the strategy is such that you usually make a loss, but every so often you make a very large profit.

An illustration of the difference that the big moves make is this graph of the S&P with and without the 10 largest moves. If you fit a normal distribution to the time series you should never get these, and particularly not ten over this timescale. [ref]
[Graph: the S&P with and without its 10 largest daily moves]
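For anyone who wants to reproduce that kind of check, here is a rough sketch: fit a normal distribution to a daily return series and ask how many moves of a given size the fitted model expects. The data below are synthetic (random returns with roughly 1% daily volatility plus a single 1987-style -20% day), purely to show the mechanics, not the actual S&P series behind the graph:

import numpy as np
from scipy.stats import norm

def expected_extreme_count(returns, threshold):
    """Expected number of days with |return| >= threshold, if the series
    really were i.i.d. normal with the sample mean and standard deviation."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    p = norm.sf((threshold - mu) / sigma) + norm.cdf((-threshold - mu) / sigma)
    return p * len(returns)

rng = np.random.default_rng(0)
# ~40 years of synthetic daily returns, plus one crash-sized day tacked on
returns = np.append(rng.normal(0.0003, 0.01, 40 * 250), -0.20)

print(expected_extreme_count(returns, 0.20))
# effectively zero: under the fitted normal a 20% day "should never happen",
# yet the series contains one by construction.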