Ignoring false negatives

14 November, 2007

I seem to be stuck on the exciting topic of VAR and backtesting models. Anyhow, a friend pointed out this comment in an article in The Economist.

UBS’s investment-banking division lost SFr4.2 billion ($3.6 billion) in the third quarter. The bank’s value-at-risk, the amount it stands to lose on a really bad day, has shot up … On 16 days during the quarter its trading losses exceeded the worst forecast by its value-at-risk model on the preceding day. It had not experienced a single such day since the market turbulence of 1998.

What is staggering is the last part. They had not had a single backtesting breach from 1998 to 2007. I wasn't convinced the Economist got it right, so I checked the UBS third-quarter report, which states it more explicitly.

… When backtesting revenues are negative and greater than the previous day’s VaR, a “backtesting exception” occurs.
In third quarter we suffered our first backtesting exceptions – 16 in total – since 1998. Given market conditions in the period, the occurrence of backtesting exceptions is not surprising. Moves in some key risk factors were very large, well beyond the maximum level expected statistically with 99% confidence.

Now it seems that they are trying to reassure people that the model is OK even though it breached 16 times in one quarter (i.e. on roughly 25% of trading days), because they hadn't breached at all in the period 1998-2007. Anyone who actually thinks about it will realise that the probability of no breaches in 9 years (around 2,250 trading days) is very, very low, and should be an indicator that something is wrong with their model. The probability of running a correctly calibrated 99% model and getting no breaches over a 9-year stretch is less than 1 in a billion. While I accept that markets cluster their extreme moves and this is difficult to account for, it's not as if there have been no extreme events in the period. September 11, 2001, to name one.
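
As a quick sanity check, here is a back-of-the-envelope sketch in Python (assuming independent trading days and a correctly calibrated 99% one-day VAR, both of which are strong assumptions):

```python
# Probability of zero VAR breaches over nine years, assuming independent
# trading days and a correctly calibrated 99% one-day VAR.
p_no_breach_per_day = 0.99
trading_days = 9 * 250  # roughly 1998-2007

p_zero_breaches = p_no_breach_per_day ** trading_days
expected_breaches = trading_days * (1 - p_no_breach_per_day)

print(f"Expected breaches over the period: {expected_breaches:.1f}")  # ~22.5
print(f"P(no breaches at all): {p_zero_breaches:.1e}")                # ~1.5e-10
```

In other words, a well-calibrated model should have produced a couple of dozen exceptions over that period; zero exceptions is itself evidence the model is wrong.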

They have perhaps fallen into the error of taking comfort from having a "conservative" model, meaning they will always be allocating more capital rather than less. This is totally misleading. You construct a model to predict losses at a certain confidence level. If you can't do that even approximately then you really need to look at your model. If you want to be conservative, reserve a greater multiple of the VAR as capital. Using a model that doesn't actually reflect reality is never going to give you any confidence in the model's integrity. A false negative result (too few breaches) is just as much an error in your model as a false positive one (too many).


The invalidity of risk models (or do I trust my VAR calculations)

10 November, 2007

To continue some of the stuff raised in The Black Swan, I'll discuss Taleb's attack on financial models based on the normal distribution, the misuse of which he describes as the Great Intellectual Fraud.

I considered entitling this "Am I a fraud?", given the book's contention that those who use such models are either idiots unaware of the problem or frauds who are aware of the problem but content to sell their services based upon an idea they know to be wrong. On the surface of it I fall into the second category. I came out of a physics PhD on power laws in physical systems, am aware of what they imply, and know that financial time series typically have power-law tails (Mandelbrot's work on this was cited in my thesis). Despite this I still work day in, day out with normal-based models.

The first line of defence of myself would be that these are really the best we have. However Taleb's contention is that financial models based on the normal distribution are so badly flawed that not only do they miss the really significant risks (the big ones) by assuming them away, they go further and actually create them. Once we have constructed a model to reflect the risk, people trade and protect themselves on the basis that the model's predictions actually reflect reality and that large, uncorrelated moves are exceptionally rare. This leads them into taking on additional risk, and leaves them worse exposed when exceptionally large moves do occur.


The Black Swan

29 October, 2007

My original intention was to write a longish post discussing the main ideas. Instead I will give an overall impression here, and then do a few follow-ups on what I think are the interesting points.

All up I found The Black Swan a very interesting read. If you've read Fooled by Randomness then some of the arguments will already be familiar, but certainly not all. The Black Swan is concerned with wild randomness, the randomness that according to Taleb dominates modern society. Fooled by Randomness largely concerns the randomness of games, what Taleb would describe as mild randomness. So mild, in fact, that it is hardly random at all.

As we are reminded in several places in the book, Taleb made his fuck-off money (i.e. enough money to be comfortably independent even if not mega wealthy) in the '87 stockmarket crash by betting that the market under-appreciated Black Swans. He doesn't need to fawn to the establishment, be it economics, philosophy or publishing, and we get this tone all through the book.

While the book touches on finance applications it's hardly a finance book. Rather, Taleb considers it as much a work of epistemology. His chief concern is with what he calls epistemic arrogance. He hates those who profess to know more than they really do, be they economic modellers, historians or political scientists, but his chief enemy is the normal distribution, which he describes as the Great Intellectual Fraud. To use normal distributions in fields where the data are easily shown not to be normal (such as finance) is, he argues, both fraudulent and a cause of Black Swans.

It's strong stuff, which he backs with examples, logic and the findings of behavioural finance and similar studies. It's also fairly convincing for the most part. It's true that we don't need sophisticated statistical studies to see that market moves are not normally distributed. We only need the fact that 1-in-10,000-year events (as modelled by a normal distribution) occur every few years. Risk managers who run such models (such as me) are either ignorant fools, who wrongly believe in what they do, or deceptive frauds who know better but persist in fooling others for the money. I'd contend I'm neither, but I'll discuss that later on.
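
To put a number on that claim, here is a small sketch (assuming i.i.d. normal daily returns and roughly 252 trading days a year; the sigma levels are purely illustrative) of how often large moves "should" occur under a normal model:

```python
from scipy.stats import norm

# How rare "should" a k-standard-deviation daily move be if returns were normal?
TRADING_DAYS_PER_YEAR = 252

for k in (3, 5, 10, 20):
    tail_prob = 2 * norm.sf(k)                      # two-sided tail probability
    years = 1 / (tail_prob * TRADING_DAYS_PER_YEAR)  # implied return period
    print(f"{k} sigma: about one day in {years:.3g} years")
```

Under these assumptions a 5-sigma day is a several-thousand-year event, yet markets serve up moves of that size, and far larger, every few years.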

Anyway, I leave the rest to further discussion. I particularly want to mention the Narrative Fallacy (which I have actually mentioned before), the problem of prediction, and whether I really am a fraud, an idiot, or neither.


Are epidemiological studies almost worthless?

20 February, 2007

Browsing through Nassim Nicholas Taleb’s diary I noticed this quote:

At the AAAS conference in San Francisco I was a discutant of session in which John Ioannidis showed that 4 out of 5 epidemiological “statistically significant” studies fail to replicate in controlled experiments.

NNT crows that this is what he has already come to describe as the narrative fallacy. If you look hard enough at enough data, you will see a pattern emerge.

Anyway I have looked up John Ioannidis's research and found this interesting paper, Why Most Published Research Findings Are False, which unfortunately I haven't had the chance to read in full.

The outline of his idea is simple enough: if you look at enough data (particularly small data sets) you will find statistically significant relationships. The part I thought was interesting was this.

As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance.

Which is kind of obvious. If I correlate enough astrological data with some disease I will inevitably find some correlation, but because the prior probability of it being true is essentially zero there is still very little chance of the study being true.
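
Ioannidis's point can be sketched in a few lines of Python. The power and significance values below are illustrative assumptions rather than figures taken from the paper:

```python
def positive_predictive_value(prior, power=0.8, alpha=0.05):
    """P(a 'statistically significant' finding is actually true), following
    the structure of Ioannidis's argument. power and alpha are illustrative."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A reasonably plausible hypothesis vs an astrology-style fishing expedition
print(positive_predictive_value(prior=0.5))    # ~0.94
print(positive_predictive_value(prior=0.001))  # ~0.016
```

The same significance threshold gives a very different chance of being right depending on how plausible the hypothesis was before the data were collected.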


Card Counting

19 December, 2006

This will probably be a real spam magnet but anyhow….

Some time ago, while I was still at uni, I tried card counting at blackjack with a few friends. Like all casino games, blackjack is in the house's favour. However, over the course of a deck (or several decks) this is not always the case for every hand. By keeping track of the cards that have come out of the deck, you can determine when the game is in your favour and increase your bet accordingly. Contrary to popular opinion you don't need to track the actual cards that have been removed, but rather keep a running tally of their effect on the game.

Certain cards are good for the player. A deck rich in aces gives more chance of blackjack, with its higher payout, and a deck rich in tens gives the dealer, who must keep hitting until he reaches 17 or more, a much higher chance of going bust. Conversely, a deck rich in 4s, 5s and 6s is bad for the player: the dealer will more rarely go bust, as these cards save him when he is sitting on totals of 14-16. The basics of such a system of counting and betting were first shown by Edward O. Thorp and rely on both inferring a proportional measure of the expectation for a single bet and the use of the Kelly criterion, in essence betting amounts proportional to your edge.
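
As a rough illustration, here is a minimal sketch of what such a count-and-bet scheme can look like. Hi-Lo is one common tag scheme, and the edge-per-true-count figure and half-Kelly fraction are illustrative assumptions, not the exact numbers we used:

```python
# A minimal sketch of a Hi-Lo style running count with rough fractional-Kelly
# bet sizing. Tags, edge-per-true-count and the Kelly fraction are illustrative.
HI_LO_TAGS = {2: 1, 3: 1, 4: 1, 5: 1, 6: 1,
              7: 0, 8: 0, 9: 0,
              10: -1, 11: -1}  # 11 stands in for the ace; 10 covers all tens

def update_count(running_count, card_value):
    """Add the tag of the card just seen to the running count."""
    return running_count + HI_LO_TAGS[card_value]

def suggested_bet(bankroll, running_count, decks_remaining,
                  edge_per_true_count=0.005, base_house_edge=0.005,
                  kelly_fraction=0.5):
    """Bet roughly in proportion to the estimated edge (fractional Kelly)."""
    true_count = running_count / max(decks_remaining, 0.5)
    edge = true_count * edge_per_true_count - base_house_edge
    if edge <= 0:
        return 0.0  # no edge: sit out or bet the table minimum
    return bankroll * kelly_fraction * edge

count = 0
for card in [5, 4, 2, 3, 6]:  # low cards coming out push the count up
    count = update_count(count, card)

print(count)                                            # running count of +5
print(suggested_bet(1000.0, count, decks_remaining=1))  # about $10 on a $1,000 roll
```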

With a single deck this is particularly effective as you get towards the end of the cards, but it's also effective with multiple decks, particularly if the cut card, which determines the point when the decks are reshuffled, is placed near the end. Casinos of course take a dim view of the idea that someone other than the house could be playing a positive-expectation game, and so try to stop it, typically by banning the players in question. Counting is not illegal, but the casino is free to exclude anyone they don't like.

The reality of the situation, though, is that unless you have a large bankroll you are better off working at McDonald's, and if you are playing alone it's pretty easy to detect someone with wild swings in their betting patterns, particularly if they are winning money. Still, on many trips to the casino we were able to sit off a table, not playing, until the deck was "hot" and then jump in and lay some bets. This makes you pretty obvious, but if you are serious low rollers like we were as uni students, I doubt they were particularly worried.

Successful counters these days work in teams. A reasonably interesting book on the subject is Bringing Down the House, although it concentrates too much on the glamour of the high-roller lifestyle and not enough on the actual scheme.

As for our little project, it slowly dwindled into nothing more than an occasional drunken trip to the casino where we would attempt to count through the haze. I note, though, that the Star City Casino has now put in continuous shuffle machines, so the whole hope of counting is gone. I wonder whether it is actually a positive revenue deal for them. After all, people like me will only play if they know they might just be able to get an edge, even though in practice they rarely will. For the casino, I guess, giving this money away is worth it if it means avoiding serious counters.


Safety, risk and nuclear power

30 November, 2006

It has been pointed out by many people, including on this blog by Sacha (quoting James Lovelock), that nuclear is much safer than pretty much any other source of electricity. The relevant comparison was that per terawatt-year of electricity generated there are 342 deaths for coal, 885 for hydro, 85 for natural gas and only 8 for nuclear. This comparison appears to use a low figure for the Chernobyl accident, as the WHO attributes around double that number of deaths directly to Chernobyl, but even so that brings the total to 16 per TWy, still well below the next safest method, natural gas. Hydro rates poorly due to some severe dam-burst accidents in India that have each killed in the thousands.

Death of course is not the only risk associated with nuclear. Thousands more have developed thyroid cancer directly as a result of Chernobyl; although most have been treated and over 99% have recovered, it is still a cost to bear. Then there is the contamination of land and the wholesale abandonment of the surrounding area. This is also not to mention that we don't really have much historical data on which to base our estimates of how bad or how frequent a meltdown can be. Still, if we look at the total historical human cost and average it over all the power that nuclear has successfully generated, the human cost still comes out as low, certainly lower than coal power.

If that is the case then isn't it rational that we should adopt nuclear on the basis of safety? What that kind of comparison misses is that people treat riskier situations differently from less risky ones with similar expected outcomes. Thus, although most nuclear power plants will sit there quite happily not hurting anyone, the rare one that does fail is potentially extremely hazardous. It is reasonable to treat this volatile outcome as more serious than an equivalent toll spread evenly over time. We do, after all, routinely pay money to insurance companies when we would be better off, on an expected-outcome basis, saving the money ourselves.

If we believe that nuclear is not just a bit safer (in terms of deaths) than other forms of electricity but significantly safer, then surely this is enough to outweigh our risk aversion? I would say yes, but I could quite easily understand others coming to the opposite conclusion, even if they were fairly well informed of the facts and the true risks.

That said, I believe it is clear that many people overestimate the risk of nuclear compared with other risks that they don't even consider, or take for granted. On the other hand it's also seductively easy to look at nuclear power's track record in the West and do the reverse. It's easy to believe there are no black swans if you've never seen one. We know catastrophic accidents are rare, but have we been lucky or unlucky to have seen as few as we have?


Macro economic prediction and errors

31 October, 2006

I have been looking at this research report from ABARE on their forecasts for different scenarios of global warming effects and the cost to Australia in terms of GDP. As someone trained in science, one thing I find striking is that whereas scientific estimates include some measure of uncertainty (e.g. the IPCC quoting a range of 2°C to 4.5°C for projected warming), the forward estimates of GDP do not.

It strikes me as bizarre that no attempt is made to quantify the error band on the GDP estimates, particularly over such a long time frame. Even one year forward we could expect a standard deviation of perhaps ±0.5% on the GDP estimates, and that is probably generous. Over the 50-year time frame this error (assuming independent random errors, which grow with the square root of the horizon) would scale to around 3.5%. Given that the differences in outcome between scenarios for all but the most punitive carbon tax are mostly smaller than this, it would tend to suggest that the model isn't really accurate enough to distinguish between the different scenarios, and instead they are quoting false precision.
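
For what it's worth, the square-root-of-time scaling is a one-liner (using my assumed ±0.5% one-year error, which is itself a guess):

```python
import math

# Scale an assumed one-year GDP forecast error out to a 50-year horizon,
# treating each year's error as independent, as in the argument above.
one_year_sd_pct = 0.5
horizon_years = 50

cumulative_sd_pct = one_year_sd_pct * math.sqrt(horizon_years)
print(f"~±{cumulative_sd_pct:.1f}% of GDP after {horizon_years} years")  # ~±3.5%
```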

Perhaps I have missed something and there is some footnote or standard assumption explaining this, but I can't see it; if so, can someone please point out the relevant information. More likely, I think, they don't want to admit the inability of their models to clearly distinguish between different courses of action.


Amaranth: Was it failure of risk management or a failure caused by risk management?

18 October, 2006

In September the US hedge fund Amaranth collapsed, losing $6.5 billion over a couple of weeks when its positions in natural gas futures turned bad. With prices falling, Amaranth was unable to offload its positions and the losses continued to mount.

While it didn't cause the sort of near disaster that the LTCM collapse triggered, it did wipe out almost the entire value of the fund, which has now been taken over. Here is how the CEO of Amaranth described it:

In September 2006, a series of unusual and unpredictable market events caused the Funds’ natural gas positions (including spreads) to incur dramatic losses while the markets provided no economically viable means of exiting those positions. Despite all of our efforts, we were unable to close out the exposures in the public markets.

Market conditions deteriorated rapidly during the week of September 11. Material losses began early in the week, and we accelerated our efforts to reduce our exposures. On Thursday, September 14, the Funds experienced roughly $560 million in trading losses on their natural gas positions. We continued to attempt to reduce our natural gas exposures, while also selling other positions to raise cash in order to meet margin calls. As news of our losses began to sweep through the markets, our already limited access to market liquidity quickly dissipated.

The fund lost an average of $420 million per day for the first 14 trading days of September, totaling a final loss of around $6 billion.


Taleb on Randomness

24 July, 2006

One of the things I originally intended to blog on, as you can see from my early posts, was some of the issues related to complex systems and the unpredictability associated with non-normal distributions. Possibly because my ideas were too badly formulated, I've made few comments on this; I wanted more time to consider rather than post uninformed crap.

Anyway, I have been reading some of the work of Nassim Nicholas Taleb, an ex-derivatives trader who runs a hedge fund, is a fan of Karl Popper, and some years ago published a book on the errors people make in the face of randomness called Fooled By Randomness, with another one apparently on the way. He is now an academic. I have the book on order and will let people know what I think in due course. Anyhow, it seems that a major theme of Taleb's work is the idea that the widespread use of the normal distribution in finance (and other areas, but he's a finance guy so this is his focus) leads people to persistently underestimate the possibility of rare events. This leads to his attacks on the whole idea of calculating risk measures such as VAR, not to mention option pricing.

His belief in this idea is strong enough that, according to this New Yorker article, he runs a hedge fund whose main strategy is to systematically buy options (never sell) on the basis that the market persistently undervalues the chance of big moves. Rather than trying to make money in "normal" market conditions and then occasionally taking a hit when Russia defaults on its debt or a major terrorist attack occurs, the strategy is such that you usually make a loss, but every so often you make a very large profit.

An illustration of the difference that the big moves make is this graph of the S&P with and without the 10 largest moves. If you fit a normal distribution to the time series you should essentially never see moves this large, and certainly not ten of them over this timescale. [ref]
[Figure: the S&P 500 index with and without its 10 largest daily moves]
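
For anyone curious about the calculation behind that sort of chart, here is a sketch. The return series below is a random stand-in rather than real S&P data, and dropping the 10 largest absolute moves is my assumption about how the original chart was constructed:

```python
import numpy as np

def cumulative_index(returns):
    """Compound daily returns into an index starting at 1."""
    return np.cumprod(1.0 + np.asarray(returns))

def drop_largest_moves(returns, n=10):
    """Remove the n largest moves by absolute size."""
    returns = np.asarray(returns)
    largest = np.argsort(np.abs(returns))[-n:]
    return np.delete(returns, largest)

# Stand-in data: a normally distributed return series of a plausible length.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0003, 0.01, 5000)

print(cumulative_index(daily_returns)[-1])                      # with all moves
print(cumulative_index(drop_largest_moves(daily_returns))[-1])  # largest moves removed
```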