Ignoring false negatives

14 November, 2007

I seem to be stuck on the exciting topic of VaR and backtesting models. Anyhow, a friend pointed out this comment in an article in The Economist.

UBS’s investment-banking division lost SFr4.2 billion ($3.6 billion) in the third quarter. The bank’s value-at-risk, the amount it stands to lose on a really bad day, has shot up … On 16 days during the quarter its trading losses exceeded the worst forecast by its value-at-risk model on the preceding day. It had not experienced a single such day since the market turbulence of 1998.

What is staggering is the last part: they had not had a single backtesting breach from 1998 to 2007. I wasn’t convinced the Economist had got it right, so I checked the UBS third-quarter report, which states it more explicitly.

… When backtesting revenues are negative and greater than the previous day’s VaR, a “backtesting exception” occurs.
In third quarter we suffered our first backtesting exceptions – 16 in total – since 1998. Given market conditions in the period, the occurrence of backtesting exceptions is not surprising. Moves in some key risk factors were very large, well beyond the maximum level expected statistically with 99% confidence.
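
In code the definition is simple enough – a day counts as an exception when the loss exceeds the previous day’s VaR. A minimal sketch (the revenue and VaR figures below are invented for illustration, not UBS numbers):

    # Count backtesting exceptions: days where the trading loss exceeds
    # the previous day's VaR. All figures are made up for illustration.
    daily_revenue = [12.0, -35.0, 4.0, -60.0, -8.0]  # daily P&L
    daily_var = [30.0, 30.0, 32.0, 40.0, 45.0]       # 99% one-day VaR (positive)

    exceptions = sum(
        1
        for i in range(1, len(daily_revenue))
        if daily_revenue[i] < 0 and -daily_revenue[i] > daily_var[i - 1]
    )
    print(exceptions)  # 2: losses of 35 and 60 against prior-day VaRs of 30 and 32

With a correct 99% model you would expect an exception on about one trading day in a hundred.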

Now it seems that they are trying to reassure people that the model is ok even though it breached 16 times in a single quarter – about 63 trading days, i.e. roughly 25% of the time – because they hadn’t breached at all in the period 1998-2007. Anyone who actually thinks about it will realise that the probability of no breaches in 9 years – around 2250 trading days – is very, very low and should be an indicator of something wrong with their model. The probability of running a correct 99% model and getting no breaches in a 9-year stretch is 0.99^2250, about 1.5e-10 – around 1 in 7 billion. While I accept that markets cluster their extreme moves and this is difficult to account for, it’s not like there have been no extreme events in the period – September 11th 2001, to name one.
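
The arithmetic is easy to check. Assuming – as the model itself effectively does – a 1% chance of a breach on any given day, independently:

    # Probability of zero breaches over ~9 years of trading if a 99% VaR
    # model is correct and breaches are independent from day to day.
    p_breach = 0.01   # a correct 99% VaR model breaches on 1% of days
    days = 2250       # roughly 9 years of trading days

    print((1 - p_breach) ** days)  # ~1.5e-10, about 1 in 7 billion
    print(p_breach * days)         # ~22.5 breaches expected over the period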

They have perhaps fallen into the error of taking comfort from having a “conservative” model, meaning they will always be allocating more capital rather than less. This is totally misleading. You construct a model to predict losses at a certain confidence level; if you can’t do that even approximately then you really need to look at your model. If you want to be conservative, reserve a greater multiple of the VaR as capital. Using a model that doesn’t actually reflect reality is never going to give you any sort of confidence in the model’s integrity. A false negative result – too few breaches – is just as much an error in your model as a false positive one.
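
To put that symmetrically: under a correct 99% model the number of breaches in n days is binomially distributed, and both tails of that distribution are evidence against the model. A quick sketch (assuming scipy is available; numbers as before):

    # Two-sided backtest check: breach counts under a correct 99% VaR
    # model follow Binomial(n, 0.01); both tails indict the model.
    from scipy.stats import binom

    # False negative tail: zero breaches over ~9 years (2250 days).
    print(binom.pmf(0, 2250, 0.01))   # ~1.5e-10

    # False positive tail: 16 or more breaches in a ~63-day quarter.
    print(binom.sf(15, 63, 0.01))     # ~2e-17, even more extreme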


The invalidity of risk models (or do I trust my VaR calculations?)

10 November, 2007

To continue some of the stuff raised in The Black Swan, I’ll discuss Taleb’s attack on financial models based on the normal distribution, the misuse of which he describes as the Great Intellectual Fraud.

I considered entitling this “Am I a fraud?”, as the book contends that those who use such models are either idiots, unaware of the problem, or frauds, aware of the problem but content to sell their services based on an idea they know to be wrong. On the surface of it I fall into the second category: I came out of a physics PhD on power laws in physical systems, am aware of what they imply, and know that financial time series typically have power-law tails (Mandelbrot’s work on this was cited in my thesis). Despite this I still work day in, day out with normal-based models.

The first line of defence of myself would be that these are really the best we have. However Taleb’s contention is that financial models based on the normal distribution are so badly flawed that not only do they miss the really significant risks (the big ones) by assuming them away, they go further and actually create them. Once we have constructed a model to reflect the risk, people trade and protect themselves on the basis that the model’s predictions actually reflect reality and that large, uncorrelated moves are exceptionally rare. This leads them into taking on additional risk, and leaves them worse exposed in the case of exceptionally large moves.
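
To see how much is being assumed away, compare tail probabilities under the normal with a fat-tailed alternative. This is a minimal sketch: the Student-t with 3 degrees of freedom is only a stand-in for a power-law tail, not anything calibrated to market data, and it assumes scipy is available.

    # Probability of a move beyond k standard deviations under a normal
    # versus a Student-t(3), a crude stand-in for a power-law tail.
    from scipy.stats import norm, t

    df = 3
    scale = (df / (df - 2)) ** 0.5   # t(3) has std sqrt(3); match sigmas

    for k in (3, 5, 10):
        p_normal = 2 * norm.sf(k)            # two-sided normal tail
        p_fat = 2 * t.sf(k * scale, df)      # two-sided t(3) tail
        print(f"{k} sigma: normal {p_normal:.1e}, t(3) {p_fat:.1e}")

The normal puts a 10-sigma move at around 1 in 10^23 – effectively never – while the fat-tailed stand-in leaves it merely rare.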


The Narrative Fallacy in Weather

20 March, 2007

Looking back at my most recent posts on rainfall, I note that they were motivated as a reaction to what Nassim Taleb describes as the Narrative Fallacy: the desire people have to construct a post-hoc story so that all events are related by strict cause and effect. Journalists (amongst others) love to try and join the dots, and so are typical propagators of this nonsense. Recently they have been keen to link everything weather related, like the drought, to global warming.

Should I worry about this though? Ultimately I think we should be acting to mitigate AGW, so is it best for us not to be criticising our “side”? I think that this kind of partisanship is amongst the worst things that can be done. Claims that are unsupported by evidence will ultimately undermine the very case they are trying to support. They are in effect crying wolf.

The stupidity of course isn’t limited to AGW alarmists, with many if not more of the skeptics running the “it’s a cold day, it can’t be warming” line. Some of these are taking the piss, but not all of them. Some seem to believe it’s a refutation.

Ultimately it would do well if people were to remember that a particular instance is usually not a good representation of the average. It’s ridiculous to ascribe meaning to every single occurrence, and even a couple of years don’t really do much to hundred-year trends. Climate is long-term average weather. Year to year it varies, and we are looking at long-term trends, not day-to-day heatwaves or even year-to-year droughts.


More on rainfall

15 March, 2007

For those unconvinced by my previous claim that, according to the BOM data, there doesn’t appear to be a drying trend on the east coast, and who think it might be an artefact of too-broad averaging, have a look at the trend maps found here. For example, the rainfall trends from 1900 to 2006.
[Figure: Australian rainfall trend, 1900-2006]

The map shows few regions with a definite drying trend over the period. One clear exception is the south-west of WA, which shows a drying trend over pretty much any time period you pick. In addition, a slight trend may be evident in mid to northern Queensland.

Go and have a look at some of the different time periods on their site; the lack of a clear trend for much of eastern Australia isn’t confined to the period 1900-2006. In case anyone’s concerned that it’s because we started the comparison in a drought, take 1910-2006, 1920-2006 or 1930-2006 and you get something similar.

Indeed, it’s not until you compare with the 1950s to 1970s that you get a really marked decrease in rainfall in most of eastern Australia.


Drying up or typical climatic variation?

10 March, 2007

I noticed this article, “Big dry: no one knows why”, in the SMH this week, and in particular the claim that eastern Australia’s rainfall had dried up since the 1950s. The report seems to be related to this press release announcing a new centre at UNSW studying climate change.

A priority for the new centre is to better understand the mystery of why Australia’s most populated region, the continent’s east coast, has suffered such major declines in rainfall in recent decades.

“We recently had a round-table of Australia’s leading climate-change researchers and this emerged as the biggest unknown issue and, of course, it seriously affects the largest concentration of people stretching right down the coast from Cairns to Melbourne,” Professor England said.

So I had another look at the rainfall graphs and time series provided by the Bureau of Meteorology for eastern Australia, and there is no doubt that we have been receiving less rainfall in recent decades than in the 1950s. However, it also seems that we are receiving more rainfall than in the first half of the 20th century, so the question seems to me to be as much why rainfall rose in the ’50s as why it has fallen in the ’80s and ’90s.

[Figure: Eastern Australia annual rainfall time series]


Institutions and Empire

3 November, 2006

As a maths nerd with an interest in history, I find attempts to actually quantify various historical effects extremely interesting, even if they are ultimately often flawed. This article in The Economist shows how economists, or more accurately statisticians, are trying to sort through various historical outcomes to measure the value of different institutions as an explanation for wealth and poverty.

Of the many proposed solutions to that riddle (technology, geography, the Protestant ethic) the current favourite is rather bland in the abstract: “institutions”. In rich economies institutions—meaning the formal laws and unwritten rules that govern society—function rather well on the whole. In poor ones they don’t. That much is indisputable.

What is tricky is showing that good institutions are a cause of economic progress rather than a by-product of it. You cannot run controlled experiments in which a particular institution is randomly imposed on some countries, but not on others, in order to compare how they fare. Or at least economists can’t. But perhaps imperialists can. Maybe the colonial adventures of the past provide the natural experiments economists need to put their theories to the test.

What is ingenious about the recent economic studies of empire is how they overcome this problem. Imperial institutions may determine prosperity, but the reverse may also be true. The trick is to find some third factor that is securely linked to institutions, but entirely unconnected to economic success. Such factors are called “instrumental variables”, because the economist is interested in them not for themselves, but for what they tell him about something else.

That name, however, now seems quite ironic. Because all of the fun in the recent spate of papers is in the instruments themselves. Economists are outdoing each other with ever more curious instruments, ranging from lethal mosquitoes to heirless maharajahs, or, most recently, wind speeds and sea currents.
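
The mechanics are easy to demonstrate on simulated data. A minimal sketch (invented coefficients and an invented instrument, nothing to do with the actual papers): an instrument z shifts institutions x but affects income y only through x, so the ratio cov(z, y)/cov(z, x) recovers the causal effect even when an ordinary regression is confounded.

    # Instrumental variables on simulated data: z moves x but touches y
    # only through x, so cov(z, y) / cov(z, x) isolates the effect of x.
    # All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    z = rng.normal(size=n)                        # instrument (think settler mortality)
    u = rng.normal(size=n)                        # confounder hitting both x and y
    x = 0.8 * z + u + rng.normal(size=n)          # "institutions"
    y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # "income"; true effect of x is 2.0

    beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # confounded: ~3.1
    beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # ~2.0, close to truth
    print(beta_ols, beta_iv)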

It has often been said that you were better off being colonised by the British than, say, the Spanish or Portuguese, and in general the studies bear this out, although they also point out that colonisation wasn’t necessarily a positive.


More on the Guns Study

26 October, 2006

Andrew Leigh’s thread continues to generate interesting comments on the recent study on gun deaths, including this one by Christine, where she analyses the data herself. In particular she examines the log return (i.e. percentage change) rather than the absolute change the study focused on, finds that there is close to a break in the trend in 2004 if you consider the log return, and comments that the finding appears to be closer to “there is no definite effect” than “there was no effect”.
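
For anyone wanting to see the distinction, the two measures are computed like this (a minimal sketch with invented yearly counts, not the actual firearm-death series):

    # Absolute change versus log return (proportional change) of a
    # yearly series. Counts are invented, not the actual firearm data.
    import math

    deaths = [600, 540, 500, 430, 400]  # hypothetical yearly counts

    abs_change = [b - a for a, b in zip(deaths, deaths[1:])]
    log_return = [math.log(b / a) for a, b in zip(deaths, deaths[1:])]

    print(abs_change)                          # [-60, -40, -70, -30]
    print([round(r, 3) for r in log_return])   # falls as a fraction of each year's level

A trend break that is visible on one scale can be invisible on the other, which is why the choice matters.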

Don Weatherburn has an op-ed in the SMH where he discusses the study and the effect of the laws, saying:

It doesn’t follow, however, that restrictions on firearm ownership should be dropped. The gun buyback may not have had any effect on the rate of firearm homicide but any policy that permitted large increases in the supply of guns in the community could still produce untoward effects.

The Port Arthur massacre graphically demonstrated the catastrophic effects that result when the wrong people get hold of guns. This is dangerous ground and we need to tread carefully.

Andrew also has another post examining the study’s interpretation of the analysis of suicide by firearm.