Poincaré to Walras

2 August, 2006

This quote might be very well known, but I hadn't heard it until skim-reading a book at lunchtime. Essentially, Poincaré had identified the main critique of the assumptions of neoclassical equilibrium economics back in October 1901, in a letter to Walras.

“at the beginning of every mathematical speculation there are hypotheses and that, for this speculation to be fruitful, it is necessary (as in applications to physics for that matter) to account for these hypotheses. If one forgets this condition, then one goes beyond the correct limits”.

“You regard men as infinitely selfish and infinitely farsighted. The first hypothesis may perhaps be admitted in a first approximation, the second may call for some reservation.”

The critique is valid, but the flawed assumptions haven't prevented the derivation of some useful theories. Ultimately a theory has to be both usefully predictive and simple enough to be tractable. Some, such as sections of the “post-autistic economics” movement, seem to have become a little carried away with the critique and think the whole edifice should be thrown out. Making such calls without having a better theory to put in its place is typical crank behaviour, and an analogous style can be seen in the anti-relativity and anti-quantum-mechanics nuts.

I have great hopes for new modelling and understanding of economics from areas like complexity theory, but the people who call for a clean slate are ridiculous. The critique is not new, as the quote shows, but fortunately for everyone Walras chose to ignore it. Economics reached where it is by following a path that, despite its flaws, led to many fruitful results. Perhaps it's a path that doesn't lead to the top of the mountain, but it's one that made the first stage much easier and has given us a better view of where we might go now.


Evolutionary Economics

25 July, 2006

Interesting article in The Economist (viewable for free) on evolutionary economics. The article mentions that Paul Krugman has been critical of the area, but from my reading he's not entirely opposed, especially given he's written a book, The Self-organizing Economy. His scepticism, and how he sees the field as being useful, is outlined in this talk he gave on What Economists can learn from Evolutionary Theorists, which seems pretty well balanced even though I think he's a little harsh on Stephen Jay Gould.

Anyhow, I think there is a lot of value in the type of modelling described below, even if it's not of the explicitly predictive kind. Valuable qualitative knowledge and statistical relationships can certainly be found by such modelling, not to mention insight into how relatively simple relationships between groups of individuals can lead to incredibly complex behaviour and structures.

…more unsettling than the ideas are the techniques and tools Mr Beinhocker advocates. He argues that economists should abandon blackboard deduction in favour of computer simulation. The economists he likes do not “solve” models of the economy—deducing the prices and quantities that will prevail in equilibrium—rather they grow them “in silico”, as he puts it.

An early example is the sugarscape simulation done in 1995 by Joshua Epstein and Robert Axtell, of the Brookings Institution. On a computer-generated landscape, studded with “sugar” mountains, they scattered a variety of simple, sugar-eating creatures, which compete for this precious commodity. Some creatures move faster than others, some see farther, and some burn sugar at a higher metabolic rate than their rivals.

Surprisingly, the results of their myopic lives can be gripping. Even simple rules of behaviour result in collective patterns that are impossible to foresee yet easy to recognise. The sugarscape, for example, is quickly beset by a division between haves and have-nots, which bears a strong statistical resemblance to the distribution of income in real economies. These macro-results cannot be deduced from the micro-rules simulators write. Rather, they emerge from the interactions of the creatures in the model, just as “wetness” emerges from the interaction of water molecules, rather than being a property of the molecule itself.
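To give a flavour of what growing a model “in silico” looks like, here is a minimal sugarscape-style sketch in Python. To be clear, this is not Epstein and Axtell's actual model: the grid size, regrowth rule, parameter ranges and so on are all made up for illustration.

```python
import random

# Toy sugarscape-style simulation: agents with random vision and
# metabolism harvest sugar from a wrapping (toroidal) grid.
SIZE, N_AGENTS, STEPS = 20, 50, 100
random.seed(0)

# Sugar landscape: a single "sugar mountain" peaking at the centre.
def capacity(x, y):
    cx = cy = SIZE // 2
    return max(0, 4 - (abs(x - cx) + abs(y - cy)) // 4)

sugar = {(x, y): capacity(x, y) for x in range(SIZE) for y in range(SIZE)}

# Each agent has a position, vision (how far it sees), metabolism
# (sugar burned per step) and accumulated wealth.
agents, occupied = [], set()
while len(agents) < N_AGENTS:
    pos = (random.randrange(SIZE), random.randrange(SIZE))
    if pos not in occupied:
        occupied.add(pos)
        agents.append({"pos": pos, "vision": random.randint(1, 4),
                       "metabolism": random.randint(1, 3), "wealth": 5})

for _ in range(STEPS):
    random.shuffle(agents)
    for a in agents[:]:
        x, y = a["pos"]
        # Scan the four lattice directions up to `vision` cells away
        # and move to the unoccupied cell holding the most sugar.
        best, best_sugar = (x, y), sugar[(x, y)]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            for d in range(1, a["vision"] + 1):
                cell = ((x + dx * d) % SIZE, (y + dy * d) % SIZE)
                if cell not in occupied and sugar[cell] > best_sugar:
                    best, best_sugar = cell, sugar[cell]
        occupied.discard(a["pos"])
        a["pos"] = best
        occupied.add(best)
        # Harvest, then pay the metabolic cost; broke agents die.
        a["wealth"] += sugar[best] - a["metabolism"]
        sugar[best] = 0
        if a["wealth"] < 0:
            occupied.discard(best)
            agents.remove(a)
    # Sugar grows back one unit per step, up to each cell's capacity.
    for cell in sugar:
        sugar[cell] = min(capacity(*cell), sugar[cell] + 1)

wealths = sorted(a["wealth"] for a in agents)
print(len(wealths), "survivors; wealth distribution:", wealths)
```

Even in a toy like this, the interesting thing is the spread of wealth among survivors rather than any individual agent's fate; the original model's point was that strongly skewed distributions emerge from exactly this kind of simple local rule.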


Taleb on Randomness

24 July, 2006

One of the things I originally intended to blog on, as you can see from my early posts, was some of the issues related to complex systems and the unpredictability associated with non-normal distributions. Possibly because my ideas were too badly formulated, I've made few comments on this; I wanted more time to consider rather than post uninformed crap.

Anyway, I have been reading some of the work of Nassim Nicholas Taleb, an ex-derivatives trader who runs a hedge fund, is a fan of Karl Popper, and some years ago published a book on the errors people make in the face of randomness called Fooled by Randomness, with another one apparently on the way. He is now an academic. I have the book on order and will let people know what I think in due course. Anyhow, it seems that a major theme of Taleb's work is the idea that the widespread use of the normal distribution in finance (and other areas, but he's a finance guy so this is his focus) leads people to persistently underestimate the possibility of rare events. This leads to his attacks on the whole idea of calculating risk measures such as VaR, not to mention option pricing.

His belief in this idea is strong enough that, according to this New Yorker article, he runs a hedge fund whose main strategy is to systematically buy options (never sell) on the basis that the market persistently undervalues the chance of big moves. Rather than make money in “normal” market conditions and then occasionally take a hit when Russia defaults on its debt or a major terrorist attack occurs, the strategy is such that you usually make a loss, but every so often you make a very large profit.

An illustration of the difference that the big moves make is this graph of the S&P with and without the 10 largest moves. If you fit a normal distribution to the time series you should never get moves like these, and particularly not ten over this timescale. [ref]
[Figure: S&P 500 with and without the 10 largest daily moves]
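To make the “should never get these” claim concrete, here is a rough sketch of the check: fit a normal distribution to a return series and ask how probable the series' own largest move is under that fit. The returns below are simulated from a fat-tailed Student-t as a stand-in, since I'm not loading the real S&P data here, so the numbers are purely illustrative.

```python
import math
import random

# Back-of-envelope check: fit a normal distribution to a daily return
# series, then ask how likely that series' own largest move would be
# under the fitted normal. The returns are simulated from a fat-tailed
# Student-t (3 degrees of freedom) as a stand-in for real S&P data.
random.seed(1)
returns = [random.gauss(0, 0.01) / math.sqrt(random.gammavariate(1.5, 2 / 3))
           for _ in range(10_000)]  # roughly 40 years of trading days

mu = sum(returns) / len(returns)
sigma = math.sqrt(sum((r - mu) ** 2 for r in returns) / len(returns))

z = max(abs(r - mu) for r in returns) / sigma  # largest move, in sigmas

# Two-sided normal tail probability of a move at least that large, and
# how many such moves the normal fit says the sample should contain.
p = math.erfc(z / math.sqrt(2))
print(f"largest move is {z:.1f} sigma under the fit")
print(f"normal tail probability {p:.2g}; expected count in sample {p * len(returns):.2g}")
```

With genuinely normal data the largest of 10,000 draws typically sits near 4 sigma; fat-tailed data fitted with a normal throws up far larger outliers, which is exactly the point.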


Just Random?

7 July, 2006

Nine people have got breast cancer in an ABC newsroom in Toowong over the last 12 years. This is similar to the case of a spate of seven brain tumours in seven years at RMIT, where the installation of a mobile phone tower was being investigated as a possible cause.

Obviously both these cases look like there is a common link, but is there really? Are the chances of this occurring so low that we can rule out a coincidence?

Taking the breast cancer incidence, we have the fact that just over 1 in 1,000 women will get breast cancer each year, and there are 60 women working at the studio in question. Over a period of 12 years that gives 60 × 12 = 720 woman-years, so we would expect around one case. Using my rather simple binary model for the 720 woman-years gives something close to a 1 in 1 million chance of this cluster occurring. Long odds, you would think…
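For anyone wanting to check the arithmetic, here is one way to set up the binomial (“binary”) calculation; depending on exactly how the model is specified the number shifts around, but the odds come out very long either way.

```python
from math import comb

# One way to do the binomial ("binary") calculation: treat each of the
# 720 woman-years as an independent trial with a 1-in-1,000 chance of
# a diagnosis, and compute the probability of nine or more cases.
n, p, k = 720, 1 / 1000, 9

tail = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
print(f"P(at least {k} cases in {n} woman-years) = {tail:.2g}")
```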


Gravity at the centre of the earth

6 July, 2006

In response to a question that I received at LP, I have decided to put up a justification for why gravity will decrease as you move towards the centre of the earth. As usual in a physics problem we will make a couple of assumptions to get the answer, but I don't think they are terribly bad ones. If someone wants a mathematical derivation of this they should read the Wikipedia entry on the Shell Theorem, particularly the section on a solid sphere.

First I want to clear up a couple of confusions. Newtonian gravity is governed by an inverse square law: the acceleration due to gravity has magnitude g = GM/r^2, where M is the mass of the object we are being attracted to and r is the distance from its centre. Now naively you may assume that this means gravity goes to infinity as you approach the centre of the earth. It would if the earth's mass were all concentrated at the centre, but it's not, which is the key point. When we are outside a sphere it's acceptable to treat the whole thing as a point mass, but not once we are inside.

Now consider a spherical shell, like a tennis ball. The below diagram shows an arrow pointing to an arbitrary point inside that sphere's cross-section; the dashed line is meant to go through the arbitrary point and the centre of the circle (it doesn't quite, because I drew it badly, but just imagine it does). The other two sectors are symmetrical about this dashed centre line.
[Figure: cross-section of a spherical shell, with an arbitrary interior point and a dashed line through the centre]
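For the impatient, here is what the shell theorem gives for a sphere of uniform density (an idealisation; the real earth is denser towards its core): only the mass $M(r)$ inside your current radius $r$ attracts you, so

$$M(r) = M\left(\frac{r}{R}\right)^{3}, \qquad g(r) = \frac{G\,M(r)}{r^{2}} = \frac{GM}{R^{3}}\,r,$$

which falls off linearly from its surface value $GM/R^2$ to zero at the centre.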


The St. Petersburg Paradox

2 July, 2006

The St. Petersburg Paradox is a classic problem in the general area of gambling, probability theory, risk aversion and the marginal utility of money. It was first posed by Nicolaus Bernoulli in 1713, and the solution was given in terms of marginal utility in 1738 by his cousin Daniel Bernoulli. In most games we rationally expect the fair price to play to be the expectation value, perhaps slightly less if you are risk averse. In general you should be happy to pay five dollars or a bit less for a game where you win $10 if a head comes up and $0 if it's a tail. In many other games people are even happy to pay more than this, e.g. roulette.

OK, so what about the following game? How much would you pay to play it?

I have a pot of $2. I flip a coin. If it comes up tails then you win the pot; if it comes up heads then I double the pot and we play another round, and so forth. Each time we get a head the amount in the pot doubles and we play again. So if the first tail is in the nth round you win $2^n. So far so good, but what happens when we calculate the expectation? What do we get?
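For reference, the standard calculation: the first tail arrives in round $n$ with probability $(1/2)^n$ and pays $2^n$, so

$$E = \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty,$$

an infinite expectation, yet nobody would pay more than a few dollars to play. That gap is the paradox.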


Probability is not intuitive

15 June, 2006

I was thinking about how people don't have a good intuition for probability, and how this affects people's thinking on nuclear power. With nuclear power there is a very small risk of something very bad happening, and we must weigh this up against, say, a coal power station, where there is a large chance of creating an incrementally small bad effect (i.e. additional global warming). Maybe I'll say something useful later, as I think this has real implications for evaluating the worth of nuclear power, but it is not the point of this post.

During these thoughts I remembered reading a nice piece in The Economist from years back discussing this sort of thing. It gave three examples. The first is the well-known birthday problem, where the chance of two people in a group sharing a birthday is always higher than people expect. The second is the Monty Hall problem, where switching gives you less chance of getting a goat than you might think (a quick simulation below backs this up). I had heard both of these before, but the one I hadn't heard was the false-positive one.
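As a quick aside, the Monty Hall claim is easy to check by simulation; here is a minimal sketch (the trial count is arbitrary):

```python
import random

# Quick simulation of the Monty Hall problem: the host opens a goat
# door that is neither your pick nor the car, and you always switch.
# Switching wins exactly when the first pick was wrong, i.e. 2/3 of
# the time, so you end up with a goat only 1/3 of the time.
random.seed(2)
trials = 100_000
switch_wins = sum(random.randrange(3) != random.randrange(3)
                  for _ in range(trials))
print(f"switching wins {switch_wins / trials:.3f} of the time")
```

And now to the puzzle I hadn't seen: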

The false-positive puzzle. You are given the following information. (a) In random testing, you test positive for a disease. (b) In 5% of cases, this test shows positive even when the subject does not have the disease. (c) In the population at large, one person in 1,000 has the disease. What is the probability that you have the disease?

It seems pretty obvious but is it?
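A quick Bayes' theorem calculation, assuming the test always shows positive when the subject really has the disease (the puzzle leaves this implicit):

$$P(\text{disease}\mid\text{positive}) = \frac{1 \times 0.001}{1 \times 0.001 + 0.05 \times 0.999} \approx 0.0196,$$

so a positive result means only about a 2% chance of having the disease, far from the 95% most people guess.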
