Thursday, March 4, 2010

The False Belief in True Models

This whole post over at the Junk Charts blog is worthy of being on my blog and probably should have been written by me.

An excerpt:
"Unfortunately, Gross apparently did not speak with a statistician before writing this article. A statistical modeler would compute

    modeling error = actual result - forecasted result

which is different from the reporter's perspective that

    over/under-performance = actual result - forecasted result
What's the difference?


The statistician takes reality as the truth, and any deviation from reality as a modeling error while the reporter takes "expectation" (the forecasts) as the truth, and any deviation from expectation as overperformance (if positive) or underperformance (if negative). When the model's forecasts are treated as yardsticks for performance, one is assuming that the expectations are set correctly, which means the model can never be wrong!"
This phenomenon is startlingly and disappointingly common across journalism and politics. In this blog post, the author (Kaiser Fung) is referring to models that predict Olympic medal counts, but how often do we see exactly this logical error made by reporters and politicians? What's more disturbing is seeing the same error made by other types of scientists and economists, and this happens pretty frequently!
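The distinction Fung draws is easy to miss because both camps compute the exact same number; they just read it in opposite directions. A minimal sketch (with made-up medal counts, purely for illustration):

```python
# Hypothetical medal forecasts vs. actual results (made-up numbers).
forecast = {"USA": 34, "Germany": 29, "Canada": 24}
actual = {"USA": 37, "Germany": 30, "Canada": 26}

for country in forecast:
    diff = actual[country] - forecast[country]
    # The statistician reads `diff` as model error: reality is the
    # yardstick, so the forecast missed by `diff` medals.
    print(f"{country}: model error = {diff:+d}")
    # The reporter reads the very same number as performance: the
    # forecast is the yardstick, so the country "over-performed"
    # expectations by `diff` medals -- implicitly assuming the
    # model itself could not have been wrong.
```

Same arithmetic, two incompatible worldviews about which side of the subtraction is "the truth".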

For example, policy supposedly designed to combat global warming is based entirely on assumptions coming from predictive models which have often proven to be wildly inaccurate. Now, sure... Decent scientists will see the inaccurate model and say "Ok, how do I make this better?", but the reporter or politician will often implicitly suggest that the model is in fact correct and reality simply didn't match expectations for reasons unknown. They then typically proceed to push through policy based on those failed models as "truth". This represents a sort of blind faith in the infallibility of the model, even when it fails to reflect reality.

A point that Fung doesn't cover which I'd like to bring up, however, is that sometimes the very premise of building statistical models is flawed from the outset.

Unfortunately, most of modern (Neo-Keynesian) "macroeconomics" is based on exactly this fallacy. When reality - for example the 2008 stock market crash - doesn't line up with the models, some scientists will go back and try to re-work their model to be more accurate at predicting future events... But what macroeconomists fail to understand is that economics is not about Keynesian "aggregates"... Economics deals with individual human action, which in turn is based on a wide array of values and motivations, unknowable to researchers, spurring those individuals to act in one way or another.

Building accurate models in economics would actually require economists to plug in data on millions of variables (each individual) which they are absolutely not privy to. This is why it's far better to view the whole field through the framework of axiomatic human action than to try to aggregate all demand and all supply - and thus all individual decisions - into big lumps of "economic activity" that can be manipulated or controlled from on high. It is also, ironically, why the folks who don't really view economics as a "predictive" science (any more than an evolutionary biologist would claim to "predict" the next phase of evolution) are always the best at predicting the future outcomes of current policy.

So anyway... Belief in the infallibility of models is a huge problem, but so is building statistical models at all in cases and fields where they're entirely the wrong tool for understanding reality.
