Models as Traps

Tyler Cowen writes,

Enter DSGE models. There are plenty of good arguments against them. Still, they provide a useful discipline and they pinpoint rather ruthlessly what it is that we still do not understand. We can and should devalue them in a variety of ways, and for a variety of reasons, but still we should not dismiss them.

Models are simplifications. Sometimes they seem useful. For example, the AS-AD model often seems useful for explaining economic fluctuations. The production function often seems useful for explaining the distribution of income.

The DSGE model was never adopted for its usefulness in that sense. It was adopted in order to satisfy a methodological principle. That may be why I am tempted to dismiss it.

Any model can be described as valuable if you say that it shows us what we do not understand. To me, that is a low bar.

I think that a model is a trap if factors outside of the model constantly have to be invoked, to the point where they overwhelm the factors that are in the model.

Take the neoclassical production function. Recently, it has occurred to me that this may be a trap. Economists seem to need to add all sorts of types of capital to the model: human capital, social capital, network capital, brand-name capital, etc. It is hard enough dealing with heterogeneity in physical capital: how many forklifts equal one blast furnace? But when physical capital does such a poor job of explaining differences in performance across firms, across economies, and over time, at what point do you say that the neoclassical production function is a trap?
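
To put the point in symbols, here is a minimal sketch, with illustrative capital categories and exponents (the specific functional form and the letters for the extra capital types are my own shorthand):

$$Y = A\,K^{\alpha}L^{1-\alpha}$$

$$Y = A\,K^{\alpha}H^{\beta}S^{\gamma}L^{1-\alpha-\beta-\gamma}$$

The first line is the textbook Cobb-Douglas form, with output $Y$, total factor productivity $A$, physical capital $K$, and labor $L$. The second is the augmented version, with $H$ and $S$ standing in for human and social capital. The more the explanation rests on the residual $A$ and on the added terms rather than on $K$, the closer the model comes to being a trap in the sense above.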

5 thoughts on “Models as Traps”

  1. There are numerous examples of things like this in human thought. For example, think of Descartes’ idea of the mind… We’ve had to amend it so many times that it doesn’t seem to be as useful or as accurate as it originally seemed, but it’s still hard for philosophy to break out of it.

    Is it sociological or psychological? A bit of both, probably. But very hard to break out of our historical conceptual baggage.

  2. Models are maps, and inherently only useful for some purposes. I admit that I don’t understand exactly for what purpose Tyler finds his encounters with DSGE models useful (it’s all Greek to me, in any case), but I am inclined to believe that he really does. Who am I to say how Tyler should make sense of the world?

    It isn’t immediately clear to me why maps should become “traps.” Using the wrong map to navigate a particular problem may be confusing, but then just don’t use that map for that problem. Difficulties arise if you are determined to make a map do work that it isn’t really suited for (a special problem for academics who are disciplined to favor some maps, almost religiously); but in every case it seems like the problem will be with the users, not the maps.

  3. Well said. Your “trap” is Hayek’s pretense of knowledge. Or Hawking’s illusion of knowledge. When you constantly work with models it’s natural to believe them. You begin to take assumptions for granted even if you didn’t like them initially, just based on repetition and the way the mind works. Then the model fails and you find out “ruthlessly what it is that we still don’t understand.” I’m not sure why TC couches the last part as a DSGE model defense – imo the gaps in the approach are pretty well known to begin with, albeit often ignored. If the model’s failure “ruthlessly” brings these gaps to light once again, then I’m not sure how the exercise was helpful in the first place.

  4. That’s a good Mason-Dixon line. It also points to the perverse incentive in academia where a “good” model is one that yields future papers, mostly from post hoc fitting and arbitrary fudge factors.
