I found this video by the Institute for New Economic Thinking, in which a number of prominent economists discuss the use of models, more than a bit frustrating, with Jamie Galbraith’s comments at the very end a notable exception.
Mainstream economics fetishizes the use of models. Even if your insight could be stated clearly and concisely in a narrative, it is not economics unless a model is involved. For instance, two of our colleagues have gone through Mankiw’s introductory economics textbook and ascertained that every graph is not only unnecessary, but in most cases impedes rather than aids the presentation of the concept under discussion.
The preoccupation with models suggests that the economics discipline has unduly limited its problem-solving abilities by giving high priority to “models”. Recall how central bankers rejected William White and Claudio Borio’s well-documented warning that an international housing bubble was underway. The basis for the bankers’ dismissal? White and Borio had no theoretical underpinning for their view.
Similarly, you’ll hear Brad DeLong mention, and later distance himself from, a prescription promulgated by Milton Friedman, and embraced by many in the profession, that all that mattered was that a model make good predictions. We discussed the so-called “F-twist” in ECONNED:
Friedman, like his peer Samuelson, played an important role in defining what constituted proper methodology. An oft-invoked section of an influential 1953 paper:
Truly important and significant hypotheses will be found to have “assumptions” that are wildly inaccurate descriptive representations of reality, and in general, the more significant the theory, the more unrealistic the assumptions. . . . The reason is simple. A hypothesis is important if it “explains” much by little, that is, it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomenon to be explained and permits valid predictions on the basis of them alone. To be important, therefore, a hypothesis must be descriptively false in its assumptions; it takes account of, and accounts for, none of the many other attendant circumstances, since its very success shows them to be irrelevant for the phenomenon to be explained.
To put the point less paradoxically, the relevant question to ask about the “assumptions” of a theory is not whether they are descriptively “realistic,” for they never are, but whether they are sufficiently good approximations for the purpose at hand. And this question can be answered only by seeing whether the theory works, which means whether it yields sufficiently accurate predictions.
Friedman’s statement that “unrealistic assumptions” often prove the best is willfully false. An “unrealistic” assumption is one directly contradicted by present evidence; in the absence of any evidence to the contrary, unrealistic assumptions are worse than realistic ones. This amounts to a “get out of reality free” card.
The deceptive aspect of this argument is the slippery word “unrealistic.” Now it is true that relaxing the known parameters of a situation can be very productive. In his paper, Friedman uses the example of how the “law” in physics that describes how bodies fall assumes a vacuum, which is an unrealistic assumption, at least on planet Earth. Similarly, a line in geometry has no thickness, again a condition never observed in real life.
But in this context, the vacuum is not an “unrealistic assumption” but an abstraction that eliminates a known condition, air resistance. It is not a feature grafted on to make a construct tidy, but the stripping away of an environmental element to see if getting rid of it exposes an underlying, durable pattern. This procedure is in keeping with how mathematics as a discipline evolved, through the successive whittling away of extraneous elements.
But in economics, core and oft-used assumptions necessary to make many theories work, such as “everyone has perfect information,” are unrealistic not in the sense of stripping out real-world aspects that are noisy, but in the sense of adding properties that are not observed or even well approximated in reality. Yet they are deemed valid, and those who protest are referred to Friedman. Economists may argue that that isn’t the case, that the “perfect information” assumption simply serves to eliminate the role of bad information in decisions. But the sort of all-encompassing knowledge often posited to make a model work goes well beyond that. Similarly, “rational” economic actors are super-beings with cognitive and computational capabilities beyond those of the best computers, capable of weighing all that perfect information.
Friedman and his followers have a ready defense. The assumptions don’t matter; all that counts is that the theory “works.” Even though Samuelson wrote a harsh criticism of Friedman’s “unrealistic” assumptions, both wanted economics to be “scientific.” The sort of science they had in mind was what philosophers call “instrumentalist,” which judges a theory by its predictive power alone…
But it is actually difficult to prove anything conclusively in economics. In fact, some fundamental constructs are taken on what amounts to faith.
Models, like any abstraction, are a way of whittling down reality to the point that we can get our limited brains around it. There is more than a touch of hubris in the way economists celebrate a compensation mechanism for our constrained cognitive capabilities.