By Philip Mirowski, Carl Koch Professor of Economics and the History and Philosophy of Science, University of Notre Dame. Professor Mirowski has written numerous books, including More Heat than Light, Machine Dreams and, most recently, Science-Mart.
Edited and with an introduction by Philip Pilkington, a journalist and writer living in Dublin, Ireland.
Perhaps the best defence for a failed set of ideas is to have critics that will engage in only superficial critiques. This provides the audience – in this case, the educated general public – with a spectacle by which they can console themselves that the edifice is being shaken up by brave and innovating insiders. The critiques of the Efficient Markets Hypothesis (EMH) currently pouring out of the discipline and into the mediasphere are precisely such a spectacle. (A spectacle in which, I must admit, I have partaken to some degree.)
The prize-fighters that step into the public arena in this regard are none other than Paul Krugman and Joseph Stiglitz, both of whom have won the Nobel Prize in Economics – a sort of official sanction by the profession that these are people worth listening to on the state of economics. Their critiques, which attack some of the outlandish excesses of neoclassical thought, merely tiptoe around the edges of the neoclassical research program and do not take on the more fundamental issues.
The EMH is thus set up as a sort of arch-villain of the Bond film variety which, by some readings, led directly to the financial excesses and collapse that we have witnessed. Thus all it needs is a suave hero to do away with it and all will be right with the world once more.
And that is how the critics become the system’s best defenders. By insulating the research program from any real, fundamental criticism (such as, for example, the charge that, following Hyman Minsky, capitalism is inherently unstable and lacking in equilibrium), such critics limit the scope of serious debate – all the while giving the audience the impression that they are, in fact, watching a serious debate unfold. As Mirowski writes, such a spectacle “constitutes the very definition of an ‘empty gesture’ in orthodox economics.”
– Philip Pilkington
Part III: Microirrationalities – A Critic’s Defence of The System
For those living through the roller-coaster of 2008, and retrospectively searching for previous wrong turns, it seemed obvious to focus on the sector wherefrom disasters cascaded one after another like clowns piling out of an auto: namely, Wall Street. Not only had finance become the 400-pound gorilla of the US economy, accounting for 41 percent of all corporate profits in 2007 (Stiglitz, 2010a, p. 7), but it was also the arena where economic theory had seemed to matter to a greater degree than elsewhere, given recourse to formal models to ‘justify’ all manner of activities, from securitization and options pricing to risk management. Thus it was fairly predictable that some economists would look to finance theory as the locus of error, and they rapidly settled upon a single doctrine to scapegoat, the one dubbed the ‘efficient-markets hypothesis’ [EMH]. Paul Krugman became a prominent spokesperson for this option in his notorious ‘How Did Economists Get it so Wrong?’ (2009):
By 1970 or so, however, the study of financial markets seemed to have been taken over by Voltaire’s Dr Pangloss, who insisted we live in the best of all possible worlds. Discussion of investor irrationality, of bubbles, of destructive speculation had virtually disappeared from academic discourse. The field was dominated by the ‘efficient markets hypothesis’ . . . which claims that financial markets price assets precisely at their intrinsic worth given all publicly available information . . . And by the 1980s, finance economists, notably Michael Jensen of the Harvard Business School, were arguing that because financial markets always get prices right, the best thing corporate chieftains can do, not just for themselves but for the sake of the economy, is to maximize their stock prices. In other words, finance economists believed that we should put the capital development of the nation in the hands of what Keynes had called ‘a casino’.
Journalists found the EMH an irresistible target for ridicule, with John Cassidy and Justin Fox attacking it at length. The journalist Roger Lowenstein declared: ‘The upside of the current Great Recession is that it could drive a stake through the heart of the academic nostrum known as the efficient-market hypothesis.’ There was more than sufficient ammunition to choose from to rain fire down on the EMH, not least because it had been the subject of repeated criticism from within the economics profession since the 1980s. But what journalists like Cassidy, Fox and Lowenstein, and commentators like Krugman, neglected to inform their readers was that the back and forth, the intellectual thrust and empirical parry, had ground to a stand-off more than a decade before the crisis, as admirably explained in Lo and MacKinlay (1999):
There is an old joke, widely told among economists, about an economist strolling down the street with a companion when they come upon a $100 bill lying on the ground. As the companion reaches down to pick it up, the economist says ‘Don’t bother – if it was a real $100 bill, someone would have already picked it up.’ This humorous example of economic logic gone awry strikes dangerously close to home for students of the Efficient Markets Hypothesis, one of the most important, controversial and well-studied propositions in all the social sciences. It is disarmingly simple to state, has far-reaching consequences for academic pursuits and business practice, and yet is surprisingly resilient to empirical proof or refutation. Even after three decades of research and literally thousands of journal articles, economists have not yet reached a consensus about whether markets – particularly financial markets – are efficient or not.
What can we conclude about the Efficient Markets Hypothesis? Amazingly, there is still no consensus among financial economists. Despite the many advances in the statistical analysis, databases, and theoretical models surrounding the Efficient Markets Hypothesis, the main effect that the large number of empirical studies have had on this debate is to harden the resolve of the proponents on each side. One of the reasons for this state of affairs is the fact that the Efficient Markets Hypothesis, by itself, is not a well-defined and empirically refutable hypothesis. To make it operational, one must specify additional structure, e.g., investors’ preferences, information structure, business conditions, etc. But then a test of the Efficient Markets Hypothesis becomes a test of several auxiliary hypotheses as well, and a rejection of such a joint hypothesis tells us little about which aspect of the joint hypothesis is inconsistent with the data. Are stock prices too volatile because markets are inefficient, or is it due to risk aversion, or dividend smoothing? All three inferences are consistent with the data. Moreover, new statistical tests designed to distinguish among them will no doubt require auxiliary hypotheses of their own which, in turn, may be questioned.
This imperviousness of an isolated hypothesis to empirical rejection, and the crucial role of auxiliary hypotheses in serving as a protective barrier, is familiar in the philosophy of science literature as ‘Duhem’s thesis’. The mere fact of deflecting disconfirmation off onto harmless auxiliary hypotheses is not prima facie an illegitimate ploy; it occurs in all the natural sciences. The issue was not that immunizing stratagems had been resorted to in this instance; rather, it was that the EMH had proven so rabidly tenacious within orthodox economics and in business schools, occupying pride of place for decades within both macroeconomics and finance, that economists had begun to ignore most modern attempts to disprove it. Perhaps it was not the localized cancer that its detractors had portrayed; maybe it was more akin to a symbiotic parasite that actually helped orthodox economics thrive. The lesson for crisis-watchers that I shall explore is that the EMH cannot be killed easily and maybe not at all within the parameters of the current economics profession. That is one reason why non-economists need to be suspicious of claims like the pronouncement of the economist most famous for the ‘reject the EMH’ option, Joseph Stiglitz:
[A] Considerable portion of [blame] lies with the economics profession. The notion economists pushed – that markets are efficient and self-adjusting – gave comfort to regulators like Alan Greenspan, who didn’t believe in regulation in the first place . . . We should be clear about this: economic theory never provided much support for these free market views. Theories of imperfect and asymmetric information in markets had undermined every one of the ‘efficient market’ doctrines, even before they became fashionable in the Reagan–Thatcher era. (Stiglitz, 2010b)
Pace Stiglitz, each blow just seemed to leave it stronger. One of the characteristics of the EMH which rendered it impervious to refutation was the fact that both proponents and critics were sometimes extremely cavalier about the meaning and referent of the adjective ‘efficient’. Both Krugman and Stiglitz, for instance, in the above quotes simply conflate two major connotations of efficiency, namely, ‘informational efficiency’ and ‘allocative efficiency’. The former is a proposition about the efficacy and exactitude of markets as information conveyance devices; the latter is a proposition that market prices correctly capture the ‘fundamentals’ and maximize the benefits to market participants by always representing the unique arbitrage-free equilibrium. It is sometimes taken for granted that the former implies the latter; this is the gist of the comment that one will never find loose $100 bills on the sidewalk. However, if one rephrased the claim to state that no one will ever find valuable unused information on the sidewalk, then the fallacy starts to become apparent.16 In order to respect the significance of that distinction, in this section I deal with those who propose that the orthodoxy shed the information-processing version of the EMH in reaction to the crisis; while in the next I consider those who seek to dispense with allocative efficiency altogether.
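The distinction can be made concrete with a toy simulation (all numbers hypothetical, and the martingale price model a deliberate oversimplification, not anything from the finance literature cited here): a price that follows a pure random walk is informationally efficient in the ‘no free lunch’ sense – past prices forecast nothing – yet it can wander arbitrarily far from a fixed fundamental value, so nothing about unpredictability guarantees allocative efficiency.

```python
import random

random.seed(0)

FUNDAMENTAL = 100.0  # hypothetical 'intrinsic worth' of the asset
price = FUNDAMENTAL
prices = []
for _ in range(10_000):
    # Martingale price: tomorrow's expected price equals today's price,
    # so past prices offer no exploitable forecast (informational efficiency).
    price += random.gauss(0.0, 1.0)
    prices.append(price)

# Informational efficiency: successive price changes are uncorrelated.
changes = [b - a for a, b in zip(prices, prices[1:])]
mean = sum(changes) / len(changes)
var = sum((x - mean) ** 2 for x in changes) / len(changes)
autocov = sum((x - mean) * (y - mean)
              for x, y in zip(changes, changes[1:])) / len(changes)
print(f"lag-1 autocorrelation of price changes: {autocov / var:+.3f}")

# Allocative efficiency: does the price stay near the fundamental?
# A random walk does not -- 'no free lunch' says nothing about prices
# being 'right' in the sense of capturing intrinsic worth.
max_gap = max(abs(p - FUNDAMENTAL) for p in prices)
print(f"largest deviation from the fundamental: {max_gap:.1f}")
```

The first number comes out near zero (nothing to exploit), while the second can be large: unpredictability and correct valuation are logically independent properties, which is why the two components Thaler distinguishes in note 16 can fare so differently in the data.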
The journalist and blogger Felix Salmon posed the critical question during the crisis: why did the EMH become the destructive love affair which the economics profession seemed unable to shake off? To understand where orthodox economics goes awry, one must become acquainted with a little bit of history. The role of the EMH should be situated within the broader context of the ways that neoclassical economics has changed over time. In a nutshell, neoclassical economics looks very different now than it did at its inception in the 1870s. From then until World War II, it was largely a theory of the allocation of scarce means to given ends. Although trade was supposed to enhance ‘utility’, very little consideration was given to what people knew about commodities, or how they came to know it, or indeed, about how they knew much of anything else. The Socialist Calculation Controversy, running from the Great Depression until the fall of the Wall, tended to change all that. In particular, Friedrich Hayek argued that the true function of The Market was to serve as the greatest information processor known to mankind. Although Hayek was not accorded very much respect within the American economics profession before the 1980s, nonetheless, the ‘information processing’ model of The Market progressively displaced the earlier ‘static allocation’ approach in the preponderance of neoclassical theory over the second half of the twentieth century. As one can appreciate, this profoundly changed the meaning of what it meant to assert that ‘the market works just fine’, at least within the confines of economics. ‘Efficiency’, a slippery term in the best of circumstances, had come increasingly to connote the proposition that the market could package and convey knowledge on a ‘need-to-know’ basis in a manner which could never be matched by any human planner.
Once one recognizes this distinct trend, then the appearance of the EMH in Samuelson (1965) and Fama (1965) and its rapid exfoliation throughout finance theory and macroeconomics (Mehrling, 2010; Bernstein, 1992) becomes something more than just a fluke. The notion that all relevant information is adequately embodied in price data was one incarnation of what was fast becoming one of the core commitments of the neoclassical approach to markets. Of course, the fact that numerous ineffectual attempts were made along the way to refute the doctrine in specific instances (variance bounds violations, the end-of-the-month effect, January effect, small cap effects, mean reversion, and a host of others) did not impugn the EMH so much as quibble over just how far the horizon would be extended. The EMH spawned lots of econometric empiricism, but surprisingly little alteration in the base proposition. The massive number of papers published on the EMH merely testified to the protean character of the idol of ‘market efficiency’, which grew to the status of obsession within the American profession.
In the Odyssey, Proteus assumed a plethora of shapes to escape Menelaus; in the EMH, ‘information’ had to be gripped tight by neoclassical theory, because it kept squirming and changing shape whenever anyone tried to confine it within the framework of a standard neoclassical model. Few have been sensitive enough to the struggle to attend to its twists and turns, but for present purposes it will be sufficient that three major categories of cages to tame the beast have been: information portrayed as ‘thing’ or object, information reified as inductive index, and information as the input to symbolic computation (Mirowski, 2009). For numerous considerations here bypassed, they cannot in general be reduced one to another. The reason this matters to journalists’ convictions that the crisis has invalidated the EMH is that the detractors mostly conform to the literature which treats information like a commodity, whereas the defenders repulse them from battlements of legitimation built largely from information as an inductive index. This may seem a distinction that only a pedant could love, but once clarified it goes a long way to demonstrating that the crisis will never induce the majority of neoclassical economists to give up on the EMH.
The standard-bearer for the denial of the Kenntnisnahme über alles EMH has been Joseph Stiglitz. Here it is important to acknowledge that Stiglitz deserves the respect of the Left because he has repeatedly taken political positions that have not ingratiated him with those in power, and often has been steadfast in his pessimistic evaluations of the crisis, when all the journalists wanted to hear was how the crisis was done, dusted and under control. He has been right more often about the gravity of problems that the crisis revealed than the thundering herd of economists claiming that they had sagely prophesied disasters.20 And, in stark contrast to most of the figures encountered in this chapter, he has repeatedly gone on record stating that economists should bear some responsibility for the crisis. By these lights, Stiglitz has been an exemplary contrarian economist. Nonetheless, Stiglitz has simultaneously been a major defender of neoclassical economics, suggesting that the EMH is not all that central to the core doctrines of orthodoxy:
Normally, most markets work reasonably well on their own. But this is not true when there are externalities . . . The markets failed, and the presence of large externalities is one of the reasons. But there are others. I have repeatedly noted the misalignment of incentives – bank officers’ incentives were not consistent with the objectives of other stakeholders and society more generally. Buyers of assets also have imperfect information . . . The disaster that grew from these flawed financial incentives can be, to us economists, somewhat comforting: our models predicted that there would be excessive risk-taking and shortsighted behavior . . . In the end, economic theory was vindicated. (Stiglitz, 2010a, pp. 150, 153)
This is what Krugman has called ‘flaws-and-frictions’ economics, and it comes perilously close to the standard response that ‘we already had models that told us the crisis was coming’. It follows that our first hesitation should be the one previously broached: so why weren’t these models well represented in macro or micro textbooks and graduate pedagogy? Stiglitz is fully aware that there exists a tradition of oxymoronic ‘New Keynesianism’ which reprised a boring old story of sticky wages and prices in a neoclassical equilibrium, but he wants to suggest that there exists something else on offer which is more compelling. In Stiglitz’s case, there is a special caveat: the models he has in mind are found mostly in his own previous publications. While there could be no academic prohibition against tooting your own horn, there is something less than compelling about claiming a generality for some idiosyncratic models where the novelty quotient is distinctly low. While Stiglitz has certainly earned the Nobel, he has not effectively staunched the intellectual trend of treating markets as prodigious information processors; nor has he provided a knock-down refutation of the EMH. This has led to the distressing spectacle of Stiglitz, the great hope of the Left, openly defending the neoclassical approach to the crisis, while not really changing it all that much.
Stiglitz has admitted that his mission all along was to undermine free market fundamentalism from within:
[I]t seemed to me the most effective way of attacking the paradigm was to keep within the standard framework as much as possible . . . While there is a single way in which information is perfect, there are an infinite number of ways that information can be imperfect. One of the keys to success was formulating simple models in which the set of relevant information could be fully specified . . . the use of highly simplified models to help clarify thinking about quite complicated matters. (Stiglitz, 2003, pp. 613, 583, 577)
The way he has chosen to do this is to produce little stripped-down models which maximize standard utility or production functions, with a glitch or two inserted up front in the set-up. He has been especially partial to portraying ‘information’ as a concrete thing to be purchased, and ‘risk’ as a standard density function with known parameters. There is no canonical Stiglitz ‘general model’, but rather a number of specialized dedicated exercises, one for each flaw and/or friction explored. Macroeconomics then simply becomes microeconomics with the subscripts dropped. This distinguishes Stiglitz from the small cadre of researchers in section 20.3.3 below, who are convinced that this ‘representative agent’ trick does not constitute serious macroeconomic theory.
In Stiglitz’s academic writings, he stakes his claim to have refuted the EMH primarily on two papers, one co-authored with Sanford Grossman in 1980, and another co-authored with Bruce Greenwald in 1993. The take-away lesson of the first was summarized in his Nobel lecture:
When there is no noise, prices convey all information, and there is no incentive to purchase information. But if everybody is uninformed, it clearly pays some individual to become informed. Thus, there does not exist a competitive equilibrium. (2002, p. 395)
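The logic of that non-existence claim can be illustrated with a deliberately crude numerical sketch (hypothetical parameters throughout; this is not the actual Grossman–Stiglitz model, which works with noisy rational-expectations equilibria): if the price fully reveals the informed traders' signal, information confers no trading advantage and its cost is pure loss; if the price reveals nothing, the signal is strictly profitable. So neither ‘everyone informed’ nor ‘no one informed’ can stand as an equilibrium.

```python
import random

random.seed(1)

COST = 0.2       # hypothetical cost of acquiring the signal
N_ROUNDS = 50_000

def trade_profit(price, true_value):
    """An informed trader buys one unit if the true value exceeds the
    price, otherwise sells one unit short."""
    position = 1.0 if true_value > price else -1.0
    return position * (true_value - price)

def avg_net_gain_from_information(fully_revealing):
    """Average profit of an informed trader, net of the information cost."""
    total = 0.0
    for _ in range(N_ROUNDS):
        true_value = random.gauss(0.0, 1.0)
        # A fully revealing price equals the true value, so the signal is
        # worthless; an uninformative price sits at the unconditional mean.
        price = true_value if fully_revealing else 0.0
        total += trade_profit(price, true_value) - COST
    return total / N_ROUNDS

print(f"net gain, fully revealing price: {avg_net_gain_from_information(True):+.3f}")
print(f"net gain, uninformative price:   {avg_net_gain_from_information(False):+.3f}")
```

The first figure is negative (exactly minus the information cost) and the second positive: whenever becoming informed is profitable, more traders buy the signal and the price reveals more, but once it reveals everything the incentive vanishes – the self-undermining loop behind Stiglitz's ‘no competitive equilibrium’ conclusion.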
The second is proffered as the fundamental cause of the crisis in his (2010b):
It perceives the key market failures to be not just in the labor market, but also in financial markets. Because contracts are not appropriately indexed, alterations in economic circumstances can cause a rash of bankruptcies, and fear of bankruptcy contributes to the freezing of credit markets. The resulting economic disruption affects both aggregate demand and aggregate supply, and it’s not easy to recover from this – one reason that my prognosis for the economy in the short term is so gloomy.
Both of his crucial ‘findings’ are in fact based upon very narrow versions of what is a much more diversified neoclassical orthodoxy. It would indeed have been noteworthy if Stiglitz or his co-workers had provided a general impossibility theorem, say, along the lines of Gödel’s incompleteness theorem or Turing’s undecidability theorem, but Stiglitz has explicitly rejected working with full Walrasian general equilibrium (2003, pp. 580, 620), or Chicago’s resort to transactions costs (p. 573), and does not seriously consider the game theorists’ versions of strategic cognition. Indeed, it seems a rather heroic task to derive any general propositions from any one of his individual ‘toy’ models. Stiglitz himself admits this when he is not engaged in wholesale promotion of his information program. Instead, it is possible that ‘simple’ models serve mainly to cloud the issues that beset the half-century quest for a consensus economics of information.
Take, for instance, the Grossman–Stiglitz model (1980). The text starts out by positing information as a commodity that needs to be arbitraged (p. 393), but claims in a footnote (p. 397) that the model of knowledge therein is tantamount to the portrayal of information as inductive index, which is not strictly true, and then defines its idiosyncratic notion of ‘equilibrium’ as equivalence of plain vanilla rational expected utilities of informed and uninformed agents. Of course, ‘for simplicity’ all the agents are posited identical; how this is supposed to relate to any vernacular notions of divergences in knowledge is something most economists have never been poised to address. Many economists of a different political persuasion simply ignored the model, because they deemed that Stiglitz was not taking into account their (inductive, computational) version of ‘information’. When Grossman offered his own interpretation of their joint effort, he took the position that the rational expectations model was identical to the approach in Hayek (1945), that: ‘when the efficient markets hypothesis is true and information is costly, competitive markets break down’, and that, ‘We are attempting to redefine the Efficient Markets notion, not destroy it’ (Grossman, 1989, p. 108). That seems closer to the median interpretation of Stiglitz’s work in the profession as a whole.
Perhaps the most distressing aspect of Stiglitz’s designated models that he believes starkly refute neoliberalism has been that, when you really take the trouble to understand them, they end up having nothing cogent to say about the current crisis whatsoever. Start with Grossman and Stiglitz (1980). The problems with the financial system in 2007 had nothing to do with participants lacking correct incentives to purchase enough ‘information’ which would have revealed the dodgy nature of the collateralized debt obligations (CDOs) and other baroque assets which clogged the balance sheets of the financial sector. Rather, the reams of information that they did purchase, from ratings agency evaluations to accounting audits to investment advice, were all deeply corrupted by being consciously skewed to mislead hapless clients and evade the letter of the law. Perhaps the ‘information’ was corrupted by the mere fact of being bought and sold. Since Stiglitz never comes within hailing distance of confronting epistemology in any of his models – he disdains philosophy as much as the next neoclassical – he never really deals with matters of truth and falsehood. Agents are just machines buying unproblematic lumps of information (or not).
And worse, the ‘market failure’ that he repeatedly diagnoses has nothing to do with what people mean by ‘failure’ in the vernacular. Stiglitz identifies ‘market failure’ with not realizing the full measure of utility which might have occurred in the standard neoclassical model – this is called ‘Pareto optimality’ in the trade – and exists in an imaginary universe utterly devoid of markets freezing up and the implosion of the assignment of credible prices across the board. Likewise, the Greenwald–Stiglitz paper has nothing whatsoever to do with the collapse of the financial sector in 2008. Using their own words: ‘we showed that there were essentially always simple government interventions that could make some individuals better off without making anyone worse off. The intuition behind our result was that whenever information was imperfect, actions generated externality-like effects’ (Stiglitz, 2009, p. 557). Stiglitz persistently conflates ‘welfare loss’ with system-wide economic failure: this travesty stands in stark contrast to the model-free occasions wherein Stiglitz perceptively analyzes the inconsistencies of concrete practices in real-world institutions, linking them to palpable dire outcomes. Pareto optimality was the last thing one needed to consult to try and understand the utter confusion and disarray accompanying the mad improvisations at the Federal Reserve System (the Fed) and the Congressional Troubled Asset Relief Program (TARP) appropriation in the depths of the crisis; it certainly would be impotent to clarify the types of ‘government intervention’ required to stem the collapse. Incredibly, the Greenwald–Stiglitz model does not even explicitly have any money in it, even though one core phenomenon of the 2008 meltdown was a credit crisis. Instead, their model identifies the central weakness of the capitalist system as a rational contraction of investment on the part of firms, not financial system collapse.
Stiglitz repeatedly pronounces last rites over the EMH, but has little effect on the profession because he cannot see that what is sauce for the goose is sauce for the gander. ‘The Chicago School and its disciples wanted to believe that the market for information was like any other market’ (2010a, p. 268). Yet that is the fundamental initial premise of his own models. ‘The widespread belief in the EMH played a role in the Federal Reserve’s failure. If that hypothesis were true, then there was no such thing as bubbles’ (2010a, p. 269). But this just displays a deficiency of hermeneutic attention. As noted above, both the Fed and the profession can accept that the EMH, properly understood, and bubbles are entirely compatible – you just will not know you are in one till it bursts. And, paraphrasing Bill Clinton, it all depends on what you mean by ‘bubble’. Does Joe Stiglitz really repudiate the neoliberal doctrine of the Marketplace of Ideas? ‘The price mechanism is at the core of the market process of gathering, processing and transmitting information’ (2010a, p. 266). It seems the answer is No. His evangelism consists of showering his political opponents with little models where the Cosmic Information Processor, aka ‘The Market’, is beset with various formats of idiosyncratic noise, pesky little flaws and granular frictions. There is nothing fundamentally autodebilitating about the system, nor autodestructive in the Marketplace of Ideas. But then, who in the elite of the orthodox economics profession ever thought otherwise?
The endless quest to dispatch the EMH almost constitutes the definition of ‘empty gesture’ within orthodox economics.
14. The impression that economists were in some sense responsible for the existence of these markets became the pretext for the burgeoning literature on ‘performativity’ in the sociology of finance. See MacKenzie et al. (2007).
15. Washington Post, 7 June 2008.
16. This distinction has been a crucial component in some contemporary defenses of the EMH. See, for instance, Szafarz (2009), or the Cassidy interview with Richard Thaler:
‘I always stress that there are two components to the theory. One, the market price is always right. Two, there is no free lunch: you can’t beat the market without taking on more risk. The no-free-lunch component is still sturdy, and it was in no way shaken by recent events: in fact, it may have been strengthened’ (Cassidy, 2010).
17. http://blogs.reuters.com/felix-salmon/2009/08/11/why-the-efficient-markets-hypothesis-caught-on/: ‘Economists are scientists, after all. That which they can’t explain, they turn into an axiom.’
18. The following three paragraphs are ridiculously telegraphed summaries of narratives found in Mirowski (2002, 2009).
19. Since the notion that The Market is uniquely better at information processing than any human mind is a core tenet of neoliberalism (Mirowski and Plehwe, 2009), this trend justifies the claim that the economic orthodoxy has become more neoliberal, and hence more conservative, over time.
20. This often attracts the disdain of other Nobel winners, here James Heckman: ‘The whole profession was blindsided. I don’t think Joe Stiglitz was forecasting a collapse in the mortgage market and large-scale banking collapses’ (Cassidy, 2010). For some sensible Stiglitz observations on the state of affairs in 2010, see: http://www.hulu.com/watch/148219/foratv-economy-joseph-stiglitz-on-freefall-the-sinking-of-the-world-economy and http://www.businessinsider.com/joseph-stiglitz-were-probably-going-to-have-to-bail-out-the-banks-again-2010-7.
21. These are reprinted in Stiglitz (2009) as Chapters 21 and 26, respectively. Stiglitz identifies these as the key papers in his (2010a) and (2010b).
22. ‘Unfortunately, we have not been able to obtain a general proof of any of these propositions. What we have been able to do is analyze an interesting example’ (Grossman and Stiglitz, 1980, p. 395).
23. ‘While Keynes was willing to let Animal Spirits serve as the deus ex machina to retrieve an explanation of investment variability, our theory provides a more plausible explanation of variability in investment’ (Stiglitz, 2009, p. 647). ‘Talking up animal spirits can only take you so far’ (2010a, p. 256). These quotes exemplify why Stiglitz does not belong under our previous category of ‘behavioral economics’.
24. This is why Stiglitz’s repeated attempts to usurp the mantle of Hyman Minsky are deeply embarrassing – at least to anyone who has actually read Minsky. A serious attempt to refute neoliberalism would begin with acknowledgement of the ways in which markets undermine themselves in the course of ‘normal’ operations (for example Mirowski, 2010).