By Richard Alford, a former economist at the New York Fed. Since then, he has worked in the financial industry as a trading floor economist and strategist on both the sell side and the buy side.
As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.
— Albert Einstein
Some weeks ago, an economist at the Federal Reserve Bank of Richmond’s Research Department posted an open letter on the web titled “Economics is hard. Don’t let bloggers tell you otherwise.” It generated a tidal wave of criticism that swept the original post off the web (it is still available here). The criticism is both understandable and justified, with much of the anger directed at the post’s implicit hubris and its self-serving nature.
However, the criticisms were almost exclusively aimed at the conclusion and little if any attention was paid to the underlying premise or structure of the argument. This is most unfortunate. The argument is based on three assertions:
1. Economics is and ought to be treated as hard science; consequently
2. Macroeconomics is too hard for bloggers and other non-PhD commentators to offer clear conclusions with any degree of confidence, but
3. Macroeconomics is not so hard that PhD economists are unable to offer clear conclusions with high degrees of confidence.
In regard to the first assertion, there are numerous reasons that macroeconomics should not be treated as a hard science. Nonetheless, the author of the post is not alone in drawing parallels between macroeconomics and seismology. The choice of this particular hard science as a benchmark, however, reflects a wrinkle. The charge had always been that economists had “physics envy,” in particular “astrophysics envy,” and economists as a group embraced the comparison.
Why “astrophysics envy”? And why did economists embrace a comparison to astrophysics?
Astrophysics is widely accepted as a hard science. It has been very successful in making predictions. Furthermore, it is highly mathematical and model-based, with limited ability to do repeated, controlled laboratory experiments. Given the predictive power and the absence of the ability to do controlled laboratory experiments, economists desiring that economics be accepted as a hard science adopted astrophysics as a research role model/paradigm. (The replacement of astrophysics by seismology as the benchmark to which economics is to be compared may reflect the fact that the forecasting record of economics has more in common with seismology than it has with astrophysics.)
There are two possible outcomes: If the result confirms the hypothesis, then you’ve made a measurement. If the result is contrary to the hypothesis, then you’ve made a discovery.
— Enrico Fermi
No amount of experimentation can ever prove me right; a single experiment can prove me wrong.
— Albert Einstein
To be scientific, a discipline must be based on gathering observable, measurable evidence (data). On the basis of the observable data and past experience, researchers form a hypothesis. They use that hypothesis as a basis to make a prediction or forecast a consequence. The hypothesis is then tested. If the predicted consequence does not come to pass, then the hypothesis has been falsified. One “failure” can prove a hypothesis false, but no number of “passes” can prove a hypothesis to be true. For example, one sighting of a black swan disproved the hypothesis that all swans are white, all the sightings of white swans notwithstanding.
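The asymmetry between confirmation and falsification can be put in a few lines of code. This is a toy illustration of the swan example above, not anything from the original post; the function name and the string encoding of observations are invented for the sketch.

```python
def falsified(hypothesis, observations):
    """A hypothesis is falsified by a single contradicting observation;
    no number of confirming observations can ever prove it true."""
    return any(not hypothesis(obs) for obs in observations)

# Hypothesis: all swans are white.
def all_swans_white(swan):
    return swan == "white"

# A thousand white swans leave the hypothesis unrefuted -- but not proven.
falsified(all_swans_white, ["white"] * 1000)               # False
# One black swan is enough to overturn it.
falsified(all_swans_white, ["white"] * 1000 + ["black"])   # True
```

The point of the sketch is that the two outcomes are not symmetric: the `False` result merely means "not yet refuted," while the `True` result is decisive.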
How does contemporary macroeconomics fare when compared to the above description of the scientific method?
Contemporary macroeconomic theory generates policy frameworks/hypotheses predicated on non-observable variables, e.g., expected future values of explanatory variables and various “natural” rates. Furthermore, many variables are subject to significant measurement errors. If crucial variables cannot be observed and/or accurately measured, the hypotheses cannot be truly tested. No falsifiability, no science.
In addition, there are the problems posed by the Lucas Critique (the instability of aggregate macroeconomic relationships) and Goodhart’s Law (policymaker dependent reality).
The man who cannot occasionally imagine events and conditions of existence that are contrary to the causal principle as he knows it will never enrich his science by the addition of a new idea.
— Max Planck
Prior to the onset of the current travails, economists and Fed policymakers repeatedly cited inflation-only targeting as the reason for the “Great Moderation.” They dismissed concerns about unsustainabilities in financial markets, asset prices, savings rates, and external imbalances. These variables were not part of their causal model-based policy framework.
We cannot solve our problems with the same thinking we used when we created them.
Inflation-only targeting, as reflected in the Taylor Rule, was predicted to result in stable inflation and trend growth. The decision to attach zero cost to the unsustainabilities, and the failure to act when they grew, contributed to an economic outcome at variance with the forecast. However, despite this failure, mainstream professional Ph.D. economists do not seem to have reduced the confidence they place in the Taylor Rule as a guide to policy.
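For readers unfamiliar with it, the Taylor Rule maps inflation and the output gap into a recommended policy rate. A minimal sketch, using the illustrative coefficients and 2% equilibrium values from Taylor’s original 1993 formulation (assumed here; the Fed’s actual reaction function is not published):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Recommended nominal policy rate, all arguments in percent.

    Taylor's 1993 illustrative weights: 0.5 on the inflation gap
    and 0.5 on the output gap, plus the equilibrium real rate r_star.
    """
    return (r_star + inflation
            + 0.5 * (inflation - pi_target)
            + 0.5 * output_gap)

# At 2% inflation and a closed output gap, the rule recommends a 4% rate.
taylor_rule(2.0, 0.0)   # 4.0
```

Note that the rule’s inputs include the output gap, which depends on an unobservable potential-output estimate — exactly the kind of non-observable variable discussed above.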
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.
Or Planck’s short version:
Science advances funeral by funeral.
In a similar fashion, criticism of the widely used DSGE model has been limited. Proposed changes have amounted to relatively minor (marginal) adjustments when compared to the model’s near absence of a financial sector and its inherently self-stabilizing structure.
However, even if economics is not a hard science, that does not imply that economics cannot be useful. Medicine is a mix of art and science. I believe that it is accurate to say that society places a greater value on the advancements in medicine than on all the increased understanding of the cosmos. The net benefit stream is more important than the relative purity of the science.
Economics might profit from replacing physics with medicine as its research role model. Adopting the research discipline of medicine as a role model would imply significant changes for economics.
Medicine acknowledges that risks are attached to virtually all courses of treatment. The medical profession also recognizes that not all treatments work or are as effective in all cases. It recognizes that in some cases negative side effects may preclude the use of a drug or treatment. It practices risk management by weighing the potential benefits against potential costs.
In many cases, it requires that patients be periodically tested to confirm the absence of negative side effects. Evidence of any adverse outcomes not discovered during trials can be cause to have a drug pulled or its use proscribed. There are processes and procedures in place to prevent researchers and pharmaceutical companies from ignoring negative side effects or carrying on while suggesting that possible side effects are someone else’s or some other specialty’s problem.
Furthermore, as a profession it is open to the possibility that both the benefits and costs may vary over time and across populations. Pharmaceuticals that were once effective, e.g., antibiotics, are acknowledged to have lost effectiveness. Pharmaceuticals with acceptable benefit/risk ratios in one section of the population may be inappropriate for use in another or because of the existence of another medical condition. It is recognized that a pharmaceutical can be beneficial at one dosage and fatal at another.
Contrast that with the behavior of the highly confident policymakers and macroeconomists.
They posit arguments based on the universality and time-invariant quality of their models. Currently, pundits are debating the impact of possible tax changes. They express their conclusions with certainty, but history suggests that the link between tax changes and growth is mutable. The Kennedy tax cuts are credited with stimulating GDP growth. Clinton’s economic advisors and others have argued that tax increases stimulated US GDP growth while at the same time arguing that tax increases in Japan derailed a recovery there.
The link between financial aggregates and the economy has followed a similar path. Some Keynesians (but interestingly enough not Keynes) argued that money did not matter and that fiscal policy could be used to exploit a stable Phillips Curve (which, it turned out, wasn’t stable). In retrospect, the policy stances came to be viewed as contributing to the inflation of the 1970s. Monetarism was reborn and the back of the inflation was broken. However, high and volatile rates of interest induced a spate of financial innovation. Links between the monetary aggregates and GDP became unstable. Monetary targeting was dropped. With the rise of inflation-only targeting, policymakers chose to treat all financial measures, including debt and liquidity levels, but not interest rates, as devoid of any informational content and inappropriate targets for monetary policy. Post the housing bubble, however, liquidity became a focus of policy as the policymakers took action to prevent the deleveraging of the financial system, i.e., levels of liquidity, debt and leverage were important after all.
Prior to the current recession, policymakers dismissed all the policy supported unsustainabilities in the financial markets and the real economy (think negative side effects) during the run up to the crisis, then laid all the blame for the crisis and its aftermath on the failure of regulation.
Fed policymakers also failed to practice risk management when setting interest rate policy. In response to the slowing of the economy, the Fed eased dramatically starting in late 2000. The target for the Fed funds rate was reduced to 1.00%, where it stayed until mid-2004. The rate was then ratcheted up 25 basis points per FOMC meeting until mid-2006, when the Fed funds rate reached 5.25% and accommodation was deemed to have been removed. According to the CBO, the output gap (as a percentage of GDP) was -0.7, 0.0, and +0.2 for 2004, 2005, and 2006, respectively. In terms of the real economy, how much potential benefit/upside risk was there in accommodative monetary policy in 2004-2006?
Economists and others had cited a list of growing economic and financial imbalances prior to and during the period 2004-2006, but the Fed chose to ignore the warnings. We now know with some certainty the minimum size of the downside risks associated with having interest rates too low for too long: an output gap at close to -7% of GDP, a sizable negative output gap that is expected to last for years, a ballooning of the fiscal deficit, the severe crippling of the financial system, etc.
Exposing the economy to those risks in return for upside potential of less than 1% of GDP was a failure of risk management. The failure stemmed from policymakers’ misplaced confidence in their model and a resulting willingness to dismiss risks without so much as a thought. (If you doubt the veracity of this line of argument, then I suggest you read Rajan’s speech and Kohn’s intellectually vacuous reply at the Jackson Hole Conference in 2005.)
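The asymmetry can be made concrete with a back-of-the-envelope expected-value calculation. The GDP figures come from the CBO numbers cited above; the crisis probability is hypothetical, chosen purely for illustration and deliberately modest.

```python
# Back-of-the-envelope risk-management arithmetic, in percent of GDP.
upside_gain = 1.0     # at most ~1% of GDP from easy policy in 2004-2006 (CBO gaps)
downside_loss = 7.0   # post-crisis output gap of roughly -7% of GDP
p_crisis = 0.15       # HYPOTHETICAL probability weight, purely illustrative

expected_cost = p_crisis * downside_loss          # ~1.05% of GDP
expected_benefit = (1 - p_crisis) * upside_gain   # ~0.85% of GDP

# Even with a modest assumed crisis probability, the expected cost of
# staying accommodative exceeds the expected benefit -- before counting
# the multi-year persistence of the slump or the fiscal damage.
```

The point is not the particular probability chosen but the shape of the trade: a capped, sub-1%-of-GDP upside weighed against a fat-tailed, multi-point downside.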
In regard to the second and third assertions in the original post, I believe that no one, including PhD economists, has a sufficient understanding of either the real economy or financial markets to be able to forecast or recommend policy prescriptions with anything like the confidence expressed by any of the most widely-read, degree-holding pundits or policymakers. In short, I agree with the second assertion, but disagree with the third.
For all their differences, pundits share a position. They all assert with great confidence that there is a riskless, low-cost solution to the economic problems we face. They also assert that the solution could be put in place if only correct-thinking policymakers were empowered. I differ. I do not think that the best solution will be riskless or low cost. However, economics can make a significant contribution to economic well-being, but only if it recognizes that it will never be able to make predictions (and propose policy) with anything like the confidence of a hard science.
However, to my mind the most objectionable and ironic aspect of the original post isn’t the claim that only professional economists can comment on economics and economic policy with confidence, but rather that the conclusion itself reflects really bad economics.
The original post cited a number of reasons that analysis performed by PhD economists has more merit and is more valuable than analysis or commentary by non-PhD economists and non-economists alike. They are:
1. Ph.D. economists are bright (although not the only bright people),
2. They have devoted years to study of economics, and
3. They have expended enormous efforts.
This is to say that economics done by professional Ph.D. economists embodies more human capital and is therefore superior to economics done by those who have not devoted the same amount of resources, e.g., time and effort. This is ironic beyond belief. It is nothing more than a very crude (quality-adjusted?) labor theory of value, a theory found to be wanting at least a hundred years ago by Jevons, Walras, Menger, Marshall, et al.
It is of course possible that the author was aware of the failure to predict the worst recession and financial crisis since the Great Depression, but did not mention the failures in order not to undermine his own argument that economics is a hard science. While this explanation would explain the apparent retreat to a labor theory of value (totally ignoring the usefulness of the output, demand side), it only does so by emphasizing the profoundly unscientific and anti-intellectual aspects of the argument.