Yves here. This post indirectly makes a powerful case for automatic stabilizers, as in programs like food stamps and other social safety nets, where spending automatically rises in bad times and falls off when the economy is robust. You don’t need economists intervening as much if government spending levels auto-correct.
By Richard Alford, a former New York Fed economist. Since then, he has worked in the financial industry as a trading floor economist and strategist on both the sell side and the buy side.
Forecasting is a necessary component of macroeconomic policy formation for the simple reason that macroeconomic policies act with “long and variable lags.” If policy is based upon a macroeconomic forecast that proves to be inaccurate, then the policy prescription will become inappropriate over the forecast horizon and degrade economic performance. Consequently, understanding the odds and possible impact of forecasting error should be an important element of policy design.
This post assesses the success of both private and official sources in forecasting turning points in the economy, which is one way to evaluate risks to the forecasts and policy. It concludes the experts don’t do a good job. It then addresses whether this failure matters, that is, whether the ability to forecast turning points in the economy is a sound benchmark. The post closes with a number of implications of the inability to forecast turning points for macroeconomic policy.
The Forecasting Record: Bad to Ugly
The macroeconomic record of forecasting turning points is poor, as illustrated in: “There Will Be Growth in the Spring”: How Well do Economists Predict Turning Points?, by Hites Ahir and Prakash Loungani (VoxEu April 2014):
In a classic 1987 paper, William Nordhaus documented that, as forecasters, we tend to break the “bad news to ourselves slowly, taking too long to allow surprises to be incorporated into our forecasts.” Papers by Herman Stekler (1972) and Victor Zarnowitz (1986) found that forecasters had missed every turning point in the US economy.
These traits appear to have persisted to this day. In our recent work we look at the record of professional forecasters in predicting recessions over the period 2008-2012 (Ahir and Loungani 2014). There were a total of 88 recessions over this period, where a recession is defined as a year “in which real GDP fell on a year-over-year basis” in a given country. The distribution of recessions over the different years is shown in the left-hand panel of Figure 1.
The panel on the right shows the number of cases in which forecasters predicted a fall in real GDP by September of the preceding year. These predictions come from Consensus Forecasts, which provides for each country the real GDP forecasts of a number of prominent economic analysts and reports the individual forecasts as well as simple statistics such as the mean (the consensus).
As shown above, none of the 62 recessions in 2008–09 was predicted as the previous year was drawing to a close. However, once the full realisation of the magnitude and breadth of the Great Recession became known, forecasters did predict by September 2009 that eight countries would be in recession in 2010, which turned out to be the right call in three of these cases. But the recessions in 2011–12 again came largely as a surprise to forecasters…
In short, the ability of forecasters to predict turning points appears limited. This finding holds up to a number of robustness checks…
To summarise, the evidence over the past two decades supports the view that “the record of failure to predict recessions is virtually unblemished,” as Loungani (2001) concluded based on the evidence of the 1990s.
The results presented in the VoxEu post indicate that official forecasts are no better than private sector forecasts:
Forecasts from the official sector, either from national sources or international agencies, are no better at predicting turning points. In the case of the US, the March 2007 statement by then-Fed Chairman Bernanke that “the impact on the broader economy and financial markets of the problems in the subprime market seems likely to be contained” has received a lot of attention. Another Fed Chair, Alan Greenspan, told his colleagues in late-August 1990 – a month into a recession – that “those who argue that we are already in a recession are reasonably certain to be wrong.” Forecasts by Fed staff have also missed turning points, as discussed in Sinclair, Joutz and Stekler (2010).
During the Great Recession, Consensus forecasts and official sector forecasts were so similar that statistical horse races to assess which one did better end up in a photo-finish.
Is Correctly Identifying Turning Points a Useful Benchmark for Macroeconomic Forecasts?
Most evaluations of the accuracy of macroeconomic forecasts are based on comparisons of measures of the average absolute size of the forecast errors. However, the number of periods in which the economy is trending far exceeds the number of periods in which the economy is turning. Consequently, evaluations of forecast accuracy based on average errors place a low weight on the accuracy of forecasts at turning points. Macroeconomic policy was developed to reduce the incidence and costs of significant deviations from full employment, not relatively small quarter-to-quarter deviations from trend. Hence, the ability to forecast turning points in the economic cycle is a more appropriate measure of forecast accuracy for setting macroeconomic policy than is the average error over a cycle or cycles.
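The point can be made concrete with a toy example. In the sketch below, all numbers are invented for illustration: forecaster A hugs the trend and never predicts the downturn, while forecaster B is noisier in normal years but calls the recession. A wins on mean absolute error precisely because trend years dominate the sample, even though A misses the one turning point that matters for policy.

```python
# Hypothetical illustration (all figures invented): average-error metrics
# can favor a forecaster who misses the one turning point that matters.

actual = [3.0, 2.8, 3.1, 2.9, -2.5, 3.0, 3.2, 2.9]  # one recession year (-2.5)

# Forecaster A: hugs the trend, never predicts the downturn.
forecast_a = [3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0]

# Forecaster B: noisier in trend years, but calls the recession.
forecast_b = [2.0, 3.8, 2.1, 3.9, -2.0, 2.0, 4.2, 1.9]

def mae(actual, forecast):
    """Mean absolute forecast error across all years."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def called_downturns(actual, forecast):
    """Count years of negative actual growth where the forecast was also negative."""
    return sum(1 for a, f in zip(actual, forecast) if a < 0 and f < 0)

# A has the smaller average error, yet predicted zero of the downturns;
# B has the larger average error, yet called the recession.
print(mae(actual, forecast_a), called_downturns(actual, forecast_a))
print(mae(actual, forecast_b), called_downturns(actual, forecast_b))
```

By the average-error yardstick A is the better forecaster; by the turning-point yardstick, the one relevant to stabilization policy, A is useless.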
What Are the Implications of the Inability of Forecasts to Identify Turning Points?
Beyond the risks inherent in the economy itself, the inability to forecast turning points means that economic agents must also contend with the risk that they will incorrectly forecast the future course of policy. Given the dynamic and complex nature of a modern economy, economic agents and policymakers alike also face the risk that policymakers will at times set policy inappropriately. However, there is little evidence that policymakers consider, or communicate to the markets, the possibility of outcomes far removed from the central tendency of their forecasts.
While macro policy mistakes are inevitable, the costs of policy mistakes can be minimized if policymakers continuously and diligently explore the risk that their models are mis-specified, the forecasts are incorrect, and policies are inappropriate. This is not to say that policymakers should alternate between slamming on the brakes and flooring the accelerator, but rather be willing to alter the policy more quickly when developments change the confidence in the forecast and/or the risk/return profile of the policy stance.
To be successful in the long run, policymakers must be comfortable making decisions despite the risks and uncertainty. They must also have the flexibility to adjust policy in light of 1) new evidence that the economy is not evolving as expected, 2) risks not previously appreciated, or 3) changes in the structure of the economy or sectors that increase the probability that the underlying model is mis-specified. They must constantly weigh the risk/return profiles of alternative policy paths and consider all possible outcomes, not just the most likely.
Unfortunately, policymakers have been anything but open minded about the possibility that their underlying model is mis-specified and their forecast and policy decisions are incorrect. The rude and anti-intellectual response of the attendees at the Jackson Hole symposium in 2005 to Rajan’s presentation (“Has Financial Development Made the World Riskier?”) is just one example.
It is also clear that the Fed never explored the downside risks to its chosen policy path prior to the crisis of 2007. The civilian unemployment rate peaked at 6.2% in July of 2003. The upside potential to the continuance of the accommodative policy stance, as reflected in the elevated unemployment rate, declined rather steadily. In 2004, with the unemployment rate at 5.6%, the Fed started to raise the Fed funds rate from 1% at a “measured pace,” i.e., 25 basis points per FOMC meeting. By May of 2005, the unemployment rate was 5.1%, down 1.1 percentage points from its peak in July of 2003. The upside potential to the continuation of the accommodative stance had been halved. (This assumes that the Fed viewed the 4.2% unemployment rate as close to its estimate of NAIRU — the Fed funds rate plateaued at 5.25% in 2006 when the unemployment rate was 4.2%.)
What was happening to the downside risks? The Fed acted as if there were no downside risks and dismissed any and every argument that economic and financial fragilities and unsustainabilities were building. Post-crisis, the Fed defended its failure to see the crisis and recession by arguing that no one saw it coming. However, numerous parties cited risks, including Shiller and Rajan. Most telling, Fed Governor Kohn gave a speech in 2003 in which he cited and dismissed criticisms of Fed policy, including the possibility of financial dislocations stemming from a correction in the housing market.
Unfortunately, economic and financial fragilities and unsustainabilities were building. The rate of house price appreciation was accelerating, loan-to-value ratios were rising, financial institutions were becoming increasingly leveraged and were employing CDS and other off-balance sheet vehicles, corporate bonds were becoming covenant light (reduced investor protections), and investors were reaching for yield.
The housing market rolled over in 2006 and eventually took with it the balance sheets of households and financial institutions, as well as the performance of the real economy. The unemployment rate rose, reaching 10% in October of 2009, despite monetary and fiscal stimulus. While the precise size, depth and duration of the recession were unknowable in advance, one did not have to foresee its severity to realize that the risk/return profile associated with the policy stance had deteriorated and become unfavorable long before the Fed funds rate ceased to be accommodative.
An ex post, and hence somewhat unfair, exploration sheds light on the deterioration of the risk/return profile. The Fed pursued an accommodative policy through 2005 when the possible return (upside potential) was a 1 percentage point decline in the unemployment rate and the downside risk was an unemployment rate at least 6 percentage points higher than the target the Fed was aiming for. To risk losing 6 when the best you can do is gain 1 does not make a lot of sense, unless one has done all his or her homework and is very confident of a favorable outcome.
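The asymmetry of that bet can be put in rough expected-value terms. The probabilities below are purely illustrative assumptions, not anything the Fed published; the 1-point upside and 6-point downside come from the paragraph above.

```python
# Purely illustrative expected-value arithmetic for the asymmetric payoff
# described in the post. Probabilities are invented for the example.
upside = 1.0     # pp of unemployment decline still available
downside = -6.0  # pp of unemployment rise if the policy fuels a crisis

def expected_gain(p_good):
    """Expected payoff (in pp of unemployment) given P(favorable outcome)."""
    return p_good * upside + (1 - p_good) * downside

# Even at 80% confidence of a good outcome, the bet is negative:
# 0.8 * 1 - 0.2 * 6 = -0.4
print(expected_gain(0.80))

# Break-even requires p * 1 = (1 - p) * 6, i.e., p = 6/7, roughly 86% confidence.
print(6 / 7)
```

On these invented numbers, the policy only breaks even if one is about 86% sure nothing goes wrong, which is a demanding standard for an institution that, by the record above, cannot forecast turning points.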
The Fed was very confident, but hadn’t done its homework. Perhaps the failure to forecast or to allow for even the possibility of a housing price bubble and financial crisis reflected its chosen intellectual framework, i.e., DSGE models. The financial sector was excluded from DSGE models, as it was viewed as a passive intermediary between the representative agents, and of no macroeconomic importance itself. While the assumed absence of a financial sector presumably aided in the construction and tractability of the DSGE models, the question remains: how did models with no financial sector come to dominate policymaking at the Fed and other central banks? After all, in 2006, the market capitalization of firms in the financial sector was 22.3% of the total market capitalization of the S&P 500. Employment by the financial sector was more than 7% of all private sector employment. The scale of the profits and the number of jobs in the sector make it difficult to believe that it was a passive intermediary of no macroeconomic significance. Furthermore, history is replete with instances of financial disruptions being followed by unusually long and deep recessions, e.g., Japan’s lost decades and the US Great Depression.
Nonetheless, the Fed never took warnings about the financial risks seriously. The Fed’s surprise at the crisis and recession reflects its failure to fully research and understand the implications of its policies, both macroeconomic and regulatory. It blithely assumed that the announced low-for-long accommodative monetary policy was a scalpel with narrowly defined implications for the real economy, when, as it turned out, it was a relatively blunt instrument with implications for the stability of the financial markets as well as the real economy.
The results cited in the VoxEu blog post also raise questions about the wisdom of central bank commitments to policies of “expectations management” and “forward guidance”. The term “expectations management” is an example of Orwellian Newspeak. “Expectations management” is an exercise in expectations (thought?) control, i.e., the Fed uses policy commitments to cause the range of alternative private forecasts of policy and the real economy to converge with its own.
There are a number of problems and risks. The official forecasts are no better at identifying turning points than are the private forecasts that they replace. Expectations management can exacerbate real economic problems if it succeeds in replacing alternative forecasts when its own underlying forecast is incorrect. For example, the Fed succeeded in convincing people post-2000 that: 1) policy was responsible for the advent of the “Great Moderation” and ensured continued trend growth with low and stable rates of interest and inflation, 2) housing prices were not in a “bubble”, and 3) critics who saw the possibility of future financial and economic problems were incorrect.
In doing so, the Fed encouraged the increased use of leverage and the willingness to incur higher ratios of debt to income. As a result, it contributed to the economic and financial fragilities, the scale of the economic and financial dislocation that followed the crisis, and the financial and economic hardship experienced by a large number of households.
The Fed also argues strongly that the use of talk as policy has had beneficial effects. The “talk” has had easily observable effects on the financial markets. However, it is not clear if the developments in the financial markets have been the result of increased policy transparency or of decreased near-term policy uncertainty.
“Forward guidance,” a commitment to a predetermined policy path, has been cited as particularly effective, even though it is less transparent than a rule-based policy path, e.g., the Taylor Rule. The Taylor Rule mechanically linked changes in specified variables to the policy response. In contrast, forward guidance has been a litany of short-lived commitments. In addition, the rationale for the purchases of $85B (as opposed to say $125B or $60B) per month of Treasuries and MBS was never disclosed. Neither was the rationale underlying the decision about the pace at which to taper the purchases. In the absence of an underlying rationale for the chosen path, markets are without a benchmark to evaluate the policy or to form expectations for policy for the periods after the commitment expires.
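The contrast is worth making explicit: the Taylor Rule is a one-line mechanical mapping from observable data to the policy rate, so anyone can check the prescription. The sketch below uses Taylor’s original 1993 coefficients (a 2% neutral real rate and 2% inflation target); these are the textbook parameters, not values the Fed has ever committed to.

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor (1993) rule: nominal policy rate as a mechanical function of
    observed inflation and the output gap (all values in percent).

    r_star  : assumed equilibrium real rate (Taylor's original 2%)
    pi_star : inflation target (Taylor's original 2%)
    """
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# With inflation at target and a zero output gap, the rule prescribes 4%:
print(taylor_rule(2.0, 0.0))

# With 4% inflation and a -1% output gap, it prescribes 6.5%:
print(taylor_rule(4.0, -1.0))
```

Whatever one thinks of the coefficients, the rule gives markets a published benchmark against which actual policy can be judged; a sequence of discretionary dollar figures for asset purchases does not.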
However, the commitment to a path for policy has trumped the opaqueness of policy as the asset markets have rallied. It appears that the commitment to a particular path for policy in the near term reduces the uncertainty for players in the financial markets and therefore increases the incentive to employ leverage and maturity mis-matches and to reach for yield. Hence, “forward guidance” has the ability to generate responses in the financial markets even in the absence of policy transparency. The beneficial effects on the real economy are less certain, as are the implications for financial stability and the sustainability of real growth.
Also, as a matter of internal consistency, it is more than a little ironic that while the Fed embraces models that assume that markets exhibit ‘rational’ expectations, it also argues that it has to actively and almost continuously change market expectations about policy. If the market is incapable of correctly, i.e., rationally, interpreting a carefully worded FOMC statement, what is the chance that the market’s other expectations, based on noisy and often conflicting data, are “rational”?
The historical track record of macroeconomic forecasting, including the recent past, is inconsistent with the degree of confidence expressed by official and private forecasters. In evaluating macroeconomic forecasts and resulting policy prescriptions, one should employ a healthy dose of skepticism. The size of the dose should vary directly with the precision of the forecast, the expressed confidence of the forecaster and the degree of commitment that the forecaster has shown to a model or school of thought.
Furthermore, committing to follow a predefined path for policy for an extended period of time, even if one is highly confident, is risky. If the underlying forecast is accurate and the policy correct, the upside potential will be realized. If, however, the forecast is not accurate or the policy has unforeseen and undesirable side effects, the commitment to the policy is likely to increase the costs attached to the adoption of that policy.
While the Fed talks as if it knows the outcomes of its policy choices with a differentially higher degree of certainty, the record indicates that it doesn’t.
Policymakers should adopt disciplines that reflect the inherent riskiness in setting policy. There are numerous disciplines that involve decision making in the face of risk and uncertainty, e.g., game theory. Perhaps policymakers should consider incorporating findings from those fields into policy design and implementation, rather than setting policy as if they know with certainty the future course of the real economy and the interactions among policy, the financial markets and the real economy.