Yves here. Now the big artillery is coming out. The Gang of Four, as Bill Black called them, that attacked Gerald Friedman for publishing a model that showed that Bernie Sanders’ economic plan could work, delivered what they thought was a roundhouse punch on Thursday, in the form of a short paper by Christine Romer and David Romer. Their conclusions, which were taken up quickly by the mainstream media, such as the New York Times, would seem to be fatal:
The demand impacts forecasted are too large
The Friedman model assumes the output gap is larger than it is
The plan is likely to do little to increase productive capacity
The latter claim is bizarre since quite a few economists around the world are now pushing for infrastructure spending, precisely because it has spillover effects on productivity. For instance, Larry Summers pointed out in the Washington Post in 2014 that the IMF estimated that every dollar of infrastructure spending increased GDP by nearly $3, and the Sanders plan calls for significant infrastructure expenditures. By contrast, Friedman used very conventional fiscal multipliers of 1.2 in the early years, falling to 0.8.
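The arithmetic behind the two multiplier assumptions can be put side by side. A minimal sketch, using an illustrative $100 billion outlay (the figure and the helper function are assumptions for illustration, not from either model):

```python
# Hypothetical comparison of the multiplier assumptions discussed above:
# the IMF-style infrastructure multiplier (~3) versus the conventional
# fiscal multipliers Friedman used (1.2 early, falling to 0.8 later).

def gdp_impact(spending_billions, multiplier):
    """First-order GDP impact of fiscal spending, in billions."""
    return spending_billions * multiplier

spend = 100  # an illustrative $100 billion of infrastructure outlays

impact_imf = gdp_impact(spend, 3.0)             # ~$300 billion
impact_friedman_early = gdp_impact(spend, 1.2)  # ~$120 billion
impact_friedman_late = gdp_impact(spend, 0.8)   # ~$80 billion
```

The same dollar of spending, in other words, is scored at nearly four times the impact under the IMF infrastructure estimate as under Friedman's later-year assumption — which is what makes the "too optimistic" charge against Friedman odd.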
So why do the Romers say so confidently that Friedman is off base? They are using a different model. And as Galbraith explains long-form, it’s one with a pretty crappy track record in post-crisis America. And Galbraith gives an important warning:
In the real world, forecasts are a very weak guide to policy; when attempting to make major changes the right strategy is to proceed and to take up the challenge of obstacles or changing circumstances as they arise. That is, after all, what Roosevelt did in the New Deal and what Lyndon Johnson did in the 1960s. Neither one could have proceeded if today’s economists had been around at that time.
By James Galbraith, professor of Government/Business Relations at the Lyndon B. Johnson School of Public Affairs, the University of Texas at Austin. His most recent book, Inequality and Instability, was published in March, 2012 by Oxford University Press. The next will be The End of Normal, from Free Press in 2014. Originally published at the Institute for New Economic Thinking website
The Romer/Romer letter to Professor Gerald Friedman marks a turning point. It concedes that there are indeed important issues at stake when evaluating the proposed economic policies of Presidential Candidate Bernie Sanders. These issues go beyond the political debate and should be discussed seriously between and among professional economists.
All forecasting models embody theoretical views. All involve making assumptions about the shape of the world, and about which features can, and cannot, safely be neglected. This is true of the models the Romers favor, as well as of Professor Friedman’s, as it would be true of mine. So each model deserves to be scrutinized.
In the case of the models favored by the Romers, we have the experience of forecasting from the outset of the Great Financial Crisis, which was marked by a famous exercise in early 2009 known as the Romer-Bernstein forecast. According to this forecast (a) the economy would have recovered on its own, in full and with no assistance from government, by 2014, (b) the only effect of the entire stimulus package would be to accelerate the date of full recovery by about six months, and (c) by 2016, the economy would actually be performing worse than if there had been no stimulus at all, since the greater “burden” of the government debt would push up interest rates and depress business investment relative to the full employment level.
It’s fair to say that this forecast was not borne out: the economy did not fully recover even with the ARRA, and there is no sign of “crowding out,” even now. The idea that the economy is now worse off than it would have been without any Obama program is, to most people, I imagine, quite strange. These facts should prompt a careful look at the modeling strategy that the Romers espouse.
I attach here the manuscript version of Chapter 10 from my 2014 book, The End of Normal, “Broken Baselines and Failed Forecasts,” which discusses these issues in (I hope) accessible detail.
It should be noted that these issues, while important, do not bear on whether economists should try to discourage American voters from supporting the Sanders program. In the real world, forecasts are a very weak guide to policy; when attempting to make major changes the right strategy is to proceed and to take up the challenge of obstacles or changing circumstances as they arise. That is, after all, what Roosevelt did in the New Deal and what Lyndon Johnson did in the 1960s. Neither one could have proceeded if today’s economists had been around at that time.
The End of Normal By James K. Galbraith (Free Press, 2014)
Chapter Ten: Broken Baselines and Failed Forecasts
The Great Financial Crisis broke into public view in August, 2007, when the interbank lending markets suddenly froze. It built through that fall and winter, to the failure and fire sale of Bear Stearns in March, 2008. By then the US economy was slowing down, and Congress enacted the first “stimulus” package at the request of President George W. Bush. But this was all prologue. Through the spring, summer and early fall, the official line continued to be that problems were manageable, that the slowdown would be modest, that growth would soon resume. The presidential campaign played itself out that year on topics of greater interest to the voting public: a prolonged debate among the Democrats over the details of health care reform and between the eventual nominees over the war in Iraq. True panic would await the bankruptcy of Lehman Brothers, the sale of Merrill Lynch, the failure of AIG, and the seizure of Fannie Mae and Freddie Mac in September, 2008.
Then panic came. Money-market-mutual funds fled from the investment banks whose debts they had unfortunately held. Their depositors then began to flee, seeking safety in insured deposits in the banks. The funds had to be rescued by a guarantee from the President, who duly committed the Treasury’s Exchange Stabilization Fund to their support. As depositors then fled the smaller banks to the larger ones, deposit insurance limits had to be raised. Globally, access to dollars dried up, threatening banks, especially in Europe, that needed dollars to service debts they had incurred at low rates of interest in New York. This threatened the collapse, ironically, of foreign currencies against the dollar. Meanwhile Treasury Secretary Henry Paulson and his team struggled to come up with a program that could keep the big banks from sinking under the vast weight of corrupt and illiquid mortgage-backed securities, made against over-priced and over-appraised properties, that they held.
Once the immediate panic was quelled, the public’s reaction to these events was conditioned by three major and closely-related forces. First, new loans were no longer available, on any terms let alone the preposterously easy ones of the pre-crisis years. Second, home values declined and so even if lenders had wanted to resume lending the collateral to support new loans had disappeared. And third, there was fear. The consequence was a very sharp decline in household spending (and so also in business investment), in production and employment, and a sharp increase in private savings. Households, mired in debt, began the long, slow process of digging themselves out.
In the face of such events, one might think that economists responsible for official forecasting would be moved to review their models. Yet as the financial world crumbled, there is no sign that any such review took place. Indeed there were practically no modelers working on the effects of financial crisis, and so the foundations for such a review had not been laid. Though many observers saw that the disaster was of a type and severity not captured by the available models, they could not change the structure of the models on the fly. So the models absorbed the shock, and went on to predict – as they always had done in the past – a return to the pre-crisis path of equilibrium growth.
Consider the baseline economic forecast of the Congressional Budget Office, the officially nonpartisan agency lawmakers rely on to evaluate the economy and their budget plans. In its early January forecast in 2009, CBO measured and projected the departure of actual from “normal” economic performance – the “GDP gap.” The forecast had two astonishing features. First, the CBO did not expect the recession to be any worse than that of 1981-1982, our then-deepest postwar recession. Second, CBO expected a strong turnaround beginning late in 2009, with the economy returning fully to the pre-crisis growth track by around 2015, even if Congress had taken no action at all.
Why did Congress’s budget experts reach this conclusion? On the depth and duration of the slump, CBO’s model was based on the postwar experience, which is also the run of continuous statistical history available to those who program computer models. But a computer model based on experience cannot predict outcomes more serious than anything already seen. CBO – and every other modeler using this approach – was stuck in the gilded cage of statistical history. Two quarters of GDP loss at annual rates of 8.9 and 5.3 percent were beyond the pale of that history. A long, slow recovery thereafter – a failure to recover in any full sense of that word – was even more so.
Further – and partly for the same reason that past recessions had been followed by quick expansions – there was baked into the CBO model a “natural rate of unemployment” of 4.8 percent. This meant that the model moved the forecast economy back toward that value over a planning horizon of five or six years, no matter what. And the presence of this feature meant that the model would become more optimistic when the news got worse. That is, if the news brought word of a ten percent unemployment rate, instead of eight percent, then the model would project a more rapid rebound, so as to bring the economy back to the natural rate. A twelve percent unemployment rate would bring a prediction of even faster recovery. In other words, whatever the current conditions, the natural rate of unemployment would reassert itself over the forecast horizon. The worse, the better.
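The "worse is better" property described above follows mechanically from mean reversion toward a fixed natural rate over a fixed horizon. A minimal sketch (this is not the CBO's actual model; the linear reversion rule and the six-year horizon are illustrative assumptions consistent with the description above):

```python
# Minimal sketch of natural-rate mean reversion: the forecast closes the
# gap between current unemployment and an assumed "natural rate" in equal
# annual steps over a fixed horizon, no matter how large the gap is.

NATURAL_RATE = 4.8   # percent, as in the CBO model described above
HORIZON = 6          # years over which the model closes the gap (assumed)

def projected_path(current_rate, natural_rate=NATURAL_RATE, horizon=HORIZON):
    """Forecast unemployment path: linear reversion to the natural rate."""
    step = (current_rate - natural_rate) / horizon
    return [round(current_rate - step * t, 2) for t in range(horizon + 1)]

# The paradox: a worse starting point implies a *faster* projected recovery.
path_from_8 = projected_path(8.0)    # improves ~0.53 points per year
path_from_12 = projected_path(12.0)  # improves 1.2 points per year
```

Both paths end at 4.8 percent by construction; the only thing bad news changes is the projected speed of the rebound. The worse, the better.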
Then there was another problem, which has to do with the way economists inside the government interact with those outside. When the government has a scientific question – say on the relation of tobacco to cancer or the danger of chlorofluorocarbons to the ozone layer – there is a protocol for getting an answer, which typically consists of setting up a commission of experts who, within a relatively narrow range, are able to deliver a view. That view may be controversial, but there is at least a fairly clear notion of what the scientific consensus view is, as distinct from (say) the business view. The Intergovernmental Panel on Climate Change does not feel obliged to include, among its experts, the designated representative of the coal companies.
With economic forecasting there is no such independent perspective. A large share of working economic forecasters are employed by industry – especially by banks. And those in academic life who forecast often make some outside living by consulting with private business. The CBO and the Office of Management and Budget – which do the economic forecasting for the government – are not independent centers but derivative of the dominant business/academic view. Typically the business forecasters aggregate their views into an average or community viewpoint; this is published as the “Blue Chip” consensus. And here is the result: a vigorous dissent – say by Nouriel Roubini – in early 2009, making the case that conditions were far worse than they seemed, and that the long-term recovery forecast was wholly unrealistic, would have been immediately classified as an eccentric or unusual point of view. As such, it would be either dropped from the consensus (as an “outlier”) or simply averaged in. Either way, it would carry little weight.
And meanwhile, the financial economists – those employed directly by banks being perhaps anxious to avoid having their institutions seized – became a chorus of optimists. In April 2009, for example, in New York City at the annual Levy Institute Conference on economics and finance, James W. Paulsen of Wells Capital Management projected a “V-shaped” economic recovery and scrawled “Wow!!” over a slide depicting the scale of the stimulus to that point.* CEA Chair Christina Romer polled a bipartisan group of academic and business economists, including those of this type, and senior White House economic adviser Lawrence Summers told “Meet the Press” that the final package reflected a “balance” of their views. This procedure guaranteed a result near the middle of the professional mind-set.
The method is useful if the errors are unsystematic. But they are not. Even apart from institutional bias, economists are by nature cautious, and in any extreme situation the mid-point of professional opinion is bound to be wrong. Professional caution dampened the ardor even of those who may have been ideologically disposed to favor strong action. In November, 2008, 348 left-leaning economists signed a letter to the President-elect, demanding a stimulus of just $350 billion.* Within a few weeks, a much larger package was on the table, but the tentativeness of the left position helped to tie the hands of those within the administration who might have pushed for more.
The CBO and the OMB took the measure of these views as though they were an unbiased sample, which they were not, and as though the situation were within the normal post-war range, which it was not. It’s hard to imagine a set of forecasting principles and consultation practices less well-suited to recognizing a systemic breakdown. Short of a decision to override the forecasting exercise – a decision that could only have come from the President, for which he was not qualified, and that would have been open to criticism as “political” interference in a technical process – there was no way for an unvarnished analysis of the grim situation to make it to the center of policy-making.
The principles underpinning the models, as they were built, reinforced the notion that recovery would be automatic and inevitable. A key such principle is that of a “potential rate of total output” or potential GDP. This is usually calculated in a simple way, by extending the trend of past growth of total production (gross domestic product) into the future. It is therefore assumed that the capacity to produce continues to grow, even if actual growth and production fall short for a time. The presumption is that the economy can, if properly managed (or not-managed, according to ideas about policy) always return to the rate of production predicted by the long-run trend of past output growth.
The concept of a natural rate of unemployment – also known as the “non-accelerating inflation rate of unemployment” (NAIRU) – provides a notional mechanism for the return to potential. The NAIRU idea is that the unemployment rate is determined in a market for labor, governed by the forces of supply and demand, which impinge on the level of wages – the price of labor. If there is unemployment, then market pressures will drive real wages (wages measured in terms of their purchasing power) down. This will improve the attractiveness of workers to employers, and gradually bring the unemployment rate back to its normal or natural level. The return of employment to normal then implies a return of production to its potential.
The NAIRU had been a staple of textbook economics for decades, with the mainstream view holding that any effort to push unemployment below six or seven percent would generate runaway inflation. Over the 1980s these estimates came under challenge, and in the 1990s, as unemployment fell without rising inflation, the custodians of natural-rate estimates progressively lowered their numbers. For those who had argued against the high NAIRU, the relatively low NAIRU estimates, on the order of five percent, of the post-crisis forecasts produced a bitterly ironic outcome. Previously a high NAIRU had been an excuse for policy complacency in the face of high unemployment. Now the low NAIRU became a reason for considering the high actual unemployment rate to be anomalous – and thus for expecting a rapid “natural” rebound of economic growth – and once again for doing little.
The infamous Christina Romer/Jared Bernstein forecast of early 2009 illustrates this property. Romer and Bernstein, senior economists with the incoming Obama team and in Bernstein’s case the progressive economists’ lone representative on the team, predicted that with no stimulus package unemployment would peak at about 9 percent in early 2010.
With stimulus, they held that the peak would be around 8 percent in early 2009, a mis-forecast for which they were criticized somewhat unfairly*. The more important point is that with or without stimulus, Romer-Bernstein projected that unemployment would return to near five percent by 2014. And they projected that a return to unemployment below 6 percent, expected in 2012, would be delayed only by six months if there were no stimulus. Romer and Bernstein were trapped not so much by the unexpected depth of the slump, as by the entirely formulaic expectation, dictated by the NAIRU, of what would happen afterward.
Among other consequences, the official theory of events was forced to concede that the benefits of any fiscal expansion, or stimulus program, would be felt only in the very short term. From the standpoint of the new administration, as expressed by its own economists – perhaps unwittingly, but still – the entire American Recovery and Reinvestment Act – the signature response to the crisis – was only a stopgap. It was conceived and designed, at least in macroeconomic terms, as nothing more than a bit of a boost on the way to an otherwise-inevitable outcome. In practical terms it was much better than this, but that fact was downplayed, even concealed – rather than being trumpeted as it might have been.
The grip of teleology on the half-hidden mechanics of the forecasting process is actually even stronger than this. Looking out over ten years or so, official economic forecasts tend to show minor losses from stimulus programs. This is thanks to what they project to be the financial consequences – higher interest rates – of increasing the government’s debt. This effect is supposed to “crowd out” private capital formation that would otherwise have occurred. Once the shortfall in total production is made up, the extra interest burden associated with recovering the lost ground more quickly than otherwise is projected to weigh as a burden on private economic activity going forward. With less private investment, there will be (it is projected) a smaller capital stock, slightly less output, and eventually the gains associated with stimulus will be outweighed by these offsetting losses.
And so the Obama Team found itself working from predictions that foresaw a top jobless rate of around nine percent, with no stimulus, and a fast recovery beginning in the summer of 2009 with stimulus or without it. Those forecasts helped to place an effective ceiling on what could be proposed or enacted, as a practical matter, in the form of new public spending. When CEA Chair Romer proposed an expansion program well above one trillion dollars, Lawrence Summers advised her that the number was “extraplanetary.” Summers did not necessarily disagree with Romer’s estimates; in retrospect he has said he did not. Rather, his political judgment was that to propose such a large plan would undermine the credibility of the analyst – given the weight of the forecasts with the President and Congress. So eight hundred billion dollars over two years became the number around which expectations coalesced.
Given the pressure for quick results, the American Recovery and Reinvestment Act tilted toward “shovel-ready” projects like refurbishing schools and fixing roads, and away from projects requiring planning, design and long-term execution, like urban mass transit or high-speed rail – even though a large number of such long-term investments, including in energy and the environment, were tucked inconspicuously into the bill, as Michael Grunwald relates in his 2012 book on the expansion program, The New New Deal. There was an effort to emphasize programs with high estimated multipliers – “more bang for the buck” – though this was compromised by the acceptance, for political reasons, of tax cuts for about a third of the dollar value. Tax cuts have low multipliers, and especially so when the household sector feels strong pressure to pay down its debts rather than embark on new spending. The bill also provided considerable funds to state and local governments to hold off the sacking of teachers, police, firefighters, and other local public servants. Such expenditures are stabilizing, but they add nothing to the economy that wasn’t already there.
The push for speed also influenced the recovery program in another way. Drafting new legislative authority takes time. In an emergency, it was sensible for Chairman David Obey of the House Appropriations Committee to mine the legislative docket for ideas already commanding broad support (especially among Democrats). In this way he produced a bill that was a triumph of fast drafting, practical politics and progressive principle. But the scale of action possible by such means was unrelated to the scale of the disaster. There was, in addition, the desire for political consensus. The President chose to start his administration with a bill that might win bipartisan support and pass in Congress by wide margins. He was of course spurned by the Republicans; in spite of making tax cuts a major feature of the bill, no House Republican voted for it.
The only way to have avoided being trapped by this logic would have been to throw out the forecasters and their forecasts. The President might have declared the situation to be so serious, and so uncertain, as to require measures that were open-ended; that were driven by the demand for them; measures that would not be subject to appropriations limits and that would therefore break, as necessary, all budgetary rules and all the constraints. A program enacted under that stipulation could then have been scaled back, once in place, should it prove to provide more support than the economy required*. In early 2009, that would have been a remote risk.
Of course forecasting failures became apparent quite quickly when the economy did not remain on the growth track anticipated in early 2009. 2010 was a disappointment, as were 2011 and 2012; from the trough of the slump economic growth never exceeded 2.5 percent. The ratio of employment to population never improved, and unemployment declined largely because people, in increasing numbers, ceased looking for work. Residential investment in 2012 was half its 2005 level; total investment remained more than ten percent below its previous peak.
And what happened when the economy did not cooperate with the forecasts? Did this bring on a review of the models? Again, one might hope so. Again, one would be disappointed. The simple response of the forecasters to the failures was to run the models again, with a new starting point. Thus the five-year window for the start of a full recovery kept receding into the future, year by year, like a desert mirage. In 2009, full recovery was expected by 2014; in 2010, the date became 2015, and so forth.* Each year, the forecasters told us, the world would be “back to normal” – with full employment, recovered output, and high investment – five years hence.
It’s plainly unsatisfactory to forecast in this way. But what’s the alternative? To develop a different point of view, one needs a model capable of generating a picture of the future that does not necessarily yield a mirror of the past. To do that, one needs a structured grip on the underlying mechanics. One needs a vision of how the economy works, and one needs to have the courage to assert that vision – ironically – in spite of the fact that it cannot be derived from the past statistical record. This is the hard part. But only in this way can one see that the baseline is baseless, that equilibrium is vacuous, that the past growth path is not the single best forecast. There is no way to build such a model for use by functionaries, and hence no easy escape from the mental traps of statistical prediction.
In an emergency, therefore, forecasts are not only useless; they are counterproductive. Franklin Roosevelt was blessed by history, in that he came to power in an age bereft of national income accounts and economic forecasting models. He was working in the dark, with nothing to guide him but a sense of urgency, the advice of trusted observers, and the observed results of action. So he tried everything, the plausible and the implausible alike, and both received and accepted full credit for the results. If Roosevelt had had the benefit of today’s economic experts, Keynesians and anti-Keynesians alike, he would have never gotten the New Deal off the drawing boards.
It is no surprise, then, that in 2008 and early 2009 the policy responses to crisis were in roughly inverse scale to the influence of forecasting on the decision makers. In the financial sphere, and especially at the beginning, the panic was pervasive and the policy reaction boundless. Forecasts did not matter. The Federal Reserve dropped interest rates to zero, provided unlimited liquidity to banks, and essentially unlimited access to dollars to the world financial sector. Only the element presented to Congress, the monstrous cash-for-trash operation called the Troubled Asset Relief Program (TARP), was limited in scale, to an arbitrary number ($700 billion) chosen for political reasons [because it was “more than $500 billion and less than a trillion,” as one senior Senate staffer put it to me.] But TARP was window dressing. Initially, it was designed to have been a scheme of reverse auctions, intended to “discover prices” for the bad assets and so simulate the behavior of the financial markets that suddenly no longer existed. As the notorious three-page, $700 billion proposal made its way through Congress, it became clear that the idea would not work. The auctions could not be got up and running in the few days’ time available, and could not be protected from manipulation even if they had been.
Quite quickly, schemes based on mimicking or supporting markets were replaced by improvised quasi-nationalization. Deposit insurance was increased from one hundred thousand to $250,000 on all accounts, and extended to cover business payroll accounts that often exceed that limit for brief periods. The Treasury deployed TARP funds to take an equity position in the major banks, providing them with capital that they needed for regulatory reasons. Further funds flowed to Goldman Sachs, Morgan Stanley and foreign counterparts like Deutsche Bank from a decision to pay off the insurance giant AIG’s credit default swaps at face value. Meanwhile the Federal Reserve took over the commercial paper market, assuring a flow of funds to major companies that had been relying on money market mutual funds. The Fed also made dollars available on a large scale, at near-zero cost, and created its own (non-auction) support for toxic assets (via TALF, PPIP and other facilities), while Treasury addressed foreclosures with a program, called HAMP, to “foam the runway” for the banking system * by stringing out the process of foreclosure on millions of defaulted or troubled home loans. The Federal Reserve also swapped currencies, to the tune of $600 billion, with foreign central banks whose national banks otherwise would have had to sell off non-US assets in order to meet their dollar liabilities.* Such sales would have driven up the dollar against the Swiss franc, Euro, pound and yen.
The next piece of the policy was to restore confidence, at least in the future of the big banks. To this end, in early 2009 Secretary Geithner launched a program of “stress tests,” whose stated purpose was to establish the extent to which banks required more capital – a larger cushion of equity – to protect themselves against even worse economic and financial conditions. Whether they gauged this accurately is doubtful, since they did not require banks to mark their failing mortgage portfolios to market prices, and since – contrary to all normal practice – the results of the tests were negotiated with the banks themselves before being released. But the fakery of the stress tests served a larger purpose: it demonstrated to the world that the United States government was not going to assume control over the banks, or otherwise let them fail. Bank shares rallied – spectacularly.
The great financial rescue of 2008 permitted the banks to continue operations, soon enough free of constraint on activities and on compensation. None of the big banks were seized, no bankers jailed. With bank earnings but nothing else in full recovery by mid-2009, there was a savage political reaction. The Federal Reserve’s programs were, moreover, as opaque as they were Pharaonic, and ultimately an audit ordered by Congress along with disclosures incident to a suit by Bloomberg revealed embarrassing abuses, including the participation of at least two top bankers’ wives in the TALF program. And so voters punished the bailouts in the 2010 congressional mid-terms, while a terrified Congress enacted, as part of the Dodd-Frank financial reform act, numerous limits on repetition of the bailout policy.
Nevertheless, the banking system survived. The big banks especially were saved. Their market share increased. Their profits soared and their stock prices recovered. They could meet their need for revenue by lending abroad, by speculation in assets, and simply by pocketing the interest on their free reserves. Limits on their freedom of maneuver remained minor, as even the weak restrictions imposed in the Dodd-Frank Act came only very slowly into effect. As time went on, the Federal Reserve pursued its programs of “quantitative easing,” which were ongoing purchases of assets from the banking system, including large volumes of mortgage-backed securities. While this program was touted as support for the economy, its obvious first-order effect was to help the banks clean up their books, and to bury potentially-damaging home loans deep in the vaults of the Fed itself, where they might – or might not – eventually be paid.
As for the supposed economic policy goal of all this largesse, the President stated it many times. The purpose of saving the banks was to “get credit flowing again.” The Treasury Secretary, Timothy F. Geithner, stated his view that, as an affirmative policy, the government sought a world financial sector dominated by American big banks, and an American economy in which private banks played a leading role. But it is one thing to have the banks and something else entirely for them to make loans. And lending to support commerce and industry, still less new residential construction, was not on the agenda. New loans to businesses or households? To whom would they have made loans? For what? Against what collateral, with a third or so of American mortgages already underwater? Against what expectation of future profits? Five years later the banks had still not returned to this business. Nor would they.
In banking policy, as expressed by the President, the dominant metaphor was of plumbing. There was a blockage to be cleared. Take a plunger to the toxic assets, it was said, and credit conditions will return to normal. Credit will flow again. But the very metaphor was misleading. Credit is not a flow. It is not something that can be forced downstream by clearing a pipe. Credit is a contract. It requires a borrower as well as a lender, a customer as well as a bank. And the borrower must meet two conditions. One is creditworthiness, meaning a secure income and, usually, a house with equity in it. Asset prices therefore matter. With the chronic oversupply of houses, prices fell, collateral disappeared, and even if borrowers were willing they couldn’t qualify for loans. The other requirement is a willingness to borrow, motivated by the “animal spirits” of business enthusiasm. In a slump such optimism is scarce. Even if people have collateral, they want the security of cash. And it is precisely because they want cash that they will not deplete their reserves by plunking down a payment on a new car.
With few borrowers knocking on the door, the safe alternative for banks was to sit quietly and rebuild capital over time, by borrowing cheaply from the central bank and lending back to the government at longer terms and higher rates. For this there are two tools: the cost of funds from the Federal Reserve, and the interest rate on longer-term bonds, paid by the Treasury. So long as the two agencies are able to maintain the spread – the positive yield curve – between these two numbers, profitable banking is easy. This is what the large money center banks did in the 1980s, after the Latin American debt crisis; it is what the supervisors instructed regional and smaller banks to do in this crisis (by tightening up on underwriting guidance); and it is the preferred solution for all, among bankers and those who supervise them, who like a quiet life. But it does not lead to new loans.
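The carry trade described above is simple arithmetic. A rough sketch, using hypothetical rates and balance sizes (none of these figures come from the text), shows why sitting on the spread is so comfortable:

```python
# Hypothetical sketch of the bank "carry" described above: fund a position
# in longer-term Treasuries with cheap short-term borrowing from the
# central bank, and earn the spread between the two rates. All numbers
# here are illustrative assumptions, not data from the article.

def annual_carry_profit(assets, short_rate, long_rate):
    """Annual profit from holding `assets` of long bonds funded at the short rate."""
    spread = long_rate - short_rate   # the positive yield curve
    return assets * spread

# A bank holding $100 billion of Treasuries, funded at a near-zero
# short rate, earns the spread with essentially no new lending:
profit = annual_carry_profit(100e9, 0.0025, 0.0325)
print(f"annual carry: ${profit / 1e9:.1f} billion")
```

As long as the Fed holds the short rate down and the Treasury pays more at the long end, this income accrues without a single new loan to a business or household – which is the point of the paragraph above.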
After a certain amount of time, most detached observers would conclude that the purpose of unlimited (and forecast-free) intervention in banking was to save the banks and the bankers. A beneficial effect on the rest of the economy was not impossible, and not to be despised. But it was not the objective of policy in the financial sphere.
In the event, the American Recovery and Reinvestment Act poured about two percent of GDP in new spending and tax cuts per year, for two years, into a GDP gap estimated to average six percent for three years. In other words, as a matter of arithmetic, it might have worked to restore full production at levels prevailing before the crisis, if the fiscal multipliers had been close to two. But under the conditions, and the mix of spending types and tax reductions in the bill, they were not close to that value.
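The arithmetic in the paragraph above can be laid out directly. The sketch below simply multiplies the stimulus (about two percent of GDP per year) by a range of multiplier values and compares the result to the estimated six percent output gap; the list of multipliers tried is an illustrative assumption:

```python
# Back-of-the-envelope version of the paragraph's arithmetic: stimulus of
# roughly 2% of GDP per year, set against an output gap estimated at 6%
# of GDP per year. The multiplier values tried are illustrative; the text
# argues the actual mix of spending and tax cuts produced multipliers
# well below what restoring full production would have required.

stimulus_share = 0.02   # new spending + tax cuts, share of GDP per year
gap_share = 0.06        # estimated output gap, share of GDP per year

for multiplier in (0.8, 1.2, 2.0, 3.0):
    boost = stimulus_share * multiplier
    shortfall = gap_share - boost
    print(f"multiplier {multiplier:.1f}: boost {boost:.1%}, "
          f"remaining gap {shortfall:.1%}")
```

At low multipliers the boost closes only a fraction of the gap, which is the article's point: the package was too small, given the conditions, to restore pre-crisis output on its own.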
Given the political and economic constraints on the “stimulus package,” there remains a puzzle. Why, in the wake of financial calamity, did the US economy fall as little as it actually did? Employment and market incomes fell by some ten percent. Yet the fall in real GDP – in total output – from 2007 to 2009 was just three and a half percent; that in personal consumption expenditures was only two and a half percent. How come so little? Some are tempted to credit the unlimited actions of the central bank – but as we have seen, without loans there is no stimulus, however low the interest rate.
The answer is that total federal government spending (purchases of goods and services) rose by over six percent in the same period, health security payments rose twenty-five percent, Medicare nearly fifteen percent, Social Security by over sixteen percent, and other income security programs (notably, unemployment insurance) by over forty-five percent. Meanwhile total tax receipts fell over eight percent. In sum, and notwithstanding the small scale of the ARRA, the federal budget deficit rose to above ten percent of GDP.
Some of these changes were enacted after the stimulus package, in further measures including extended unemployment insurance, a federal add-on to state plans, and (later on) a two percentage-point reduction in payroll tax collections. But most of these changes were automatic – which is to say, they too were forecast-free. They reflected increased demands on federal programs that already existed, and that had existed for decades, as well as the lucky special circumstance that very high oil prices in 2008 helped produce a substantial cost-of-living adjustment in Social Security in 2009. And they reflected the effect of declining private incomes on tax revenues. Overall, some ten percent of private incomes were lost in the crisis, but about four-fifths of these losses were made good, in the aggregate, by the stabilizing force of a changed federal fiscal posture – and the resulting large deficits in the public accounts.
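The aggregate claim above reduces to two round numbers from the text – ten percent of private incomes lost, about four-fifths of that loss made good by the shift in the federal fiscal posture. A minimal sketch, using only those round figures:

```python
# Rough restatement of the paragraph's aggregate arithmetic. The two
# inputs are the text's own round numbers; everything else follows.

private_income_loss = 0.10   # share of private income lost in the crisis
offset_fraction = 0.80       # "about four-fifths" made good by stabilizers

net_loss = private_income_loss * (1 - offset_fraction)
print(f"gross private income loss: {private_income_loss:.0%}")
print(f"net loss after the fiscal offset: {net_loss:.0%}")
```

A ten percent gross loss shrinking to roughly two percent net is the measure of how much of the shock the automatic stabilizers absorbed – without any forecast, and without any new legislation for most of it.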
What saved the United States from a new Great Depression in 2009 was not the underlying resilience of the private economy, nor the recovery of the banking sector. And it was not the stimulus program, though that clearly did help. It was, mainly, the legacy of big government that had been created to deal with the Great Depression, and to complete the work of the New Deal. Big government programs – Social Security, Medicare, Medicaid, unemployment insurance, disability insurance, food stamps, and the progressive structure of the income tax – worked to transfer the loss of private income from households, which could not handle it, to the government, which could.
In short, the very scale of government created over the previous century meant that the public sector could step up to meet the needs of the population when the private sector no longer did so. And in the spirit of the age, according to which no achievement goes unpunished, this success – modest and qualified relative to expectations though it was – led to a rapid change in the public debate.