Cash Flow Discounting Leads to “Astronomically” Large Mistakes Over the Long Term

Your humble blogger is a vocal opponent of placing undue faith in single metrics and methodologies, like putting a lot of weight on total cholesterol as a measure of heart disease risk. One of the most troubling examples is the totemic status of discounted-cash-flow-based analyses. It’s a weird defect of human wiring that reducing a story about the future to a spreadsheet and then discounting the resulting cash flows (which layers on a second story, about what you think reasonable investment returns will be over that period) is treated as having a solidity and weight that simply is not there, a reality of its own that somehow takes precedence over the murky future it is meant to help understand.

An article by physicist Mark Buchanan in Bloomberg gives a layperson’s summary of an important paper by Yale economist John Geanakoplos and Doyne Farmer, a physicist at the Santa Fe Institute. It shows that the conventional use of discounted cash flow models over long time periods, as is often the case when discussing environmental impacts, is fatally flawed. And this finding comes after the publication of a paper by Andrew Haldane and Richard Davies of the Bank of England, which confirms what many have long surmised: that businesses use overly high discount rates, which is how you build short-termism into financial models. Needless to say, that assures underinvestment, particularly in infrastructure. Projects with paybacks beyond the 30-to-35-year time frame are treated as having no value at all. From their article:

First, there is statistically significant evidence of short-termism in the pricing of companies’ equities. This is true across all industrial sectors. Moreover, there is evidence of short-termism having increased over the recent past. Myopia is mounting.

Second, estimates of short-termism are economically as well as statistically significant. Empirical evidence points to excess discounting of between 5% and 10% per year.
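
To get a feel for what 5 to 10 points of excess discounting does over multi-decade horizons, here is a minimal sketch in Python (the 4% base rate is my illustrative assumption, not a number from Haldane and Davies):

```python
# Present value of $1 received in year t, at a base discount rate plus
# the 5-10% per year "excess" discounting Haldane/Davies estimate.
# The 4% base rate is an illustrative assumption, not from the paper.
base = 0.04
for excess in (0.0, 0.05, 0.10):
    r = base + excess
    pvs = "  ".join(f"t={t}: {1 / (1 + r) ** t:.4f}" for t in (10, 20, 35))
    print(f"rate {r:.0%}:  {pvs}")
```

At a 14% all-in rate, a dollar arriving in year 35 is worth about a penny today, which is how projects with long paybacks come to be treated as having no value at all.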

The Geanakoplos/Farmer analysis finds vastly larger distortions when NPV approaches are used over very long time frames, say, over 100 years. The errors result from the convention of using a single discount rate that is meant to represent an average over the entire period. This simplification, however, is dangerous. Per Buchanan:

In calculating this average, some paths turn out to contribute far more than others. In particular, paths that descend into relatively low rates and stay there for many years have a disproportionate effect — a path at 1 percent for 50 years, for instance, counts 20 times as much as a path running along at 7 percent. Change 50 to 500 years, and the difference becomes 10 trillion times.

This demonstrates how simple thinking about the future can lead to terrific mistakes. When something fluctuates, we often suppose we can use the average rate over time. And sometimes this works. The amount of food you will eat over 20 years, for example, will be roughly equal to 20 times what you ate last year, because your appetite doesn’t fluctuate that much. But averaging to get a true effective discount rate isn’t so easy. Some of the paths of fluctuation — the lower paths — carry extraordinary weight, and hence dominate the outcome.

Not surprisingly, Geanakoplos and Farmer find that the correct formulas for discounting over long periods don’t follow the textbook exponential form. The math is tricky (I’ve put some discussion of the technical stuff on my blog). But the consequences are not. Using a standard model from finance for interest rate movements (with an average rate of 4 percent), the authors show that, for the first 100 years or so, their correct form of discounting gives results that are similar to those that come from traditional calculations. But at 500 years the standard exponential discounts the future not just a little too strongly, but a million times too strongly. And it gets worse after that.
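
The mechanism is easy to demonstrate. Here is a minimal Monte Carlo sketch: discount along each simulated rate path, then average the discount factors, and compare with discounting at the single starting rate. The geometric random walk follows the paper’s setup as Buchanan describes it, but the parameters below are illustrative guesses, not the authors’ values:

```python
import numpy as np

# Average the discount factor over fluctuating rate paths (geometric
# random walk) vs. the textbook single-rate exponential. The low-rate
# paths dominate the average, exactly as the quote above describes.
rng = np.random.default_rng(0)
r0, sigma, years, n_paths = 0.04, 0.15, 500, 5_000  # assumed values

shocks = rng.normal(0.0, sigma, size=(n_paths, years))
log_r = np.log(r0) + np.cumsum(shocks - 0.5 * sigma**2, axis=1)
rates = np.exp(log_r)                      # one rate path per row
path_integral = np.cumsum(rates, axis=1)   # integral of r dt, dt = 1 yr

d_avg = np.exp(-path_integral).mean(axis=0)    # path-averaged discounting
d_exp = np.exp(-r0 * np.arange(1, years + 1))  # single-rate exponential

for t in (50, 100, 500):
    print(f"t={t:>3}y  path-average={d_avg[t-1]:.3e}  "
          f"exponential={d_exp[t-1]:.3e}  ratio={d_avg[t-1] / d_exp[t-1]:.1e}")
```

With these made-up parameters the two track each other at short horizons and diverge by many orders of magnitude at 500 years; the exact ratio is noisy and sensitive to the volatility assumption, but the direction is the paper’s point.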

And the flaws of conventional discounting exacerbate the bias we already see in practice, that of undervaluing anything beyond the next few years. Après moi, le déluge, indeed.


52 comments

  1. kezza

    Looks like he’s taking r to be geometric Brownian motion (which everybody knows does NOT fit what is observed) and then taking the expectation of the pathwise integral. I wonder if he is going to invoke the get-out-of-jail card (i.e. the Friedman defence) here.

  2. MBA dude

    Do you discount by the actual interest rate, which goes all over the place over long time horizons, or by the opportunity cost of capital, which is more or less constant? Interest rate fluctuations will create some profit and/or loss over the forecast period, but this should not be confused with value created by the project itself.

    Or so my prof says….

  3. vlade

    Yves,
    Given that predicting “cashflows” more than a couple of years ahead (if that) is black magic, the discounting formula would be the last thing I’d worry about. All this is just saying “Gee, part of your formula is SOOO WRONG!”. Yep, it is. But the inputs are likely rubbish too, so having a wrong formula makes little difference. (And Feynman was rather upset when people took to calling financial (or rather economic) numbers “astronomical” – after all, the number of stars in our galaxy is less than 1% of the US deficit.)
    That’s like saying that because Newton’s laws are wrong (on both macro and micro scales, i.e. relativistic and quantum physics), and because they have problems with even a simple 3-body system (where unless you get the inputs perfectly correct, you diverge almost immediately), we should dump them in their entirety.

    I’ll fully agree that discounting has lots of problems. Any quantitative measure does, as most of them are rather sensitive to one input or another, and in real-world situations we get most of our inputs wrong to some extent.
    But most good investors I know do their spreadsheets and then chuck them away and decide on “gut”. The spreadsheet is a tool to help you think. Of course, the problem is that this crucial distinction is not being taught very well – if at all. And, to be fair, it’s not only in MBA classes that we fail to teach it; I’ve seen more than one engineer fresh out of school refuse to accept reality when his models were telling him something else. We’re in general too trusting of our models (and finance is where it’s most obvious).

    1. lambert strether

      True, “sophisticated investors” (ie, those who think and observe) may do all that you say. But I bet in the corporate cubes and in our famously free press, these formulas are fetishized and carry great weight.

      1. vlade

        I fully agree. But I believe it’s more important to show that this is _just_ a tool, which cannot automate decisions or replace real thinking, than to point out some technical inadequacies of the tool (and there’s a whole slew of them).
        There’s no perfect tool, but a good craftsman can get a good result even from an imperfect one. An idiot will not be saved by a perfect tool. Unfortunately, our striving is towards idiots with technically perfect tools (why not then automate the whole world and be done with human society? It’s just trouble, you know… ).

        Discussions about the tool distract from discussions about the skill of the user.

      2. Yves Smith Post author

        Yes, particularly in group decision contexts, like M&A and capital budgeting.

  4. lewy14

    the authors show that, for the first 100 years or so, their correct form of discounting gives results that are similar to those that come from traditional calculations.

    How exactly is this an indictment of “short-term greed”?

    And are maintenance costs thrown into the mix? Any project which lasts five hundred years is going to require maintenance, which, over five hundred years, is pretty much going to dwarf initial costs… no?

    The Pyramids at Giza are still effectively generating a cash flow in the form of tourism. (The late unpleasantness there notwithstanding; a mere blip on this timescale.)

    If I’m pharaoh’s vizier – possessed of the gift of perfect prescience – do I care at all about cash flow five thousand years in the future? Even at a discount rate of zero? How is that remotely connected to the allocation of resources in 2800 BC?

    Each of the seventy billion human beings who have walked the earth has spent most of their life in the same place – the Here and Now.

    Five hundred years is pretty much the MTBF of civilizations and epochs. Hard comfort to Vespasian that his Colosseum makes a nice traffic circle in the modern city. Our Rome is not his Rome.

    The aqueducts lasted longer than the political institutions which maintained them – arguably, therefore, they were overbuilt.

    The nearest thing to a counter example I can think of is the village cathedrals of France… the villagers retain a sense of continuity, through many dynasties and five Republics. But the village elders knew they were discounting for Eternity, in any case.

    1. Yves Smith Post author

      Did you not read the post? It would appear not.

      First, I cited Haldane and Davies, who said overly high discount rates are used NOW, and that the myopia is getting worse.

      We then have the same flawed logic applied on longer timeframes, and those ARE being used in environmental debates, which I mentioned explicitly.

      1. Yossarian

        Couldn’t one then also argue that the discount rate applied to the private sector return on the capital expropriated for the government is also too low? I think the flaws come down to flawed inputs: GDP and other accounting measures are flawed because they (1) fail to account for non-monetary costs/benefits of BOTH private and public sector projects, and (2) consider everything in nominal and not real terms.

      2. Dave of Maryland

        Count me puzzled, too.

        As finance gets more bombastic, and/or as money gets tighter, shorter time frames seem a logical result. With economic collapse/restructuring upon us, many of the individuals and institutions making these decisions won’t be with us in a year or two. When we’re again rich enough for the exotic services of Magrathea, wake me up & tell me what they’re up to.

        I went and read the 300 year study. Lots of assumptions piled on assumptions, as I would expect. It all came down to this:

        The Stern Review seems to argue that there should be an immediate move to a carbon tax of $85 per ton of carbon dioxide, or $312 per ton of carbon. It then sees the carbon tax falling to half by 2018 and stabilizing at a plateau of about $120 per ton of carbon by 2025. This profile is the opposite of most optimal carbon tax paths, which tend to start lower and then increase.

        Which I did not understand at all. How does a carbon tax suppress the production of carbon? Sounds to me like another sin tax. Look at this: by 1960, more than 50,000 Americans a year were being killed in car crashes. The average car cost around $2500. Suppose the government had levied a “safety tax” of $100 per car? Would that have gotten us seat belts and air bags?

  5. marat

    Interesting perspective.

    Yves, could you give a specific example of what you have in mind when you say that DCF is used in environmental analysis? Do you mean that private sector players routinely underestimate the NPV of the future costs of environmental impact management?

    Agree that DCF is not an exact science and should play only a part in an investment decision (even for private sector players concerned only with monetary value), but when it comes to decisions on environmental regulations, infrastructure investment, investment in science – this is all properly the domain of governments, who, you would hope, have a much broader definition of “value” than a private sector player focusing on monetary gain.

    1. Yves Smith Post author

      The Stern Review, prepared for the UK government. There has been a huge debate over it.

      http://en.wikipedia.org/wiki/Stern_Review.

      It used a time frame of over 100 years; it’s 700 pages and not on my main beat, so I have not read it myself, but the controversy was that most of the damage took place after 2200. I can’t find a clear indication in Google.

      Here is a report on that report which uses 300 years:

      http://www.ycsg.yale.edu/climate/forms/chapter6.pdf

  6. Dan Duncan

    The authors, while focusing on the dangers of cash-flow discounting, missed the fact that they are engaging in another form of discounting: Survivor Bias Discounting.

    They assume that every long-term idea is good and would have survived but for “short-termism”.

    That’s just not true. Some long-term ideas suck.

    But the authors don’t account for the benefits of weeding out these shitty ideas. Instead, they assume that all long-term ideas “are good” and will ultimately pay off. Thus, they only consider the dangers of discounting in relation to the long-term ideas that would have survived.

    What’s worse is that the authors equivocate on the word “good”. Their entire premise is imbued with the sense that “good” is what works in a functional sense…AND…“good” is that which is “ethical”. And they interchange the two senses of “good” in a specious manner that suits their agenda.

    Thus, if you agree with their conclusions, you are not only correct, but you are ethical at the same time. In fact, you are correct because you are ethical.

    You are good. Oh yes you are.

    And if you disagree…well you are myopic and selfish. You are an immediate gratification seeking, environmentally insensitive capitalist who doesn’t appreciate the long-term benefits of massive central planning.

    You are bad. Oh yes you are.

    No doubt there is merit in the skepticism towards cash flow discounting. But this analysis is silly. Instead of myopia, all they offer is the astigmatism of hyperopia.

    They are redundant Leftist-Academic Automatons from the future. They are here to warn us:

    “Long-Term Good. Short-Term Bad.”

    Hell, they offer the same intellectual heft and agenda transparency as those billboard cows who tell us to “Eat Mor Chikin”.

    I’m looking forward to their billboard campaign:

    Leftist-Academic Cows…”Milk-Mothers Who Know Best”, hammering home the message:

    “Need Mor Govmint”

    1. Ellen Anderson

      Cash flow discounting really came into its own with computers because it is so easy to plug numbers into spreadsheets. If you create a discounted cash flow spreadsheet and start to change numbers you see immediately how even very small assumptions about income, expenses and discount rates have huge impacts on the bottom line.
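
      A minimal sketch of that sensitivity, with entirely hypothetical numbers: one-point changes in the growth and discount assumptions move a 30-year valuation by 10 to 25 percent.

```python
# Toy DCF: value 30 years of cash flows that start at 100 and grow at
# rate g, discounted at rate r. All inputs are hypothetical.
def npv(first_cash, g, r, years=30):
    return sum(first_cash * (1 + g) ** (t - 1) / (1 + r) ** t
               for t in range(1, years + 1))

base = npv(100, g=0.02, r=0.08)
for g, r in [(0.02, 0.08), (0.03, 0.08), (0.02, 0.07), (0.03, 0.07)]:
    v = npv(100, g, r)
    print(f"g={g:.0%} r={r:.0%}: NPV={v:7.0f}  ({v / base:.2f}x base)")
```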

      The philosophical assumptions behind discounting are also very interesting. The discount rate is built upon a number of factors, including 1) the “safe rate” (hint: it has to do with US Treasuries) and 2) the level of risk of the investment. The main assumption is that any investor wants to be well compensated for anything he is not holding at the present time. (Simply put, a bird in the hand is worth two in the bush.) In revolutionary times, such as we are now entering, the answers to these questions are wildly speculative.

      Also, a discounted cash flow projects income and expenses and then discounts the net income over time. In the capitalist system, environmental destruction is not counted in as an expense.

      Finally, do not think that the federal government, at the level of the Department of Justice, is unaware of all of this. The Federal Standards for Land Acquisition are set out in the “Yellow Book”, which was developed within the DOJ. It takes an extremely dim view of discounted cash flow techniques when they are used by appraisers to value land that is being acquired by the federal government for public purposes (parks, railroads, highways, etc.). All assumptions must be proven “in the marketplace”, which, for the most part, is impossible to do.

      There are plenty of public servants at mid and lower levels of government who are awake and aware. One hopes that they will survive the coming collapse and be able to help pick up the pieces.

    2. Jim Haygood

      The wonderful thing about government is that they don’t even bother with net present value analyses. How about that great return we got on capturing Iraq’s oil fields? Only an MBA pretzeldent could have a stroke of genius like that!

      Usgov’s negative net worth, by the Treasury’s own calculation, is in the tens of trillions.

      But funding Social Security according to ERISA standards — much less amortizing the entire accumulated negative equity — is not even considered in the cash-based budgeting process.

      ‘Need mo edumacation,’ evidently. Talking cows could help, in getting through to Congress critters.

  7. Jim Haygood

    ‘Businesses use overly high discount rates, which is how you build short-termism into financial models. Needless to say, that assures underinvestment, particularly in infrastructure.’

    And why do businesses use overly high discount rates? One reason is that the U.S. dollar has lost more than 95 percent of its purchasing power since the Federal Reserve opened for business in 1914.

    Under a gold standard, appropriate real and nominal discount rates are the same: roughly the 2 to 3 percent average compounded annual return on long Treasuries.

    But under a fiat standard, you would be nuts to use such a low discount rate. With a fraudster fool like Bernanke in charge of the currency, and a socialist clown like O’Bomber impersonating the president, my discount rate for any US capital investment is well north of 10 percent.

    Short-termism is entirely rational and appropriate during the twilight of a dying empire. Pay me now, or just forget about the whole thing.

    1. Yves Smith Post author

      Foot in mouth and chew alert!

      The “discount rates are too high” analysis was prepared by staff at the Bank of England. The dollar has nada to do with it.

      1. Yossarian

        BOE operates on a non-fiat gold standard? The whole world isn’t dependent on $-based financing? What were all those trillions of dollars in foreign currency swaps for? The Fed-led fiat money system is one of the largest sources of wealth inequality in the world: this is a fact that both progressives and libertarians should be able to agree on.

      2. Yves Smith Post author

        BofE looked at examples from all over the world. Their sample includes a de facto gold standard (the Eurozone). Plus, Haygood specifically discussed the relative decline of the dollar, which is not an issue in the UK.

        Moreover, the fiat issue would NOT explain the increase in discount rates. They found myopia is increasing. Yet the success in busting labor, the big driver of inflation (you can’t have the sort of sustained inflation we had in the 1970s without rising labor costs providing a big impetus to cost push; commodities inflation alone won’t produce inflation on that scale), is much clearer now than it was in the early 1990s.

        1. Yossarian

          Europe is a de facto gold standard? Have you seen the ECB balance sheet explosion post financial crisis? I just don’t see how any currency has much different characteristics from the $, since they are all fiat and all monetary systems are dependent on financial-sector-led nominal inflation.

  8. peleke

    My 95-year-old neighbor is a chemical engineer who was in charge of 5 chemical plants in our southern and mid-western states during the ’50s, ’60s and early ’70s. The company he worked for had 27 plants in the US. When newer plants were being built by other companies, his company had him weigh the value of improving the current plants to meet the environmental laws against selling them. They sold the 10 most polluted sites to local citizens who wished to save the jobs in those plants.
    He is a very nice person who believes that the company was correct when it pushed its poison to another time and place.

  9. Lyle

    Of course the fundamental problem is that everyone doing the analysis of what may happen in 100 years will be dead by that time. But as a historical example, look at the Union Pacific: by 1910 it had been completely rebuilt, with significant changes to the track, after only 40 years (E.H. Harriman after the 1893 bankruptcy, which, by the way, was related to the zero coupon federal bonds used to build the road in the first place).

  10. avgJohn

    Seems like the “with a hammer everything is a nail” problem to me.

    Applying discounted cash flow analysis for a real estate or capital asset investment over a 5 to 7 year period may be reasonable enough, but how do you reduce the value of God’s creation to dollars and cents, derived from a mathematical calculation?

  11. J. Lind

    Yves, that Haldane/Davies paper does not demonstrate that there’s significant myopia on the part of investors. It can’t demonstrate it, because it relies on the wrong measure of free cash flow, namely dividends. But using dividends as a proxy for free cash flow (which is what investors using a DCF analysis care about) doesn’t suffice, since nowadays so many companies — and so many successful companies — pay either no dividend or only a very small one. Relying on dividends to measure investor expectations is deeply flawed.

    The study is also flawed because it fails to take into account the fact that corporate success is far more short-lived than it once was, which has to raise the discount rate that investors use. In the 1950s and 1960s, it was incredibly rare for an industry-leading company to lose significant market share, let alone be completely eclipsed. These days, it happens regularly and with surprising speed (just look at Nokia and RIMM). Given this, any rational investor has to apply a higher discount rate to any company-specific investment. Yet Haldane’s estimate of rising short-termism rests on the assumption that the business climate today is the same as it was twenty or thirty years ago. It’s not.

    1. Yves Smith Post author

      How old are you? Seriously. This “there was less risk in the 1950s and 1960s” is wrong. Established incumbents then as now enjoy substantial advantages and corporate profits now are at all time records as a % of GDP. And did you forget about strikes as a risk factor?

      Major capital installations are big business risks and subject to large losses. In the paper industry, the big risk of a new paper machine (price: $1 billion) is startup. Successful startups cost 20% of capital costs; bad startups bleed cash. This is operational risk and has nada to do with the factors you cite, which are market factors.

      And one can argue that lots of companies have dominant positions due to the increased importance of network effects, a powerful barrier to entry that was not operative as strongly then. Brand names also constitute a powerful barrier to entry.

      Look at investment banking and banking. Small players, not concentrated in the 1950s, and you saw some in the top player group effectively bite the dust (Kuhn Loeb, Manufacturers Hanover). Look at how well-nigh impossible it is for top players to die now.

      Similarly, I’ve heard from people at McKinsey that extreme short-termism is endemic among McKinsey clients. One example of many: a major telco refused to authorize a project with an 11-month payback, literally one to promote its most profitable product offering, because it didn’t want the quarterly expense hit. And this was not a decision to defer due to earnings being under pressure; this was a straight-up “no”.

      Investors were MUCH less short-term oriented then, due to much higher transaction costs. Your story is simply a justification for bad modern practices. The future is a risky proposition.

      1. J. Lind

        “How old are you? Seriously. This “there was less risk in the 1950s and 1960s” is wrong. Established incumbents then as now enjoy substantial advantages and corporate profits now are at all time records as a % of GDP. And did you forget about strikes as a risk factor?”

        Yves, this is absolutely wrong. Look at Deloitte’s Shift Index study, which looks at the history of American business over the past 45 years. It finds that what it calls the “topple rate,” which is the rate at which big companies lose their leadership positions in markets, has more than doubled since 1965, as has the competitive intensity in US markets. And this is reflected in the declining performance of companies — both ROA and ROIC have fallen sharply for American companies since the 1960s as competition has increased. I’m frankly surprised that you’re trying to argue that company-specific risk was as high in the 1950s — an era when most industries were, as John Kenneth Galbraith described it, comfortable oligopolies — as it is today.

        I’ll also add that I don’t understand what you mean when you write, “Look at investment banking and banking. Small players, not concentrated in the 1950s, and you saw some in the top player group effectively bite the dust (Kuhn Loeb, Manufacturers Hanover). Look at how well-nigh impossible it is for top players to die now.”

        As you well know, in 2007-2008 we saw Lehman, Bear, Wachovia, and Washington Mutual all go under, and Countrywide and Merrill Lynch get swallowed up. Except for the early 1930s, there has never been a period with more upheaval among big banks. That hardly suggests an economy in which leadership positions are secure.

        1. Yves Smith Post author

          Look at the death rate/consolidation of the securities industry as a result of:

          1. The back office crisis of the late 1960s

          2. The ending of fixed commissions in 1975

          3. The rise of fixed income, which forced all players to scramble for more capital in the 1980s.

          These were much lesser shocks than a global financial crisis yet produced vastly greater changes in industry structure and composition of the leading players.

          I also don’t buy the Deloitte analysis as proving that economics and competitiveness are the drivers, as opposed to bad management incentives. This is MOST evident in banking. Banks exhibit a slightly increasing cost curve above roughly $5 billion in assets. This has been shown in every study ever done on banks (the only area of dispute is the size at which the cost increases kick in). That effectively says there is no economic reason to have a highly concentrated banking system; it would be more efficient (and more robust) if we had lots of small banks.

          So why do banks buy each other anyhow? Bank CEO pay is highly correlated with bank total assets. Golden parachutes incentivize smaller bank CEOs to sell out.

          In other industries, you see the same pattern: pay correlated with size of company, widespread use of golden parachutes, which (ironically) were implemented on a widespread basis as a supposed takeover defense measure in the 1980s.

          1. J. Lind

            Yves, come on. You’re not really saying that American companies don’t face more competition than they did in the 1950s and 1960s, are you? In 1958, GM had almost 50% of the auto market, and it faced three (really two) meaningful competitors. Today, it faces fifteen or twenty. The steel industry was a comfortable oligopoly. The film industry consisted of essentially one company — Kodak. The technology industry was completely dominated by IBM. There were only three television networks. Membership in the Fortune 500 turned over much less rapidly than it does today. And corporate profits were as high as a percentage of GDP in the early 1960s as they are today, but those profits were divided among many fewer players. Again, what this means is that company-specific risk is much higher today, which means that discount rates should be higher as well. Maybe they shouldn’t be as high as they are — I don’t discount the impact of short-termism — but as I said in my original comment, the Haldane paper can’t demonstrate this because it uses the wrong measure of cash flow.

          2. Foppe

            Sure, GM had 50% of the market, but that market consisted of just the USA; today it is rather a lot bigger. (Though GM is hardly a good example, given the amount of state support it needs to stay alive because of stupid business decisions.)
            Also, risk is not the same as competition: the reason IBM lost was partly innovation, but partly that they’d gotten so lazy from the lack of competition that they felt no need to change with the times. Consequently, one might well argue (especially in the patented industries of today) that having an oligopolistic industry is safer than having a duopolistic or monopolistic one. As for your other examples, I find them odd. If you look at the steel industry, consolidation is playing a huge role, with players like Tata Steel owning a huge chunk of the market.
            As for Fortune 500 membership turnover: part of that is because of cronyism (Carlos Slim, Russian oligarchs), which has fairly little to do with competition as such; part is due to bank fraud and hedge fund speculation enabling huge wealth growth by certain individuals, which also has fairly little to do with competition, but rather with market malfunction. So even if there is higher per-company risk, it is hardly straightforward to attribute this to ‘increased competition’.

          3. J. Lind

            “As for Fortune 500 membership turnover: part of that is because of cronyism (Carlos Slim, Russian oligarchs), which has fairly little to do with competition as such; part is due to bank fraud and hedge fund speculation enabling huge wealth growth by certain individuals, which also has fairly little to do with competition, but rather with market malfunction.”

            No, this isn’t right. The Fortune 500 I’m talking about includes only American companies, so Carlos Slim and the Russian oligarchs don’t have anything to do with it. In the 1960s and even the 1970s, average annual turnover in the Fortune 500 was just 4 percent. That doubled in the 1980s, and continued to rise over the next twenty years.

            Look, what I’m saying is completely uncontroversial. In the 1950s and 1960s, the biggest companies in America dominated their industries and enjoyed what amounted to sinecures — that’s what Galbraith’s “The New Industrial State” is all about (although that book became out-of-date not too long after it was published in the late 1960s). In the 1950s, the top 130 manufacturing companies accounted for half of all U.S. manufacturing output. And the 500 biggest companies accounted for an extraordinary two-thirds of all nonagricultural economic activity in the U.S. And those companies’ performance was remarkably stable year after year. Nothing like this is true today. And U.S. companies were also massive global players (contrary to your comment about G.M.) — in 1960, the U.S. was responsible for a full 20% of the entire world’s exports.

            Today, by contrast, dominant players can see their franchises eroded extraordinarily fast — again, consider that Nokia and RIMM were by far the most important players in the cellphone market just a few years ago, and today are being completely written off, while the most important players in the market are companies that weren’t even in the cellphone market until the mid-2000s. There is literally no analogy to this in the corporate America of the 1950s. Or take a company like Acer, which most people hadn’t heard of in 2000, but which is now one of the biggest PC makers in the world, or LG, which went from a maker of cheap electronics to being far more profitable than Sony in less than a decade. I can adduce myriad similar examples from the past twenty years, but if you go back to the 1950s and 1960s, you’ll strain to find more than a couple. The simple reality is that the business world today is far more competitive and far less stable than it was in the 1950s and 1960s, and investors have to take that into account when evaluating the prospects of individual firms.

    2. Tao Jonesing

      In the 1950s and 1960s, it was incredibly rare for an industry-leading company to lose significant market share, let alone be completely eclipsed. These days, it happens regularly and with surprising speed (just look at Nokia and RIMM). Given this, any rational investor has to apply a higher discount rate to any company-specific investment.

      You seem to assume that “investors” (really speculators) used DCF models for pricing stock in the 1950s and 1960s, and I don’t think that’s the case.

      Many of the mathematical models of modern finance were not developed until the 1960s and 1970s, which means they did not come into vogue until later. Even if DCF were as old as the hills, until Miller and Modigliani’s paper in 1962(?), investors looked to dividends as the primary metric of stock value because they could not figure out how to compare companies having different capital structures. Then along came M&M . . .

      I do want to note here that your comment is a perfect example of a phenomenon I’ve been noting lately, which is the propensity to wrongly assume some things in history were completely different while simultaneously wrongly assuming other things were exactly the same. When it comes to confirmation bias, two wrongs do make a right.

      1. Foppe

        Don’t you get it? People back then were stupid [which is why they couldn’t be as effective in being greedy back then], while people were of course always just as greedy as they are now. It’s just a way of devolving responsibility onto human nature while simultaneously praising innovation. And it becomes easier to do the less you actually know of history.

      2. J. Lind

        DCF analysis was absolutely used by investors in the 1950s and 1960s. John Burr Williams invented the dividend-discount model in the 1930s, and it was in widespread use by the 1950s. (Dividends were a much more reliable proxy for free cash flow in that era, and you don’t need M&M to make it work.)
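
        For reference, the constant-growth (“Gordon”) form of the dividend-discount model J. Lind is referring to is simple enough to evaluate by hand; a minimal sketch, with hypothetical inputs:

```python
# Constant-growth form of the dividend-discount model:
# price = next year's dividend / (discount rate - dividend growth).
# The $2 dividend, 8% rate, and 3% growth below are hypothetical.
def ddm_price(current_dividend, r, g):
    if r <= g:
        raise ValueError("discount rate must exceed dividend growth")
    return current_dividend * (1 + g) / (r - g)

print(round(ddm_price(2.00, r=0.08, g=0.03), 2))  # -> 41.2
```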

        1. Yves Smith Post author

          Wow, you do have an active imagination.

          Do you know how hard it is to do financial forecasts by hand? I’ve done them; they are brutal, because you make ONE error and everything to the right (in the future relative to the error) is wrong. AND THIS WAS TRUE UP THROUGH 1983 at Goldman, which was clearly not constrained by money. Everyone used hand-held calculators and accounting ledger paper and would look up numbers in physical annual reports and 10-Ks and enter them into the spreadsheet.

          Well, I take that back: the M&A department had created a merger model. And in 1983, the firm had developed tools that allowed standard types of analysis to be run off CompuServe. So people were starting to move stuff over to PCs, but that technology was just being implemented.

          There is a big difference between a technique existing and it being widely used by investors. Mutual funds weren’t a significant factor in terms of total assets managed until the 1960s.

          Do you think individual investors were doing DCF analysis in the 1960s, by hand (or even knew what it was)? Please. You didn’t even have affordable decent calculators then; the best you could do was a SLIDE RULE (there were, admittedly, limited-function adding machines). Trust me, electronic calculators didn’t become affordable until the 1970s.

          You’ve just revealed your age.

          Similarly, MBA programs were the route by which financial technology got to corporate America. MBA programs were much smaller back then and MBAs were rare (my father was the oldest guy in the Harvard class of 1965, and for the next 20 years in the paper industry he was very unusual, and regarded with some resentment, for having an MBA. He worked at three different major papermakers and had contacts at other big firms).

          You are projecting modern standards and approaches back to an era when they were not the norm.

  12. Fraud Guy

    When confronted by the unknowability of the future in a work setting, I have put the question to everyone I have ever worked for: would they rather have bad, made-up data or no data at all with which to make decisions? When pinned to the wall, every one of them has chosen the bad, made-up data path. This preference, in my opinion, is very strongly wired into human nature and into the incentive structures that almost everyone faces.

    1. Yves Smith Post author

      Yes, and the problem with reliance on bad data or methodologies is false confidence.

  13. Charles 2

    I wonder how many people read the “discussion of the technical stuff” and the original paper. If they had, they would see that the author is a noob in stochastic interest rate modelling. To quote him (emphasis mine):
    “They used a so-called geometric random walk for the fluctuating rate r, this being the most common mathematical process used in finance to model interest rate fluctuations (i.e. this isn’t a crazy or weird model, but a highly plausible one).”
    Well… No! Long-term interest rate modelling always incorporates a reversion to trend that essentially kills the phenomenon that is the subject of the scholarly article. The authors recognise it themselves at the end of the paper in a lapidary sentence. Obviously, one shouldn’t let reality stand in the way of publication scores and an “aha” paper in Bloomberg.

    That doesn’t mean there is no problem in finding what the long-term trend is. The Stern Review and Jim Grant have good discussions of this. I suspect that the “real” level (in both senses of the word) is lower than what long-term inflation-linked bonds show today. IMHO, it is around 0.5% for a stable population and 1% for a growing one.
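
    To see why the choice of rate process matters so much, here is a minimal simulation comparing the effective long-horizon discount rate under a geometric random walk with a mean-reverting (Vasicek-style) alternative; all parameters are illustrative guesses, not values from either paper:

```python
import numpy as np

# Effective annualized discount rate, -ln(E[exp(-integral of r dt)])/T,
# under (a) a geometric random walk and (b) a mean-reverting rate.
# r0, sigma, and a are assumed values chosen only for illustration.
rng = np.random.default_rng(1)
n, years, r0 = 5_000, 500, 0.04

def effective_rate(paths):
    disc = np.exp(-np.cumsum(paths, axis=1)).mean(axis=0)
    return -np.log(disc) / np.arange(1, paths.shape[1] + 1)

# (a) Geometric random walk: no reversion, low-rate paths dominate.
z = rng.normal(size=(n, years))
walk = r0 * np.exp(np.cumsum(0.15 * z - 0.5 * 0.15**2, axis=1))

# (b) Mean reversion toward r0 pins down the long-run effective rate.
a, sig = 0.10, 0.01
revert = np.empty((n, years))
r = np.full(n, r0)
for t in range(years):
    r = r + a * (r0 - r) + sig * rng.normal(size=n)
    revert[:, t] = r

for name, paths in (("random walk", walk), ("mean-revert", revert)):
    eff = effective_rate(paths)
    print(name, " ".join(f"t={t}y: {eff[t-1]:.2%}" for t in (50, 200, 500)))
```

Under the mean-reverting process the effective rate settles near the trend (less a small convexity adjustment), while under the random walk it keeps falling with the horizon, which is exactly the reversion-to-trend point above.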

  14. Tao Jonesing

    The sad fact is that DCF analysis drives our economy.

    All publicly traded companies are managed to meet or exceed the DCF models of the analysts covering them. Why? Because those DCF models define a consensus on the fair value of the stock in the secondary equity markets.

    As a practical matter, this means that firms are NOT managed to maximize profits but to simulate, on a quarterly basis, perpetual exponential growth at a rate greater than inflation (or whatever discount rate is used). This, in turn, means that microeconomics should be thrown out the window as irrelevant, and without its microeconomic foundations, all macroeconomic theories (including MMT) should be thrown out the window, as well.

    Finance is normative. Nothing else matters. Unless you do something to change the metrics of finance, which impose the expectation of perpetual exponential growth, fiddling around with the economics that finance supposedly rests upon (it’s actually the opposite: finance drives economics) will avail you nothing.

    1. Sock Puppet

      So say I have a million to retire on. Conventional DCF analysis tells me that 4% is a reasonable withdrawal rate. What if I can’t get 4% because we run out of exponential growth? Will I retrench further? Will prices of services like healthcare fall? Economists? Where’s your model?

      1. Foppe

        What on earth gives you the idea economists can model what will happen once the internal contradictions of capital accumulation become the dominant force? Anyway, wisdoms such as the ones you derive here from ‘conventional DCF’ are a huge part of the problem. For companies, this means they start cutting corners in all sorts of ways to realize (profit) growth, while investors looking for 4% growth in fully mature markets tend to become ever more desperate and start investing in asset bubbles to achieve the desired yield, so that they actually create instability at the macro level, or what is sometimes called ‘systemic risk’.
        Therefore, I can give a few hints: lots of boom-bust cycles all over the world, creative destruction creating room for some new growth opportunities locally (though this is obviously highly dependent upon the local population earning enough for producers to be able to sell goods/services to them), etc. The Enigma of Capital really is a great book.

        1. Sock Puppet

          Are you suggesting that economists’ models don’t work without exponential growth? I’m shocked!

  15. oar_square

    Discounting at 4% for 500 years assumes the existence of a risk-free investment that will grow continuously at 4% year after year for 500 years.

    There is no such thing. There are, however, defaults, wars, disappearances of governments (bonds included), … These periodic setbacks are a certainty, since exponential growth cannot go on indefinitely on a resource-constrained planet.

    Be this as it may, somehow I don’t think that confusion about proper discounting is the root of short-term thinking.

  16. Roger Bigod

    I hate to go all geeky on y’all, but in the long run there is no positive interest rate. If the guys who built the pyramids could have bought a bond that paid 1%, every dollar invested would now be worth $5 x 10^20. That’s 500 million trillion.

    In the long run, we are all broke.

  17. Hugh

    My concerns are of a different order. Kleptocracy is what currently concerns me. I don’t see how something like DCF could work in such an environment except as a useful cover for looting.

    Looking longer term, this century will be dominated by overpopulation, state failures, resource depletion, environmental degradation, and climate change. We are already seeing effects from all of these things now. By 2030, conditions will be much worse. By 2050, they will go critical. It is why I say that by 2100, not that I will be there to see it, world population could be under a billion. Not all of this is inevitable. A lot of it can be reversed or mitigated. However, a lot of it can’t, at least not easily. And there can be both positive and negative contributions from factors not yet on the horizon.

    But both the near and longer term leave me wondering what the use of tools like DCF is.

  18. Patrick

    One hates to be practical, but 500 years ago Henry VIII of England had been on the throne for two years and was still on his first wife. Who knows what society, if there is still a society, will look like 500 years from now. I bet Henry would be astonished at how things turned out, not least of all his having six wives!

    Discount rates out to 50 years are of some consequence when estimating infrastructure costs; for time scales longer than that, the pressure of events will overwhelm any calculation and make it meaningless. Discount rates in francs, marks, etc.

    1. Lyle

      For infrastructure, 50 years is about the limit of how long it can go without significant upgrades and repairs, so it’s not too bad a measure. It’s when you guess what the birth rate will be in 50 years in order to do the 75-year Social Security projections that you are just layering uncertainty on top of uncertainty. The same problem besets global climate change: we really can’t figure out the economic impact, and, as noted earlier, everyone doing the figuring will likely not be in a position to care by the time the problem comes home, being quite dead. Worrying about the distant future also means one has to add in things like an asteroid impact, a humongous volcanic eruption (Long Valley or Yellowstone type), etc.
      In one sense, then, the IT revolution has had an unanticipated impact in making it possible to run figures farther into the future, and there is the old saying: figures don’t lie, but liars figure.
