Machine Learning and Economics

Yves here. I hate sounding like a skeptic, but how well machine learning and AI perform depends very much on data integrity and data selection, such as which training sets you use for AI. Economics has such a strong track record of weak empirical research and ideologically driven analysis that machine learning and AI look likely to become just better ways to legitimate not-so-hot thinking.

By Silvia Merler, an Affiliate Fellow at Bruegel and previously an Economic Analyst in DG Economic and Financial Affairs of the European Commission. Originally published at Bruegel

Machine learning (ML), together with artificial intelligence (AI), is a hot topic. Economists have been looking into machine learning applications not only to obtain better predictions, but also for policy targeting. We review some of the contributions.

Writing on the PwC blog last year, Hugh Dance and John Hawksworth discussed what machine learning (ML) could do for economics in the future. One aspect is the distinction between prediction and causal inference. Standard econometric models are well suited to understanding causal relationships between different aspects of the economy, but when it comes to prediction they tend to “over-fit” samples and sometimes generalise poorly to new, unseen data.

By focusing on prediction problems, machine learning models can instead minimise forecasting error by trading off bias and variance. Moreover, while econometric models are best kept relatively simple and easy to interpret, ML methods are capable of handling huge amounts of data, often without sacrificing interpretation.
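
As a stylised illustration of this point (it is not from the PwC post itself, and it assumes Python with numpy and scikit-learn), a very flexible model can fit a small sample almost perfectly yet predict poorly on held-out data, while the same specification with some regularisation accepts a little bias in exchange for lower variance and a smaller forecasting error:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error

# Simulated data: a smooth relationship observed with noise.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=60).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(scale=0.3, size=60)
x_train, x_test, y_train, y_test = x[:30], x[30:], y[:30], y[30:]

# The same flexible specification, unregularised and with shrinkage (ridge).
flexible = make_pipeline(PolynomialFeatures(degree=12), LinearRegression())
shrunken = make_pipeline(PolynomialFeatures(degree=12), Ridge(alpha=1.0))
for name, model in [("unregularised", flexible), ("ridge", shrunken)]:
    model.fit(x_train, y_train)
    print(name, "held-out MSE:", round(mean_squared_error(y_test, model.predict(x_test)), 3))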

Susan Athey provides an assessment of the early contributions of ML to economics, as well as predictions about its future contributions. At the outset, the paper highlights that ML does not add much to questions of identification, which concern whether the object of interest, e.g. a causal effect, could be recovered even with unlimited data. Rather, ML yields great improvements when the goal is semi-parametric estimation or when there are a large number of covariates relative to the number of observations.

The second theme is that a key advantage of ML is that it views empirical analysis as “algorithms” that estimate and compare many alternative models. This contrasts with standard practice in economics, where the researcher picks a single model on a priori grounds and estimates it once. The third theme deals with the “outsourcing” of model selection to the algorithm. While this handles “simple” prediction problems fairly well, it is not well suited to the problems of greatest interest to empirical researchers in economics, such as causal inference, where there is typically no unbiased estimate of the ground truth available for comparison. Finally, the paper notes that the algorithms also have to be modified to provide valid confidence intervals for estimated effects when the data is used to select the model.

Athey thus thinks that using ML can provide the best of both worlds: the model selection is data-driven, systematic, and considers a wide range of models; all the while, the model selection process is fully documented and confidence intervals take the entire algorithm into account. She also expects the combination of ML and newly available datasets to change economics in fundamental ways, ranging from new questions and new approaches to collaboration (larger teams and interdisciplinary interaction), to a change in how involved economists are in the engineering and implementation of policies.
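
A minimal sketch of what such algorithmic, data-driven model selection looks like in practice (scikit-learn is assumed; the example is illustrative, not taken from the Athey paper): many candidate specifications are compared on cross-validated prediction error, and the whole documented search, rather than a single hand-picked model, is the procedure.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

# Synthetic data with many covariates, only a few of which matter.
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)

# The "algorithm" tries a whole grid of penalised models and keeps the one
# with the best cross-validated prediction error.
search = GridSearchCV(Lasso(max_iter=10_000),
                      param_grid={"alpha": np.logspace(-3, 1, 20)},
                      cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)
print("selected penalty:", search.best_params_)
print("cross-validated MSE:", -search.best_score_)
# Caveat stressed in the paper: naive confidence intervals computed after this
# kind of search are not valid; the selection step itself must be accounted for.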

David McKenzie writes on the World Bank blog that ML can be used in development interventions and impact evaluations: for measuring outcomes, targeting treatments, measuring heterogeneity, and adjusting for confounders.

One of the biggest use cases currently seems to be in getting basic measurements in countries with numerous gaps in the basic statistics – and machine learning has been applied to this at both the macro- and micro-level. But scholars are increasingly looking into whether ML could also be useful in targeting interventions, i.e. in deciding when, where and for whom to intervene.

McKenzie, however, points to several unanswered challenges, such as the question of the gold standard for evaluating these methods: supervised ML requires a labelled training dataset and a metric for evaluating performance, but the very data gaps these approaches are meant to fill also make them hard to evaluate. Second, there are concerns about how stable many of the predicted relationships are, and about behavioural responses that could undermine the reliability of ML for treatment selection. Lastly, McKenzie points to several ethical, privacy and fairness issues that could come into play.

Monica Andini, Emanuele Ciani, Guido de Blasio, Alessio D’Ignazio take up the question of policy targeting with ML in a recent VoxEU article and two papers. They present two examples of how ML can be employed to target the groups that could plausibly gain the most from a policy.

One example considers a tax rebate scheme introduced in Italy in 2014 with the purpose of boosting household consumption. The Italian government opted for a coarse targeting rule and provided the rebate only to employees with annual income between €8,145 and €26,000. Given the policy objective, an alternative would have been to target consumption-constrained households, who could be expected to consume more of the bonus. Applying ML to two waves of household survey data, the authors implement this second strategy. An additional econometric analysis suggests that this form of targeting would have been better, because the effect of the rebate is estimated to be positive only for consumption-constrained households.
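
A stylised sketch of this kind of targeting exercise, not the authors’ actual specification (the files, column names and classifier are hypothetical; pandas and scikit-learn are assumed): a model is trained on a survey wave in which consumption-constrained status is observed, and its predicted probabilities are used to rank households in a later wave.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical survey extracts; wave 1 carries a labelled "constrained" indicator.
wave1 = pd.read_csv("survey_wave1.csv")
wave2 = pd.read_csv("survey_wave2.csv")
features = ["income", "liquid_assets", "household_size", "homeowner"]  # illustrative only

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(wave1[features], wave1["constrained"])

# Score the later wave and target the households most likely to be constrained.
wave2["p_constrained"] = clf.predict_proba(wave2[features])[:, 1]
targeted = wave2.sort_values("p_constrained", ascending=False).head(1000)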

In the second application, Andini et al. focus on the “prediction policy problem” of assigning public credit guarantees to firms. In principle, guarantee schemes should target firms that are both creditworthy and rationed in their access to credit. In practice, existing guarantee schemes are usually based on naïve rules that exclude borrowers with low creditworthiness.

Andini et al. propose an alternative assignment mechanism based on ML predictions, in which both creditworthiness and credit rationing are explicitly addressed. A simple comparison of the growth rate of disbursed bank loans in the years following the provision of the guarantee, confirmed by a regression discontinuity design, shows that the ML-targeted group performed better.
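
In the same spirit, a sketch of what an ML-based assignment rule could look like, again not the authors’ implementation (the dataset, outcome labels and 0.5 cut-offs are hypothetical): one model predicts creditworthiness, another predicts credit rationing, and the guarantee goes only to firms flagged by both.

import pandas as pd
from sklearn.linear_model import LogisticRegression

firms = pd.read_csv("firms.csv")                        # hypothetical firm-level data
X = firms[["leverage", "profitability", "size", "age"]]

# Two separate predictions: likely to repay, and likely to be credit-rationed.
worthy_model   = LogisticRegression(max_iter=1000).fit(X, firms["repaid_past_loans"])
rationed_model = LogisticRegression(max_iter=1000).fit(X, firms["loan_application_rejected"])

firms["p_worthy"]   = worthy_model.predict_proba(X)[:, 1]
firms["p_rationed"] = rationed_model.predict_proba(X)[:, 1]
# In practice the scores would be computed out of sample; the 0.5 cut-offs are arbitrary.
eligible = firms[(firms["p_worthy"] > 0.5) & (firms["p_rationed"] > 0.5)]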


31 comments

  1. fajensen

    Machine learning will do Absolutely Great in the current political environment, where the main applications of economics are in providing the tools and intellectual cover to dress up fundamentalist religious beliefs with maths (macro-economics) and to game KPIs (micro-economics):

    https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/

    There is an unholy belief in the integrity of complex machines, which will make this an easy sell!

          1. larry

            I’ve seen it. Adam Curtis isn’t strictly accurate in his depiction of everyone’s role in the ‘rise of the machine’. Von Neumann wasn’t against machines per se, just ridiculous hype. Having said that, I like Curtis’s film.

    1. Synoia

      We know this, and the topic is one of standards, like the recently retired platinum standard kilogram, and its use in the calibration of derived standards.

      There is a whole body of calibration expertise in engineering, including clear definitions of precision and accuracy.

  2. Disturbed Voter

    Present-day ML (a.k.a. automated pattern recognition) uses various techniques, primarily to tweak a hidden Markov model. In supervised learning the purpose is to refine the parameters of the model. In unsupervised learning the purpose is to assemble a plausible hidden Markov model. In the first case, the problem is confirmation bias. In the second case, the problem is unintelligibility. In the first case, the hidden Markov model is assembled by people, who apply what they know from other sources. In the second case, e.g. deep neural networks, the algorithm can’t explain why it does what it does.

    In the big picture, there is the problem of groupthink: if we use these techniques extensively then, as with present-day economic models (e.g. spreadsheets or dynamic programming), they will be used to drive policy, creating a feedback loop that produces an observer-independence paradox, much like quantum mechanics.

  3. lyman alpha blob

    The main reason so many economic models don’t reflect the real world is that they don’t factor in human nature, which is largely inscrutable. So we’ll get machines to figure it out instead? Sounds legit.

  4. larry

    The machine-human problem is quite deep and has to do with the design of the brain and the machine. We do not know in a deep way how the brain is designed, but we do know how computing machines are designed because we have designed them. To be brief, and not too technical, computing machines are partial recursive devices. This means that they run entirely on algorithms. The question then becomes whether the brain is a partial recursive device. A number of people who have looked into this have decided that the answer is that, in all likelihood, the brain is not a partial recursive device, even though some of its operations are partial recursive. Georg Kreisel was one of them. von Neumann recognized this as only partly a mathematical/logical problem; also needed was information about brain function, information that he did not have and we still don’t have. I don’t mean that we don’t have any, just that we don’t have what we need to decide this question at this time.

    A caveat to this: Roger Penrose has claimed that quantum effects in brain function can answer the question, and that the development of the quantum computer could settle the matter. To date, this is highly controversial.

    1. oliverks

      I am not sure I really buy Penrose’s argument. I think he is so horrified that intelligence may not be that impressive that he is grasping at straws to try and make intelligence seem more “deep”.

      On the other hand, Luca Turin claims smell is a quantum effect, and he does have some good points. So perhaps I shouldn’t be so quick to judge.

    2. Ape

      NNs are Turing complete. The brain can simulate anything, including other brains.

      QED, they are nothing special. It’s just a question of implementation methods.

      The brain is “everything” and no more theory in principle is needed.

  5. Mike Smitka

    For macroeconomics there simply aren’t enough data for ML to contribute much. Yes, interpolation of missing variables – but that’s been done for a long time without the jargon, using sparse observables in developing countries to infer movements of things that matter more. I remember a nice talk by Lance Taylor in the mid-1980s at Yale, describing doing that in Nigeria using a luggable computer while the local voltage varied from brownout to burnout.

    Back to data: GDP measurements are generated for each quarter, while the underlying structure of the economy changes. So if you go back 30 years you have only 120 observations to work with. (In the US there was no interstate banking in 1978, deposit interest rates were still capped, on and on, so going back 40 years is likely to be harmful, not helpful.) To extract information, you have to impose theory. Otherwise you end up with 1,200 explanatory variables and multiple perfect explanations of GDP combined with zero explanatory power.
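
    (A quick sketch of that last point in Python, with numpy and scikit-learn assumed: regress a pure-noise “GDP” series on 1,200 pure-noise regressors using 120 quarterly observations, and the in-sample fit is perfect while the out-of-sample fit is worthless.)

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 1200))       # 30 years of quarters, 1,200 noise "regressors"
    gdp = rng.normal(size=120)             # "GDP growth": also pure noise
    X_new, gdp_new = rng.normal(size=(40, 1200)), rng.normal(size=40)

    fit = LinearRegression().fit(X, gdp)
    print("in-sample R^2:", round(fit.score(X, gdp), 3))              # 1.0: a "perfect" fit
    print("out-of-sample R^2:", round(fit.score(X_new, gdp_new), 3))  # no real predictive power (typically negative)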

    Plus, which GDP measure do you use? The first estimate that may be the focus of decision-makers? The revised revision that comes out two years after the fact? Which interest rates? FRED contains 520,000 time series!! Use monthly observations and allow ML to interpolate? Each will surely give different answers, and absent theory (actually, even with theory!) each requires arbitrary choices of what to throw into the ML algorithms.

    So … this is a solution in search of a problem. Even some of your examples above strike me as a trendy relabeling of existing techniques (Monte Carlo simulations), rather than anything new.

    1. ChrisPacific

      Yes, this was my view as well. ML is only as good as the data, and macroeconomics is lacking in that.

      It’s also heavily dependent on the loss function, which comes from your definition of success – another area where economists perform really poorly in general.

    2. a.matthey

      As a student in economic history, I am highly interested in this notion (ML/statistical approach applied to historical macroeconomic data) and the problems that prevent it. Where can I learn more?

  6. Ignacio

    Monica Andini, Emanuele Ciani, Guido de Blasio, Alessio D’Ignazio take up the question of policy targeting with ML in a recent VoxEU article and two papers.

    Disturbing. I really dislike this. I can imagine only one thing worse than an economist on policy targeting: an economist with a learning machine.

  7. Summer

    They don’t have to create AI or machine learning. They just have to convince people (or maybe teach them to convince themselves?) that machines “learn” or are human-like. Then it becomes similar to religious beliefs.
    Because, at the end of the day, the one thing we do know has been learned about the human brain is how to manipulate and market to an over-propagandized society.

  8. Mel

    You encounter the crux of artificial policies the day when you see an artificial decision and say “That’s wrong! But it can’t be. The thing knows more than I do.”

    If you ask,
    “What’s the reason for this decision?”
    “The immediate reason for the decision is L207216586786385982.”
    “What do you mean, L207216586786385982? Explain.”
    “There is no economics term for L207216586786385982. It’s a pattern that emerged from previous data. You never noticed it before.”

    1. larry

      To say a machine knows more than a person is to commit a fallacy. The machine doesn’t ‘know’ anything. Certainly not in the sense in which the term is applied to living organisms.

      1. Mel

        Even if the machine has examined more information? A larger fact base?
        Say, the NSA systems in Utah, which possess facts about millions or billions of people I haven’t even imagined exist.

        1. larry

          Even so. Knowing is much more than having access to a database. It implies thinking. Having said that, there is no strict agreement on exactly what thinking comprises, but it is agreed that it involves more than the access to and processing of data, which is all that the machines we have designed so far can do. The ‘more data’ scenario has not fixed the problem of what it is for a machine to think.

  9. David Laxer

    Many of the current machine learning algorithms generalize observations in order to make predictions.
    Using big-data machine learning algorithms, the patterns learned are correlations, NOT causation.

    Judea Pearl (inventor of Bayesian networks) discusses this brilliantly in his latest book and proposes solutions for AI to overcome the inherent limitations of generalizing from observations (with potential biases, gaps, model-less models, etc.).

    https://www.amazon.com/Book-Why-Science-Cause-Effect/dp/B07CYJ4G2L/ref=sr_1_1?ie=UTF8&qid=1543592993&sr=8-1&keywords=Book+of+Why

    1. rmrfstar

      Machine learning algorithms are simply non-parametric estimation techniques. There is nothing special about them versus statistical models that makes one or the other better at measuring causation, although there is a valid concern that over-fitting and regularization in machine learning algorithms may lead to biased estimation.

      Rather, causation is really about confounding variables. If confounding variables are omitted from the model, this leads to a spurious association of cause and effect. This is why the randomized controlled experiment is the gold standard for measuring causation: there is only a very small probability, which decreases with sample size, that a confounding factor will be correlated with who did or did not receive the treatment.
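
      (A toy simulation of the confounding point, with Python/numpy assumed: a hidden factor drives both treatment take-up and the outcome, so the naive observational comparison shows a large “effect” even though the true effect is zero, while randomising the treatment makes it disappear.)

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      u = rng.normal(size=n)                         # unobserved confounder
      y = u + rng.normal(size=n)                     # outcome depends only on the confounder
      took_treatment = (u + rng.normal(size=n)) > 0  # take-up is also driven by the confounder
      randomised = rng.random(n) < 0.5               # randomly assigned treatment

      print("observational 'effect':", y[took_treatment].mean() - y[~took_treatment].mean())  # clearly non-zero
      print("randomised effect:", y[randomised].mean() - y[~randomised].mean())               # about zero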

      This is a good read on the subject from Google’s chief economist: http://www.pnas.org/content/pnas/113/27/7310.full.pdf

      1. Ape

        There is a difference: explicit equations are being tested. Physics plus statistics, rather than statistics with no physics.

  10. The Rev Kev

    Machine learning is only as good as the information and databases fed to it. Anybody remember the time that Microsoft released the Tay AI (https://www.technologyreview.com/s/610634/microsofts-neo-nazi-sexbot-was-a-great-lesson-for-makers-of-ai-assistants/) onto the internet – and that the net turned it into a racist, sex-crazed neo-Nazi within hours?
    I have no trust in Silicon Valley types doing this properly, as they seem incapable of getting rid of their own prejudices even in their own companies. They are as likely as not to feed their AI the works of Ayn Rand and say this is how things are supposed to be. Their algorithms will certainly be full of flaws and assumptions. I would rather that it be programmed with publicly audited algorithms first, but we know that this will never happen.

    1. rob

      Exactly.
      The past often clarifies the future.
      People who believe some computer-driven fairy will figure it out are fodder for those who know it won’t.

  11. Ape

    “One aspect is that of prediction vs causal inference. Standard econometric models are well suited to understanding causal relationships between different aspects of the economy, but when it comes to prediction they tend to “over-fit” samples and sometimes generalise poorly to new, unseen data.

    By focusing on prediction problems, machine learning models can instead minimise forecasting error by trading off bias and variance. Moreover, while econometric models are best kept relatively simple and easy to interpret, ML methods are capable of handling huge amounts of data, often without sacrificing interpretation.”

    This is all so wrong. Explicit theory has the possibility of predicting outside of a regime. Almost all ML cannot. It’s just a statistical analysis of the samples taken! Most ML today has no further knowledge of the world.

    GIGO, dummies! GIGO.

    The lack of education in scientific methods among scientists is exasperating.

  12. nonsense factory

    Run that AI on a training set consisting of economists’ predictions vs actual outcomes and see what kind of reliability factors come out.

    Oh, and this is rubbish:
    “…Standard econometric models are well suited to understanding causal relationships between different aspects of the economy.”

    As just one example, look up the econometric results published in the early 1990s that claimed solid proof that NAFTA would raise the wages of workers in the United States.
