By Philip Pilkington, a writer and research assistant at Kingston University in London. You can follow him on Twitter @pilkingtonphil
We have a lot to be thankful for today that we owe to Alan Turing, who is generally recognised as among the first, if not the first, of the computer scientists. But, on the other hand, we can also trace back to Turing much that we should be in no way grateful for, as it has filled our minds with stupidities and our universities with people talking nonsense. Without detracting from Turing’s undoubtedly important achievements, we here focus on the latter: how some of Turing’s ideas came to infect the human sciences in general and economics in particular.
Alan Turing: Pre-Internet Troll
Perhaps there is some irony in the fact that one of the men responsible for the invention of the modern computer was also an insufferable troll who seems to have persistently engaged in acts that were designed to disturb the emotional equilibrium of those around him. Turing’s biographer Andrew Hodges relates one of these incidents which highlights well Turing’s charmingly knavish nature:
Alan was holding forth on the possibilities of a ‘thinking machine’. His high-pitched voice already stood out above the general murmur of well-behaved junior executives grooming themselves for promotion within the Bell corporation. Then he was suddenly heard to say: “No, I’m not interested in developing a powerful brain. All I’m after is a mediocre brain, something like the President of American Telephone & Telegraph Company”. The room was paralysed while Alan nonchalantly continued to explain how he imagined feeding in facts on prices of commodities and stocks and asking the machine the question “Do I buy or sell?”
It seems that it is in this vein that we should read his seminal 1950 paper ‘Computing Machinery and Intelligence’. What Turing was ostensibly dealing with in this paper was whether or not a computer could be said to “think”. However, at the very beginning of the paper Turing redefines “think” to simply mean that a computer might imitate a human being so perfectly that a person cannot distinguish between the computer and another human being. This, of course, is not the typical manner in which to discern whether someone or something is thinking; we shall consider this point in more depth later on. For now let us simply examine what Turing was doing.
In the paper Turing proposed what came to be known as the “Turing test”. In this test a person would sit in front of two curtains, behind one is a computer and behind the other is another person. The person in front of the curtains would then communicate with the two mystery entities using a keyboard and a screen. Finally they would try to discern which of the entities is human and which is machine.
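Turing’s paper contains no code, of course, but the setup can be caricatured in a few lines. The sketch below is purely illustrative: the function names, canned questions and canned answers are all invented here, and the “machine” is deliberately the crudest possible rule-follower.

```python
import random

def machine(question):
    # A rule-bound responder: fixed answers for known inputs, nothing else.
    canned = {"2+2?": "4", "Do I buy or sell?": "Buy."}
    return canned.get(question, "I do not understand the question.")

def human(question):
    # Stands in for a person typing free-form, context-sensitive answers.
    if question == "2+2?":
        return "Four, obviously -- why do you ask?"
    return "Hard to say!"

def imitation_game(questions):
    # Hide the two respondents behind anonymous labels, as behind curtains;
    # the interrogator sees only the typed replies on a screen.
    respondents = {"A": machine, "B": human}
    if random.random() < 0.5:
        respondents = {"A": human, "B": machine}
    return {q: {name: fn(q) for name, fn in respondents.items()}
            for q in questions}

for question, answers in imitation_game(["2+2?", "Do I buy or sell?"]).items():
    print(question, answers)
```

The interrogator’s whole problem is to work out, from transcripts like these alone, which of “A” and “B” is the person.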
There is a strong element of trolling manifest in this thought experiment. Turing begins the paper in question by trying to set the reader off-balance by making the case that if you put a man and a woman behind the curtains most people would not be able to guess which is which if the man wants to trick the person guessing. Turing then goes on to make the even more disconcerting proposition that if we replace either the man or the woman with a machine we still might not be able to tell them apart. His account is altogether unsettling – and one gets the distinct impression that this is purposefully so.
There is much fictional literature that deals with the anxiety Turing plays on. Many writers have noted that machines that mimic humans are for some reason extremely disconcerting. They appear to imitate life, and this leads us to question whether there is life behind the icy exterior – which then leads us to question what, in fact, life is. Sigmund Freud was well aware of the psychological effects such fantasies or thought experiments could have. Indeed, a discussion of such a fantasy occupies much of his classic paper entitled “The Uncanny”. Freud summarises the effects of what he calls “the uncanny” as such:
The subject of the “uncanny” is a province of this kind. It undoubtedly belongs to all that is terrible — to all that arouses dread and creeping horror; it is equally certain, too, that the word is not always used in a clearly definable sense, so that it tends to coincide with whatever excites dread.
The Uncanny, then, is the province of the modern day troll. Freud then goes on to discuss the views of another author on the uncanny, and it is here that he introduces the automaton, the machine-imitator of the human being. The other author is a literary critic called Jentsch, who has taken up a problem that will strike us as almost identical to that of Turing’s test:
In telling a story, one of the most successful devices for easily creating uncanny effects is to leave the reader in uncertainty whether a particular figure in the story is a human being or an automaton; and to do it in such a way that his attention is not directly focused upon his uncertainty, so that he may not be urged to go into the matter and clear it up immediately, since that, as we have said, would quickly dissipate the peculiar emotional effect of the thing.
Although Jentsch is discussing a work of horror-fiction, we can see that the same narrative device is at work in Turing’s discussion of the computer and the human. The trick is to disconcert the reader into trying to clear up the problem posed. First you knock the reader off their emotional equilibrium with an offensive problem, then you watch them twist themselves into pretzels trying to figure the whole thing out. There is a significant degree of rhetorical manipulation here – similar to what we see when an internet troll throws someone out of emotional equilibrium so that they can then control what the victim talks about and does. Something very similar is at work in the Turing test, and this is why, it seems, so many have taken up the challenge without questioning its basic premises.
How to Always Win at a Turing Test
It is not difficult to devise an extremely Freudian strategy to beat the machine in the Turing test time and again. All you have to do is ask the two entities behind the curtains a series of questions in a joking or sarcastic manner. Eventually it will become clear which entity is able to pick up on the joking or sarcastic tone and that entity will be the human. Yes, this will be more difficult to accomplish using a keyboard and screen than it would be face-to-face, but you can usually convey joking or sarcasm even through type alone.
The reason that this will always work is that machines do not and cannot possess the ability to recognise jokes or sarcasm, which represent a completely different, context-dependent type of language comprehension that only humans possess. For a computer the language that is fed into it can only say one thing. It must adhere to very strict rules and cannot be substantially ambiguous – which, of course, is precisely the nature of the joking or sarcastic remark. In contrast to the limitations of machine-language, human language can say two things, three things, many things.
“A man walks into a bar…” That statement can mean two different things. In one situation the man might order a drink, in the other he might get a bump on his head. It is such ambiguities in meaning that jokes play on and it is this nuance that no computer can pick up on.
Patient: “Doctor, doctor, I feel like a pair of curtains.”
Doctor: “Well, pull yourself together then!”
The humour – for what it is – in this joke arises because the doctor flips the context around on the patient. The patient – we assume (although again this is just an assumption on our part) – comes to the doctor and figuratively tells him that he feels like a pair of curtains. The doctor then takes this statement literally and utters a well-known phrase which overlaps with the patient’s metaphor to convey that the patient should get it together and sort out his problems. Meaning here is operating at any number of different levels and while we could input a set of rules into a computer to identify these sentences as a joke, the computer would never be able to “get” the joke in the same way that a human can because the machine would never be able to grasp the different levels of meaning operating at once that produce what we might call the joke-effect.
Yes, we could imagine a computer programmed to recognise every joke or rhetorical nuance ever uttered at any time in history, but then all we would have to do is come up with some new joke or rhetorical nuance and the machine would become confused. The difference, then, between a human being and a computer is that the human being has an entirely different relationship to language than the machine does. Whereas machine-language is precise and adheres to strict rules, human language is ambiguous, creative and tends to bend the very rules it implicitly relies upon.
The important thing to recognise here is the difference in the types of communication taking place. When a machine communicates it is doing so on the basis of a “signal/noise” dynamic. This is represented in the diagram below.
The computer tends to get a mixture of signal and noise as an input, and it then tries to disentangle the signal from the noise and process the information using a set of rigid, pre-established rules. The key thing to note here is the assumption that there exists some unambiguous “signal” underlying the information being fed in, which can be extracted using the pre-determined rule set.
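This signal/noise model of machine communication can be sketched very simply. The example below is an invented illustration, not anything from Turing or Mirowski: the receiver assumes a true underlying signal exists and applies one fixed, pre-established rule (here, a moving average) to strip the noise away. The numbers and the window size are arbitrary.

```python
def extract_signal(samples, window=3):
    """Recover an assumed underlying signal by rule-based smoothing.

    The rule is rigid and pre-determined: average the last `window`
    samples, whatever they are. No context is consulted.
    """
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A constant "true" signal of 10.0, corrupted with small noise:
noisy = [10.4, 9.7, 10.1, 9.9, 10.2, 9.8]
print(extract_signal(noisy))
```

The point is precisely that the procedure only makes sense if the clean signal is assumed to be there in the first place – which, as we argue below, is exactly the assumption that fails for human communication.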
Human communication is entirely different. In human communication there is no signal or noise; that is simply not how the process works. Human communication is heavily context-dependent, and there is rarely, if ever, a true signal underlying the information being conveyed that is then directly processed by the person listening.
To put this more colloquially: people spend most of their time miscommunicating with one another. This may seem odd and dysfunctional but it is not so. Consider an extreme example and we will see how the process works. A couple are watching a film together. The woman indicates that she is far too warm in an effort to get the man to turn the heating down. The man takes this as a signal that she wants to have sex and makes an advance. Although this is not the information the woman was originally trying to convey, it activates an underlying desire that outweighs the annoyances associated with the temperature, and our couple has a nice evening.
What appears to have been an act based on communication par excellence is in fact an act that has its roots in a fundamental miscommunication. This is actually how most human communication functions on a day-to-day basis. The reason that society does not crumble under such pressure is because we have various norms and taboos in place and people, to a very large extent, act in line with these. These rules and norms, however, are infinitely more flexible than the rules required for machines to process machine-language. But despite their often ambiguous nature, these rules do function quite well in holding the social fabric together (most of the time, anyway).
This is precisely, for example, why communication often breaks down when a person visits a totally alien culture. Suddenly a gesture that is a greeting in one’s native society becomes an act of war when applied in the new context. How much chaos has been caused throughout human history by miscommunication rooted in the different underlying social norms of different groups of people? Quite a lot, one would imagine.
Yes, Turing was very clever in telling a story that brought these issues up, as Jentsch said, “in such a way that attention is not directly focused upon the uncertainty”, but in doing so he was manipulating his audience emotionally. They came away from the piece largely thinking that Turing had established the criteria by which communication and thinking could be judged, but all he had done was engage in misdirection through clever rhetoric. By tricking people into thinking that machine-communication and human-communication were identical, Turing was able to convince innumerable people that they could use the language of cybernetics in the human sciences – and this is where the whole thing got remarkably dangerous.
Machine Dreams: Economics Becomes Computer Science
As Philip Mirowski has shown in his wonderful book ‘Machine Dreams’, it was not long before the language of computer science permeated deeply into the discourse of post-World War II neoclassical economics. If the reader is in any way familiar with the discourse of neoclassical economics they will not be remotely surprised. This is because neoclassical economics is, at its heart, all about such signal-and-noise types of communication.
Neoclassical economics is primarily concerned with how price signals communicate information in different marketplaces. For the neoclassicals, markets are conceived as a cacophony of human desires which, through the process of bargaining, is eventually reduced to certain price signals that convey who gets what. With the “noise” of different desires overcome, the price signals manifest themselves and a harmonious communication takes place between all the actors. Everyone gets what they want at a given price.
This is the underlying assumption made by modern neoclassical financial theory – also known as the Efficient Markets Hypothesis (EMH). Here the market is conceived of as a bunch of rational and irrational individuals. The rational individuals are acting in line with “true” information – that is, they are valuing assets in line with their “true” values, which in turn are based on a “rational” evaluation of how much the asset will be worth in the future. The irrational individuals are not doing this, however; they instead are acting on “false” information that is not arrived at in a rational manner. Thus, the rational individuals are seen as being “signal-traders” and the irrational individuals are seen as “noise-traders”. The market is then, like the computer, thought to establish perfect communication by eliminating the “noise-traders” through competition while promoting the “signal-traders”. Since the noise-traders are acting stupidly, the signal-traders will make all the money and the noise-traders will go broke.
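The EMH story just told can be sketched as a toy simulation. To be clear, this is an invented caricature of the argument, not a model from the EMH literature: the “signal” trader knows the asset’s true value by assumption, the “noise” trader’s estimate is randomly wrong, and every trade simply transfers the pricing error from the latter to the former.

```python
import random

random.seed(1)          # illustrative run; any seed gives the same conclusion
TRUE_VALUE = 100.0      # the "true" value the signal-trader is assumed to know

def simulate(rounds=1000):
    signal_wealth, noise_wealth = 0.0, 0.0
    for _ in range(rounds):
        # The noise-trader prices the asset with random error...
        noise_estimate = TRUE_VALUE + random.gauss(0, 10)
        # ...and trades at that mistaken price against the better-informed
        # signal-trader, so the gap is the signal-trader's gain.
        transfer = abs(noise_estimate - TRUE_VALUE)
        signal_wealth += transfer
        noise_wealth -= transfer
    return signal_wealth, noise_wealth

s, n = simulate()
print("signal-trader:", round(s, 2), "noise-trader:", round(n, 2))
```

Notice that the conclusion – noise-traders go broke, signal-traders make all the money – is baked into the assumptions: a knowable “true value” and errors that always cost the one who makes them. That is the very signal/noise premise being criticised here.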
Variants of this theory can then be thought up in which the noise-traders get the upper hand and the signal-traders lose, which causes the market to become dominated by “noise” and thus become unstable. This makes up many of the modern theories of financial instability and is even used by some economists to explain the 2008 crisis. However, as we have seen, the whole premise of the theory is wrong. The theory conceives of people as computers and not as human beings with an entirely more complex relation to language and communication. It assumes that there is some fundamental “signal” underlying all the “noise”, but this is simply not the case.
As we have already shown, human communication is not a signal/noise relationship. It is context-dependent and relies on highly flexible norms, rules and a perception of what Others think the “normal thing to do” is. The same is true when individuals interpret information – say, the price of an asset (a Mortgage Backed Security, maybe?). They do not look at it as a computer might, applying strict rules and inflexible criteria. Instead they see it through the lens of the far more flexible, context-dependent norms and rules of the institution they work in at that particular moment in history. This, in turn, is dependent on what everyone else in the market is doing. Keynes recognised this when he wrote about the marketplace as a sort of “beauty contest” (the reader might also watch this video):
[P]rofessional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practise the fourth, fifth and higher degrees. (GT, Chapter 12)
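Keynes’s contest has since been formalised by game theorists as the “p-beauty contest” – a later construction, it should be said, not Keynes’s own: each player guesses a number, and the winner is whoever comes closest to p times the average guess. Iterating “what does average opinion expect average opinion to be?” drives every best response downward, as this small sketch shows (the starting average and number of rounds are arbitrary):

```python
def iterate_beauty_contest(initial_average, p=2/3, rounds=5):
    """Each round, every player best-responds to last round's average guess.

    With target p * average and p < 1, guesses spiral toward zero as
    players climb Keynes's "third, fourth, fifth and higher degrees".
    """
    averages = [initial_average]
    for _ in range(rounds):
        averages.append(p * averages[-1])  # everyone targets p times the average
    return averages

print(iterate_beauty_contest(50.0))
```

The lesson is Keynes’s: the “right” guess depends entirely on what you think everyone else thinks everyone else will guess – there is no underlying signal to extract, only layers of mutual expectation.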
And it is for this reason that neoclassical pricing theory in general and neoclassical financial theory in particular need to be done away with completely. Human beings are not the rational calculators that neoclassicals think they are. They are certainly not so in financial markets, but neither are they so in markets more generally. What is generally referred to, derogatorily, as “herd behaviour” is nothing more than a manifestation of how human communication actually operates at a very fundamental level.
This is the case that needs to be made for financial regulation today. We must not say that “noise-traders” sometimes get ahead of “signal-traders”, as the neoclassicals would have it. This is complete sophistry and merely obscures the problem. No, the market is not a Rational Calculator at all; it is just a bunch of individuals who act in line with highly flexible norms and rules that evolve over time. We, as a society, can, however, impose limiting restrictions on which norms and rules win out through our legal institutions. We cannot rely on supposed “rationality” any more than we can rely on “clear communication” – such considerations are meaningless when applied to human beings. Instead we must have clear legal and institutional rules in place so that people know what they are allowed and what they are not allowed to do.