Philip Pilkington: Mistaking Men for Machines – How Neoclassical Economics Relies on Computer Science to Misunderstand Human Communication

By Philip Pilkington, a writer and research assistant at Kingston University in London. You can follow him on Twitter @pilkingtonphil

We have a lot to be thankful for today that we owe to Alan Turing – who is generally recognised as among the first, if not the first, of the computer scientists. But, on the other hand, we can also trace back to Turing a good deal that we should be in no way grateful for, as it has filled our minds with stupidities and our universities with people talking nonsense. Without detracting from Turing’s undoubtedly important achievements, we here focus on the latter: how some of Turing’s ideas came to infect the human sciences in general and economics in particular.

Alan Turing: Pre-Internet Troll

Perhaps there is some irony in the fact that one of the men responsible for the invention of the modern computer was also an insufferable troll who seems to have persistently engaged in acts designed to disturb the emotional equilibrium of those around him. Turing’s biographer Andrew Hodges relates one such incident, which well illustrates Turing’s charmingly knavish nature:

Alan was holding forth on the possibilities of a ‘thinking machine’. His high-pitched voice already stood out above the general murmur of well-behaved junior executives grooming themselves for promotion within the Bell corporation. Then he was suddenly heard to say: “No, I’m not interested in developing a powerful brain. All I’m after is a mediocre brain, something like the President of American Telephone & Telegraph Company”. The room was paralysed while Alan nonchalantly continued to explain how he imagined feeding in facts on prices of commodities and stocks and asking the machine the question “Do I buy or sell”?

It seems that it is in this vein that we should read his seminal 1950 paper ‘Computing Machinery and Intelligence’. What Turing was ostensibly dealing with in this paper was whether or not a computer could be said to “think”. However, at the very beginning of the paper Turing redefines “think” to mean simply that a computer might imitate a human being so perfectly that a person cannot distinguish between the computer and another human being. This, of course, is not the typical manner of discerning whether someone or something is thinking; however, we shall consider this point in more depth later on. For now let us simply examine what Turing was doing.

In the paper Turing proposed what came to be known as the “Turing test”. In this test a person would sit in front of two curtains; behind one is a computer and behind the other is another person. The person in front of the curtains would then communicate with the two mystery entities using a keyboard and a screen. Finally, they would try to discern which of the entities is human and which is machine.
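
For the programmatically inclined, the setup can be sketched in a few lines of toy Python (the responder functions below are placeholders of our own invention, standing in for the hidden entities):

```python
# A minimal sketch of the imitation game described above. The two responder
# functions are hypothetical stand-ins for the entities behind the curtains.
import random

def machine_respond(question):
    # Canned, rule-bound replies – a caricature of a 1950s-era machine.
    return "BUY." if "buy or sell" in question.lower() else "INSUFFICIENT DATA."

def human_respond(question):
    return "Hard to say. Are you asking seriously, or pulling my leg?"

def imitation_game(questions):
    # Hide the two entities behind anonymous labels, as behind Turing's curtains.
    responders = [machine_respond, human_respond]
    random.shuffle(responders)
    hidden = dict(zip("AB", responders))
    for q in questions:
        print(f"Q: {q}")
        for label in "AB":
            print(f"  {label}: {hidden[label](q)}")
    # The interrogator must now guess which of A and B is the human.

imitation_game(["Do I buy or sell?", "Nice curtains, aren't they?"])
```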

There is a strong element of trolling manifest in this thought experiment. Turing begins the paper in question by trying to set the reader off-balance, making the case that if you put a man and a woman behind the curtains, most people would not be able to guess which is which if the man tries to trick the person guessing. Turing then goes on to make the even more disconcerting proposition that if we replace either the man or the woman with a machine we still might not be able to tell them apart. His account is altogether unsettling – and one gets the distinct impression that this is purposefully so.

There is much fictional literature that deals with the anxiety Turing plays on. Many writers have noted that machines which mimic humans are for some reason extremely disconcerting. They appear to imitate life, and this leads us to question whether there is life behind the icy exterior – which in turn leads us to question what, in fact, life is. Sigmund Freud was well aware of the psychological effects such fantasies or thought experiments could have. Indeed, a discussion of such a fantasy occupies much of his classic paper entitled “The Uncanny”. Freud summarises the effects of what he calls “the uncanny” as follows:

The subject of the “uncanny” is a province of this kind. It undoubtedly belongs to all that is terrible — to all that arouses dread and creeping horror; it is equally certain, too, that the word is not always used in a clearly definable sense, so that it tends to coincide with whatever excites dread.

The Uncanny, then, is the province of the modern day troll. Freud then goes on to discuss the point of reference of another author on the uncanny, and it is here that he introduces the automaton, or machine-imitator of the human being. The other author is the psychiatrist Ernst Jentsch, who takes up a problem that will strike us as almost identical to that of Turing’s test:

In telling a story, one of the most successful devices for easily creating uncanny effects is to leave the reader in uncertainty whether a particular figure in the story is a human being or an automaton; and to do it in such a way that his attention is not directly focused upon his uncertainty, so that he may not be urged to go into the matter and clear it up immediately, since that, as we have said, would quickly dissipate the peculiar emotional effect of the thing.

Although Jentsch is discussing a work of horror-fiction, we can see the same narrative device at work in Turing’s discussion of the computer and the human. The trick is to disconcert the reader into trying to clear up the problem posed. First you set the reader off their emotional equilibrium with an offensive problem, then you watch them twist themselves into pretzels trying to figure the whole thing out. There is a significant degree of rhetorical manipulation here – similar to what we see when an internet troll tries to throw someone out of emotional equilibrium so that they can then control what the victim talks about and does. Something very similar is at work in the Turing test, and this is why, it seems, so many have taken up the challenge without questioning its basic premises.

How to Always Win at a Turing Test

It is not difficult to devise an extremely Freudian strategy to beat the machine in the Turing test time and again. All you have to do is ask the two entities behind the curtains a series of questions in a joking or sarcastic manner. Eventually it will become clear which entity is able to pick up on the joking or sarcastic tone and that entity will be the human. Yes, this will be more difficult to accomplish using a keyboard and screen than it would be face-to-face, but you can usually convey joking or sarcasm even through type alone.

The reason that this will always work is that machines do not and cannot possess the ability to recognise jokes or sarcasm, which represent a completely different, context-dependent type of language comprehension that only humans possess. For a computer the language that is fed into it can only say one thing. It must adhere to very strict rules and cannot be substantially ambiguous – which, of course, is the nature of the joking or sarcastic remark. In contrast to the limitations of machine-language, human language can say two things, three things, many things.
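
To make the point concrete, here is a deliberately crude sketch (ours, not anything from Turing or the AI literature) of rule-bound reading: a fixed keyword rule assigns each utterance exactly one meaning, so a sarcastic remark is classified exactly like a sincere one:

```python
# A crude illustration of rule-bound reading: one fixed rule, one meaning
# per utterance. Sarcasm is invisible to it.
POSITIVE_WORDS = {"great", "wonderful", "love", "perfect"}

def literal_sentiment(utterance):
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return "positive" if words & POSITIVE_WORDS else "neutral"

print(literal_sentiment("What a wonderful day!"))                       # positive
print(literal_sentiment("Oh, perfect. The train is cancelled again."))  # positive – sarcasm lost
```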

“A man walks into a bar…” That statement can mean two different things. In one situation the man might order a drink, in the other he might get a bump on his head. It is such ambiguities in meaning that jokes play on and it is this nuance that no computer can pick up on.

Patient: “Doctor, doctor, I feel like a pair of curtains.”

Doctor: “Well, pull yourself together then!”

The humour – for what it is – in this joke arises because the doctor flips the context around on the patient. The patient – we assume (although again this is just an assumption on our part) – comes to the doctor and figuratively tells him that he feels like a pair of curtains. The doctor then takes this statement literally and utters a well-known phrase which overlaps with the patient’s metaphor, to convey that the patient should get it together and sort out his problems. Meaning here is operating at any number of different levels, and while we could input a set of rules into a computer to identify these sentences as a joke, the computer would never be able to “get” the joke in the way a human can, because the machine would never grasp the different levels of meaning operating at once that produce what we might call the joke-effect.

Yes, we could imagine that a computer could be programmed to recognise every joke or rhetorical nuance ever uttered at any time in history, but then all we would have to do is come up with some new joke or rhetorical nuance and the machine would become confused. The difference, then, between a human being and a computer is that the human being has an entirely different relationship to language than the machine. Whereas machine-language is precise and adheres to strict rules, human language is ambiguous, creative and tends to bend the rules that it implicitly relies upon.
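
A sketch of the rule set imagined in the last two paragraphs (again a toy of our own devising, not a real system) makes the asymmetry plain: the program can be told to label the curtains exchange a joke, but a new joke built on a different idiom sails straight past it, and in neither case is anything “got”:

```python
# A toy joke "detector" of the kind imagined above: it flags punchlines that
# literalise an idiom it already knows. It labels jokes; it gets nothing.
KNOWN_IDIOMS = ["pull yourself together"]

def flags_as_joke(setup, punchline):
    return any(idiom in punchline.lower() for idiom in KNOWN_IDIOMS)

print(flags_as_joke("Doctor, I feel like a pair of curtains.",
                    "Well, pull yourself together then!"))    # True
print(flags_as_joke("Doctor, I feel like wigwams and tee-pees.",
                    "Ahhh, I think you are two tents."))      # False – a new joke defeats the rules
```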

The important thing to recognise here is the difference in the types of communication taking place. When a machine communicates it does so on the basis of a “signal/noise” dynamic: a sender encodes a message into a signal, noise enters the channel in transmission, and the receiver tries to recover the original signal from the mixture.

The computer tends to get a mixture of signal and noise as an input; it then tries to disentangle the signal from the noise and process the information using a set of rigid, pre-established rules. The key thing to note here is the assumption that there exists some unambiguous “signal” beneath the information being inputted, one that can be extracted using the pre-determined rule set.
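
In code, the machine’s side of the story looks something like the following toy channel (our construction, after the standard signal/noise model): a clean signal exists by assumption, noise corrupts it in transit, and a fixed, pre-established rule – majority vote over repeated copies – recovers it:

```python
# A toy signal/noise channel. The sender's clean bits exist by assumption;
# noise flips some transmitted copies; a fixed rule (majority vote over the
# copies) recovers the signal. No context, no ambiguity.
import random

def transmit(bits, noise_rate=0.2, repeats=5):
    return [[bit ^ (random.random() < noise_rate) for _ in range(repeats)]
            for bit in bits]

def receive(channel_output):
    return [int(sum(copies) > len(copies) / 2) for copies in channel_output]

message = [1, 0, 1, 1, 0]
print(receive(transmit(message)))  # almost always recovers [1, 0, 1, 1, 0]
```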

Human communication is entirely different. In human communication there is no signal or noise. That is simply not how the process works. Human communication is heavily context dependent and there is rarely, if ever, a true signal underlying the information being conveyed that is then directly processed by the person listening.

To put this more colloquially: people spend most of the time miscommunicating with one another. This may seem odd and dysfunctional but it is not so. Consider an extreme example and we will see how the process works. A couple are watching a film together. The woman indicates that she is far too warm, in an effort to get the man to turn the heating down. The man takes this as a signal that she wants to have sex and makes an advance. Although this is not the information the woman was originally trying to convey, it activates an underlying desire that outweighs the annoyances associated with the temperature, and our couple has a nice evening.

What appears to have been an act based on communication par excellence is in fact an act that has its roots in a fundamental miscommunication. This is actually how most human communication functions on a day-to-day basis. The reason that society does not crumble under such pressure is because we have various norms and taboos in place and people, to a very large extent, act in line with these. These rules and norms, however, are infinitely more flexible than the rules required for machines to process machine-language. But despite their often ambiguous nature, these rules do function quite well in holding the social fabric together (most of the time, anyway).

This is precisely why, for example, communication often breaks down when a person visits a totally alien culture. Suddenly a gesture that is a greeting in one’s native society becomes an act of war in the new context. How much chaos has been caused throughout human history by miscommunication arising from different underlying social norms between different groups of people? Quite a lot, one would imagine.

Yes, Turing was very clever in telling a story that brought these issues up, as Jentsch said, “in such a way that attention is not directly focused upon the uncertainty”, but in doing so he was manipulating his audience emotionally. They came away from the piece largely thinking that Turing had established the criteria by which communication and thinking could be judged, but all he had done was engage in misdirection through clever rhetoric. By tricking people into thinking that machine-communication and human-communication were identical, Turing was able to convince innumerable people that they could use the language of cybernetics in the human sciences – and this is where the whole thing got remarkably dangerous.

Machine Dreams: Economics Becomes Computer Science

As Philip Mirowski has shown in his wonderful book ‘Machine Dreams,’ it was not long before the language of computer science permeated deeply into the discourse of post-World War II neoclassical economics. If the reader is in any way familiar with the discourse of neoclassical economics they will not be remotely surprised. This is because neoclassical economics is, at its heart, all about such signal-and-noise types of communication.

Neoclassical economics is primarily concerned with how price signals communicate information in different marketplaces. For the neoclassicals, markets are conceived as a cacophony of human desires which, through the process of bargaining, is eventually reduced to certain price signals that convey who gets what. With the “noise” of different desires overcome, the price signals manifest themselves and a harmonious communication takes place between all the actors. Everyone gets what they want at a given price.

This is the underlying assumption made by modern neoclassical financial theory – also known as the Efficient Markets Hypothesis (EMH). Here the market is conceived of as a bunch of rational and irrational individuals. The rational individuals are acting in line with “true” information – that is, they are valuing assets in line with their “true” values, which in turn are based on a “rational” evaluation of how much the asset will be worth in the future. The irrational individuals are not doing this, however; they instead are acting on “false” information that is not arrived at in a rational manner. Thus, the rational individuals are seen as being “signal-traders” and the irrational individuals are seen as “noise-traders”. The market is then, like the computer, thought to establish perfect communication by eliminating the “noise-traders” through competition while promoting the “signal-traders”. Since the noise-traders are acting stupidly, the signal-traders will make all the money and the noise-traders will go broke.
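
The story can be caricatured in a few lines (this is our toy sketch, not any canonical model from the finance literature): hard-code a “true” value, let signal-traders trade toward it and noise-traders trade at random, and competition duly bankrupts the noise-traders:

```python
# A caricature of the signal-trader/noise-trader story. Note the giveaway:
# the exercise only works because we hard-code a TRUE_VALUE – precisely the
# assumption disputed below.
import random

TRUE_VALUE = 100.0

def simulate(rounds=10_000):
    signal_pnl = noise_pnl = 0.0
    for _ in range(rounds):
        price = TRUE_VALUE + random.gauss(0, 5)        # noise knocks price off "true" value
        mispricing = TRUE_VALUE - price
        signal_position = 1 if mispricing > 0 else -1  # buy cheap, short dear
        noise_position = random.choice([-1, 1])        # trade on "false" information
        # Assume the price reverts to TRUE_VALUE, so a position earns its mispricing.
        signal_pnl += signal_position * mispricing
        noise_pnl += noise_position * mispricing
    return round(signal_pnl), round(noise_pnl)

print(simulate())  # signal P&L grows steadily; noise P&L is a mean-zero random walk
```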

Variants of this theory can then be thought up in which the noise-traders get the upper hand and the signal-traders lose, causing the market to become dominated by “noise” and thus unstable. This makes up many of the modern theories of financial instability and is even used by some economists to explain the 2008 crisis. However, as we have seen, the whole premise of the theory is wrong. The theory conceives of people as computers and not as human beings with an entirely more complex relation to language and communication. It assumes that there is some fundamental “signal” underlying all the “noise”, but this is simply not the case.

As we have already shown, human communication is not a signal/noise relationship. It is context dependent and relies on highly flexible norms, rules and a perception of what Others think the “normal thing to do” is. The same is true when individuals interpret information – say, the price of an asset (a Mortgage Backed Security, maybe?). They do not look at it as a computer might, applying strict rules and inflexible criteria. Instead they see it through the lens of the far more flexible, context-dependent norms and rules of the institution they work in at that particular moment in history. This, in turn, is dependent on what everyone else in the market is doing. Keynes recognised this when he wrote about the marketplace as a sort of “beauty contest”:

[P]rofessional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practise the fourth, fifth and higher degrees. (GT, Chapter 12)
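
The standard classroom formalisation of Keynes’ beauty contest is the “guess 2/3 of the average” game, and a few lines of toy code (ours) show how the “right” answer depends entirely on how many levels of other-guessing you attribute to everyone else:

```python
# Keynes' beauty contest as the classic "guess 2/3 of the average" game:
# each player names a number in [0, 100] and the winner is whoever comes
# closest to 2/3 of the average guess. A good guess therefore depends on
# what you think others think others will guess.
def guess_at_depth(levels, naive_average=50.0):
    guess = naive_average
    for _ in range(levels):
        guess *= 2 / 3  # "they will guess g, so I guess 2/3 of g", iterated
    return guess

for k in range(6):
    print(f"depth {k}: guess {guess_at_depth(k):.1f}")
# Iterated to infinity, the "fully rational" answer is 0 – but a player who
# answers 0 loses in practice, because average opinion never reasons that far.
```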

And it is for this reason that neoclassical pricing theory in general and neoclassical financial theory in particular needs to be done away with completely. Human beings are not the rational calculators that neoclassicals think they are. They are certainly not so in financial markets but neither are they so in markets more generally. What is generally referred to derogatively as “herd behaviour” is nothing more than a manifestation of how human communication actually operates at a very fundamental level.

This is the case that needs to be made for financial regulation today. We must not say that “noise-traders” sometimes get ahead of “signal-traders”, as the neoclassicals would have it. This is complete sophistry and just obscures the problem completely. No, the market is not a Rational Calculator at all; it is just a bunch of individuals who act in line with highly flexible norms and rules that evolve over time. We, as a society, can, however, impose limiting restrictions on which norms and rules win out through our legal institutions. We cannot rely on supposed “rationality” any more than we can rely on “clear communication” – such considerations are meaningless when applied to human beings. Instead we must have clear legal and institutional rules in place so that people know what they are and are not allowed to do.


114 comments

  1. Chris Engel

    Computers are the ultimate rational agents.

    In the neoclassical world they could just do all the trading and we could extract the rents and enjoy leisure over labor.

    1. digi_owl

      HFT? Except that they are basically proving that interaction of apparently rational rules can have irrational results (flash crash, anyone?).

  2. holygrail

    It’s kind of a side argument but I don’t think your depiction of computers and Turing’s test is correct.

    If the computer can’t recognize sarcasm or jokes (assuming it doesn’t lack the cultural references, of course), then it’s failed the test, period. It’s not human-level intellect. You seem to start with the assumption that because no computer can do it right now, it’s theoretically impossible to pass the test. This is unknown; you can’t “prove” it with vague descriptions of signal/noise transmissions.

    For example, if one were to emulate a human brain on a computer, neuron by neuron, senses, memories and all, it would be in principle indistinguishable from the real thing; it would definitely be able to understand sarcasm and it would pass the Turing test. It is not known whether this is possible, but many very smart people think it is and are making progress in research in this direction. There are also other people pursuing a similar agenda with different methods (not based on emulation of biology).

    The fact that computers are built out of simple components and are basically deterministic machines ruled by mathematical rules doesn’t mean that they can’t achieve any degree of complexity. You say:

    > The difference, then, between a human being and a computer is that the human being has an entirely different relationship to language than the machine.
    > Whereas machine-language is precise and adheres to strict rules, human language is ambiguous, creative and tends to bend the rules that it implicitly relies upon.

    That’s a category error. Machine language and human language are not comparable; “language” is a misleading word there. Machine language can be combined, and machines can be programmed to conform to any set of rules. There’s no known reason why machines couldn’t be programmed to learn human language. Human brains are thought to be extremely complex machines; we don’t yet understand how everything works, but there’s no known reason why we shouldn’t be able to replicate that machinery once we understand enough. Right now we simulate the folding of proteins – we’ve been able to teach computers the “language” of molecular biology. Is there a reason why we can’t progress on that path?

    Turing’s test is not trolling at all; it’s in fact beautiful in its simplicity. Intelligence is elusive: we can’t describe it very well, but we can usually tell if it’s there pretty easily if we try. Turing’s test evades all need to define, categorize or philosophize about intelligence and goes right down to what matters: is it there?

      1. vlade

        Actually, I can turn lead into gold; the problem is that it’s very expensive and the gold tends to be radioactive (mercury to gold is a bit less expensive – they managed it almost 100 years ago – but the gold is still radioactive). The question is whether radioactive gold matters if the point is to rebury it anyways ;).
        Interestingly, gold to lead is much simpler – alchemists managed that (or at least turning gold into less gold) hundreds of years ago…

        1. Kurt Sperry

          Late to the party–here on the West coast of North America these interesting discussions are pretty much in the can and unlikely to be read or replied to by afternoon here–but having read through, a brief request: maybe we are looking at “intelligence” in too anthropocentric a manner. There are (at least) as many types of intelligence as there are sentient species, and when flattering ourselves that our intelligence is special we should remember that we are hardly impartial judges. There is a real possibility that a truly objective observer might conclude some cetaceans are in fact more intelligent than us humans.

          I think creating a computer that can solve dynamic problems, navigate in 4D space, do multiplex communications including accurately reading body language while balancing, closed-loop controlling elaborate muscular systems etc. etc. etc. as well as even, say, a corvid or an octopus would be full proof of concept. And species we probably wouldn’t associate with intelligence at all do most of those successfully.

          Can we drop the axiomatic assumption that we humans can objectively define the yardstick with which we measure other species against ourselves? Simply ceasing to assume that human intelligence is massively different in kind from, or categorically superior to, other species’ can be brought to bear to re-examine all sorts of philosophical questions in profound ways. Try it as a thought experiment if nothing more.

          1. LifelongLib

            “Late to the party–here on the West coast of North America…”

            Here in Hawaii I often miss the party completely…

          2. LifelongLib

            I suppose when we think of AI we’re thinking of a machine that can make human-like (and humanly-comprehensible) decisions “better” than a human can, or do so in situations where human decision-making is impossible. A machine that somehow developed its own type of intelligence might be worse than useless.

      2. vlade

        And on Dreyfus – even he admits that sub-symbolic methods (like the evolutionary approach to problem solving I already mentioned) bypass his arguments. The biological and psychological assumptions that existed in AI in the 60s are no longer there now, and I’d even say that to an extent the epistemological and ontological ones are gone too (in terms of having perfectly defined symbols that capture everything).

        Again, I say this is a misleading path though – the (flawed) argument that markets are rational and can be approximated by simulating fully rational actors does not rest on whether AI is or isn’t achievable, but if it looks like it does, you may hand an easy victory to the others (P claims that no AI being possible means our theories are wrong; we can persuade people that no-AI-possible is wrong; therefore our theories are right).

        1. Philip Pilkington

          Don’t be too concerned with “easy victories” I hand to others. Let me worry about that.

          Regarding Dreyfus’ acceptance of “learning machines”, I think he’s softened in his old age – and he’s been accosted by the AI establishment. His original critiques apply just as well to pattern recognition software and neural nets and all that; the new technologies rely on methods similar to those used in modern econometrics and remain reliant on a view of the world in which symbols can replace human consciousness (only one component of which relies on symbols). This is as flat wrong as anything else.

          To put it one way: an econometrics program cannot generate context or create new data.

          1. craazyman

            Doctor doctor I feel like wigwams and tee-pees!

            Ahhh, I think you are two tents.

            This looks like a good one, Phil. Maybe you won’t get sucked up into the giant maggot after all. Haven’t read it yet, but I will, and if I have anything remotely intelligent to say, I won’t hesitate.

          2. vlade

            Learning machines and evolved machines are two very different things.

            Saying “we cannot evolve a machine to mimic humans” is something much, much stronger than “we cannot engineer a machine to mimic humans”. Engineering involves understanding; evolution doesn’t (in fact, the longer it takes for something to evolve, the harder it is to understand, which may well lie behind Moravec’s paradox). Of course, for us as humans it’s hard to admit we may not be able to understand something (even though all of us on a daily basis use things we don’t understand at all, and which likely no single person understands in their entirety – but at least we have the warm knowledge that potentially someone might be able to understand it).

            Of course, the other drawback is that evolution takes time, so trying to evolve a machine to mimic a human (MTMH) is not something that will ever happen overnight. But saying it’s fundamentally impossible is way, way too strong a claim (if nothing else, it begs a definition of machine. Will we accept only silicon as machine? Or will other compounds, up to and including organic ones, be prohibited?)

            Ultimately, the only argument as to why we shouldn’t be able to evolve an MTMH is a religious one, i.e. that there’s something out there that’s much more than just matter, fundamentally unknowable/unmanipulable by us. And then it comes down to an opinion/belief.

          3. Philip Pilkington

            Okay. Well, when someone evolves a machine that can tell a good joke that isn’t a pun and isn’t based on pattern recognition (see the comment and article in reply to ‘ok’ below), then get back to me.

            You’re right. This is a question of belief. When confronted with the issue of machines not being able to understand context Turing said:

            “We cannot so easily convince ourselves of the absence of complete laws of behaviour … The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, ‘We have searched enough. There are no such laws.'”

            Two observations. (1) Turing’s argument is based on a belief in finding “complete laws of behaviour”, not dissimilar to medieval theologians trying to get a grasp on the infinite essence of God. (2) Turing seems to think that we can only know whether such laws exist through “scientific observation”. This is completely wrong-headed in my opinion. I think you can tell quite well whether machines can think in a certain way by examining their underlying framework and comparing it to human consciousness. By Turing’s logic, if you cannot positively disprove something then it may exist and is worth seeking out – perhaps he should have spent his life chasing bigfoot.

      3. holygrail

        Dreyfus’ critique was correct of the AI approach followed in the 60s. It’s not so relevant in the modern day, which has largely moved on from the “classic” AI approach of that time. And it’s not true that most modern theoreticians accepted it (please stop repeating that); most ignore it, in fact. People he criticized, such as Peter Norvig, are actually busy successfully applying their ideas in self-driving cars, medical diagnostic systems and language interpreters (all achieved with modern techniques that weren’t fully realized when Dreyfus wrote his books).

        You can consider it proven or a finished debate as much as you want, but it’s not. Many smart people are working on progressing down this path; do you really think they would do so if it were so clearly impossible in principle? And even if it’s far from complete, that research yields practical applications all the time.

        Your view of computers as symbolic or strict rule-based machines is misleading even if we never achieve human-level intellect. Even nowadays we have achieved things like artificial vision, a problem that contains a lot of ambiguity and interpretation. This is not possible with mathematically straightforward rules alone, and in fact the implementations use pattern-matching abilities similar to the structure the brain is supposed to use.

        Your assertions like “a computer will never be able to get a joke” are just that: assertions.

      4. Andrew

        I think you can tell quite well whether machines can think in a certain way by examining their underlying framework and comparing it to human consciousness.

        If we understood human consciousness, then yes, we could do that. That would also mean that either (1) we could create an AI equivalent to a human or (2) we could not create such an AI because humans have some immaterial component (i.e., a soul).

        If you deny the existence of a soul, I don’t see how you can argue that AI is impossible. A computer with AI need not closely resemble today’s computers. I’m not arguing that we’re anywhere close to AI; I’m arguing that it must be possible if one takes a materialistic view of the world. Simply perform a complete simulation of the brain. You didn’t say that AI needs to be efficient.

        computer: one that computes; specifically : a programmable usually electronic device that can store, retrieve, and process data (Merriam-Webster online)

    1. Thorstein

      Of course, if a computer could be constructed to replicate a human brain neuron-by-neuron, then it would have to be taught, and it would not reliably follow orders unless great investments were made in strictly controlling its inputs. The MOTU have little interest in investing in such an artificial brain. (On the other hand, they are heavily invested in controlling the inputs to brains made the old-fashioned way.)

      The MOTU are, of course, extremely fond of brains that follow orders mindlessly. And, yes, such brains are clueless about sarcasm. Was it the case that, leading up to the Great Financial Crisis, the price signals were sarcastic?

    2. Yalt

      You are assuming that it is possible to emulate the brain, neuron by neuron, on a Turing machine. There are mathematical arguments (see Penrose for example) that this is not the case, that it is *in principle* impossible.

      1. Birch

        Computers have a high signal to noise ratio. That’s why they reproduce information put into them accurately; the noise rarely overwhelms the on/off signal of the computation so every bit is reproduced the same as it was recorded. To wire a modern computer up just like a human brain would not result in a computer that acted like a brain, even if it were structurally possible.

        The human brain has a way lower signal level. This means that many of the signal impulses are washed out by noise, but it also means that there are many magnitudes more signal impulses using the same power a modern computer would use. This low power usage is what allows the human brain to have so many impulses without overheating or starving. So rather than rely on accuracy, the human brain uses a ridiculous number of redundant impulses, many of which are noise, and even many of the non-noise signals are contradictory. Sorting through these contradictory signals to find ‘truth’ (or the most relevant outcome) may be the key to thought and contextual understanding.

        For a computer to have all the connections of a human brain, it would need to operate at way lower signal levels so it wouldn’t overheat, and so it could be made small enough to function. This lower signal level would allow much more noise to influence the outcome of computations. Such a computer could theoretically approach a human way of thinking if it learned how to effectively sort through the noise; but it would be useless for playing back recorded music, looking at pictures, or computing precise stock trades because it would have its own influence on how such things were performed and presented. The question then would be why we would want to build such a machine when we have humans already.

        I heard about this on the CBC from a guy trying to design a noisy computer chip with super low power draw and way more switch points. Sorry, can’t remember his name.

        1. Philip Pilkington

          Two quick points.

          (1) Your conception of the human brain and how it works uses computer metaphors. If the human brain actually possesses properties that are not accounted for in your metaphorical framework, then said framework cannot really be used to talk about human consciousness. Then it’s a bit like how certain mechanists in the 18th century used to talk about the human brain using mechanical metaphors. They were just taking a dominant metaphor that was popular in science and applying it to something completely different, producing no results.

          http://mechanism.ucsd.edu/~bill/teaching/w12/philneuro/metaphorsandconceptionsofbrain.key.pdf

          (2) There is an assumption in the reconstructing-neuron-by-neuron argument that the whole is just the sum of its parts. This is probably not the case, not just in the brain but in biological organisms more generally. In fact, to even begin to talk of constructing a human brain from scratch while we cannot even build the equivalent of an ant is completely fanciful. Personally I don’t believe we’ll EVER achieve these things. But even if we do… step-by-step…

          1. Birch

            I’m with you there, Philip. The computer and human brain comparisons usually start from the understanding that both are fundamentally binary. I was just trying to point out one of the most obvious ways that these binary systems are fundamentally different. Not only are they built differently and function differently, but their uses are also completely different.

      2. holygrail

        There are no arguments fully accepted that it’s “in principle” impossible.

        Penrose’s argument is that there might be quantum-induced phenomena in the brain that are not Turing-computable. This is pure speculation and he has no proof whatsoever for it. It’s just an a posteriori explanation for why the human brain is so hard to understand. It’s very unlikely that it’s a good argument, and it’s not even a thesis yet. Also, there are quantum computers in the works, so even if it were true there’s no reason it would be the end of it.

      3. LifelongLib

        There’s an old Star Trek episode where a “computer” that can control a starship is being tested. Mr. Spock soon observes that it’s not behaving like a computer at all. It turns out that it’s actually an electronic replica of the inventor’s brain, including a genius IQ and an undiagnosed mental illness. Mayhem follows as the device becomes increasingly paranoid.

  3. jake chase

    Thanks for explaining why I enjoy listening to George Carlin, and never listen to Clinton, Obama or any other politician under any circumstances.

    1. ted braun

      Jake Chase is a very wise man, and you are probably more fun to be with than those who listen to Clinton and Obama. The talking heads speak and their listeners echo.

      1. NotTimothyGeithner

        This isn’t quite right. At least in my experience, the followers of people like Obama/Clinton/JFK just make up what they heard or don’t care. Feeling good is their payoff. Or at least, I’ve failed to meet followers who could repeat what these people said beyond the buzzwords.

        Republicans repeat nonsense from their leaders and are on watch for people not toeing the line, and maybe this is the key difference between Democratic and Republican followers. When Obama said at the debates that he and Romney had the same position on Social Security, Democrats never said a word, but if Romney had violated a key tenet like that, Republicans would have gone crazy.

  4. vlade

    If we define Turing as a troll for his use of language, I believe we can categorize this article as trolling by the same arguments. As a commenter before me said, putting an equals sign between machine language and human language is misleading. Even a comparison between machine language and the ways neurons communicate is misleading.

    Incidentally, I’d recommend you read up on evolutionary algorithms used in evolving electronic circuits – not computer-simulated ones, but real circuits.

    It’s fascinating, as the results can be nothing that a human engineer would ever create (because the solution often ends up being a supercomplicated system of feedback cycles), and indeed there was a circuit evolved which stopped working properly when a _disconnected and unused_ part of the board was removed. So implying that the human brain can’t ever be replicated in silicon is just an opinion, not a supportable fact.

    Overall, that part very much detracts from a much more valid point: that assuming rationality in markets is wrong.

    1. Philip Pilkington

      “If we define Turing as troll for use of language, I believe we can categorize this article as trolling with the same arguments.”

      Haha! Yes. But different target. And slightly different arguments.

    2. Gerard Pierce

      One of the more interesting results was an evolved oscillator circuit – which had no components capable of oscillating. It turned out that the evolved circuit was able to function by parasiting on the 60-cycle electrical power in the lab. It could not work at all if you moved it to a location with no 60-cycle power.

  5. burnside

    Philip, the present state of AI rather makes a hash of your notions of it, but we follow your argument nonetheless.

    1. Philip Pilkington

      Care to be specific about what exactly AI has disproved about what I’m saying about the difference between machine communication and human communication?

      P.S. I agree about his biographers, but I think of Turing as something of a lovable rogue…

      1. burnside

        I think some of the best recent work is coming from Marvin Minsky at MIT. He’s been pressing the layered complexity of ideation on his colleagues for some time now.

        You could easily imagine the fellow had read your characterization of AI. And then got to work.

        1. Philip Pilkington

          He did, in a sense. He engaged with Dreyfus (reluctantly and after years of ridicule). But what he hasn’t understood is that machines cannot overcome these problems because of their nature. You cannot get them to form layered understanding by building more complex circuits or putting pattern recognition software into them.

          As I said above, the AI community has largely accepted Dreyfus’ arguments, but they have no idea what to do with them. My suggestion? Stop trying to think that machines can mimic humans à la Turing. It’s simply not true.

          1. burnside

            Have been reading Dreyfus on phenomenology. Perhaps not here or now, but I suspect someone will eventually get it right.

            I agree, then, regarding the current state.

          2. Philip Pilkington

            You’re far more optimistic than I! I’m glad Dreyfus has penetrated the AI community though. At least now they’re asking the right questions — whether they can answer them is an entirely different question.

  6. burnside

    I might add that Turing’s biographers – book-form or essayists – tend not to like him very much. This may or may not have anything to do with Turing himself.

  7. ok

    “The reason that this will always work is that machines do not and cannot possess the ability to recognise jokes or sarcasm, which represent a completely different, context-dependent type of language comprehension that only humans possess. For a computer the language that is fed into it can only say one thing. It must adhere to very strict rules and cannot be substantially ambiguous – which, of course, is the nature of the joking or sarcastic remark. In contrast to the limitations of machine-language, human language can say two things, three things, many things.”

    That’s patently false. There is nothing that stops a computer from working with imprecise things, or things that turn into different things in different contexts. It is in fact quite easy.

    That of course doesn’t mean the possibility of an artificial computer brain is a given.

    1. Philip Pilkington

      They can “work with imprecise things”. Sure. But they cannot comprehend meaning at two different levels simultaneously. Totally impossible. A computer cannot and will never be able to “get” a joke.

      1. ok

        What does it mean to “comprehend meaning” and to “get a joke”? How do you recognize that someone/something can do it? You are using terms that we are trying to define and understand, effectively running a circular argument.

        I think what is blinding you is that you think that because at the basic level computers use ones, zeros and simple binary logic, they are forever bound to such simple schemes, unable to work with more complex things.

        Again, that notion is false. Not only is binary logic not the only possible approach for artificial machines, but you fail to see that simple things can lead to much more complex things. You know, emergence and all that. The human brain is a good example.

        And so computers, among other complex things, can simulate quantum mechanics, which for sure isn’t known for its either-or nature and avoidance of the problem of coping with different things simultaneously.

        Sure, pure symbolic manipulation seems to be a dead end for AI, but computers’ abilities certainly aren’t limited to just that.

        1. Philip Pilkington

          It’s not a circular argument.

          “What does it mean to “comprehend meaning” and to “get a joke”?”

          I went through this in depth in the piece. I said that, unlike a machine, people generally do not communicate clearly at all. Most human communication is miscommunication. Jokes highlight this dimension of human existence and this is what makes them funny.

          On the rest of it, I just disagree. But I’m not going to convince you. Besides, the piece was more about applying computer-style reasoning to human behavior. No one seems to have bothered with that though…

          1. ok

            Sure, people don’t communicate clearly. But there is nothing fundamental that forces computers to process and communicate only clear precise “terms”, each with one exactly defined meaning at a time.

          2. Philip Pilkington

            Well, that’s good to hear. I’ll expect to have my computer making me laugh within a few years then.

            Yeah… I’ll remain skeptical.

          3. ok

            They already can do that, as you pointed out below (well, depending on your age and taste in humour).

        2. Daniel

          This is what happens when smart people think their expertise in one field translates to another. It’s equivalent to the argument that flying cars will never exist because cars have four wheels and wheels are very poor wings.

          That’s a shame because I think the main argument is spot on. The comment section is dominated by discussions about the setup as opposed to the meat of the argument.

      2. larry

        Now why might this be? If “getting a joke” involves non-computable functions, then the computers we can at present conceive can not get a joke.

        Another way of looking at this issue: if jokes depend on non-computable functions, and our computers are partial recursive, then it might be that the human brain cannot be completely described by means of recursive functions. It would then follow that the human brain is not a recursive machine, which is the only type we are able to design.

        This also means that the human brain does not satisfy the Church-Turing thesis and, hence, is not a Turing machine. All Turing machine output is produced by means of computable functions, leaving aside the Halting problem.

        Looked at from the vantage point of Goedel’s incompleteness theorems, we can ask whether a machine of the sort we are considering could have conceived of or “computed” this set of theorems. One human brain did this, and we can therefore conclude that at least this brain may well not have been a Turing machine. If that brain wasn’t a Turing machine, then, given the similarities in the genetic construction of all human brains, it follows that all human brains may fail to be Turing machines.

        A related way of looking at this problem is via Searle’s Chinese Room thought experiment. Symbolic information is inputted into the room in Chinese. The room is inhabited by a human being who knows no Chinese but has a translation book. By means of this book, the human is able to produce relevant, readable Chinese output. The Chinese room is analogous to a computing machine that takes input and produces output according to an algorithmic process. No understanding is involved or is needed. According to Searle, the room functions as a syntactic processor. But humans essentially invoke semantic processes which enable them to understand what they do. On this basis, Searle concludes that a human being is not a computing machine. This is a mere sketch, but this thought experiment permeates the literature on whether minds are machines.

        ****************
        Turing’s test: while you set out a feature of Turing’s personality that might be less appealing (Keynes had a similar quirk, which had unfortunate consequences for the UK), this has nothing to do with the logic of Turing’s test, as I am sure you would agree. The test does not depend on any assumptions about the setup of the machine involved in the test. It is a pure behavioral test, and B F Skinner would have loved it. That is both its strength and its weakness.

        Turing’s conclusion – that if the differences in output do not allow you to differentiate between the machine and the human, then the way such output is arrived at is irrelevant – is surely misconceived. Any explanation concerning such output would involve causal stipulations that would differ between the two distinct “machines”. Causal explanations about how outcomes are brought about inevitably bring in questions of intentionality. Such intentions have no place in Turing’s test. Indeed, he didn’t want them there.

        1. Philip Pilkington

          Agree on the Searle stuff. Yes, the brain is not a computing machine despite what others on here — who apparently have the “expertise” I lack — insist.

          Regarding Turing’s personality, it does show through in his prose. And anyway, I’m trying to tell a story. People make it interesting.

          Regarding it being a behavioral experiment. No, I disagree. Turing says clearly that his test will establish whether the machine is “thinking” or not. Skinner, who I strongly dislike for other reasons, didn’t make any claims to say anything about “thinking”. Turing was making grandiose philosophical claims in the paper and these proved very destructive in what followed.

          1. larry

            I was not suggesting it was a behavioral experiment, just that it was a test that a psychological behaviorist of a certain stripe would devise – one where there was no attempt to assess (internal) psychological states, hence behaviorist. In the test, such states were considered irrelevant. All that mattered was the external output, exactly as such a behaviorist would prescribe. So-called radical behaviorism was an intellectual force to be reckoned with at the time. And yes, he did contend that a result of ‘no difference’ would indicate thinking, whatever that might mean. There are serious difficulties of definition here, which the test neatly circumvented, to its detriment.

            In this latter respect, think about Asimov’s “invention” of the positronic brain. Asimov nicely got around having to describe how such a brain worked by having it discovered accidentally and admitting that no one knew quite how it worked. All that one knew for certain was that such a brain could be programmed with the three laws and tested to assess whether the programming took, destroying it if it didn’t.

            Having a brain whose internal workings could not be characterized but which could be programmed in a certain way doesn’t seem much different from some complex derivatives, a few of whose authors admitted that they didn’t know how what they had created “worked”. (Of course, these weren’t “discovered accidentally”.)

            Radical, or Skinnerian, behaviorism, while differing in certain respects, is eerily similar in others to the work of certain neoclassical economists. One arresting difference is in their attitude to data. For a radical behaviorist, data was sacrosanct. But it was a very restricted kind of data – only that which you could directly perceive and measure. No hypothesized inner states here. And the experiments, carried out primarily on rats and pigeons, were considered extrapolatable without alteration to humans.

            This restrictive evidential bias led to absurd comments about the character of human thinking – for instance, that we have no ideas, only experiences (Skinner said this of himself and considered it to be true of everyone). That such comments contravened experience was considered to be of little moment. The rest of us were deeply confused, by means of our untested experiences.

            The neoclassical case is more complicated and you have discussed it elsewhere. But consider Milton Friedman and his attitude to data. For him and others, the theory was more important than the data, so important that if the data and the theory conflicted, he considered there to be something wrong with the data.

            For the radical behaviorist, there was nothing wrong with the data, only what it appeared to mean (how the data was to be interpreted). If there was a problem, it was with the theory. Their clean, operational approach was to be preferred over the methods of others, for instance those of the Gestaltists, with their reliance on unobservable internal states.

            The neoclassical rationality hypothesis appears to conflict with our everyday experience. So much the worse for our experience. It just shows that the ordinary person is confused. The rationality hypothesis was constructed to deal with recalcitrant data. Where the radical behaviorists removed impediments unsupported by their kind of evidential concerns, the neoclassical theorists added elements, whether supported by the data or not.

            Both approaches construe reality in their own image, with evidence twisted to fit theoretical concerns. It should be the other way around.

          2. holygrail

            Philip, please realize that you keep making assertions that you have no proof for, and you are not giving any credible argument to defend them.

            Nobody comments on the rest of your piece because you start the article with wrong assumptions. From a bad assumption you can prove anything so why bother with the rest of it?

            > “The brain is not a machine”
            > “A computer will never be able to get a joke”

            You haven’t offered any backup for these, even if you think you have. Neurologists and computer scientists all over the world are researching this; it’s a very interesting problem and not solved yet.

            Bear in mind it’s perfectly possible that the brain *is* a machine but we are just unable to understand and reverse-engineer it. Even in that case we could probably get some partial success, and AI and medicine could leverage it (I believe we’re already doing that in some systems). If everyone believed you, should that sort of research cease?

        2. ok

          If it involves non-computable functions, if the brain can’t be described by means of recursive functions, if brains may not be Turing machines (discovering Goedel’s incompleteness theorems in no way means Goedel’s brain wasn’t a Turing machine)… it’s just lots of ifs.

          I always have a hard time understanding these arguments, because we know there is a machine capable of making jokes. It’s called the human brain. So unless you want to bring in God and the eternal soul, there unquestionably is a way to build a thinking machine just by cleverly arranging atoms and molecules.

          Given how the human brain came to be (unless you believe evolution is wrong) and how big the variations are in its capabilities, not only among humans but among all animals (unless you believe in a soul), I can’t see why it should be fundamentally impossible to create a different machine with similar or better capabilities.

          By the way, how do you determine something is capable of intent? By looking at it and counting how many arms, legs and heads it has?

          1. larry

            The question isn’t whether the brain is a machine of some sort, but whether the computers with which we are familiar and our brains are the same or a very similar kind of machine. It is this which is in dispute. Few dispute that our brains carry out recursive functions – for example, they can carry out arithmetic operations, which are recursive. The issue is whether this comprises all the sorts of “computing” the brain can engage in.

          2. ok

            1) It’s far more than just a dispute. It was stated as fact that computers cannot perform specific brain-like tasks, no question about it. Which I think is the main sticking point: this notion that there is a simple, final proof that computers can’t replicate the brain, and some people just don’t get it.

            2) If you accept the brain as a sort of real machine but still dispute that computers can replicate its abilities, then you are disputing that computers can simulate reality. But apart from scaling, even current computers seem to have no fundamental problem simulating physical reality. Do you think it an obvious fact that running a simulation of a replica of the brain, down to the quantum mechanics, still wouldn’t yield the right result?

          3. holygrail

            If somebody (perhaps Philip Pilkington? :) discovers that the brain is theoretically beyond Turing-computability (i.e. the brain is a hypercomputer), it would be the first such case known to man. It would be a startling discovery, probably revolutionizing science and changing the world as we know it. It might even unlock the secrets of building a hypercomputer!

            That’s what a lot of people’s assumptions in this thread, as well as the assumptions in the article, imply. If there were a solid reason for that impossibility, it would be much more incredible than if the brain is just basic chemistry that we can simulate but don’t yet know how to.

    2. Philip Pilkington

      Here (all the other critics would do well considering this too):

      http://www.nytimes.com/2013/01/06/opinion/sunday/can-computers-be-funny.html

      ==================

      ” As it turns out, this is one of the most challenging tasks in computer science. Like much of language, humor is loaded with abstraction and ambiguity. To understand it, computers need to contend with linguistic sleights like irony, sarcasm, metaphor, idiom and allegory — things that don’t readily translate into ones and zeros.

      On top of that, says Lawrence J. Mazlack of the University of Cincinnati, a seminal figure in the field of computational linguistics, humor is context-dependent: what’s funny in one situation may not be funny in another. As an example, he cites Henny Youngman’s signature line, “Take my wife — please,” which came about by accident when an usher seating Youngman’s wife mistook the comedian’s request for a gag.

      The cognitive processes that cause people to snicker at this sort of one-liner are only partly understood, which makes it all the more difficult for computers to mimic them. Unlike, say, chess, which is grounded in a fixed set of rules, there are no hard-and-fast formulas for comedy.

      To get around that cognitive complexity, computational humor researchers have by and large taken a more concrete approach: focusing on simple linguistic relationships, like double meanings, rather than on trying to model the high-level mental mechanics that underlie humor.”

      ======================

      And if you examine the article you’ll see that they don’t look at “double meanings” at all, but instead at puns, which are entirely different. Puns do not rely on double meanings, but on phonetic similarities. It is not difficult at all for computers to recognise these, as they are not context-dependent.

      1. ok

        Maybe you should read the article yourself, because it in fact proves your point invalid. Not only do they not seem to have fundamental problems with contexts and simultaneous different meanings, they already have programs with rudimentary abilities to get a joke.

        1. Philip Pilkington

          Nope. Actually read my comment and you’ll see that they don’t deal with double meanings. They deal with puns, which are based on phonetic similarities. And the machine that can recognise jokes is just like the one I imagined in the piece: programmed to recognise jokes based on certain common factors.

          1. ok

            A pun is about simultaneous different meanings. Thus your article demonstrates that computers can work with such things and are not locked to one category per thing at a time.

          2. Philip Pilkington

            No. Contrary to what many think, puns do not rely on a dual meaning in language. It’s an important issue, and I’m not going to broach it in any detail here. Puns focus on phonemes. Phonemes do not contain meaning. They are just sounds.

            Give me a dictionary and I can quite easily make puns in a language I cannot speak. I just focus on words that have similar phonetic constructions and replace one with another. That’s why it works with a computer.
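
            A minimal Python sketch of the procedure I’m describing – the sound-alike table is hypothetical, and note that at no point does meaning enter into it:

              # Swap words for similar-sounding alternatives using nothing but
              # a lookup table. No understanding of meaning is involved.
              SOUND_ALIKES = {  # hypothetical near-homophone pairs
                  "mourning": "morning",
                  "bored": "board",
                  "whale": "wail",
              }

              def punnify(sentence: str) -> str:
                  return " ".join(SOUND_ALIKES.get(w.lower(), w)
                                  for w in sentence.split())

              print(punnify("Good mourning everyone"))  # -> Good morning everyone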

          3. ok

            Except phonemes are only part of a pun. If you just went on mindlessly swapping words for similar-sounding alternatives, in most cases the result would be regarded as gibberish, a mistake or a typo, and in the cases where you happened to swap a word for its synonym nobody would notice at all. And there is the harder task of spotting a pun, which of course is impossible without taking context into account. In short, it’s not as simple as you think.

            But that’s not so important because, strangely enough, your argument wasn’t that computers can’t simulate the ability to understand the meaning of something; rather, you claimed that they can’t simultaneously work with two different meanings of one thing.

            Here you see that computers can in fact work with multifaceted things, so if one day a computer is able to understand meaning – which is implicitly allowed in your argument – then it could certainly understand two meanings at a time and notice conflicts between them.

  8. ted braun

    Fundamentalists in religion and in science are not fun to be around because of their lack of humor. John Gray’s upcoming book The Silence of Animals should be insightful. We have become more like the machine, rather than making the machine more human. Jacques Ellul pointed this out years ago. But even by typing this out on a machine I am taking it more seriously than I should, because now my nuanced communication is concretized in a less nuanced manner than it was ever intended.
    Our faith in technology should be wrapped in sarcasm, even regarding the routine functions of economics. Nassim Taleb does a good job of sarcastically and humanly debunking technological fundamentalists in economics.
    I just rambled, so you have to decipher whether to take it seriously or whether I was being sarcastic. The biggest danger is that we take our nuanced thinking and imagine that technology can do it for us. The more our economy depends on technology, the less shocked we can be to see it dehumanize us.
    Well, I had to write this quickly before I get busy with my day. I have to go launch a few drones and then do my online trading tonight. Crazy grandparents invested in community for their retirement and told the same old jokes down at the bar. The machine has advanced us beyond that naive generation. I guess BlackBerry’s launch was not that good. I was hoping that I could buy that phone and stock tonight. I do my research at lunch, between drone launchings, to find out what I should be investing in.

  9. MacCruiskeen

    “In human communication there is no signal or noise.”

    No, that makes no sense. If it were true, you wouldn’t be able to use a telephone or even speak. You wouldn’t be able to pick out the sound of a human voice from the background. Put down the Turing and pick up the Shannon.
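
    For reference, the textbook Shannon–Hartley theorem quantifies exactly this – the capacity of a communication channel is fixed by bandwidth and the signal-to-noise ratio:

    $$C = B \log_2\!\left(1 + \frac{S}{N}\right)$$

    where $C$ is capacity in bits per second, $B$ is bandwidth in Hz, and $S/N$ is the ratio of signal power to noise power. If human communication really had “no signal or noise”, this quantity would have no referent – yet it is precisely what makes the telephone work.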

    1. Garrett Pace

      I agree, I found that an overbold statement too. When women try to communicate with men and vice versa, the signal/noise business becomes crucial and poignant. It’s not very easy to know what to pay attention to, or what it is the person is trying to get across.

      1. MacCruiskeen

        That’s not what I was referring to. When you speak, that speech is the signal. Other sounds are the background noise that your brain has to filter out. This is something that all human brains do normally and extremely efficiently – so efficiently that PP basically glossed right over it. What he’s really talking about is symbolic interpretation, which is a different problem and not related to the signal/noise graph he presents. For instance, for this message to go from my PC to yours, they and all the machines in between have to interpret several layers of symbolic protocol. People must also do this when they process language. There’s a lot of work going on before you even get to the “he said/she said” type of confusions.
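
        A toy Python sketch of what “several layers of symbolic protocol” means – the framing scheme here is invented for illustration; real stacks (TCP/IP and friends) are far richer:

          # Each layer wraps the payload on the way out and strips its
          # framing on the way in. The header format is made up.
          def wrap(payload: str, layer: str) -> str:
              return f"<{layer}>{payload}</{layer}>"

          def unwrap(frame: str, layer: str) -> str:
              head, tail = f"<{layer}>", f"</{layer}>"
              assert frame.startswith(head) and frame.endswith(tail), "bad frame"
              return frame[len(head):-len(tail)]

          LAYERS = ["app", "transport", "link"]

          msg = "hello"
          for layer in LAYERS:              # sender wraps, innermost first
              msg = wrap(msg, layer)
          print(msg)  # <link><transport><app>hello</app></transport></link>
          for layer in reversed(LAYERS):    # receiver unwraps, outermost first
              msg = unwrap(msg, layer)
          print(msg)  # hello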

        1. Garrett Pace

          I wasn’t disagreeing with anything you were saying. I was expanding on it. If I were only reiterating your statement then I had better not have posted at all.

  10. Dagnarus

    On the computer science side of this article: the Turing test is a means of evaluating whether a computer has achieved some kind of intelligence, as opposed to just being a glorified calculator. It touches on the fact that it really is difficult to define what intelligence is. Having some understanding of what it would mean for a computer to be intelligent is necessary in order to understand how to create an actual intelligence, or whether that is even possible. This article, for example, attempted to prove that it would be impossible to create a true artificial intelligence on the ground that no computer could ever truly pass the Turing test. Also, to give a bit more context on who Alan Turing was: in 1952 Turing was convicted of homosexual acts and sentenced to chemical castration, and he committed suicide in 1954. It may well be that the question “what does it mean to be human?” had a special significance for him, as someone who was treated as somehow subhuman.

    Now as to the economics part of the article: if what you’re saying is correct, the neoclassicals somehow took the idea that perhaps a machine could be created which simulates all the infinite complexities of a human being, and turned it into the claim that human beings are really just simple deterministic machines? Personally, my main problem with the position you laid out is that it assumes there is one and only one universally true answer to the value of things, rather than an understanding that there could be multiple valid interpretations of value.

    1. Philip Pilkington

      (1) On Turing: yes, Turing led a difficult life. Although it may not be immediately obvious, I was actually portraying him as a lovable rogue in the piece. Although in trying to get people’s goat he did launch an ugly attempt by the human sciences to co-opt cybernetics.

      (2) On “multiple valid interpretations of value”: yes, that is exactly what I’m saying. By treating people like computers the neoclassicals actually think that there is one correct input value. That is just wrong.

  11. cobaltin

    Nice rundown, thank you. Just a small point: I’d note that the class of humans also includes many people who fail to understand sarcasm and puns. This is especially true of autistic men and women (and of those with Asperger’s syndrome, which may be more common among scientists).

    If we accept that people with Asperger’s are human and that they can think, then the Turing test, as you set it up, wouldn’t be able to “always” identify which interlocutor is human and which is a computer. In your setup, it’s simply a test for sorting entities that understand puns and sarcasm from entities that don’t. (You might be right to say that no computer understands sarcasm, but wrong to say that all humans do, so a sarcasm-identifying test wouldn’t be foolproof for sorting humans from computers.)
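
    Put formally – my notation, not the article’s – let $P(x)$ stand for “$x$ understands puns and sarcasm”. The test is a valid human-detector only if

    $$\forall h \in \mathrm{Humans}:\; P(h) \quad \text{and} \quad \forall c \in \mathrm{Computers}:\; \neg P(c),$$

    and a single human counterexample to the first clause means the test sorts $P$-havers from $P$-lackers, not humans from machines.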

    1. Philip Pilkington

      You caught me! I based the idea on the observation that certain people cannot recognise certain jokes. I think that people with Asperger’s can to some extent. Autism? Nope, you’re quite right about that one. Also, in schizophrenia and other serious psychoses jokes cannot be recognised.

      So, yes, my Turing test only works if the person behind the curtain does not suffer from these disorders. But given that these are considered “disorders”, I think I’m in the clear. For example, if we require a robot to mimic a human walking up stairs as proof of its competence, we can hardly wheel out a guy in a wheelchair as a counterargument. “Well, the robot performs as well as the guy in the wheelchair, therefore…” I don’t think anyone would accept this argument, so I think mine is safe.

      1. ok

        But why would you want to single out people who don’t understand jokes? Do you think they are not human? Or that they don’t understand the meaning of words?

        If only we could make a computer that has the same abilities as a human, even an autistic one, but just can’t understand jokes. Same thing with making a computer able to navigate an environment with the same ease as the average guy in a wheelchair. It seems the problem is that you don’t understand what the hard parts are and where the current limits lie.

        1. Philip Pilkington

          It is absolutely not my intention to say that people with autism are not human. That is grossly offensive.

          However, they do suffer from a language/communication disorder, and I think that judging a computer’s ability or inability to communicate needs to be based on “normal” human communication and not on people who have a communication disorder.

          I don’t think that this should be controversial. I’m sure people with communication disorders can do a lot of things well. Just as computers can do lots of things well. (I am NOT equating the two, by the way…) But we need to establish some set of criteria for judging this, and the standard should be people not suffering from communication disorders.

          1. ok

            And it wasn’t my intention to imply that you think autistic people are worth less than “proper” humans. Quite the contrary: I was pointing out how arbitrary your border is by showing that, in your search for the true human mind, you excluded “entities” you would otherwise consider to be capable human beings.

            The catch is that the mental human condition isn’t some binary trait that things either have or don’t have, with nothing in between. Rather it’s continuous… well, not even a line, more like a multidimensional space. Frankly, autistic people themselves aren’t a homogeneous mass when it comes to jokes. They have problems with language jokes, but they can often understand visual jokes, they can improve by learning, and some can understand jokes even if it takes them a very long time (which is of course a problem in normal conversation, but they are still capable of figuring it out eventually). And others can use various communication tricks that make you think they understood the joke even when they didn’t.

            At its base the Turing test is just a variation of the duck test. If it doesn’t understand jokes, but you can still have an interesting and thoughtful conversation with it about history, maths or some other topic, if it seems to have its own desires and opinions, and perhaps it can design a working fusion reactor over a weekend, who cares what exactly it is built from? And a more interesting question: would you turn such a thing off even if it was telling you it doesn’t want to be turned off? It can’t take jokes, so it’s just a computer, right?

          2. JTFaraday

            “…if it seems to have own desires and opinions, and perhaps it can design working fusion reactor over weekend…”

            LOL. You just never know, do you.

          3. nobody

            It is likely that Turing was autistic, and, if not, at the least his ways of thinking, behaving, and being (mis)perceived seem to intersect in a number of ways with how many autistics commonly think, behave, and are (mis)perceived.

            Here is a paper that explicitly discusses the Turing test in relation to autism. One of the co-authors is herself autistic:

            http://www.gmu.edu/centers/publicchoice/faculty%20pages/Tyler/turingfinal.pdf

            Also, if you wish to avoid being offensive towards autistics, please understand that many do not like being described as “suffering from” being who they are, in much the same way that people who are attracted to others of the same sex are not likely to like being described as “suffering from” their sexual orientation. And “communication differences” would be better than words like “disorder.”

          4. ok

            BTW, the Turing test has nothing to do with autism or the neoclassical reductionist approach. The only thing it does is remove physical appearance from the equation, so that you can’t dismiss human abilities purely because what you see before you is a big blinking box (which fact also doesn’t mean there can’t be a hidden human inside the box). But everything else – complexity, emotions, irrationality – is still a (crucial) part of the test.

  12. F. Beard

    Hey, Philip. Do you believe in God? How else then do you account for mankind’s uniqueness?

  13. Garrett Pace

    When will they take over the production of art? If art becomes a constricted matter of process and form, limited in irony and layered meanings, they probably could, no problem.

    That was a crucial part of the nightmares in Huxley’s Brave New World and Orwell’s 1984 – the only purpose of art was to prop up the powerful or distract the populace. A machine could easily generate such art. Here’s 1984:

    There was a whole chain of separate departments [in the Ministry of Truth] dealing with proletarian literature, music, drama and entertainment generally. Here were produced rubbishy newspapers containing almost nothing except sport, crime, and astrology, sensational five-cent novelettes, films oozing with sex, and sentimental songs that were composed entirely by mechanical means on a special kind of kaleidoscope known as a versificator.

    And Julia describing the books they produce in “Pornosec”:

    Oh ghastly rubbish. They’re boring really. They only have
    six plots, but they swap them round a bit. Of course I was only on the kaleidoscopes…

    Could people tolerate such things? I think we already do, whether they are machine creations or not.

    1. F. Beard

      I lost any respect I might have had for a lot of modern art when I wrote a simple program (for a Commodore 64, as I recall) using random number generators and graphics functions to create quite passable modern art – at least every ten runs or so.
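
      Something in the same spirit as a minimal modern sketch – Python rather than the long-lost C64 BASIC, with every choice of shape and palette arbitrary – that writes random rectangles out as an SVG file:

        # Random rectangles in random colors: generative "modern art".
        import random

        def make_art(path: str = "art.svg", n: int = 40) -> None:
            shapes = []
            for _ in range(n):
                x, y = random.randint(0, 400), random.randint(0, 400)
                w, h = random.randint(10, 200), random.randint(10, 200)
                color = "#{:06x}".format(random.randint(0, 0xFFFFFF))
                shapes.append(f'<rect x="{x}" y="{y}" width="{w}" height="{h}" '
                              f'fill="{color}" fill-opacity="0.6"/>')
            svg = ('<svg xmlns="http://www.w3.org/2000/svg" '
                   'width="500" height="500">' + "".join(shapes) + "</svg>")
            with open(path, "w") as f:
                f.write(svg)

        make_art()  # every tenth run or so: quite passable modern art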

      1. citizendave

        Speaking of the Commodore 64 (I still have mine), my first PC arrived with 256 kilobytes of RAM. I added 768 kilobytes’ worth of chips to bring it up to one megabyte. Bill Gates is famously (if perhaps apocryphally) supposed to have said that 640 KB ought to be enough for anybody. Look how wrong you can be. I think the argument about the Turing test stands up well enough, but only with regard to the state of the art. I believe we humans will eventually build machine intelligence that will emulate human intelligence better than several members of my high school class, for example.

        The problem could be reframed in terms of freedom versus determinism. Can the human mind be truly free to think original thoughts? After reading Leibniz long ago, I was left with some doubt. Try as I might, I could not find the wellspring of my own thoughts. They seem to originate in my own mind, but they just sort of appear out of nowhere, and I try to catch them, like Patti Smith says in Woolgathering. So could a machine think? Could a machine escape from the inevitable sequence of causation? Probably not. But we could imagine it using sophisticated probabilistic weighting techniques to make leaps of faith, or otherwise emulate the things we do when we make a WAG.

        When I started reading Econned, I began to think that the fundamental mistake economists make is treating people as predictable. I think that’s Philip’s main point, that we are not robots, we are not entirely rational, or at least not always rational, and we would do well to scrap any notions that we humans can be relied upon to do the right thing. The Libertarians seem to me to believe that with minimal government and the least possible regulation, society would not devolve into chaos, because everybody would be rational and would do the right thing. I seriously doubt that it would turn out that way. The thin veneer of civilization depends in great measure on order maintained by the police state. Mind you, this is not to say I approve, only that it seems to be a realistic assessment of human nature.

        A better approach to economics would be to ditch the neo-liberal orthodoxy, infuse heavy doses of morality and ethics, and plan for a long future for humanity on a finite Earth. And I believe we need public policy — law and regulation — to keep us headed in that direction.

  14. craazyman

    Well Phil honestly. I read it all, even the part about the date rape. bowhahaha. just kiddng. It sounded consensual as far as I could tell, using my context-dependent-heuristically-variable-mindwave-sentience.

    Serously, this is probly the best post I’ve seen from you so far. It doesn’t wander and wobble too much and you nail the thing pretty cleanly.

    Yer hehd is not rising into the giant white maggot of abstract cerebralism like the dude in FANTASTIC VOYAGE at all. /The giant maggot is the Turing machine.

  15. Rob Lewis

    Gee, I thought it was pretty much settled that humans aren’t classical “rational actors.” But this statement is unfounded and false:

    “…machines do not and cannot possess the ability to recognise jokes or sarcasm, which represent a completely different, context dependent type of language comprehension that only humans possess.”

    The only way you can seriously maintain this is to ascribe supernatural powers to human brains. Do you really want to go there?

    I may not live to see it, but if humans can manage to continue making intellectual progress, it’s only a matter of time before your computer can be as sarcastic as Don Rickles…if it wants to be.

    1. bobh

      Mr. Pilkington’s point seems uncontroversial and obvious, and I don’t understand the strong resistance to it from you and others. Computers don’t (can’t) have opinions and they don’t (can’t) make jokes. Humans can write programs for them that allow them to appear to have opinions or to be making a joke, but it isn’t the same thing and it never will be. Of course, that’s just my opinion, but it is based on a lot of mental operations that go way beyond processing data and making calculations. You can disagree, but if a computer tells me I’m wrong, I will just laugh at it/him.

      1. Rob Lewis

        I submit that this betrays a lack of understanding of the workings of the brain:

        “Computers don’t (can’t) have opinions and they don’t (can’t) make jokes. Humans can write programs for them that allow them to appear to have opinions or to be making a joke, but it isn’t the same thing and it never will be.”

        Evolution wrote programs for humans that allow us to appear to have opinions or to make jokes. At some level, “appearing to have an opinion” and “having an opinion” are indistinguishable.

        My favorite way to think about this is the “neuron replacement” thought experiment. Imagine that scientists could create a tiny chip that mimicked the function of a single neuron. This is certainly conceivable, since at the level of single neurons we understand pretty well how things work.

        Pick a human subject, and replace ONE of his/her brain neurons with this chip. I think we could agree that the person would look, feel, and act pretty much the same. OK, then replace another neuron with the chip. Same story. But why stop there? Replace a billion neurons with chips. Has something fundamentally changed? At some point, does the “human spirit” leak out irreplaceably? Where exactly is that point? How do you know?
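
        For concreteness, here is a hedged Python sketch of the textbook leaky integrate-and-fire model – the usual simplified stand-in for “the function of a single neuron”, not a claim that it captures everything real neurons do:

          # Leaky integrate-and-fire neuron: membrane potential v leaks toward
          # rest, integrates input current, and spike-resets at threshold.
          def simulate_lif(current: float, steps: int = 1000, dt: float = 1e-4,
                           tau: float = 0.02, v_rest: float = -70e-3,
                           v_thresh: float = -54e-3, v_reset: float = -80e-3,
                           resistance: float = 1e7) -> int:
              v, spikes = v_rest, 0
              for _ in range(steps):
                  v += ((-(v - v_rest) + resistance * current) / tau) * dt
                  if v >= v_thresh:
                      spikes += 1
                      v = v_reset
              return spikes

          print(simulate_lif(2e-9))  # spike count for a 2 nA input over 0.1 s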

        1. bobh

          “mimicked” “conceivable” “understand pretty well”
          Your argument has fallen into uncertainty and can’t get up.

  16. Garrett Pace

    An Object

    This thing, that hath a code and not a core,
    Hath set acquaintance where might be affections,
    And nothing now
    Disturbeth his reflections.

    -Ezra Pound, 1912

  17. TomDor

    I think new tools cause us to change our perspective and thought patterns – our neural maps – in order to adapt. Not long ago, we found out that our psychologists had been filling up libraries with theories based on a small subset of humans. Further, they found that what we called normal was, when compared across the globe, not normal at all. Neuroplasticity and adaptation have given us norms of economics which, IMO, are self-destructive to our species. It may not be beyond reason that we humans could adapt to the point where funny exists only in computable formations, our humanity receding to the level our tools (computer ability) allow. Still funny, but only within the confines of what is programmable.
    Oh well, gave it a try – probably took a turd on this stage.

  18. craazyman

    wait a minute. It just occurred to me someone might think the Turing Machine was a real person, putting them on by pretending not to get their joke.

    But I guess if it kept going and going you’d know the machine was lost, because even a jokester can tell when enough is enough.

    Phil I thought your theory broke down there for a minute, but then it rescued itself in my mind.

    1. Philip Pilkington

      Thank you. That was very interesting. I’ve always thought that Chomsky’s approach was one of the best — although it probably has limitations. I’d specifically highlight this with relevance to the debate here, however:

      ===========
      “For example, to get into a more abstract kind of language, there’s substantial evidence by now that such a simple thing as linear order, what precedes what, doesn’t enter into the syntactic and semantic computational systems, they’re just not designed to look for linear order. So you find overwhelmingly that more abstract notions of distance are computed and not linear distance, and you can find some neurophysiological evidence for this, too. Like if artificial languages are invented and taught to people, which use linear order, like you negate a sentence by doing something to the third word. People can solve the puzzle, but apparently the standard language areas of the brain are not activated — other areas are activated, so they’re treating it as a puzzle not as a language problem.”
      ==========

      Of course, I’d say that this isn’t a puzzle in the sense that it has a necessary and single solution. This is more like a creative game — one of interpretation.

      Person A: *Utters ambiguous sentence*

      Person B: “What did that other person mean by that ambiguous sentence?” *Creative neurons fire and produce an interpretation based on personal memories and personality* “Aha! They meant X… well, that fits in nicely with my view of the world.”

      Person A: (To himself) “What on earth is this person going on about, that’s not what I was talking about at all… Oh well, social manners say that I should smile and nod…”

      1. larry

        Personally I prefer Montague’s approach. But Chomsky’s theoretical apparatus renders psycholinguistic experimental design easier. It is also easier to understand. That said, Montague’s approach is deeper and more fundamental.

  19. Jim

    Philip stated: “We cannot rely on supposed “rationality” any more than we can rely on “clear communication”–such considerations are meaningless when applied to human beings. Instead we must have clear legal and institutional rules in place so that people know what they are allowed and what they are not allowed to do.”

    You have raised an extremely important issue – the limits of reason – but you have not yet had the courage to apply it to your own preferred economic, political and psychological stances.

    What if the “rationality” in your own economic/political perspective is as responsible as that of the “rationality” of the neo-classical economists for our present crisis?

    What if there has been an incremental decline (over the past 160 years) in the subjective internalization of norms, requiring, in turn, an increasing number of bureaucratic agencies in both the public and private sectors to enforce stability?

    And what if the enforcement of such “rationality” is only accelerating our present economic/political/cultural crisis?

    1. Ramon

      The problem with this is that you’re assuming the issue is whether one set of rules is better than another, whereas the problem is more one of who gets to apply the rules. In this case, the neoclassicals are writing democratic majorities out of the script, and they’re using scientific gibberish to justify this disenfranchisement. Markets know best, and common folk shouldn’t meddle with them (see this for an example of the mindset). Unfortunately neoclassical propaganda has made this belief mainstream, and anyone who challenges it finds that their ideas are beyond the sphere of acceptable debate. Because TINA. But the economy is a human activity that affects all of us, and as such a democratic society has the right to regulate it. Even if we get it wrong. (The economic meltdown of 2008 was brought about by the dismantling of precisely such rules.)

  20. Cat

    The other similarity between economics and computer science is that everyone and their sibling thinks they understand it because they read a book somewhere.

    As other people have mentioned, there are certain humans whose brains are “wired” in such a way that they can’t understand a joke or tell whether the other person is joking. But the same is true cross-culturally, in that an American might not be able to tell that a German was telling a joke, or even understand the joke.

  21. kevinearick

    ah, sarcasm, and why the robots hate it so…

    back in ‘frisco,’ an artificial intelligence project moving at typical glacier speed behind a high-speed facade. Change the digital building colors and everyone goes o-o-o-o-o-o.

    boss…the cloud, the cloud….

  22. jurisV

    After wading through all these comments I’m beginning to suspect that Philip’s post was really a test designed to out Turing machines. If I had to venture a guess based on the comments, it would be that a great many of us would have trouble “proving” that we are, in fact, human. I feel pretty good about Craazyman, Phil, Ramon and Jim being at least partially human, along with anyone else who actually addressed the main theme of Phil’s post – the misbegotten roots of neoclassical economics.

    There is so much certainty in these comments about the (long-term) possibility of fabricating computing machines that can function like human brains, or at least successfully mimic them. However, I believe we first need to understand thoroughly how the human brain functions! Unfortunately, we don’t yet have a model that we could use to develop a human-brain-like computing machine in the lab.

    From what I’ve read recently, there is no thorough understanding of how our brains actually function. Antonio Damasio especially disabused me of any wisp of certainty I had, but left me with a strong sense of awe at what goes on in that gray mass. A lot of humility is in order.

    1. Brooklin Bridge

      Once we get the hardware and software that functions sufficiently like a human brain to mimic one, we will likely know exactly how the brain works. The artificial one will probably work almost exactly as does the real one.

    2. holygrail

      You are right, we don’t know how it works. Therefore we don’t yet know whether it’s possible to build a brain with machines. But Philip assures us it’s not, and that’s incorrect.

      It’s not worth discussing the article’s economics (the neoclassical part), because he’s comparing it to an incorrect view of computer science. Bertrand Russell showed that if you start from the falsehood 0 = 1, you can prove you are the pope. You wouldn’t want to discuss whether you look like the pope; you’d address the 0 = 1, right? :)
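
      Russell’s quip is just the principle of explosion, and it even machine-checks – a minimal Lean 4 sketch, with `IAmThePope` as an arbitrary stand-in proposition:

        -- Ex falso quodlibet: from the false premise 0 = 1,
        -- any proposition whatsoever follows, papacy included.
        example (IAmThePope : Prop) (h : (0 : Nat) = 1) : IAmThePope :=
          absurd h (by decide)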

  23. Ms G

    “Antonio Damasio especially disabused me of any wisp of certainty I had, but left me with an strong sense of awe at what goes on in that gray mass. A lot of humility is in order.”

    I agree.

  24. Brooklin Bridge

    I find absolutes such as “never” difficult to accept in CS, insofar as computers and software (and engineers) seem to break one “never” after another.

    Also, I was under the impression that AI research into things like humor had taken a slightly different tack of late (the last 10 years), in that it no longer requires the computer to “get it” so long as it can solve the problem, or even change the problem into something sufficiently analogous that it can be converted back afterwards. In other words, look for different ways to define the task as a way to accomplish the task, since ultimately the result may be functionally just as good, or good enough. Fake it till you make it. Cheat if it works. Hell, humans do it all the time.

    One such solution to the Turing challenge, for instance, would be to have the software assume the human was joking or being sarcastic on a random basis, to fake him/her out. On such an occurrence, the computer would attempt to construct a reply that was 1) pertinent to the subject at hand and 2) slightly sarcastic, based on a formula (if using voice, use an edgy tone, for instance). Even if kludgey, it might fake the human out just enough to get him reading humor and sarcasm into the dryness of the other answers. And so on.

    Just as designers found on the net that transactional errors were frequently permissible for non-critical applications, and thus changed some of the assumptions for web (and subsequently even local-process) business applications, so AI is looking in different places to come up with acceptable solutions. These impressions come from conversations with developers, so don’t ask me for links. If I’m wrong, fine. I could get into it further with a considerable amount of effort, but it doesn’t strike me as critical, so I’m perfectly willing to accept “you don’t know what you are talking about”, because when it comes to AI, I don’t.

    In the meantime, I still find it hard to believe software can’t approximate the recognition of humor well enough to get it, say, four times out of ten, which for the purposes of Turing’s challenge might be sufficient. The thing is, one would then probably find this sufficient for a whole bunch of other humor-related tasks where accuracy isn’t critical, and so on.
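
    A toy Python sketch of the “fake it” heuristic described above – every phrase, threshold and topic here is invented purely for illustration:

      # Randomly assume the human was being sarcastic, then build a reply
      # that (1) stays on topic and (2) adds formulaic edge. All made up.
      import random

      EDGY_OPENERS = ["Oh, sure.", "Right.", "Obviously."]

      def reply(user_line: str, topic: str, sarcasm_rate: float = 0.3) -> str:
          if random.random() < sarcasm_rate:  # assume sarcasm at random
              return f"{random.choice(EDGY_OPENERS)} As if {topic} ever works that way."
          return f'Noted: "{user_line}". Interesting point about {topic}.'

      random.seed(42)
      for _ in range(3):
          print(reply("Markets are perfectly rational.", "the market"))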

  25. skk

    you pass the Turing Troll test – by repeating four times that Turing was trolling – cf. “Pre-Internet Troll”, “insufferable troll”, “element of trolling”, “modern day troll”.

  26. Claudius

    How many dogs does Phil own?

    For most humans, imprecision and inconsistency are a function of linguistic properties (Phil’s “man walks into a bar”) related to the content of the statement in relation to the data set it describes: either more than one possibility (ambiguity and uncertainty) or no possibility (inconsistency) is compatible with the available information (a drunk man with a bruised head?).

    ‘Information is perfect when it is precise and certain. Imperfection can be due to imprecision, inconsistency and uncertainty, and is a result of imperfect (fuzzy) data.’ For deciding whether a statement is true or false, ambiguity/uncertainty is commensurate with a lack of information about the world. Imprecision and inconsistency are commensurate with the information itself, whereas uncertainty is a commensurate relationship between information and subjective knowledge about the world.

    a.) Phil has at least two dogs, and I am sure about it.
    b.) Phil has three dogs, but I am not sure about it.

    So, how do I know which is “really” true (closer to the truth), and who is better at telling me – another human, or a computer algorithm working from a specific data set about people called Phil owning dogs?

    There are several modes available for reaching a conclusion (correct or incorrect): the frequency-based probability of either statement being true; the physical (real-world likelihood) possibility; the subjective probability assessment (what has my experience taught me about Phil?); belief and epistemic possibilities (I’m a Japanese national, where a one-dog policy is the norm); and the relationship between the physical properties and the epistemic properties (can people like Phil really own two dogs?).

    Computer models generally deal (and often only need to deal) with the ‘frequency and physical (real-world) possibility’ modes – HFT algos, for instance. Humans most often deal with all modes, mutually and inclusively, considering successively and separately the various aspects of imprecision, inconsistency and uncertainty. All algorithmic systems act “rationally” – that is to say, they work with a complete and perfect (as in design-perfect) set of data. The data sets are not universal (and they’re not intended to be, yet). They are certainly not intended to model the heuristic nature of human experience; they are designed with a specific (limited) purpose.

    In short, when faced with imperfect data, a rational human economic agent first tries to understand the form of imperfection he or she is facing, and then sees which mode(s) is/are most appropriate. The real challenge for humans is recognizing the nature of the imprecision and uncertainty faced in a given problem – the degree of irrationality.

    So, if you want to find out who or what is behind the curtain, ask how many dogs Phil has. If you get a rough answer, it’s a human.
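
    One toy way to make the two dog-reports machine-comparable – a Python sketch handling only the frequency/possibility mode, with every number invented:

      # Each report: a set of candidate dog-counts plus a confidence.
      reports = [
          {"counts": set(range(2, 10)), "confidence": 0.95},  # "at least two, sure" (capped at 9 for finiteness)
          {"counts": {3}, "confidence": 0.60},                # "exactly three, unsure"
      ]

      # Naive combination: counts allowed by every report, with the
      # confidences multiplied together.
      candidates = reports[0]["counts"] & reports[1]["counts"]
      confidence = reports[0]["confidence"] * reports[1]["confidence"]
      print(candidates, round(confidence, 2))  # {3} 0.57

    A human juggles all of the modes above at once; the script handles exactly one of them, which is rather the point.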

  27. Eric

    Another way to win the Turing Test: Ask what the square root of 477,206.3 is.

    The thing behind the curtain that even bothers to answer is the computer.

    If there’s no response, then I would admit that the AI scientists had made some headway in imitating a human, and congratulate them on spending like a hundred million bucks to design a computer that doesn’t work.
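
    For the record, the bait answer itself is a one-liner in any language – a Python sketch, assuming only the standard math module:

      import math
      print(math.sqrt(477206.3))  # about 690.8012, delivered instantly: the computer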

  28. wunsacon

    >> “A man walks into a bar…” That statement can mean two different things. … Meaning here is operating at any number of different levels and while we could input a set of rules into a computer to identify these sentences as a joke, the computer would never be able to “get” the joke in the same way that a human can

    If that’s your Turing test, some of my friends already fail. Are they computers?

    Some of this humor depends on cultural information humans spend a lifetime acquiring. I know because I have to explain a lot of humor to friends who hail from other countries. Some don’t even pick up on the next joke, even if it’s similar to the last one I explained. So “humans” can’t innately do some of the things you fault computers for.

    >> Yes, we could imagine that a computer could be programmed to recognise every joke…

    We’re not going to have to program a computer to recognize every joke. When we talk of machine *learning*, we mean the same kind of learning that humans do.

    @ holygrail:

    >> You are right, we don’t know how it works. Therefore we don’t know if it’s possible or not yet to build a brain with machines.

    Not “yet”. Just like in 1961 we couldn’t “yet” go to the moon. But, we will. And if you’re around by 2030, you will see it.

    Now, I can’t predict whether it’ll be a good thing for us humans. If we create machines in our image (rather than according to our ideals), we are fucked.

  29. Roger Erickson

    Ya think? Pilkington’s essay is a fair restatement of the theory of evolution, where excessive amounts of data are ALWAYS available, and adaptive evolution consists of arduously SELECTING the amazingly little data that actually matters. At any one instant, most data is irrelevant.

    Really interesting how far & wide the comments diverge!

    Anyway, back to the essay. To organize on a larger scale? That takes tuning.

    What is distributed tuning in increasingly distributed systems? Exactly when all “strong” signals are roughly in gridlock [NOT equilibrium, which sends the wrong idea] AND when further system change is guided by the continuous emergence of new permutations of formerly negligible or weak signals.

    Walter Shewhart summed this up pretty well, several decades before Turing’s musings about signals and local decisions. Shewhart’s terse note: “Without context, data is meaningless.”

    The comment above saying that connecting decision-making algorithms to intentionality was “unfair”? Now that was LoL funny! Was that a computer accidentally making an ironic joke? :)

    All decision-making systems are eventually linked to survival as the overriding intention. Hence, all biological sensory systems are leanly tuned to extract only context-relevant data from a much broader spectrum of available data. Ditto for analysis and motor systems. An adaptive entity – including human cultures – is always tuned to context.

    Detecting a signal from noise is only step 1.

    Detecting rare system/context-relevance from masses of meaningless signals is step 2.

    Detecting even more rare emerging signal-pattern-envelopes from growing collections of meaningful signals? That massively parallel pattern recognition is what easily distinguishes massively parallel human central-nervous-systems from even the most advanced electronic CPUs now available.

    Orthodox economics as we know it seems to have lost track of the following hierarchy, and of how species and nation survival is selected from all the useless data.

    Outcomes
    Goals
    Policies
    Strategies
    Tactics
    Tools/Commodities/Data

    Beneath all the selection and tuning tools a system builds up, there is another, deeper system feature that is preserved if any system is to survive. That is an intrinsic ability to change absolutely any part of the system. Only infinite flexibility preserves Adaptive Rate.

    Nowadays, Turing might have been driven to ask whether expressions of national Policy Agility can distinguish blind institutional momentum (lobby “machines”) from an actual Thinking Nation.

    The answer is easy. Getting a distributed electorate to ask the question frequently enough is the hard part. No coordination => no adaptive culture. Just a bunch of individual machines transiently lining their pockets, unable to recognize emerging cultural outcome patterns fast enough to matter.

    And yes, HFT is just complicated navel-gazing. Class war, by definition, diverts analytical capabilities from higher-margin to lower-margin activities – i.e., from systemic return-on-coordination to components blindly hoarding assets locally.
