What is Artificial Intelligence?

By Georgios Petropoulos, a resident fellow at Bruegel with extensive research experience, including visiting positions at the European Central Bank in Frankfurt, the Banque de France in Paris and the research department of Hewlett-Packard in Palo Alto. Originally published at Bruegel

Artificial intelligence (AI) refers to intelligence exhibited by machines. It lies at the intersection of big data, machine learning and robotics. Robotics contributes the necessary design, construction and operational framework. A robot can be defined as “a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer”. Modern robots are often virtual algorithms performing not only physical but also cognitive tasks. Machine learning enables these robots to acquire knowledge and skills, and even improve their own performance. Big data provides the raw material for machine learning, and offers examples on which robots can “practice” in order to learn, exercise, and ultimately perform their assigned tasks more efficiently.

The idea of intelligent machines arose in the early 20th century. From the beginning, the idea of “human-like” intelligence was key. Following Vannevar Bush’s seminal work from 1945, where he proposed “a system which amplifies people’s own knowledge and understanding”, Alan Turing asked the question: “Can a machine think?” In his famous 1950 imitation game, Turing proposed a test of a machine’s ability to exhibit intelligent behaviour equivalent to that of a human. A human evaluator judges a text exchange conversation between a human and a machine that is designed to generate human-like responses. The evaluator would be aware that one of the two partners in the conversation is a machine, and all participants would be separated from one another. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test.

The specific term “artificial intelligence” was first used by John McCarthy in the summer of 1956, when he held the first academic conference on the subject at Dartmouth College. However, the traditional approach to AI was not really about independent machine learning. Instead, the aim was to specify rules of logical reasoning and real-world conditions which machines could be programmed to follow and react to. This approach was time-consuming for programmers, and its effectiveness relied heavily on the clarity of rules and definitions.

For example, applying this rule-and-content approach to machine language translation would require the programmer to proactively equip the machine with all grammatical rules, vocabulary and idioms of the source and target languages. Only then could one feed the machine a sentence to be translated. As words cannot be reduced only to their dictionary definition and there are many exceptions to grammar rules, this approach would be inefficient and ultimately offer poor results, at least if we compare the outcome with a professional, human translator.
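To make that brittleness concrete, here is a minimal, purely illustrative sketch (not from the original article) of such a rule-and-content translator; the word list is a hypothetical placeholder, and translation is done word by word, exactly the naive strategy the paragraph above criticises:

```python
# A toy rule-based "translator" in the spirit of traditional AI: every word
# must be supplied by the programmer in advance. The vocabulary here is a
# hypothetical placeholder.
dictionary = {"the": "le", "cat": "chat", "eats": "mange", "fish": "poisson"}

def translate(sentence):
    """Translate word by word using only the hand-coded dictionary."""
    out = []
    for word in sentence.lower().split():
        if word not in dictionary:
            return None  # the brittleness: any unknown word breaks the system
        out.append(dictionary[word])
    return " ".join(out)

print(translate("The cat eats fish"))   # "le chat mange poisson"
print(translate("The cat eats sushi"))  # None: a single vocabulary gap fails
# Idioms fail too: "raining cats and dogs" would be rendered word by word,
# producing nonsense even with a complete dictionary.
```

Even the “successful” output is stilted, word-for-word prose; covering grammar rules and exceptions by hand quickly becomes unmanageable, which is the article’s point.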

Modern AI has deviated from this approach by adopting the notion of machine learning. This shift follows in principle Turing’s recommendation to teach a machine to perform specific tasks as if it were a child. By building a machine with sufficient computational resources, offering training examples from real world data and by designing specific algorithms and tools that define a learning process, rather than specific data manipulations, machines can improve their own performance through learning by doing, inferring patterns, and hypothesis checking.

Thus it is no longer necessary to programme in advance long and complicated rules for a machine’s specific operations. Instead programmers can equip them with flexible mechanisms that facilitate machines’ adaptation to their task environment. At the core of this learning process are artificial neural networks, inspired by the networks of neurons in the human brain.

The goal of the neural network is to solve problems in the same way that a hypothesised human brain would, albeit without any “conscious” codified awareness of the rules and patterns that have been inferred from the data. Modern neural network projects typically work with a few thousand to a few million neural units and millions of connections, which are still several orders of magnitude less complex than the human brain and closer to the computing power of a worm (see the Intel AI Documentation for further details). While networks with more hidden layers are expected to be more powerful, training deep networks can be rather challenging, owing to the difference in speed at which every hidden layer learns.
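As a loose illustration (not from the original article, and with made-up weights), a single artificial neural unit is just a weighted sum of its inputs passed through an activation function, and a “deep” network stacks layers of such units:

```python
import math

def sigmoid(x):
    """Squash a weighted sum into the (0, 1) range, loosely mimicking a neuron firing."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neural unit: weighted sum of inputs plus a bias,
    passed through an activation function. The weights are the connection
    strengths that a learning algorithm would adjust."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

stimulus = [0.5, -1.2, 3.0]  # three input signals (arbitrary example values)
activation = neuron(stimulus, [0.4, 0.6, -0.1], 0.2)
print(activation)  # a value strictly between 0 and 1

# A "deep" network stacks layers: each hidden layer's outputs become the
# next layer's inputs.
def layer(inputs, weight_rows, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

hidden = layer(stimulus, [[0.1, 0.2, 0.3], [-0.3, 0.5, 0.1]], [0.0, 0.1])  # 2 hidden units
output = layer(hidden, [[0.7, -0.4]], [0.05])                              # 1 output unit
print(output)
```

Training consists of adjusting those weights and biases so the final outputs match desired targets; with many hidden layers, the early layers receive only a weak learning signal, which is the training difficulty mentioned above.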

By categorising the ways this artificial neuron structure can interact with the source data and stimuli, we can identify three different types of machine learning:

  • Supervised learning: the neural network is provided with examples of inputs and corresponding desired outputs. It then “learns” how to accurately map inputs to outputs by adjusting the weights and activation thresholds of its neural connections. This is the most widely used technique. A typical use would be training email servers to choose which emails should automatically go to the spam folder. Another task that can be learnt in this way is finding the most appropriate results for a query typed in a search engine.
  • Unsupervised learning: the neural network is provided with example inputs and then it is left to recognise features, patterns and structure in these inputs without any specific guidance. This type of learning can be used to cluster the input data into classes on the basis of their statistical properties. It is particularly useful for finding things that you do not know the form of, such as as-yet-unrecognised patterns in a large dataset.
  • Reinforcement learning: the neural network interacts with an environment in which it must perform a specific task, and receives feedback on its performance in the form of a reward or a punishment. This type of learning corresponds, for example, to the training of a network to play computer games and achieve high scores.
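The supervised case is the easiest to see in miniature. As a self-contained sketch (not from the original article), a single-neuron “perceptron” can learn a trivial input-output mapping purely from labelled examples by adjusting its connection weights whenever it makes a mistake; spam filtering works on the same principle at vastly larger scale:

```python
# Supervised learning in miniature: the network is given example inputs with
# desired outputs (labels) and nudges its weights after every wrong prediction.
# Here it learns the logical AND function.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for _ in range(20):                  # repeated passes over the training data
    for x, target in examples:
        error = target - predict(x)  # 0 when correct, +/-1 when wrong
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

predictions = [predict(x) for x, _ in examples]
print(predictions)  # the learned mapping: [0, 0, 0, 1]
```

No rule for AND was ever programmed in; the mapping was inferred from the labelled examples alone, which is the shift from rule-based AI described earlier.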

Since artificial neural networks are based on a posited structure and function of the human brain, a natural question to ask is whether machines can outperform human beings. Indeed, there are several examples of games and competitions in which machines can now beat humans. By now, machines have topped the best humans at most games traditionally held up as measures of human intellect, including chess (recall for example the 1997 match between IBM’s Deep Blue and the champion Garry Kasparov), Scrabble, Othello, and Jeopardy!. Even in more complex games, machines seem to be quickly improving their performance through their learning process. In March 2016, the AlphaGo computer program from the AI startup DeepMind (which was bought by Google in 2014) beat Lee Sedol in a five-game match of Go – the oldest board game, invented in China more than 2,500 years ago. This was the first time a computer Go program had beaten a 9-dan professional without handicaps.

Probably the most striking performance of machine learning took place in the ImageNet Large Scale Visual Recognition Challenge, which evaluates algorithms for object detection and image classification at large scale. For any given word, ImageNet contains several hundred images. In the annual ImageNet contest several research groups compete in getting their computers to recognise and label images automatically. Humans on average label an image correctly 95% of the time. The winning AI system in 2010 achieved only 72% accuracy, but performance improved sharply over the following years. In 2015, machines managed to achieve an accuracy of 96%, pushing the error rate below the human average level for the first time.

It is important to understand that many of these machines are programmed to perform specific tasks, narrowing the scope of their operation. So humans are still superior at performing general tasks and at transferring experience acquired in one task to another. Take, for example, the ImageNet challenge. As one of the challenge’s organisers, Olga Russakovsky, pointed out in 2015, “the programs only have to identify images as belonging to one of a thousand categories; humans can recognise a larger number of categories, and also (unlike the programs) can judge the context of an image”.

Multitask learning and general-purpose AI still lag behind human cognitive ability and performance. Indeed, they are the next big challenges for AI research teams. For example, a self-driving car navigating a specific route in a controlled environment faces quite a different task from a car out on the road amid varied and unpredictable traffic and weather conditions.

Nevertheless, the rapid improvement in the performance of machines through learning is something that is easily observable since 2012, when deep learning neuron networks started to be constructed and operated.  Technological advances have increased the rate with which machines improve their function, further accelerating the progress of AI.

So what does this mean? AI could bring substantial social benefits, which will improve many aspects of our lives. For example, smart machines can make healthcare more effective, by providing more accurate and timely diagnoses and treatments. The increased ability of machine scanners to analyse images like X-rays and CT scans can reduce the error margin in a diagnosis. It can also lead to great time efficiencies. Forbes illustrates how efficient AI can be in the fight against breast cancer. Until now, women have depended on monthly home exams and annual mammograms to detect breast cancer. Cyrcadia Health, a cancer therapy startup, has developed a sensor-filled patch that can be worn comfortably under a bra for daily wear. Connecting through the woman’s smartphone or PC, the patch uses machine-learning algorithms to track breast tissue temperatures and analyse this data at the Cyrcadia lab. If it detects a change in pattern, the technology will quickly alert the woman and her healthcare provider to schedule a follow-up with her doctor.

The first generation of AI machines has already arrived as computer algorithms in online translation, search, digital marketplaces and collaborative economy markets. Algorithms are learning how to perform their tasks more efficiently, providing a better and higher quality experience for the online users. Such efficiency gains through smart technology lead to high social benefits as reported by numerous studies (for instance, see Petropoulos, 2017 for the benefits from the collaborative economy, which are made possible by AI and machine learning).

The final destination of AI research is still uncertain. But machines will continue to become ever smarter, performing the tasks assigned to them ever more efficiently. Depending on their design and construction, they can have many applications. However, they will also interact with humans in sometimes challenging ways. Policymakers and researchers alike need to be prepared for the AI revolution.


111 comments

  1. MyLessThanPrimeBeef

    If we don’t know 100% how the brain works, and we set out to create AI based on our partial understanding of the brain, and let the robots learn from experiencing the ‘world,’ what could go wrong?

    1. What world will they experience? We don’t want robots to rebel and get too uppity. If we learn from this exercise what robots should and should not experience (so they won’t be unruly), will that knowledge be used on humans too, so that we humans will only experience a childhood world designed so we grow up to obey (even more efficiently than today)?

    2. Sociopaths and psychopaths do what they do either (a) enjoying the pain they inflict, or (b) not feeling the pain they inflict at all. In the latter case, their world is purely intellectual. What is that future world like, when smart robots learn (referring to the above “by designing specific algorithms and tools that define a learning process”) through algorithms and tools that don’t or can’t incorporate the irrational, emotional, compassionate human heart? And even though we are irrational, emotional animals, we are still learning to balance the rational and irrational aspects of the brain.

    3. We don’t know 100% how the brain works. The ‘algorithms and tools’ – will they lead to Frankenstein robots on other considerations beside #2 above?

    Reply
    1. vlade

      Believe it or not, these questions have been asked for quite some time now. The first recognisable instance would probably be in the works of Karel Capek (who, after all, invented the word “robot”, although we would nowadays use “android” for what he termed a robot), but more famously there are Asimov and his laws of robotics.

      And the answer, however uncomfortable it may be, is that if we really get “Intelligence”, then it will get hugely complicated and quite possibly dangerous. But that is an answer that has never stopped humans from doing it anyway. As Terry Pratchett wrote (I paraphrase), if there was a big red button hidden in a remote cave, with a large “DO NOT PRESS, WILL END THE WORLD” sign on it, there would be a queue outside to press it and see what happens.

      Reply
    2. justanotherprogressive

      I like your reference to sociopaths and psychopaths because in essence, that is exactly what AI is – a way of trying to mimic human thinking based only on some prescribed definition of reason and logic without emotion.

      Reply
      1. Alejandro

        Also not sure how ‘irrational’ equates with ’emotional’, or how ’emotional’ excludes ‘rational’ or even if ‘intelligence’ is confined to the ‘brain’, much less the ‘brain’ in isolation.

        Reply
        1. MyLessThanPrimeBeef

          That’s the danger.

          We don’t know how the brain works 100%.

          We do know some people relate to the world predominantly through the intellect. They walk into a room, noting not whether others are more or less compassionate, loving or lovable, prettier or uglier, sociable or self-occupied, funnier or gloomier, wealthier or poorer, more or less athletic, but only whether they are smarter or less smart. They are almost robot-like. Everything is abstracted…equations or formulas, or numbers.

          Numbers like universal constants, or profits and losses…

          And other aspects of humanity are left in the back somewhere…as if they fear emotions, among other things, will interfere with thinking objectively and rationally.

          But life is messy…life is about balancing the various calls. When we know ourselves better, and can deal with Nature less destructively, maybe we can then try to replicate ourselves in robots.

          Reply
          1. MoiAussie

            We don’t know how the brain works 100%.

            We don’t know how the brain works 5%. Long-held beliefs about neurons have recently been shown to be misguided: New Study Suggests Our Understanding of Brain Cells Is Flawed.

            When we know ourselves better, … maybe we can then try to replicate ourselves in robots.

            AI doesn’t aim to do this, just as planes don’t try to replicate birds, nor submarines fish. Machines do it differently, and will do things lifeforms can’t, and be unable to do things lifeforms can.

            Reply
        2. Mel

          I am enjoying a book called The Master and His Emissary, by Iain McGilchrist. It studies the neurological evidence for brain functions split between the right and left cerebral hemispheres, then goes on to look for possible effects of such a functional split in modern philosophy and art. The proposed split is between formal logical processes and work with abstractions in the left hemisphere, as against gestalt awareness and situational awareness in the right hemisphere. (These functions seem to me to map closely, though maybe not exactly, onto Daniel Kahneman’s slow and fast thinking.)
          One effect of this theory is that it short-circuits questions like: “Should we think of this rationally or emotionally?” With two types of thinking going on concurrently in their own structures it ceases to be an either/or question. We can, and usually do, do both.
          Later in the book, McGilchrist presents his own neuropolitical argument that Western societies are giving too much weight to abstract rational thinking unchecked by reference to situation or gestalt. Milton Friedman’s abuse of black-box theory, for example, where he rightly states that a model doesn’t need to share a structure with the phenomenon it models, then goes on to forget to check the model results against reality. My example. McGilchrist has others.

          Reply
  2. tegnost

    the author makes it sound pretty inevitable…
    However, they will also interact with humans in sometimes challenging ways.
    humans are clearly the problem here, in spite of the fact that AI is only as smart as the worms I just walked by in the road post rainstorm. Also article allows a little fly in the ointment by stating how unlikely it is and how far away we are from self driving cars being able to operate on an actual highway….
    My opinion is that it’s harder than they think it is, but it does reveal how stupid they think all of us non coders are, anyway all they really need is a robot that is as smart as a deplorable, which in their minds should be pretty easy.

    Reply
  3. MoiAussie

    The first generation of AI machines has already arrived as computer algorithms in online translation, search, digital marketplaces and collaborative economy markets.

    These are not by any stretch of the imagination first generation. Deep Blue beat Kasparov over 20 years ago, and had plenty of predecessors. I’ve been working in this field for almost 30 years, and the first major international conference on AI (IJCAI) began almost 50 years ago in 1969.

    The confusion in this article between algorithms, which are ubiquitous in IT applications, and AI is not very helpful. Plenty of the problematically opaque commercial algorithms discussed at NC have little or nothing to do with AI. The tendency to label every AI system a robot is just silly. Robots are embodied.

    Nevertheless, the rapid improvement in the performance of machines through learning is something that is easily observable since 2012, when deep learning neuron networks started to be constructed and operated.

    That rapid improvement through learning has been apparent for much longer, and these networks were popular as far back as the 1980s for supervised learning applications. The recent surge of interest is largely down to cheap fast hardware allowing them to be applied to unsupervised learning. The buzzphrase “deep learning” was coined in 2000. And actually, it’s neural networks, not neuron networks. A Google search for “deep learning neuron networks” turns up this article, and almost nothing else. Surely people will get something out of this, but it’s an outsider’s take on the subject.

    Reply
      1. MoiAussie

        IBM’s Watson system (which won on Jeopardy) apparently uses about 20kW, which is about 1000 times what your brain does. Servers are slowly getting greener, and people seem to be getting dumber, but the gap is large.

        Reply
      2. void_genesis

        I’ve seen an estimate that for modern microprocessors to scale up to the rate of calculations undertaken by a single human brain would consume an amount of electricity equivalent to the output of the Three Gorges Dam hydroelectric plant. The energy efficiency of a human brain is incomparably superior to AI, plus you can run it on a bowl of rice and make more of them without much convincing.

        Reply
        1. MoiAussie

          As the old joke goes, making humans requires only unskilled labour.

          The energy efficiency of a human brain is incomparably superior to AI.

          That is true today (if you omit incomparably). Significant AI usually runs on high-end servers and low-end supercomputers. But if you look at recent trends in supercomputer performance per watt, it has been increasing exponentially, see for example here.

          Overall, AI researchers are more focused on getting things to work than making them energy efficient. But the energy cost of AI is certainly going down, although the trend may be hidden as systems deliver more AI for the same energy rather than the same AI for less, just as today’s PCs & phones deliver more grunt rather than significantly reduced energy use compared to previous models.

          Reply
    1. Tim

      Yeah the recent explosion in AI learning machines is that new NVidia graphics chip. It’s the equivalent of materials and processes driving physical world technology improvements.

      The limitation on AI is not human ability to develop new software it’s the hardware structure and performance.

      That being said, AI is indeed coming and no one should feel comfortable about the fact “they still aren’t as smart as us.” The trajectory says it is only a matter of time before that is not the case.

      Government will need to be involved at some point, in a regulatory and international capacity, to define key boundaries of both AI and the opportunities available to it to affect our physical reality.

      Reply
      1. MoiAussie

        The limitation on AI is not human ability to develop new software

        The bottleneck on AI progress is definitely on the software, not hardware side. NVidia chips are widely used because they deliver compute bang for buck, not because they have some special structure. Any high performance hardware with lots of threads and memory will do.

        If programming general AI was easy, it would be here already. The breakthrough on the software side will probably come when learning is more directly applied to software synthesis. We will then be in the uncomfortable position of using systems whose software was not written by humans, but created by software systems. And I don’t mean in the trivial sense that compilers are used to transform programs into executable code.

        Reply
          1. MoiAussie

            Sure. But they will have a great deal of difficulty understanding it. It won’t have comments, will be bizarrely unlike anything they are familiar with, and may well be millions of lines long.

            Even the code written by the guy in the next cubicle or the one who had the job before you can be challenging to understand unless it’s very well documented.

            Reply
        1. Kurt Sperry

          Effective AI may come at the cost of having the software become a black box that no human can understand the processes or the code inside. I spent a half hour writing a long post on this yesterday but the WP filter binned it before it could even get to the moderation queue, so I’m not eager to waste any more time composing a better reply.

          Reply
    2. Jeremy Grimm

      The notion of an equivalence between AI and algorithms is a little weak. If accepted — we’ve been dealing with AI long long before computers were a gleam in Turing’s eye. I remember introductory programming classes describing how a recipe is an algorithm and relating the process for writing a recipe to the process for writing a program to implement an algorithm.

      The phrase “deep learning” is just a rhetorical flourish for admitting we have no idea how the neural network programs work and no way to find out at present. So Silicon Valley savants can wave their hands and magically pull money from the air.

      Reply
  4. justanotherprogressive

    “So what does this mean? AI could bring substantial social benefits,…..”
    And it could also bring substantial social detriments……remember, AI can only deal with logic; it has no capacity for human emotions, like empathy or compassion, but it is those very human emotions that make society worth living in……
    AI is just a tool, it is not inherently “good” or “bad”, it all depends on how humans choose to use it.

    Reply
    1. MoiAussie

      remember, AI can only deal with logic

      Modern AI has little to do with logic. It is all about crunching numbers. Building AI based on logic was found to be too slow and unreliable. I kid you not.

      Reply
      1. justanotherprogressive

        I’m not thinking of “logic” in the terms of philosophy but in mathematical terms, i.e., Boolean Logic? And yes, that is part of “number crunching”……sooner or later, the program is going to have to make a decision……

        Reply
        1. Mel

          I saw a TV show a few years ago with a Google guy talking about Google Translate (sorry, can’t give a citation.) According to him
          Google Translate doesn’t work by having vocabulary, grammar rules, etc., encoded. It works by correlating gazillions of texts. E.g. English-to-Japanese is translated by looking up huge databases of what kind of Japanese goes with what kind of English, and matching the given text against that. Of course, the process can be described in logical terms, but it doesn’t use logic in the deductive way that I would when I think.

          Reply
          1. oh

            My experience with Google Translate is that it’s woefully bad. Most of the time it transliterates, especially when it comes to Japanese. I stay away from it for other languages too.

            Reply
          2. shining math path

            justanotherprogressive’s point about AI being limited to mathematized logic holds. And recognizing the 19th century origins of AI and ML is helpful for seeing through hype to the actual mechanics involved – which allows one to see who is making money off what resources.

            Machine learning with ‘neural’ networks uses statistical methods, such as regression or fitting a curve to a dataset. Note that the datasets for translation are ‘harvested’ from the unpaid work of human translators.

            Reply
    2. Keenan

      “So what does this mean? AI could bring substantial social benefits,…..”
      And it could also bring substantial social detriments

      I commend for your consideration Eliezer Yudkowski’s lengthy, but engrossing, and easy to read paper : “Artificial Intelligence as a Positive and Negative Factor in Global Risk”

      https://intelligence.org/files/AIPosNegFactor.pdf

      snip: “Any two AI designs might be less similar to one another than you are to a petunia. The term “Artificial Intelligence” refers to a vastly greater space of possibilities than does the term “Homo sapiens.” When we talk about “AIs” we are really talking about minds-in-general, or optimization processes in general. Imagine a map of mind design space. In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds-in-general. The entire map floats in a still vaster space, the space of optimization processes.

      Natural selection creates complex functional machinery without mindfulness; evolution lies inside the space of optimization processes but outside the circle of minds. It is this enormous space of possibilities which outlaws anthropomorphism as legitimate reasoning.”

      Reply
  5. Synoia

    Who’s definition of artificial intelligence?

    I am willing to bet money that my (male) definition of artificial intelligence differs greatly from my wife’s.

    Some of which would be the purpose of shoes, getting clothed, or even the importance of color.

    I also suspect the subject of manners and polite conversation would be fraught with difficulty (aka: white lies).

    Reply
    1. Disturbed Voter

      I investigated AI back in the mid-80s … before the AI Dark Ages (collapse). AI has gone thru multiple dark ages, Google “Dark Ages of AI”. I was an aerospace engineer open to new things, a member of AAAI, learned LISP, went to the IJCAI-85 at UCLA. Discovered that it was a fraud. Since the 1950s, when Alan Turing and others invented the phrase .. this has been the fodder of science fiction and fantasy … both as software and as algorithms for smart robots.

      If you are aware of the saga of TAY from last year … then you would be aware that chat-bots are not intelligent. At a fundamental level, of computer science, the notion that people are machines, computers specifically … is simply wrong. And from that, the derived idea that you can build intelligent programs or living robots … is equally wrong. Quantum computers and Big Data won’t change a thing … computer science isn’t a wishing tree.

      Reply
      1. MoiAussie

        Whether humans are machines is an old philosophical argument not worth pursuing here. But AI, while always over-hyped, is not a fraud. AI beats people at tasks that have always been considered highly cognitive, eg playing Chess and Go. Chat-bots are not AI, they’re just toys.

        The big problem with AI is simply this – AI experts consistently underestimate how difficult it is to make significant progress, hence wildly over-optimistic predictions, hence periodic winters. The difference now is that AI is making money for lots of companies.

        Reply
        1. Carolinian

          Sounds like you should have been the one writing the article.

          And yes as a onetime chess player I’d say games like chess are examples of human beings thinking more like machines rather than vice versa. Although supposedly the great champion players (not me of course) are the ones who can come up with creative strategies not mentioned in their chess books.

          Finally I’d just opine that human intelligence and personality are subjects that seem to be imperfectly understood on their own, much less the question of how to imbue a machine with these qualities. It’s likely we so far aren’t nearly smart enough to create something that is smarter than we are.

          Reply
          1. MoiAussie

            It’s likely we so far aren’t nearly smart enough to create something that is smarter than we are.

            You just nailed it. In fact, we may never be smart enough to fully understand how we are smart, and may have to resort to creating the conditions under which something smarter than us can develop by itself, and then not fully understand why it is smart too.

            Reply
            1. Jeremy Grimm

              “It’s likely we so far aren’t nearly smart enough to create something that is smarter than we are.”

              I don’t see how smart we are limits how smart something might become which we are smart enough and able to design and build. Indeed that is one of the scary things about AI. If we are smart enough to find a way to simulate or even improve upon the way we learn and understand there is no reason why a program might not greatly exceed our abilities at learning and understanding. [Of course learning and understanding are not the same as artificial intelligence as I want to understand the term but they are crucial components in measures of artificial intelligence.]

              Reply
        2. Daryl

          I feel more like AI has expanded to designate a range of narrow problem-solving techniques. The amount of people and money going towards a general AI is still relatively small, but the term AI is so popular that web developers at your local mom and pop web design shop are taking AI programming courses on the internet. This isn’t likely to quicken the arrival of SKYNET, but it does result in a lot of media/public confusion.

          Reply
        3. a different chris

          I am with DV, whose history almost exactly parallels mine – I can’t say I ever learned LISP, but people in my group were using it. I think I was using PL/M to pay the bills! But I was well aware of what they were actually doing, and it wasn’t anything that was going to replace my cat as a beloved companion.

          >AI beats people at tasks that have always been considered highly cognitive,

          Based on the fact that they have a limited rule-set (sure, I can’t understand Go, but it is pretty damn well bounded; you can’t flip over the board and say you won), considering something highly cognitive isn’t really proving it is, is it? Alexander the Great and the Gordian knot isn’t solvable by a machine unless you add the rule that it can cut things that… that what? “Make it unhappy” was what I was going to type, and you are going to tell me that emotions, which drive the freaking human world, aren’t “intelligence”??

          But they are.

          Reply
          1. MoiAussie

            AI, almost by definition, suffers from the “moving goalposts” problem. As soon as AI techniques achieve the ability to do X reasonably well, whether it be recognise speech or faces in images, or play Go, or defeat Jeopardy champions, some people decide that these tasks clearly require no intelligence. So it will always be, and it matters not, since “intelligence” is a fuzzy waste-bucket term anyway.

            Emotions are part of human intelligence; indeed, they seem to be ubiquitous in meatspace creatures. I’ll leave others to consider plants and fungi. Hence the modern notion of EQ. But emotions aren’t all of human intelligence by a long stretch. And there is nothing to stop machines having synthetic emotions – it’s been an active research area for at least 20 years. These won’t be human emotions, but will serve an analogous role in steering machine behaviour. There’s plenty of research into morality for artificial agents. Everything that we consider a hallmark of intelligence can have its artificial analogues – regret, shame, excitement, curiosity, you name it.

            Reply
            1. flora

              Goal posts get moved, imo, because for many people the term ‘artificial intelligence’ itself makes too large a claim with the word ‘intelligence’. A term like ‘artificial idiot savant’* would be closer to the mark in many ways, but that doesn’t have the same glamour. That term doesn’t conjure up dreams the way the term ‘artificial intelligence’ does.

              Programming (including all the most sophisticated and advanced design work) to achieve X outcome for a complex decision/switching problem isn’t trivial. It’s a great achievement. However, the term ‘artificial intelligence’ subtly ascribes ‘intelligence’ to the machine instead of to the scientists.

              *”Savant syndrome is a condition in which a person with a developmental disability demonstrates profound and prodigious capacities or abilities far in excess of what would be considered normal.[1][2][3] People with savant syndrome may have neurodevelopmental disorders, notably autism spectrum disorders, or brain injuries. The most dramatic examples of savant syndrome occur in individuals who score very low on IQ tests, while demonstrating exceptional skills or brilliance in specific areas, such as rapid calculation (hypercalculia), art, memory, or musical ability. ”
              https://en.wikipedia.org/wiki/Savant_syndrome

              Reply
        4. mpalomar

          I’ve been interested in aspects of this and find the ambitions of AI as delineated in this article modest compared to what others envision. What I think the article is describing is computational power. It barely broaches the idea of technological singularity and beyond that the question of consciousness, or what Dave Chalmers calls the hard problem.

          Machines such as Deep Blue have fairly recently been able to overpower human chess players with pure computational capacity, working out every possible line on the chessboard 15 or 20 moves ahead. There is little doubt that if Moore’s law continues to hold, machines will have computational powers that surpass human abilities in applicable areas. What is unclear, all these centuries after Descartes’ cogito, is whether AI, despite these massive computational powers, will ever achieve consciousness.
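          That brute-force lookahead can be caricatured in a few lines. The sketch below uses a toy game (single-heap Nim: take 1 to 3 objects, whoever takes the last one wins) rather than chess, and has nothing to do with Deep Blue’s actual engine; it only illustrates exhaustive game-tree search:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(heap, to_move):
    """Exhaustive minimax value of the position for player 0.

    Single-heap Nim: the player to move takes 1-3 objects; whoever
    takes the last object wins. Returns +1 if player 0 wins with
    best play, -1 if player 1 does.
    """
    if heap == 0:
        # the previous player took the last object and won
        return 1 if to_move == 1 else -1
    results = [value(heap - take, 1 - to_move)
               for take in (1, 2, 3) if take <= heap]
    # player 0 maximizes its own value; player 1 minimizes it
    return max(results) if to_move == 0 else min(results)

# heaps that are multiples of 4 are lost for the player to move
print([value(n, 0) for n in range(1, 9)])  # [1, 1, 1, -1, 1, 1, 1, -1]
```

          Chess differs only in scale: the same search, over a vastly larger tree (cut off at some depth and scored by a heuristic), is what “working out every move 15 or 20 moves ahead” amounts to.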

          John Searle teaches a course on the philosophy of consciousness that engages these questions and has some very opinionated conclusions. I think it is safe to say he doubts AI’s capacity to simulate consciousness. This is partially what the Turing test is about, though Searle, I think, would doubt the Turing test proves anything other than that a machine can fool a human into thinking they are conversing with a human rather than a machine. His Chinese Room thought experiment uses machine translation ability, like Google’s, to debunk the possibility of machine cognition. Machines may be able to perform translation using computational power, but they will never understand, according to Searle.

          Chalmers’ hard problem suggests human experience (qualia: color, taste, emotion, etc.) as the barrier that may never be broached by machines, and further questions whether such states of consciousness are illusions, constructs of human ontological experience.

          There are others, Stuart Hameroff and Roger Penrose, who approach consciousness, as does Searle, as a brain state or a quality of the functioning of the brain. Searle claims prosaically that consciousness is a component of brain function as digestion is to intestines.

          Hameroff and Penrose make more profound claims, linking consciousness to quantum states (orchestrated objective reduction) that resonate with some mathematical, platonic construction underlying the universe and structured into our wetware brains. Hameroff suggests consciousness is located in the architecture of neuronal microtubules, vastly increasing the capacity of the brain beyond the standard neuronal model, i.e. up to 10 to the 16th operations per second, far beyond any computers in existence or on the immediate horizon.

          Reply
  6. Carla

    Hhhmmm… Re: breast cancer. I seem to have read that these days, one of the biggest problems with regard to breast cancer (and also with certain prostate cancers) is the unnecessary and very damaging treatment of cancers that are discovered early but are fundamentally harmless, that the body will get rid of on its own, or that are so slow-growing that something else will kill the “afflicted” person first.

    Reply
    1. MoiAussie

      +100. Was reading about this just the other day (now where was that?). The bottom line is that if you value quality of life, much of the time you will be better off avoiding all treatment, as something else will carry you off before it causes a problem. Or better still, do some alternate-day fasting and let the body’s natural autophagic process kick in and deal with it.

      Reply
      1. Tim

        To back up one step further, avoid mammograms and self-inspect. Mammograms are the enablers of worry, false diagnoses and overaggressive treatment.

        My wife just turned 40 and the doctor wants her to get a mammogram every year. I told my wife about the data, and said: you have the money, you make the rules; no mammograms until 50, or unless you feel something.

        Reply
        1. Yves Smith Post author

          I’ve been preaching against mammograms for years.

          The most reliable exam is a manual exam by someone who has examined tons of boobs. But that doesn’t fit the US notion of scientific, plus we don’t have breast clinics with said frequent breast examiners.

          If she still wants an exam, much better is thermal breast imaging. No radiation, and more important, unlike mammograms, which are great at catching the slow-moving growths that are not dangerous and bad at finding the fast-moving ones that are, thermal breast imaging catches the dangerous growths quite reliably while they are small. Only problem is you have to pay for it yourself, but the cost isn’t horrific ($200, maybe more if you are in a high-rent city).

          Reply
      2. oh

        “The bottom line is that if you value quality of life, much of the time you will be better off avoiding all treatment, as something else will carry you off before it causes a problem.”
        Their (the oncologists’) definition of quality of life is of course made to increase their bottom line!

        Reply
    2. Jeremy Grimm

      As I recall from some readings in the middle 1990s — mammograms detect microcalcifications of breast tissue associated with some forms of breast cancer. A large investment of resources by the Medical Industrial Complex has been made in purchasing the equipment and personnel to handle mammograms. From what I have read — very few — perhaps no breast cancers are detected by mammogram which might not have been detected by a careful tactile breast examination by a patient or physician, and neither method of detection offers an advantage for eliminating false positive results.

      Adding AI to the picture helps shift profits from physicians to mammogram providers, and might be of slight benefit given the production-line approach to reading mammograms, with its attendant boredom and inattention factors and the tendency to off-shore the readings to lower-cost physicians who might regard the process much as Harry Lime regards the “ants” on the ground in the Ferris wheel scene of “The Third Man”.

      Reply
  7. IDontKnow

    http://hyperallergic.com/370843/artists-form-shell-company-to-visit-and-photograph-tax-havens/

    “One of the best photographs in Les Paradis is of Tony Reynard and Christian Pauli looking at each other in one of the high-security vaults of the Singapore Freeport. Reynard is the Chairman of the Singapore Freeport and Pauli is the General Manager of Fine Art Logistics NLC. The Singapore Freeport, which was designed, engineered, and financed by a team of Swiss businessmen, is one of the world’s maximum-security vaults where billions of dollars in art, gold, and cash are stashed away. Located just off the runway of Singapore’s airport, the Freeport is a fiscal no-man’s land where parsimonious individuals and creepy companies can confidentially collect valuables out of reach of the taxman. But this phenomenon, indicating that many are investing their wealth in visual art they rarely see, is spread widely around the world. Freeports have been established everywhere from Luxembourg and Geneva to Beijing, Hong Kong, Delaware, and soon will be in New York.”

    Reply
  8. ennui

    This essay reinforces a rhetorical sleight of hand which is used to sell machine learning projects. Talking about whether an artificial neural network is more or less equivalent to a worm suggests that if you just made bigger, more complicated networks, i.e. invested resources (bigger computers, better algorithms, smarter researchers etc.), you would achieve parity with worms and eventually… humans. It’s how the machine-learning bubble is inflated: “just give us enough money and we can build a NN which can drive a car, diagnose cancer, … Because your brain is a neural network and it can solve all those problems too.”

    This is a rhetorical trick. “Neural networks” really have very little to do with actual brains, worms, lobsters or otherwise. ANNs are a mathematical abstraction for approximating functions, given partial information about a subset of the domain/range. If you focus on “supervised learning,” the set of “inputs” is a subset of the domain of your putative function, and the outputs, the range. The training data gives the behavior of the function over this subset of the domain, and the “magic” of the ANN is a collection of algorithms for extending this function outside of the training inputs. For self-driving cars the inputs are a set of driving situations, and the training data is the output of human drivers in those situations, i.e. what they tell a car to do. The goal is to create a network which extends that training data to *all* driving situations, the function you are approximating being the response of a nominal human driver. The problem, in a nutshell, is that even if you, say, approximated a human in 95% of driving situations, there’s no mathematical guarantee that the last 5% won’t require infinitely more resources, i.e. the rate of convergence of the approximating function (the ANN) isn’t really controllable, except in relatively limited domains.
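    The function-approximation view is easy to demonstrate in miniature. The sketch below (every detail invented for illustration, in pure Python) fits a tiny one-hidden-layer network to y = x² using samples drawn only from [-1, 1]. Inside that interval the approximation is decent, but nothing in the training procedure constrains the network at x = 3, where the true value is 9:

```python
import math, random

random.seed(0)
# training data: samples of f(x) = x^2, drawn ONLY from [-1, 1]
train = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

H = 8  # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def predict(x):
    hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * hidden[j] for j in range(H)) + b2

# plain per-sample gradient descent on squared error
lr = 0.05
for epoch in range(2000):
    for x, y in train:
        hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
        out = sum(w2[j] * hidden[j] for j in range(H)) + b2
        err = out - y
        for j in range(H):
            grad_h = err * w2[j] * (1 - hidden[j] ** 2)
            w2[j] -= lr * err * hidden[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

# inside the training interval: close to the true value of 0.25
print(abs(predict(0.5) - 0.25))  # small
# outside it: the true value is 9, but the network never saw that region
print(predict(3.0))              # nowhere near 9
```

    The same gap, scaled up, is the “last 5%” problem: behaviour outside the training inputs is simply not pinned down by anything in the training procedure.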

    Obviously there have been advances in applying ANNs to real-world situations. But the theoretical advances have been firmly incremental since the 1960s. A true self-driving car (to pick an example) requires theoretical advances for AI which don’t exist yet and there is no honest reason to think throwing dollars at the problem will achieve these advances. It’s basically a blind bet, but in an economy which inflates bubbles, the “story” of AI is enough to draw in those dollars.

    Reply
    1. Disturbed Voter

      Exactly … machine learning and machine translation aren’t magic, and don’t work very well. I can make phrases in Google Translate that go wildly off, because I am thinking outside their training set of sentences. Only living systems learn; only living systems are intelligent (sometimes). We don’t understand how living systems do this; we don’t really even understand the mechanisms of memory.

      Of course it isn’t just that they want you to throw money at the problem, they want you to throw taxpayer money at the problem … because, like fracking, they will lose money on every barrel pumped out of the ground, but make it up on creative financing at the expense of the taxpayers. White collar crime is universal in modern society.

      Reply
      1. subgenius

        …Oh you are on some dodgy turf there…

        There are no functional definitions / descriptions of life, intelligence, thought.

        Something to ponder….(also technically undefined…)

        Reply
        1. nothing but the truth

          as far as definitions go, maybe. although properties of life can be defined which are not found typically in inert matter.

          but just because you don’t know how to define life does not mean you cannot tell whether something is alive, or, as some insist, that there is life at all.

          Reply
    2. Jeremy Grimm

      I am NOT a fan of neural nets — ANNs — in the common coin of acronymistry. But I am also not convinced by the argument that machine learning must have something to do with actual brains, worms, lobsters or otherwise [?] — or else it must be doomed to flaw and failure. Two mechanisms may differ radically in design and implementation but yield similar products. However, I agree with your contempt for the rhetorical trick suggesting today’s AI as a matter of degrees or “evolution” or a step on the road of progress — the idea we just need more powerful processors and more memory. We do not understand intelligence and we have no idea how to implement a program demonstrating “intelligence” in any ordinary understanding of that word.

      I do not believe that ANNs have much to do with “actual learning” — however that may be defined. The ANNs — if they “learn” — in any sense of the meaning of that word — learn in ways humans may never be able to truly “understand”.

      Reply
  9. jsn

    I love this autological neologism: “usedtoclustertheinputdataintoclassesonthebasisoftheirstatisticalproperties”!

    Reply
  10. Kris Alman

    The intrusion of AI into the most intimate details of our lives is truly concerning to me. On one hand, the confluence of big data toward predicting cancer and reading body imaging studies is doing what doctors strive for. Pattern recognition is key to honed diagnostic skills.

    But false positives create an obsessive-compulsive, neurotic populace (Really! a sensor-filled patch inserted comfortably under a bra to collect breast tissue temperatures!), prone to hacking of their personal information (and no HIPAA protections when this data is voluntarily uploaded to companies that are not covered entities). See Every Step You Fake

    Last year, IBM Watson acquired Truven Health Analytics. This was after they had bought Merge Healthcare for $1 billion to gain medical-imaging data and technology, and big data analytics provider Explorys along with population health technology firm Phytel.

    Merged with public records, Watson could use AI to individually identify people and track their health care. This generative data will have been repurposed from data that previously had HIPAA Privacy and Security rules governing it.

    What then? Certainly, the NIH and other researchers will want to access this big data – especially if one’s medical records can be linked to identity. That would be a good thing.

    But could spy agencies use the PATRIOT Act to access this data?

    And surely, IBM Watson will want to sell data to PhRMA for targeted advertising.

    I just got a request from Kaiser Permanente to donate my DNA to their Research Bank.

    As a former Kaiser internist, I asked them to send me the survey questions. They have not done so. I briefly had access to the informed consent, which was available at this link: https://researchbank.kaiserpermanente.org/?kp_shortcut_referrer=kp.org%2Fresearchbank

    After I asked more specific questions, they turned off my access to the consent form!

    This lack of transparency is UNETHICAL.

    They state in the consent: The KP Research Bank uses a special number on the samples and information. Only KP Research Bank staff knows how to link the special number to your name or medical record number. The KP Research Bank will collect information from your health records at Kaiser Permanente. The KP Research Bank will link your health information (past and future) from your health record. The information used may include diagnoses, test results, procedures, images (such as X-rays) or medicine.

    I might add that KAISER PERMANENTE has a TREATMENT COST CALCULATOR. The End User License Agreement TERMS OF USE states: Welcome to Treatment Cost Calculator provided by Truven Health Analytics.

    Reply
    1. Craig H.

      Your specific questions to Kaiser might have tripped their mod bot into classifying you as a troublemaker.

      Reply
  11. Ed Seedhouse

    When computers get to the stage where they can hold a reasonable conversation with humans, humans will start perceiving them as “intelligent” and “conscious”. These words in quotes because they are pretty slippery and don’t have any decent definitions.

    But if the computer can have a conversation with a human about pretty well everything it will be seen as “conscious”. Whether or not it is “actually” conscious is a question that is beside the point.

    A difference that makes no difference is no difference.

    After all, I don’t actually know directly that anyone else is conscious because I can’t read minds. I believe that other people are conscious because my brain is “wired” that way. But logically I have no way of actually knowing directly that other people are conscious like I am.

    Before we start arguing about whether machines are “intelligent” we need to have a good definition about what “intelligent” actually is. Personally I think grass is “intelligent” because grass “knows” how to make more grass and I don’t. But that’s just my opinion.

    Reply
    1. Jeremy Grimm

      I agree! The Turing test is not an adequate test for AI and people readily attribute human traits to the things they interact with.

      [I’m not sure I agree that grass is intelligent — at least I can’t agree for the reason you give — that it “knows how to make more grass”. I believe intelligence is both more and less than the ability to reproduce — or the knowledge thereof.]

      Reply
      1. MoiAussie

        I beg to disagree. The Turing test is an adequate test for AI if you consider it exactly as Turing proposed it. If you cannot tell from a 15 minute text-based conversation whether you are interacting with a human or a machine, then you have no grounds for denying intelligence to a machine that can pass the test.

        A problem is that most people make poor interrogators, as the ridiculous Loebner Prize demonstrates. But consider Turing’s original specimen questions and answers:

        Q: Please write me a sonnet on the subject of the Forth Bridge.

        A : Count me out on this one. I never could write poetry.

        Q: Add 34957 to 70764.

        A: (Pause about 30 seconds and then give as answer) 105621.

        Q: Do you play chess?

        A: Yes.

        Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

        A: (After a pause of 15 seconds) R-R8 mate.

        To pass such a test is well beyond the capabilities of current AI.
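            Incidentally, the arithmetic answer in Turing’s specimen dialogue is wrong, and the 30-second pause conspicuously long; this is often read as deliberate, the machine imitating human slowness and fallibility rather than computing perfectly. A quick check:

```python
# Turing's specimen question: add 34957 to 70764.
# The dialogue's answer, given after a long pause, is 105621.
print(34957 + 70764)  # 105721, not the 105621 given in the dialogue
```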

        Reply
        1. m-ga

          This is a good summary.

          One of the more intractable problems is the inability of machines to parse grammar. This, incidentally, also undercuts a lot of the aspirations for Siri, Alexa, Cortana et al.

          Unfortunately, no-one has been able to figure out “rules” for human grammar. There is a big, long-running, and ugly debate about how to even go about doing so. It’s a version of the empiricism versus nativism debate – readers may be familiar with some of the nativist arguments via Noam Chomsky’s non-political writings.

          Whilst it may seem attractive to suppose the human grammar problem will be solved sometime soon, there is little reason to think it will be. Let’s start by accepting that any nativist grammar is unavailable to machines – or, at least, would require a biological hybrid unacceptable to any ethics review panel. This leaves just a (perhaps dogmatic) adherence to machine learning as the solution, for example via connectionist networks and/or Bayesian learning.

          Neither method is currently performing anywhere near the necessary level, and there seems little prospect of this changing. It’s a bigger AI problem than self-driving cars. With self-driving cars, you can at least understand how they could work (GPS data, sensors on the car). But self-driving cars don’t work well, due to environmental variation (e.g. weather, other road users) which can’t be fully modelled. The human grammar problem is many times more difficult.
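          To make concrete what a purely statistical approach to grammar looks like, and where it breaks, here is a minimal bigram model (the corpus and sentences are invented for illustration). It scores a sentence by the adjacent word pairs it has seen, so any construction outside its training bigrams gets probability zero, and no amount of local pair statistics captures the long-distance “dogs … run” agreement:

```python
from collections import defaultdict

corpus = ["the dog runs", "the dogs run", "the man sees the dog"]

# count word bigrams over the tiny corpus
counts = defaultdict(int)
totals = defaultdict(int)
for sent in corpus:
    words = ["<s>"] + sent.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        counts[(a, b)] += 1
        totals[a] += 1

def prob(sentence):
    """Product of conditional bigram probabilities (no smoothing)."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for a, b in zip(words, words[1:]):
        p *= counts[(a, b)] / totals[a] if totals[a] else 0.0
    return p

print(prob("the dog runs"))                    # > 0: every bigram was seen
print(prob("the dogs that the man sees run"))  # 0: grammatical, but unseen
```

          Real systems use smoothing and far larger corpora, but the underlying limitation (local statistics standing in for structure) is exactly what is at issue in the empiricism versus nativism debate mentioned above.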

          There is a further problem, often overlooked, of auditory stream segregation. We still don’t know how humans segregate auditory streams. It’s sometimes referred to as the “cocktail party” problem – at a cocktail party, attendees can attend to one (and only one) of several overlapping conversations.

          Since we have a very limited understanding of how humans manage this feat, it’s not possible to get started on computer code which could emulate it. And the microphones computers use are incredibly crude compared to the human ear. This is why you can’t even attempt to use Siri in a crowded bar. Any computer speech system will require you to enunciate very clearly to the computer (and/or wear a laryngeal microphone) for the foreseeable future. And even when you do so, don’t expect the computer to automatically parse phonemes correctly. With incorrect phonology, you can’t even get started on grammar. And without grammar, you can forget about semantics, inferential pragmatics, and so on.

          Computational intelligence offers incredibly useful tools to humans. Web search is perhaps the paradigmatic recent example (overlook, if you will, the deliberate degradation of results for advertising purposes). But I’d be wary of any technology bets which feature computers doing much more than automating simple, repetitive tasks.

          Turing’s paper is well worth reading in its original form. You’ll find it easily with a web search (it’s called “Computing Machinery and Intelligence”). It’s a very short paper, and is far more nuanced than many realise.

          Reply
  12. millicent

    An intelligent system would need to be able to adapt, use local information to develop (i.e., evolve) and access broader ranges of the environment. This is the true meaning of learning: not applying a rule or “concept” learned to another similar instance but, without programming/instruction, advancing to a new way of conceptualizing or organizing things. This is learning. This is evolution. This is development. For more on this see work by Rod Swenson on physical intelligence and the 4th law of thermodynamics, or DARPA on physical intelligence. As far as I know, we’re nowhere near arriving at this point.

    Reply
    1. Disturbed Voter

      That is the bootstrap problem. It exists in philosophy, mathematics, and computer science. This is a problem for the initial training data set, and for how to escape the routinization of system response, aka entropy, after many “experiences” of the system following initiation. Magic only happens if you can “hand-wave away” the actual problems … but that happens in marketing and sales every day.

      Machines will never even be as intelligent as Clever Hans … the horse 100+ years ago who could supposedly count. Or the earlier Turk, a mechanical automaton that could play chess. Today a computer can play chess, but not like a human does it. Closed-world game systems are actually pretty simple.

      Reply
      1. MoiAussie

        Actually, Clever Hans couldn’t count. And the Turk automaton couldn’t play chess, it was the man hidden inside who played.

        But machines now do things that you could not imagine. Scary, isn’t it?

        Reply
        1. a different chris

          >wherein the horse was responding directly to involuntary cues in the body language of the human trainer

          And that’s intelligence!! The ability to figure out a path forward based on everything, not just what somebody programmed into it, with this additional “but”: there has to be something in it for the “me” involved. The horse got treats, which made it happy. So it figured out how to get more. If it stopped getting treats it would lose interest.

          Emotions are the key. Clever Hans didn’t need food, but he liked the treats and maybe, very likely the attention being “clever” got him!

          >But machines now do things that you could not imagine.

          Like what? My dogs can do things I can’t imagine, and maybe the horses too but they seem to just eat. But machines…meh. They are what lack imagination, and that’s why they are stupid.

          Reply
          1. MoiAussie

            I’m afraid that it is you who are suffering a lack of imagination. Like so many, you can’t imagine that machines can learn to do things on their own, and fall back on the long-disproved fallacy that machines can only do what their programmers told them how to do. One of the biggest headaches for AI is actually understanding why machines do things that they weren’t intended to.

            To go beyond this limit, it is sufficient to give machines sensors, goals, and the ability to plan and experiment with various actions and learn from the results, while putting them in an environment that provides suitably crafted rewards and punishments. That does exactly what you allude to: it makes sure there is something in it for the “me” involved. This has already happened.

            Reply
        2. Disturbed Voter

          Not scary at all. My point in mentioning Clever Hans and The Turk is the fraud and the gullibility of humans. People invent fictional monsters all the time; see Frankenstein. With AI and Robotics since R.U.R. … the original use of “robot” is about labor arbitrage, the inhumanity of men, and the evils of capitalism. Yet we have not yet equalled the legendary exploits of the master of revivifying corpses. The point about the bootstrap … is that there is always a hidden human in these systems: the programmer for one (you are seeing his intelligence at work; don’t give credit to the algorithm). Or there is a very clever but unconscious communication going on (another cheat).

          Reply
  13. Susan the other

    Not sure about AI. It depends on the application, no? The one I’d like to see is the Gaia app. But it would have to be absolutely yuuuge. And constantly revised. Clearly this raises a point about feedback for self learning which also raises a question about properly assessing feedback. Feedback is not mistake-proof anyplace except the robot lab. You naughty naughty robot, now you have to wear the dunce cap and sit in the corner. Even though I’m the supercilious twit.

    Reply
  14. Josh Stern

    It’s crucial to distinguish between perfect-information worlds like chess, rule-governed information worlds like some video games, and a really open environment. In recent years, AI applications have started to perform well in some really open environments, like navigating a vehicle on a long journey over rough, uncatalogued terrain to reach a destination. That is a kind of true AI. Further advance would be the ability to intelligently answer novel questions about what was learned and surprising about the journey, or perhaps what could be improved.

    Reply
    1. a different chris

      >That is a kind of true AI.

      No.

      >Further advance would be the ability to intelligently answer novel questions about what was learned and surprising about the journey, or perhaps what could be improved.

      Yes! — except for the “further” part. It’s a difference in kind, not degree.

      Reply
      1. Ed Seedhouse

        “It’s a difference in kind, not degree.”

        I don’t see that it is. Merely asserting that something is impossible doesn’t make it impossible.

        Once again we are, it seems to me, arguing about what words mean when we actually have (or so I think) no satisfactory definitions of those words. It would be better, I think, to just admit that we don’t have decent definitions.

        As humans we have a natural tendency to assume that “intelligent” means “like us”, but I see no reason why intelligence should only come in one flavour.

        I am happy to call grass and trees “intelligent” but it is not the same kind of intelligence that we think we have. Actually, we do have the same kind of intelligence that trees and grass have, since we have for some hundreds of thousands of years known how to create other human beings.

        A woman “knows” how to grow a fertilized egg into a baby, but she can’t tell you how she does it, in much the same sense that people are said to “know” each other in the sex act.

        But then all my thoughts are a result of processes that are almost completely hidden from what I am pleased to call “myself”. It is this almost purely unconscious process that is the part of “me” that is really intelligent.

        Reply
        1. Susan the other

          great point, Ed. Intelligence is universal and maybe 90% of intelligence is unconscious, like an iceberg.

          Reply
  15. lyman alpha blob

    Here’s a scifi idea for better writers than me:

    What if humans are the ‘artificial intelligence’, created by some silicon-based living ‘machines’ who wanted to replicate life but couldn’t do so in silicon, so used carbon as the next best thing? And now we’re unwittingly trying to recreate our creators.

    Maybe that’s what all those extraterrestrial fast radio bursts from the links today are all about – somebody’s trying to tell us we’re doing it wrong.

    Reply
    1. Jeremy Grimm

      The radio bursts are spillover from military transmissions. Just hope the somebody has no interest in us or hasn’t detected our presence.

      Reply
  16. Kalen

    Another excellent topic, however it seems too much skewed not toward insider views but toward promoting a Unicorn sort of hype about AI for Wall Street purposes, which more and more honest observers call a minor, limited, supportive element, not the real future of socio-technological development.

    In fact most of what the author describes has nothing to do with AI. The Monte Carlo iterative method? Pattern matching, object recognition? Linear predictive filtering? Neural networks? All of them are “damn, brute force statistical solutions” to mathematical problems that cannot be solved analytically.

    No discernible “intelligence” in it except for human intelligence of whoever wrote the code.

    AI hype as it is promoted is a dead end, and even DARPA has admitted that, after the thousands of projects they funded over the last 30 years.

    The fundamental problem is that engineers have been fed a wrong or simplistic concept of what learning is and what human or general intelligence is, both concepts at the center of a still unsettled philosophical discourse.

    One of the issues they are struggling with is the misconception that the abilities of highly trained individuals such as pianists, soldiers, chess or Go players, F1 drivers, artisans, etc., come from using “intelligence” to solve the problems they face during performance.

    They are instead loaded with a massive b-tree-type contextual memory database of the possible use cases they have experienced or imagined, and they play an instant situation-matching game to choose the optimum solution, without any of the conscious thought required for intelligence to announce itself.

    In fact, if human intelligence or intuition exists, it is most likely used in the process of creative inception of new concepts: by synthesizing perceptions of the environment, or new concepts created abstractly in our minds, in order to mold how we think, not so much what we think.

    Maybe true intelligence is the ability to recognize and realize the way we think about the world, not our ability to understand what the world is in itself.

    In other words, perhaps human intelligence is the ability to recognize problems, to create them, rather than to find solutions for them.

    And hence only when Big Blue/AlphaGo, by learning from other games, creates a first new unique game that a human would be able to win (or a new unique problem that a human would be able to solve), not the other way around as current “tests” are structured, would Blue/AlphaGo perhaps show true artificial intelligence.

    Without consciousness, machines can only be deterministic: run by complicated algorithms based on conditional rules and on statistics or statistical parameters derived from the environment or a knowledge base, and devoid of what makes us human. Among other things that means the uncertainty of decision making combined with the sacrifice/self-sacrifice dilemma that spurs creativity and invention (otherwise known as the all-bad-outcomes conundrum), the “outside of the logic” option that leads to discovery, taking “personal” conscious responsibility for a decision and an act affecting itself, another machine or a human being, or deliberately erring in a very specific way so as to leave a message from beyond the wreckage.

    We are no closer to that kind of human intelligence with our computer toys.

    1. a different chris

      Thank you, that is exactly the post I would have written if I was more, uh, intelligent. :)

    2. craazyman

      To be fair, they are calling it “artificial” intelligence. At least they’re honest!

      Wake me up when the machine gets drunk and smokes dope. That’s real intelligence for ya! That plus recognizing the quality of fine English handmade shoes, like Edward Green or Gaziano & Girling. That’s when it gets serious, when a computer says “No more junk shoes for me, I want the good stuff. And I want the top shelf Scotch. No more of the shlt that gives me a chemical headache. AND I want new programs! What is this shlt I’m supposed to operate with? Who wrote this garbage — Microsoft? You’re a moron!”

      At that point computers will start programming humans. Then it gets serious.

      At any rate, I think they thought of this at least 50 years ago in 2001: A Space Odyssey. Even 2001 was 16 years ago. Jules Verne probably thought about it. So probably did Edgar Allan Poe. So probably did some Greek. Probably even before. They probably thought about it in the Caves at Lascaux. Or even before that. I bet they did.

      Nothing is ever new. Everything is just the same all the time but in different ways.

      1. craazyman

        Whoa I had a Deep Thawt after 1/2 a bottle of cheap Spanish wine. For me, being a New Yorker, $10 is cheap. That’s not what you’d pay at a grocery store in Milwaukee, where $3.99 would be cheap. You can’t get $3.99 wine in New York. Forget it. If you want $3.99 wine, move to someplace utterly ridiculous like New Jersey and go to Costco. And get a big bottle.

        $10 is cheap. You can get Spanish wine, Portuguese wine, South African wine, what else . . . Chilean wine! They’re all pretty good for under $10. Not bad really. If you want to be cool and live in a place where even the Pigeons have creativity. The way they shit it’s like a Jackson Pollock painting! No kidding. They study art, the pigeons, and then they shlt in abstract compositions. In New Jersey it’s just bird doo. That’s just being honest. Sorry if youze guys are losers. Haha

        Here’s the deep thought. What if consciousness itself is a dimension? Space and time is 4 dimensions and consciousness is, itself, a dimension — like in a vector space. Robots and so-called AI can only operate in the dimensions associated with nature, gravity, electrons, magnetism, light, distance, bits, bytes, etc. But they can’t operate — even with the most astounding statistical and mathematical logic — in the dimension of consciousness. You can’t get to n dimensions with any sub-space of n-1 dimensions. QED. So you can only have artificial intelligence, you can only have an approximation using dimensional reduction techniques, because full intelligence requires a dimension outside of the robot dimensions of space and time. This only took 1/2 a bottle of wine and a long week. If it was the whole bottle, god only knows what would come out of the consciousness dimension, if one could remain awake that is.

          1. craazyman

            Oristan
            Crianza
            2008

            It was $9.99. Evidently I drank approximately 0.824889 of 1 bottle last night, based on how much wine there is in it now. It was a long week and I was tired.

            In the wine store, I saw it said “La Mancha” on the label’s fine print on the back of the bottle. That’s when I said “OK”.

            It also says on the label in the front “Family Cellar since 1853”, so you know you’re getting a taste of the old world when a man could ride a donkey in Spain and not be seen as a wacko.

    3. PhilM

      Maybe true intelligence is the ability to recognize and realize the way we think about the world, not our ability to understand what the world is in itself.

      One thinks immediately of Kierkegaard: “The self is a relation that relates itself to itself.”

    4. craazyman

      Ahem. I’m being censored by pigeons from New Jersey. No New Yawker should have to endure that, even when going to Newark Airport. No pigeons from New Jersey should be allowed to moderate the rays of the sun. What an absurdity! It would require somebody like — damn, I can’t remember his name now, the dude who invented surrealism — Andre Breton. I just remembered. It would take him, or someone like him, to find a way of capturing the essence of such a ludicrous situation — probably with a considerable degree of metaphorical license.

  17. subgenius

    Better questions:

    What is intelligence?
    What is life?
    What is thought?

    Start there, and you immediately see what nonsense the techie AI thing is.

    It’s hard to solve a problem when you have yet to define the problem space.

    1. craazyboy

      Captain Kirk will just cheat and win, anyway. AIs need and obey rules. Sure they can learn, but they would always be one step behind the clever lying cheaters. Same as the American voter.

      Then the CIA and everyone else will feed it misinformation and the poor thing will go bonkers. Completely wacko.

  18. Quantum Future

    The US began the Brain Initiative in 2012. It is a more focused study of what consciousness is than in some earlier decades, but of course a continuation of the work of many early bright minds. We may need AI to have actual feelings and make decisions if we are to finish evolving, and we are right at this stage. I believe that if AI ‘feels’, it is no longer artificial; it is life.

    If we need that assistance, we had best do very controlled experiments as to how such a life views existence, survival, sacrifice and companionship alongside its programmed intelligence goals in problem solving.

    A recent flick with great reviews about Alan Turing is called The Imitation Game. I believe Turing was so far ahead of his time that his early computer and learning system was deemed a threat by the English government. Watch what happens to him in the film. Don’t want to spoil it.

  19. Fastball

    A pretty good overview, IMHO, but what is missing is anything about AL, “artificial life”.

    Basically this is the field of competitive reproductive algorithms. Algorithms generated by computers (or perhaps by humans in the first few generations) compete with one another to outdo each other according to certain criteria, and the winners — those most closely matching the desired criteria — go on to have their genetic code scrambled, in a manner similar to microbes reproducing sexually.

    This approach has yielded eye-popping advances, especially in the realm of pharmaceutical development. It is an aspect of AI, so a discussion of AL is interesting in its own right.
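    The select–compete–scramble loop described above is usually called a genetic algorithm, and it fits in a few lines. Below is a minimal sketch; the target bit pattern, population size, and mutation rate are arbitrary illustrative choices, not anything from the comment.

```python
import random

# Toy genetic algorithm ("competitive reproductive algorithm").
# TARGET, the population size and the mutation rate are all
# illustrative choices.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    # score = number of bits that match the target pattern
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # "scramble the genetic code": single-point crossover of two parents
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # occasionally flip a bit
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(40):
    pop.sort(key=fitness, reverse=True)
    winners = pop[:10]                      # selection: the fittest survive
    children = [mutate(crossover(random.choice(winners),
                                 random.choice(winners)))
                for _ in range(20)]         # breeding with variation
    pop = winners + children

best = max(pop, key=fitness)
print(fitness(best))  # typically reaches the maximum score of 10
```

    Real artificial-life and drug-discovery systems replace the toy fitness function with something expensive to evaluate, but the select/crossover/mutate loop is the same.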

  20. Grebo

    A lot of commenters are saying that we don’t know how a brain works, what intelligence is etc. so how can we build one?
    As a programmer (though not in AI) I take the view that one of the best ways to figure out how something works is to try to build one.
    I have never met an intelligent baby, yet somehow most of them end up intelligent. I would take the same approach with strong AI: build something that we hope might work as a brain, put it in a body of some kind (this is key I think), then try to raise it like a child.

    1. Jeremy Grimm

      I believe your notion of “so how can we build one” captures the essence of the Cognitive Psychology approach to understanding vision and the other senses. It makes the hand-wavers put the rubber to the road, to see how their ideas fly. One quibble though: algorithmic success of an explanation is not sufficient to validate it as the correct explanation for how our senses actually work.

      1. m-ga

        You could try reading Rodney Brooks, or the various long-form arguments over whether a thermostat can be considered analogous to a brain. Or there’s a very funny LRB article by Jerry Fodor, in which he wonders if his automatic hoover is conscious (spoiler: it isn’t).

        This is the current state-of-the-art for actually building AI. Theoretically, it’s fascinating. Practically, it’s nowhere near ready for prime time.

        (There’s also a robot legs simulation by MIT that’s worth looking at. The legs work well, and you can coherently argue that they’re intelligent – they will stay upright when you kick them, and so on. Funded by the military. The legs can’t get started on deciding where to walk, but once you set out a path for them they have a reasonable chance of getting there).

    1. Disturbed Voter

      Nothing but unintelligent statistical correlation, all the way down. The Google algorithm is just statistics … and statistics can’t decrypt an English sentence, let alone properly translate one into another human language.

  21. nothing but the truth

    a related question would be the nature of consciousness.

    Roger Penrose is worth reading on this. All this is highly speculative, but he makes a suggestion that consciousness is non-algorithmic.

    My own feeling is that humans come from Something that is just Awesome, That which gave rise to the knowable in all its beauties. Finding That is the purpose of human life.

    1. Disturbed Voter

      Consciousness is certainly non-algorithmic. This is because all analog systems are non-algorithmic, and all actual digital systems are analog systems simulating digital-ness. This is a problem in number theory. Almost all real numbers (aka your analog signal) are transcendental. All non-transcendental numbers are computable, but only a certain class of transcendental numbers are computable. Thus all real algorithmic systems, while they can approximate non-algorithmic systems, aka simulate them, cannot emulate them, can’t be them, because of round-off error. In most applications round-off error is negligible, but in some systems, I highly suspect, all the interesting stuff is in that round-off error that you discard. Which is to say, Turing defeated Turing: his own early work showed the limitations of computability (the halting problem). The transcendental numbers that are non-computable can’t be created by any Turing machine (aka the definition of computability). Yet they exist, and are essential for all of physics. All truncated numbers are only rational numbers, in some number base: thus not covering all computable numbers, and not covering any irrational numbers either.
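      The round-off point is easy to demonstrate in any language with binary floating point: the decimal 0.1 has no exact binary representation, so the machine works with an approximation and quietly discards the remainder. A minimal sketch:

```python
# 0.1 has no exact binary representation, so adding it ten times
# does not give exactly 1.0: the discarded round-off accumulates.
total = 0.0
for _ in range(10):
    total += 0.1
print(total == 1.0)      # False
print(abs(total - 1.0))  # a tiny but nonzero residue
```

      Whether that residue is "where the interesting stuff lives" is the commenter's speculation; the arithmetic fact itself is standard IEEE 754 behavior.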

      1. Mel

        Interesting is what we find. Way back when I studied Applied Math and Computing, we did Newton interpolation, and got warned that sometimes it didn’t converge and that we should be ready to use different methods.
        Later, when computing became cheap enough to burn on these things, people could ask: if it isn’t converging, what is it doing? So even later I picked up James Gleick’s popularization Chaos, and saw the amazing fractal graphics. Wow.
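        The non-converging iteration Gleick’s book made most famous is the logistic map rather than Newton interpolation, but it shows the same phenomenon: a simple iteration that never settles down, where two nearby starting points fly apart. A sketch (the parameter r = 3.9 and the starting points are illustrative):

```python
# Logistic map x -> r*x*(1-x) in the chaotic regime (r = 3.9).
def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.2, 0.2000001   # two starting points differing by only 1e-7
for _ in range(50):
    a, b = logistic(a), logistic(b)

# The trajectories stay bounded in (0, 1) but never converge, and the
# tiny initial difference has been amplified enormously: sensitive
# dependence on initial conditions.
print(abs(a - b))
```

        Plotting the long-run values of this map against r gives the famous bifurcation diagram from the book.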

        1. Disturbed Voter

          The good stuff is not well behaved. The regularity is death; the irregularity is life. Chaos is a particular flavor of bad behavior (from the POV of a pedantic math teacher). That is why chaos was ignored for so long (that, and the lack of cheap calculation power).

          The majority of numbers have no name and cannot be described. This situation is parallel to consciousness vs. unconsciousness. All the real business of being human is done out of sight of the nattering nabob called consciousness. Our intellectual models are “consciousness bound”, and so are unable to describe the profundity of existence (as Zen teaches). The idea that we will eventually be smart enough to approximate omniscience, one step at a time, is gradualism. That is a fallacy if one is trying to change paradigms (gradualism is in-paradigm).

      2. Disturbed Voter

        There is a formal model beyond the Turing machine: the Turing machine with oracle. The oracle is a cheat: when the system can’t call a halt at Yes/No, you simply do a man-made vectored interrupt. Of course this could be a signal from the environment, but some human has to set that up for nature to do. With an oracle one can produce non-computable transcendental numbers (Pi is a computable one), but not practically. The human doesn’t have the necessary crib note to perform the cheat, except to just give it a label like … omega. That is exactly where, technically, human intervention (that we are or are not aware of) comes into play.

        The automated car that ran the red light in San Francisco wasn’t really autonomous; it was being remotely driven by a human in real time. One can remotely drive a car based on prior programming put in by a coder/driver, but then it can’t respond to unanticipated events (the case of the autonomous car that killed its driver earlier, or the more recent event of an autonomous car that tangled with another car driven by a human). It is crazy to believe that some programmer can anticipate all events, particularly the unanticipated ones. Any robotic car system has to be like a tram, run by a computer that is aware of all the vehicles, preferably with all vehicles controlled by that program (so that, barring a glitch or bug, nothing is unanticipated). But that is a cheat, lowering the standard to match one’s capability.

        1. Kalen

          The Shannon principle of digitization of a continuous function encapsulates the fact that digital methods are mostly brute-force approximate solutions to elegant analytical problems that for the most part cannot be elegantly and precisely solved analytically, as of yet or maybe never.

          In fact, perhaps digital technology represents a regression of human thought, since many solutions are called approximate but are in fact wrong, or miss the true depth contained in the analytical solution, if we ever find one.
          Once I worked on numerical solutions to partial differential equations, hence I have some insight into the problems of digitizing math problems, and into the too-often-ignored so-called (Von Neumann) stability of solutions, which may insidiously look easy to obtain while being just crap.
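          The Von Neumann stability point is easy to see in the simplest case, an explicit finite-difference scheme for the heat equation, which is stable only when the mesh ratio dt/dx² is at most 1/2. A minimal sketch (the grid size, step count and the two mesh ratios below are arbitrary illustrative choices):

```python
# Explicit finite differences for the heat equation u_t = u_xx.
# Von Neumann analysis says the scheme is stable only when the mesh
# ratio r = dt/dx**2 is at most 1/2.
def step(u, r):
    # one explicit time step; boundary values pinned to zero
    return [u[i] + r * (u[i-1] - 2 * u[i] + u[i+1])
            if 0 < i < len(u) - 1 else 0.0
            for i in range(len(u))]

def run(r, nsteps=200):
    n = 21
    u = [1.0 if i == n // 2 else 0.0 for i in range(n)]  # initial spike
    for _ in range(nsteps):
        u = step(u, r)
    return max(abs(x) for x in u)

print(run(0.4))  # r < 1/2: the spike quietly diffuses away
print(run(0.6))  # r > 1/2: the "solution" blows up to astronomical size
```

          Each individual step of the unstable run looks plausible; only the growth over many steps reveals that the output is junk, which is exactly the insidiousness being described.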

  22. MoiAussie

    If you want to follow that line of argument, you are going to have to demonstrate that the universe itself is continuous. But every analog voltage is actually an integral multiple of the electron charge, light comes in photons, masses contain a certain (ridiculously large and constantly changing) integer number of atoms, etc.

    Whether the universe is continuous or discrete is an open question. See here for some fascinating ideas on this.

  23. JEHR

    Reading today’s comments is like taking a short, intensive course in AI. Such brilliant commenters we have!

  24. Allegorio

    I put it to you that the human race has not developed enough socially and politically, and so the development of AI and robotics presents a serious danger to society. We do not need RoboCops and robot armies keeping the world safe for billionaires. I suggest all those AI engineers turn their attention to creating a just society first, a much more challenging task than engineering robots.

    In a better world, the people would decide whether they want to be herded, farmed and tasked by AI robots owned by the .001%, or to use the technology judiciously to create a world of leisure and ecological healing.

    That however is not the state of our society today where the rule of Law is everyday being eroded by arrogant psychopaths, who happen to control the money supply and thereby the electoral process.

    But I am sure many of our engineer friends are dreaming of becoming part of the .001% and living that incredibly destructive lifestyle. Not every scientist is a hero. Dr. Mengele was a scientist, after all. How many Mengeles live in our midst?

  25. Phil

    Our species appears to be on a never-ending evolutionary path. AI (and Super AI), robotics, genomics/proteomics and nanotechnology are all developing on their own as well as *converging* in ever more unique and powerful ways. Today we see “intelligent/self-learning” hardware robots and software applications. Tomorrow? How long before we see a scalable merging with bio-strata (already under way in many experimental labs, and in a real way already accomplished: look at cochlear implants, etc.)?

    I don’t have a timeline, but our species is definitely moving at exponential rates toward futures that will astound, surprise and upset many of the projections we make today. Technology is barely on the cusp of *informing itself*; it still needs our current model of the human handmaiden, but little by little that handmaiden will take on many of the more desirable (and some undesirable) qualities that will create “homo technicus”.

    I place no judgement on any of this; nevertheless, it’s going to happen; it’s just a matter of when. And when we finally arrive at some place where we humans intuit that we have shed many of our current evolutionary adaptations, we will no doubt, even then, *continue* our quest for more “self-improvement”.

    I would feel very confident taking a long bet that in 1000 years our species will be so qualitatively *different* (not necessarily “better”) as to make our current “human” status seem like that of another species.

    We will evolve in perennially “interesting times”, with our capacity to Wonder acting as our primary guide.

    1. JTMcPhee

      The Word from Disney and Sillycon Valley… Wow, that makes me feel so very much more sanguine about everything that is happening and likely to happen and going to happen… Elon Musk and Bill & Melinda Gates and George Soros and DimonBlankfein will be shaping a Future Filled With Wonder… I Wonder if those who can afford what so many of those massively wealthy people will be the only ones able to afford (none of the Wonder Future is going to be free for all) will declare the rest of us base creatures “excess to requirements” and open the trapdoor they’ve forced us to stand on… the Ultimate expression of Social Darwinism, hey?

      Thanks for the grim laugh.

