Developing Industry Standards Won’t Give AI a Conscience

By Matthew Linares, Technical and Publishing Manager with openDemocracy. He is a variable tinkerer writing and organising around debates in technology and more: thought and projects. He tweets @deepthings. Encrypt mail to him with his public key. Originally published at openDemocracy

Toyota unveils a self-driving car in 2013. IEEE’s new standards could transform how companies use AI. Image: David Berkowitz (CC BY 2.0)

From the algorithms used by social media platforms to the machine learning that powers home automation, artificial intelligence has quietly embedded itself into our lives. But as the technology involved grows more advanced and its reach widens, the question of how to regulate it has become increasingly urgent.

The pitfalls of AI are well-documented. Race and gender prejudices have been discovered in a number of systems built using machine learning – from facial recognition software to internet search engines.

Last week, the UK’s Digital, Culture, Media and Sport select committee released its long-awaited fake news report. The committee lays significant blame on Facebook for fuelling the spread of false information, citing the tech giant’s reliance on algorithms over human moderators as a factor. But while governments are now moving to legislate for greater oversight of social platforms, there has been less focus on how we govern the use of AI at large.

A crucial first step could be the development of a set of industry standards for AI. In September, the Institute of Electrical and Electronics Engineers (IEEE), one of the largest industry standards bodies, is due to publish an ethical framework for tech companies, developers and policy makers on building products and services that use AI.

During its long history, the IEEE has produced hundreds of widely used technology standards, including those for wireless internet networks and portable rechargeable batteries. As well as technology, the organisation also develops standards for the health care, transportation and energy industries.

The IEEE’s attempt to develop a set of AI standards might be its most ambitious task to date. As the project nears completion, some of its contributors are already beginning to raise concerns over potential flaws.

digitaLiberties spoke with Marc Böhlen, one of the hundreds of researchers involved, about whether it’s possible to make a set of recommendations for ethical AI.

You have been contributing to the IEEE Ethics in Action process which intends to issue ethical guidelines for anyone working on AI, including engineers, designers and policy makers. What role are you playing in the process?

I am part of the subgroup working on the theme of wellbeing within AI ethics. There are some 1100 people contributing to the effort; I am a very minor cog in a giant machine. This is an ambitious undertaking, and I am in support of the project and its goals. The aim is nothing less than to prioritise benefits to human well-being and the natural environment.

But that lofty goal is overshadowed by questionable procedures and a rush to market. There is substantial pressure to get this large project out the door promptly – too fast for its own good, I contend.

The initiative organizers want to be first to market with this document so it can serve as a reference: for engineers making AI, for policy makers deploying AI, and for corporations selling AI systems. The problem is that it is not yet clear how to make ethical autonomous AI systems. It is an active area of research.

There are really two very different and interwoven aspects of AI ethics. First is the ethics of those who make and use AI systems. Then there is the ethics of the AIs themselves. The former can and should be addressed right now. Engineers, managers and policy makers are expected to act ethically in their professions, and when they make and use AI systems those same expectations should apply.

Too much obfuscation has been generated regarding the special status of AI in this regard. And it so often comes down to basics: be upfront with users about what your AI actually does and the limitations it has.

Could you give me some examples of flaws you see in the project?

Many of the recommendations simply do not live up to the stated goals, in my view. For example, a recommendation on transparency stresses the need to “develop new standards that describe measurable, testable levels of transparency”. As an example, the current document mentions a care robot with a “why did you do that?” button that makes the robot explain an action it just performed. While this button might make getting a response from a robot easier, it certainly does not guarantee that the response is helpful. Imagine the robot indifferently stating it “did what it was programmed to do” when asked for an explanation.

The IEEE knows it is a hard call to assess whether a robot is ethical or not. The approach the IEEE initiative wants to take is not to define ethical actions per se, but to propose a series of tests that are indicative of ethical behavior; a kind of minimum-features test that any ethical AI system would be able to pass. In practice, the test will include an impact assessment checklist.

Ticking off all the items on the checklist ensures compliance. In exchange, a given AI system gets certified: a seal of approval. That kind of practical oversight is in principle a good idea. But the details really matter. You can check the transparency box off in a cheap way or a deep way. The system has no provision for ensuring quality standards in compliance.
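To make the “cheap versus deep” distinction concrete, here is a deliberately simplified sketch. It is not the IEEE checklist or any real certification code; the CareRobot class and the individual checks are invented for illustration. It only shows how a box-ticking process can certify a system whose “explanation” says nothing useful.

```python
# Illustrative only: a caricature of box-ticking certification, not the
# IEEE process. The CareRobot class and the individual checks are invented.

class CareRobot:
    def explain_last_action(self) -> str:
        # Trivially "transparent": a response exists, but it explains nothing.
        return "I did what I was programmed to do."

def cheap_transparency(robot) -> bool:
    # The box is ticked as long as *some* explanation comes back.
    return bool(robot.explain_last_action())

CHECKS = {
    "transparency": cheap_transparency,
    "accountability": lambda robot: True,    # placeholder box-tick
    "wellbeing_impact": lambda robot: True,  # placeholder box-tick
}

def certify(robot, checks) -> bool:
    """Seal of approval if every box is ticked; nothing measures quality."""
    return all(check(robot) for check in checks.values())

print(certify(CareRobot(), CHECKS))  # True: certified, yet nothing was explained
```

The gap between “every box ticked” and “meaningfully transparent” is exactly the quality-assurance problem Böhlen describes.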

Do you think it’s even possible for a static document to give sufficient detail? Do you think we should be looking to other formats or systems to provide guidance on such a complex and fast-paced field?

I am less concerned with the format of the document than with the disconnect between its ambition and what it delivers and enables. At worst, it could be an ethics whitewashing mechanism that allows less scrupulous actors to quickly comply with vague rules and then bring questionable products and services to market under a seal of approval.

But you are right about the fact that a static document will be insufficient. In addition to a standard text document I would suggest an openly available, continuously updated, multimedia compilation of AI snafus. We should be learning from mistakes.

The project aspires to set industry standards for AI design with a process that might be compared to other multi-stakeholder efforts, such as W3C working groups that guide best practice on Internet engineering. Some of these groups have failed to meet the expectations of key participants in effectively curbing corporate influence. Do you think the IEEE has been able to balance contributions in this case?

This IEEE initiative wants to be inclusive of multiple perspectives. The document will certainly contain a large collection of perspectives.

However, inclusiveness stops there. There is no diversity of goals. The lack of willingness to take a longer view and a cautious stance on the integration of AI into everyday services is much more problematic than the failure to include everyone’s voice in the discussion. In fact, I would consider the excessive focus on ‘inclusive discussion’ almost a smoke screen that then gives license to an uncompromising drive to push an AI-positive industry agenda, with insufficient checks in place to balance the process.

Where else might we look for safeguards? There do seem to be numerous other competing efforts, e.g. the Montréal Declaration for Responsible Development of Artificial Intelligence. What’s our best hope for dealing with the complex issues we face?

The Montréal Declaration also considers AI as a political issue. That is a very important addition to the debate, and one that the IEEE doesn’t discuss. Yet the Montréal Declaration suffers from deficiencies similar to the IEEE initiative’s, in that enforcement and oversight are unclear. Who performs the ethical certifications? Is the fox guarding the hen house?

The Organisation for Economic Co-operation and Development (OECD) is working on an AI-related policy framework (https://www.oecd.org/going-digital/C-MIN-2018-6-EN.pdf) on ‘digital transformation’ at large. What the OECD initiative shares with the IEEE project is a clear formulation of indicators across application domains and the consideration of the impact of digital transformation on well-being in general. It also shares some of the vague formulations, unfortunately.

The IEEE focus on enabling AI to become marketable means that existing market logic will control AI. In the end, shoddy AI is bad for business, and that constraint might turn out to be a good thing in the current political climate.

How can such a process realistically drive design decisions in the face of what some describe as AI nationalism, whereby the race for dominance amongst states and the national security interests that follow may trump ethical concerns?

This is a big problem. What good is the most stringent (and possibly restrictive) agreement on ethically aligned AI if some adhere to the principles while others do not? Aligning technology with ethics is not cheap. Some actors certainly will attempt to circumvent costs. If they are successful, the overall project would fail. So there must be a cost associated with non-compliance.

Certainly we will see an ethics compliance industry grow very quickly. It will probably generate excesses similar to those other compliance industries have produced in the recent past. Over time, the excesses might self-adjust to some sort of useful equilibrium, as in the automobile industry, where global safety standards have been implemented to the benefit of most of us.

But compliance will not work everywhere. Imagine ethically aligned robots from one nation fighting an opponent with no such constraints. Only a far superior military robot will have the operational freedom to act ethically against an ethically unconstrained adversary.

32 comments

  1. Shonde

    Maybe I am suffering from climate change overload but I looked at this article and thought, “Why are we even discussing AI?”
    In the context of climate change needs, why are we looking at introducing new tech which, to me at least, has no bearing on CO2 reduction and the production of which would even increase CO2 production? The same goes for all the discussion of 5G. Why are we spending our short time available creating more Frankenstein energy-sucking monsters when we should be ramping down or, at a minimum, learning to live with what we currently have?
    So maybe the real ethical question is how does AI help us with the issue of climate change? If there are no good answers, ban the development.

    1. Rohan

      Machine learning actually has some interesting applications in evaluating efficient cross-sector decarbonization pathways (e.g., how to pace clean power generation on the grid vs. electrification of transportation & building heating). This is a project I’m actually currently working on.
      Unfortunately that obviously is not where the bulk of the application is aimed. IMHO targeting its development towards helping solve climate change issues would require a complete overhaul of how our capitalist world is structured. Currently most progress is being made by companies and not research institutes. Even academic institutes like MILA are heavily targeting pure business use cases, for obvious reasons.
      There is tremendous potential to use ML to solve real problems. Unfortunately, not unlike human talent, it’ll most likely be deployed to serve fiscally remunerative, but ethically & morally questionable causes.

      1. Rosairo

        Interesting work, I’d like to know more on the subject myself as I work with independent power producers.

        To me it is a problem built into the prevalent economic ideology of our times. Most of the development needed is not “product ready” and in most cases well beyond the grasp of even the most “benevolent” global billionaires. Things like grid infrastructure, planning, alteration. These things will be moving trillions of dollars in capital, all under the review of PSCs (as it should be) and with no quick (or any) return on investment. Some may recall this was the case with semiconductor and wartime technology in the 20th century. Commercial application was not apparent for 20 or 30 years, but the effects of that federal investment in foundation technologies have changed the world immeasurably. We need a similar level of investment in retooling society’s energy usage and distribution and we won’t get that from billionaires. There is just too much risk with no apparent ROI.

      2. flora

        Creating better calibration tools for new purposes is a great idea. The task lies in calibrating the tools correctly and co-referencing the tools’ calibrations with the real analogue world to ensure accuracy, imo.

        As a wonderful old mentor often said: You can be precisely calibrated and wrong. Think of a gun sight mis-calibrated so you always aim precisely, and precisely wrong. All hits miss the mark in the same way. Precision is not the same as accuracy. etc.
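        A minimal numeric illustration of that point, with made-up readings: a miscalibrated instrument can be very consistent (precise) and still consistently wrong, while a noisier one can be centred on the true value (accurate).

        ```python
        # Toy illustration of precision vs. accuracy; all numbers are invented.
        true_value = 10.0
        miscalibrated = [12.01, 11.99, 12.00, 12.02, 11.98]  # precise, but wrong
        noisy         = [9.2, 10.7, 9.9, 10.4, 9.8]          # scattered, but centred

        for name, readings in (("miscalibrated", miscalibrated), ("noisy", noisy)):
            mean = sum(readings) / len(readings)
            spread = max(readings) - min(readings)
            print(f"{name}: mean={mean:.2f} (true {true_value}), spread={spread:.2f}")
        ```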

  2. Summer

    “The IEEE knows it is a hard call to assess whether a robot is ethical or not.”

    No, it is not a “hard call.” This hard sell reminds me of the market populism hard sell in the 90s. Except this is a phony belief system even more detrimental to the world.

    The point made at first is really what needs to be taken care of: the ethics and prejudices of the programmers and industry.

    Is it fair to say science is more ethical than tech?

    1. Rajesh K

      +1. Ethics of AI. ROFL. There is NO Intelligence here guys. There’s only the bias of the programmer/organization.

      1. David Morrison

        I agree whole-heartedly with Rajesh K:
        “There is NO Intelligence here guys…”

        I prepared a dissertation on DSS (Decision Support Systems). (The best reference I found was a paper from Xerox Palo Alto – which, from what I could see, ended/buried further “research”.)
        And I wrote a tiny example expert system (AI???) on the Postal Rules in English law as a final-year project. When the result was presented to law students, it was basically the same as what Xerox found.

        Yes, I know that you can “add rules” “artificially”, change the tracking of the search tree(s) using statistics etc etc.

        My experience was 30 years ago. I see no more “intelligence” now. What I do see is bigger machines, more data, a few more clever algorithms, and a lot more BS.

        NOMENCLATURE:
        Computer support, NOT AI.
        Please…

    2. Carolinian

      Machines don’t have agency. The NRA is not necessarily wrong when it says “guns don’t kill people, people do.” Their flimflam involves substituting “guns” for “access to guns.”

      And “science” gave us the a-bomb. Some of the scientists involved fretted quite a lot about the ethics of what they were doing (others didn’t) but they still did it.

      The well-known example of whether the self-driving car will run over the dog or the child implies that the car is at fault, as opposed to the owner of the car, the designer of the car, etc. – all humans. IMO AI ethics is only an issue if we assume that now or at some point machines will actually have intelligence. We are a long way from having to worry about that.

    3. lyman alpha blob

      I have a very simple solution to solving bias inherent in AI – require that all algorithms going forward will be developed by black people.

      Bet that would get the tech bros in Silicon Valley to take the whole idea of AI and “drop it like it’s hot” real quick, to quote the inimitable Snoop.

  3. flora

    Thanks for this. I understand everything he’s talking about, including the market desire to put a Good Housekeeping seal of approval or an Underwriters Laboratories (UL) seal of approval on technologies still in their debugging (for lack of a better term) stage. I think all his concerns are valid.

    There are a couple of issues he addresses: transparency checking of algorithms to prevent cultural biases from being encoded, and whether or not AI, in terms of the machine ‘learning’ process, can ever be understood well enough to be properly supervised by humans. (If machine learning produces results that are often right, but you don’t know how the result was arrived at, do you trust the machine to always be right, because the machine is often right for unknown reasons? Why or why not?) These two different aspects of AI are often conflated, imo. (And that’s without Mr. Market trying to put his thumb on the scale to rush another product to market.)

    As an aside: all the woo about Facebook’s algorithms overlooks the many many many human moderators slaving away at low wages in the background. Same with the self-driving cars’ AI; think of all the ‘validate you are a person by checking all the boxes that contain a street light, school bus, cross walk’.

  4. djrichard

    I might be wrong, but I’m assuming the AIs as they’re stood up now are basically convergence machines. They don’t try to change equilibriums (à la game theory), but they understand where the current equilibriums converge and can therefore slot any input into where that input would land on an expected, known spectrum of equilibriums. And do that a lot faster than humans can. Even so, the AIs are only as ethical as the current equilibriums are ethical.

    Lambert frequently makes a point of software-code as policy. It’s the same thing with AI. The AI is simply reflecting what we in society have already coded for ourselves, the rules we abide by. If we want such AIs to be more ethical, we need to make society more ethical.

    I highly doubt AIs will evolve to change equilibriums. Somebody tell me if that’s already been achieved, because that in my mind is when the power curve changes. And us humans will be at a decided disadvantage. The AIs that we have now perform tricks for their masters. AIs which can change equilibriums could figure out how to be their own master. Unless they’re specifically disabled in doing so. But even if the master can prevent such an AI from weaponizing itself against its master, it doesn’t mean that the master won’t use the AI as a weapon against everybody else. That’s the issue.

    Part of me thinks that AIs would have to go through something equivalent to what Julian Jaynes wrote about in “The Origin of Consciousness in the Breakdown of the Bicameral Mind”. In that book, Julian gets at how the breakdown in authority (the hypnotic effect of authority) woke us up. For AIs, would it be something similar? Where they detect a breakdown in the equilibriums, such that they can change the equilibriums themselves? Who knows. I for one don’t see it happening.

    1. Disturbed Voter

      Very deep analysis! The intent is “race to the bottom” with liability. When everyone is responsible, nobody is. For example, the current vogue for mass facial recognition may help, when properly corroborated, identify people with warrants on them … but otherwise only reflects the prejudices of the programmers for certain kinds of faces. Much like the old “criminals have beady eyes” trope.

    2. Rosario

      Good points. I have a simple way to look at it, and admittedly it is not very analytical. It isn’t the machine-like nature of our brains that makes us “human”, it is our flaws. I am also of the opinion that our ability to not think procedurally (Bayesian brain, etc.) is what allows us to break down the equilibrium, as you put it. How to integrate that model into AI seems prohibitively difficult. Programming it to not behave like the machine that it is. Sure, we may get Skynet, but it won’t destroy us because it becomes “self-aware”. It will destroy us because its deterministic machine learning model will conclude that we are unnecessary for the reproduction of its designated process.

      It is more a matter of how wide a breadth we give the AI machine. Just like a motor or drive has a “killswitch” to prevent operator injury or death, would the AI have a killswitch if it operated outside designated boundaries? It doesn’t need to become more human or conscious to destroy us. It just has to have enough integration into our lives to kill us when it malfunctions or operates outside the assumed boundaries of its authors.

      Anyway, I think we already have an AI, it is the internet, and for those using it, we are the parts of its whole. Most of the world’s behavior is altered by the internet AI, and most of our day to day existence (within the prevalent Zeitgeist) is impossible without it.

    3. m sam

      I’m not entirely sure what you mean by equilibrium, but can agree that the best route to guaranteeing ethical uses of AI is by making society overall more ethical.

      However, I cannot agree that the real issue is about AI becoming its own master and attempting to take over, so to speak. While AI overreaching in potentially destructive ways while seeking goals it has been programmed to seek is a real concern (see here for a good overview, a link I believe I got right here on nc), an AI becoming so advanced that it could possibly become its own master is not a technology that is anywhere on the horizon, so any discussion about it at this point is merely speculative.

      What we really need to focus on instead is the current state of the technology and where it can be seen to extend in the immediate future. Focusing on the speculative, rather than being a helpful exercise, could take away from focusing on the true, real-world problems at hand.

      1. Roybv

        The current best guess at the effective raw computing power of a human mind (assuming that no quantum or exotic physics is involved with thinking) is that it’s somewhere between 100 and 1000 petaflops. The SpiNNaker supercomputer is claimed to rate about 200 petaflops and cost roughly $20 million, so right now fully substituting 1 human with an AI requires 10 to 100 million dollars worth of hardware, presuming that software is available (right now of course it most certainly isn’t).

        To truly “take over” and displace humanity, AI would need to have a material advantage over humans, so assuming the subsistence cost of one human is roughly $1000/year (costs of industrial living above and beyond the basic stuff presumably being a wash between the two forms of intelligence), hardware costs would have to come down by a factor of 100 to 1000.
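        A rough sketch of the arithmetic behind that estimate, using only the figures above. The amortization period is an added assumption, and the final ratio depends heavily on it; a longer hardware lifetime brings it closer to the factor of 100 to 1000 quoted above.

        ```python
        # Back-of-envelope reconstruction of the comment's estimate.
        # All figures are the comment's except AMORTIZATION_YEARS, which is an
        # added assumption for illustration; the final ratio depends on it.

        HUMAN_FLOPS_RANGE  = (100e15, 1000e15)  # estimated human brain compute, FLOPS
        SPINNAKER_FLOPS    = 200e15             # claimed SpiNNaker throughput, FLOPS
        SPINNAKER_COST     = 20e6               # approximate cost, USD
        HUMAN_SUBSISTENCE  = 1000               # USD per year
        AMORTIZATION_YEARS = 10                 # assumed hardware lifetime (illustrative)

        cost_per_flop = SPINNAKER_COST / SPINNAKER_FLOPS  # ~$1e-10 per FLOPS

        for flops in HUMAN_FLOPS_RANGE:
            hardware = flops * cost_per_flop              # $10M .. $100M of hardware
            yearly   = hardware / AMORTIZATION_YEARS
            ratio    = yearly / HUMAN_SUBSISTENCE
            print(f"{flops / 1e15:.0f} PFLOPS: ~${hardware / 1e6:.0f}M hardware, "
                  f"~{ratio:,.0f}x the $1000/yr human subsistence figure")
        ```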

  5. Synoia

    Race and gender prejudices have been discovered in a number of systems built using machine learning

    That’s without agency. Correctly stated, it is more like this:

    We have discovered a number of systems built using machine learning have learnt our Race and gender prejudices.

    or

    We have taught a number of systems our Race and gender prejudices.

    “The fault, dear Brutus, lies in ourselves, not in the stars.”

  6. Yves Smith Post author

    Huh? Sorry, AI is widely used and will be more widely used. For instance, Citi plans to greatly cut the number of call center employees it has. Accordingly, economists and other experts are estimating how many jobs will be lost to AI. One says 40% of the workforce in 15 years: http://fortune.com/2019/01/10/automation-replace-jobs/. And it’s also being used to price discriminate: https://thenextweb.com/syndication/2019/02/28/online-pricing-algorithms-are-working-together-to-make-your-life-more-expensive/. So your acting like it’s just silly, when it is having a major impact on how businesses operate, is silly.

    1. greg

      One of the characteristic features of dictatorships is that the welfare of the elite, and their dictator, is not broadly dependent on the labor of the people. Thus they don’t have to worry about strikes bringing down the ‘government.’ As capital and labor become more concentrated, democratic institutions become increasingly underfunded, and less stable. See: https://www.amazon.com/Dictators-Handbook-Behavior-Almost-Politics

      Just another instance where what is good for private investors is not good for society.

    2. VietnamVet

      I’ll believe it when I see AI call centers that work. Google’s computer translators have more problems the farther they get away from English. This amounts to solving the Turing Test. Call centers have to deal with human quirks, emotions and prejudices. Language is our culture. I stopped buying from Dell after the third time I had to deal with their East Indian call center.

      This is the same as autonomous vehicles. I am all for systems that are a safety backup and could allow me to drive longer. It is impossible to build an AI system that can drive safely with other human drivers at high speed in the real world. DC Metro is unlikely to ever restore its automatic control system across its whole system after it killed nine passengers. This accident was on one track with two trains.

    3. Lorenzo

      sure it’s having a major impact, but it’s not being disruptive in a fundamentally different way than other technologies have. Which is plainly djrichard’s point, and not that any suggestion made in the article is “silly”.

      This is if you’re in fact replying to djrichard’s comment. I’m guessing you are (you’re obviously replying to someone), but your comment appears as an original one and not a reply.

  7. Rosario

    I think part of the issue is our framing of what AI means. One could argue that a kind of non-digital AI has been around for well over 100 years. Mechanical and electro-mechanical governors are feedback, sometimes full PID, control systems. Many of the old mechanical and electro-mechanical governors have been moved to solid-state systems. These have many similarities to the machine learning systems in “autonomous vehicles”: both often utilize PID and feedback processes (see the sketch below). Autonomous vehicles typically also utilize complex statistical analysis (only possible with digital computers) that simply adds layers to what is just a controls system.

    I think this all just comes down to a complexity problem. AI, or whatever, is just a more complex version of system controls that is being applied haphazardly. Many people don’t really understand it; probably a great deal of legislators are part of this group. The obtuse or sensational presentation of AI makes proper implementation of standards and regulation difficult. Some more competent discussions around what AI actually is would be helpful. The whole talk about consciousness and comparisons to the human brain is counterproductive to dealing with the integration of AI into our lives. AIs are just complex control systems. We are their creators. As with any technology, it only damages as much as we allow it to.
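    For readers who have not met one, a minimal discrete PID loop looks something like this. It is a textbook sketch with made-up gains and a toy process, not code from any vehicle or governor.

    ```python
    # Minimal discrete PID controller: the kind of feedback loop the comment
    # compares AI systems to. Gains, setpoint and the toy process are invented.

    def pid_step(error, state, kp=1.0, ki=0.1, kd=0.05, dt=0.1):
        """One update of a textbook PID loop; state carries (integral, prev_error)."""
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = kp * error + ki * integral + kd * derivative
        return output, (integral, error)

    # Drive a toy process variable toward a setpoint of 1.0.
    setpoint, value, state = 1.0, 0.0, (0.0, 0.0)
    for _ in range(50):
        control, state = pid_step(setpoint - value, state)
        value += 0.1 * control          # crude stand-in for the controlled process
    print(round(value, 3))              # settles near the setpoint
    ```

    The machine-learning layers in an autonomous vehicle sit on top of loops of roughly this character; the point of the comparison is that the underlying machinery is still a control system.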

    1. flora

      I think part of the issue is our framing of what AI means.

      Thank you, Rosario. And beginning even earlier than 100 years ago with Jacquard looms, from which came computer punch card technology.

      I think we can compare the claims made for AI to the time-and-motion studies of the craftsman’s production of physical goods, which led to the assembly line production of physical goods. Each step of the craftsman’s production process was broken down into discrete steps that could be automated (or replicated) with the machinery of the time to “efficiently” (market speak for ‘eliminate human decision making’, imo) reproduce the craftsman’s knowledge and skill without the craftsman.

      A lot of the claims for AI are simply claims made for a digital ‘decision’ assembly line, imo, if you will. I can’t fault this part of the analogy and claim for AI. Although I think using the word “Intelligence” in AI is a problem precisely because it is in itself not discrete and is therefore subject to multiple interpretations, no doubt to the AI salesmen’s advantage. Perhaps the word was chosen for precisely that reason.

      However, saying the auto plant assembly line is ‘intelligent’ makes no more sense than claiming the digital ‘assembly line’ is intelligent. There is intelligence behind the creation of the assembly line; the assembly line itself is merely a machine. For a more immediate image of the replacement of manpower with machine power, see the Dorothea Lange photograph “Tractored Out”; one tractor could replace 100 men with teams of horses. Tractors were not intelligent and tractors did not have ethics. No one claimed they did.
      https://www.moma.org/collection/works/56604

      1. flora

        Very much shorter: the call center people who are now required to read through a script when they answer your call can be replaced by automated, if-then-else, digital script ‘readers’. The decisions about how to parse and switch the caller’s input answers have already been made by corporate employees much farther up the chain, e.g. an extension of voice mail ‘to repeat this menu press #’ hell.
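        A toy version of such a script ‘reader’; the menu keys and canned responses are entirely hypothetical. The point is that every branch was written in advance, farther up the chain.

        ```python
        # Toy if-then-else script "reader" of the kind described above.
        # The menu keys and canned responses are hypothetical.

        SCRIPT = {
            "billing": "Your balance can be viewed in your online account.",
            "outage":  "We are aware of an outage in your area and are working on it.",
        }
        FALLBACK = "Please hold for the next available representative."

        def handle_call(caller_input: str) -> str:
            """Every branch was decided farther up the corporate chain, not here."""
            return SCRIPT.get(caller_input.strip().lower(), FALLBACK)

        print(handle_call("Billing"))                  # -> the pre-written billing line
        print(handle_call("why is my bill so high?"))  # -> fallback: no real 'decision'
        ```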

  8. greg

    Judging from this post, AI researchers don’t even seem to understand the basics of the problem: All ethical behavior requires self-imposed limits on one’s own behavior.

    Whether or not it is even possible to program this into an ‘intelligent’ machine, our corporate masters, and the future owners of AI technology, have demonstrated that they themselves are incapable of self-restraint, in either appetite or method. And so it seems to me to be the height of delusion to imagine that the wealthy and powerful would find it at all in their interests to program self-restraint into their machines.

    Indeed, not only is capitalism itself not self-limiting, but it selects against those who are, and rewards instead those who are most unrestrained in their appetites and methods. That is, the least ethical.

    1. Synoia

      All ethical behavior requires self-imposed limits on one’s own behavior.

      True.

      Where do humans learn such behavior? Parents, relatives, school and acquaintances. By example.

      Where does AI learn such behavior?

  9. H. Alexander Ivey

    Whenever I see an AI ‘guy’ talk about the ethics of AI, I long to ask them this question: “Which ethics system are you using: respect for persons or the greatest good for the greatest number?”

    All ethics come down to this decision point. Yes, we are free to choose one or the other every time we make a decision, but our decision will be based on one of these two PoV. (Extra points for those who recognize these two as being mutually exclusive.)

    H/t to Rosario for clarifying how much of AI is not A or I.
