Ethics and Artificial Intelligence

Yves here. Some design and regulatory proposals to mitigate the risks of the deployment of artificial intelligence.

By Valerie Frissen, a professor of ICT and Social Change at Erasmus School of Philosophy; Gerhard Lakemeyer, Professor of Computer Science and head of the knowledge-based systems group, Department of Computer Science 5, Aachen University of Technology; and Georgios Petropoulos, a visiting fellow at Bruegel with extensive research experience from visiting positions at the European Central Bank in Frankfurt, the Banque de France in Paris and the research department of Hewlett-Packard in Palo Alto. Originally published at Bruegel

Machine learning and artificial intelligence (AI) systems are rapidly being adopted across the economy and society. Early excitement about the benefits of these systems has begun to be tempered by concerns about the risks that they introduce.

1. Introduction

Machine learning and artificial intelligence (AI) systems are rapidly being adopted across the economy and society. These AI algorithms, many of which process fast-growing datasets, are increasingly used to deliver personalised, interactive, ‘smart’ goods and services that affect everything from how banks provide advice to how chairs and buildings are designed.

There is no doubt that AI has huge potential to facilitate and enhance a large number of human activities and that it will provide new and exciting insights into human behaviour and cognition. The further development of AI will boost the rise of new and innovative enterprises and result in promising new services and products in, for instance, transportation, health care, education and the home environment. These may transform, and even disrupt, the way public and private organisations currently work and the way our everyday social interactions take place.

Early excitement about the benefits of these systems has begun to be tempered by concerns about the risks that they introduce. Concerns that have been raised include possible lack of algorithmic fairness (leading to discriminatory decisions), potential manipulation of users, the creation of “filter bubbles”, potential lack of inclusiveness, infringement of consumer privacy, and related safety and cybersecurity risks. There are also concerns over possible abuse of dominant market position,[1] for instance if big data assets and high-performing algorithms are used to raise barriers to entry in digital markets.

It has been shown that the public – in the widest sense, thus including producers and consumers, politicians, and professionals of various stripes – does not understand how these algorithms work. For example, Facebook users have been shown to hold quite misleading ideas about how algorithms shape their newsfeeds (Eslami et al.).[2] At the same time, the public is broadly aware that algorithms shape how messages are tailored to and targeted at them – for example, in the case of news or political information, and of online shopping. Algorithms also shape the logistics of vehicles, trades in financial markets, and assessments of insurance risks.

To date, however, by far the most common and dominant implementation of algorithms has been in messages that target people directly. Thus, to build awareness among a broad public, the topic of platforms that affect everyone cannot be avoided. The two domains, shopping and news (or political information: whether some non-news dissemination counts as ‘news’ is precisely what is at issue in algorithmically disseminated ‘fake news’), are also relatively long-established.

But it is not only the public that does not understand how algorithms work. Many AI experts are themselves painfully aware that they cannot explain the way algorithms make decisions based on deep learning and neural networks. Hence there is also considerable concern among AI experts about the unknown implications of these technologies. They call for opening up this black box: from this perspective, explainability of algorithms is one of the key priorities in this field.[3]
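
To make this concrete, here is a minimal sketch (not from the article, and assuming Python with scikit-learn) of one simple explainability technique: training an opaque neural-network classifier on synthetic data and then using permutation importance to estimate how much each input feature drives its predictions.

```python
# A minimal sketch of post-hoc explanation for a "black box" model,
# assuming scikit-learn is available. Permutation importance scores how
# much each input feature contributes to the model's predictions --
# one simple ingredient of "explainable AI".
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for a real decision-making dataset.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

# A small neural network whose internal layers are opaque to inspection.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# large drops indicate features the opaque model relies on heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Techniques like this do not fully open the black box, but they give regulators and users at least a coarse, auditable account of what an algorithm's decisions depend on.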

Furthermore, the application of AI in robotics has created numerous new opportunities but also challenges. The extensive use of industrial robots in production has already been raising productivity for decades. The introduction of smart robots will only reinforce this trend and transform employment conditions in unpredictable ways.

The introduction of autonomous vehicles certainly holds the promise of smart and efficient (urban) transportation systems. However, autonomous vehicles also raise ethical issues related to the decision-making processes built into their hardware and software. A widely used example is an unavoidable accident in which the autonomous car must decide, in an instant, whether or not to sacrifice its occupants in order to protect pedestrians.

An area of immediate concern is the possible use of AI technology to develop lethal autonomous weapons. As illustrated very graphically by the video “Slaughterbots” (see autonomousweapons.org), it is conceivable today that drones equipped with AI software for navigation and face recognition could be turned into cheap lethal weapons capable of acting completely autonomously. Allowing such weapons to become reality would likely have catastrophic consequences on a global scale.

In terms of ethical challenges, AI and robotics raise unprecedented questions. Given the increasing autonomy and intelligence of these systems, we are not just talking about societal implications that merely ask for new ethical and legal frameworks. As the boundaries between human subjects and technological objects are virtually disappearing in AI, these technologies affect our fundamental understanding of human agency and moral responsibility. Who bears responsibility for AI behaviour is a complex ethical issue. What is needed is a shared or distributed responsibility between developers, engineers, industry, policymakers and users. And last but not least, we will also need to take into account the moral responsibility of the technology itself, as it develops towards increasingly autonomous behaviour and decision-making.

2. Policy response

The breakneck pace of development and diffusion of AI technologies urgently requires suitable policies and regulatory infrastructures to monitor and address the associated risks, including the concern that vast swaths of the economy and society might end up locked in to sub-optimal digital infrastructures, standards and business models. Addressing these challenges requires access to better data and evidence on the range of potential impacts, sound assessment of how serious these problems might be, and innovative thinking about the most suitable policy interventions, including anticipatory and algorithmic regulation strategies that turn big data and algorithms into tools for regulation. We need to adopt a more balanced approach that also considers ‘the human factor’ and the proper place of AI in our democratic society. And for this we need a trans-disciplinary research agenda that enables the building of knowledge on which a responsible approach towards AI can flourish.

However, the research community concerned with algorithms is diffuse. Different academic disciplines are studying these issues from a variety of perspectives: technical, social, ethical, legal, economic, and philosophical. This work is incredibly important, but the lack of a shared language and common methods makes discourse, synthesis and coordination difficult. As a result, it has become near-impossible for policymakers to process and understand this avalanche of research and thinking, to determine which algorithmic risks are already being tackled through technical measures or better business practices, and which remain relatively underserved.

‘Formal’ policy interventions and regulatory frameworks are unlikely to be enough to steer an increasingly algorithmic society in desirable directions. Corresponding changes are likely also needed in the behaviour of the day-to-day users of algorithmic services and platforms, whose choices ultimately determine the success or failure of online platforms, products and services. A better understanding of the risks and hidden costs of AI decision-making could inform their choices. This could in turn lead to the development of social norms that uphold regulation and make it more effective. Europe should take the lead in developing the codes of conduct and the regulatory and ethical frameworks that guide the AI community in developing ‘responsible AI’.[4]

3. Recommendations

  1. Adopt transparency-by-design principles over how the input data used by AI algorithms is collected and used. Algorithmic bias is often inherited from input data that does not represent the target population well and so skews decisions against specific categories of people. Transparency over how data is collected in decision-making algorithmic systems is necessary to ensure fairness (a minimal illustration of such a data audit follows this list).
  2. Invest in research on explainable AI, to increase the transparency of algorithmic systems. Many AI systems are based on deep-learning techniques in which the intermediate layers between the input data and the algorithmic output are effectively a “black box”. Explainable AI can substantially contribute to understanding how these automated systems work.
  3. Integrate technology assessment (TA) into AI research. Prospective policy research such as TA helps to surface the potential societal and ethical impacts of AI at an early stage of development rather than after the fact, creating both awareness of unintended consequences within the AI community and agility among policymakers.
  4. Increase public awareness. As AI algorithms penetrate more and more of our lives, we should be well informed about their usefulness and potential risks. Educational and training programmes can be designed for this purpose. In this way, individuals will not only be aware of the dangers but will also maximise the value they get from using such systems. In addition, public discussions at a local level on the implications of AI systems should be organised.
  5. Develop regulatory and ethical frameworks for distributed responsibility. These frameworks should include clear standards and recommendations on liability rules that protect both users and manufacturers through efficient and fair risk-sharing mechanisms.
  6. Develop a consistent code of ethics in the EU and at the international level, based on shared European values, that can guide AI developers, companies, public authorities, NGOs and users. Authorities, big professional organisations (e.g. the Partnership on AI) and NGOs should work together closely and systematically to develop a harmonised code of ethical rules that AI systems will satisfy by design.
  7. Experimentation. As with pharmaceutical companies’ clinical trials of new medicines, AI systems should be repeatedly tested and evaluated in well-monitored settings before their introduction to the market. Such experiments should clearly demonstrate that the interaction between individuals and AI systems (e.g. robots) satisfies standards of human safety and privacy. They should also provide a clear message on how the design of AI systems should be modified in order to satisfy these principles.
  8. Ban lethal autonomous weapons. Europe should be at the forefront of banning the development of lethal autonomous weapons, which includes supporting the respective initiatives at the United Nations.
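
As a purely illustrative complement to recommendation 1, the sketch below (assuming Python with pandas and a hypothetical credit-approval dataset; none of the names come from the article) shows the kind of input-data audit that transparency-by-design implies: documenting how well each group is represented in the training data and how outcomes differ across groups, so that skew is visible before a model is deployed.

```python
# A minimal, assumed sketch of the input-data audit recommendation 1 points to:
# before training, document how well each demographic group is represented
# and how outcomes are distributed across groups, so that skewed data is
# visible rather than silently inherited by the algorithm.
import pandas as pd

# Hypothetical training data for a credit-scoring model.
data = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "A", "A", "B", "A"],
    "approved": [  1,   1,   0,   1,   0,   0,   1,   1,   0,   1],
})

# Share of each group in the input data (under-representation check).
representation = data["group"].value_counts(normalize=True)

# Outcome rate per group (a first, coarse fairness indicator).
approval_rate = data.groupby("group")["approved"].mean()

audit = pd.DataFrame({"share_of_data": representation,
                      "approval_rate": approval_rate})
print(audit)  # publish alongside the model as part of its documentation
```

Publishing such an audit alongside a decision-making system is one low-cost way to make the data-collection step inspectable by regulators and users.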

Note: the authors have participated in the CAF / DG Connect Advisory Forum.

Footnotes:

[1] See Ariel Ezrachi and Maurice E. Stucke (2016), Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy, Harvard University Press.

[2] Eslami, M., Karahalios, K., Sandvig, C., Vaccaro, K., Rickman, A., Hamilton, K. and Kirlik, A. (2016), “First I ‘like’ it, then I hide it: Folk Theories of Social Feeds”, Human Factors in Computing Systems Conference (CHI). See also Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K. and Sandvig, C. (2015), “‘I always assumed that I wasn’t really that close to [her]:’ Reasoning about invisible algorithms in the news feed”, Proceedings of the 33rd Annual SIGCHI Conference on Human Factors in Computing Systems, Association for Computing Machinery (ACM), 153-162.

[3] See for instance: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

[4] See for instance V. Dignum (2017), Responsible Artificial Intelligence: Designing AI for Human Values. In: ITU Journal: ICT Discoveries, Special Issue, No 1. Sept 2017.


24 comments

  1. Disturbed Voter

    I brought up this topic as an audience member, at an AI conference in Denver in 1984. What about legal liability? That certainly is where ethics hits the pavement, rather than a nice to have. I got crickets.

    The dark side of autonomous systems, including vehicles … is a no fault policy. If the programmers don’t know why an algorithm does what it does (see cancelled Google experiment of two AI “talking” to each other) … then how can they, or their employer, be found liable?

    I am not saying that AIs will acquire status as persons (as have corporations), but notice how, after corporations acquired the status of persons, management is held even less liable than before (in a structure already designed to limit executive liability).

    AIs in action can’t be deductively examined, since they are open dynamic systems, and since they aren’t persons (no matter the legal fiction), they cannot be cross-examined in court. They add a further layer of liability protection to corporations who don’t need it.

    Let me propose that corporations be made to follow the Three Laws of Robotics before their machinery is. The results of a consequence-free society are obvious. No matter the sophistication of “the dog ate my homework” excuses.

    1. tegnost

      I’m not in that field of endeavor, and while I’m pretty luddite overall, right down to the flip phone and no fb (although I’m corralled in the google communication arena), I had not processed the agency-of-AI issue until today. ISTM that one of the reasons, along with “we won’t have to pay truck drivers anymore”, is a way out for designers of AI, who will likely claim that they only have agency for their own person, not for the AI created in their imperfect image, leading, as Disturbed Voter highlights, to a further layer of liability protection, and that will have a decidedly dystopian effect on society, such as it is. Also, this is a European viewpoint; I doubt that Silicon Valley has any reservations about the road ahead.

    2. JTMcPhee

      There’s this thing in the law called “strict liability.” Basically, a judgment (by humans, a “social policy”) that anyone, a real person or a corporate “legal entity” fictitious person (and the people who operate it), who engages in a set of activities that lead to harm is “liable” for the remedy and reimbursement of those injuries, without regard to their intent. Applies in both criminal and civil situations:

      Absolute legal responsibility for an injury that can be imposed on the wrongdoer without proof of carelessness or fault.

      Strict liability, sometimes called absolute liability, is the legal responsibility for damages, or injury, even if the person found strictly liable was not at fault or negligent. Strict liability has been applied to certain activities in tort, such as holding an employer absolutely liable for the torts of her employees, but today it is most commonly associated with defectively manufactured products. In addition, for reasons of public policy, certain activities may be conducted only if the person conducting them is willing to insure others against the harm that results from the risks the activities create.

      Of course this is a matter of post-injury application of some “law,” those binding and enforceable and actually enforced strictures and principles that have to be enacted by a legislature or crafted from the common law by some daring court. And of course, the legislatures and enforcement agencies, and the courts, have largely become happily captured by the people with the wealth to engage in these risky and deadly and “innovative and disruptive” activities. The sh!tes who are driving us over the cliff of Terminator-Matrix-style mechanization, and their lobbyists, are busily insulating themselves from any liability for harms to the mope “world as it was” before these wonderful potentially potential “benefits” and “novelties” and “really cool stuff” got “invented.” Insulating, by buying off the lawmakers, completing the infiltration of the executive-enforcement parts, and filling the courts with “business-friendly” judges.

      This article made me gag. I see no “ethics” in the piece anywhere. Just paeans to the “potential” for “awesome innovative disruption,” and wide-eyed innocence toward the “marvels” of infecting the planet with these technologies, for which it has become a proud banner claim that “we don’t understand them, they are so smart, smarter than we will ever be, and will see all these relationships in Big Data that can be exploited and might generate Alpha and Beta (and more likely Omega).”

      Asimov’s postulated Three Laws, those nominal “coding principles” to be crafted into the fundamental structures of his postulated “positronic robot brain,” were for a very different set of fictional cases. There’s a ton of critique of his “laws,” which like all “laws” will always fall behind the rushing chaotic edge of human-initiated “invention.” Here’s one critique, outdated in its own way: https://gizmodo.com/5260042/asimovs-laws-of-robotics-are-total-bs

      There are actions that simply should never be taken, if one has any interest in the survival of the species in any kind of sense and structure that we seem to be adapted for. And the “innovators” and “disrupters” who are saying that “we,” or of course just some or most of us, will “adapt” (or “fail to adapt, and die”) are arrogating a vast claim of right and power to themselves. Unfortunately, a claim that because of the current systems and structures that humans have stupidly created, they may be able to make manifest.

      Any proof that there are any “ethical constraints” on this vector of technosuicide, at all? I read about multiple conferences where people point out the vast potential downsides and planetary dead ends of what looks like a manifestation of the Fermi Paradox. And the response? Kind of like what comes out of sessions like COP24, or in the case of the town where I live, a loudly praised “ban on single use plastics” that extends only to concessionaires selling junk food to mopes at city parks and other municipal properties, and even those little “initiatives” don’t start to be applied until “supplies on hand” are exhausted, and not until 2020 in any event. Giving the Chamber of Commerce Horrors plenty of time to moot out even this little bit of vanity. Not even close to any kind of fix, and just so typical of the New Slogan: “Cupidité, Stupidité, Futilité!”

      My guess is that our species will not even get the chance to undertake Frank Herbert’s posited “Butlerian Jihad,” smashing the machines (Luddites! cry the suicidal tech lovers!) that think like humans…

      But hey, what do I know? Destiny Calls!

      1. Synoia

        There is one assumption about much of the AI future: there will be electricity, and fuel, enough to power them.

    3. JTMcPhee

      Maybe (unlikely, of course) these activities should be subject to “strict liability?”

      Absolute legal responsibility for an injury that can be imposed on the wrongdoer without proof of carelessness or fault.
      Strict liability, sometimes called absolute liability, is the legal responsibility for damages, or injury, even if the person found strictly liable was not at fault or negligent. Strict liability has been applied to certain activities in tort, such as holding an employer absolutely liable for the torts of her employees, but today it is most commonly associated with defectively manufactured products. In addition, for reasons of public policy, certain activities may be conducted only if the person conducting them is willing to insure others against the harm that results from the risks the activities create.

  2. Stephen V.

    Here is another recently published paper:
    ABSTRACT: (Expected) adverse effects of the ‘ICT Revolution’ on work and opportunities for individuals to use and develop their capacities give a new impetus to the debate on the societal implications of technology and raise questions regarding the ‘responsibility’ of research and innovation (RRI) and the possibility of achieving ‘inclusive and sustainable society’. However, missing in this debate is an examination of a possible conflict between the quest for ‘inclusive and sustainable society’ and conventional economic principles guiding capital allocation (including the funding of research and innovation). We propose that such conflict can be resolved by re-examining the nature and purpose of capital, and by recognising mainstream economics’ utilitarian foundations as an unduly restrictive subset of a wider Aristotelian understanding of choice.

  3. JTMcPhee

    Looking at the article and the writings on the subject, dare one observe that ‘resistance is likely futile?’ Without gaining the benefits of being incorporated into the Star Trek version of the Borg?

  4. Synoia

    There is a conceptually simple solution:

    Any AI decision taken or recommended has to have a public documented accompanying rationale.

    i.e., a log of its decision process.

    No exceptions.

    I also like the strict liability explained above. Just as people are liable for their dogs’ actions, so is the owner of the AI.

  5. Brooklin Bridge

    Will our species live long enough to be killed off by AI, or will GW save the day and get us first?

  6. Bill Smith

    Is there any agreement on what AI actually is? What are the overlaps in AI, machine learning, deep learning, expert systems? What about blended systems of the above?

    Anyone with an AWS development account look at what is offered there these days for free?

    How can something be regulated when, before long, anyone who knows something about software will be able to create it?

    1. JTMcPhee

      You know the answer to that. The plagues are out of the unfortunate Pandora’s little container.

      Maybe us older folks will succumb to natural demise, before the species hits the Jackpot, hey?

  7. Brooklin Bridge

    AWS and AI? (an intro): https://aws.amazon.com/machine-learning/

    Regulations? In theory, Amazon has some responsibility for what goes into what it calls its “ecosystem,” but I can’t see them getting in any trouble that vast sums of money -or at least the lure of it- can’t get them out of; sums that are basically rounding errors to them.

    I suppose businesses/developers that use their AI SDKs will bear a good deal of the responsibility, if not all of it, for what they do with it. As will Amazon for what they do with it, but legal liability will likely cascade down to the end user as much as big money can buy and a corrupt judicial system can get away with.

    It’s not a particularly healthy time for AI to be blossoming.

    1. JTMcPhee

      Not a mention in all of this of that silly “precautionary principle,” is there? “Inconceivable!”

      And I’d challenge the use of “blossoming” to describe the advent of this possibly inevitable Fermi Paradox thing. Lots of other less flowery terms to characterize it. Small point, no criticism of you. We are all at sea in an unimaginable storm.

      1. Brooklin Bridge

        No offense taken. Blossoming can be a pretty grisly, unsentimental affair, for instance if you’re a fly and the blossoming flower is a fly catcher. Nature’s opulence can be lethal at the same time as frilly.

        That said, I would be inclined to celebrate technology in general and AI in particular if the human environment in which it was created were enlightened and humane rather than outright dystopian. In either case, blossoming seems strangely appropriate even if radically different depending on context (as in The Picture of Dorian Gray).

  8. Brooklin Bridge

    A significant problem with AI creeping into all aspects of our daily lives is likely going to be a general atrophy of human cognitive ability as we lose the necessity of solving so many every-day kinds of problems and issues – at least in a large part of society. It may be argued that not having to think about or solve any sort of basic problem will only free people up to devote their energy towards more complex ones, but except for a small segment of society that gets highly expensive and extensive training, that will probably remain just the usual bs intended to snooker the rubes. A better general pattern for increasing the odds of our extinction could hardly be thought up on purpose.

    Interesting how science fiction has covered so much of this, and I’m also amazed at how cavalierly I used to toss off those issues as never-in-my-lifetime back in my sci-fi reading days.

    1. el_tel

      Interesting that you raise these points and how sci fi has covered them years ago. I have been suggesting on reddit that the time-travelling show ‘Travelers’ is following Asimov’s ‘The End of Eternity’ in its broad plotting. One other contributor who is clearly also old enough to have gobbled up real books on the subject in their youth, said they’d have posted the same thing, had I not done so.

      TV is currently obsessed with the idea of time travel to solve our current crises and, although Travelers is novel in having a future AI “fixing the 21st century issues of climate change etc” in order that humanity does not die out, I (and this other commenter) raised the Asimovian hypothesis that humans must solve their own problems, else risk extinction by apathy as the “time-spanning quantum AI” solves (supposedly) all our problems.

    2. Kathleen Mikulka

      The push for “personalized” AI education is already happening. Besides not being personalized, this computer education is dumbed down. The hedge funds pushing it foresee AI replacing teachers in your children’s schools. Taxpayers will buy it because they will say it will save $$. Only the Gateses’ and Zuckerbergs’ kids will have real teachers and no tech in their schools as they educate the next generation of leaders. – Scared Teacher

  9. TG

    I find it curious that so many people forget that we already have lethal autonomous weapons in service, and they have been in use for over a century.

    They are called land mines (OK also water mines).

    They may mostly be simple-looking, but hey, they autonomously input data and make a decision and take potentially lethal action. And many of these mines have been very sophisticated computers for decades.

    And really the debate is therefore nothing new. The entire campaign against landmines is really the same thing.

    1. W

      Here’s a fun ‘ethics of autonomous killing machines’ point: landmines are commonly designed to maim, not kill, as this imposes a greater burden on the military and society of the injured person.

  10. Winslow R.

    The ethical and moral spheres, in theory, are the domain of the church.

    Perhaps funny, but the church should be funding/leading the development of a Jiminy Cricket (JC) AI that would compete with the commercial and government-funded AIs for our attention.

    Christian, Muslim, Buddhist, Hindu, Humanist, Scientific….. AIs, let’s get them all online.

    1. Brooklin Bridge

      Oops, (AI or GW) as J.K. Galbraith observed re. the difference between Communism and Capitalism (in Capitalism man exploits man), I had it the other way around.

Comments are closed.