
What Governments Need to Understand About Ethical AI

September 18, 2018 • TECHNOLOGY, Artificial Intelligence, Emerging Ideas

By Josh Entsminger, Dr. Mark Esposito, Dr. Terence Tse, and Danny Goh

The increasing application of artificial intelligence across the value chain reflects the competitive advantages the technology offers enterprises. Yet its meteoric rise also paves the way for greater ethical risks, which means more effective governance must be put in place. Here are a few propositions governments can consider when assessing the scope of the problems associated with AI.

 

Primum non nocere. First, do no harm. So goes the modern version of the Hippocratic Oath, taken by doctors who know that, more than likely, they will at some point be involved in a patient’s death. That involvement may stem from a mistaken diagnosis, exhaustion, or a variety of other influences, leading to a natural concern about how many of these mistakes could be avoided.1 AI is taking up the challenge, and shows promise, but just as with doctors, if you give AI the power of decision-making along with the power of analysis, it too will more than likely be involved in a patient’s death. When it is, whose responsibility is that? The doctor’s? The hospital’s? The engineer’s? The firm’s?

Answers to such questions depend on how governance is arranged – whether a doctor stands at the end of each AI-provided analysis, checking whether it is correct; whether the decision-making path of each AI-driven diagnosis can be followed. It is paramount to remember that current attempts to automate and reproduce intelligence are not deterministic; they are probabilistic, and therefore subject to the issues and experiential biases that plague all other kinds of intelligence. The same issues arise for self-driving cars, autonomous drones, and the host of intentional and incidental ways AI will be involved in life-or-death scenarios, as well as the more day-to-day risks people face. As machine-to-machine data grows in the Internet of Things, companies with preferential access will gain ever deeper insight into ever more minute aspects of behavioural patterns we ourselves might not understand – and with that comes a powerful ability to nudge behaviour or, more worryingly, to limit choices.

We can begin to see the larger picture, but governance is in the details. The risks of 99% accuracy in a hospital, in image recognition for food safety, and in text analysis for legal documents will not be the same – as such, policymakers will need more nuanced accounts of what is involved in each case. The details of AI’s use cases will likewise differ across practices: the oversight, standards, and frameworks needed to make AI accountable in healthcare may require different conditions than AI in education, in finance, in telecom, in energy, and so on.
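
To see why the same headline accuracy carries such different risks in different settings, it helps to work the arithmetic. The sketch below is illustrative only – the prevalence figures are hypothetical, not drawn from this article – but it shows how a screening tool that is 99% accurate in both directions still produces mostly false alarms when the condition it looks for is rare.

```python
# A minimal sketch (hypothetical numbers): the same 99%-accurate screen
# yields very different error profiles depending on how rare the target is.

def positive_predictive_value(prevalence: float,
                              sensitivity: float = 0.99,
                              specificity: float = 0.99) -> float:
    """Probability that a positive flag is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A rare disease (1 in 1,000) versus a common food-safety defect (1 in 10).
for prevalence in (0.001, 0.10):
    ppv = positive_predictive_value(prevalence)
    print(f"prevalence {prevalence:.3f}: {ppv:.1%} of positive flags are real")

# prevalence 0.001: 9.0% of positive flags are real
# prevalence 0.100: 91.7% of positive flags are real
```

A single accuracy threshold therefore cannot serve as a universal standard; what counts as acceptable depends on base rates and on what a false positive costs in each domain.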

 

From Tech Literacy to Tech Fluency

Effective governance of AI means the burden of adjustment falls, if unequally, on all parties – on governments, on firms, on users, and on non-users. Ethical governance takes it further. New technology means new risks, so firms, governments, and users have to be literate enough about the technology to understand the new set of risks and responsibilities that come with it. Understanding those risks is not straightforward – not for users, not for governments, and not even for the firms deploying the technology. Consider an AI employed to assess the risk of a heart attack by detecting variations in eating habits and other trends identified as important to making an effective prediction;2 or, more simply, one that assesses risk by scanning your eye. Consider a service leveraging voice analysis to identify PTSD.3


The more individual the profile, the better the prediction. Consider a school that replaces teachers as test monitors with an AI system to detect cheating (or one that students leverage to cheat better),4 or that uses trends in homework grades and class attendance to identify the probability that a student will drop out.5 Such cases place a clear burden on the designer and the firm – but the usefulness of any of these predictions depends on understanding how the decision was reached and what triggered it. Someone informed of an increased risk of heart attack without having, seemingly, changed any behaviour is left confused; a cheating detector that cannot reliably distinguish stretching from looking at another paper creates further problems. These, however, are technical issues with technical solutions – the real problem arises when users have to change their behaviour to accommodate the technology’s limitations.
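
The demand that a prediction come with an intelligible trigger is, in its simplest form, an argument for models whose reasoning can be inspected. The sketch below is a hypothetical illustration, not the authors’ method: it uses scikit-learn’s LogisticRegression on invented attendance and grade data, so that the model’s coefficients at least indicate which factors pushed a student’s dropout-risk score up.

```python
# A hypothetical sketch: an inspectable dropout-risk model whose coefficients
# show which factors drove the prediction. Data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented training data: [attendance_rate, homework_avg] per student.
X = rng.uniform(low=[0.4, 40.0], high=[1.0, 100.0], size=(200, 2))
# Toy labelling rule: low attendance plus low grades means dropout risk.
y = ((X[:, 0] < 0.7) & (X[:, 1] < 65.0)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

student = np.array([[0.55, 58.0]])          # 55% attendance, 58 average
risk = model.predict_proba(student)[0, 1]   # estimated P(dropout)
print(f"predicted dropout risk: {risk:.0%}")

# The 'explanation': each feature's contribution to the log-odds.
for name, coef, value in zip(["attendance_rate", "homework_avg"],
                             model.coef_[0], student[0]):
    print(f"{name}: weight {coef:+.2f}, contribution {coef * value:+.2f}")
```

A linear model is only one way to make reasoning legible, but the broader point stands: whoever is accountable for the system must be able to say which factors triggered a flag, or users will be left adjusting their behaviour to an opaque machine.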




About the Authors

Josh Entsminger is an applied researcher at Nexus Frontier Tech. He also serves as a senior fellow at Ecole des Ponts Business School’s Center for Policy and Competitiveness, a research associate at IE Business School’s social innovation initiative, and a research contributor to the World Economic Forum’s Future of Production initiative.

 

Dr. Mark Esposito is a socio-economic strategist and bestselling author researching megatrends, business model innovations, and competitiveness. He works at the interface between business, technology, and government, and co-founded Nexus FrontierTech, an artificial intelligence studio. He is Professor of Business and Economics at Hult International Business School and has been a member of the faculty at Harvard University since 2011. Mark is affiliated faculty of the Microeconomics of Competitiveness (MoC) network at Harvard Business School’s Institute for Strategy and Competitiveness and is currently co-leader of the network’s Institutes Council.

Dr. Terence Tse is an Associate Professor at ESCP Europe’s London campus and a Research Fellow at the Judge Business School in the UK. He is also head of Competitiveness Studies at the i7 Institute for Innovation and Competitiveness. Terence has worked as a consultant for Ernst & Young and served as an independent consultant to a number of companies. He has published extensively on various topics of interest in academic publications and newspapers around the world, and has been interviewed by television channels including CCTV, Channel 2 of Greece, France 24, and NHK.

Danny Goh is a serial entrepreneur and an early-stage investor. He is a partner and the Commercial Director of Nexus Frontier Tech, an AI advisory business with a presence in London, Geneva, Boston, and Tokyo that assists CEOs and board members of different organisations in building innovative businesses that take full advantage of artificial intelligence technology.

 

 
