What Every Manager Should Know About Human-Centered AI

A Manager’s Introduction to Human-Centered Artificial Intelligence

By Mark Esposito, Terence Tse, Aurélie Jean and Josh Entsminger

As AI advances, there is a need for better frameworks to understand how to create value from changing human-AI relationships. This article offers a practical guide for managers and executives looking to leverage human-centered approaches to AI.

 

“I think the problem was that our systems designed to recognise and correct human error failed us.”1 This was the explanation given in 1999 by Carl Pilcher, science director for solar system exploration at NASA’s Jet Propulsion Laboratory, after a failed measurement conversion sent the $125-million Mars Climate Orbiter slamming into the Martian surface. A similar problem surfaced 14 years later, when a calculation error in the design of a new Spanish submarine2 produced a fatal flaw: the vessel could submerge but could not resurface, an error that delayed a $2.2-billion investment by years. Each of these failures was a simple issue of calculation and translation, ultimately a matter of mistaken decimal points and unit conversions. So, what if a system could automatically catch all such errors?
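To make the idea concrete, here is a minimal sketch, not drawn from the article, of how software can catch a unit mismatch automatically. The Quantity type and add function are hypothetical illustrations: by tagging every number with its unit, the kind of silent metric/imperial mix-up behind the Mars Climate Orbiter loss becomes a loud, immediate error instead of a latent one.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Quantity:
    """A number tagged with its unit of measure (hypothetical illustration)."""
    value: float
    unit: str


def add(a: Quantity, b: Quantity) -> Quantity:
    # The check the legacy software skipped: refuse to combine numbers
    # whose units do not match, instead of silently adding raw floats.
    if a.unit != b.unit:
        raise ValueError(f"Unit mismatch: {a.unit} vs {b.unit}")
    return Quantity(a.value + b.value, a.unit)


# Ground software reported thruster impulse in pound-force seconds, while
# the flight system expected newton-seconds. With tagged quantities the
# mix-up is caught at the first calculation rather than discovered at Mars.
reported = Quantity(15.0, "lbf*s")
expected = Quantity(15.0, "N*s")

try:
    add(reported, expected)
except ValueError as err:
    print(err)  # -> Unit mismatch: lbf*s vs N*s
```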

Successful AI projects are less about the tech than the institutions and practices to which they are connected.

Naturally, this possibility has already been explored, with novel artificial intelligence (AI) systems being deployed across sectors to reduce rates of human error. Across such projects a common theme has emerged: AI systems may be good at correcting for problems they have been told to look for, but they are not yet very good at independently identifying what counts as a problem in the first place. The challenge of using AI goes further still, as the task is not simply to correct a mistake but to recognise that there was a mistake to be corrected at all.

This identification problem has been joined by a host of similar issues creating difficulties for firms trying to build, buy, deploy, and change artificial intelligence solutions. Fundamental to all of these problems is a lack of common principles and guidelines shaping how organisations understand the value of AI and what people want from it. As the adoption and development of AI progresses, the latter question of what value AI should provide is occupying a larger space in public attention, since AI often triggers automation anxiety: the fear that it will replace people’s competitive advantage across tasks and jobs rather than accentuate a person’s advantages, whether by helping humans catch errors or by replacing humans where they produce errors.

As AI advances, managers and executives need better frameworks to understand how to create value from changing human-AI relationships. Human-Centered AI is emerging as just such a framework, helping firms orient themselves amid rapidly changing technology, better balance ethical principles with successful use, and better identify what counts as an ethical and successful use of AI in the first place.

 

WHY AI IS NEVER JUST AI

To understand how to leverage human-centered AI, managers first need to understand changes facing AI management. This begins with three fundamental points:

First, that the AI technology itself is far more than the algorithm; rather, AI is an umbrella term for a class of systems integrating the talent capable of creating and augmenting AI solutions, the AI algorithm designs themselves, valuable data sets and data management strategies, data-capturing devices, and the computational power to train and run the solutions. Each of these elements has its own development path, and the unique combination of these elements is what generates a competitive solution.

Second, that successful AI projects are less about the tech than the institutions and practices to which they are connected. This can be split between the organisation deploying the solution and the expected end users. On the side of the organisation comes the need for buy-in from existing staff, training on how to use the solution, and an effective feedback design; on the side of the user comes a host of additional dimensions relating to the user-experience and interface design of any given system.

Third, that AI systems evolve, and so do the institutions and practices around them once AI is in use. The question for managers is therefore not the next 6 months but the next 5 years of expected evolution across these elements and institutions: the ability to ‘look around the corner’ at what’s next in value creation. A lack of clear understanding of these development paths, not only within the firm but across the larger market for the institutions and elements of a successful AI system, can lead to delayed development, unsuccessful implementation, and reduced effectiveness at scale.

AI is never just AI. Each of these points carries an inherent warning for firms looking to leverage AI to build advantages: attempting to acquire a solution without the requisite talent to manage and implement it can lead to failure. Each should also cement the understanding that, as a system, individual elements can advance at different rates. A firm can acquire data sets without the talent to exploit them; new algorithms can emerge without appropriate data to train on; new data-capture systems can advance without appropriate data management to support them.

But these backend layers have considerable influence on how people understand their relationship with AI and the opportunities it affords. AI is a device for augmenting cognition: it reshapes what a person needs to think about for a task. Consider the calculator as a service for storing and organising mathematical information; its function is to replace the need for humans to perform such calculations mentally, deferring them to the technology instead.

Mars Climate Orbiter undergoing acoustic testing that simulates launch conditions. Source: http://grin.hq.nasa.gov


About the Authors

Mark Esposito, Ph.D is Co-founder of Nexus FrontierTech, a leading global firm providing AI solutions to a variety of clients across industries, sectors, and regions. In 2016 he was listed on the Radar of Thinkers50 as one of the 30 most prominent business thinkers on the rise globally. Mark has worked as Professor of Business & Economics at Hult International Business School and at the Thunderbird School of Global Management at Arizona State University, and has been on the faculty of Harvard University since 2011. Mark is the co-author of the bestsellers “Understanding How the Future Unfolds: Using DRIVE to Harness the Power of Today’s Megatrends” and “The AI Republic”, which have received global acclaim.

Terence Tse, Ph.D is a co-founder and Executive Director of Nexus FrontierTech, a London-based tech company specialising in the development and integration of AI solutions that help organisations save time, money and resources by tackling process inefficiencies and data waste. He is also a Professor at the London campus of ESCP Europe Business School. Terence’s most recent book is “The AI Republic: Building the Nexus Between Humans and Intelligent Automation”. He has written more than 110 articles and regularly provides commentary in many outlets on current affairs, including technology-driven transformations, the future of work and education, artificial intelligence, and blockchain, and speaks on these subjects around the world.

Aurélie Jean, Ph.D has been working for more than 10 years in computational sciences applied to engineering, medicine, education, finance, and journalism. Aurélie worked at MIT and Bloomberg. Today, Aurélie lives and works between the USA and France, where she runs In Silico Veritas, an agency in analytics and computer simulations. Aurélie is an advisor at BCG and Altermind, a mentor at the FDL at NASA, and an external collaborator for the Ministry of Education of France. Aurélie is also a science editorial contributor for Le Point and Elle International, teaches algorithms at universities, and conducts research on predictive algorithms.

Josh Entsminger is an applied researcher in technology and politics. He currently serves as a fellow at the PublicTech lab, senior fellow at Ecole des Ponts Business School, and fellow at Nexus FrontierTech. He is currently a doctoral candidate in public sector AI at UCL’s Institute for Innovation and Public Purpose.

References

1. https://www.latimes.com/archives/la-xpm-1999-oct-01-mn-17288-story.html
2. https://o.canada.com/news/spain-builds-submarine-70-tons-too-heavy
3. https://www.wired.com/brandlab/2018/05/ai-needs-human-centered-design

