By David De Cremer, Devesh Narayanan, Andreas Deppeler, Mahak Nagpal, Jack McGuire and Jess Zhang, Centre on AI Technology for Humankind at NUS Business School
Intelligent technologies are dramatically transforming modern societies, and their potential economic and social benefits seem unprecedented. These technologies are therefore increasingly involved in a variety of decision-making contexts, thereby influencing a wide range of outcomes that matter to human end-users. This reality means that intelligent technologies must be used for good, so that they do not endanger the stability of our social fabric or the sanctity of our human autonomy. Yet despite growing awareness that a human-centred approach to the adoption and employment of intelligent technologies is needed, concerns about the potential risks and harms these technologies can bring to humanity continue to mount. This observation makes clear that an obsessive search for technological solutions to optimize efficiency and maximize productivity will prioritize investments in innovations that primarily serve the interests of those designing and distributing intelligent technologies. Indeed, Big Tech companies employ and advocate a specific narrative (sometimes referred to as the Silicon Valley mindset) in which technology is presented as capable of solving most problems we encounter in society and business.
At the Centre on AI Technology for Humankind (AiTH; https://bschool.nus.edu.sg/aith/), we have been discussing and examining this state of discourse on AI ethics and trustworthiness, the unquestioned dominance of Big Tech, and the deficiencies of techno-solutionist and machine-centred approaches to AI for some time now. These efforts have inspired us to write a manifesto: a public declaration of AiTH’s thinking on what is needed (and why) to employ a distinctive and legitimate human-centred approach to the adoption and integration of intelligent technologies in our businesses and society.
Below, we summarize what our manifesto aims to achieve and outline what a “humanity first” approach implies for the further development of intelligent technologies. A longer version of the manifesto has been published in the journal AI and Ethics (https://www.springer.com/journal/43681) and was launched at an AiTH event on 8 December 2021, where we discussed the importance of this manifesto in charting the road to a human-centred digital society.
What is the fuss about?
Not too long ago, we imagined a society in which technical innovation would improve labour productivity and overall wealth to such an extent that working three hours a day would be “quite enough” (see the work of John Maynard Keynes). Today, however, people are working more than ever, salaries in many professions and regions have stagnated in real terms since the 1980s, and pension funds are under threat everywhere, forcing people to work longer than before. In fact, people today feel so hijacked by their jobs and the digital demands imposed on them (a trend accelerated by Covid-19) that many are simply walking away from their jobs, or intend to. This is a worrying trend.
Instead of working less and enjoying life more, people today experience more uncertainty than ever and face big challenges that threaten their welfare, careers, and futures. Inspired by the idea of “techno-solutionism”, Big Tech companies feed into these concerns and propose solutions premised on the assumption that most societal problems and challenges can be ‘solved’ if one has the right technology. Rather than focusing on creating a humane society and emphasizing the value of human abilities, we seem to have become conditioned to define solutions by optimizing and modifying the properties of machine-learning algorithms to correct and direct society. As such, intelligent technologies, and the companies that design and develop them, have acquired a position of power in our societies that goes unchallenged and contributes to the creation of a world that may ultimately be more suited to machines than to humans.
At AiTH, we feel that the widespread obsession with intelligent technologies as the primary way to promote prosperity and efficiency could lead to a ‘tech crisis’ built upon and reinforced by the various social anxieties and fears about intelligent technologies that we see today. Although many crises seem unique, their underlying causes are often quite similar, and we see several parallels between the ‘tech crisis’ that AiTH predicts and the global financial crisis of 2007-2008. Specifically, we observe the following similarities:
- Contemporary technoscientific thinking promotes calculative and hyper-competitive thinking and a ‘ticking-the-box’ mentality that reduces everything (including humans; cf. people analytics) to measurable and predictable data points. This mindset was also dominant in the run-up to the global financial crisis. For example, mortgage origination and trading desks at major investment banks relied on quantitative models and ever more complex financial engineering to manage the risks of their holdings, which clouded people’s judgments and prevented them from recognizing the looming dangers until it was too late.
- In the wake of the global financial crisis, banks were seen as “too big to fail” because they owned and ran most of the financial infrastructure essential to the functioning of the globalized economy. Similar beliefs are held about tech companies today. A handful of tech companies provide and maintain the digital cloud infrastructure that enables much of the world’s private and public sector activities, a reality that makes them too essential to the modern digital economy to fail.
- The belief in technocratic solutions is part of a broader ideology of instrumental rationality. We saw it in the early 2000s when banks relied on oversimplified models to manage complex structured financial products. We have seen it since the crash when central bankers devised ever more creative ways to engage in what essentially amounts to printing money. Today we see it in pronouncements of technology firms about ethical and responsible artificial intelligence. It is the unquestioning belief in the inevitability of technical progress, along with the assumption that any potential threats or harms arising from such never-ending progress can be “managed” or “mitigated” with ever more technical solutions.
What do we fear?
We observe that the magical thinking surrounding intelligent technologies has caused many businesspeople to worry about finding a place for humans in a world run by computers, rather than the other way around. Such thinking, and the fear of humans being “left behind”, threaten to fragment our social fabric. Perceived divides – between organizations that are “AI leaders” and those that are “AI laggards”, between those whose jobs will be “disrupted” and those whose jobs are “safe”, and between “technophiles” and “Luddites”, to name a few – can produce widespread social anxiety and dissatisfaction among those who feel left out of the technological future we seem to be hurtling towards.
What do we propose?
AiTH is deeply concerned about these seemingly “machine-centred” approaches to the design and deployment of AI. In the reductionist perspective taken by such approaches, setting the right incentives and rewards is seen as sufficient to generate “optimal” behaviours and decisions. In contrast, we advocate a human-centred approach to developing and deploying intelligent technologies. Such an approach fully embraces the complexities and grey zones of human judgments and intuition.
To this end, at AiTH, we started by developing our own definition of Human-Centred AI (HCAI). We argue that HCAI focuses on designing and deploying AI systems in ways that serve the needs of, and create benefits for, humans. In line with this purpose, we recognize that HCAI must contribute to and empower the human experience of competence, belonging, control and well-being.
Competence:
HCAI augments and enriches human capabilities and performance across all domains in life, rather than automating away the skills and attributes that make us human.
Belonging:
HCAI designs AI systems with the understanding that intelligent technologies are fully embedded in society. Such systems can therefore be expected to act in line with the norms and values of a humane society, including fairness, justice, ethics, responsibility and trustworthiness.
Control:
HCAI preserves human agency and sense of responsibility by designing AI systems to give users a high level of understanding of, and control over, their specific and unique processes and outputs.
Well-being:
HCAI advances the self-esteem, confidence and happiness of all humans. The design and deployment of such AI systems must be mindful of the varied dimensions of life that they stand to impact, as well as their long-term effects on overall well-being.
Building on this definition, we derive seven recommendations for how businesses should approach and employ intelligent technologies as part of their ongoing digital transformation efforts:
- Humans first, machines second: The capabilities of intelligent technologies for thought and action should not serve as the standard by which humans are assessed and compared. Considerations about the well-being and flourishing of humans must always be central to any technology deployment.
- ‘Digital transformation’ and the adoption of intelligent technologies should be value-driven rather than solely profit-driven: We can use machines for good if we are clear about what our human identity is and what value we want to create for a humane society. A clear understanding of how to do business and what kind of value ought to be created for end-users can serve as a lens for evaluating the appropriateness and necessity of technological interventions.
- Human and machine intelligences should not be treated as interchangeable: Automation should not be thought of in terms of its potential to replace or disrupt human labour. We should instead evaluate how automation complements and enhances our human abilities and ways of working. The future of work should be a collaborative one, in which machines are deployed in ways that respect the autonomy and abilities of workers and, in turn, make work better for everyone.
- The ultimate responsibility for technologically-augmented decisions must remain in human hands: Intelligent technologies are not moral agents. The ‘decisions’ they make are situated within contexts and rules set in place by human choices, by those who develop, deploy and use them. Humans must retain ultimate responsibility for these decisions.
- Ethical considerations about technology must be embedded in organizational structures and practices, rather than in abstract frameworks and principles: Current governance frameworks and principles reduce ethics, fairness and trust to technological features and boxes to be ticked. However, we can only have ‘ethical AI’ when ethics is fully integrated into daily organizational life. Organizations need to translate principles into practices and educate all their workers to build moral awareness and an enhanced sense of responsibility (which we refer to as moral upskilling).
- Embrace value pluralism and respect cultural differences while advancing ethical AI: Current conversations about ethical AI tend to emphasize perspectives from the West rather than the East, and the Global North rather than the Global South. For human-centred AI to serve the needs of all humans, rather than just a few, we must be sensitive to how values and interests differ across diverse cultural and social contexts, and how these differences may shape our thinking about, and assessment of, fair, trustworthy and ethical intelligent technologies.
- Focus on real AI, rather than imagined AI: There is a growing tendency to focus on the anticipated risks and benefits of “superintelligent” AI that might exist in the future. We argue that fantasies of “superhuman” AI (endlessly repeated by business writers and self-proclaimed experts) mislead people into overestimating the capabilities of currently available AI systems. As a result, today’s society risks being constructed and shaped in correspondence with imaginaries of AI that may or may not materialize. The process of building value-aligned and human-centred AI must begin with a realistic attitude that focuses on the AI systems we have today, and the actual material harms and benefits they presently create.
About the Authors
David De Cremer is a Provost’s Chair and professor in management and organizations at NUS Business School, National University of Singapore. He is the founder and director of the Centre on AI Technology for Humankind at NUS Business School. Before moving to NUS, he was the KPMG endowed chair professor in management studies at Judge Business School, University of Cambridge, where he remains an honorary fellow. He was named one of the world’s top 30 management gurus and speakers in 2020 by the organization GlobalGurus, included in the 2021 Thinkers50 Radar list of 30 next-generation business thinkers, nominated for the Thinkers50 Digital Thinking Award (part of a biennial event that the Financial Times deemed the “Oscars of Management Thinking”), and included in the world’s top 2% of scientists (published in 2020). His latest book is “Leadership by Algorithm: Who Leads and Who Follows in the AI Era?”
Devesh Narayanan is a research assistant at the Centre on AI Technology for Humankind at NUS Business School, and an MA candidate at the NUS Department of Philosophy. His research focuses on the ethics and politics of AI in organizational contexts, and particularly on how calls for ‘ethical’ and ‘trustworthy’ AI might be theoretically and empirically grounded. He holds a B.Eng. in mechanical engineering from NUS.
Andreas Deppeler is Adjunct Associate Professor and Deputy Director of the Centre on AI Technology for Humankind at NUS Business School, National University of Singapore. Trained as a theoretical physicist, he has worked in various consulting and management roles in the private sector across Europe, the United States and Asia. At NUS Business School, he teaches and writes about the economic and societal impact of technology.
Mahak Nagpal is a postdoctoral research associate at the Centre on AI Technology for Humankind at NUS Business School, National University of Singapore. She received her Ph.D. in Organizational Behavior and Business Ethics from Rutgers Business School and her Bachelor’s degree from the University of Virginia. Prior to pursuing her Ph.D., she was a Behavioral Lab Manager at the Darden Graduate School of Business Administration. Mahak’s research uses both prescriptive and descriptive methods to consider how users of intelligent and autonomous systems in the workplace build an understanding of what is right when a situation does not at first appear clearly right or wrong.
Jack McGuire is a Ph.D. candidate in the Department of Management & Organisation at NUS Business School. Broadly speaking, his research examines the augmenting effects of AI technology on employees at work. Prior to this, he was the Experimental Lab Manager of the Cambridge Experimental and Behavioural Economics Group (CEBEG) and a Research Assistant in the Department of Organisational Behaviour at Judge Business School, University of Cambridge. He holds an MA (SocSci) from the University of Glasgow and an MSc from University College London.
Jess Zhang is currently an independent digital content producer. Over the past decade, she has held various roles, including Head of Executive Education (China Market) at NUS Business School, Singapore; China Relations Development Director at Judge Business School, University of Cambridge, UK; Associate Director, Corporate Development (Asia) at HULT, Shanghai; and Co-founder and Executive Director of the Centre on China Innovation at CEIBS (China Europe International Business School). She translated two international bestsellers: ‘Legionnaire: Five Years in the French Foreign Legion’ by Sir Simon Murray, CBE, and ‘Huawei: Leadership, Culture, and Connectivity’.