
By Tessa Haesevoets, David De Cremer, Kim Dierckx and Alain Van Hiel

Most would accept that there is a role for both humans and AI in managerial decision-making today. But where is the optimal balance between human input and that of machines in the process of arriving at decisions? And to what extent are humans prepared to accept the inclusion of machines in managerial processes?

Artificial intelligence (AI) refers to machines performing cognitive functions usually associated with human minds, such as learning, interacting and problem solving. Recent advances in computational power, the exponential increase in the availability of data, and new machine-learning techniques have resulted in the development of AI-based solutions for various managerial tasks. As a result, consensus has emerged that the future of work entails humans and algorithms working together in making managerial decisions (De Cremer, 2020).

However, to date, no attention has been paid to what this cooperative partnership should look like. So, an important question that arises is how much input from humans and how much input from algorithms is warranted in order for humans to be willing to accept algorithmic involvement in managerial decisions. If humans do not accept the proposed cooperative work model, it is possible that they will discount the input from machines. The objective of the present research, therefore, was to provide a better understanding of the desired relative weight of human and algorithmic input in managerial decision-making processes.

Human-algorithm cooperation

The academic literature has consistently shown that algorithms are generally better at making optimal decisions than human beings. For instance, studies have found that algorithms outperform humans at recruiting new staff (Hoffman et al., 2018), predicting employee performance (Highhouse, 2008) and providing medical diagnoses (Beck et al., 2011). A meta-analysis of these effects revealed that algorithms outperform human judgement by 10 percent on average (Grove et al., 2000). Across the vast majority of tasks, then, it is far more common for algorithms to outperform humans than vice versa.

However, in a recent study involving 1,500 companies in 12 industries, Wilson and Daugherty (2018) found that organisations achieve the most significant performance improvements when humans and machines work together. An example of such a successful collaboration between man and machine is the detection of cancer in images of lymph node cells. Wang et al. (2016) found that a combined human-AI approach outperformed both human-only and AI-only decisions. Specifically, the authors reported a 0.5 percent error rate in the combined condition, a reduction in error rate of at least 85 percent compared to the human-only and AI-only approaches.
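As a quick sanity check of that figure, the snippet below recomputes the reduction relative to the human-only baseline. The 3.5 percent human-only error rate is taken from the Wang et al. (2016) preprint and should be read as approximate.

```python
# Recompute the error-rate reduction cited from Wang et al. (2016).
# The 3.5% human-only baseline comes from the preprint; figures are approximate.
human_only_error = 0.035  # pathologist working alone
combined_error = 0.005    # combined human-AI approach, as cited above

reduction = 1 - combined_error / human_only_error
print(f"Error-rate reduction: {reduction:.0%}")  # -> 86%, i.e. "at least 85 percent"
```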

Are humans willing to accept algorithms?

Organisations have long used AI-based solutions as tools and advisors that facilitate decision-making for human managers but, more recently, organisations have also begun employing algorithms that have managerial discretion. For example, the Hong Kong-based venture-capital firm Deep Knowledge Ventures appointed a decision-making algorithm – known as VITAL – to its board of directors (Nelson, 2019). In a similar vein, Amazon recently deployed a warehouse-worker tracking system that can automatically fire employees, without a human supervisor’s involvement (Bort, 2019). These examples illustrate that algorithmic management, in which algorithms have a certain level of autonomy when making decisions, is on the rise in today’s organisations.


However, it is vital to note that, although algorithms may help humans perform more accurately and thus more efficiently, humans also display some aversion towards algorithms. The tendency for humans to be reluctant to employ algorithms in decision-making – a phenomenon that has been referred to as algorithm aversion (Dietvorst et al., 2015) – raises the question of whether joint human-algorithm managerial decision-making can become fact or will remain fiction. Indeed, a partnership between humans and algorithms can succeed only if humans are willing to accept algorithms as decision-making agents.

Overview of the study results

We conducted a series of five empirical studies (total N = 1,025 managers). The results consistently show that most managers strongly oppose a partnership in which algorithms provide the most input into the decision-making process. Yet, our findings also clarify that human managers do not want to exclude algorithms entirely from providing input. Instead, generally speaking, human managers are willing to accept algorithmic involvement in managerial decisions, as long as algorithms have less input than humans. These findings mirror those of Bigman and Gray (2018), who demonstrated that people are less averse to algorithms when these are limited to an advisory role. Dietvorst and colleagues (2018) similarly reported that people accept algorithmic input when they have control over the outcome. However, our research goes beyond those observations by identifying exactly what the “optimal” human-algorithm work relationship should look like. In this light, the present research extends these prior studies by clarifying that human managers’ acceptance of human-algorithm partnerships steadily rises as the involvement of humans increases, up to the point at which human agents have a weight of about 70 percent in managerial decisions. Beyond this point, higher amounts of human input do not result in higher acceptance rates.
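To make that 70/30 split concrete, here is a minimal sketch of one way such a partnership could be operationalised: each decision is scored as a weighted average of the human manager’s rating and the algorithm’s rating. The weights, the 0-10 rating scale and the aggregation rule are illustrative assumptions; our studies measured acceptance of input levels, not any particular formula.

```python
# A minimal, hypothetical sketch of a 70/30 human-algorithm partnership:
# the final score for a decision is a weighted average of both inputs.
HUMAN_WEIGHT = 0.70  # illustrative weight for the human manager's rating
ALGO_WEIGHT = 0.30   # illustrative weight for the algorithm's rating

def combined_score(human_rating: float, algo_rating: float) -> float:
    """Weighted average of human and algorithmic ratings (0-10 scale)."""
    return HUMAN_WEIGHT * human_rating + ALGO_WEIGHT * algo_rating

# Example: scoring three hypothetical job applicants (human rating, algorithm rating).
applicants = {"Applicant A": (8.0, 6.5), "Applicant B": (5.5, 9.0), "Applicant C": (7.0, 7.0)}
for name, (human, algo) in applicants.items():
    print(f"{name}: {combined_score(human, algo):.2f}")
```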

Most managers strongly oppose a partnership in which algorithms provide the most input into the decision-making process.

It is important to stress, however, that our studies also demonstrate that this overall pattern represents an average tendency, rather than a genuine psychological reaction shared by all managers. More specifically, our results consistently show that some managers (about 5 percent) prefer a partnership in which algorithms have the upper hand in managerial decisions, whereas others (about 15 percent) prefer a partnership in which humans and algorithms have equal input (i.e., 50 percent weight each) in managerial decisions. It must be stressed, however, that these two subgroups remain a minority. Indeed, the third and largest subgroup of managers (about 50 percent) prefers a partnership in which humans have more weight than algorithms, although they do not necessarily want to exclude algorithms entirely. In addition to these three subgroups, there is a fourth subgroup of managers (about 30 percent) who want human agents to have complete control in managerial decisions. The present research is the first, at least to our knowledge, to illustrate that managers do not all react alike to different levels of human-algorithm involvement.

Practical implications

Efforts to optimise the functioning of organisations have led to the increasing use of algorithms, sometimes even to the point that algorithms are given complete decision control. Schrage (2017), for instance, argued that “at some of the world’s most successful enterprises – Google, Netflix, Amazon, Alibaba, Facebook – autonomous algorithms, not talented managers, increasingly get the last word.” Unfortunately, this has been done without much thought about whether humans are ready to accept algorithms. The present research is pivotal, because it informs organisations that the majority of human managers are willing to accept a partnership in which humans have 70 percent weight and algorithms 30 percent weight in managerial decisions.

The majority of human managers are willing to accept a partnership in which humans have 70 percent weight and algorithms 30 percent weight in managerial decisions.

But, at the same time, our findings also warn organisations that a substantive part of the workforce – approximately 30 percent of all managers – wishes to exclude algorithms completely from managerial decisions. Because of their strong aversion to algorithms, it can be expected that these managers will go to great lengths to exclude algorithms. It is even possible that they will incur high financial costs, for themselves or for the organisation, to avoid algorithms having a say in managerial decisions. It is therefore crucial that organisations be made aware of the existence of this particular subgroup, since their reservations regarding the introduction of algorithms can have severe negative consequences for organisational efficiency.

About the Authors

Tessa Haesevoets is a postdoctoral researcher at the Social Psychology Research Unit at the Department of Developmental, Personality and Social Psychology, Ghent University. She holds a PhD from Ghent University. Her research interests include trust repair and artificial intelligence.

David De Cremer is a Provost’s Chair and professor in management and organisations at NUS Business School, National University of Singapore, where he is the founder and director of the corporate-sponsored Centre on AI Technology for Humankind. Before moving to NUS, he held the KPMG endowed chair in management studies at Cambridge Judge Business School, where he remains an honorary fellow. He is also a fellow at St Edmund’s College, Cambridge University. He was named one of the world’s top 30 management gurus and speakers in 2020 by the organisation GlobalGurus, included in the 2021 Thinkers50 Radar list of 30 next-generation business thinkers, nominated for the Thinkers50 Distinguished 2021 award for Digital Thinking (a biennial gala event that the Financial Times deemed the “Oscars of Management Thinking”) and included in the world’s top 2 percent of scientists (published by Stanford). He is a best-selling author: his book “Huawei: Leadership, Culture and Connectivity” (2018), co-authored with Tian Tao and Wu Chunbo, received global recognition, and his book “Leadership by Algorithm: Who Leads and Who Follows in the AI Era?” (2020) received critical acclaim worldwide, was named one of the 15 leadership books to read in summer 2020 by Wharton, and reached no. 1 on Amazon.com in its Kindle edition. His latest book, “On the Emergence and Understanding of Asian Global Leadership”, was named management book of the month for July 2021 by De Gruyter. His website: www.daviddecremer.com

Kim Dierckx is a PhD student at the Social Psychology Research Unit at the Department of Developmental, Personality and Social Psychology, Ghent University. His research interests include procedural fairness and discrimination.

Alain Van Hiel is a professor at the Social Psychology Research Unit at the Department of Developmental, Personality and Social Psychology, Ghent University. He holds a PhD from Ghent University. His research interests include political attitudes and group processes.

References

  • Beck, A. H., Sangoi, A. R., Leung, S., Marinelli, R. J., Nielsen, T. O., Van De Vijver, M. J., … & Koller, D. (2011). Systematic analysis of breast cancer morphology uncovers stromal features associated with survival. Science Translational Medicine, 3, 108ra113.
  • Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34.
  • Bort, J. (2019). Amazon’s warehouse-worker tracking system can automatically fire people without a human supervisor’s involvement. Business Insider.
  • De Cremer, D. (2020). Leadership by Algorithm. Hampshire: Harriman House Ltd.
  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144, 114-126.
  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64, 1155-1170.
  • Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: a meta-analysis. Psychological Assessment, 12, 19-30.
  • Highhouse, S. (2008). Stubborn Reliance on Intuition and Subjectivity in Employee Selection. Industrial and Organizational Psychology, 1, 333-342.
  • Hoffman, M., Kahn, L. B., & Li, D. (2018). Discretion in hiring. The Quarterly Journal of Economics, 133, 765-800.
  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5, 2053951718756684.
  • Nelson, J. (2019). AI in the boardroom – Fantasy or reality?
  • Schrage, M. (2017). 4 models for using AI to make decisions. Harvard Business Review.
  • Wang, D., Khosla, A., Gargeya, R., Irshad, H., & Beck, A. H. (2016). Deep learning for identifying metastatic breast cancer. arXiv:1606.05718.
  • Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: humans and AI are joining forces. Harvard Business Review, 96, 114-123.
