By David De Cremer and Laurence Van Elegem
As business strives to integrate AI optimally into its processes, it risks paying insufficient attention to the human element while focusing on the technical. But the reality is that gleaning the hoped-for results from AI requires a holistic view that considers both the machine and the people.
For many business leaders, artificial intelligence (AI) feels like a double-edged sword. On one hand, they are in awe of its enormous potential and are excited about the prospect of applying it to business processes across the board. On the other hand, the excitement about leveraging AI also becomes a source of stress: boards, aware of how rapidly the technology is developing, want leaders to adopt AI quickly and turn it into an immediate value creator. The fear of missing out (FOMO) among companies has never been higher than it is today, in a world where everyone is imagining that the impossible could be achieved with the use of AI. The consequence of this tense situation is that business leaders feel that, when it comes to AI, they are acting outside of their comfort zone and, as such, freeze up (De Cremer, 2024).
This paralysis creates a work situation that ultimately will not benefit the success of the AI adoption project. Business leaders reason that they do not understand the technology they have to adopt and therefore remove themselves from the adoption process entirely. Instead, they delegate the entire responsibility for turning this essential technology into a “business” asset to the tech experts inside or outside their organizations, be they data analysts, machine learning experts, their IT department, or external consultants. Business leaders think and feel that AI adoption projects can only be made to succeed by those who are AI experts.
While it may seem logical for tech experts to take the lead on tech projects, we have found that this arrangement, although common in many organizations, often creates a deeper issue that undermines the success of AI adoption, one that, as we will explain below, is rooted in the tension between automation and human augmentation. In fact, we believe that the idea of technology driving technological transformation is nothing more than a myth, and it does nothing to help businesses integrate AI into their work processes in a way that creates value across the board.
It’s all about efficiency, right?
Bringing AI into the business world doubles down on the importance of efficiency and, as a result, turns the entire AI adoption project within companies into a “rational” thinking exercise in which the human, and often irrational, factor is discounted and eliminated as much as possible. Indeed, the rational approach, still the dominant paradigm within business schools (De Cremer and Narayanan, 2023), argues that only perfectly rational actors who think solely in terms of efficiency can deliver the holy grail for business: growth and profit maximization. And what type of actor is best suited to working with data in this way? AI, of course! AI is hailed as the new type of future worker that can work tirelessly to process data in rational, unemotional, and consistent ways, so that performance can be predicted and managed more accurately and, in turn, organizations can be turned into more efficient profit-making machines. In fact, AI is presented as the best way for companies to strategize, better than humans do, in ways that maximize efficiency and performance levels (cf. Lindebaum et al., 2019).
Interestingly, this rational approach is not only dominant in business education but also aligns well with the mindset of engineers and computer scientists. As a result, AI adoption is seen as an engineering exercise in which rational processes steer the integration of this new tech into a human-populated world. When business leaders delegate responsibility for AI adoption projects to the tech experts, it is then also no surprise that AI is perceived primarily through the lens of cost reduction and efficiency, meaning that a widespread expectation exists today that AI will first and foremost deliver a significant boost in productivity. As such, AI can be thought of as the perfect homo economicus, as it can free us from mundane tasks and enable us to accomplish more, faster, and with greater quality. Even the data supports this optimism. According to GitHub, 88 per cent of developers using its AI coding tools report being more productive than they were without them (Rodriguez, 2023). Similarly, a case study by Nielsen Norman Group found that customer service employees could handle 13.8 per cent more inquiries per hour with AI assistance (Nielsen, 2023). Moreover, research indicates that when highly skilled workers leverage generative AI technologies aligned with their expertise, their performance can soar by up to 40 per cent compared to those who do not (Somers, 2023).
The zero-sum reality driving AI adoption
And this rational approach could work were it not that, at the end of the day, companies need their human workforce to perform (AI cannot do it alone), and this reality turns AI adoption into a behavioural exercise. Humans have to be empowered and incentivized to work with AI, and when this willingness exists, AI can create real and holistic value. But does this happen? When AI enables employees to complete their tasks more quickly, organizations face two possible ways to utilize the resulting “free” time. The first is the utopian one. It’s the story of enhancement that many tech gurus love to tell their eager audiences: AI can and will liberate our workforce from all the tedious, repetitive tasks that are numbing their brilliant minds. It frees up their time and their minds so that they can collaborate more, brainstorm more, or creatively hyperfocus on innovation.
It’s a lovely story, and we do believe that the potential is there. The problem is that most companies extrapolate the same rationality framework they use for managing technology to human decision-making and behaviour. Instead of empowering their employees to rewrite their own jobs, in collaboration with organizational leadership, by building in time to recharge, to seek interpersonal connections, and to enjoy more intellectual downtime, all of which are prerequisites for creativity and innovation, they fill the extra time that AI frees up with even more tasks. So, rather than using the time gained by automating dull and repetitive tasks to stimulate what is unique to us humans, they unwittingly discard the idea that AI should augment human abilities and instead reduce their employees to mere “task completers.”
In doing so, organizations create a zero-sum game: either the human or the AI does the job (De Cremer and Kasparov, 2021). The dominant rational paradigm, with its emphasis on organizational efficiency, leads businesses to approach performance optimization as a choice between humans and AI. Given the rational kind of thinking that favours economic (short-term) self-interest, humans will likely end up on the losing side, simply because completing more tasks in the short term appears more appealing than betting on unpredictable human abilities to create real long-term value. Indeed, investing company resources in making human workers more effective in the use of their natural abilities, by adopting AI in genuinely augmentative ways, is often judged to be too expensive and is therefore deprioritized. Sustaining and promoting the analytical abilities of humans is not treated as a priority, because AI is seen as better equipped to devise rational strategies leading to increased efficiency (and, importantly, at a lower cost).
From this perspective, it makes a lot of sense to insist that the “irrational” human must be removed as much as possible from the design and adoption of AI and, in turn, must not be relied upon in the future to serve as the responsible decision-maker.
The human cost
The consequences run deeper than the emergence of a zero-sum game mindset. If we treat humans as efficiency machines in an AI-dominated work paradigm, there will be an emotional cost as well. Our own research (De Cremer and Koopman, 2024) indicates that employees working frequently with AI do indeed achieve productivity gains, completing more tasks in less time. That’s the good news.
But the more tasks that employees were able to complete in collaboration with AI, the more they kept going. The result of using AI to promote efficient working thus led to an increase in their workload rather than a reduction of it. The consequence of this was that employees then also felt more socially deprived, lonely, unhappy, and tired. In an ironic way, those resulting emotions and harm to their well-being are also the factors that will in the long term ultimately make them less efficient.
Indeed, employees who feel disconnected and emotionally unfulfilled at work tend to be less engaged, less productive, and less committed to their organizations (Belle, Burley, and Long, 2014). Furthermore, they are also less inclined to collaborate, innovate, or exceed expectations in their roles, and are more susceptible to burnout, absenteeism, and high turnover rates.
This overlooking of human primacy in AI adoption may seem rather surprising, especially given that many organizations are becoming increasingly attentive to their employees’ physical and mental well-being. Yet when it comes to AI-human collaboration, this focus quickly fades into the background.
A holistic view
These findings make it clear that AI adoption, as we suggested earlier, is not (only) an engineering exercise, but primarily a behavioural one. It brings massive opportunities, but it is also one of the biggest leadership challenges of our time (De Cremer, 2024). And that is exactly why we need business leaders who understand both the opportunities and the limitations of AI. Those who hold a holistic view, not a reductionist one in which the human can be reduced to zero, and who do not think only in terms of automation, replacement, and economic growth, will be more successful at making AI adoption create real value. Indeed, successful AI-savvy leaders think in terms of human augmentation, enhancement, and well-being.
It’s essential to recognize that the jobs of the future must remain tailored to human needs, rather than reducing people to mere task performers. AI is not a magical solution for optimization; it’s a tool that should be thoughtfully aligned with an organization’s purpose and, as such, create value for all stakeholders involved.
Same as it ever was, but now with AI in the picture
We’re always intrigued when we hear people say that leadership needs to be reinvented to meet the needs of this AI era. We don’t agree with that statement. The qualities of great leaders today are much the same as those of 20 years ago, though they may be balanced and prioritized differently (De Cremer, 2024). In fact, leaders today need emotional intelligence, empathy, communication skills, meta-thinking, intuition, critical thinking, trust, psychological safety, resilience, and the ability to learn from failures more than ever. Leaders need to function as bridges between departments and stimulate cross-functional collaboration. They must bring meaning and work in purpose-driven ways, so that they know which business questions can be pursued in collaboration with technology.
As AI becomes increasingly ubiquitous and commoditized, it is not the mere adoption of technology that will differentiate your company or create a strategic competitive advantage. It is the knowledge you have about your organization and customers that should guide your strategic decisions about where it is relevant to use AI and where it is not. Only in that way can AI become the competitive differentiator that everyone is looking for. Indeed, implementing AI in ways that enhance your organizational purpose will boost engagement, unlock innovation potential, and indirectly support the well-being of your human capital.
Skills can be lost
In line with this holistic perspective on the use of AI, we always tell business leaders to invest at least 50 per cent of their technology budget in change management. Do we have the right talent? Are we providing enough training? Do we have the right infrastructure? What about R&D? Are we creating the right conditions for employees to thrive with AI? These are the human-centered questions that leaders should ask when adopting AI, so that they can both enhance efficiency and create the jobs of the future that are suited to humans and their development.
The training part is crucial here, because few people seem to realize that even soft and emotional skills can be lost. Take today’s teenagers: many of them are losing interpersonal relationship skills because they over-rely on technology. They prefer texting, for example, because they feel anxious about talking on the phone or even meeting in real life. Similarly, if you reduce your employees to task machines in an efficiency paradigm, their soft skills will erode.
To give a recent example, Gartner found that 30 per cent of employees are avoiding more people at work today than they did two years ago (Gartner, 2024). Just think about what that will mean for the quality of collaboration if they are not trained to connect and work together again.
In conclusion
It is our belief that we must rethink the roles that humans will take in organizations that are increasingly adopting AI. Holding them to the same efficiency standards as AI is not the way to go and will produce unwanted side effects. So, this is our advice to all the business and technology leaders out there: “put your employees first and AI second.” That approach will bring us much closer to the AI utopia that so many people envision and like to talk about. The alternative is simply unacceptable.
About the Authors
David De Cremer is the Dunton Family Dean and professor of management and technology at the D’Amore-McKim School of Business, Northeastern University (Boston). He is the founder of the Center on AI Technology for Humankind in Singapore. Before moving to Boston, he was a Provost’s chair and professor in management and organizations at NUS Business School, National University of Singapore, and the KPMG-endowed chaired professor in management studies at Cambridge University. He was named one of the world’s top 30 management gurus and speakers by the organization GlobalGurus, was included in the Thinkers50 list of 30 next-generation business thinkers, and is regularly ranked among the world’s top 2 per cent of scientists. He is a best-selling author; his new book The AI-savvy leader: 9 ways to take back control and make AI work (published by Harvard Business Review Press) was a #1 new release on Amazon, a Financial Times book of the month, a must-read book of the Next Big Idea Club, and winner of the 2024 OWL (Outstanding Works of Literature) Award in the leadership category.
Laurence Van Elegem is a seasoned trend researcher and content strategist, deeply passionate about the future, societal transformation, and technological innovation. With over 15 years of experience in crafting impactful communication for tech and innovation companies, she has cultivated an in-depth understanding of the industry’s complexities and its far-reaching effects on consumers, citizens, and employees. Her newsletter “Here and Now” offers sharp insights and thoughtful analysis on the latest developments in technology, society, and business, making it an essential read for forward-thinkers.
References
- Belle, S.M., Burley, D.L., & Long, S.D. (2014). “Where do I belong? High-intensity teleworkers’ experience of organizational belonging”. Human Resource Development International, 18(1), 76-96.
- De Cremer, D. (2024). The AI-savvy leader: 9 ways to take back control and make AI work. Harvard Business Review Press.
- De Cremer, D., & Kasparov, G. (2021). “AI should augment human intelligence, not replace it”. Harvard Business Review, Winter issue, 97-100.
- De Cremer, D., & Koopman, J. (2024). “Using AI at work makes us lonelier and less healthy”. Harvard Business Review, June 24.
- De Cremer, D., & Narayanan, D. (2023). “On educating ethics in the AI era: Why business schools need to move beyond digital upskilling, towards ethical upskilling”. AI and Ethics, 3, 1037-41.
- Gartner (2024). “Gartner HR research finds organizations are in the midst of a reset; most are not prepared”. Retrieved from: https://www.gartner.com/en/newsroom/press-releases/2024-10-28-gartner-hr-research-finds-organizations-are-in-the-midst-of-a-reset-most-are-not-prepared
- Nielsen, J. (2023). “AI tools raise the productivity of customer-support agents”. Retrieved from: https://www.nngroup.com/articles/ai-productivity-customer-support/
- Rodriguez (2023). “Research: Quantifying GitHub Copilot’s impact on code quality”. Retrieved from: https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-code-quality/#:~:text=Engineer%20(study%20participant)-,88%25%20of%20developers%20reported%20maintaining%20flow%20state%20with%20GitHub%20Copilot,a%20pretty%20neat%20syntax%20edition.
- Somers, M. (2023). “How generative AI can boost highly skilled workers’ productivity”. MIT Sloan Management Review. Retrieved from: https://mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-can-boost-highly-skilled-workers-productivity