David De Cremer gets to the heart of leadership in the era of rapid AI evolution. Discussing his upcoming book on the topic, he stresses the role of business leaders in integrating AI so as to ensure that it becomes a value creator rather than a mere technological tool.
It’s an honour to have you with us again, Professor! We hear congratulations are in order for your upcoming book launch. Can we start with a few words on what inspired you to write a book all about AI leadership?
Thank you for asking. Yes, on 18 June 2024, my new book The AI-savvy leader: 9 ways to take back control and make AI work, published by Harvard Business Review Press, will be available. And there is an important reason why I wanted to write this book. As we all know, AI is all around. In fact, AI has become so mainstream that the biggest risk for organisations today is not using it. But, despite this sense of urgency, I do notice that business leaders are not actively involved in adopting AI and turning the technology into a real value creator for the company and their stakeholders. In a similar vein, participants in my executive leadership classes say that, because they’re not tech experts, they are afraid of becoming redundant. These observations led me to conclude that, in the business world, a trend has emerged where companies have started valuing AI’s computational prowess over human understanding. They’re letting it lead. That is, business leaders launch AI adoption projects, but they let technologists take the lead because they don’t know much about this thing that they’re being told is the future value creator.
However, what is striking in this story is that, in delegating the company’s AI journey to AI experts because of a belief that I have called the “tech-driving-tech” strategy, something is clearly amiss. Indeed, despite AI’s being portrayed as the holy grail for business, at the same time AI adoption efforts are failing at alarming rates. So many companies I spoke with and have worked with are sinking significant money into AI, but they’re failing to extract value commensurate with the investment. The reason that I see is that prizing a technical mindset above all else for the rollout of AI adoption and implementation programmes means that companies hand off the entire process to tech experts, with disastrous results as the human element is ignored. In my view, this approach is a mistake. I’ve written this new book to correct it. I hope to reverse the trends I see in the companies I’ve worked with, the data I’ve reviewed, and the leaders I’ve spoken with. I want to bring leaders back into the AI conversation and, in doing so, I hope to save many organisations a failed AI adoption project or two by reminding them that leadership skills are absolutely essential when AI is deployed.
Your research often focuses on behavioural economics and human decision-making. How did these insights inform your perspective on the interaction between human leaders and AI systems within organisations?
What is important to know is that the book is not about the technology itself. Of course, I do dive into what AI is and how it works, but the focus is on how we will work with it and how to use it to create value for our companies. This is a behavioural focus and one that is needed if we want to address the question of whether the introduction of AI means that we must rewrite the rules of leadership and what leadership in the AI era will look like.
One thing people often forget is that AI, even today at the height of large language model (LLM) applications, is still a tool. So, the question will become how you as a business leader can use that tool to create impact on your company and stakeholders. And there are two perspectives that are dominant today. The first perspective zooms in on AI as a means to enhance efficiency in everything we do, so productivity will go up. This perspective is obviously one that fits perfectly with the business world, as the focus on increased efficiency and productivity aligns well with our motivation to maximise profits. But, the consequence of this perspective, as we’re starting to see, is that we create work cultures where humans feel pressured to align with how machines make decisions. We invest significantly in AI deployment, so companies expect their employees and customers to interact with AI in ways that correspond with how intelligent machines work and act. Such an approach lacks any respect for the human condition and will ultimately work against you. With this strategy, people work around the technology, results don’t materialise, turnover increases, innovation declines, and your reputation in the market suffers to the extent that good human talent wants nothing to do with your company.
So, if adoption of AI is primarily AI-centred, where the machine comes first and humans second, we see that organisations ultimately will lose their workforce and be left with AI only. And this will not be the way that companies will be able to create business value. The way to create that value is if humans and AI work together. The future is not only automation, but especially augmentation, where AI is in service of human intelligence. To achieve this kind of work situation, we need a behavioural, human-centred approach where AI is developed and employed in ways that align with how humans work, think, and decide. With the computational power of AI, human intelligence can then be elevated and, in doing so, we make our organisations more efficient, but at the same time also more human than before. So, AI is not about making employees less human, but instead more human. From that perspective, my expertise in behavioural economics and the psychology of decision-making is extremely relevant for thinking about how we can shape the adoption of AI so that it respects the human condition.
In your book, there’s a heavy emphasis on leaders “retaking control” of how AI is deployed. Can you discuss the balance between empowering AI technologies and maintaining human oversight and decision-making authority?
As I mentioned earlier, with AI being so present in our lives and becoming cheaper all the time to apply, a sense of urgency exists to embrace AI in all your business practices. In fact, the adoption and use of AI is going so fast that business leaders can’t keep up. In addition, as most business leaders do not understand AI very well, they start delegating the management of the AI adoption process to the tech experts. A belief emerges that AI’s computational prowess should be valued over human intelligence and abilities. Business leaders are thus inclined to reason that when it comes down to making AI work for the company, it should be the ones who understand technology who lead the transformation process. However, this is where it goes wrong.
It’s a simple fact that business leaders cannot delegate the responsibility of the AI adoption strategy! If they do, they lose control over the organisation, and this will not serve the business itself nor any of the stakeholders. Indeed, if a technology-only mindset drives the AI adoption strategy, then the transformation process is seen as only a technological endeavour and this perspective is way too narrow to create any business value. Think about it – organisations exist because they have a certain purpose, and along with that purpose come business goals that one wants to achieve. To evaluate what to do to achieve those goals, the questions that need to be asked will in the first instance have to be business questions. For example, what exactly do we want to achieve in comparison to our competitors, what challenges is the industry facing that are relevant to my company, and what is it exactly that we want our customers to expect from us? In the second instance, the questions will be technological ones, where we assess where and how AI can help us to address these questions effectively and achieve our business goals.
However, if we let tech experts lead the transformation of our company to become an AI-driven one, we are reversing the order of these questions. For example, tech experts, who are not business experts, will do their job and analyse the data at hand and inform business leaders accordingly. But, as the leaders are not tech experts, how can they know whether these recommendations are the right ones for the company to act upon? Did the experts analyse the right kind of data to inform the company’s business strategy? We can only know this for sure if we start by asking the business questions that we want to see answered and then see whether we have the right kind of data available to answer these questions. In addition, by asking the right kind of business questions first, leaders can then also more easily collaborate with tech experts on deciding whether AI solutions should be used and, if so, in which stage of the business process. After all, today, we’re faced with many AI solutions that are hyped and are not necessarily needed for every business decision.
It is thus important that business leaders take back control of the entire AI adoption process to ensure that, at the end of the day, business value is created. And they can do so by becoming more AI-savvy, so they understand the basics of AI and so can communicate more effectively with their tech experts and, at the same time, decide whether the use of AI makes sense in light of their business goals. If AI makes sense, then the tech experts will take care of the implementation and execution. But those tech experts need guidance first and that’s the responsibility that business leaders must take up in the AI era. In a similar vein, it makes sense that business leaders, in assessing the value of AI from a business perspective, are also responsible for using AI in ethically correct ways. They need to create a sense of awareness in the company that, while the use of AI should enhance the company’s impact, it should co-exist with a culture of integrity. Business leaders can do so by setting the expectation that the technology is always governed. This means that the company stays updated about the AI regulations in place and creates a work culture where thinking about ethical dilemmas in the use of AI is a given and dictates corrective actions if needed.
Why do you think leaders have been hesitant to engage directly with AI strategy and decision-making for so long?
Today, the technology is growing and changing very rapidly. At the same time, the fear of missing out on the use of AI is so high that the deployment of AI in companies is accelerating. This rapid pace puts leaders in the awkward position of learning to adapt, while at the same time learning what it is they’re adapting to. And this situation discourages many business leaders. It makes them less confident about what they think their role should be when adopting AI and, as a result, they become less involved in anything in which AI plays a role. And at the level of our executives, I see this lack of participation also translated, for example, into CEOs discouraging their teams from using AI. They don’t want their employees to use AI because they don’t understand it. So, in a way, many business leaders are afraid of (and intimidated by) AI and their own fear leads them to avoid any experimentation with this intelligent technology. But one can only truly understand the value that AI can bring to one’s organisation if one allows that kind of experimentation. As a result, many companies so far have not learned enough about the real value that AI can create for them. And it’s the lack of understanding by the company leadership that usually has been the cause behind this situation. So, if the priority for business leaders is to empower their workforce to leverage their capabilities for a competitive organisational advantage, then their first leadership task is to close the gap between their understanding of AI and the growing use of it.
Your book outlines nine actions that leaders should take to successfully transition to a more AI-centric future. Could you highlight a few of these actions and explain their significance in driving organisational growth?
The basic premise of my book is that business leaders need to participate in the AI journey of their companies more actively. Because they are not tech experts, and technical expertise is believed to be the only thing needed to drive AI adoption and implementation, business leaders stay out of that journey, which means that leadership practices are absent from the entire AI-driven transformation process of the company. And, because of that, most of these projects ultimately fail. Therefore, leadership needs to be brought back and taught, but now in the context of AI deployment. To do so, in my book, I zoom in on nine leadership practices needed to ensure that the adoption of AI will be successful.
First, leaders need to get to know AI and mobilise their learning to be a business leader who has the right narrative and approach to make AI adoption succeed. Second, leaders need to lean into their purpose to make sure they are asking the right kind of questions of AI, not just being led by what’s technologically possible. Third, leaders need to work hard to foster an inclusive culture for human-AI collaborations, where employees are not left out or left behind. Fourth, leaders need to focus on clear communication at all levels in their organisation to explain corporate intentions and foster AI adoption, while at the same time receiving feedback and suggestions on how to improve the use of AI. Fifth, leaders need to develop a clear vision that bridges their organisation today with its AI-enabled future to inspire and motivate their workforce to experiment and work with AI. Sixth, leaders need to adopt a balanced approach to AI that keeps all stakeholders in mind and leads to an organisational culture where AI governance and ethics move centre-stage. Seventh, leaders need to use an empathetic, human-centred approach that recognises and accommodates the impact of new AI systems on their workforce, so people do not feel like a number, but like essential collaborators in making AI work for the company. Eighth, leaders must consider their mission as critical to their company’s AI journey: the long-term future of their business still needs human creativity, so invest in AI to augment, not to automate, jobs. Ninth and finally, leaders need to hone their emotional intelligence. Leaders need to accept that soft skills are the new hard skills and practise them!
What are some common misconceptions or pitfalls that leaders should be wary of when implementing AI solutions and how would you suggest they avoid these traps?
Budget wisely. Once organisations decide to embark on their own AI journey, they commit most of their budget (70-80 per cent) to adopting AI, only to find that, without proper integration, these investments fall short. Business leaders making the decision to bring AI into their organisation usually focus primarily on the investment made in the technology. In their minds, AI adoption can have direct effects on making the organisation run more efficiently, so that putting AI to work will have a straightforward effect on enhancing productivity and increasing profit. This rather narrow-minded focus means that the interests of the employees are assigned less weight during the introduction and integration of an AI project in the organisation. And this is a problem, because AI does not have a direct effect on the efficiency of the organisation. Whether AI creates impact will depend first on the willingness of employees to use it, and of customers to accept the use of the technology in how they’re being helped and served. So, AI adoption budgets need to include sufficient resources for the implementation phase, as well, where business leaders will have to work with the human workforce to ensure that AI is used effectively, and jobs are redesigned to allow AI to augment the abilities and performance of employees. It goes without saying that this can be a costly affair and any budget needs to take those costs into account.
AI is not the same as human intelligence. Leaders need to clearly understand that artificial intelligence and human intelligence are two different things. In the business world, we seem to be suffering through a cycle of hype with AI that does not match reality, because leaders don’t understand this crucial distinction. The hype has made people overly optimistic about AI’s capabilities to the extent that many people increasingly think that AI systems are already matching human intellectual abilities. They believe that it is only a matter of time before AI can perfectly replicate the human brain. And when that happens, expensive and not-always-efficient employees can be replaced by much cheaper AI, capable of self-learning. This kind of thinking, however, is overly optimistic and unrealistic and may even turn out to be dangerous. Brain scientists themselves argue that our understanding of the human brain, with its roughly 86 billion interacting neurons, is sketchy and provisional at best. With such incomplete knowledge about the brain, we cannot seriously say that we have succeeded in matching human intelligence with AI. At best, we have brought to the fore a narrow kind of computational intelligence that can complement our human intelligence. But not replace it. And it’s only by realising this truth about AI that leaders, in my view, will be sufficiently AI-savvy to create a narrative where they will be able to explain to their workforce why and how AI should be used, considering the business goals that need to be pursued. Because only then will they understand themselves and make others see that AI adoption is still about humans first and machines second, because AI is used to augment human intelligence.
You do not need to be a tech expert! Executives in my advanced leadership classes feel so much pressure with regard to AI that I’ve heard some of them wonder aloud if they needed to transform themselves into professional coders to be effective leaders on AI. But be assured that acquiring coding expertise or uplifting yourself to become that tech expert is not the level of AI-savviness that business leaders need. What business leaders need most is a foundational understanding of AI, and that includes learning about AI at two levels. First, learn the basics of what AI is and what it is not. Second, think about what AI is about in your business context, so it can drive your discussions with your tech experts with regard to what kind of AI will be most suitable to use. This, of course, means that leaders need to evolve and keep learning about the developments taking place in the AI field and how those advancements will impact business practices.
How can leaders foster a culture of innovation and experimentation while still mitigating the potential risks associated with AI adoption, like job displacement or privacy concerns?
We all know that innovation can only emerge if we experiment, which means that we test assumptions, fail, try again, and eventually succeed in delivering solutions for our problems. It’s an important task for any business leader to create such conditions at work, and doing so requires interpersonal skills that allow for building a psychologically safe place to work. A place where trust exists and failures are not held against you, but seen as a learning opportunity. With AI becoming part of our work culture, experimentation is increasingly being seen as riskier. As I mentioned earlier, most business leaders lack a basic understanding of AI and so refrain from allowing room for experimentation to turn AI into a value creator. This means that, even though AI is believed to drive innovation to greater heights, this “innovation value” is often not achieved, because AI is not put to the test in collaboration with the workforce. An important task for business leaders today is therefore to empower their workforce to use AI, experiment with it, and provide feedback on how AI performs within the setting of a human workforce. This kind of information is necessary if one wants to integrate AI effectively in the company’s workflow. This means that leaders need to try to create flat communication cultures to ensure that feedback about the use of AI quickly reaches both the teams that need to work with AI and the experts, who can then see how AI may need to be used differently to create the expected business impact. But this is not where it stops. In addition to giving leeway to employees to use AI and report on it, leaders need to create goodwill among employees, so that they are open to the idea that they will have to work with AI in the future, and hence their jobs will be affected in certain ways. Creating that goodwill and trust implies that leaders create work conditions where employees feel in control of their job and still experience a sense of autonomy in how they work. 
In other words, they cannot feel that their job will eventually be controlled by the technology implemented. Here, it is important that business leaders stress the company’s strategy that AI adoption is about humans first and technology second.
As AI becomes increasingly integrated into organisational processes, what opportunities do you see for leaders to leverage AI to enhance employee engagement, productivity, and overall well-being?
This is an important question because I see too many companies that are introducing AI as something that must happen, and where the perspective of the employee is largely ignored. If this happens, then it’s almost a certainty that the AI adoption process will suffer, because business leaders will have failed to introduce AI as a tool that can benefit employees as well.
To prevent this situation from happening, it is first of all important that AI adoption is seen as an augmentation strategy. Of course, for routine and repetitive tasks, automation is nowadays accepted as the default choice. Jobs that require little creativity and that include the processing of massive amounts of data are increasingly being automated. But it is important that, as a business leader, you can make it clear what purpose the automation strategy serves. In other words, make it clear to employees that automating the routine and repetitive tasks fits a strategy that is focused on making employees a better version of themselves in the work context. If business leaders remain silent and automation is seen as the primary strategy, the consequence will be a less skilled workforce. Under such circumstances, people’s jobs will become fragmented until the entire job is gone. When this happens, then the unique human qualities that AI does not have will also be absent. So, the use of AI is not to create fewer learning opportunities, but instead to enrich job content and add cognitive responsibilities for employees to learn and grow in their expertise and become better at what they do and empowered in their confidence and abilities. AI-savvy business leaders need to devote serious time to carefully preparing and redesigning jobs for the augmentation strategy to succeed.
Second, in the context of augmentation, the leverage of AI must be seen as a holistic strategy. Using AI in augmentative ways cannot – from both a practical and normative perspective – be seen as a unidimensional strategy focused on solely promoting people’s efficiency and productivity. Humans are not unidimensional, rational task completers. They derive pleasure from other sources, and performance will not always improve just because people have been exposed to ways to improve efficiency and productivity. Humans also want their work to be intrinsically motivating and meaningful, not just maximally efficient or productive. Indeed, the human condition holds that people can be motivated by a need to feel competent, included, respected, and confident, and to be seen as moral and curious.
A holistic approach is thus needed for any AI adoption project, as it will allow AI to be leveraged to promote employees’ performance (and thus productivity) and, at the same time, positively affect the work identities and motivation of your employees. And this dual effect is the one that any company needs in order to become an AI-driven organisation without changing or adjusting its core identity and values. In fact, as it turns out, every AI adoption process has two sides. On the one hand, advances in AI promise exciting opportunities to dramatically reinvent your business, unlocking opportunities for increasing productivity, optimising processes, and creating value. On the other, certain aspects of your business should hold steady – your core identity, your commitment to customers, your attentiveness to employee well-being, and more – so that you do not lose the essence of what makes your business unique. The rapid advancements in AI must not undermine the personal touch and purpose of your company.
Looking ahead, what do you envision as the next frontier of research and practice in the intersection of leadership and AI?
In my book, I focus on how leadership is needed to make AI adoption successful by stressing that transforming your organisation to be AI-driven requires even more attention to humans – by means of empowering, inclusive, and purpose-driven leadership – than to technology. To make this perspective successful, leaders need to understand their responsibilities in relation to making the use of AI acceptable to all their stakeholders and using it in ethical ways. Having said this, there are, of course, also other ways of looking at the relationship between leadership and AI.
One other way is to consider how AI can be used by leaders in their decision-making. Leaders need to navigate their organisations in ever more dynamic and noisy circumstances, and the analysis of massive amounts of data is then a prerequisite to be successful. This is a place where AI will play an important role. So an important challenge for leaders is also to figure out how to use AI as an assistant in gathering information and putting that information to use. Obviously, a certain level of AI-savviness will still be required here, as business leaders need to be able to assess both the potential and limitations of the computational value of AI to their leadership decisions.
Another potential avenue for the study of the integration between leadership and AI concerns whether leaders can delegate some of their tasks to AI. The rapid developments in generative AI provide great opportunities for leaders to create bots that can sometimes do the talking for the leader as a first source of information. This is possible, as audio data would allow the bot to speak like the leader, while the AI allows for all knowledge available about the leader to be used for content delivery. Furthermore, research has also provided evidence that generative AI can accurately reproduce people’s personalities as derived from the available data. Of course, having a leader bot available when the leader is not present requires high levels of transparency (everyone should know they’re communicating with the leader bot) and clear communication on why the bot is being used, so as not to lose credibility and legitimacy as a business leader. At the end of the day, people value “authenticity”, especially when it comes down to their leader. Any AI solution that forms a part of business leadership will therefore always have to be seen as an addition that is complementary to the real thing.
How do you plan to contribute to this evolving landscape?
I see my contribution mostly in my roles as a business school dean, a scholar, educator, keynote speaker, and consultant.
As dean at D’Amore-McKim School of Business (Northeastern University), I have taken steps to redefine our mission and vision so we can integrate the pivotal task of educating our future business leaders to be sufficiently AI-savvy to create business value by using AI effectively and responsibly. We aim for our students to be responsible business leaders who can act, navigate, and create in a tech-driven environment. To succeed in that kind of education, business schools do not need to train business leaders to become tech experts, but to equip them so they see technology as a conduit and can acquire the necessary skills to put AI to use so that it creates the business value that they want to see. Technology develops so quickly that we do not need to train them to know what kind of AI will be available in five years (that’s difficult to do, anyway), but rather teach them tech and human behaviour literacy, so that they are able to work with whatever AI is available then.
As a scholar and educator, I see it as my task to keep putting out critical thinking about the relationship between AI and humans and how it will affect our organisations and society at large. One observation that I want us to be cognisant of is the fact that the use of AI is still a choice that we can make. If you listen to everything that is being said about AI these days, it is relatively easy to believe that AI is a kind of magical force that no one can escape from. It is presented as something that is inevitable and will be used in any facet of life, whether you agree with it or not. Granted, AI is an amazing tool and brings a great power with it to transform our society in significant ways. But we do have a choice on how and when we want to use it. Humans created AI and we did so primarily to serve humanity. If, because of corporate pressures, we narrow down the use of AI to being a means of promoting efficiency that humans will not be able to compete with, then we adopt a reductionistic approach to humanity. From a moral and humanitarian perspective, this is not necessarily a route that we must walk. And it’s this kind of awareness that I want to keep alive for all of us.
As a keynote speaker and consultant, I see it as my task to bring a behavioural, human-centred perspective on AI to the bigger stage of the corporate world. In business, people are quickly swayed by the newest trends and fads, and this is also true for AI. I think that, as thought leaders, it is our job to think through different scenarios and present those insights to the parties whose use of AI will impact all our lives.
Given your extensive experience in both academia and consulting, what advice would you offer to leaders who are grappling with the complexities of AI adoption within their industries?
The message, at the end of the day, will be the same, which is that, regardless of any technological breakthrough, organisations will always need leaders who are knowledgeable about the changes that are happening — with AI currently being a significant change in our organisations and society — and are able to translate their leadership responsibilities into the specific context of that change. Specifically, as a leader, I see it as necessary that today you push yourself to ask the question of what AI should be about in the context of your company. If you don’t have an answer to that question, if you do not see what AI is for when it comes down to your company, then maybe you should not use it or you should not be the one leading the AI transformation of your company. You need to be able to make such decisions. The changes that intelligent technology brings to organisations are clearly non-negotiable and will make our organisations look and operate differently in the coming decade. What will not be different, however, is that businesses will continue to require strong business leaders to guide any technological transformation — leaders who participate, connect, communicate, and lead more than ever with a vision and purpose when AI enters the organisation.
About the Author
David De Cremer is currently the Dunton Family Dean of D’Amore-McKim School of Business and professor of management and technology at Northeastern University (Boston), and an honorary fellow at Cambridge Judge Business School and St Edmund’s College, Cambridge University. Before moving to Boston, he was a Provost’s Chair and professor in management at the National University of Singapore and the KPMG endowed professor in management studies at Cambridge University. He is the founder and director of the Centre on AI Technology for Humankind (AiTH) in Singapore, which was hailed by Times Higher Education as an example of interdisciplinary approaches to AI challenges in society. He is one of the most prolific behavioural scientists of his generation and is recognised as a global thought leader by Thinkers50. His best-selling books include Leadership by algorithm: Who leads and who follows in the AI era? and his newest book, The AI-savvy leader: 9 ways to take back control and make AI work, published by Harvard Business Review Press in 2024.