An Exclusive Interview with Professor David De Cremer, Founder and Director, Centre on AI Technology for Humankind, NUS

The employment of artificial intelligence is proceeding ever faster all around us – not least in business and commerce. Will its use be driven by purely commercial forces or can we control it to the overall benefit of society? Professor David De Cremer of the National University of Singapore believes we need to aim for AI tempered with humanity.

Hello, Professor De Cremer. Thank you for taking the time to talk to us. Just to orient ourselves a little, I wonder if we could start off with rather an obvious question. “Artificial intelligence” is one of those terms that we hear and use quite a lot but, when challenged, we may have differing ideas about what exactly it means. What, for you, is artificial intelligence? And what is the relationship between AI and algorithms?

It’s always good to start with a clear definition, because I’ve noticed over the years, when studying the results of the surveys that big consultancy companies carry out, that many executives, in response to questions such as whether their company has adopted AI or not, refer to many examples that technically speaking cannot be defined as AI. So, there is still a lot of ambiguity about what AI actually means and how it can be used.

The simplest definition of AI is computers showing actions and decisions that seem intelligent. As it is a machine that displays these decisions and actions, we call it “artificial intelligence”.

In my view, the simplest definition of AI is computers showing actions and decisions that seem intelligent. As it is a machine that displays these decisions and actions, we call it “artificial intelligence”. It’s not human intelligence, but it imitates or models it. Computer scientists design algorithms that represent a model in line with specific calculative rules to make predictions. Usually, in the case of supervised learning, this prediction model is based on training data and then used to make predictions with respect to new situations (i.e. new data). In its essence, AI is an elegantly framed version (albeit in somewhat mysterious ways to lay people) of statistics.
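
To make that “framed statistics” point concrete, here is a minimal, purely illustrative sketch of supervised learning (assuming Python with NumPy and scikit-learn; the numbers are invented for the example): a model is fitted to training data and then used to make predictions for new situations it has never seen.

```python
# A minimal sketch of supervised learning as "framed statistics":
# fit a simple model on historical (training) data, then use it to
# predict outcomes for new, unseen situations.
# Assumes scikit-learn and NumPy are installed; the data is purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

# Training data: e.g. hours of machine usage vs. maintenance cost (made up)
X_train = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y_train = np.array([10.2, 19.8, 30.5, 39.9, 50.1])

model = LinearRegression()
model.fit(X_train, y_train)          # "learning" = estimating statistical parameters

X_new = np.array([[6.0], [7.0]])     # new situations the model has never seen
predictions = model.predict(X_new)   # predictions extrapolate the patterns in the training data
print(predictions)
```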

Could you give us some background on your work at the National University of Singapore Business School Centre on AI Technology for Humankind? What were the drivers to the setting up of the Centre?

At AiTH, we believe that the development of AI technologies must be understood and examined in the context of collaboration and co-creation with humans. Our aim is to study, explore and develop deep insights into how AI technologies should be advanced with human-centred choices, promoting creativity and happiness whilst serving and enhancing human identity. This may sound a bit abstract, but the main gist is that, given the way AI is developing and the amazing, and even wishful, prospects regarding the use of AI, it is necessary to reflect on, and study, how we can use AI in ways that benefit humanity. In other words, deciding to adopt AI in your organisations and societies has to be motivated by human-centred concerns. In our centre, we are not obsessing too much about the potential threat to the existence of humanity as a whole, as these horror stories are not a reality right now or in the next few decades, but primarily focus on the question of whether the choice of AI and automation will facilitate the well-being, effectiveness and performance of the human end user. So, we need to find the perfect balance to enjoy the benefits and opportunities of AI, whilst ensuring these advances serve our human identity and values. A fear we do have is that if AI is adopted in rather mindless ways – without a reflective attitude – we can easily start down a slippery slope where the way we work, interact and manage our societies adapts to the way the machine works. And, if this happens, then society will over time become more suitable for machines to live in, rather than for humans. It would entail a shift from humans being the ones served towards humans serving the machine itself.

An important driver was that in the West I saw that a focus on the humane and ethical implications of AI employment was increasing. From tech entrepreneurs in Silicon Valley to academics in risk centres, more and more questions were being asked about the relationship between the increasing use of machines and the possible deterioration of humanity. In Asia, however, it was relatively quiet when it came to these rather existential questions. China, as the dominant nation in Asia, of course has a strong focus on advancing AI technologies, but the way it looks at governance, ethics and the use of technology for human welfare is quite different from what we have seen in the West so far. As a result, the rest of Asia has so far also not devoted much attention to the issue (although this is changing very rapidly). Singapore is often applauded as the place where East and West meet, so, in my view, it’s the perfect place to have a centre like ours, where we ask existential questions that will, and should, occupy the whole world. We also notice that many companies in Asia have questions about these issues, so I like to think that, with the establishment of our centre, we can accelerate the demand for more human-centred development and adoption of AI technologies in this region.

You’ve recently written a book, “Leadership by Algorithm”, in which you discuss the range of issues connected with the ever-increasing presence of artificial intelligence systems in society, including the notion of fairness. Could you enlarge on the theme implied by the tag line of the book: “Who leads and who follows in the AI era?”

AI is the new hero in the corporate world and, by extension, society.  And, to some extent, this is not a surprise. Almost daily, we can see examples in the news, social media and research on how AI can perform the most amazing tasks and solve challenges. It creates the hope and belief that AI will improve our lives significantly and, as such, it is almost a requirement that it needs to be implemented at different levels of decision-making in our organisations. The more AI becomes involved in the decisions we make, the more aware we are also becoming that, because it’s fast, consistent and more accurate than humans, AI in itself may become a threat – a threat to our jobs. Much debate exists about whether AI will replace humans in their jobs. From a cost-cutting perspective – which most companies adopt – loss of jobs will definitely happen. At the same time, however, voices are also out there saying that the loss of these jobs will be compensated for by the fact that the employment of AI and automation will actually help people to deal with the more complex and creative aspects of their jobs. The reason for this is that AI will take over the routine aspects of the job. As a result, many new jobs could also be created. This may work if companies actively invest not only in AI adoption, but also in job enrichment, when AI becomes a co-worker; but I do not see that happening much yet.

Another, and less spoken about, threat is that algorithms today are guiding us in the decisions we take and the information we receive and read, and ultimately framing our understanding of the world around us. For example, in the Netflix documentary “The Social Dilemma”, it is reported that having algorithms scan people’s habits and preferences – which requires collecting their personal data – determines the kind of information those people will be shown via search engines, AI platforms and so forth. So, in a way, AI is already guiding us. Just look at the fact that we stop for a red light and move on when the light turns green. Taking all of this together, I see in my classes that many executives and senior leaders have formed a belief that AI is everywhere and that everything will change. They are truly afraid of this. In fact, I regularly have senior leaders asking me whether they should not become a coder in order to stay in charge of their own destiny and career.

Many of today’s systems arrived among us as a result of evolution. We may think of the road transport system, in which people in metal boxes hurtle towards each other on the same track at very high combined speeds. The system was never discussed or planned, but we accept it because it evolved into existence. The Internet and smartphones, too, came by evolution. It seems that we went flat-out in the name of progress, but omitted to plan and educate on the best ways for people to absorb these systems into their lives. The result is that it often appears that the systems control us, rather than us controlling them. As you said in a recent webinar on your book on AI, “We should reflect a little bit more.” In the context of AI, is now the time to pause and think through all the issues? And should we all receive appropriate education about its adoption?

I consider the time that we are living in today as a crucial one. After having been able to replace physical labour with machines, we are now in an era where we may be able to replace the human mind. So, if both the human body and mind can be replaced, we are witnessing a crucial moment in our existence, where we could become obsolete. Such a moment in time requires some serious reflection on what it is that we want to achieve with these new technologies and how we can ensure that the end goal will still be the promotion of a humane society. First, such an attitude implies that we will have to start looking differently at how we will design and manage our work floor in the future. Second, it also suggests that we may need to revise the way we educate our children.

With respect to the future of work, one big movement that has started, and has accelerated due to COVID-19, is the push for employees to reskill themselves. On one hand, this is quite a normal response, because the pandemic in particular has pushed organisations and society to adopt AI platforms more quickly in their operations. As such, employees need to be at least somewhat tech savvy, so they understand the developments that are taking place (“Why is it taking place and what does it mean for my job?”). However, we run the danger that these reskilling programs push the hard, rational side of the job too much. That is, people may come to think that, in the world of the machine, we all need to become coders and understand the new technologies as well as any data scientist. I know many people who are afraid of the future because of these concerns. And this is a problem, because if this were the case, then we would indeed be building a world that will fit machines best. In line with this trend, universities and societies seem to think it almost a necessity that everyone thinks like an engineer or a computer scientist. And this brings me to my second point.

Humanities, social sciences and humanistic perspectives are increasingly being seen as a luxury to study, because they do not add to the “machine” skills we want people to pursue.

The problem with this trend that I see is that humanities, social sciences and humanistic perspectives are increasingly being seen as a luxury to study, because they do not add to the “machine” skills we want people to pursue. If this trend were actually to materialise, then we would not only be reskilling, but also deskilling, our people in their ability to be human and possess the unique qualities that define us as humans, such as perspective-taking, seeing and reflecting on the big picture, emotional intelligence, interpersonal skills and creativity. And, in my view, this is very much reflected in the education of our children. At a young age, our children are monitored and evaluated on cognitive dimensions judged to be important for future (business and tech) careers and trained in ways that emphasise rationality, consistency and the avoidance of failure. My daughter was evaluated at age two by her teachers, who showed concern if she did not score well on any of those dimensions. The whole culture was breeding a concern that kids would lag behind if they were slower in their development, as if they already had to meet certain (kids’) KPIs. When I mentioned that she is exposed to three languages at home and that, as such, her brain at that moment was probably a chaos she would make sense of over the years – meaning that I was not yet worried about her development – I was met with a certain disbelief. Under such a regime, children are kept away as much as possible from experiencing any failures, while excessive emphasis is placed on the need to be as perfect as possible, as soon as possible. It brings with it a pretty mechanistic approach to creativity, with everyone doing more or less the same thing in a structured environment, as the teachers were using fixed metrics. In an ironic way, I felt that we were training our kids to become algorithms, rather than having them run around freely, explore, and experience failures they could learn from without any measurements being around.

And some of my fears are reflected in a number of countries that use a competitive and rational model of education. For example, while their students achieve the highest scores in mathematical skills, they are at the same time among the most anxious in life (to the extent that in Singapore, for example, students have reached such levels of fear of failure that they are afraid to pick up their test results). So, for the future, education will have to make sure that the interpersonal and emotional development of our children continues to be taken care of. In addition, if we push everyone to think too much like a machine, we run the risk of not developing their general sense of intuition and their ability to reflect. It’s my opinion that, at this moment, we do not need to develop more experts (that focus is already very much present in our educational systems); rather, we need more generalists who can see the “big picture”, identify challenges and come up with the questions we need to pay more attention to.

You have commented that there is a distinction between management and leadership. Could you enlarge on that idea? What is its significance for the employment of AI in business organisations?

There is the famous saying that everyone can be a manager, but not everyone can become a leader. In the scientific literature, a clear distinction is made between a manager and a leader, and this distinction is recognised by many in the corporate world. For example, for decades we have heard the same refrain in the business world: that companies have too many managers and not enough leaders. What does this mean? Well, in a world where things change quickly, companies want people who can adapt to those changes and, hence, come up with creative and effective solutions. Such an attitude, of course, implies that people think outside the box and do not get stuck in the habit of continuing to do what they’ve always done. In fact, many organisations suffer from this problem, in which change projects are usually met with an attitude of “we’ve always done it this way, so why change?” And this situation is mainly created by how we look at and execute management.

Management has become a very metric-driven business, so to speak. Management is needed to keep the organisation relatively stable and well-structured, and we use many metrics to assess whether this is indeed happening. But, of course, the world is volatile, and being competitive requires agility. Hence, companies today are not served well by managers focusing on the status quo. So, if a company has too much of a management mindset, then it’s difficult to adjust, because the primary focus is on the short term and meeting KPIs and, as a result, the company is slow to see new opportunities and develop approaches to gain from them. It is for that reason that the business world, in its aim of achieving a more agile mindset, is asking for more leaders and fewer managers. Leadership is the ability to give direction in times of change in order to create the value that the collective wants to see. Leadership does not deal with the status quo, but more with chaos. And, in this chaos, leaders have the responsibility to create a culture, and thus a mindset, where people feel motivated and empowered to create value in more complex and uncertain settings. To do so, leaders need to facilitate and create conditions that allow others to do a better job.

This distinction between management and leadership is very relevant to how AI will play a role in helping to run an organisation. We adopt AI to make us more effective. Our focus is on innovation, and technology plays an important role in that. But, if we look at how we run our organisations, I see no innovation at all. In fact, our management philosophy is more than 100 years old and has not changed at all. In 1911, Taylor wrote a book on the scientific principles of management and, thus, management by system was born. Today, we are champions – by means of our metrics – in managing, to the extent that we are actually box-tickers but not innovators. From this perspective, we focus on routine and status quo, and this will lead to a situation where AI can replace us very easily, because, after all, routine tasks are the primary tasks AI is superior in. So, in my book, I say that management by algorithm (MBA) is definitely happening and, if we’re not careful and don’t train more in the soft skills that make us uniquely human, we run the risk of losing our leadership capabilities and ending up running organisations in automated ways. In that case, the work culture will feel robotic and even more metric-driven.

Given the increasing use of artificial intelligence in business and industry, will business of the future be a question of competition for who has the best algorithms? Or, rather, might it be more a question of who is most successful at integrating the human- and machine-based processes into the running of the business?

Well, in my view, both will be needed. Competition will definitely be there, but there is a high risk that such competition will translate into monopolies. AI runs and learns based on data. Because we talk of the digital age, we therefore also assume that every company should have data in abundance, but this is not entirely true. If we adopt AI for relatively simple routine tasks which can be done by means of supervised learning, quite a number of companies will be able to provide the needed data. However, when tasks become more complex, and especially if unsupervised learning and reinforcement learning enter the equation, then most companies simply do not have the data. So companies who have access to more data to train machines will have an advantage, and those companies are the big names everyone knows – Amazon, Facebook, Google and so forth. Also, if we look at the biggest companies in the world, most of them are tech companies, which signals a kind of winner-takes-all trend; those companies have most of the data in the world and, because of their position, they will also take all the business. Such monopolies may therefore create other challenges whereby technology does not eliminate, but rather promotes, inequalities. We have to be careful here.

AI fairness is an important topic right now, because the more we start involving AI in our decision-making, the more we are becoming aware that the outcomes that our algorithms produce can also be biased.

With respect to integrating the human- and machine-based processes into the business, if you decide to adopt AI platforms within your organisation then, yes, it will be important that machines can be integrated in ways that enhance the performance of the workforce and thus promote the business processes at play. This implies, as I also elaborate on in my book, that organisational leadership needs to think in inclusive ways, so that diversity of thought is encouraged with the aim of creating a climate of digital inquisitiveness. Such a climate can help to integrate AI where the most value can be created and to encourage employees to provide continuous feedback, with the aim of improving the efficiency of AI use in collaboration with humans in a business setting. An inclusive mindset will also help to ensure that data scientists do not work in silos and are connected to business experts who can make clear what kind of business value needs to be created, why this is the case, and what the business processes are that algorithm programmers need to be aware of (as this may help prevent biases creeping in).

In your book, you discuss the concept of fairness in relation to machine-led decision-making, and suggest that perhaps human intervention could be the instrument of mitigating an AI system’s shortcomings with regard to fairness. This would seem to place great responsibility on those charged with such intervention. Who would be appropriate arbiters in the evaluation of AI-based solutions in terms of utility versus fairness?

AI fairness is an important topic right now, because the more we start involving AI in our decision-making, the more we are becoming aware that the outcomes that our algorithms produce can also be biased. For many believers, this is quite an inconvenience. Because AI is rational and knows no emotion or hesitation (in contrast to humans), it should logically reveal unbiased outcomes, but apparently this is not the case. And it’s not such a surprise, because AI learns from historical data, so, if the data reveals trends that are recognised as biases today (e.g. in the past more men than women were hired for a specific job), then the outcomes that the algorithm calculates will be equally biased. What this example illustrates is that, first of all, AI does not have a sense of awareness of what kind of moral norms society endorses today, whether those norms have changed over time, how people feel about certain outcomes, and whether they are willing to accept them. This is, of course, not a surprise, because AI has no intentions, no moral compass and cannot be called either good or bad for these reasons. It has no way of feeling, knowing and explaining what it means to be fair, ethical and trustworthy and what that means to humans. Therefore, I recently outlined in a Harvard Business Review piece (https://hbr.org/2020/09/what-does-building-a-fair-ai-really-entail) that AI fairness will have to be a collaborative process in which the human ability to be less biased when judging other entities (in this case, AI) than when judging ourselves can help in evaluating the outcomes calculated by algorithms. So, yes, responsibility for the fairness of AI will ultimately still rest with humans.
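
As a purely hypothetical illustration of how historical bias flows into algorithmic outcomes (assuming Python with NumPy and scikit-learn; the hiring data below is invented for the example), a model trained on a record in which mostly men were hired will score a male candidate higher than an equally experienced female candidate:

```python
# A hedged, made-up illustration of historical bias reproduced by an algorithm:
# if past hiring favoured men, a model trained on that record will tend to score
# male candidates higher, even though the algorithm itself has no intentions
# or moral compass.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [gender (1 = male, 0 = female), years_of_experience]
X_train = np.array([
    [1, 5], [1, 3], [1, 4], [1, 2],   # past male applicants
    [0, 5], [0, 3], [0, 4], [0, 2],   # past female applicants
])
# Historical outcomes: the men were hired; equally qualified women mostly were not
y_train = np.array([1, 1, 1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# Two new candidates with identical experience, differing only in gender
candidates = np.array([[1, 4], [0, 4]])
print(model.predict_proba(candidates)[:, 1])  # predicted hiring probability is higher for the male candidate
```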

As you have pointed out, leadership supposes ability in soft issues, such as compassion, tolerance, empathy – in short, humanity. Moreover, for humans, a leader is required to have vision and be capable of inspiring and motivating. These are characteristics that are currently lacking in AI systems. When discussing AI systems, we tend to talk of “the machine”, and it seems unlikely that humans could feel empathy with, or warmth towards, a “machine”; hence they would be unlikely to submit to leadership by it. Can you see a day when, whether through neural imprints or some other mechanism, an AI system might garner enough “humanity” to gain the confidence and, perhaps, the allegiance of human employees?

I’m not sure whether an AI system will ever understand humanity and act in “authentic” humane ways. But it can certainly imitate them and create certain affiliations with humans. For example, in the field of robotics, we see that we like to construct machines in ways that make them look more or less like humans. Why is this? Well, psychologists call this the process of anthropomorphism, where non-human agents are made to look like a human, so we attribute to them humanlike characteristics, motivations, intentions and emotions. And this has positive effects. We tend to be more accepting of robots that look like a human, have a human voice, display human emotions and so forth. Our brain recognises these humanlike attributes and almost immediately our biological system responds. Of course, these are not “authentic” emotions or humanlike intentions, but imitated ones; nevertheless, they do influence us humans at an unconscious level. But there is also an interesting phenomenon related to this effect, which is called “the uncanny valley”. This effect refers to the fact that, if a robot looks too much like a human, then the positive effects that I have just outlined will be eliminated and people will show aversion to the robot. A possible explanation is that a robot that is almost indistinguishable from a human poses some existential threat and activates “us versus them” thinking.

You refer in your book to Alan Turing’s test of intelligence, in which a system’s intelligence is evaluated by its behaviour, regardless of the internal processes that may be going on invisibly inside that system. In simple terms, if the system’s behaviour is indistinguishable from that of a human, it may be said to be “intelligent”.  But, as you have observed, although such behaviour may look “real”, in fact it’s only an imitation which we know is not backed up by understanding. Is it enough for a decision-making system to imitate understanding? At what point would such a system be found lacking?

Turing’s ideas are still important today and were developed in a time when no attention was paid to how people were thinking, but rather to how they were acting. In other words, the mind was considered a black box and only behaviour was considered to be a good indicator of what people were doing and why (i.e. the behaviourism paradigm). So, in the Turing test, the idea was that a human was communicating with a computer in another room. If the human communicator could not distinguish whether a computer or a human was in the other room, then it meant that the machine was able – in line with behaviourism – to act as humans do and was thus considered intelligent. As we know now, imitation can do a good job and manage tasks as well as – or even better than – humans do. However, there are, of course, limits to this.

First of all, people may accept advice from a computer and work with those guidelines, but once they find out that the source of the advice is a computer, it only takes one bad piece of advice for them to discount the machine entirely. On the other hand, if a human provides advice, he or she is not evaluated as negatively as a computer after a first failure. This example makes it clear that humans do prefer the “real thing” to deliver advice and help in decision-making. Second, in situations where people feel more personally involved or feel they are under scrutiny to be evaluated for something that matters to them, they do not so easily accept technology that can imitate a human very well. For example, if you are being evaluated for a bonus, a promotion or a new job altogether, then most people prefer another human to make the decision and not a machine. Our own research shows that when algorithms are used to evaluate people in terms of who they are, which is the case when being considered for a job, these people show dislike towards an automated decision-maker (algorithm aversion). One reason for this is that AI is a machine and people believe that machines do not have the ability and empathy to know what it means to be a human. And, for that reason, people consider it inappropriate for a machine to evaluate humans at their core. Only other humans should be allowed to evaluate humans in such a way, and the reason for this is that humans have their own AI, which I call “authentic intelligence”.

There are numerous ways in which many of us have become dependent on what we have come to think of as “technology”, such that we would have great difficulty in functioning without it. As an example, many of us are so lazy in our use of satellite navigation systems that we consign all responsibility for choosing a route to “the machine”. Unfortunately, when the system fails, we are likely to be even more lost than if we had not used the system in the first place, simply because we haven’t even bothered to think about the matter. Moreover, even our ability to navigate for ourselves might have become compromised through lack of use. Might we see a similar dependence growing as a result of the increasing presence of AI systems in our lives, as we cede our perceived responsibility for our own destiny to “the machine”, and simply lose the habit of analysing and responding to situations for ourselves?

Many of the current examples of humans “following” AI systems take place in settings where a choice can be made. When Amazon, Netflix or YouTube say “You may also be interested in these …”, their suggestions are driven by complex algorithms; but (for the moment, at least) we have a choice about whether to follow their “lead”. (Whether we choose to exercise that choice is another matter.) But that’s different from a situation where the algorithm tells us what to do – and offers us no choice. When we reach that stage, we’ve made a fundamental transition, where we have ceded authority to a non-human system, perhaps without even realising we’ve done it. When we arrive at this point, an important question to address will indeed be whether there should be some kind of “government health warning” to advise that all or part of some given instruction originates from a machine. And I do believe that should be the case.

Our own research shows that people definitely do not want to cede authority to an algorithm. Across a variety of work situations, we arrived at the conclusion that people have no problem with delegating some part of the job to AI, but they always want to have control over the final outcome. We found that humans wanted on average 70% of control over the job, and the machine was allowed up to 30%, which indicates that the majority vote will have to stay in human hands. But, if we look at platforms like Netflix or YouTube, then we see that at a more unconscious level we do follow more easily than we consciously want. And the fact that we may not always be aware of our tendency to follow the easiest path, often delivered by algorithms, will likely make it a necessity that some warnings or labels will have to be used in time to increase awareness of who is making the decisions, especially if it comes down to the kind of values and ethics we are following.

In your book, you put forward the idea that “AI may become our new boss.” This would seem to be quite a highly charged statement, and one that has a range of implications. Do you think we are going to need some mechanism for assigning a level of authority or “power” to AI systems? Shouldn’t “the machine”, too, have some obligation to demonstrate its suitability to command?

A job is, of course, more than simply doing one task, just like running a company is also more than simply calculating a strategy. So, AI becoming your boss will not quickly happen and, if it does, then the lower-paid jobs will be hit first.

I start with this idea, yes, but I do not conclude the book with it. This idea has become popular among thinkers who focus on AI as a serious existential threat – as can be seen in Hollywood movies like “Terminator”. From that perspective, AI will develop into a supernatural force that will have the power to eliminate humans. But, as I make clear in my book, we need to be more realistic about what it is that AI can do and what it cannot do. And, if we engage in that thinking exercise, then it becomes clear that the AI we know today will outperform us when it comes down to routine tasks, which will lead to the earlier-mentioned management by algorithm (MBA) effect. But, a job is, of course, more than simply doing one task, just like running a company is also more than simply calculating a strategy. So, AI becoming your boss will not quickly happen and, if it does, then the lower-paid jobs will be hit first. As a case in point, assembly line workers at Amazon are facing such a situation, as some of them were fired by an algorithm, without a human supervisor necessarily being involved. So, here you see the real MBA already happening, because these jobs are easy to reduce to the execution of one task that can easily be monitored and measured. The end result, of course, will then also be that humans are reduced to numbers that are evaluated by a machine. In that case, we will be on a path to creating a world that is more suitable for machines than humans. After all, humans will be required to start acting like a machine, and that is not really the reason why we want to – and should – apply AI technologies.

For humans to feel any kind of allegiance to an organisation, it’s important that they feel appreciated. Even if an AI system were able to give a perfect imitation of appreciative behaviour, it would be obvious that, in fact, the system had no understanding of the concept of appreciation. Of course, it may be argued that many human managers also synthesise their feelings of appreciation of their subordinates. Nevertheless, even such feigned appreciation might have more credibility than that apparently shown by a machine. Will it ever be possible for an AI system to replicate appreciation in any meaningful way?

As I mentioned earlier, when information is provided that is relevant to how people look at and define themselves, they are very sensitive to the credibility of the source. In those situations, no matter how nice the machine’s voice may be and how respectfully the text is constructed by the computer (based on, for example, natural language processing abilities), the information will always be seen as not that relevant, because it comes from an entity that does not know what it means to be human. Will AI ever reach that stage? Well, in a way it depends on whether people will ever come to believe that the computer has reached a stage where it has some consciousness with which to understand humans’ concerns, needs and emotions.

But, to give you a more precise answer, let’s take a look at Data, the AI-powered synthetic life form in the Star Trek franchise. In several episodes, Data shows remarkable intelligence and does things no human can do but, at the end of the day, Data does not succeed in understanding human emotions. How emotions feel, how they make people act in sometimes unpredictable ways – it all remains a mystery to the machine. This example suggests to me that it will take a very long time, if it ever happens, before we develop – or help develop – a machine with self-awareness. And, as long as this does not happen, we will not experience a machine’s appreciation as meaningful.

Finally, you are very much involved in the debate on AI and the future. The issues are clearly very significant and are certainly going to have a huge impact on society. Are you optimistic about the future, or worried?

Optimistic, because I choose to look at life like that, but also because we have developed something that can bring amazing value to our society and our efforts to bring more welfare and happiness. But I’m also a bit worried that, in our pursuit of expanding our horizons, we may not stay humble enough to assess continuously whether our technological developments are helping humanity or simply showing how capable we are as a species of designing “the unimaginable”. If we do not show that humbleness and stay aware of human-centredness as a core value in any technological development, then we may think we are in control, whereas the reality may be that we have ceded authority already – without being aware of it. Another point of concern is that we are overselling the power of algorithmic authority today to our businesses. As a result, too many organisational leaders believe that AI can solve many of their problems and, at the same time, cut costs significantly. A kind of “blind belief” has come to the surface, and it’s deceiving us. An important reason is that most of our business leaders are not tech savvy enough to understand the potential of AI while, at the same time, remaining realistic about its limitations. At the end of the day, no matter how sophisticated the adopted AI platform may be, we still need business leaders who are able to demonstrate good leadership every day and succeed in using AI in ways that facilitate – and do not overtake – the effectiveness of their workforce.

Executive Profile

David De Cremer is Provost’s Chair Professor in management and organisations at NUS Business School, National University of Singapore. He is the founder and director of the Centre on AI Technology for Humankind at NUS Business School, a platform developing research and education that promotes a human-centred approach to AI development. Before moving to NUS, he was the KPMG endowed chair professor in management studies at Cambridge Judge Business School (CJBS) and is currently an honorary fellow of CJBS and St Edmund’s College, Cambridge University. He was named one of the world’s top 30 management gurus and speakers in 2020 by the organisation GlobalGurus, was listed among the top 2% of scientists in the world in 2020, and has published more than 300 articles and book chapters. He is also a best-selling author, with his book “Huawei: Leadership, culture and connectivity” having sold more than one million copies. His most recent book, “Leadership by Algorithm: Who leads and who follows in the AI era?” (2020), has received critical acclaim from, among others, the Financial Times and the World Economic Forum.
