Technology, says Hamilton Mann, is not in itself sustainable or positive – it depends on how we choose to develop and apply it.
In this illuminating Radar 2024 LinkedIn Live session, Hamilton underscores the importance of incorporating different perspectives and the key role of citizen identity when deploying AI systems. He also advocates for an ‘artificial integrity’ approach, where AI technologies are designed not just for intelligence but also to align with human ethics, values, and principles across diverse cultures. This requires cooperation across governments, business, and individuals, to avoid a ‘digital divide’ and ensure technological progress benefits all segments of society equitably.
Transcript
Stuart Crainer:
Hello, I’m Stuart Crainer, co-founder of Thinkers50. Welcome to our weekly LinkedIn Live session, celebrating some of the brightest new stars in the world of management thinking. In January, we announced the Thinkers50 Radar Community for 2024. These are the upcoming management thinkers we believe individuals and organizations should be listening to. The 2024 list was our most eclectic and challenging yet. This year’s Radar is brought to you in partnership with Deloitte and features business thinkers from the world of fashion, retail, branding and communications, as well as statisticians, neuroscientists, and platform practitioners from the Nordics to New Zealand and Asia to America. Over the next few weeks, we will be meeting some of these fantastic thinkers in our weekly sessions, so we hope you can join us for some great conversations. As always, please let us know where you are joining us from and send any comments, questions, or observations at any time during the 45-minute session.
Our guest today is Hamilton Mann. Hamilton is the group vice-president of digital marketing and digital transformation at Thales, a global leader in advanced technologies investing in digital and deep tech innovations, connectivity, big data, artificial intelligence, cyber security, and quantum technologies. Hamilton spearheads initiatives that drive enhanced customer engagement, excellence in integrated campaigns, and sales and marketing effectiveness. He’s a senior lecturer at INSEAD and elsewhere. He actively participates in driving the advancement of digital technologies for positive societal change as a mentor at the MIT Priscilla King Gray Center and hosts the Hamilton Mann Conversation, a masterclass podcast about digital for good. Hamilton, welcome. Your work spans a broad spectrum of subjects including digital transformation, artificial intelligence, sustainability, innovation, business models, and customer-centric strategies. Is there a golden thread which holds it all together?
Hamilton Mann:
Yes. Hi, Stuart, and I’m very happy to have the opportunity to talk with you. I think that’s a very good question because it gives me the opportunity to jump right away to one key overarching topic, which is ‘digital for good’. I think the common thread between all those different elements, pieces and parts is very much the question of how we can make sure that the use of the technologies we can leverage serves the cause and the mission of delivering positive outcomes in societies.
Stuart Crainer:
And where are we on that do you think, in digital for good?
Hamilton Mann:
I think we are doing quite well in terms of the direction, because more and more there is a consciousness growing in the heads of many leaders around the world, with companies making sure that sustainability and society are very much part of their strategic agenda. But, yes, we do have some improvements to make. Let’s not be shy about this aspect. We have some improvements to make. This is also something on which we need to continue our education. As we see technology evolving, we also need to evolve and raise our level of proficiency, understanding how those new technologies bring forward new opportunities for positive impact in societies. So I will say, let’s continue the work and let’s continue to improve ourselves. This is a journey.
Stuart Crainer:
But I suppose the continuing question must be how can we ensure that technology is a force for good? What do we need to do? Is it the work of governments? Is it the work of corporations? Is it the work of a body? Is it the work of individuals?
Hamilton Mann:
I think the good news in that aspect is that this is very much a collective work. This is the work of governments, this is the work of corporations and companies, this is the work of each and every one of us as users. This is very much work on which each and every citizen, each and every person, has something to say and, let’s say, a kind of role to play. So this is very much a collective type of work, and the more we advance the technology and the kind of power that technology can bring to society, the more it is going to be critical for us to be able to work not in a siloed form of organization, but transversely.
So whether it is mixing disciplines, mixing perspectives, mixing cultures, mixing diversity as well in terms of the way we think and the way we see the world, I think the stake in the era in which we are living is very much how to work transversely and how to work in a diverse manner, so we make sure that we take a good grasp of the intelligence that we as humans can put on the table when we are different and when we bring different perspectives.
Stuart Crainer:
I suppose that an issue there is that the power in the technology space still resides in a very small number of companies in Silicon Valley. How do we control that going forward? And do you see that changing?
Hamilton Mann:
It is true that, as you said, we have some pockets of power in some parts of the world, and the US of course is definitely one of them when it comes to technologies. I think this comes back to the point I was making about seeing the perspective not from a silo approach, but seeing the systemic effect of all that we are doing. Technology is a system, but it is not a system that lives in a vacuum. It is a system that takes place within another one. And the other one is broader, is bigger: it is society, it is the world. And so the point here is to figure out that, even though we have some pockets of power when it comes to technology in some parts of the world, the global system in which those different pockets of power live is the world, is society, and it touches most of us.
So in terms of the opportunity to participate in the integration of those different technology systems into the broader system, which is society, in terms of the opportunity to participate in that conversation and in that installation, there are a lot of us who have the power to influence and guide the right direction in terms of what should be the definition of those new technologies integrating into the broader system, which is basically our societies.
Stuart Crainer:
And thank you to everyone who’s joined us so far. I see people from Germany, India, the States, Poland and elsewhere. Please send over your questions for Hamilton at any time during the session and we will pass them on. Hamilton, what’s really good about your perspective of the world, I think, is that it’s very positive and optimistic, obviously not naively, but you see technology as a force for good, and the challenge to us is to create systems within society and within corporations to make sure we deliver on that.
Hamilton Mann:
Yeah, but I will say, when we talk about digital for good, we sometimes tend to think that this is just about the positive aspects and the positive impact that technology can bring in the world, which is, of course, absolutely a key piece. But to really act on it, to execute real strategies when it comes to digital for good, it starts by acknowledging the fact that technology in itself is not inherently sustainable or positive, for many reasons. There is a fair contribution of technologies to environmental impact, and we need to acknowledge that.
We also need to acknowledge the fact that technology by itself does not have exclusivity when it comes to innovation and when it comes to progress. And actually sometimes technology, even advanced technologies, can be the opposite of progress. So it very much starts by looking at the risks, looking at the externalities that can come with the good intentions that we can foresee with technologies, and trying to figure out how to cope with those externalities, how to be very objective in terms of how we are going to deal with those different unforeseen impacts, because they might happen, because they will happen, and we need to have a plan for that.
So this is to me a point where we need to look at the glass half full, but not only that, we also need to look at the glass half empty to make sure that all the externalities that definitely naturally come also with the technologies are very much managed in a good manner. And this is to me what makes the difference between a good transformation and good technology transformation no matter what kind of ecosystem it touches versus a digital for good approach.
Stuart Crainer:
So it is a really interesting statement that technology is not in itself sustainable or positive.
Hamilton Mann:
Yeah, definitely. I think it starts with taking a step back: let’s look at reality as it is. We all know that technologies come with a form of externalities in many aspects. If we take the example of AI, we of course know now that AI comes with great opportunities and great advancements that we can foresee for society in many domains, but we also know that it comes with some major water consumption, is a key contributor when it comes to electronic waste, and is a key contributor when it comes to energy consumption, etc. So the question is very much about not being naive in terms of, oh, this is good or this is bad, because real life is a mix of the two.
So we need to figure out what is very much the service and the good aspect of it that we can deliver to societies to participate in the progress that we seek. And this is very much where the real approach comes into play: considering all the different impact aspects and negative externalities, etc, that also come with it, and looking at the way to manage all that in a proper way.
Stuart Crainer:
And so technology is a force for democratization of knowledge and of societies?
Hamilton Mann:
I think the short answer will be yes. But, again, complexity is always where we can fine-tune the approach to have the real perspective, the less biased perspective if I may. So when it comes to education, of course there are some great opportunities in leveraging technology, just because we know that some technologies, like simple video conferencing, etc, are of great help when it comes to sharing, to the opportunity of accessing knowledge, accessing people from all around the world to have exchanges, to share ideas, etc, and to spread the opportunities of leveraging education in many aspects. The reality is that this is absolutely not true in each and every country around the world.
We know that in many countries around the world, access to Wi-Fi networks and electronic devices and so on and so forth is not democratized. So it means that when we think about the leverage that we can have using technologies to spread even more power when it comes to giving access to education, it also means that some parts of the world will stay behind, because the technologies and the means to do so are not right there or not widespread so far. Again, it means that beyond technologies, looking at the system, and looking at technology as a system that needs to be integrated into societies, which is basically the broader system, the question is how do we democratize education when sometimes the means to have technology working, and the technology’s presence in some parts of the world, are very low.
So this is always a form of rationale to have in mind: figuring out that everything is not equal so far. When you do something with technologies in one part of the world, it doesn’t mean that you can take the same aspect and have the same positive impact in each and every part of the world. So you need to figure out those different perspectives and make sure that you do not create a form of digital divide, thinking that you are having a positive impact in the world in one aspect, when you’re just thinking about the world as it is formed in your head, not the world as it really is.
Stuart Crainer:
Frank Calberg from Switzerland develops that point, Hamilton, he says, “What concrete initiatives do we need to take to make sure that technological advancements increase incomes for people with low incomes and avoid even higher differences in incomes between people?” What needs to happen?
Hamilton Mann:
I think so many things can be done. Of course, there is not one single solution or silver bullet that can solve that issue. But what comes to my mind is that we know that in many organizations, as far as our societies are concerned, identity is key. The fact that we can give each and every person, each and every citizen in a given country, the opportunity to have an identity. It sounds like something quite obvious in many countries around the world, but the truth is that this is not accessible in many other parts of the world. And when you are able to provide that identity for each and every person that lives in a given country, it starts to create a form of society where you can organize many things.
You can organize social security, you can organize healthcare, you can organize employment, et cetera. All the different layers that we need as organized societies to organize and structure the way society works somehow start with the fact that we are able to know that you are Stuart Crainer and you have a social security number and an ID card, et cetera. So one point that comes to my mind, in terms of what can be done to make sure that the usage of technologies can serve to leverage, in all different parts of the world, the opportunity of generating incomes in many populations, could be making sure that we advance our agenda when it comes to giving citizens around the world the power of having a formal administrative identity.
Stuart Crainer:
Where is best practice then, in understanding technology and harnessing its power? Which countries do you think really understand its power and are harnessing it successfully?
Hamilton Mann:
I think there are many great examples in many countries. You always see some kind of best practice in different sectors and different activities. If you take the example of Asia, in China, etc, they have been leveraging technologies in a very interesting way when it comes to organizing, for example, some kinds of services through social networks, so they leverage the social networks to deliver services. I think the uberization also, as we call it, with the many applications that live in our smartphones and tablets and whatsoever, has also been, in some aspects, a good way of handling the different tasks of our everyday life, which also contributes in a good way to societies.
So I will not put a spotlight on particular countries as good examples of how to do it, but rather look at how we can leverage the advancement of technologies today to tackle some key areas in terms of domains or sectors: thinking about transportation, thinking about healthcare, thinking about employment. Those critical pillars are very key to any society, and on them there is progress to be made from one country to another, and there is great opportunity going on with AI and with some new technologies coming up.
Stuart Crainer:
Frank comes back with another point, which I think is an interesting one, what are the advantages and disadvantages of surveillance technologies, and how do we democratize the development of ethics in relation to the development and use of surveillance technologies? I think this relates, Hamilton, to you talking about, and I think there’s a really good Forbes article by you, talking about artificial integrity, which is a really fantastic term. Perhaps you could talk about that, because obviously, as Frank says, there’s a lot of issues around technology and the ethics of a lot of the technology.
Hamilton Mann:
Yeah, definitely. So I think when it comes to technologies, and surveillance is one type of application that is of course very much on the minds of many of us, it is very much about how to embrace those use cases and applications of technologies in our day-to-day life in a way that will be ethical, in a way that will also be in sync, in harmony, with the values that we want to push forward. And when you put the point this way, of course there is no one single answer. And this is to me the first point that is very important to have in mind: there is no one single answer to that question, because this is the encounter between technology on one hand and the values of a given part of the world on the other, and the harmony between the two.
So you will have some countries where they will push the level in terms of how they want to leverage technologies when it comes to surveillance for some good reason, let’s say a security aspect, for example, and they will push that level to a certain degree because they find a form of harmony with the values that they push forward in that given part of the world. So it doesn’t mean that it is not going to be ethical. It is going to be a form of ethics from their perspective, from their culture, and from their value stance. If you look at another part of the world and you try to figure out the way to implement the exact same technologies without taking into account the culture and the values that are very much at the heart of that part of the world, it will not work, because, of course, you need to deal with a different perspective compared to the other one.
So this is also where comparison is absolutely not a reasonable way of approaching this aspect. And this is also why I’m currently developing this concept of artificial integrity, because of course being intelligent is great, it helps to do many things, many tasks, but at the end of the day, we want more. We want more than the task being done. We want the task being done the proper way, the right way. And with the proper way, the right way, comes the fact that we want things to be done in alignment with values, with principles, with ethics, et cetera. It doesn’t make sense to just have things done intelligently in a vacuum of any form of ethics and values, and so on and so forth.
To me, this is basically the study that I’m looking at currently, which is about how to make sure that the systems and technologies that we develop and implement will not only be developed and implemented with intelligence, will not only be delivering intelligence, but will also be designed with the purpose of being in harmony with a form of integrity that we need to preserve. So back to the point and back to the question, there are going to be different answers depending on the countries and the people that you are talking to, and this is also where we need to recognize that technological advancement is not, as we love to say in marketing, something to see as one-size-fits-all. The same form of algorithm, the same type of system, will mean different things depending on whether you are living in China, Africa, France, et cetera, and so we need to have that respect.
We need to have the respect to look them in the eyes and to respect the values and ethical stance that structurally compose a given culture, because they will look at the technologies in different ways than we might, and they will do the work of integrating whatever form of advancement, power or progress can come with it, taking into account their values, their ethics, their point of view, et cetera. I think it is also true that there has to exist, or at least has to be considered, a form of overarching, universal principles, ethics and values. So we take into account the different perspectives that exist all around the world, which compose the richness of who we are, but also some of the universal guiding principles that come into play to define those different cultures and values, and sometimes create bridges between one and another.
I think this is the challenge that we have today with the new technologies: not only, technologically speaking, managing how to develop AI that will be intelligent and will mimic what we can do humanly speaking, but also being able to think about how those technologies will embrace the diversity that exists, considering the values, the different ethics, and the cultural aspects that exist all around the world, because, at the end of the day, those different technologies that we develop are going to mirror the different stances that we have from an ethical and value perspective. That is number one.
And we are living in a connected world, so it means that even though we are different in terms of perspective and ways of looking at things, we also need some form of principles that are overarching and that allow us to have a common way of seeing life and a common way of seeing the world. So this is to me what is at stake when it comes to those different applications of new technologies, taking surveillance as an example, but there are many others; we can talk about drones and whatever.
Stuart Crainer:
It’s a very complicated and interesting area. Obviously lots of comments coming through. Jonathan, who’s joining us from the Bay Area in the States, says, “Technology can democratize information sharing and knowledge transfer, and we need to decrease the digital divide, increase infrastructure bandwidth, improve health, digital, and financial literacy.” I think we’d all agree with you, Jonathan.
Hamilton Mann:
Yeah, absolutely.
Stuart Crainer:
Jonathan also says it’s about intersectionality, which it may well be. Bogdan says, “Value creation for all stakeholders versus value creation for shareholders, which is stronger when it comes to technology?” That’s a question that puts you on the spot, Hamilton.
Hamilton Mann:
Yeah, I think so. And thank you very much, Bogdan, for that question. I think, if I may, we have to challenge ourselves when it comes to wanting a very simple approach, where the answer will be yes or no, good or bad, stakeholder or shareholder, et cetera. This is very much something that will challenge us, and it is true from an organizational perspective, so of course it is absolutely true from a country perspective, because the more we advance with the technologies and the capabilities that technologies can bring to our society, the more we need to be thinking cross-discipline, cross-interest, cross-perspective, so the question will be more about how to serve the given ecosystem.
And, to me, shareholders and stakeholders are very much part of the ecosystem; it is about finding the common point that reunites the different groups and parts that we sometimes see as separate entities, and looking at a holistic way of composing with those different groups. So we do have to find a way, in many aspects, to make sure that technology is not serving one group at the expense of the other, and can very much be a help to holistically serve the ecosystems. And to me this is a point on which we also have work to do collectively. It’s not the technologies in themselves that will bring us the answers. It is very much our thinking, our mindsets, in terms of how we implement those technologies in our societies. And this is back to the approach of digital for good: how do we implement these technologies in our societies, making sure that we are not serving one group at the expense of another, but more holistically serving the ecosystem that is involved.
Stuart Crainer:
I see we’re joined by Simone Phipps, co-author of African American Management History, who’s championed the idea of cooperative advantage. And really what you are talking about, Hamilton, is a higher level of collaboration and cooperation.
Hamilton Mann:
Yeah, definitely. And I think this is very interesting because when it comes to cooperation and collaboration, and I’m sure many people on the call know this, if I look at a given organization, this is still a challenge; it has always been a challenge for any leader to have cooperation and collaboration within a team. Let’s put aside technology for a moment. We are all people, humans, citizens in many aspects, and we are interacting with each other. And we have many situations and occasions where we are part of a team, part of a group, and we’re trying to achieve something together. It has always been a challenge to make that equation work: one plus one equals three. So now, when it comes to cooperation and collaboration, let’s make sure that we are, of course, inclusive enough to grab the power that can be brought by the technology, inclusive enough to bring the new form of intelligence that the systems we call artificial intelligence can add to the collective intelligence.
This is a new form of cooperation coming up, a form of cooperation where not only do you have to deal with different brains and different perspectives coming from different human beings, but you also need to deal with the contributions that come from those artificial intelligence systems, which will add something to the cooperation and collaboration equation, and to what we call collective intelligence today. So I think it forces us, in a way, to accelerate our understanding of how to leverage cooperation and collaboration in our ways and manners of interacting between us, because the technology will not bring the answers. Again, it is very much going to be us as humans figuring out the right way of including a new form of intelligence coming from the technologies in the space of what we call collective intelligence, and looking at an efficient way of having that collaboration and cooperation interplay.
And so another way to say that is, if I take the race that is going on today, with many companies all around the world trying to experiment with GenAI and AI, I think the companies that will be very much ahead of the pack will not be the ones that leverage the most sophisticated AI or technologies or platforms, et cetera. They will be the ones that have understood how to include this new form of intelligence coming from the systems in the overall system, which is the organization, bringing new forms of collaboration and cooperation into play. And this is going to be the challenge, in a way.
Stuart Crainer:
Surabh has got a difficult question for you. She says that Elon Musk said too much cooperation is a bad thing. Which … Elon Musk may well have said that, I don’t know. It’s not, is it? You can’t have too much cooperation?
Hamilton Mann:
Too much cooperation, I don’t know what is the definition behind the too much cooperation, but I would say that actually to me the point is very much about what are we trying to achieve. So if we are very aligned in terms of what we are trying to achieve, what the purpose is, what the mission is that we’re trying to execute or deliver, too much cooperation will never be a blocker for achieving that. As soon as we make sure that we know how to solve that equation, one plus one equals three, because instead of saying too much cooperation might be a form of a trap, I will frame it differently and say that too much bad cooperation is very much the trap.
So I think cooperation working is not just having people in the same place or on the same team, so that you say, “Okay, they are on the same team, so they are cooperating, they are going to cooperate.” Maybe they are telling the team, “Let’s make sure that the people on the team are cooperating.” It doesn’t work this way. And we all know that. It always makes me think about, for example… and I’m sure we all have memories of this from when we were very young, and actually from looking at our children: you go to a square and you have 10 children, girls, boys and whatsoever, trying to play a match, play football, for example, and you will observe very interesting things. Very quickly, the ball is going to be very much the only focus, and so nobody is going to be looking at the goal. Every child is looking at the ball and trying to figure out how to make the best possible performance.
And of course they feel like they can cooperate, but we could say that there is some room for improvement in how they can be more effective. So cooperation is not something that you just declare and then it happens. It is very much related to how you manage people, of course. And we know there are good managers and bad managers. Hopefully good managers do have some form of recipe for how they can approach these good answers, solving that equation, one plus one equals three. And, again, when it comes to the technologies, you need to figure out how to include them in that equation so you do not have something separate, like when you put oil in a glass of water and you have the water on one side and the oil on the other.
So this is very much, to me, the challenge behind this. So, not a simple answer. It looks quite straightforward to think that too much of this or that could be at stake and an issue, and of course an extreme in any form of perspective might be an issue. But to me the point is very much about how you deal, first, with managing the people that you have, because you put people in a place to do something and to achieve something, and it is going to be the art of making the cooperation work in order to deliver. And second, making sure that when you integrate and implement technologies to be part of the equation, to be part of the collective intelligence, you do so in a form of harmony, so you preserve a form of integrity between the interactions of the people and the integration of the technologies with those people. So this is something to look at case by case, and this is a bit of a science, but also a bit of an art.
Stuart Crainer:
They’re the worst things, aren’t they?
Hamilton Mann:
Yeah, absolutely.
Stuart Crainer:
Tami Madison says, “The difference between relational engagement versus transactional.” And I think you are talking about relational engagement. Let me ask the killer question, the thing we all want to know: what will the world look like in 2030 and 2050? I sense, Hamilton, that you're optimistic, that you think we have the capacity within us and within our organizations to harness the very best of technology to make the world a better place.
Hamilton Mann:
Yeah, definitely, but let me draw something to answer that question. The framework I look at today is that artificial integrity is very much about how these new forms of intelligence can play a role in our societies, taking into account that they do not operate in a vacuum. So picture a quadrant: the X-axis runs from low to high value added by humans, and the Y-axis from low to high value added by AI. That gives us four domains in which we need to answer that question for 2030 and beyond.
First, the bottom-left quadrant, which I call the marginal mode. This covers activities where the human is absolutely not well utilized, where the intelligence coming from humanity is not well utilized. That creates a lot of negative impact, demotivation, and so on. In that quadrant you also have very limited opportunity to bring value from advanced technologies like AI, because those tasks deliver outcomes that do not justify the related investment, so there is no real return. So the first critical question is how we evolve the human workforce, making sure we employ human intelligence the right way as we move forward. That's the first point.
And the answer to that question ten years ago is absolutely not the same as it is now, and of course it will not be the same in 2030. So that's number one. Moving to the bottom-right quadrant, which I think is where many of us are today, I title this the human-first mode: the intelligence brought by humanity, by each and every one of us, is critical to the outcome we want to deliver. So even though we want to leverage AI and new forms of technology here, the difference between good and bad will very much be the implication and contribution of a human in the loop. In this mode, we have a very critical question to address: what are the critical domains in which we consider that a human needs to be in the loop?
And that is something that encompasses many sectors and many domains, healthcare among many others. Moving up to the top-left quadrant, you have what some people may call AI first. And here, let's be optimistic: we have great opportunities to discover things that will very much advance progress in societies, because these new forms of technology will help us figure out, discover, and solve problems we have not yet had the opportunity to solve. So this is going to be very interesting, and the critical question for societies will be: what should the priorities be? What are the big challenges on which we want to invest a fair part of the technologies we have in hand, so that we crack the code, as we say, on those big challenges that we cannot solve on our own?
And in the upper-right quadrant you have the best of both worlds: the criticality of what we as humans can bring to the table in terms of intelligence, combined with what AI can bring to the table that we absolutely cannot bring by ourselves. Put the best of both worlds together and you have what I call the fusion mode: a form of harmony, a new collective intelligence, as we were discussing, where machine systems and humans deliver things in a form of intelligence whose equation we do not yet know how will function. But I definitely think we will have some clues in the five to ten years ahead.
Stuart Crainer:
Hamilton, we’re out of time. Hopefully the new connected intelligence is just around the corner. I loved your message. I really get the sense of our potential to take control of the technology and to help it shape a better world. It’s a really powerful message. And I think the emphasis on cooperation, that one plus one really can equal three, is fantastically affirming about the future. There are some links on the side to Hamilton’s articles in Forbes, which are always worth reading. He produces them regularly, so keep up to date with those. Hamilton has promised a book in the future, so look out for that. And I think artificial integrity is going to be one of the big issues, and Hamilton is leading the way in discussing it. So Hamilton Mann, thank you very much, and thank you everyone for joining us from throughout the world. Thank you.
Hamilton Mann:
Thank you. Thank you very much. Thank you.
This article was originally published in Thinkers50. It can be accessed here: https://thinkers50.com/blog/digital-for-good-ai-fit-for-progress/
Hamilton Mann is the group vice-president of digital marketing and digital transformation at Thales. He is also a senior lecturer at INSEAD, a mentor at the MIT Priscilla King Gray Center, and host of the Hamilton Mann Conversation, a masterclass podcast about digital for good.