EU AI Act

By Jacques Bughin

Alongside the lightning-paced advancements in AI, a number of ethical quandaries have emerged, evidenced for example by deep fakes and incidences of bias. Here, Jacques Bughin puts the case for regulation combined with responsible implementation. 

Ethical AI on the move 

In recent years, the rapid development of artificial intelligence technologies has led to growing concerns about their ethical implications. There have been numerous cases of AI technologies being misused, including the recent deep fake linked to Taylor Swift on X, or the fake video of Ukrainian President Volodymyr Zelenskyy surrendering. In fact, a report by the Stanford Institute for Human-Centered AI found that AI incidents and controversies have increased 26-fold since 2012.  

The fundamental question, then, is whether the technology should be released from its sandboxes without careful rules governing its use. The problem is that some actors may be malevolent; others, such as private companies, may simply be reckless, failing to internalise major social risks and racing ahead at all costs in the hope of market leadership.

In this vein, the New York Times published an article on the race for AI and the risk of reckless AI. The paper described how two Google employees, echoing a similar attempt by employees at rival Microsoft ten months earlier, “tried to stop Google from launching an AI chatbot that was likely to generate inaccurate and dangerous statements”. As the NYT article also noted, “both companies released their chatbots anyway. In the race to lead generative AI, it’s better to be first and worry about things that can be fixed later.” Note that while Elon Musk criticised Microsoft and Google for their arms race and called for a pause in AI development, he has since changed his mind and launched Grok.

AI ethics: to regulate or not to regulate 

Obviously, we should not kill the golden goose of AI. AI innovations are already at the heart of major gains in productivity, with AI so embedded in many of our daily activities (from taxi hailing, Google search, and call centres, to product recommendations and weather forecasts) that we easily forget how powerful it is.  

However, the issue with AI is that it combines multiple problems, from misuse to the opacity of its algorithms (the “black box” problem) to the major biases that AI models can inherit from the data on which they are trained.

One solution to these issues is regulation that imposes a level playing field. Building on its General Data Protection Regulation (GDPR), the European Union has emerged as a frontrunner in addressing the ethical challenges posed by AI through regulatory measures such as the European Artificial Intelligence (AI) Act. Spearheaded by European Commission Executive Vice-President Margrethe Vestager, the AI Act, which will become law around April 2024, is a clear test of business ethics, balancing the need for technological progress with the notion of doing the right thing.

Everyone knows that regulation is not always optimal. The EU AI Act, we believe, has clear drawbacks: because it is essentially an ex ante regulation, the full compliance costs may be hard for some small companies to bear, as also argued elsewhere. Second, regulation may not be needed per se if firms can develop an awareness that the wrong kind of AI carries huge risks, for example brand reputation damage or worse (remember the Cambridge Analytica data scandal, which led the FTC to fine Facebook US$5 billion and drove CA into bankruptcy).

However, the main issue with regulating AI at the firm level is that ethical AI is not a typical business-ethics matter of the kind we have seen with board independence, corporate social responsibility, or inclusiveness. It is far harder to implement. Not only does it require an AI foundation in terms of technical infrastructure, data, and skills, it also requires AI models to be closely audited and monitored, with accountability assigned across the organisation. This is no small task.

Ethical AI operations in practice 

We have recently tried to assess the adoption of responsible AI (RAI) practices among listed firms in major European markets. According to the survey, 94 per cent of European listed firms expect to have developed AI principles in line with their AI strategy by 2024, more than double the rate in 2021. However, only 41 per cent of firms feel that RAI is sufficiently embedded in the daily work of all employees, highlighting the need for further integration and for ways of operationalising RAI practices across organisational functions.

Based on multiple discussions, scholarly reviews, case studies, and our own experience, companies’ success in day-to-day AI ethics rests on two pillars: a) a complete journey, and b) adequate staffing.

The journey 

Drawing on Eitel-Porter’s work, the journey consists of 1) the organisation’s choice of ethical principles, 2) the setting up of an AI ethics board, 3) a robust governance process that clarifies how decisions around responsible AI are made and documented, 4) company-wide training on responsibilities and usage, and 5) a rigorous stress test of the AI practices.

While the journey may seem like common sense, a few tips: firms should select the ethical principles that matter most for building trust and reputation in their industry; the ethics board should include genuine external experts in both ethics and AI; and the governance process should be staged, starting, for example, with data accuracy and traceability, moving to model development, and only then to production. Training is also a must-have, to ensure that corporate AI principles are applied at all levels of the organisation. One approach to stress testing is to establish “red teams”, as in the world of cybersecurity, where “white hat” hackers are used to test enterprise defences: “Applied to responsible AI, Red Teams constitute data scientists charged with reviewing algorithms and outcomes for signs of bias or the risk of unintended consequences.”
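By way of illustration, a red-team review of a model’s outcomes can start with something as simple as comparing favourable-decision rates across groups. The sketch below is purely illustrative and not drawn from any specific firm: the column names, the example data, and the 0.8 “four-fifths” flag threshold are assumptions, and real audits would use the organisation’s own data and fairness criteria. It computes two common indicators, the demographic parity gap and the disparate impact ratio.

```python
import pandas as pd

def fairness_snapshot(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Compare favourable-outcome rates across groups and flag large gaps.

    df          : one row per decision scored by the model
    group_col   : protected attribute column (hypothetical name)
    outcome_col : 1 if the model produced a favourable decision, else 0
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    reference = rates.max()  # use the best-treated group as the reference point
    report = pd.DataFrame({
        "positive_rate": rates,
        "parity_gap": reference - rates,        # demographic parity difference
        "disparate_impact": rates / reference,  # ratio vs. best-treated group
    })
    # Informal red flag: disparate impact below 0.8 (the "four-fifths rule")
    report["flag"] = report["disparate_impact"] < 0.8
    return report

if __name__ == "__main__":
    # Hypothetical scored decisions, for illustration only
    decisions = pd.DataFrame({
        "group":    ["A", "A", "B", "B", "B", "A", "B", "A"],
        "approved": [1,   1,   0,   1,   0,   1,   0,   1],
    })
    print(fairness_snapshot(decisions, group_col="group", outcome_col="approved"))
```

A real red team would of course go much further, examining error rates, calibration, and unintended downstream effects, but even a simple, repeatable check of this kind makes bias reviews documentable and auditable, which is what the governance process described above requires.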

Resources 

There is no rule here per se. As a point of comparison for full regulatory compliance, banks devote roughly 0.5 to 1 per cent of their total costs to compliance risk monitoring. Typical high-tech companies seem to have an ethical AI team of about 20-40 people.

But, just as importantly, teams tasked with designing AI should be multicultural, as a guard against unconscious bias. Likewise, the ethics board should be high-profile to demonstrate commitment, and should be responsible and accountable for its actions.

 

About the Author 

Jacques Bughin

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He is a retired McKinsey senior partner and former director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC / PE firms, and serves on the board of several companies.
