This year, the World Economic Forum (WEF) has chosen a compelling theme: ‘Rebuilding Trust’.
As outlined on the WEF’s website, it’s all about rekindling trust in our future, fostering cohesion within societies, and fortifying the ties among nations. Following the disruptive upheavals of 2023, a dose of trust-building feels appropriate.
Notably, AI took the spotlight at the Forum for the first time, alongside other crucial themes like security, jobs, and climate. This acknowledgement underscores the profound impact AI is set to have on the global stage.
Given its dominant presence in the headlines, AI’s central role shouldn’t raise many eyebrows. And homing in on ‘Trust’ is an important guiding principle for business leaders navigating AI in 2024 as they align their strategies with ethical considerations and societal expectations. In a world increasingly shaped by technological advancements, instilling trust is imperative for sustainable and responsible innovation.
Charting AI’s Next Phase: Innovation Anchored in Responsibility
The AI landscape is brimming with potential, offering an opportunity to unleash genuinely creative human work: freeing time for innovation, opening doors to novel ways of working, and layering machine-driven data analysis onto our decision-making.
But it’s not without challenges. Operating without responsibility risks damaging confidence, empowering malicious actors, and causing significant harm. Navigating a course that seamlessly integrates both innovation and responsibility is paramount.
Over the past year, global efforts have been made to address this. The UK hosted the AI Safety Summit, which produced the Bletchley Declaration, the inaugural international agreement on a secure AI framework, endorsed by 28 countries. The EU reached agreement on the AI Act, and the US crafted an ambitious Blueprint for an AI Bill of Rights. Numerous nations are refining their own AI governance strategies, showcasing a collective commitment to responsible AI advancement.
Looking ahead to 2024, I foresee sustained momentum toward AI regulation, emphasizing ethical deployment, transparency, and robust risk management. This should help usher in a safer, more accountable global environment for AI development.
To assess the current landscape, EY recently published a revealing report on key global AI regulatory trends. Standout trends identified in the report include the adoption of AI governance frameworks, compliance systems, risk-based action plans, industry-specific rules, and comprehensive approaches in sync with other digital policy priorities. The report also finds that collaboration among policymakers, the private sector, and civil society is critical for successful implementation.
These trends have significant implications for business leaders and policymakers worldwide. Remaining agile and well-informed is imperative if businesses are to navigate the evolving legal terrain. Policymakers, meanwhile, face the challenging task of crafting effective regulation while moving toward convergence with other key jurisdictions – without stifling innovation. Together, they play a pivotal role in guiding investments to translate regulatory initiatives into tangible growth for the global AI sector.
So, what’s my message for business leaders who gathered in Davos looking to chart an AI course in 2024 that includes both responsibility and innovation? Here are three critical steps:
- Understand your legal, governance, and compliance obligations. As a crucial first step, businesses must develop a deep understanding of their responsibilities under the laws and regulations of every jurisdiction where they do business, and establish policies and procedures designed to meet them. This is how companies meet the expectations of investors, regulators, and other stakeholders. Some key new AI regulations have significant extraterritorial implications, and organizations will also need to understand how new AI codes and regulations interact with existing laws, including sector-specific regulation.
- Implement strong AI governance procedures at all levels of the organization. This comprehensive approach should encompass governance frameworks, clearly defined responsibilities, a meticulous inventory of AI use, and stringent controls, spanning from the board to the operations level. Establishing an AI ethics board can play a pivotal role in providing independent guidance to management, particularly on ethical considerations in AI development and deployment.
- Be proactive – initiate conversations with regulators, governments, NGOs, and other stakeholders. These engagements help businesses gain an in-depth understanding of the continually evolving regulatory landscape. They also contribute potentially invaluable information and insights for policymakers as they navigate the complex process of shaping rules and regulations, fostering collaboration and enhancing the effectiveness of regulatory frameworks.
2024 is undoubtedly a pivotal moment for AI: a balancing act for both policymakers and business leaders as they pursue AI advancement, innovation, and responsibility. As we reflect on Davos, these conversations assume the utmost significance. Succeeding isn’t merely an ethical choice; it’s an imperative.
The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.
About the Author
Julie Linn Teigland has nearly three decades of experience in professional services for international clients. Her focus is on transformation processes, in particular the challenges of digital transformation, and she is committed to the sustainable development of capital markets and their framework conditions. Julie has served as lead partner for several Fortune 500 clients.