How AI Helps European Companies Strengthen Their Ethics, People, and Adaptability

By Laetitia Cailleteau, Philippe Roussiere, and Josh Elkind

AI could become Europe’s next instrument of responsible enterprise, making companies fairer and more responsive to change.

Across Europe, discussion around AI often centers on potential risks. These concerns shouldn’t be dismissed. But, when used well, AI can help companies make more ethical decisions, invest more effectively in their people, and adapt more quickly to shifting conditions. This article examines the opportunity to turn AI into a force for responsibility.


Across Europe, the public conversation about artificial intelligence is focused not only on the technology’s benefits, but also on how it can be misused—such as how algorithms might reinforce bias, invade privacy, or automate decisions that ought to remain human. These concerns are valid, but they also obscure a quieter, more constructive possibility: AI can help companies behave responsibly.

In the European tradition, “responsible” business means more than regulatory compliance. It means aligning the pursuit of profit and growth with social purpose, by safeguarding the interests of workers, communities, and the environment.

Our research shows that while this ideal remains deeply rooted in European business culture, many leaders now see technology as a new way to uphold it. For example, when we recently surveyed 3,000 executives—across 19 industries and 18 countries, including nine in Europe—nearly all respondents (98 percent) said that AI represents a major opportunity to rethink governance, reinforce human-capital development, and strengthen collaboration and accountability within teams.

However, employees are more skeptical: The 3,000 non-executive workers around the world whom we surveyed were 17 percentage points less likely than executives to agree that AI will help companies act responsibly. In Europe, the gap was 20 points.

This trust deficit matters. It makes securing critical employee "buy-in" for companies' AI efforts more difficult. It also suggests that executives would do well to devote more time to communicating how AI can facilitate responsible business. And, most obviously, it shows that for many companies, turning AI into a force for responsibility remains more an aspiration than a reality.

In this article, we draw on our research and client work to identify three essential ways in which AI is already making some organizations more responsible: by embedding ethics into decision-making, by supporting people’s growth, and by helping companies adapt better to new challenges. We then examine what leaders can do to make these benefits real.

AI can embed ethics into decision-making

AI can help companies behave responsibly by exposing how decisions are made and offering pathways to mitigate potential risks. That’s because, when properly applied, AI allows leaders to track outcomes effectively, flag inconsistencies, and ensure that commitments to fairness and privacy are observed in practice.

Researchers in Italy, for example, tested an AI system to make loan screening both fairer and more accurate.i Working with data from over 60,000 loan applications, the team designed a model that excluded sensitive factors such as gender or ethnicity and focused only on financial indicators relevant to credit risk. The results showed that the system could match the quality of human credit officers, while reducing hidden biases in loan approvals. By documenting every feature used and then testing the model against independent data, the project illustrated how ethics can be embedded into the design of business systems, not bolted on later. 

In our own research, over half of the European executives we surveyed cited ethical safeguards (such as preventing bias, ensuring accountability, and maintaining data integrity) as a leading benefit of AI. Meanwhile, the rollout of the EU AI Act—a new framework that classifies AI systems by risk and requires companies to demonstrate oversight and transparency—has added further urgency to make AI serve responsible ends.

Fortunately, many companies are not waiting to act until the regulation takes full effect. Across Europe and beyond, leaders are already experimenting with ways to use AI to strengthen governance and public confidence. Microsoft, for instance, publishes an annual responsible AI transparency report that describes how the company’s systems are tested and monitored.ii More than 1,300 AI use cases have undergone pre-deployment review by experts across the organization’s internal responsible AI community, according to the latest report. Microsoft’s annual “hackathon” also saw over 700 projects focused on responsible AI, helping employees apply good governance in their work.

Used in this manner, AI doesn’t replace ethical judgment; instead, it strengthens it. By making decisions traceable and outcomes measurable, AI gives companies a practical tool to live up to their principles and make fairness visible in daily operations.

AI can support people’s growth

AI's potential to advance responsible business extends beyond governance to the way companies develop and support their people. For example, 59 percent of the executives we surveyed strongly believe that AI encourages continuous learning and reskilling, a higher share than for any other AI benefit. As Europe's labor market ages and shrinks, companies face both a greater responsibility and a stronger business imperative to help employees adapt and stay "future-ready" for roles that demand AI skills.

Encouragingly, 77 percent of non-executive employees told us they trust their employers to handle AI adoption in ways that protect workers' interests.iii But trust in leadership doesn't automatically translate into confidence in technology. Repaying that goodwill requires visible progress: proof that AI can expand opportunity, not narrow it. The good news is that, across sectors, European companies are exploring how AI can create new pathways for learning and mobility.

For example, German industrial firm Siemens is using predictive analytics and immersive digital tools to identify emerging skill needs and help employees adapt to new roles on the factory floor.iv Its training programs now combine AI, data analytics, and virtual reality to prepare workers for increasingly automated production environments, while reinforcing ethical and entrepreneurial mindsets. 

AI is also being used to make hiring more inclusive. In France, the recruitment and HR consultancy Mozaïk RH, through its Mozaïk Foundation, developed "ZIA," an AI-powered tool that helps young jobseekers from diverse backgrounds navigate the labor market.v ZIA acts as a skilled digital coach, guiding users as they articulate their skills, explore career paths, craft resumes, and prepare for interviews. This kind of innovation makes it possible to scale a practical solution that tackles employment discrimination at its roots.

Used well, AI thus keeps people at the center of progress. The goal: help employees build on their strengths and find new paths forward, while giving companies a more productive and engaged workforce. 

AI can build more adaptive organizations

Responsible business requires an ability to evolve as circumstances change. Indeed, when new technologies, regulations, or stakeholder expectations emerge, companies that learn and adjust quickly—and empower their workers to do the same—are better able to prepare and stay ahead. AI can accelerate that process by helping organizations detect issues earlier, test solutions faster, and integrate learning into daily work. In this way, adaptability becomes not only a source of competitiveness, but also a foundation of responsibility.

Consider Sanofi. The French pharmaceutical company partnered with McLaren Racing to bring the precision of Formula One analytics into its global manufacturing network.vi Through this collaboration, Sanofi says that AI-driven modeling and simulation will help the company detect and correct inefficiencies before they disrupt production. The goal is to make real-time adjustments routine, so that the high quality of Sanofi's life-saving products is maintained even as its operations grow more complex.

An aerospace firm offers another example. The company created a data platform to bring together information from aircraft, factories, and suppliers into a shared digital environment. With thousands of planes now connected, as well as thousands of users across airlines and manufacturing partners, the platform uses AI models to detect emerging maintenance issues, simulate fixes, and share insights instantly across teams. This ability to learn continuously from data has made the company more adaptable—and, in turn, better equipped to prevent problems before they compromise safety or cause unnecessary fuel burn. 

As these cases show, adaptability and responsibility reinforce one another. The more a company can sense and respond to change, the better it can safeguard quality, safety, and trust—boosting competitiveness and responsibility in the process.

How can leaders make these benefits real?

Nearly all the executives we surveyed agreed that AI could be used to strengthen transparency, fairness, and inclusion within their companies. Yet, to turn this ambition into reality, our experience suggests that leaders should focus on three actions. 

Make responsibility someone’s job—and everyone’s concern

In many companies, responsibility for AI is everybody's topic, but nobody's task. Oversight drifts among compliance officers, data scientists, and legal teams, leaving no single owner of outcomes. To harness AI for responsible business, companies should start by naming a clear point of accountability and then build mechanisms that steer AI's use toward positive impact.

This requires adopting practical tools, such as bias checks, transparency templates, and model-risk dashboards. It also demands aligning incentives so that impact metrics are embedded in performance goals. Ultimately, however, what determines whether these tools and incentives stick is the example that leaders set through their own decisions. Responsibility becomes credible only when it's led from the top.

Treat culture as the enabler 

Plenty of companies invest in algorithms before they invest in understanding. In other words, their models may be highly sophisticated, but the people using them are not empowered to maximize their potential. To make culture an enabler, companies must put workforce engagement at the center, treat AI as a creative partner, and embed learning directly into daily workflows.

Previous research by Accenture, for example, found that enabling co-learning between people and AI strengthened workforce engagement by a factor of five, on average, while accelerating skill development by a factor of four.vii Likewise, in a recent study, we discovered that the companies most advanced in deploying AI across their businesses were four times more likely than their peers to have prioritized cultural adaptation as part of their transformation strategy.viii  

Experience also shows that reskilling around, and experimentation with, AI should be linked to broader responsible business goals, such as inclusion and sustainability. In this way, AI adoption reinforces and furthers the company’s mission, rather than distracting from it.

Get ahead of disruption

Unlike traditional tools, AI changes as the data around it changes. As the tools learn, their applications multiply, creating a flywheel of possibility and disruption. Instead of reacting to disruption, responsible leaders help their companies get ahead of technological change, including by proactively involving teams from across the company in the adoption of new technologies.

Companies, for example, might rotate responsibility for reviewing AI use cases across functions, ensuring that no single perspective dominates as applications evolve. Another way to get ahead of disruption is to require that any significant changes in model behavior or use trigger a review before applications are expanded further. Yet another way is to regularly scan for emerging uses and second-order effects of AI—inside and outside the organization—before they show up as operational or reputational risk. The bottom line: harnessing AI for responsible business requires building a company that can evolve in real time and keep its core values intact as conditions change. 

Turn AI into a force for responsibility

Europe has long defined responsible business as a two-pronged priority encompassing competitiveness and social purpose. That tradition is now being tested by technologies that move faster than most corporate cultures or regulatory systems. The challenge for European leaders is not to slow innovation, but to make innovation serve values that have always distinguished their markets: fairness, inclusion, accountability. 

AI can help meet this challenge—if companies get it right. By embedding ethics into decisions, they can make fairness measurable rather than aspirational. By using AI to develop people, they can extend opportunity instead of displacing it. And by building more adaptive organizations, they can respond to change without sacrificing trust. In each case, AI offers a way for European companies to show that responsible business remains an enduring competitive edge. 

Acknowledgments

The authors thank David Kimble for his contribution.

About the Authors

Laetitia Cailleteau leads the global and EMEA Responsible AI practices at Accenture, bringing 25 years of consulting experience delivering value through data and AI. A European Commission–appointed AI High-Level Expert Group reserve member, Laetitia contributes to global standards committees and has a cross-industry business and technology career spanning digital transformation and reinvention.

Philippe Roussiere leads Innovation and AI at Accenture Research. Over the last 25 years, he has held research and leadership roles on strategic projects in tech, data, and AI. In his current role, GenAI is both a research topic (he co-authored "The Front-runners' Guide to Scaling AI") and a key driver of research reinvention at Accenture.

Josh Elkind is a research specialist at Accenture Research, where he focuses on sustainability. His experience covers decarbonization, net zero and energy transitions, sustainable consumption, and carbon credit markets. He has co-authored and contributed to publications on these topics and how they intersect with competitiveness and AI.

References:
i. arXiv: Baseline validation of a bias-mitigated loan screening model based on the European Banking Authority's trust elements of Big Data & Advanced Analytics applications using Artificial Intelligence
ii. 2025 Responsible AI Transparency Report | Microsoft
iii. Accenture Pulse of Change: Business and Technology Trends
iv. Digital – value-oriented – fit for the future: Siemens starts training year 2025 | Siemens
v. ZIA – Fondation Mozaïk
vi. Our Formula for Success with McLaren Racing | Sanofi
vii. Learning, Reinvented: Accelerating Human–AI Collaboration | Accenture
viii. The front-runner's guide to scaling AI: Lessons from industry leaders | Accenture
