EU AI Act

Preparatory obligations under the EU AI Act have come into force this week, marking a further shift from policy ambition to operational reality. From enterprise risk management and supply-chain accountability to explainability, governance and leadership judgment, experts agree that the immediate challenge is not whether AI can be built, but whether organisations are ready to manage it responsibly at scale.

Innovation becomes infrastructure

The Act’s preparatory obligations mark a decisive shift from treating AI as an experimental capability to treating it as regulated infrastructure, explains Ian Murrin, CEO of Digiterre and co-author of Transform! The 14 Behaviors Driving Successful Digital Transformation in the Age of Gen AI. “It signals a move to a clear, risk-based framework, which forces organisations to understand not just what their AI systems do, but how and why they are used.”

That shift, Murrin notes, extends well beyond internal development teams and into the wider ecosystem organisations depend on, including their suppliers. “Organisations will be accountable not just for the AI they build, but for third-party models and tooling they rely on. In practice, this pushes AI from being a purely technical concern into the core of enterprise risk management. With the EU AI Act’s toughest requirements kicking in by August, firms will be forced to show exactly how their models work, where their data comes from, and who is accountable when things go wrong,” he says.

Compliance is now an operational discipline

For many organisations, the most immediate impact of the Act will be felt in day-to-day operations rather than strategy documents. AI Empowerment Specialist Fahed Bizzari agrees that the near-term impact is largely operational. “Companies will need an inventory of AI use cases, a clear view of which ones are higher risk and named accountability for decisions that involve AI. Procurement and vendor management also get tougher: buyers will need better documentation from suppliers and clearer contractual safeguards, especially where AI touches hiring, performance, customer decisions or safety-critical contexts,” he explains.

Bizzari also highlights a cultural shift underway, particularly for employers and managers navigating AI adoption at scale. He warns that AI literacy is rapidly becoming a management responsibility, not a nice-to-have training perk. “The organisations that win will treat compliance as infrastructure: map use cases, set simple internal rules, train teams, tighten supplier expectations and add lightweight monitoring so governance becomes routine rather than reactive.”

AI literacy must be developed

With AI having flooded industries, this next stage of the EU AI Act’s implementation marks a necessary reset, explains Cien Solon, CEO and Founder of no-code AI platform LaunchLemonade. For high-risk systems, especially those influencing people’s rights, safety or access to services, this is the right direction. However, Solon reiterates: “Regulation alone isn’t the hard part. The real challenge is literacy. Many organisations don’t yet have the skills to evaluate how their models behave, document decisions, or build the guardrails the Act expects.”

Solon has seen teams rush to deploy AI tools without truly understanding what they do, how they behave under pressure, or who is accountable when things go wrong. While the Act forces a different standard, training and literacy-building must run alongside it. “To ensure safer, more informed choices when it comes to AI, we must educate and embed a deep understanding so the terms of the Act can be successfully followed. If we want AI to be trusted, we need to trust our team’s understanding of the AI tools they’re using.”

The bar on explainability and trust gets higher

The Act’s preparatory obligations are an important step forward, but uncertainty persists around how organisations will operationalise these standards in practice, explains Seb Kirk, CEO of AI solutions firm GaiaLens. “High-risk AI demands a higher bar. Every AI-driven outcome should be explainable in human terms, traceable across its lifecycle, and auditable by both internal teams and external stakeholders.”

For Kirk, the organisations best positioned to respond are those that embed governance early rather than retrofitting controls later. “The organisations that will move fastest are those that build governance into the design process from day one,” Kirk continues. “When explainability, traceability and accountability are embedded early, innovation can accelerate without compromising public confidence or regulatory readiness.”

Leadership, not technology, is the deciding factor

Beyond systems, processes and documentation, the Act is also forcing a more fundamental organisational reckoning. The natural evolution from ‘can we build it’ to ‘should we deploy it’ forces organisations to confront a key truth: technology doesn’t determine outcomes, leadership does, explains Andrew Bryant, a self-leadership expert and author of Potential-ize – How Leaders Unlock Human Potential in the Age of AI.

Bryant observes that the Act’s risk-based framework creates immediate pressure on leaders to assess AI systems against human impact, not just business efficiency. “This is where most organisations will struggle, not because compliance is technically difficult, but because it requires the self-leadership capacity to make values-based decisions in the face of uncertainty,” he explains.

The first operational steps of the EU AI Act make one thing clear: compliance is no longer a future problem or a specialist concern. For organisations that treat the new obligations as a box-ticking exercise, the Act may feel like friction. But for those that see it as a necessary discipline to clarify accountability, strengthen trust and align AI deployment with human values, it is set to become a competitive advantage rather than a constraint.
