By Emil Bjerg, journalist and editor
Most companies are deploying AI. Most boards can’t govern it. The gap between adoption and oversight is becoming a liability.
On December 9, 2025, shareholders of DoubleVerify – a digital advertising analytics firm – filed a derivative lawsuit against the company’s CEO, CFO, and eight board members. The allegation: the board approved public statements claiming strong AI capabilities while the company’s AI-powered fraud detection was failing to catch bot traffic.
Major advertisers were leaving for competitors. Revenue was declining. But the board continued signing proxy statements describing effective risk oversight and AI-driven competitive advantages. When an independent research report exposed the failures in March 2025, the stock dropped 36%. It had already fallen 38.6% the previous May when performance problems first surfaced.
The Accountability and Knowledge Gap
The DoubleVerify case raises the fundamental question boards are now confronting: when an AI system shapes a consequential outcome, who is accountable?
The board that approved the system didn’t make the individual decision, and the team that built it didn’t intend the specific outcome. The shareholders affected therefore have no obvious person to hold responsible. The structures most organisations rely on – reporting lines, approval chains, review committees – were designed for a world where humans made decisions and could explain them afterward.
According to McKinsey’s December 2025 report “The AI Reckoning: How Boards Can Evolve” – drawing on interviews with directors from 75 boards – 66% of board directors report having limited to no knowledge or experience with AI, and nearly one in three say AI does not even appear on their board agendas. Meanwhile, more than 88% of organisations report using AI in at least one business function, and fewer than 25% have board-approved AI governance policies.
That gap between deployment and oversight has become one of the most consequential problems in corporate governance. Most boards know AI matters – but knowing it matters and knowing what to do about it are very different things.
The Wrong Kind of Literacy
The instinct, when boards recognize they don’t understand AI, is to hire technical expertise. Some bring in data scientists as advisors. Others send directors to executive education programs on machine learning fundamentals. A few have appointed AI-literate board members – though finding candidates with both governance experience and meaningful AI knowledge remains difficult, according to research from ISS-Corporate.
But framing this as a technical literacy problem misses the point. Boards don’t need data scientists or deep-tech experts in every seat – what they need is sufficient understanding of how AI works and its role in creating opportunities and risks for the business, according to McKinsey’s research on board AI governance. The analogy to financial oversight is precise: board members don’t need to be accountants to oversee financial reporting, and they don’t need to be data scientists to govern AI. What boards need is not the ability to build AI systems, but the ability to ask the right questions about the AI systems others are building.
The most consequential AI decisions a board will face often aren’t technical. They’re judgment calls about risk appetite, ethical boundaries, and organisational values. Should the company use AI to monitor employee productivity? Is it acceptable to deploy a predictive model that is accurate on average but systematically disadvantages a specific demographic group? When an AI vendor promises cost savings, what framework weighs those savings against opacity and dependence?
“Beyond simply mitigating risks, boards are increasingly expected to ensure that AI initiatives deliver measurable business value,” Beena Ammanath of Deloitte told Bloomberg Law, “and challenging management not only on ‘how’ AI will be implemented, but also ‘why’ these efforts make sense in the broader context of value creation.”
According to a 2025 MIT study cited in McKinsey’s report, organisations with digitally and AI-savvy boards outperform their peers by 10.9 percentage points in return on equity. But “AI-savvy” in this context doesn’t mean technical fluency – it means boards that understand where AI is deployed, what data it uses, how risks are monitored, and who is accountable for outcomes.
Three Steps Boards Can Take Immediately
Assign clear ownership. Identify one executive accountable for AI deployment and risk across the organization. Not a committee, not a task force – a name. That person reports to the board on a regular cadence with a standardized update: what’s deployed, what’s planned, what’s gone wrong, what’s been learned.
Define “high-risk” for your organization. Not every AI application needs board-level scrutiny. Create a simple classification system – typically based on impact to revenue, reputation, regulatory exposure, or human welfare – that determines what requires formal approval before deployment. Document it and use it.
Demand decision-relevant reporting. Stop accepting technical briefings that explain how AI works. Start requiring operational briefings that show where AI is making decisions, how those decisions are monitored, and who is accountable when they go wrong. The test: could a board member explain the governance structure to a regulator or shareholder without using jargon?
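To make the second step concrete, a classification rubric like the one described above can be written down as a simple scoring scheme. The following sketch is purely illustrative – the dimension names, scoring scale, and escalation thresholds are assumptions for the sake of example, not drawn from any cited framework, and a real board would calibrate them to its own risk appetite:

```python
from dataclasses import dataclass

# Hypothetical impact dimensions, mirroring the article's examples:
# revenue, reputation, regulatory exposure, and human welfare.
DIMENSIONS = ("revenue", "reputation", "regulatory", "human_welfare")

@dataclass
class AISystem:
    name: str
    # Each dimension scored 0 (negligible) to 3 (severe) by the owning executive.
    scores: dict

def risk_tier(system: AISystem) -> str:
    """Assign a tier: any severe score, or a high combined score, escalates."""
    worst = max(system.scores.get(d, 0) for d in DIMENSIONS)
    total = sum(system.scores.get(d, 0) for d in DIMENSIONS)
    if worst >= 3 or total >= 8:
        return "high"    # requires formal board approval before deployment
    if worst == 2 or total >= 4:
        return "medium"  # reviewed by the accountable executive
    return "low"         # standard engineering sign-off

# Example: a fraud-detection model with heavy reputational exposure.
model = AISystem("ad-fraud-detector",
                 {"revenue": 2, "reputation": 3,
                  "regulatory": 2, "human_welfare": 1})
print(risk_tier(model))  # high
```

The point is not the particular thresholds but that the rubric is explicit, documented, and applied the same way to every system – which is what makes it defensible to a regulator or shareholder.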
Organizations that cannot explain how their AI systems make decisions – to regulators, to customers, to their own employees – will find that efficiency gains come at the cost of institutional trust.
Speed and Responsibility
None of this means slowing AI adoption. According to Deloitte’s Board Practices report, 70% of companies have AI use or deployment policies in place – up from just 13% in 2023. The organisations governing AI well are often the ones deploying it most aggressively – because they’ve built the institutional capacity to move fast without breaking things.
The alternative – treating AI oversight as a future agenda item – is becoming untenable. “Boards that proactively engage by setting clear oversight frameworks, defining accountability, and ensuring management has a plan for governance will be better positioned to capture AI’s benefits while mitigating its risks,” Ammanath notes.
The boards that close these gaps first will earn institutional credibility to move faster, because they can demonstrate to stakeholders – and to themselves – that speed and responsibility are not in conflict.
The frameworks are emerging – from Deloitte’s AI Governance Roadmap to NIST’s AI Risk Management Framework. What’s missing, in most boardrooms, is the will to treat AI oversight as a core responsibility rather than someone else’s problem.
The DoubleVerify shareholders are now forcing that question. Other boards would be wise not to wait for their own lawsuit to answer it.