Diligence the watchword as oversight lags AI governance
There are more than a few dark sides to the emerging AI boom, and one of them is a shortage of quality oversight, as directors grapple with a governance gap: while recognising the technology’s transformative power, many haven’t yet introduced measures to manage its risks. That’s according to Dottie Schindlinger, Executive Director of the Diligent Institute, the corporate governance research arm of Diligent Corporation. She says insufficient board-level scrutiny and integration of AI tools creates emerging vulnerabilities that could expose organisations to regulatory, reputational, and operational harm.
“AI is the king of shadow IT right now, right? Everybody’s using Claude or Gemini or ChatGPT, and they may be perfectly innocent and fine and harmless, or they may not. The problem is, we just don’t know right now,” Schindlinger points out.
That goes straight to the heart of the challenge: what is unknown can’t be measured, and what can’t be measured… you get the idea.
The Diligent Institute conducts several surveys that canvass directors’ sentiment on a range of issues. None is more pressing or present, arguably, than AI, and Schindlinger says the findings paint a picture of partial adoption without depth.
For example, she says 66% of directors report using AI to assist with board-level tasks like summarising materials, generating discussion questions, or conducting retrospective analyses. However, just 3% have fully integrated AI into risk oversight and strategic decision-making.
Similarly, while 84% of directors say they have significantly altered their approach to managing risks (from geopolitical tensions to supply chain disruptions and AI itself), just 10% are leveraging AI tools to handle that growing complexity in real time.
“Directors are telling us that they see a lot of potential,” Schindlinger says – but a governance gap quickly appears. Boards and management admit not knowing how AI actually works, what happens to their data, or even whether the results belong to the company or the LLM provider.
On top of that, return on investment or value creation models are generally opaque. “There is some frustration that they don’t have a good line of sight on how it is actually going, if there is a return on spend.”
A separate pulse-check survey revealed another warning sign: many directors are relying on free versions of tools like ChatGPT to prepare for board meetings. “That’s a red flag,” Schindlinger warns; without knowing exactly what tools are in use or how they are deployed, organisations cannot effectively manage risks such as data leaks, inaccuracies, or hallucinations.
The situation mirrors the cybersecurity awakening of 10–15 years ago, Schindlinger says, when boards suddenly had to oversee complex technical risks with little prior infosec fluency. Then as now, there’s a hill to be climbed (it is encouraging that cyber risk, as complex and challenging as ever, has nonetheless become an established board issue).
While infosec was purely risk management, she says AI is a little different in that it presents both opportunities and downsides. “We got there with cyber, now we need to get there with AI. And while AI is moving faster and has more to offer, cyber serves as an example of how to go about it.”
She offers three simple, practical, yet powerful steps towards improved governance. First, conduct a comprehensive audit of AI tool usage across the entire organisation, including the board and senior management, to expose shadow IT and assess exposure. Second, select approved tools and establish clear policies governing their use. Finally, deliver targeted training that spans the boardroom through to the front lines.
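The first of those steps, the usage audit, can be sketched in code. In practice an audit would draw on real survey or telemetry data; the snippet below is a minimal illustration only, and the tool names, survey responses, and approved list are all hypothetical assumptions, not anything specified by Diligent.

```python
# Minimal sketch of an AI-tool usage audit (step one above).
# All tool names, survey data, and the approved list are hypothetical.
from collections import Counter

# Tools vetted and sanctioned under a (hypothetical) company policy.
APPROVED_TOOLS = {"copilot-enterprise"}

# Example survey responses: (respondent role, AI tool they reported using).
survey = [
    ("director", "chatgpt-free"),
    ("manager", "claude"),
    ("director", "copilot-enterprise"),
    ("analyst", "chatgpt-free"),
    ("analyst", "gemini"),
]

def audit(responses):
    """Tally reported tools and flag any not on the approved list."""
    usage = Counter(tool for _, tool in responses)
    unapproved = {t: n for t, n in usage.items() if t not in APPROVED_TOOLS}
    return usage, unapproved

usage, unapproved = audit(survey)
print("Reported usage:", dict(usage))
print("Shadow AI (unapproved):", unapproved)
```

The point of the sketch is the output of the final line: the tools that turn up in responses but not on the approved list are the organisation’s shadow AI, and they become the input to the second step, policy and tool selection.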
Going deeper, Schindlinger advises having hard conversations about whether the board’s current composition includes sufficient AI expertise. That’s especially pressing for companies pursuing ‘AI-first’ strategies.
Individual directors, meanwhile, should pursue education and press management on audit results and remediation plans. “Pose deceptively simple but profound questions,” says Schindlinger. “What does this technology actually do? How do you know that it does that? And how are you making sure that it does only that?”
She also cautions against viewing AI as a wholesale replacement for human judgment. “There’s a reason companies that tried replacing entire departments have had to hire people back. AI doesn’t replace expertise, it requires humans in the loop for quality, accuracy, and accountability.”
There are plenty of indications that directors are experiencing growing frustration, if not with the emerging or apparent limitations of AI, then with the shortage of governance guardrails surrounding it.
That’s a good thing, because like fear, frustration can be a source of considerable motivation. But what is clear is the undeniable power of AI, and as always, with great power comes great responsibility. “So, yes, you could say diligence is required. Diligence from the board. And diligence from management.”
