If AI isn’t paying off, look to the boardroom first, says Magna AI CEO
A few months ago, a senior executive in the Gulf summed up their organisation’s AI journey with refreshing honesty. “We’ve bought the platforms,” they said. “We’ve run the pilots. Everyone agrees the technology is impressive. We’re just not sure what, exactly, has changed.” It was not a complaint. It was a diagnosis.
Across the Middle East, investment in artificial intelligence is accelerating at a remarkable speed. Governments are launching national AI strategies. Enterprises are modernising digital cores. Ministries are experimenting with data-driven decision-making. Dashboards are multiplying. Proofs of concept are everywhere. AI is estimated to contribute up to USD 320 billion to the region’s economy by 2030. If AI pilots were a national sport, the region would already be fielding a competitive team.
And yet, behind closed doors, many leaders admit the same thing: the promised returns remain harder to pin down than expected. The instinctive explanation is technical: models need refining, and data need cleaning. The platforms need more time. Sometimes that is true. But after several years of sustained investment, a different conclusion is becoming difficult to avoid.
“AI’s return-on-investment challenge is not primarily a technology problem. It is a leadership one.”
This is not anecdotal. Research consistently shows that while a majority of organisations across the GCC have launched AI pilots, only a fraction report measurable enterprise-level financial impact. The gap is not ambition. It is execution, driven by unclear ownership and poorly defined outcomes.
Most AI initiatives stumble for a familiar reason: they are implemented before they are anchored in a business goal. Tools are procured before outcomes are defined, and pilots are launched before accountability is assigned. Success is assumed to emerge somewhere downstream, often between a steering committee update and the next budget cycle. In most organisations, that moment never comes.
AI creates value only when it is deliberately tied to a decision, process, or outcome that matters. Without that connection, even the most advanced system becomes an elegant side project. Impressive in a demo. Less impressive when a minister, board member, or CEO asks the inevitable question: “So what changed?”
In large, complex organisations, this pattern becomes even more evident. A budget is approved. A platform is selected. Responsibility is delegated to an innovation unit or centre of excellence. Leadership checks in periodically, asking whether the models are performing. When impact remains elusive, disappointment follows. The technology is blamed, and another pilot is proposed. The presentation improves, but the result remains the same.
What is missing is not capability. It is intent. Organisations in the Middle East that are seeing real AI returns tend to start from a different place. They begin with a harder question: which decision, if improved, would materially change outcomes? Only then do they ask whether AI belongs in the answer. Studies from leading advisory firms consistently show that so-called “future-built” organisations generate significantly higher returns by redesigning decision flows and accountability structures, not by deploying more sophisticated models. In these environments, AI is not bolted on. It is designed into how decisions are made, reviewed, and owned.
This distinction matters because AI amplifies whatever system it enters. In a well-designed environment, it sharpens judgment and accelerates impact. In a poorly designed one, it accelerates complexity. Automating a flawed process does not fix it. It simply allows the flaw to move faster, now with a dashboard and a KPI.
The Middle East is uniquely exposed to this dynamic. Decision-making here is often centralised, vision-led, and fast-moving. That can be a strength. But without clear ownership and delegation, AI risks reinforcing bottlenecks rather than breaking them. When accountability is diffused, AI’s impact becomes politely theoretical.
So, what must enterprises do differently?
First, boards must explicitly own AI outcomes, not just AI investment. If no executive can point to a specific decision they are accountable for improving with AI, ROI will remain elusive. A simple test: can the board list the top five decisions where AI is expected to change the trend line on cost, risk, revenue, or citizen outcomes? If not, the organisation is not investing in AI. It is funding experiments.
Second, organisations must redesign decision flows before deploying technology. AI should be mapped to where value is created or lost, not layered onto existing processes by default. That means asking, step by step: who decides, on what information, under what constraints, and how is that decision challenged or reviewed? Only once that map is explicit does it make sense to ask where AI can reliably assist or automate.
Third, accountability must be unambiguous. AI initiatives should sit with leaders who already own budgets, risk, and performance, not be isolated in innovation units once experimentation ends. If AI sits in a lab, its results will stay in a slide deck. If it sits with a P&L owner or a permanent secretary, it has a chance to change behaviour.
Finally, governance must be treated as an enabler, not a brake. Clear rules around oversight, escalation, and human responsibility do not slow AI down. They are what allow it to scale safely and credibly. In regulated and sovereign contexts, this becomes even more important: without visible guardrails, public trust evaporates quickly.
This does not mean leaders must become technologists. It means they must become designers of consequence. AI forces choices that cannot be outsourced. Which decisions should be augmented? Which risks are acceptable? Where does responsibility sit when human judgment and machine output intersect? These are not engineering questions. They are boardroom ones.
The uncomfortable truth is this: AI rarely fails because it cannot perform. It fails because leadership never decided what performance was supposed to mean and quietly hoped the technology would work it out on its own.
This opinion piece is authored by Dr. Moataz Bin Ali, CEO, Magna AI.
