Securing AI: the identity challenge behind the AI boom
Organisations are currently running towards AI at full speed, driven by what they see as a limitless potential to transform their businesses. The past 12 months, especially, have seen AI agents emerging as a new digital workforce – analysing transactions, triaging customer queries, assisting clinicians and even making operational decisions.
However, the rapid deployment of agentic AI is creating a new challenge for business leaders: identity. As employees experiment with AI tools and agents are introduced outside formal governance, organisations are facing a growing wave of ‘shadow AI’.
In this environment, identity becomes harder to manage – making it difficult to track how agents interact with systems, what data they can access and whether their actions comply with internal policies.
Indeed, research from Okta shows that 91% of organisations are already using AI in some form. Yet only 10% have a well-developed strategy for managing, and importantly securing, non-human identities.
In other words, while businesses are racing to deploy AI, very few are equipped to control it.
The rush to deploy – and the risks
While AI adoption often starts small – a pilot project or proof of concept – things can quickly escalate.
“Typically, organisations begin with simple, measurable tasks. Then very quickly, they want to scale. They want to go bigger and faster,” says Stephen McDermid, CSO EMEA at Okta.
That’s when AI agents move from isolated tools to embedded operators, interacting with systems, accessing sensitive data and, increasingly, acting autonomously. It’s also when governance often falls behind.
Without clear controls and governance, organisations can quickly lose control of how their AI agents behave – what they have access to, how they interact with systems and whether their actions align with policy. According to Okta, only just over half of organisations have full visibility into their AI activity, leaving the rest exposed to significant blind spots.
The new identity challenge
Put simply, AI agents need to be treated as first-class identities, meaning they should be identified, authenticated and governed in the same way as human users. They log into systems, they access applications, they retrieve and share data. In many ways, they behave like employees – but currently without the same guardrails.
“When you look at how these agents operate, they’re acting on behalf of users, customers or services. So they need to be treated as first-class identities in their own right,” explains McDermid.
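What "first-class identity" means in practice can be pictured with a short sketch. This is an illustrative model only, not Okta's API or data model: the idea is that an agent carries the same attributes a human account would – an accountable owner, a credential, explicit scopes and an expiry – so it can be authenticated and governed like any other user. All names here are hypothetical.

```python
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """A non-human identity, modelled with the same fields a human account gets."""
    agent_id: str
    owner: str            # the human or team accountable for this agent
    scopes: set[str]      # explicit permissions, granted up front
    expires_at: datetime  # lifecycle: identities are decommissioned, not forgotten
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def is_active(self) -> bool:
        return datetime.now(timezone.utc) < self.expires_at

# Register an agent the way you would onboard an employee:
triage_bot = AgentIdentity(
    agent_id="agent-triage-01",
    owner="support-team@example.com",            # hypothetical accountable owner
    scopes={"tickets:read", "tickets:comment"},  # least privilege: no delete, no billing
    expires_at=datetime.now(timezone.utc) + timedelta(days=90),
)
print(triage_bot.is_active())  # True until the 90-day lifecycle window closes
```

The point of the expiry field is the lifecycle question the article raises: an agent identity should lapse by default rather than persist indefinitely after its pilot ends.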
Yet in many organisations, they’re not – AI agents are deployed without robust authentication, clear permissions or lifecycle governance. That, then, raises critical questions as AI agents take on more responsibility: who is in charge? They might approve a transaction, respond to a customer or access a database. But who authorised that action? What level of access was granted? And who is accountable if something goes wrong?
There have already been cases where poorly secured AI systems have exposed sensitive data or been manipulated into unintended behaviour. In one instance, a recruitment chatbot leaked millions of records due to weak credentials. In others, attackers have exploited AI systems through prompt injection to bypass controls.
“The value of AI comes from the data it can access. But as you connect more systems and add more data, you can very quickly lose control,” says McDermid.
Identity is becoming the control plane for AI
This increasing gap between what AI can do and what organisations can control is what security leaders are calling the “authority gap”. To close the gap, organisations need an approach that treats AI agents as first-class identities. This is where identity platforms come into play.
Okta’s unified identity platform is designed to bring order to this complexity, providing a central layer through which all identities – human and non-human – can be managed, authenticated and governed, underpinning its blueprint for the secure agentic enterprise.
“Identity gives you the ability to control access, monitor behaviour and enforce policy. It becomes the foundation for trusted AI,” says McDermid.
In practice, this means authenticating every agent interaction to ensure only approved systems can operate, and controlling access at a granular level so agents only see what they need to see. It also involves monitoring behaviour continuously to identify anomalies or misuse in real time, and governing the full lifecycle from creation through to decommissioning.

Identity platforms such as Okta’s enable a shift from reactive security to proactive control, allowing organisations to scale AI safely.
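That authenticate–authorise–monitor loop can be sketched as a single gate that every agent action passes through. This is a minimal in-process illustration under assumed names, not a real identity platform: the registry and audit log stand in for the central layer the article describes.

```python
from datetime import datetime, timezone

# Hypothetical registry and audit log; in production these would live in an
# identity platform, not in in-process dictionaries.
REGISTRY = {
    "agent-triage-01": {"scopes": {"tickets:read"}, "active": True},
}
AUDIT_LOG: list[dict] = []

def authorize(agent_id: str, scope: str) -> bool:
    """Gate every agent action: authenticate, check scope, record for monitoring."""
    entry = REGISTRY.get(agent_id)
    allowed = bool(entry and entry["active"] and scope in entry["scopes"])
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "scope": scope,
        "allowed": allowed,  # denied calls are the anomalies worth alerting on
    })
    return allowed

print(authorize("agent-triage-01", "tickets:read"))    # True: within granted scope
print(authorize("agent-triage-01", "tickets:delete"))  # False: never granted
print(authorize("agent-unknown", "tickets:read"))      # False: unknown identity
```

Because every call is logged, whether allowed or denied, the same gate that enforces least privilege also produces the behavioural trail needed for real-time anomaly detection and for answering the accountability questions above.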
Innovation without compromise
The good news for business leaders is that governance and innovation are not mutually exclusive.
Companies such as Siemens are embedding identity-led security into their digital transformation efforts, ensuring that every user, system and agent is governed consistently. McLaren Racing, operating in a high-performance, data-intensive environment, has made identity central to both security and operational agility.
In financial services, firms like Paysafe and Equals Money are integrating identity controls directly into AI-enabled products, aligning innovation with regulatory expectations from day one.
The common thread among those businesses is that identity is not an afterthought. It is built in from the start.
First steps
For organisations still early in their AI journey, there are some practical first steps to consider.
First, find your shadow AI. Assume AI is already in use, whether sanctioned or not. Identify where it exists, how it is being used and what data it touches.
Second, set the rules of engagement. Define clear policies for AI use and educate employees. Most security risks still originate with people – and AI is no exception.
Third, put visibility and control in place. Deploy tools that allow you to monitor, govern and manage AI agents as they evolve. Without visibility, there is no control.
These steps are not about slowing progress, but about enabling it, safely and sustainably.
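The first step, finding shadow AI, often starts with data an organisation already has, such as egress or proxy logs. The sketch below shows the idea under stated assumptions: the domain list, sanctioned set and log format are all illustrative, not a real catalogue of AI services or a recommended tool.

```python
# Hypothetical proxy-log scan for unsanctioned AI usage. The domains, the
# sanctioned list and the log records below are assumptions for illustration.
KNOWN_AI_DOMAINS = {"api.openai.com", "api.anthropic.com"}
SANCTIONED = {"api.openai.com"}  # approved through formal governance

proxy_log = [
    {"user": "alice", "host": "api.openai.com"},        # sanctioned use
    {"user": "bob",   "host": "api.anthropic.com"},     # shadow AI
    {"user": "carol", "host": "intranet.example.com"},  # not an AI service
]

shadow = [
    rec for rec in proxy_log
    if rec["host"] in KNOWN_AI_DOMAINS and rec["host"] not in SANCTIONED
]
for rec in shadow:
    print(f"shadow AI: {rec['user']} -> {rec['host']}")
```

Even a crude scan like this turns "assume AI is already in use" from a slogan into an inventory that the policy and visibility steps can then act on.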
The future belongs to trusted AI
AI is reshaping how organisations operate. But as it becomes more autonomous, the stakes are rising. The question is no longer whether to adopt AI, but how to do so responsibly.
McDermid notes: “To get AI security right, you have to get identity right.”
Those that invest in identity – establishing clear governance, visibility and control – will be best positioned to unlock AI’s full potential. Those that don’t risk falling into the authority gap.
For more information, please visit okta.com/solutions/secure-ai/
