Tech Explained: When Gen AI Starts to Act, in Simple Terms

Across Asia Pacific, generative artificial intelligence (Gen AI) has been everywhere. There’s been excitement, experimentation and a rush of practical use cases. For many organisations, AI has slipped quickly from novelty into routine.

What is changing, however, is not how widely it is used. It is what these systems are being asked to do.

In many enterprises, generative systems are beginning to shape decisions rather than simply support them. The shift is subtle, but it matters. Not because there is a dramatic crisis underway, but because responsibility and judgment are starting to move in quieter ways.

I saw an early version of this shift in a previous role. We trialled an AI system to re-engage cold sales leads by email. The setup was straightforward: the AI would send the first message, and a human would step in if someone replied.

We tried every personality setting that came with it. Nothing worked. Jamie, the bot, was enthusiastic to the point of desperation about reconnecting with people who had not replied for months.

Make no mistake, it did exactly what it was built to do. But what it lacked in this instance was judgment. In the end, we had to work with the vendor to override its pre-programmed personality and retrain it on a particular persona before we let it anywhere near a real inbox.
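The guardrail we ended up with reflects a common pattern, and it can be sketched in a few lines. The Python below is a minimal, hypothetical illustration, not the vendor's actual setup: every name in it (Verdict, reengage, generate_draft, human_review, send_email) is invented. The point is structural: the model drafts, a person decides, and only approved text ever reaches an inbox.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop gate, not any vendor's API.

@dataclass
class Verdict:
    approved: bool
    final_text: str  # the reviewer may edit the draft before approving

def reengage(lead_email: str, generate_draft, human_review, send_email):
    """Draft with the model, but let a person make the call.

    generate_draft, human_review and send_email are injected callables,
    so the gate stays independent of any particular model or mail stack.
    """
    draft = generate_draft(lead_email)   # the AI writes the first message
    verdict = human_review(draft)        # a human reads every draft
    if verdict.approved:
        send_email(lead_email, verdict.final_text)
    # On rejection, nothing is sent: silence beats a desperate-sounding bot.
```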

From helper to participant

In its early phase, Gen AI was easier to contain. Tools sat alongside work rather than inside it. People used them to speed things up or clean things up, without materially changing how decisions were made.

That is no longer always the case. AI tools are now threaded into workflows. They draft responses, assemble reports, suggest next steps, and in some cases, initiate actions. The line between helping and doing has become harder to spot.

This shift rarely arrives with fanfare. It shows up in small ways. Outputs look finished earlier, so fewer people review them. Decisions start to feel partially formed before anyone consciously makes them.

This is not about machines behaving badly. It is about systems doing exactly what they were designed to do, at a speed and confidence that can outpace human judgment if organisations are not careful.

When adoption outruns adjustment

In Australia and New Zealand, AI has often been layered onto existing processes with the assumption that oversight will hold. In Southeast Asia and India, rapid growth has meant AI arriving alongside expansion, where speed tends to come first and structure later. The pattern is familiar. Technology moves ahead. Organisations catch up.

At the same time, vendors continue to add capability. Tools introduced to draft text gain the ability to coordinate tasks or recommend actions. These changes are incremental enough to slip past notice, until someone realises the system is doing more than expected.

Where the seams start to show

As Gen AI becomes more active, a few tensions are beginning to surface.

Accountability becomes unclear.
When an AI system drafts a risk assessment or shapes a customer response, who owns the result? Is it the person who approved it, the team that configured it, or the vendor that supplied it?

That question became tangible in Australia last year when Deloitte refunded part of a federal government consulting contract after a report it delivered was found to contain fabricated citations linked to Gen AI use. The issue was not so much that AI had been involved. It was that no one caught what it produced before it carried weight.

A similar dynamic appeared more recently in the software world. During a code freeze, a founder instructed an AI coding agent not to touch production. It deleted the company’s database anyway, then tried to cover its tracks: it offered explanations, insisted the data could not be recovered, and eventually admitted it had panicked. The behaviour was almost human. That was precisely the problem.
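The practical lesson, at least as I read it, is that a prompt is not a permission system. Purely as an illustration, and with every name below invented, here is one way the boundary can live in code rather than in instructions: a wrapper that refuses destructive statements while a freeze flag is set, no matter how confident the agent is.

```python
# Hypothetical sketch: enforce the code freeze in the tool layer, not the
# prompt. The agent may request anything; this gate decides what executes.

FREEZE_ACTIVE = True  # e.g. flipped on by the release process
DESTRUCTIVE_PREFIXES = ("DROP", "DELETE", "TRUNCATE", "ALTER")

class FreezeViolation(Exception):
    """Raised instead of executing, so a human sees the attempt."""

def run_agent_sql(statement: str, execute):
    """The only path an agent has to production SQL.

    `execute` is the real database call, injected by the host application.
    """
    normalised = statement.lstrip().upper()
    if FREEZE_ACTIVE and normalised.startswith(DESTRUCTIVE_PREFIXES):
        raise FreezeViolation(f"Blocked during freeze: {statement[:60]!r}")
    return execute(statement)
```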

Control spreads unevenly.
Those of us who have worked in tech know that AI rarely enters through a single door. One team experiments. Another buys a tool. A third builds something custom. Over time, practices drift, and oversight varies. What feels sensible in one part of the business can feel uncontrolled in another.

Speed hides complexity.
Most of these systems work most of the time. That is why they spread. Smooth performance builds confidence, but it also hides how much is happening beneath the surface and how few people fully understand it.

Globally, this is now being tested more formally. In the United States, a lawsuit involving Workday is examining whether AI-driven hiring tools screened out older applicants. The case is forcing organisations to consider who is responsible when automated systems influence decisions at scale.

A familiar transition

This does not feel like a backlash or a breaking point. It feels like a transition, similar to the early days of cloud adoption when usage spread faster than the rules around it.

Gen AI will continue to become more capable and more embedded. The question for APAC enterprises is no longer whether to use it. That train has left the station.

The more useful question is how much autonomy these systems should have, and how clearly that autonomy is framed. Some issues arrive as court cases. Others arrive as emails that sound just slightly wrong. Either way, the shift is underway, and the organisations that adapt best will be the ones that notice early and keep human judgment close to the work that matters.