AI and Blockchain: What Actually Matters Now | nasscom


In serious rooms today, whether it is a bank strategy meeting in Singapore, a family office in Dubai or an investor roundtable in New York, people are no longer asking, “Should we look at AI?” or “Should we look at blockchain?” The question has shifted to something more direct:

“Where do these two technologies genuinely fit together, in a way that is responsible and acceptable to our regulators?”

That question is coming from people who move real capital, run real infrastructure, and carry real risk.

AI brings pattern recognition, prediction and automation at a scale we did not have before. Blockchain brings shared ledgers, programmable value, and transparent rules between parties who do not fully trust one another. When both sit in the same design, under the same governance, you get something more interesting than a chatbot on a crypto wallet.

A lot of what is written about “AI x blockchain” still feels like marketing. Under the noise, there is a set of use cases that keep coming up in my work across Singapore, the United States and the Gulf. These are the ones that, in my view, deserve attention.

1. AI-first treasuries and programmable liquidity

Treasury in many organizations still relies on static limits, spreadsheet models and weekly calls. At the same time, cash, funds and real-world assets are starting to appear as tokenized instruments that move on-chain, and markets are always open.

That gap will not last. A more modern treasury stack looks something like this:

● Investment and risk policies expressed clearly in code

● Tokenized deposits, funds and collateral settling on a ledger

● AI systems watching risk, liquidity and yield across venues in real time

The AI is not there to take wild bets. It is there to do the continuous work humans cannot sustain:

● Track exposures against limits every minute instead of every month

● Shift liquidity when spreads, fees or counterparty behaviour change

● Simulate stress scenarios and suggest actions long before a crisis shows up on television
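The continuous limit-tracking described above can be sketched as a small policy check that runs on every tick rather than once a month. The `PolicyLimit` structure, counterparty names and amounts here are purely illustrative, not a real treasury system:

```python
from dataclasses import dataclass

@dataclass
class PolicyLimit:
    # Hypothetical policy rule: cap exposure to a single counterparty.
    counterparty: str
    max_exposure: float

def check_exposures(exposures: dict, limits: list) -> list:
    """Return a breach alert for every exposure above its policy limit."""
    alerts = []
    for limit in limits:
        current = exposures.get(limit.counterparty, 0.0)
        if current > limit.max_exposure:
            alerts.append(f"BREACH: {limit.counterparty} at {current:,.0f} "
                          f"vs limit {limit.max_exposure:,.0f}")
    return alerts

limits = [PolicyLimit("BankA", 5_000_000), PolicyLimit("BankB", 2_000_000)]
exposures = {"BankA": 4_200_000, "BankB": 2_750_000}
print(check_exposures(exposures, limits))
```

The point of the sketch is that the policy lives in code, so the same check that gates execution can also be replayed by risk and audit.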

The ledger is what keeps the whole system Responsible. Every move is recorded. Rules are visible. Internal risk, audit and the Regulator can see how a decision flowed from policy to execution.

As tokenized treasuries and on-chain markets mature, this combination will change how banks, corporates and asset managers manage liquidity and risk. It is one of the clearest crossovers where AI and blockchain genuinely need each other.

2. Proof of person and cleaner digital interactions

As language models and synthetic media improve, the internet is filling with convincing but fake people. Payment fraud, account takeovers, scams and disinformation are all getting a boost. Traditional KYC and basic device checks were not designed for this environment. We will need new proof-of-person approaches that are:

● Strong enough to keep out bots and synthetic identities

● Simple for normal people to use

● Respectful of privacy and local rules

Here, a ledger and AI work together.

● A person completes a robust identity check one time through a trusted provider or public identity system

● The outcome is stored as a reusable credential linked to a wallet the person controls

● Applications do not see full documents; they see statements such as “this is a unique verified person from this corridor within this risk class”
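One way to picture the reusable credential is a sketch in which the issuer signs only derived statements, never documents. For brevity this uses a shared HMAC key; a real credential system would use asymmetric signatures and an actual identity provider, and the wallet address and claim fields are invented:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # placeholder; real issuers sign with private keys

def issue_credential(subject_wallet: str, corridor: str, risk_class: str) -> dict:
    """Issuer attests to derived statements only -- no raw documents included."""
    claims = {"wallet": subject_wallet, "corridor": corridor,
              "risk_class": risk_class, "unique_person": True}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_credential(credential: dict) -> bool:
    """An application checks the signature; it still never sees documents."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_credential("0xWALLET", "SG", "low")
print(verify_credential(cred))  # True
```

Any attempt to upgrade the risk class after issuance breaks the signature, which is what makes the credential reusable without being forgeable.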

AI then uses behaviour, context and history to manage risk:

● Adjusting friction when actions look unusual

● Detecting patterns that resemble coordinated scripted activity

● Giving genuine users smoother flows based on consistent good behaviour

For a Regulator, this offers a path to a responsible digital identity that is reusable, tamper-resistant and compatible with privacy expectations. For platforms and banks, it is one of the few realistic ways to keep deepfake-driven abuse under control at scale.

3. Universal identity and wallets for AI agents

Almost every AI assistant you interact with today lives inside a single product. When you switch channels or organizations, it forgets who you are, what it is allowed to do and which limits apply. The “memory” is trapped with the vendor.

We can design something better.

Think of each serious AI agent, whether it represents an individual, a company or a fund, anchored in a wallet that holds:

● A signed record of which organizations it may act for

● Clear limits on what it can sign, approve or spend

● A history of important actions, recorded as events on a ledger
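A minimal sketch of such an agent wallet might look like the class below. The `AgentWallet` name, its fields and the single spend-limit rule are hypothetical simplifications; a production design would hold signed records on a ledger rather than an in-memory log:

```python
from dataclasses import dataclass, field

@dataclass
class AgentWallet:
    # Hypothetical minimal wallet: who the agent acts for, what it may spend,
    # and an append-only log of every attempted action.
    acts_for: str
    spend_limit: float
    action_log: list = field(default_factory=list)

    def authorize(self, action: str, amount: float) -> bool:
        allowed = amount <= self.spend_limit
        # Refused actions are logged too -- the trail records what the agent
        # tried to do, not just what succeeded.
        self.action_log.append({"action": action, "amount": amount,
                                "allowed": allowed})
        return allowed

agent = AgentWallet(acts_for="AcmeFund", spend_limit=10_000)
print(agent.authorize("pay_invoice", 7_500))   # True: within limit
print(agent.authorize("pay_invoice", 25_000))  # False: refused, but logged
```

Because the permissions and the history travel with the wallet, any system the agent connects to can read the same context with the user's consent.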

When that agent connects to a new system, the system does not rely on a static configuration file. With the user’s consent, it reads directly from this shared context.

That enables:

● Consistent permissions across products and channels

● A lower integration burden when an institution wants to introduce agents into existing workflows

● A precise trail of what the agent knew and was authorized to do at any point in time

For Responsible AI builders, this is important because it avoids one vendor owning the full context of a customer’s life. For Regulators it offers a clean way to reconstruct events when an agent is involved in a high-value decision or transaction.

4. Verifiable training data, model lineage and on-chain AI governance

A simple question hangs over many large models:

“What exactly went into this, and how has it changed over time?”

Creators want to know whether their work is buried somewhere in the training set. Enterprises want to avoid hidden intellectual property and privacy problems. Regulators and boards are starting to ask for a proper audit trail.

A ledger gives us a place to record the life of a model and the data around it.

On the training and data side:

● Datasets can be registered as assets with clear license terms

● Contributions from partners, users or public sources can be recorded with size and conditions

● Hashes of model versions and major updates can be written as they are promoted
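The recording steps above can be sketched as a hash-chained log, where each entry commits to the one before it, so quiet edits are detectable. The event fields and the `record_event`/`verify_log` helpers are illustrative only; a real deployment would anchor these hashes on a shared ledger:

```python
import hashlib
import json

def record_event(log: list, event: dict) -> dict:
    """Append an event to a hash-chained log; each entry commits to the previous."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    entry = {"event": event, "prev_hash": prev_hash, "entry_hash": entry_hash}
    log.append(entry)
    return entry

def verify_log(log: list) -> bool:
    """Recompute the chain; any edited entry breaks every hash after it."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        recomputed = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != recomputed:
            return False
        prev_hash = entry["entry_hash"]
    return True

lineage = []
record_event(lineage, {"type": "dataset_registered", "name": "corpus-v1",
                       "license": "CC-BY"})
record_event(lineage, {"type": "model_promoted", "version": "1.0"})
print(verify_log(lineage))  # True
```

If anyone later rewrites the dataset entry, `verify_log` fails, which is exactly the "hard to quietly edit" property the governance side needs.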

On top of this, AI tools can help match outputs back to known sources and calculate rewards or credits for contributors, where that is part of the design.

On the governance side:

● High-impact systems can log prompts, responses, policy settings and override decisions in a way that is hard to quietly edit

● Incidents, red team findings and mitigation steps can be recorded for later review

● Access to these logs can be governed by rules that meet local law and internal policy, while still giving auditors and Regulators enough visibility

This combination moves us from “trust us, the model is fine” to an environment where training, tuning and deployment decisions leave a visible, verifiable trail. That is the kind of Responsible AI story senior stakeholders increasingly expect.

5. AI-assisted smart contracts and on-chain security

Smart contracts already move large amounts of value, but much of the code is still written and reviewed by small teams under time pressure. At the same time, models are becoming good at reading, generating and testing code.

There are two clear angles here.

First, AI as a co-pilot for contract design and legal alignment:

● Business users and lawyers describe the intent of an agreement in plain language

● Models propose contract templates and code structures that reflect that intent

● Both natural language and code are tested with scenarios before anything goes live

Second, AI for continuous testing and monitoring:

● Models generate edge case tests and run them against code in staging and production-like environments

● Live contracts are watched for unusual behaviour, unexpected interactions and patterns that resemble known exploits

● Alerts feed into pause switches and governance processes that were defined up front
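The monitoring-and-pause idea can be sketched with a couple of simple invariants over a window of transfers. The thresholds and the `monitor_transfers` helper are hypothetical; a real system would watch many more signals and wire the pause into on-chain governance:

```python
def monitor_transfers(transfers: list, max_single_outflow: float,
                      window_outflow_limit: float) -> tuple:
    """Check a window of outgoing transfers against two simple invariants.

    Returns ("pause", reason) as soon as an invariant breaks, else ("ok", None).
    """
    total_out = 0.0
    for t in transfers:
        # Invariant 1: no single transfer above the per-transaction cap.
        if t["amount"] > max_single_outflow:
            return ("pause", f"single transfer of {t['amount']} exceeds cap")
        total_out += t["amount"]
        # Invariant 2: cumulative outflow in this window stays bounded.
        if total_out > window_outflow_limit:
            return ("pause", "cumulative outflow exceeds window limit")
    return ("ok", None)

print(monitor_transfers([{"amount": 100}, {"amount": 250}], 1_000, 5_000))
```

The invariants were defined up front, which is what lets the alert feed a pause switch rather than a debate.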

This is not about handing control to a model. It is about using AI to reduce human error in complex logic, and to keep a constant watch on systems that are already far too important to be checked only by hand.

For a Regulator or risk committee, the combination of smart contracts, AI testing and clear on-chain governance controls is more reassuring than handwritten code and manual sign-offs.

6. AI for blockchain security and crypto economic safety for AI

The same capabilities that make AI useful also make it a powerful tool for attackers. Recent experiments have already shown that models can find real weaknesses in existing contracts and protocols.

Ignoring that is not an option.

On the blockchain security side:

● AI should be part of internal red teams, constantly probing contracts and infrastructure in controlled environments

● Monitoring systems can use models to flag abnormal patterns on-chain, not just simple threshold breaches

● Post-incident analysis can use AI to reconstruct the path of an attack and suggest better controls
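As a toy version of "flag abnormal patterns, not just threshold breaches", a model-free baseline is a z-score check against historical behaviour. The numbers and the `flag_anomalies` helper are illustrative; production systems would use richer learned models over many features:

```python
import statistics

def flag_anomalies(history: list, new_values: list,
                   z_threshold: float = 3.0) -> list:
    """Flag values whose distance from the historical mean exceeds
    z_threshold standard deviations, rather than a fixed static cap."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return [v for v in new_values if abs(v - mean) / stdev > z_threshold]

history = [100, 102, 98, 101, 99]           # typical on-chain transfer sizes
print(flag_anomalies(history, [101, 150]))  # only 150 stands out
```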

On the AI side, ledgers and incentive design can support safer behaviour:

● Bug bounties and structured reward programs for external testing can be managed transparently on-chain

● Important model changes can be gated through recorded approvals, especially when models control funds or critical infrastructure

● Commitments about model scope and limits can be written in a way that is easy to check later, which encourages more responsible deployment

This is the area where I expect closer collaboration between builders, security researchers and regulators. Both technologies need stronger safety stories, and they can help each other if we design the incentives carefully.

7. Zero-knowledge rails for Responsible data, underwriting and advertising

Companies want more personalisation and better risk models. People and Regulators want less surveillance and fewer data leaks. That tension shows up in credit, insurance, commerce and media.

Zero-knowledge techniques let us do something more balanced.

● An AI model looks at a person’s data locally or in a controlled environment

● Instead of sending out raw data, the system produces cryptographic proofs about that person or entity

● The other side sees only what is necessary: for example that income is within a range, payments have been on time, or certain interests are present
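The flow above can be sketched as a selective-disclosure step. Note the important hedge: this sketch only derives boolean claims from raw data, and the income band, payment rule and interest flag are invented examples. A real deployment would attach an actual zero-knowledge proof to each claim so the verifier does not have to trust whoever ran the derivation:

```python
def derive_claims(record: dict) -> dict:
    """Return only the derived statements a counterparty needs, never raw data.

    In a real system each claim would carry a zero-knowledge proof; here the
    derivation itself stands in for the proof, for illustration only.
    """
    return {
        "income_in_band": 50_000 <= record["annual_income"] < 150_000,
        "payments_on_time": record["missed_payments"] == 0,
        "interest_travel": "travel" in record["interests"],
    }

person = {"annual_income": 90_000, "missed_payments": 0,
          "interests": ["travel", "food"]}
print(derive_claims(person))  # three booleans; no income figure leaves the device
```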

A ledger can anchor these proofs and handle small payments when people choose to share them. Over time this can enable:

● Credit and underwriting decisions that use rich signals while staying within privacy rules

● More relevant advertising that does not require companies to warehouse every detail

● Simpler conversations with Regulators about exactly what is shared and what is only proven

For any firm that wants to be seen as Responsible with data, and for any supervisor trying to modernize consent and profiling rules, this pattern will become very important.

8. Climate, carbon and self-adjusting supply chains

Climate targets and disclosure rules are now part of mainstream finance and corporate governance. The difficulty lies in moving from glossy reports to measured results.

Here AI, sensors and ledgers can support more honest and adaptive supply chains.

Imagine a chain from raw material to finished product:

1. Machines, vehicles, grids and even satellites produce continuous data about energy use and emissions

2. AI models turn that into more accurate estimates of footprint across suppliers and routes

3. Important metrics and events are written to a ledger that key partners, financiers and sometimes regulators can see

4. Smart contracts link those metrics to specific actions such as changes in sourcing, automatic purchase of verified credits, or adjustments in financing terms
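Step 4 can be sketched as a simple settlement rule of the kind a smart contract might encode. The penalty formula, credit price and the `settle_period` helper are invented for illustration, not drawn from any real contract:

```python
def settle_period(measured_tco2e: float, target_tco2e: float,
                  credit_price: float, base_rate: float) -> dict:
    """Translate a period's measured emissions into concrete actions,
    the way an on-chain settlement rule might."""
    overshoot = max(0.0, measured_tco2e - target_tco2e)
    return {
        "credits_to_buy": overshoot,            # verified credits for the overshoot
        "credit_cost": overshoot * credit_price,
        # Hypothetical rule: +10 bps on financing per 100 tCO2e over target.
        "adjusted_rate": base_rate + 0.001 * (overshoot / 100.0),
    }

print(settle_period(measured_tco2e=1_200, target_tco2e=1_000,
                    credit_price=25.0, base_rate=0.05))
```

Because the rule is deterministic and the inputs are on a shared ledger, every partner can recompute the same outcome.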

The result is not a perfect system, but it does move behavior:

● Data manipulation becomes harder when many parties see the same records

● Responsible companies can demonstrate progress with more confidence

● Policy makers get a better view of what interventions actually work

This is an area where public interest, investment and regulation are all pulling in the same direction. The combination of AI analytics and shared records is a natural fit.

9. Decentralized compute networks with AI aware orchestration

Access to compute is becoming a strategic issue. Capacity is concentrated in a few large clouds, prices are rising and many teams feel they have limited options.

Decentralized physical infrastructure networks offer a different model:

● Independent operators contribute GPUs, storage or bandwidth

● A ledger coordinates task assignment, payment and reputation

● AI workloads are distributed across this network

AI then manages routing and evaluation:

● Matching jobs to nodes based on latency, cost, location and regulatory requirements

● Scoring nodes over time for reliability and integrity

● Steering sensitive workloads only to nodes that meet policy standards
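The matching step can be sketched as a hard policy filter followed by a weighted score. The node fields, job constraints and weights below are assumptions chosen for illustration; real orchestrators would score far more dimensions:

```python
def pick_node(nodes: list, job: dict,
              latency_weight: float = 1.0, cost_weight: float = 10.0):
    """Route a job to the best eligible node.

    Policy constraints (region, reliability) filter first; only then does the
    latency/cost score rank the survivors.
    """
    eligible = [n for n in nodes
                if n["region"] in job["allowed_regions"]
                and n["reliability"] >= job["min_reliability"]]
    if not eligible:
        return None  # no compliant node: the job must wait, not leak
    return min(eligible, key=lambda n: latency_weight * n["latency_ms"]
                                     + cost_weight * n["price_per_hour"])

nodes = [
    {"id": "a", "region": "EU", "reliability": 0.99, "latency_ms": 30, "price_per_hour": 2.0},
    {"id": "b", "region": "US", "reliability": 0.95, "latency_ms": 10, "price_per_hour": 1.0},
    {"id": "c", "region": "EU", "reliability": 0.90, "latency_ms": 5,  "price_per_hour": 0.5},
]
job = {"allowed_regions": {"EU"}, "min_reliability": 0.95}
print(pick_node(nodes, job)["id"])  # "a": the only node passing both filters
```

Filtering before scoring is the design point: a cheaper or faster node never outbids a policy requirement such as data residency.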

This is still an emerging pattern. It raises real questions around data residency, sanctions and critical infrastructure, so engagement with Regulators is essential. If those issues are handled carefully, such networks could become a useful complement to traditional clouds, particularly for smaller teams and new regions.

How I would judge any “AI x blockchain” idea

In conversations with leaders in banks, funds, exchanges and public agencies, I use a simple filter when someone proposes a new idea at this intersection.

Three questions:

1. What does AI add here that a traditional rules engine or report cannot?

2. What does a ledger add here that a normal database cannot?

3. Which person, institution or Regulator is clearly better off because both are present together?

If those answers are vague, it is probably a distraction.

The next step is to choose one or two foundational pilots where the combination truly improves trust and control:

● Treasury and liquidity management with clear boundaries

● Identity and proof of person for high risk interactions

● Smart contract development, testing and monitoring around core payment and settlement flows

● AI governance for models that touch money, markets or public services

These are the areas where I see serious institutions already moving. The common thread is not hype. It is Responsible design.

I do not expect AI to replace people, and I do not expect blockchain to replace the entire financial system.

I do expect that together they will quietly change how trust, value and intelligence move through our economies.

The use cases that matter most to me sit where:

● AI needs a reliable source of shared truth

● Markets and institutions need programmable, transparent rules between many parties

● Regulators need much deeper visibility without shutting down innovation

That is the space in which my teams choose to work.

If you are building something serious along these lines, in banking, payments, public infrastructure, climate, education or any other corridor, I am always open to a conversation.