Tech an enabler, future success will rely on strong base of trust: Nasscom VP on AI governance


New Delhi, Jan 22 (PTI) India – with its large, diverse population – is adopting a calibrated approach to AI governance to tackle deepfakes, digital frauds and other user harms, Nasscom Vice President of Public Policy Ashish Aggarwal has said, stressing that trust will form the bedrock of the tech industry's future success.

Terming technology a great enabler, Aggarwal told PTI that AI itself can be leveraged and deployed to minimise and prevent much of the harm generated by its use. Instead of regulations reacting to user harms, there is an increasing emphasis now on setting "technology-led guardrails", he noted.

“We are a country of 1.4 plus billion people and we have a very diverse, heterogeneous population. So obviously we need to be very careful and we need to make sure that the citizens are not harmed, be it deep fakes and other harms which are really escalating and we see more of that around financial frauds, digital arrest.

“So I think it is very important that a lot of these are addressed well and it will be important even from an industry point of view because any future success has to sit on a strong base of trust,” he said.

Aggarwal noted recent developments, including the IT Ministry's proposed AI framework and the RBI's report on a comprehensive approach to driving AI adoption, and praised what he described as a "whole of government" approach to AI governance.

“Government is doing that, and there is a lot of consultation and I think a more fundamental thing is that instead of regulations chasing harms, what we are seeing evolve is an approach where we can set technology-led guardrails.

"The DPI stack is a good example…so having set the identity layer, the payments layer and of course riding on the telecom layer, now we can have a very strong, workable, scalable consent layer which citizens can exercise effectively. So once you can make effective consent real, then I think that is a good way of thinking about guardrails," he said.

Technology, especially AI, offers a strong way to address user harms arising from new and evolving tech models by tracking and preventing them upfront, he said.

“…I think a lot of it will mean how we can use technology to address the harms which are arising because of the newer models and newer things due to technology. I think, in that sense, technology itself is a great enabler and AI itself can be used a lot to track, minimise, and prevent a lot of these harms,” he said.