Health and care research bodies launch TrustX initiative for safe agentic AI
A group of health and social care research organisations has launched an initiative aimed at boosting the development of agentic AI solutions for the two sectors.
Named TrustX Health, it involves Health Innovation Kent Surrey Sussex (Health Innovation KSS), the University of Cambridge’s Trustworthy Artificial Intelligence Lab, the Responsible AI Institute and The King’s Fund health and care think tank.
The initiative will create a “unified front door” for evaluating and safely deploying agentic AI across clinical and non-clinical workflows.
This is intended to support the ambitions of the NHS 10 Year Health Plan for England, which calls for a fundamental shift toward prevention, digital transformation and the widespread use of AI.
Agentic AI refers to systems that can accomplish a specific goal with limited supervision, as the agents operate autonomously in complex environments. The Responsible AI Institute said such systems can come with risks – including bias, changes over time, potential errors and misinformation – which are amplified in high-stakes environments such as health and care.
Reliability, alignment and safety
It said TrustX will provide a rigorous system for validating the reliability, alignment and safety of these autonomous systems, ensuring that clinicians, patients and regulators can trust how they operate.
It involves an evaluation of how AI agents behave in real-world situations, how they interact with existing technologies and data sources, and how they may change over time.
The evaluation leads to the award of a ‘trusted AI technology’ badge, jointly enabled by the Responsible AI Institute and Health Innovation KSS.
The institute said this creates the governance and technical foundations needed for safe, large scale adoption across the NHS and social care.
The front door for AI agent deployment will involve: a scoring and verification process for existing systems; skunkworks evaluation to determine which NHS and social care problems are appropriate for agentic AI; support in building new AI agents; help in mitigating risks; and real-world evaluation against productivity and cost-effectiveness metrics.
Collaboration and funding
Other elements of the initiative include: an open-source trust score; creating an environment for collaboration; partnerships with NHS providers and social care sites to test deployments; and a flexible funding infrastructure.
In addition, pilot investments will support joint clinical and operational fellows, postdoctoral researchers and research assistants, with roles expected to expand across NHS innovation labs, social care innovators and partner institutions.
The institute said: “The Government’s 10 Year Health Plan commits to widespread AI deployment, deeper digital integration, and a shift toward prevention. Achieving this requires safe, trustworthy, and auditable AI systems that work reliably in complex environments and evolve responsibly over time.
“TrustX offers an assurance pathway that reduces risk for NHS and social care organisations adopting Agentic AI across clinical and operational pathways. It sets a new global benchmark for responsible AI deployment in health and care.
“The aspiration for TrustX is to inform and support the development of a shared ecosystem for safe experimentation, rapid learning, and scalable adoption across the NHS.”
Source: www.ukauthority.com
Published: 2025-12-15 11:33:00
