Who verifies AI? Deep tech startup ArbaLabs looks at the problem of trust

ArbaLabs founder Ashley Reeves speaks at TechGALA Japan, Jan. 28. Courtesy of Ashley Reeves

As artificial intelligence (AI) becomes woven into daily life, from chatbots to industrial systems, a quieter concern is emerging. When AI systems make decisions on their own, it is not always clear how those decisions can be verified or who is responsible when something goes wrong.

Much of the global AI race has focused on making systems more powerful and capable. Less attention has gone to what happens after deployment, particularly as AI moves beyond cloud servers and into physical environments such as factories, vehicles and infrastructure. In those settings, the ability to trace what an AI system actually did can become critical.

One startup exploring this challenge is ArbaLabs, a deep tech company that participated in the 2025 K-Startup Grand Challenge and finished in the final four. ArbaLabs is developing tools designed to verify how AI systems operate on edge devices — machines that run AI locally rather than in centralized data centers.

ArbaLabs founder Ashley Reeves receives a prize for fourth place in the 2025 K-Startup Grand Challenge at Coex, Dec. 11, 2025. Courtesy of Ashley Reeves

Founder Ashley Reeves describes the company’s work in simple terms. “ArbaLabs builds a way to prove that an AI system is running exactly as it was designed and that its results haven’t been tampered with,” he said. “We focus on trust and accountability for AI in sensitive, real-world environments.”

Reeves likens the approach to adding a kind of flight recorder to AI systems. The technology creates verifiable records showing which AI model produced a result and whether that output was altered after generation.

“A normal AI system can generate a result,” he said. “Our system can prove which exact model produced that result and that it was not modified.”

Such verification does not determine whether an AI’s decision was correct or fair. Instead, it focuses on establishing that a system ran as expected and that its outputs were not tampered with. Reeves argues that this distinction matters in industries where safety and liability are concerns.
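
ArbaLabs has not published implementation details, but the general pattern Reeves describes, fingerprinting the deployed model and then cryptographically binding each output to that fingerprint, can be sketched in a few lines. The Python below is a minimal illustration of that idea only; the key handling, record fields and choice of HMAC are assumptions made for the sketch, not the company's actual design.

```python
import hashlib
import hmac
import json
import time

# Illustrative only: the key, field names and use of HMAC are assumptions,
# not ArbaLabs' published design.
DEVICE_KEY = b"per-device-provisioned-secret"


def model_fingerprint(model_path: str) -> str:
    """Hash the deployed model file so any later change is detectable."""
    digest = hashlib.sha256()
    with open(model_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def sign_result(result: str, fingerprint: str) -> dict:
    """Emit a tamper-evident record tying an output to an exact model."""
    record = {"result": result, "model": fingerprint, "ts": time.time()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_record(record: dict) -> bool:
    """Recompute the signature; any altered field makes verification fail."""
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)
```

In a real deployment, records like these would presumably be anchored to a hardware key or an external log so the device itself cannot forge them, but the basic check is the same: if the model file or the output changes after the fact, verification fails.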

He points to scenarios like drones inspecting infrastructure or farmland. “The AI on that device decides whether something is damaged, safe or dangerous,” he said. “If that AI model is altered maliciously or accidentally, the decision could be wrong, with serious consequences.”

Sectors such as drone manufacturing, autonomous vehicles, robotics and smart factories are among those showing early interest in the company's technology, according to Reeves. In these environments, AI systems often operate with limited direct oversight, and questions can arise when an incident occurs.

Industry observers note that verification tools do not eliminate AI risks, but they can provide clearer records when investigating failures. In the United States, high-profile autonomous vehicle accidents, including a fatal crash involving a self-driving test vehicle in Arizona, have raised difficult questions about software versions and system states at the time of the incidents.

“When an AI-driven system makes a fatal or near-fatal decision, investigations rely on logs and internal records,” Reeves said. “Without independent verification, it can be difficult to prove whether the deployed model was unchanged or properly calibrated.”

Policymakers in multiple jurisdictions, including Korea and the European Union, have signaled interest in requiring greater transparency and security in AI deployments, particularly in regulated sectors. While standards are still evolving, some companies are preparing in advance.

“We now have AI systems making decisions in health care or industrial automation,” Reeves said. “The question is no longer ‘Can AI do this?’ It’s ‘Can we trust it, verify it and assign responsibility if something goes wrong?’”

As AI systems continue to move into the physical world, the debate may shift from how intelligent they are to how accountable they can be. “Innovation is moving extremely fast and that’s exciting,” Reeves said. “But accountability mechanisms are still catching up. Trust should be measurable, not marketing.”

Alice Hong is a freelance writer and comedian based in Seoul. Follow her at @hippohong on Instagram.