Tech Explained: The Breakthrough That Could Solve AI’s Massive Energy Crisis in Simple Terms
A new hybrid AI approach may drastically cut energy use while improving reliability.
Artificial intelligence is not just changing software. It is also driving a sharp rise in electricity use. In the United States alone, AI systems and data centers consumed about 415 terawatt-hours of electricity in 2024, according to the International Energy Agency. That amounts to more than 10% of the nation’s total electricity generation, and the figure is expected to double by 2030.
That trend is raising a difficult question for the future of AI: Can these systems become more capable without becoming dramatically more expensive to power?
Researchers at the Tufts University School of Engineering believe the answer may be yes. They have built a proof of concept for an AI approach that could use up to 100 times less energy than today’s standard systems while also producing more accurate results on certain tasks. In a field that often rewards ever larger models and ever larger computing infrastructure, that kind of improvement could be significant.
The work was developed in the laboratory of Matthias Scheutz, Karol Family Applied Technology Professor. It centers on neuro-symbolic AI, which combines standard neural networks with symbolic reasoning, similar to how people break problems into steps and categories.
Rethinking How AI Systems Learn and Act
Scheutz and his team study robots that interact directly with people, so their work differs from screen-based large language models (LLMs) such as ChatGPT or Gemini. Instead, they focus on vision-language-action (VLA) models. These systems extend LLMs by adding vision and movement, allowing robots to interpret camera and language inputs and carry out physical actions such as moving wheels, arms, or fingers.
With conventional, resource-heavy VLA systems, even a simple task like stacking blocks can be error-prone. A robot must scan its surroundings, identify each block’s position, shape, and orientation, and then follow instructions to stack them. Errors can arise if shadows distort perception, if blocks are placed incorrectly, or if the final structure is unstable and collapses.
These mistakes resemble the well-known shortcomings of LLMs. Just as robots can fail in physical tasks, chatbots can produce incorrect or fabricated outputs, such as inventing legal cases or generating images with unrealistic features like extra fingers.
Symbolic reasoning offers a more efficient alternative. It allows systems to apply general rules and abstract concepts, such as shape or center of mass, leading to more reliable planning with less trial and error.
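To make the idea concrete, here is a minimal sketch of rule-constrained action selection for the block-stacking example. The rule and data structures are my own illustration, not the Tufts team's code: a symbolic stability check (center of mass over the support's footprint) filters candidate placements before any learned policy is consulted, shrinking the trial-and-error search space.

```python
# Illustrative sketch: pruning candidate block placements with a
# symbolic rule before a neural policy ever evaluates them.
# All names and numbers here are hypothetical.

def is_stable(block, support):
    """Symbolic rule: a block is stable if its center of mass (x)
    lies within its support's footprint."""
    half = support["width"] / 2
    return support["x"] - half <= block["x"] <= support["x"] + half

base = {"x": 0.0, "width": 1.0}
candidates = [
    {"x": 0.0, "width": 1.0},   # centered placement
    {"x": 0.9, "width": 1.0},   # overhanging placement, would topple
]

# Only rule-satisfying placements reach the (expensive) learned policy.
feasible = [c for c in candidates if is_stable(c, base)]
print(len(feasible))  # 1
```

The savings come from the filter running before, not instead of, the neural component: the network only has to rank placements that are already physically plausible.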
Why Neuro-Symbolic Systems Perform Better
“Like an LLM, VLA models act on statistical results from large training sets of similar scenarios, but that can lead to errors,” said Scheutz. “A neuro-symbolic VLA can apply rules that limit the amount of trial and error during learning and get to a solution much faster. Not only does it complete the task much faster, but the time spent on training the system is significantly reduced.”
In experiments using the classic Tower of Hanoi puzzle, the neuro-symbolic VLA system achieved a 95% success rate, compared to 34% for standard VLA models. When tested on a more complex version of the puzzle that the system had not encountered before, it still reached a 78% success rate, while conventional systems failed every attempt.
Training time was also dramatically reduced. The neuro-symbolic system required just 34 minutes to train, while a standard VLA model took more than a day and a half. Energy use dropped just as sharply. Training consumed only 1% of the energy required by conventional models, and during operation, the system used just 5% as much energy.
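A quick back-of-envelope check (my arithmetic, using the figures above) shows what "34 minutes versus more than a day and a half" implies as a speedup:

```python
# Reported training times from the article; "more than a day and a half"
# is taken as a 1.5-day lower bound.
neuro_symbolic_min = 34
standard_min = 1.5 * 24 * 60   # 2,160 minutes

speedup = standard_min / neuro_symbolic_min
print(round(speedup))  # roughly 64x faster, as a lower bound
```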
Scheutz compares these findings to familiar LLMs like ChatGPT and Gemini. “These systems are just trying to predict the next word or action in a sequence, but that can be imperfect, and they can come up with inaccurate results or hallucinations. Their energy expense is often disproportionate to the task. For example, when you search on Google, the AI summary at the top of the page consumes up to 100 times more energy than the generation of the website listings.”
Toward a More Sustainable AI Future
As demand for AI continues to grow and expands into industrial use, companies are racing to build larger data centers. These facilities can require hundreds of megawatts of power, far exceeding the needs of many small cities.
The researchers argue that today’s LLMs and VLA systems, despite their rapid adoption, may not provide a sustainable or reliable long-term foundation. They suggest that hybrid neuro-symbolic AI offers a more efficient and dependable alternative, with the potential to ease mounting pressure on energy resources.
Reference: “The Price Is Not Right: Neuro-Symbolic Methods Outperform VLAs on Structured Long-Horizon Manipulation Tasks with Significantly Lower Energy Consumption” by Timothy Duggan, Pierrick Lorang, Hong Lu and Matthias Scheutz, 22 February 2026, arXiv.
DOI: 10.48550/arXiv.2602.19260
