Teaching AI to Say “I Don’t Know”: Themis AI Takes on Hallucinations

As artificial intelligence systems become trusted decision-makers in fields like healthcare, energy, and telecoms, the risk of AI “hallucinations”—confident but incorrect responses—is becoming more serious. An MIT spinout called Themis AI is taking direct aim at this issue by building tools that help AI systems recognize their own uncertainty.

Capsa: A Reality Check for AI Models

Themis AI has developed a platform called Capsa that can be integrated into most AI models. Its purpose? To flag when an AI is venturing into uncertain territory rather than drawing conclusions from solid evidence. The platform watches for patterns that signal confusion or bias, acting as a layer of self-awareness that has been missing from most systems to date.
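The article does not describe Capsa's internals, but the general pattern of surfacing a confidence signal alongside a prediction can be illustrated with a generic ensemble-disagreement sketch: query several independently trained models and treat strong disagreement as a cue to abstain. Everything below is an assumption for illustration only; the toy models, the `std_threshold` value, and the function names are hypothetical and are not Capsa's API.

```python
import numpy as np

def ensemble_predict(models, x):
    """Collect predictions from an ensemble of models for one input."""
    return np.array([m(x) for m in models])

def predict_with_uncertainty(models, x, std_threshold=0.1):
    """Return the mean prediction plus a flag raised when ensemble
    disagreement (standard deviation) exceeds a threshold -- i.e. when
    the system should 'admit' it is unsure rather than answer confidently."""
    preds = ensemble_predict(models, x)
    mean, std = preds.mean(axis=0), preds.std(axis=0)
    return mean, std, bool(np.any(std > std_threshold))

# Toy ensemble: three slightly different linear models standing in for
# independently trained networks (purely illustrative).
models = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]

for x in (0.5, 10.0):  # the models agree on small inputs, diverge on large ones
    mean, std, uncertain = predict_with_uncertainty(models, x)
    print(f"x={x:>5}: prediction={mean:.2f}, std={std:.2f}, flag_uncertain={uncertain}")
```

In practice such estimates can come from ensembles, Monte Carlo dropout, or dedicated wrapper layers; the common design choice is the same, returning a confidence signal with the answer instead of a bare prediction.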

Founded by MIT Professor Daniela Rus and researchers Alexander Amini and Elaheh Ahmadi, Themis AI was born from a simple yet challenging goal: teaching machines to admit their limitations.

Real-World Use Cases

The Capsa platform has already made an impact. Telecom companies use it to reduce costly mistakes in network planning. Oil and gas firms rely on it to interpret complex seismic data. Themis has even published research on building chatbots that avoid confidently delivering false information.

This level of uncertainty detection is especially important in mission-critical fields. Whether it’s designing a cancer treatment or managing a power grid, a wrong answer from an AI could have massive consequences. Capsa is designed to flag those risks before decisions are made.

Origins in MIT’s Robotics Lab

The team’s journey began in an MIT lab where they worked on self-driving car safety. Backed by Toyota, they explored ways to reduce the risk of fatal misidentification by autonomous vehicles. One of their breakthroughs was an algorithm that could not only detect racial and gender bias in facial recognition systems but also correct it.

They then expanded the approach to pharmaceutical research, showing how AI models could flag when a drug prediction was based on guesswork. The result: less wasted time and more focus on promising leads.

A Smarter Future for Smaller Devices

One key advantage of Themis’ technology is its ability to enhance performance in edge devices, which rely on smaller, less powerful AI models. With Capsa, these devices can handle tasks more reliably and escalate only when necessary, improving efficiency without needing massive compute power.
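One way to picture this escalate-only-when-necessary pattern is a small on-device classifier that answers locally when its confidence clears a floor and defers to a larger model otherwise. This is a minimal sketch under assumed names and thresholds, not Themis' implementation; `CONFIDENCE_FLOOR`, `edge_classify`, and `remote_fallback` are all hypothetical.

```python
import numpy as np

CONFIDENCE_FLOOR = 0.8  # illustrative threshold for answering on-device

def softmax(logits):
    """Convert raw scores to probabilities."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def edge_classify(logits_small, remote_fallback):
    """Answer locally when the small model is confident; otherwise defer.

    `logits_small` stands in for the on-device model's raw output and
    `remote_fallback` for a call to a larger model -- both hypothetical."""
    probs = softmax(np.asarray(logits_small, dtype=float))
    confidence = float(probs.max())
    if confidence >= CONFIDENCE_FLOOR:
        return int(probs.argmax()), confidence, "answered on-device"
    return remote_fallback(), confidence, "escalated to larger model"

# Toy usage: one confident case, one ambiguous case.
remote = lambda: 1  # pretend the larger model returns class 1
print(edge_classify([4.0, 0.5, 0.2], remote))   # high confidence -> local answer
print(edge_classify([1.0, 0.9, 0.8], remote))   # low confidence  -> escalate
```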

Making Uncertainty a Feature, Not a Flaw

As AI becomes further embedded in the systems we rely on every day, the ability of models to recognize and disclose uncertainty may be just as important as their intelligence. Themis AI is proving that teaching machines to admit when they’re clueless could be one of the most valuable advances in the field.

Source: https://www.artificialintelligence-news.com/news/tackling-hallucinations-mit-spinout-ai-to-admit-when-clueless/
