Teaching AI to Say “I Don’t Know”: Themis AI Takes on Hallucinations

As artificial intelligence systems become trusted decision-makers in fields like healthcare, energy, and telecoms, the risk of AI “hallucinations”—confident but incorrect responses—is becoming more serious. An MIT spinout called Themis AI is taking direct aim at this issue by building tools that help AI systems recognize their own uncertainty. […]
Reddit Takes Legal Aim at Anthropic Over AI Data Scraping

Reddit has filed a lawsuit accusing Anthropic of using the platform’s user-generated content to train its Claude AI models without permission or compensation. According to Reddit, Anthropic’s bots have been scraping massive volumes of posts and comments in violation of Reddit’s user agreement, which explicitly prohibits commercial use of site data without a licensing deal. […]
AI Enters the Secure Zone: Anthropic’s Claude Gov Models for National Defense

Anthropic has launched a special branch of its Claude AI models—branded “Claude Gov”—designed specifically for the highest levels of U.S. national security operations. These purpose-built models are now active within classified government environments, where access is limited to authorized personnel only. This release highlights a significant evolution in how advanced AI tools are being integrated […]