A wave of former OpenAI employees has come forward with serious concerns about the company's direction. What began as a nonprofit initiative to ensure that artificial general intelligence (AGI) would benefit all of humanity is now, according to these insiders, chasing profit at the expense of safety.
The “OpenAI Files” report compiles testimonies that claim OpenAI is abandoning its foundational principles. At the heart of the criticism is the company’s decision to potentially scrap its profit cap—an original safeguard meant to prevent excessive private enrichment from world-altering technology.
Internal Doubts and Leadership Turmoil
Former staff paint a troubling picture of internal dysfunction, pointing fingers directly at CEO Sam Altman. Some accuse him of deceptive and chaotic leadership. Even OpenAI co-founder Ilya Sutskever, once Altman’s close collaborator, has expressed regret, stating Altman shouldn’t be trusted with AGI-level decisions.
Ex-CTO Mira Murati and former board member Tasha McCauley have echoed those concerns. They describe a toxic culture in which transparency is eroding and dissent is punished, a dangerous dynamic in a company developing technologies that could reshape the world.
Safety Takes a Backseat
Multiple former employees, including Jan Leike and William Saunders, say OpenAI’s focus has drifted from long-term safety research to more commercially viable product launches. Leike described his work as “sailing against the wind,” and Saunders testified to Congress about serious lapses in internal security.
The impression is clear: AI safety is no longer the priority—it’s now about shiny features and market dominance.
Calls for Reform
Despite having left the company, many former insiders are rallying for reform rather than writing OpenAI off. They're calling for:
- Restoring true nonprofit oversight with veto power on safety decisions
- A full investigation into leadership conduct
- Legal protections for whistleblowers
- Reinforced commitment to capped profits
These aren’t just internal disputes—they’re alarms from those who helped build the company from the ground up.
Why This Matters
OpenAI isn’t just another Silicon Valley firm. It’s creating tools that could fundamentally alter economies, societies, and daily life. The question being raised isn’t just about corporate ethics—it’s about who should be trusted with the future of AI.
As former board member Helen Toner put it, “internal guardrails are fragile when money is on the line.” The former employees behind this report say those guardrails are already gone.