An independent advisory board urges OpenAI to remain under nonprofit governance, warning that AI’s global impact demands democratic oversight and accountability.
*A report from an independent commission recommends OpenAI stay nonprofit, citing AI's sweeping social implications and the need for broad public involvement. Image: CH*
San Francisco, USA — July 18, 2025:
An independent advisory board has called on OpenAI to preserve its nonprofit governance structure, citing the far-reaching implications of artificial intelligence and the critical need for public accountability.
In a report released Thursday, the board argued that AI’s transformative power makes it “too consequential” to be managed solely by corporate or governmental interests. The advisory commission, chaired by former California policy strategist Daniel Zingale, said nonprofit oversight ensures a level of democratic participation otherwise absent in traditional tech governance.
“We believe this work is too important to leave to the private or even government sectors alone,” Zingale stated. “The nonprofit model creates a space for democratic participation.”
Though non-binding, the report lays out a series of recommendations for OpenAI’s governance and public engagement. It calls for expanded transparency, stronger ties to communities impacted by AI, and increased funding for public interest initiatives like AI education, cultural inclusion, and digital equity programs.
Founded in 2015 as a nonprofit research lab, OpenAI later adopted a capped-profit model to attract investment, culminating in its current estimated valuation of $300 billion. The shift has drawn criticism, however, particularly after the 2023 ousting and reinstatement of CEO Sam Altman, as well as legal challenges from regulators and early backer Elon Musk.
To ease growing concerns, OpenAI recently proposed converting its for-profit arm into a public benefit corporation while keeping the nonprofit entity as a controlling shareholder. The governance details, however, remain unclear.
The commission recommended that OpenAI create a rapid-response fund for AI-related risks, keep the nonprofit arm led by a human executive as a symbolic commitment to ethical oversight, and ensure public representation in high-level decision-making.
“There’s a strong desire among the public to better understand AI and who’s making the decisions,” Zingale said. “OpenAI should be known, seen, and shaped by the people it claims to serve.”
Notable voices on the advisory board include civil rights icon Dolores Huerta and other prominent public interest advocates. Together, they emphasized the need for OpenAI—and AI companies more broadly—to earn public trust through inclusive governance, especially as AI becomes deeply embedded in daily life and global economies.