OpenAI Faces Backlash Over Proposed Shift to For-Profit Model

OpenAI's proposed shift to a for-profit model faces backlash from former employees, who argue in an open letter that it would violate the nonprofit's mission to ensure AGI benefits humanity, not private interests. The letter urges legal action to block the restructuring.

April 28, 2025


Unlock the secrets behind OpenAI's proposed restructuring and its potential impact on the future of artificial general intelligence (AGI). This blog post delves into the key points raised by former OpenAI employees, highlighting the critical issues at stake and the implications for humanity's benefit.

The Fundamental Betrayal of the Founding Principle

OpenAI was created specifically because its founders believed that developing AGI purely for profit, as Google was perceived to be doing, was extremely dangerous. Its original nonprofit status was meant to ensure that decisions prioritize humanity's benefit over making money. However, OpenAI is now seeking to move to a structure in which profit must be considered, which reverses that core idea and contradicts the very reason the organization was established in the first place.

The founding announcement of OpenAI stated, "Our goal is to advance digital intelligence in the way that it is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact." If OpenAI transitions to a for-profit model, it would be betraying this founding logic.

Furthermore, in 2018, OpenAI's Greg Brockman stated, "On the ethical front, that's really core to my organization. That's the reason we exist. When it comes to the benefits of who owns this technology, who gets it, you know, where do the dollars go, we think it belongs to everyone." This statement stands in stark contrast to the proposed restructuring, which would shift OpenAI's incentives away from a duty owed to the public and toward returns owed to shareholders.

Massive Potential for Wealth Transfer

One of the key concerns raised in the letter is the massive potential for wealth transfer if OpenAI transitions to a for-profit structure. The letter alleges that the original capped profit structure was designed to ensure that the immense wealth created by AGI would go back to the nonprofit, representing humanity's interests. However, the letter claims that this cap is likely being removed due to investor demands, meaning the astronomical profits from AGI would now overwhelmingly benefit a small group of shareholders instead of being used for the public good as originally intended.
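
To make the mechanics of a profit cap concrete, here is a minimal sketch in Python of how a capped-profit arrangement splits returns between an investor and the nonprofit. The 100x cap multiple and the dollar figures are hypothetical illustrations chosen for clarity, not OpenAI's actual investment terms.

```python
# Hypothetical illustration of how a capped-profit structure splits returns.
# The 100x cap and the dollar amounts below are assumptions for clarity,
# not OpenAI's actual investment terms.

def split_returns(investment: float, total_return: float, cap_multiple: float = 100.0):
    """Split a return between a capped investor and the nonprofit."""
    cap = investment * cap_multiple                 # most the investor can ever receive
    investor_share = min(total_return, cap)         # investor is paid up to the cap
    nonprofit_share = max(total_return - cap, 0.0)  # everything above the cap flows to the nonprofit
    return investor_share, nonprofit_share

# Example: a $10M investment with a 100x cap against a hypothetical $100B return.
investor, nonprofit = split_returns(10e6, 100e9)
print(f"Investor receives:  ${investor:,.0f}")   # $1,000,000,000 (the cap)
print(f"Nonprofit receives: ${nonprofit:,.0f}")  # $99,000,000,000
```

In this sketch, removing the cap would simply direct the entire return to shareholders, which is the outcome the letter warns about.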

The letter cites statements from OpenAI's founders, such as Sam Altman's acknowledgment that AGI could "capture the light cone of all future value in the universe," to highlight the scale of wealth that could be generated. The letter argues that this potential wealth transfer from humanity at large to OpenAI shareholders would constitute a profound betrayal of the organization's founding principles and mission.

AGI Could Belong to Investors

The letter argues that the proposed restructuring of OpenAI would mean that the for-profit company and its investors would own and control AGI. This is a significant concern, as the original rules explicitly reserved AGI for the nonprofit entity, ensuring the technology would be governed for humanity's benefit.

The letter states that this ownership would dictate how AGI is controlled and used. It notes that OpenAI has not publicly commented on who would own AGI under the proposed restructuring, and that it would presumably belong to OpenAI and its investors.

Furthermore, the letter references reports that OpenAI and Microsoft have discussed removing the contractual restriction on Microsoft's access to AI technologies. This suggests that OpenAI may be walking back commitments it previously made, which raises serious questions about the motivations behind the restructuring.

Abandonment of the Stop and Assist Commitment

OpenAI's founding charter included a specific and unusual promise: if another responsible, safety-conscious group got close to building AGI before OpenAI did, OpenAI would stop competing and instead assist that group. This was a crucial safety commitment, intended to prevent a reckless race to deploy potentially dangerous AGI technology.

However, the letter expresses concern that OpenAI's proposed shift to a for-profit structure, driven by competitive pressures, would likely lead it to ignore or abandon this safety commitment. The letter argues that this would increase global risk, as OpenAI may no longer be willing to stand down and assist another group, even if that group were closer to building AGI.

Given the high stakes involved with AGI development, the abandonment of this "stop and assist" pledge is seen as a significant departure from OpenAI's original mission and a concerning development that could undermine global AI safety efforts.

Charitable Donations Being Used for For-Profit Purposes

The letter expresses concern that OpenAI's proposed restructuring would violate the original charitable purpose for which the organization was founded. Specifically, it states that the law is clear that "charitable contributions must be used only for the purposes for which they were received in trust."

When OpenAI was founded, it received charitable donations with the understanding that the funds would be used to develop AGI in a way that benefits all of humanity. However, the proposed transition to a for-profit structure would mean that these charitable donations are now being used to build a for-profit company, which the letter argues is a breach of the initial promise made to donors.

This raises questions about the legitimacy of using charitable funds for private gain, rather than the public good that was the original intent behind OpenAI's creation and funding. The letter suggests this would constitute a "massive reallocation of wealth from humanity at large to OpenAI shareholders," which fundamentally goes against the nonprofit's founding mission.

Investor Pressure Driving Mission Shift

The letter alleges that investor pressure has been a key driver behind OpenAI's proposed restructuring, which would shift the organization from a nonprofit focused on the public good to a for-profit entity beholden to shareholders.

Specifically, the letter states that OpenAI "primarily cites investor demands" as justification for the restructuring, noting that the organization's "profit caps and other unique governance features that are essential safeguards" are seen by investors as "obstacles" that need to be removed.

The letter claims that recent fundraising rounds have involved "investors insist[ing] on conditions freeing them from certain funding commitments or allowing final redemption of invested funds" if OpenAI fails to simplify its capital structure. This suggests that investor pressure, driven by a desire for greater profits and control, is a key factor driving OpenAI away from its original mission and nonprofit model.

The letter argues that this shift in priorities, from the public good to private financial interests, would fundamentally undermine OpenAI's founding purpose and the trust placed in the organization by its donors and the public.

The High Stakes Nature of AGI

OpenAI acknowledges the serious risks associated with the development of AGI. Its website states that a "misaligned superintelligent AI could cause grievous harm to the world." Sam Altman and AI pioneer Geoffrey Hinton have both signed a statement declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The high stakes nature of AGI development is a key reason why OpenAI's founding mission prioritized ensuring AGI benefits all of humanity, rather than advancing private interests. The letter argues that the proposed restructuring to a for-profit entity would jeopardize this crucial mission, potentially increasing global risks by abandoning crucial safety commitments like the "stop and assist" pledge.

Given the transformative potential of AGI, the letter emphasizes that the control and ownership of this technology is of utmost importance. It argues that the proposed changes would give OpenAI's investors unrestricted access to, and ownership of, AGI, rather than keeping it governed for the public good as originally intended.

Allegations of Rushed Deployments and Broken Promises

The opposition to OpenAI's proposed restructuring also includes specific concerns regarding the company's actions in deploying its models. The letter alleges that OpenAI has been rushing through safety testing to meet product release schedules, dedicating insufficient time and resources to identifying and mitigating risks.

Furthermore, the letter claims that OpenAI has reneged on its promise to dedicate 20% of its computing resources to the team tasked with ensuring AGI safety. This is seen as a concerning shift away from the company's original mission and commitment to responsible development of powerful AI technologies.

The letter also mentions the departure of Jan Leike, who left OpenAI over the company's perceived loss of focus on its mission and its increasing prioritization of profit. Additionally, the letter alleges that OpenAI's CEO, Sam Altman, has privately attempted to arrange the construction of trillions of dollars' worth of computing infrastructure with US adversaries, while publicly stating that it might soon become important to reduce the global availability of computing resources.

Finally, the letter claims that OpenAI has coerced departing employees into signing "extraordinarily restrictive non-disparagement agreements," which the authors view as an attempt to silence those who could speak out about the company's alleged missteps.

These allegations, if true, would suggest a concerning pattern of rushed deployments, broken promises, and a potential shift away from OpenAI's original mission to ensure that AGI benefits all of humanity, rather than advancing the private interests of the company and its investors.

Conclusion

The proposed restructuring of OpenAI from a nonprofit to a for-profit entity raises significant concerns about the organization's ability to uphold its original mission of ensuring that AGI benefits all of humanity, rather than advancing private interests.

The key issues highlighted in the letter include:

  1. A fundamental betrayal of the founding principle, as the shift to a for-profit structure would prioritize financial returns over the public good.

  2. The potential for a massive transfer of wealth from the public to a small group of shareholders, rather than being used for the intended charitable purposes.

  3. The loss of legal accountability and public oversight, as the primary responsibility would shift to shareholders focused on financial returns.

  4. The risk of AGI technology being owned and controlled by investors, rather than being governed for the benefit of humanity.

  5. The abandonment of the "stop and assist" commitment, which could increase the global risk of a reckless race to deploy potentially dangerous AGI technology.

  6. The use of charitable donations to build a for-profit company, which may violate the original trust under which the funds were received.

  7. The influence of investor pressure, which appears to be driving the restructuring despite the potential risks to the organization's mission.

  8. The acknowledgment of the high-stakes nature of AGI development, which underscores the importance of maintaining a mission-driven, nonprofit structure.

  9. Allegations of rushed safety testing and the reduction of resources dedicated to ensuring AGI safety.

  10. Concerns about the coercion of departing employees and the imposition of restrictive non-disparagement agreements.

Given the gravity of the issues raised, it is clear that the proposed restructuring of OpenAI requires careful scrutiny and consideration to ensure that the organization's original mission and the public's trust are not compromised.

FAQ