OpenAI, the nonprofit organization, is reportedly considering converting into a for-profit company. According to anonymous sources cited by The Information, OpenAI co-founder and CEO Sam Altman told some shareholders last week that the organization is weighing whether to turn its subsidiary, OpenAI LP, from its current capped-profit structure into a for-profit company without profit limitations.
This decision may be related to OpenAI’s main investor, Microsoft, according to the report. OpenAI’s board of directors currently consists of eight members, none of whom are external investors. Although Microsoft is the largest investor, it holds only an “observer” seat on the board, meaning it has no voting power and no ability to influence voting outcomes. If OpenAI becomes a for-profit enterprise, Microsoft could not only exercise shareholder voting rights but also expand its influence through board seats, further strengthening its sway over OpenAI.
OpenAI is currently valued at $86 billion, and the rumored transformation into a for-profit company could expedite an initial public offering (IPO). Altman and other investors would then have the opportunity to acquire or increase their holdings, improving the outlook for investment returns.
However, some investors point out that OpenAI already allows employees and other investors to sell their shares through regular secondary offerings. In 2023, OpenAI conducted two secondary offerings for its employees, generating more than $800 million in cash. With funding this ample, OpenAI faces little pressure to go public.
OpenAI’s departure from its original vision raises questions. Founded in San Francisco in 2015 as a nonprofit organization, OpenAI set out to conduct AI research with the goal of developing artificial general intelligence (AGI) that would be accessible to all, rather than monopolized by large corporations or a select few individuals.
In 2019, as its needs for research, manpower, and cloud computing infrastructure grew, OpenAI established a separate company, OpenAI LP, which operated under a capped-profit structure and raised funds from investors. To preserve the original nonprofit vision, OpenAI capped investor returns at 100 times their investment, with any excess profits flowing to the nonprofit organization as operational funds.
Additionally, to preserve OpenAI’s independence, board members, including Altman, were prohibited from holding OpenAI shares. Even major stakeholders such as Microsoft and other investors did not hold board seats, preventing them from influencing the company’s governance.
Altman and the rest of OpenAI originally hoped to attract investment while retaining the organization’s independence, securing a stable source of funding to advance the vision of “democratizing AGI” for everyone. However, this structure led to a power struggle within the board, culminating in Altman’s dismissal at the end of 2023. The internal turmoil subsided only after intense employee backlash and Altman’s return.
The potential transformation into a for-profit company has drawn mixed reactions. For investors, it would provide better assurance of returns. OpenAI’s current public statements caution investors to view their investments as donations and to accept the possibility of losing their capital without expecting any return.
Moreover, aligning interests through a for-profit structure may help prevent future power struggles. However, industry observers concerned about AI development worry about the implications of the potential transformation. As a nonprofit organization in San Francisco, OpenAI is protected by local laws that shield it from shareholder lawsuits accusing it of failing to prioritize shareholder interests, which reduces the possibility of shareholder interference.
There are concerns that once OpenAI becomes a for-profit company, its nonprofit board may lose control and deviate from the original vision of preventing the monopolization of AI technology. These concerns have deepened with the recent addition of a new board member. On Friday, June 14th, OpenAI announced the appointment of Paul Nakasone, a retired U.S. Army general and former director of the National Security Agency and commander of United States Cyber Command, as the eighth member of the board. The appointment drew a sharp reaction from Edward Snowden, the former NSA contractor and whistleblower, who harshly criticized OpenAI, describing the appointment as a “deliberate betrayal of the rights of all people on Earth” and suggesting that the board may face intervention from the U.S. government in the future.
Snowden warned the public on the social media platform X: “Never trust OpenAI or its products. There is only one reason for appointing the former director of the National Security Agency to the board. Don’t say I didn’t warn you.”
Sources:
Cointelegraph, The Information
Proofread by: Yuan-Ting Shao