OpenAI, the company that brought you ChatGPT, sold you out.
Since its founding in 2015, its leaders have said their top priority is to ensure that artificial intelligence is developed safely, for the benefit of all humanity. They have touted the company’s unusual corporate structure as proof of the purity of its motives. OpenAI was a nonprofit organization, controlled not by a CEO or by shareholders, but by a board of directors with one mission: to keep humanity safe.
But this week, news broke that OpenAI will no longer be governed by that nonprofit board. OpenAI is turning into a full-fledged for-profit company. And Altman, who has previously emphasized that he doesn’t own any equity in the company, will reportedly receive billions of dollars worth of stock, in addition to ultimate control over OpenAI.
In what does not seem like a coincidence, chief technology officer Mira Murati announced she was leaving the company shortly before the news broke. Employees were so blindsided that many reportedly reacted to her sudden departure with “WTF” emojis on Slack.
The whole point of OpenAI was that it was a nonprofit that put safety first. Things started to drift from that vision years ago, when OpenAI created a for-profit arm in 2019 to attract the huge investment from Microsoft that building advanced AI increasingly required. But some employees and outside advocates kept hoping the company would stick to its principles. That hope can now be laid to rest.
Jeffrey Wu, who joined the company in 2018 and worked on early models such as GPT-2 and GPT-3, said: “We’re saying goodbye to the original version of OpenAI, which wanted to be unfettered by financial obligations.”
Sarah Kreps, director of Cornell University’s Tech Policy Institute, said: “The restructuring around the core commercial entity formalizes what people have known for a long time: OpenAI is trying to cash in on an industry that has seen a huge influx of investment over the past few years.” The move is a departure from OpenAI’s founding focus on safety, transparency, and the decentralization of power.
And if this week’s news marks the final end of OpenAI’s lofty founding vision, it is clear who is responsible.
How Sam Altman became an existential risk to OpenAI’s mission
When Elon Musk, worried that AI could pose an existential threat to humanity, co-founded OpenAI with Altman and others in 2015, the fledgling lab introduced itself to the world in three sentences:
OpenAI is a nonprofit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
Now all of that is objectively false.
Since Altman took the helm at OpenAI in 2019, the company has drifted away from that mission. That year, it created a for-profit subsidiary, still controlled by the nonprofit, so that it could raise the huge investments needed to build cutting-edge AI. But it did something unprecedented in Silicon Valley: it put a cap on the profits investors could earn. Investors could make up to 100 times their investment, but beyond that, the money would flow to the nonprofit, to be used for the public benefit. It could, for example, fund a universal basic income program to help people adapt to the loss of jobs to automation.
Over the next few years, OpenAI focused less and less on safety as it rushed to commercialize its products. By 2023, the nonprofit board had grown suspicious enough of Altman to try to oust him. But he used his relationship with Microsoft to quickly return to power, and the board was refilled with directors more favorable to him. Then, earlier this year, OpenAI’s safety teams collapsed as staffers lost faith in Altman and left the company.
Now, Altman has taken the final step in consolidating his power, stripping the nonprofit board of its control entirely. The board still exists, but it will have no teeth.
“It seems to me that the original nonprofit has been stripped of its powers and reinterpreted so that its mission is fully aligned with profit,” Wu said.
Profit may be what Altman desperately needs for his company. Despite the confident blog post he published this week, in which he claimed AI could help “fix the climate, establish space colonies, and discover all things physics,” OpenAI is actually struggling. The company has yet to find a clear path to commercial success for its models, which cost hundreds of millions, if not billions, of dollars to build. Restructuring as a for-profit company could help it attract the investors it needs.
But the move has some observers, including Musk himself, asking: How could this be legal?
If OpenAI removes the profit cap, large sums of money that would have gone to the nonprofit, expected to reach billions of dollars, would instead be directed to investors. Because the nonprofit exists to represent the public, that effectively means taking billions of dollars away from people like you and me. As some have pointed out, it looks a lot like theft.
“If OpenAI were to retroactively remove the cap on investor returns, it would effectively transfer billions of dollars in value from the nonprofit to for-profit investors,” said Jacob Hilton, a former OpenAI employee who joined the company before it transitioned to the capped-profit structure. “Unless the nonprofit is properly compensated, this would amount to a money grab. In my view, such a move would be incompatible with the OpenAI Charter, which states that OpenAI’s primary fiduciary duty is to humanity, and I don’t understand how the law could allow it.”
However, OpenAI’s structure is so unprecedented that the legality of such a transition may be genuinely murky. And that may be exactly what the company is counting on.
Asked for comment, OpenAI pointed only to a statement given to Bloomberg, in which a company spokesperson said that OpenAI remains “committed to building AI that benefits everyone,” adding, “The nonprofit is core to our mission and will continue to be.”
The take-home message is clear: regulate, regulate, regulate.
AI safety advocates say we need to pass regulations that provide some level of oversight over big AI companies, like California’s SB 1047, which Gov. Gavin Newsom must either sign or veto in the coming days.
Well, Altman has just made their case for them.
“The public and regulators should recognize that, by default, AI companies will be incentivized to ignore some of the costs and risks of AI deployment, and those risks can be huge,” Wu said.
Altman has also validated the concerns of former employees who published a proposal calling for employees at major AI companies to have a “right to warn” about advanced AI. “AI companies have strong financial incentives to avoid effective oversight, and we do not believe that bespoke corporate governance structures are sufficient to change this,” the proposal reads.
Clearly, their judgment was correct. OpenAI’s nonprofit was supposed to reign over the for-profit arm, but Altman has simply upended that structure.
After years of sweet-talking the press, the public, and policymakers in Congress, insisting that OpenAI wants regulation and values safety over money, Altman has stopped pretending. He is showing everyone his true colors.
Governor Newsom, are you seeing this?
Members of Congress, are you watching this?
World, are you seeing this?