What's Sam's fault, man?

It's hard to be a Sam in Silicon Valley

The greatest mystery novel of 2023 just dropped: OpenAI announces leadership transition (November 17, 2023). We're here to solve it.

Update - November 18, 2023: $ilicon Valley has spoken. Sam may be rejoining OpenAI.

Update - November 19, 2023: We've done the research. We've read thousands of comments across hundreds of threads, faced the pay wall on dozens of news articles, and combed over the facts. This page is a comprehensive primer on what happened, why, and what happens next.

Update - November 20, 2023: Emmett Shear is replacing interim CEO Mira Murati. Looks like Sam might not be coming back after all.

Update - November 20, 2023: Sam and friends are joining Microsoft.

Update - November 20, 2023: Chief Scientist Ilya Sutskever, supposedly the main instigator, apologizes and wants to reunite.

Update - November 20, 2023: Hundreds of OpenAI employees are threatening to resign immediately unless the board is replaced. In the ultimate Uno Reverse move, Ilya Sutskever is one of the employees threatening to quit.

Here's the short answer 👇

Sam was likely fired because of his focus on rapidly expanding OpenAI as a software business rather than as an organization creating safe artificial general intelligence (AGI) for humanity. His firing represents a philosophical divide across the entire company. Ilya Sutskever (chief scientist) may have instigated it. Greg Brockman (president) quit soon after, and other employees followed suit.

Most employees and other stakeholders (Microsoft, venture capitalists, etc.) were left entirely in the dark about Sam’s firing. There are two likely outcomes moving forward: Sam and his followers start their own AI company or Sam rejoins OpenAI due to massive external pressure from stakeholders.

If you want to understand why this all happened, read the primer below.

Here's the long answer 👇

Let’s start at the beginning. Or, I guess, the end?

Sam was fired on November 17, 2023 by the OpenAI board of directors.

> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.


In other words, he withheld something (or several somethings) from the board, and he did so consistently. His behavior interfered with the board’s responsibilities, and they lost confidence in him.

Let’s ignore the elephant in the room — what exactly he wasn’t candid about — for now.

Focus on the facts: who’s on the board, what are their responsibilities, and what would make them lose confidence.

OpenAI's Board

OpenAI’s six board members at the time:
- Sam Altman (CEO, fired)
- Greg Brockman (President, resigned)
- Ilya Sutskever (Chief Scientist)
- Adam D’Angelo (Quora CEO)
- Tasha McCauley (Entrepreneur)
- Helen Toner (Academic)

The first three members are key founding employees of the company. Greg Brockman helped start the company both as an engineer and as a non-technical leader (see https://blog.samaltman.com/greg). Ilya Sutskever helped start the company as its research head.

Adam, Tasha, and Helen do not work at OpenAI, are not compensated, and do not hold equity. Adam D’Angelo was Facebook’s CTO before leaving to start Quora. Tasha McCauley is an engineering entrepreneur who founded Fellow Robots and now works at GeoSim Systems. Helen Toner directs research grants at Georgetown University and advises on AI policy.

OpenAI's Board Responsibilities

To understand the board’s responsibilities, you first need to understand OpenAI. It’s not a simple company nor a single company. To some degree, it’s not even a company.

It starts at OpenAI Inc., a non-profit charity. This is the kingpin. The charity controls all other entities. The charity’s goal is incredibly simple: to create safe artificial general intelligence (AGI) that benefits all of humanity. The board works for the charity.

The charity controls the other entities indirectly through OpenAI GP LLC. This management entity acts as the general partner of OpenAI LP and also controls OpenAI Global, LLC.

OpenAI LP is essentially a way to maintain majority ownership of OpenAI Global and distribute ownership among stakeholders. The charity, the OpenAI employees, and certain investors all have equity in OpenAI LP. OpenAI LP owns the majority of OpenAI Global.

OpenAI Global is ultimately the “business” entity of OpenAI. It’s a capped-profit, for-profit company that can raise funding and operate outside the legal bounds of a charity. Microsoft is a minority owner of OpenAI Global.
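As a rough sketch (our reading of the structure described above, with entity names simplified), the chain of control can be modeled as a tiny tree: everything ultimately traces back to the non-profit charity at the top.

```python
# A simplified sketch of the OpenAI entity structure described above.
# Parent entity -> entities it controls or holds a stake in.
controls = {
    "OpenAI Inc. (non-profit charity)": ["OpenAI GP LLC"],
    "OpenAI GP LLC": ["OpenAI LP", "OpenAI Global, LLC"],
    "OpenAI LP": ["OpenAI Global, LLC"],  # majority owner of the business
    "OpenAI Global, LLC": [],  # capped-profit "business"; Microsoft is a minority owner
}

def chain_of_control(entity, graph):
    """Return every entity ultimately controlled by `entity`, top-down."""
    reached = []
    for child in graph.get(entity, []):
        if child not in reached:
            reached.append(child)
            reached += [e for e in chain_of_control(child, graph) if e not in reached]
    return reached

# The charity sits at the top: it reaches every other entity.
print(chain_of_control("OpenAI Inc. (non-profit charity)", controls))
```

The point of the sketch is the asymmetry: money flows up from OpenAI Global, but control flows down from the charity, which is why the board could fire the CEO despite investors' objections.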

Confused? Don’t worry. OpenAI’s structure is so complex because its very existence is complex: it has a massively charitable mission that is insanely expensive to achieve. The goal of the corporate structure is to keep sight of the mission while gaining access to the enormous amounts of capital needed to achieve it. People don’t typically give billions of dollars to charities. They do give them to businesses. So, OpenAI became a charity that controls a money-making business that people can invest in. All for the sake of furthering their research.

Remember that point: this is all for the sake of furthering their research into safe AGI that benefits all of humanity. The charity’s goal, and thus the board’s goal, is not to make money. It is not to build the next iteration of ChatGPT. It is **not** to build software-as-a-service. It is not even to research large language models (if LLMs turn out to be a dead end on the road to AGI).

The board’s responsibility is to ensure that OpenAI is focused on building safe and beneficial AGI. This is spelled out extremely clearly on the website itself:

> The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members. See Section 6.4 for additional details.


> Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.


OpenAI's Board Loses Confidence

We now know who’s on the board and what they’re there to do. So, why might they lose confidence in their CEO?

Well, what if their CEO recently hosted a giant sales event (OpenAI DevDay) launching several flashy AI products that have nothing to do with AGI? What if their CEO, by promising these products, is leading the company to work on AI software as a business, not AI software as AGI research? What if they thought their CEO was more interested in making money and building a business than in pursuing the charity’s mission?

Yeah, their CEO did exactly those things. It’s not hard to see why they might lose confidence in Sam.

This entire drama can be summarized in one sentence: OpenAI is not intended to be an AI software business, but the board was convinced that Sam was turning it into one, so they removed him.

And back to the elephant in the room — what exactly wasn’t Sam candid about? It’s no secret; it’s exactly this. Sam is supposed to be driving the company towards AGI research. Instead, he’s driving it towards profit. He never outright communicated that to the board; his actions spoke for him. The red herring in this story is the idea that Sam was fired for lying about something or covering something up. The truth is not nearly as villainous.

Who’s in the wrong?

Where this story gets complicated is the question of who’s wrong. Is Sam a profit-thirsty tyrant leading the charity in the wrong direction? Or is he simply ensuring that OpenAI has the funding it needs in perpetuity by building the business out as a platform for the rest to sit on?

Is the board making practical decisions to course-correct and re-align on fundamental charity values? Or is the board rash, impulsive, and failing to realize the massive consequences of a decision that might end the charity altogether?

Well, no one is wrong. No one is right. It all depends on your personal opinion. There’s no skeleton in the closet to reveal. There’s no definitive answer. You might as well ask what ChatGPT thinks should have happened.

What happens next?

EDIT: If you've seen the latest updates, you know that Sam and friends are joining Microsoft. We got it... half right. He isn't rejoining OpenAI, but he also isn't building anything from scratch. Microsoft represents the original funding and infrastructure, will likely acquire the original talent, and, most importantly, is a for-profit company. The stars aligned for Satya Nadella.

There are two likely scenarios moving forward. Either Sam and his gang of like-minded friends go off to start their own AI business — this time an actual business and not a charity — or Sam is reinstated at OpenAI because of pressure from external and internal stakeholders. Of the two, we believe in the latter.

Whatever the original intentions of the charity and the hopes of the board, OpenAI is now a money-making machine. Powerful people will fight to keep that machine going.

Microsoft has a vested interest in ensuring that OpenAI continues down its current path. It also has significant bargaining power — OpenAI runs on Azure and billions of dollars of Microsoft investment.

The threat of talent leaving, either for monetary or personal reasons, is a significant bargaining chip. Not all employees are altruistic. OpenAI refocusing on its original mission may result in a significant loss of future earnings potential.

We don’t think Sam will start his own AI company. There is far too much momentum behind OpenAI, its technology, and its infrastructure to abandon it for a similar, from-scratch venture.

Given the above bargaining power, it is more realistic for Sam to be reinstated. Afterwards, the current board would likely be removed. The mission of the company is unlikely to change on the surface. It’s great branding. It’ll just quietly take a backseat to the business.

In any case, no matter what happens, OpenAI is forever changed. And it changed long before any of this drama.

Here were the earliest theories 👇


- 👋 Chief Scientist Ilya Sutskever pushed Sam out.
  - Theory: Sam’s growing influence was a threat. His potential focus on fame and fortune misaligns with the company mission.
- 💸 There’s a lot of money on the table.
  - Theory: OpenAI was once a non-profit organization and still claims to hold a non-profit mission. Sam compromised on non-profit ideals. Or, conversely, the board is compromised, and Sam somehow stands in the way of profit, a lucrative acquisition, or other financial upside.
- 🔥 ChatGPT is unsustainably expensive to run.
  - Theory: Sam lied about the level of financial resources required to run ChatGPT and the technology behind it. This lines up with the decision to pause new premium ChatGPT Plus subscriptions.
- 🕵️‍♀️ Data is being used illegally, inappropriately, or both.
  - Theory: Sam is a villain. Copyright or otherwise protected data is being used to train models, breaching legal contracts and violating users’ security and privacy.

On the fringe

- 🏴‍☠️ Illegal or immoral personal activities
  - Theory: Sam has issues in his personal life (e.g. Annie Altman) that the company wants to distance itself from.
  - Why fringe: The information has been out for years, and the press release would have been worded differently. Do tech companies really care about morals?


- 🧠 GPT-5 is replacing its overlord.
- 🤖 Sam has been running OpenAI using ChatGPT and hallucinated to the board one too many times.