OpenAI’s moat is stronger than you think

Ravi Parikh
Co-founder & CEO
May 8, 2023
5 min read

Recently, an internal memo leaked from Google with the headline “We Have No Moat, And Neither Does OpenAI.” Its overall message is that neither Google nor OpenAI will be able to build a defensible business model around huge AI models. The memo’s author thinks that the pace of innovation in open source AI models will soon outcompete Google, OpenAI, and every other big closed-source player.

The memo makes several compelling arguments, but I don’t think its conclusion will hold up long-term. There are several reasons why I think OpenAI will have a durable “moat,” and why the majority of usage of general-purpose AI models in the future will be concentrated in a few large companies.

High-quality AI models are hard to build

When OpenAI started out, very few people believed that merely scaling up data and compute would be mostly sufficient to build intelligent systems. Until quite recently, the majority of AI researchers thought that intelligence would be significantly more difficult to achieve. Instead, we live in a world where GPT-4 was surprisingly easy to build.

This is one of the many reasons why people think OpenAI’s technology won’t be defensible. After all, anyone else could do the same thing, right?

But GPT-4 being surprisingly easy doesn’t mean that it was easy in an absolute sense. The surprising part is that building GPT-4 looked really similar to building ordinary SaaS software without requiring tons of new research breakthroughs. But ordinary SaaS software is still pretty hard to build and quite defensible. Almost every SaaS company is able to command 50-80% gross margins, which isn’t something you’d see if they were easily commoditized.

GPT-4 didn’t require novel research, but it was more complicated than just buying tons of GPUs and scraping the whole internet and then hitting “run.” There are hundreds of tiny decisions, hacks, and kludges required to make GPT-4 work as well as it does. While OpenAI is pretty private, there are some hints of what they’ve done that will be hard to replicate:

  • Reinforcement learning from human feedback (RLHF): OpenAI has built a team of people who provide examples of desirable model outputs and rank what the model does. It’s not trivial to recruit a large team of people to do this kind of work, build infrastructure for them to do this work effectively, and incorporate this data back into the model. Ironically, one of the defensible parts of OpenAI is this complicated human system they’ve built to help improve their AI.
  • Incorporating feedback from users: over 100 million people have logged into ChatGPT and interacted with it. Every interaction with ChatGPT is more data that OpenAI can use to improve their model, as they have publicly noted. You can opt out of this, but most users are opted in by default. No other AI model has achieved anywhere near this level of scale.
  • Data filtering: GPT-4 isn’t just based on a naive dump of all content on the internet. They did a ton of work to filter the data before training on it, which they allude to in their GPT-4 blog post. There are hundreds of small decisions and pieces of tribal knowledge that OpenAI has built up over time on how to do this well, just like any complex software company innovating in a new domain.

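The RLHF pipeline described above can be sketched at the data level: labelers rank candidate outputs for a prompt, and each ranking expands into pairwise preference records that a reward model can then train on. This is a simplified, hypothetical illustration; the record format and function names are assumptions, not OpenAI’s actual pipeline.

```python
from itertools import combinations

def ranking_to_preference_pairs(prompt: str, ranked_outputs: list) -> list:
    """Expand one labeler ranking (best output first) into pairwise
    (prompt, chosen, rejected) records for reward-model training."""
    pairs = []
    # combinations preserves order, so `better` always precedes `worse`
    # in the labeler's ranking.
    for better, worse in combinations(ranked_outputs, 2):
        pairs.append({"prompt": prompt, "chosen": better, "rejected": worse})
    return pairs

# Example: a labeler ranks three candidate completions, best first.
pairs = ranking_to_preference_pairs(
    "Explain photosynthesis simply.",
    ["clear answer", "okay answer", "off-topic answer"],
)
print(len(pairs))  # 3 pairwise comparisons from one ranking of 3 outputs
```

One ranking of n outputs yields n·(n−1)/2 comparisons, which is part of why even a modest labeling team produces a lot of training signal.
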
The memo’s author writes that there is “no secret sauce” underlying state-of-the-art AI models. And this is completely correct, but “secret sauce” is rarely how defensible software businesses are made. Google’s original “secret sauce,” PageRank, was publicly known for Google’s entire existence, and yet Google remained a defensible business. As the memo shows, it may be possible to cheaply replicate something almost as good as GPT-4 is today. But GPT-4 has ongoing improvements sourced from the millions of people using it daily and the thousands of businesses hitting its APIs and filing support issues. By the time you fork one of these open source models and integrate it deeply into your business, GPT-4 will have undergone several more rounds of improvements.

In the memo, the author points to several impressive results where open source models approach GPT-4’s performance. But these comparisons are judged by using GPT-4 itself to grade the other models’ outputs. I’m skeptical that Koala or any of these other models is actually as powerful as GPT-4 / ChatGPT.

Last-mile delivery is valuable and hard

The leaked memo mentioned that “people will not pay for a restricted model when free, unrestricted alternatives are comparable in quality.” I don’t believe that there will be free alternatives that actually are comparable in quality (as mentioned above), but even if there are, I also don’t think it will be easy for the majority of consumers or businesses to use free models off the shelf.

OpenAI has provided two very high quality interfaces to their models: ChatGPT (for consumers) and the OpenAI API (for businesses).

I was caught completely by surprise by the popularity of ChatGPT. My friends and I had been using OpenAI Playground for a couple of years already, so it was astonishing that merely putting a chat UI in front of GPT-3.5 led to 100 million additional people discovering what had already existed for a while. But the form factor really matters. Even I use ChatGPT a lot more than I ever used Playground just because of how much simpler it is to access. This last-mile work to create a polished consumer experience should not be underestimated.

On the B2B side, having a robust API that meets enterprise SLAs is not trivial. Companies like Stripe and Twilio have built multi-billion dollar, high-margin businesses just by providing rock-solid APIs to commodity services. Snowflake is just a “thin layer” on top of cloud providers like AWS, and yet people still pay a huge premium for their service. Just like in B2C, the last-mile work of delivering AI models to businesses in the way they want to consume them is a durable, defensible business model with plenty of precedent.

Using an OSS model might save on cost and allow for more customization, but for most businesses, paying for a market-leading API on top of a continuously improving state-of-the-art model is a much better bet.
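The last-mile contrast is easy to see in code: consuming a managed API is a few lines, while self-hosting an open source model means owning GPUs, serving infrastructure, and uptime. The sketch below follows the general shape of a chat completions request; the function name and payload details here are illustrative, not an official client.

```python
# Build the kind of payload a hosted chat completions endpoint expects.
# Everything below is a minimal sketch; names are illustrative.

def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Assemble a chat completions request payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request("Summarize this support ticket.")
print(request["model"])  # gpt-4

# With the (pre-1.0) `openai` Python package installed and an API key set,
# the actual call would be roughly:
#   import openai
#   resp = openai.ChatCompletion.create(**request)
#   print(resp["choices"][0]["message"]["content"])
```

That is the entire integration surface for the hosted option; the equivalent self-hosted path starts with provisioning inference hardware before a single request is served.
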

Brands are durable and defensible

Finally, and probably least importantly but still noteworthy, the OpenAI brand itself has defensibility. OpenAI has built a household name basically overnight. They’ve eaten up so much AI mindshare that competing offerings will have to work very hard to dislodge them.

And uniquely, OpenAI has achieved product-market fit as both a B2B and a B2C business incredibly fast. There isn’t much precedent for this, but I imagine there’s a strong mutually reinforcing effect where ChatGPT’s consumer users naturally turn to OpenAI’s B2B offerings as a default option.

As an example, our company Airplane has been using both Anthropic’s and OpenAI’s models to build AI features into our product. Their performance is comparable, and Anthropic is even better or faster in some situations. But almost every AI company founder I meet mentions using OpenAI’s APIs as a default, only exploring alternatives when they hit specific limitations. Almost every “Show HN” or Product Hunt launch I see that uses AI is using OpenAI under the hood (when it’s mentioned).

This is how most B2B and B2C purchasing decisions look. Most businesses use Salesforce as their CRM unless they find a specific reason not to; they don’t do an exhaustive analysis of the top 20 CRM vendors and pick the best one. Once a brand has become a “default” it’s hard to dislodge.

Overall, I’m skeptical that open source offerings will meaningfully eat away at OpenAI’s edge. I think the near-term future (next ~5 years) will see <10 major corporate vendors accounting for the vast majority of global usage of large language models.
