I'm Bearish on OpenAI
A shift toward products and a research brain drain should ring alarm bells
At its spring demo day on Monday, OpenAI unveiled its latest flagship model, GPT-4o. The model itself is impressive:
And some of the demo videos are genuinely breathtaking:
Couple this with the news that OpenAI is nearing a deal to put ChatGPT’s technology into the iPhone, and it seems like there are plenty of reasons to be bullish on OpenAI. Despite these new developments, however, I have never been more bearish on OpenAI than I am right now. It’s to the point where I’m wondering why the hell people were investing at their latest $80B valuation.
The reason for my bearishness is simple: OpenAI, the software company, will ultimately lose to Apple, Google, and Meta. OpenAI, the hardware company, will also ultimately lose to Apple, Google, and Meta. Their only hope is to be first to AGI. However, a severe brain drain and a pivot to compute-draining, consumer-facing products lead me to believe they aren’t all that confident in their ability to build AGI. And if that’s the case, then they are in real trouble.
The Compute Problem
The one thing that everyone can agree on with regard to AI is that it is extremely expensive to do well. Big Tech companies are spending unfathomable amounts of money on compute, data, and energy:
Meta anticipates $35-40B in capital expenditures in 2024, with the majority going to AI hardware and data centers.
Microsoft invested $11.5B in capital expenditures in 2023.
Google spent $11B in capital expenditures in Q4 2023 alone, and they plan to spend significantly more in 2024.
Obviously, this spending isn’t for nothing. You need the data centers to keep your AI products running, but it also looks like scaling could be all you need to get to a model that, even if not outright AGI, is pretty damn close:
OpenAI also subscribes to “scaling is all you need.” Altman believes it, as seen in his famous attempt to raise $7T to build out OpenAI’s chip infrastructure, and Ilya Sutskever has been preaching the scaling gospel for years. They aren’t building a $100B supercomputer with Microsoft for shits and giggles.
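For context, these scaling laws are empirical curve fits, not physics. One common formulation (the Chinchilla fits from Hoffmann et al., 2022; the constants are fitted, and their exact values aren’t the point here) says a model’s loss falls predictably as parameters and training data grow:

$$ L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

Here N is parameter count, D is training tokens, E is an irreducible loss floor, and A, B, α, β are fitted constants. More compute buys you bigger N and D, and bigger N and D buy you lower loss. That one line is the entire economic case for hoarding chips.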
But, as any Nvidia investor knows, all of this competition means there is a constant shortage of adequate chips. Yes, you can argue that this is a problem capitalism will solve, and you can already see some evidence of that happening on both the software and hardware sides. But you can just as easily argue that acquiring sufficient compute will remain a persistent problem.
Even if the software and hardware continue to improve, it’s still in the best interest of companies and nations to stockpile as many chips as possible. The more chips you have, the greater your ability to integrate AI into your products, and if the scaling laws hold, the more chips you have, the more powerful your models will be. Basically, unless AI development starts stalling or becomes deeply unprofitable, there’s no reason for chip demand to go down. Couple that with the fact that TSMC, the dominant chipmaker, operates in the shadow of the always-tenuous Taiwan Strait¹, and you can easily envision a scenario where chip supply can’t meet demand.
Perhaps the best analogy for chips is another economically crucial resource subject to ever-growing demand and political interference: oil. There’s a lot of oil, and we keep getting better at extracting it:
But we also keep consuming more:
And thus, the price has also steadily gone up:
I expect something similar to happen with chips. Even if capitalism drives the cost of compute down, those savings will be nullified by the hunt for profits and the scaling laws driving the demand for compute up.
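To see why, here’s a back-of-the-envelope sketch. The growth rates below are illustrative assumptions, not measured figures: even if the price of a unit of compute falls 30% a year, demand that roughly quadruples annually still drags total spend upward.

```python
# Illustrative only: hypothetical annual rates, not measured data.
cost_per_unit = 1.0     # normalized price of one unit of compute at year 0
units_demanded = 1.0    # normalized compute demanded at year 0

for year in range(1, 6):
    cost_per_unit *= 0.70    # assume compute gets 30% cheaper each year
    units_demanded *= 4.0    # assume compute demand quadruples each year
    total_spend = cost_per_unit * units_demanded
    print(f"Year {year}: total spend = {total_spend:.1f}x the year-0 baseline")
```

Under those made-up numbers, total spending still grows about 2.8x per year. Cheaper compute doesn’t mean smaller bills; it means everyone buys more of it.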
A Losing Battle
Software doesn’t exist in a vacuum. Today, it is primarily served to you through phones and laptops. In the future, it will likely be phones, laptops, and maybe AR/VR headsets. Apple and Google dominate the phone market, Apple and Microsoft dominate the laptop market, and Apple and Meta dominate the headset market. All of these companies are building their own models:
Apple is working on their own on-device LLM.
Microsoft, OpenAI’s biggest backer, is working on their own model, MAI-1. The project is helmed by Mustafa Suleyman, the acqui-hired former CEO of Inflection AI.
Google is already damn close to OpenAI in capabilities, and they are putting their models into their phones.
Meta is the creator of the most powerful open-source model, and their smart glasses already have multimodal AI capabilities.
This is why I don’t really see the bullishness behind Apple’s reported ChatGPT deal. It’s clearly a stop-gap. Even if Apple never fully catches up to OpenAI’s capabilities (and that’s a big if), they have no economic incentive to pay to use OpenAI’s technology over their own. It’s much more likely that OpenAI will have to pull a Google and pay Apple for the privilege of being on the iPhone.
You can counter that OpenAI’s technological edge will overcome their hardware limitations, but I have a hard time believing this. For one, that edge is already shrinking, and with the amount of money and talent the Big Tech companies have and the brain drain happening at OpenAI, there’s no reason to expect the gap to widen in the coming years. As I/O shows, Google already has basically everything OpenAI has. And second, it’s impossible to overstate the importance of convenience. People are extremely lazy. Just look at Chipotle, an $87B company that makes food you can easily make in your house, or DoorDash, a $47B company that delivers food you can easily pick up yourself. If the iPhone has native AI capabilities anywhere close to OpenAI’s, do you really think people are still going to seek out OpenAI’s products?²
Altman knows that OpenAI is cooked here. It’s why he’s recruited iPhone designer Jony Ive to work on a personal AI device. Maybe they can make it work, but it’s going to be awfully difficult to get people to buy a dedicated AI device if their iPhone already has AI natively built in. Not to mention all of the supply chain advantages Apple has accrued over the last 15-plus years.
This is why it’s so confusing that OpenAI is focusing so much on consumer-facing products. It’s not a battle they are going to win, and it diverts valuable compute resources, which might have been the reason for Ilya’s OpenAI coup:
This leads me to believe that OpenAI’s Zuckerbergian shift away from making the model the product is explained by a loss of confidence in their ability to build AGI. Maybe it’s because the scaling laws are actually bullshit and they need a new research breakthrough to achieve AGI. Maybe it’s because Altman is at his core an operator and not a visionary. Maybe it’s something else. Whatever the case, it’s bearish for OpenAI.
It’s entirely possible that OpenAI releases a new model, blows everybody else out of the water again, and continues going up and to the right all the way to AGI. But right now, there’s an awful lot of pressure on GPT-5 and/or Q*. If they underwhelm, the heat on Altman is going to get awfully warm.
TLDR
OpenAI isn’t going to beat Apple, Google, or Meta in either software or hardware. Their only hope is to be first to AGI.
You’ll likely need a lot of compute and cracked researchers to build AGI.
Compute is an extremely valuable and scarce commodity, and researchers even more so.
OpenAI is wasting compute on consumer products, while their best researchers are leaving.
This makes me think that Altman has either lost the plot or realizes that AGI isn’t attainable in the near future, neither of which bodes well for OpenAI.
Every day without AGI is a day that Google, Apple, and Meta further close the gap on OpenAI.
¹ Yes, I know TSMC is expanding to America and Europe. However, it will still take a while before those fabs match the production in Taiwan.
² And that’s assuming Apple even lets you try.