Capital and AGI
It's hard to think of a scenario where a single AI company and humans both win
It’s time to explore the nuances I mentioned in I’m STILL Bearish On OpenAI:
I used to say that winning the AGI/ASI race is the only thing that can save OpenAI, but now I’m not even so sure of that.
Let's say that AGI is world-changing in the way all the AI acolytes have been preaching about: the type where you go to bed with AGI and wake up with an ASI Machine God. OpenAI needs to get there first, and it somehow needs to keep the thing under its control. If it manages both, it can use that AI to basically own the economy, although there are nuances here that I will explore in a future article.
The first factor up for debate is whether there is an upper limit to intelligence. Physics obviously places some limit on how much intelligence can be packed into computers, but is there a point well before that where the marginal gains stop mattering? In other words, is an AI with an IQ of 10,000 really that much better than one with an IQ of 9,900? And what if an AI with an IQ of 1,000 is all you need to reach post-scarcity? At that point, it wouldn't really matter who got there "first." The other competitors would catch up shortly after, and then you'd have a mix of AIs competing with one another. In this scenario, costs would be driven down, and there wouldn't be much money left for the model producers to make.
Now, let's say that there is no upper limit to intelligence. A smarter AI is genuinely useful, perhaps for reasons we can't even understand, so being first matters: exponential gains would make it impossible for any other company to catch up. In effect, being first would hand you a monopoly on intelligence. Even in this scenario, I still don't see how you make money while avoiding a dystopian outcome.
The first possible outcome in this scenario is that AI ushers in a complete post-scarcity utopia, removing the need for human labor or money. In that world, there is obviously no money to be made. Maybe those closest to the AI get extra privileges, so Sam Altman can poof up a new mansion while the regular guy can only poof up some new spoons, but that is plainly a dystopian outcome.
Now, let's say that money is still a thing because society can't function without it. There still probably wouldn't be much need for human labor, considering we'd have infinitely smart AIs doing things like curing all diseases and handling intergalactic travel. The money would likely come via UBI out of the AI company's pocket, either directly or indirectly through taxes. In other words, the money used to buy their products would come from them in the first place. I guess this is "profitable," but it's just circular, and most definitely dystopian. Please, Mr. Altman, can I have a bit more allowance this month?
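To make the circularity concrete, here's a minimal sketch of the money loop as I understand it. Everything in it is made up for illustration (the numbers, the `simulate_ubi_loop` function, all of it); the only assumptions are that the AI company is effectively the sole producer, that households have no income other than UBI, and that they spend all of it on the company's products.

```python
# Toy model of the circular UBI flow described above.
# All figures are hypothetical; this is an illustration, not a forecast.

def simulate_ubi_loop(company_cash: float, ubi_payout: float, years: int) -> None:
    for year in range(1, years + 1):
        company_cash -= ubi_payout   # the company funds UBI, directly or via taxes
        revenue = ubi_payout         # households spend their entire UBI on its products
        company_cash += revenue      # the "profit" is just the UBI coming back home
        print(f"year {year}: company cash = {company_cash:,.0f}")

simulate_ubi_loop(company_cash=1e12, ubi_payout=5e11, years=3)
# The company's cash never grows: every dollar of "revenue" is a dollar it paid out first.
```

However you tune the numbers, under those assumptions the company can only earn back what it already handed out, which is the sense in which the whole arrangement is circular.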
Even if you avoid having to pay people UBI by giving them make-believe jobs like digging holes or whatever, that is still in effect a UBI because the AI company would have to be the one paying the salaries. So, again, there would be a lot of “profit,” but it’s not exactly a utopia.
Ok, let's tone it down a bit and say that being first to AGI is important, but there is no fast takeoff. The "winner" would have AGI because of some breakthrough no other company found, and thus a durable advantage, but there is no ASI and no real post-scarcity utopia. I still don't see how this can simultaneously be a good thing for the AI company (let's say it's OpenAI) and for society as a whole.
If OpenAI wins AGI, every company will have to use OpenAI's models to stay competitive. These models supercharge productivity to the point that human labor is no longer required, but they also give every company the exact same capabilities. There's no more competitive edge, costs are driven to near zero, every company has to pay the OpenAI tithe, and because all or a large portion of the workforce has been laid off, nobody has extra spending money lying around. It's basically a race to the bottom, with companies now occupying a smaller slice of a smaller pie. OpenAI would still be able to extract quite a lot of money before things really bottom out, but it's a dystopian future, and if you're an investor, what would even be the point of having a lot of money? It'd be like being rich in the Sahara Desert.
And, as I broke down above, it's the same outcome if you give everyone make-believe jobs instead of a UBI, except that the companies make a bit less money because they have to pay for unnecessary human labor.
Maybe you can say that driving costs to near zero will usher in a post-scarcity world. Ok, but wouldn't that just kill the companies or, at a bare minimum, force them to fire all their humans? I think it's much more likely that companies merge and form harmful monopolies. Basically, you're back to the ASI scenarios, except instead of one super AI company, we'd have a bunch of powerful conglomerates supplying us with make-believe jobs or UBI, and goods at some made-up price. How is this a good outcome?
I could definitely be missing something here. If I am, please tell me. But right now, the only way I can see a single AI company and humans both winning is if that company creates an AI that is simultaneously strong enough to beat all competitors but not strong enough to put all humans out of work. In other words, an AI that somehow keeps getting smarter yet somehow never meaningfully surpasses humans. It's so outrageous that I feel stupid even typing it.
The best outcome for all of us would be that there is an upper limit to intelligence so that no single AI company can dominate. Basically, we need AI to turn into a commodity. It’ll still be a much different world, but at least we won’t have to live under Altman or Pichai rule.