If you’re an AI skeptic, I don’t know how you can use OpenAI’s new o3 and o4-mini models and not want to immediately apply clown makeup.
I used o4-mini for the first time two days ago. My brother and I had it debug an ESLint issue for Phyt, our startup. This wasn’t a small issue: 2,700 errors in total, and no other AI model came even close to fixing it. My brother, the CTO, estimated it would have taken him ~8 hours to fix it himself.
We fired up o4-mini on Copilot just to see what it could do. Our hope was that it could maybe point us in the right direction. Instead, twenty minutes later, all 2,700 errors were done and dusted.
But that’s not what had our jaws on the floor. It was the way it did it that had us questioning everything. It didn’t just brute-force a solution (looking at you, Claude 3.7). The first five minutes were spent just taking in the codebase and thinking about the problem. The next ten were spent writing code, testing it, and going back to try something else when it didn’t work. And the final five were spent cleaning it all up and making sure everything still worked as intended. There was even a moment when my brother was about to jump in and take control, convinced it was about to make a mistake, only for it to course-correct on its own right before he did. It felt like God himself had taken control of the code. All we had to do was keep our monkey hands off the mouse, and the machine messiah would do the rest.
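(For scale: “2,700 errors” is just what ESLint reports when you lint the whole repo. Here’s a minimal sketch of how you keep score on a sweep like this, using ESLint’s Node API; the file glob is illustrative, not Phyt’s actual layout.)

```ts
// Minimal sketch: tally remaining ESLint errors across the repo.
import { ESLint } from "eslint";

const eslint = new ESLint();

// Illustrative glob; swap in your project's real source paths.
const results = await eslint.lintFiles(["src/**/*.{ts,tsx}"]);

const errors = results.reduce((sum, r) => sum + r.errorCount, 0);
const warnings = results.reduce((sum, r) => sum + r.warningCount, 0);

// A clean run prints "0 errors, 0 warnings remaining".
console.log(`${errors} errors, ${warnings} warnings remaining`);
```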
I’m a skeptic of the economic viability of AI model producers, but I’m not a skeptic of the idea that AI will cause a hell of a lot of change. Still, I’ll admit I’ve had some doubts over whether AGI is actually coming anytime soon (why would OpenAI waste their time on an app store and social media?). Those doubts are gone now. I suppose everyone has their come-to-God moment. “What did Ilya see?” and whatnot. Well, hunched over watching a machine delete 8 hours of work faster and better than I ever could, I’ve had mine.
I don’t care if it still makes a silly mistake here and there. It has a literal genius-level IQ, codes better than all but the very best, and is better than the median writer, mathematician, scientist, philosopher, lawyer, doctor, therapist, etc.
We can politick about whether it’s AGI or not all day, but the debate over whether it’s possible is finished. Whether it was April 16th, 2025, or some other day in the near future, AGI is 100% coming.
This is a huge shift, because instead of debating the best way to get to AGI, or whether it’s even possible, we must now turn our attention to something more important: what happens after?
It’s hard to overstate the importance of this question. This is like the Test in The 100. Get it right, and you have the powers of God at your disposal. Get it wrong, and it’s probably so over, and there won’t be a “we’re so back” in the future. There’s a reason Google is now hiring Post-AGI researchers.
There is just so much to work out, even beyond the all-important issue of alignment.
If I’m right, and massive job displacement is inevitable, what happens to people? What do they do for money? Where do they find meaning in their lives? What happens to the stock market without a perpetual 401k bid? What happens to the government without revenue from income taxes? What even is the point of going to school, or of working hard at anything, if there is a machine that is smarter than you, faster than you, and never sleeps? How do you climb the ladder in a world where you can no longer outsmart or outwork the incumbent?
What happens to the government? Do we start outsourcing governing decisions to the Machine God? Do we even have a choice? Let’s say France decides to have AI run its government, and it starts absolutely kicking ass. What else can you do but also use AI? Will this lead to one united world government, because we are all using the same AI? Will it mean that OpenAI or whoever else is the de facto ruler of the country and/or the world? How do you govern a corporation more powerful than the government? What rights do you give an AGI?
What happens to culture? Already we’ve seen a convergence in culture in the name of efficiency and efficacy. All the songs sound the same, all the movies are reboots and sequels, every baseball swing looks the same, etc. Will AI make this worse, or better? Will there even be a role for human creativity anymore? How could there be, if the AI knows better than you and is more creative than you?
What happens to relationships? If people are already addicted to porn and falling in love with Character.AI characters, why wouldn’t they fall in love with an AI that looks like a super hot human, but with all the empathy and charisma that superintelligence grants? Is this just it for human-human love? Will we see human/robot babies in the future?
What happens to the economy? If AIs become the primary stock traders, will we see a truly efficient market for the first time? What does it mean if every company uses the same AI? What happens if we actually hit double-digit GDP growth every year, and where would those gains flow? Will money even still matter? What happens to the poor venture capitalists?!
I could go on and on, but you get the point. Everything is about to change, and we need to really think about what exactly that means.
But that raises the question.
If AGI is coming, what’s the point? Couldn’t we just wait and ask it instead?
"If AGI is coming, what’s the point? Couldn’t we just wait and ask it instead?"
Incredible mic drop.