So considering that OpenAI is now the most valuable private company on the planet, it’s kind of weird that lately they’ve been flailing around like a tube man in a used car lot. The super secret hardware device they’ve been working on is on ice due to technical difficulties, and after enormously hyping GPT-5 as the coming of AGI, whatever that means, when GPT-5 was finally released after multiple delays, it was met with scorn and disappointment by its own users. And so after a long run of bad press, to prove the haters wrong, Sammy Altman released Sora 2, a TikTok knockoff chock-a-block with AI slop.
And you know, the first time you open Sora 2, it’s kind of superficially impressive to see all these slick videos made with just AI prompts, but hey, wasn’t AI supposed to change the world, cure diseases, and replace millions of workers? So why did OpenAI release this? It’s kind of baffling, isn’t it? Actually, the more I think about it, it seems like Sora 2 is one of the weirdest, and in many ways, worst apps to ever make it into the App Store. And I’ve tried Pimple Popper Lite. So first of all, Sora gives you the option of letting anyone make an AI deepfake video of you if you take the little step of letting them steal your face.
And as if to show why no one should ever do this, Sam Altman himself kindly donated his own likeness for us to have fun with, which means that when you open Sora, almost one out of every three videos you see is Sam being physically, emotionally, and sometimes almost sexually humiliated by his own users. I’m a CEO in a porcelain spa. Skibidoo, skibidop.
Yes, yes, yes. That’s me. Japan is so cool.
Everything is free. Okay, I’m taking these. Oi, bring those back.
Whoa, gotta go. Stop, give them back. Nope, too late.
Please, I really need this for Sora inference. This video is too good. Here’s a little video I made of Sam being slapped around by a venture capitalist.
Please, we need more GPUs. The demand is impossible. I can’t build the models without your funding.
Whatever you want, I’ll do it. Just help us buy the chips. Look at you, begging.
You’re a good little piggy. Oh, and here’s one of him falling down the stairs. Whoa, whoa, ah, ow, ah, holy, ah.
Ow, shit. I’m okay, I’m okay. That was really public.
Oh, he’s my puppet to do with as I please. You know, I thought AI was supposed to make me more productive at work or some shit. But instead, it seems like they made the most powerful bullying tool in human history.
But, you know, it gets worse because you’d think that Sam Altman, a guy who’s been accused for years of building his entire business on the stolen work of others, would know better than to just overtly rip off copyrighted characters and works, right? Wrong. You know how fast you were going? Get out of the car, now. Pika, pika.
Don’t play dumb with me. Open the door. Pika, pika.
Now. Dammit, stop the car. When Sora launched, they just gave users the ability to steal copyrighted IP, just like Pikachu stole that car.
And then they claimed that they weren’t expecting the copyright drama. I mean, you cannot commit a theft of intellectual property this gargantuan and then claim, oopsie, did Sammy make a stinky? No, Sammy, you wanted to make this stinky because all this drama just builds the hype for your fucking app. But now that they’ve been caught, they’ve made it just a little bit harder to use copyrighted works, even though I still see them all the time on the app.
But you know what’s worse than that? Literally about half of the videos I see on Sora are of real people, like Bob Ross and Martin Luther King Jr. I mean, these are real fucking people with families, and those families did not agree for these people’s likenesses to be used. So Sora isn’t just making Nazi SpongeBob, it’s making a tool to defile the memory of the dead. And that is just the beginning of the chaos that Sora is going to wreak on the internet.
Because once everyone has the ability to create a realistic-looking video, it could make it impossible to tell what is real and what is fake on social media or the internet at large. The itty-bitty Sora watermark they pop on the videos isn’t going to be enough to protect reality. Actually, workarounds for that are already easy to find.
I mean, it is trivially easy to use this app to create believable fake news. Literal fake news. Here’s a clip that I probably shouldn’t have been able to make.
In a rare scene amid weeks of fighting, Israeli settlers from nearby communities have crossed the fence under army escort to hand out hot meals and water to hungry families in southern Gaza. Let’s be clear, this is not what is happening in Gaza right now, but it might be in someone’s best interest to make you think that it is happening. And this video is so boring and anodyne, it could actually be taken seriously if posted under false pretenses.
So the potential for something like Sora to be used for political propaganda by bad actors is massive. That means that just by releasing this app at all, OpenAI has made the internet less reliable as a source of information, which in turn means that Sora itself is just f***ing bad for humanity. And they’ve released it anyway.
And that’s before we even talk about the immense amount of racist content that people are using Sora to make. I’m not gonna share any of it or even describe it, but let’s just say that people on Sora are using the app to make content that stereotypes Jews, black people, and other ethnicities in just the most disgusting ways. Like, picture what a 15-year-old racist might make for fun, and yeah, that is what Sora is showing me directly in my f***ing feed. But you know, beyond the privacy issues, the copyright, the societal implications, and even the rampant racism, the app itself just kind of f***ing sucks.
You just end up scrolling by the same small variations on boring memes, like Sam Altman yelling that nothing will happen if you double tap the screen, or different messages, you know, being projected on the outside of the Vegas Sphere. Like, I mean, none of it has any point or any humanity or anything that really distinguishes one clip from the next. If you spend 20 minutes scrolling Sora, you leave it just feeling bored.
I mean, Sora is the number one most downloaded app right now because it’s a novelty, but I’m not sure that AI-generated videos are really gonna be something that people are gonna want to keep scrolling through once the novelty of Nazi SpongeBob or Martin Luther King doing stand-up comedy wears off. And you know, another reason Sora might not exactly have the juice is that the economics of the app make zero f***ing sense. These videos that, you know, millions of users are making to amuse themselves and their friends are costing OpenAI massive amounts of cash to create.
My friend Ed Zitron estimates that it costs OpenAI over $5 to make every single video. That means that they are losing huge amounts of money on every single user. And critically, OpenAI has no idea how they’re going to monetize this app.
Altman himself wrote that we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences. Huh, Sam? Well, maybe that’s because most of the videos your app generates suck ass.
Because sure, when you first scroll through Sora, the videos look pretty impressive, but once you try to create your own, you understand that 9 out of 10 of them just come out bad. They’re not funny, they’re not interesting, they’re not even really watchable. Case in point: remember that video I made of the venture capitalist slapping Sam around? Here’s another variation of that same video that I generated while trying to make the first one.
Please, we need money to buy more GPUs. Could you help us out? That’s a good little piggy. This video cost Sam Altman $5 to make.
These guys are burning forests and draining the last drops of potable water to run data centers that produce an endless barrage of slop videos that are totally unwatchable. And even when they are watchable, even when you finally get a good video and you post it, at best, you’re making a worse, more boring TikTok. And I want to remind you, TikTok gets all of its videos for free.
We’re all out there toiling for the TikTok, uploading videos we make ourselves in exchange for nothing but a little bit of attention. But Sora has to pay through the nose for every single video, and they have absolutely no plan to monetize. Even the brains behind Pimple Popper Lite were astute enough to include ads.
Some say too many. So nothing about Sora makes any goddamn sense as a tech product. It’s expensive to run, it’s harmful to society, it’s boring, and it makes no money.
So it begs the question, why did they make this f***ing thing at all? Because do you guys remember when AI was supposed to solve all of our problems? Remember? They said it was gonna cure cancer, make work easier, give us all a life of leisure and luxury. So what the f*** are they doing? Why did they make a goddamn TikTok clone instead? Nobody asked for that. Well, could it be because they don’t actually have any better ideas? Could it be that Altman and the rest of the AI industry don’t really know what this technology is supposed to be used for at all? Could it be that they’ve been flailing and lying to us, and that Sora shows that the entire AI industry is a bubble primed to burst, and that when it does, it’s going to drag down our entire economy with it? Well, yeah, I think it might, and I’m gonna explain why in this video.
But real quick, if you want to support this channel and all the humans who make this content for your human brain, head to patreon.com slash adamconover. We would love to have you. And if you want to sit in a dark room with some other humans and have the collective social experience of laughing and expressing your humanities together, well, come see me on the road.
I’m taking my brand new hour of stand-up comedy to Tacoma, Washington, Spokane, Washington, Des Moines, Iowa, Atlanta, Georgia, Philadelphia, Brooklyn, New York at the Bell House on November 15th. Then on December 4th, I’ll be in DC at the Lincoln Theater. And after that, Pittsburgh, Pennsylvania from December 5th through 6th.
I’ll see you out there. So let’s talk about how this bubble formed. Three years ago, OpenAI released ChatGPT, and it became the fastest-growing app of all time.
Today, it has around 700 million weekly users. And granted, a lot of those people are just f***ing around. You know, they’re using it to write fetish fan fiction.
They’re making images of the Joker where he’s saying, Hello, I love you. You know, the biggest use case seems to be getting the answer to things you could have just Googled. But whatever, the numbers look big, right? So the CEOs of the big tech companies see this growth, they s*** themselves, and they drop everything to develop their own AI models so they won’t be left behind.
And these companies are now shelling out f***ing ungodly amounts of cash to win the AI race. This year, they’re spending close to $400 billion on building AI infrastructure like data centers and compute power. And they’re projected to keep spending more and more every single year.
The scale of this is unlike anything in the history of technology. More money went into building AI data centers the last three years than went into building the entire interstate highway system over 40 years. Big tech is literally funding the equivalent of an Apollo program every 10 months.
Now, meanwhile, the rest of the economy, you know, the one that all of us live in, well, it sucks ass. The job market sucks, people are cutting back on spending. And that means that the entire economy is being propped up by AI spending.
In the first half of this year, AI investments accounted for all of GDP growth. It’s the same if you look at the stock market. Just seven big tech companies account for over half of gains in the S&P 500 over the last three years.
And hey, remember Trump’s tariffs, which are functionally a tax on the stuff regular people like us buy, and which data shows are starting to really affect the average American? Well, guess what? Trump exempted AI gear from the tariffs because it makes the economy look better and keeps the big tech CEOs in his sweaty little pocket. The fact is, if it weren’t for all of this AI investment, we’d actually be in a recession right now. Our entire economy has become one enormous bet on AI.
And you know, when you make a bet that big, you better be f***ing sure it’s gonna pay off. So what is the bet exactly? Well, according to Mark Zuckerberg, he needs all that money to create what he calls superintelligence, or AI that’s smarter than humans. Now, I don’t believe for a f***ing second that that’s gonna exist, because the last three times Mark told me he was building something, he was f***ing lying.
But hey, let’s say he makes it happen. What is superintelligence for, Mark? Well, Mark says that it’s a tool to empower us. But then in the next breath, he says it’s gonna be a new, more powerful way to keep us addicted to social media.
Wow, thanks for saying the quiet part out loud, buddy. Can’t wait to be enslaved to your new AI slop-telligent god. I have no mouth, yet I must scroll.
The other thing these guys say they’re investing all this money for is to replace human workers. Dario Amodei, who sounds like the manager of an illy espresso shop at the airport, but no, he’s the CEO of Anthropic, expects that AI will wipe out half of entry-level white-collar jobs in the next half decade. Now, I just want to take a second to point out, I have yet to understand how that’s a good thing.
Like, why is that his pitch? Why is that a bright side to anyone? Is it even good for the shareholders? Like, I know capitalists love to fire people, but if no one has any income, then no one can buy the s*** you’re making. But whatever, that’s what they say their aim is. They’re going to build super-intelligent AI, fire everyone, and then hook us all on their goddamn slop.
The question is, can they do it? After all, you’ve been wrong before, Mark! And, you know, I really think they actually might not be able to do it. Because here’s the really fascinating thing. The main impediment to building super-intelligent AI might not be the technology, it might be the money.
Because to pay for all the processing infrastructure they need to build this brave, screwed-up world, Bain estimates AI companies will have to earn $2 trillion annually by 2030. According to the Wall Street Journal, that is more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia. I’m going to say that again.
They have less than five years to start earning more annual revenue than the six largest tech companies combined. And right now, they are f***ing failing to do so. Like, Amazon f***ing killed Walmart.
They’ve got to kill Amazon, Apple, and Microsoft? They’re not going to f***ing do it! I mean, OpenAI’s revenues are growing, but they were only $13 billion this year. That’s nothing, buddy! And Bain estimates that AI revenue is going to be $800 billion short of where it needs to be in 2030. So right now, we have monumental private investments propping up the economy in the hope that they’re going to make a super-powerful technology that is guaranteed to make the world sh***ier if it works, but currently, it looks like it has no hope of paying out.
We have a word for this, a word for an economic cycle in which business investments and valuations are not supported by underlying value or revenue. It’s called a bubble. And the thing about bubbles, as anyone who has played the classic app game Pimple Popper Lite can attest, is that they tend to pop, and when they do, it’s not pretty.
Like, okay, the theory of the valuation, the reason that OpenAI currently has the highest valuation of any company ever, is that AI is going to unlock all those trillions in value by transforming the nature of work. But you know, if AI were good at transforming work, I think it would have radically transformed some work already, and it has not. A number of studies have come out showing that AI doesn’t actually make workers more productive.
Actually, it might make them less productive. An MIT report found that 95% of attempts by companies to use LLMs did not turn a profit, and a University of Chicago study on the real-world effects of AI chatbots on 7,000 Danish workplaces found, along the same lines, minimal effects. Now, when you make this argument, when you say that we’re not seeing really any productivity gain from AI at all currently, the AI boosters always say the same thing.
They say, yeah, but it’s getting better. It’s getting better so fast. And it’s going to get better forever.
It’s going to get better fast forever, better fast forever. But what if that’s not f***ing true? What if? Because, you know, we actually already have evidence that these models won’t get better forever. See, the logic behind pouring all this money into AI investment is that more computing power and more data equals better models, which brings us closer to developing superintelligent AI.
But right now, even though the models are getting more expensive and bigger, they’re not getting that much better. Recent releases like GPT-5, Elon Musk’s Grok 4, and Meta’s Llama have all failed to live up to the hype. We’ve been promised superintelligence, but what we’re getting is somewhat-better-in-very-specific-use-cases intelligence.
It turns out that just making large language models bigger and more expensive isn’t going to get us anything approaching superintelligence. LLMs are a specific technology that has specific uses and limitations. If all you’ve built is a car, pouring a lot of money into turbocharging your car isn’t going to let you fly eventually.
For that, you need a f***ing plane. I mean, even actual AI experts are getting wary of these claims of coming-soon superintelligence that the CEOs keep hyping. In a survey, three quarters of researchers responded that it was unlikely or very unlikely that current approaches could make AI smarter than people.
So if scaling doesn’t work anymore and tech companies don’t radically change their approach, well, it looks like every new model is just another step into a deeper, shinier trillion-dollar ditch. Another sign that we’re in a bubble is that the investors have lost their goddamn common sense and are throwing money at literally the flimsiest AI companies in existence. Take this story.
When former OpenAI executive Mira Murati arrived at a pitch meeting for her company Thinking Machines, she didn’t demo a product. She didn’t even have a potential product to describe. According to someone there, quote, she was like, so we’re doing an AI company with the best AI people, but we can’t answer any questions about it.
Despite that, her nothing company has raised two billion dollars, the largest round of seed funding ever. Hey guys, I also have an AI company that I can’t say anything about. Can I have two billion dollars? It’s called AIdam.
AIdam, eh? It’s got AI in the fucking name. Give me the money, motherfuckers! You know, astute Wall Street investors in their right minds don’t typically shell out billions of dollars on a vague promise. But hey, say the word AI to them and these dumbasses can’t keep their wallets in their pants.
I mean, there is so much dumb money sloshing around AI right now that the AI companies themselves are just flagrantly wasting it. Mark Zuckerberg is offering AI researchers pay packages of up to 400 million dollars over four years. I mean, that’s more than fucking Shohei Ohtani makes, but instead of hitting home runs, these guys are writing software that might never become profitable, and you don’t even get to eat a hot dog while you watch them do it.
Or let’s take another metaphor. The AI market is so frothy that the billionaires are now buying and selling AI programmers like they’re Pokemon cards, but these guys aren’t shiny Charizards, and once the bubble pops, they’ll be useless. That is why, even as we’re seeing truly bonkers money flying around this industry, we’re also seeing corporate attempts to hide spending that they know is unjustifiable.
The biggest AI companies are using abracadabra accounting to hide how much they’re spending on AI and to inflate their profits so that their investors don’t freak out. Hey, it’s not fraud, but it’s close. Now, you know, another response to the accusation that AI is a bubble might be that it’s a long-term investment, like the railroads were back in the 19th century.
Back then, companies poured money into this new infrastructure, spending way more as a percentage of GDP than we’re seeing today with AI, actually. But here’s the difference. Railroads last decades, and AI infrastructure does not.
It relies on having the fastest, most up-to-date processors, and as any gamer who’s bought a new graphics card knows, that sh** has a shelf life. All those fancy new processors they’re spending billions on will need to be replaced in the next 5 or 10 years. So these companies might spend all this money, then find they can’t make a profit, and be left with hundreds of billions of dollars worth of junk graphics cards that will either need to be scrapped or, I don’t know, shipped to developing countries so poor kids in Super Bowl t-shirts can play Fortnite.
But, you know, if you need any more proof that AI is a bubble, well, just listen to the people in charge of it, because they literally say that it’s a bubble. Mark Zuckerberg called AI collapse a definite possibility, and Sam Altman said, Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Even the f***ing investors themselves agree.
So why are they going along with it anyway? Well, maybe it’s because they understand that destructive bubbles like this are just a basic feature of capitalism, and they expect to profit anyway. See, technological advances often come with a torrent of stupid cash, much of which is lost. A study which examined 51 innovations between 1825 and 2000 found that 37 were accompanied by bubbles.
And this AI bubble is the biggest one yet. It is much larger than the dot-com bubble of 25 years ago. And according to tech investor Roger McNamee, this bubble is bigger than all of those other tech bubbles combined.
So wonderful. The big tech investors know that they’re building a bubble, they know it’s going to pop, and they know that it is going to be disastrous. But here’s the secret.
They just don’t think they are going to be the ones to lose money. Because what happens when that bubble pops? Well, the tech companies will be fine. They’re spending their own cash on this, and they are such a huge piece of the total market, they’re not in any danger.
The broader banking system, too, won’t fall apart. And, you know, if OpenAI crumbles, I think Sam is going to be fine. He’s set for life, right? After all, you don’t trick other people into pouring billions into your hole without keeping a little taste for yourself, do you, Sammy? No.
You know who’s really going to suffer when the bubble bursts? It’s going to be regular people. 62% of Americans own stocks in some form or another, whether that’s their pension plan, their 401k, or something else they might not even know about. And stocks make up a third of the entire net worth of American households.
That’s retirement funds. That’s college funds. If this bubble bursts, it’s those people, i.e., most Americans, who will lose.
And even if you don’t have stocks, well, if this bubble pops, it will radiate through the entire economy. Spending will plummet, jobs will be lost, real people will be hurt, just not the people who caused the bubble in the first place. They will profit.
The tech investors know that they’re shoving dollars into a bubble. They know that they have no plan. They know that money will be lost.
And they know that real people will be hurt. But they can’t let us know that. Instead, they have to keep the hype cycle going so that all of us keep talking about AI, keep thinking that the next advancement is right around the corner.
And that is why, instead of the real advanced superhuman AI that changes the world they’ve been promising, we get Sora. Sam Altman needs the appearance of AI advancement, and he needs constant press to fuel his grift. And a social media slop app that violates copyright and allows people to humiliate dead celebrities is f***ing perfect for that.
Sora doesn’t make money. It sure as hell doesn’t cure cancer. Hell, even if it beats TikTok, it wouldn’t make nearly enough revenue to be a drop in the bucket because they lose money on every f***ing video.
All Sora does is get Altman attention, which buys him time to suck up more investor dollars and keep the scam going a little while longer. But the money can’t last forever. And, you know, I kind of expect that as these companies struggle to make an AI that actually does anything worthwhile like they told us they were going to, we’re going to see more Soras, more flailing, desperate attempts to make something, anything that keeps the party going with all this cash.
Actually, we already are. Meta released its own TikTok AI knockoff, and I’m sure there’s more to come. Now, look, I could be wrong about all this, but, you know, I would say just ask yourself this.
If they really are planning something better, where the f*** is it? They’re building the most expensive infrastructure project in recent history. And all they have released is a psychologically abusive Google replacement and a new version of TikTok where you watch slop all day. That’s it.
Those are the products that we have actually gotten. So I’ve made this point before, but the way to fight back against these people is to use our human brains, to stop listening to their thought experiments and sci-fi stories and look at what they are actually building and what they have actually made. Altman is not a super genius.
He’s a super salesman. And right now, he is not building a super intelligent AI. He’s building a disinformation factory and social media app.
And you can use your human brain to ask yourself why and how he benefits. Hey, you know what? Let’s ask Sam himself. I only built this harmful, money-losing app to trick investors into shoveling more money into my hole.
Couldn’t have said it better myself. And that clip is real, by the way.