I think the AI industry needs intelligent skeptics that keep the hype in check and ground us in reality.
But Ed Zitron is not it. Here's an example [1] of him fumbling simple arithmetic. He's also perpetually bearish, with no consistent principles behind his message.
This is what he wrote in 2024 [2]
> You can fight with me on semantics, on claiming valuations are high and how many users ChatGPT has, but look at the products and tell me any of this is really the future.
I think the industry really needs someone better with principles.
I'm firmly on the skeptic side of the AI skeptic/booster divide, but I wish we had better mouthpieces on the skeptic side. I get the feeling that Zitron is more concerned with getting his newsletter numbers up than anything else.
His articles conflate quantity with quality. An aggressive edit with a more coherent structure would sharpen the message and sound less like stream-of-consciousness rambling. Advertising his newsletter as "over 7000 words" is like bragging about LOC: the number is impressive, but it says nothing about whether that length is necessary or useful.
Unfortunately though I can't really find anyone else looking at this same information, so for now I have to wade through these newsletters to pick the gold from the shit
Ed can come across as agitated and shrill, and I never stop picturing him as exactly like Jude Law's character in Contagion. But. He's still an important counterpoint to the unexamined mainstream junk, which says more about the world than about him or his style. As we've seen play out in other areas of discourse, the middle shrinks, we're forced into a dialectical tug-of-war between absurdly polarized extremes, and it all comes to crisis. We might rediscover caution, epistemic humility, compromise and middle ground, but only after rising absurdity and then some kind of punishment.
I personally think the fact that it's an indie reporter like Ed Zitron diving into this says a lot about the state of tech media broadly. Reminds me a bit of how sports journalism works nowadays: nobody wants to call out industry leaders for fear of losing access, because losing access is career suicide.
False. Current mainstream media outlets are far more anti-technology than pro. It is unclear why you think journalists fear losing access when the status quo is to oppose tech.
Respectfully disagree. Frontier lab CEOs have had incredible media access the last 4 years, making huge claims to the press without a lot of pushback or difficult questions. There's obviously no way to give some quantifiable metric on it, and reasonable people can disagree.
But Zitron frequently points out the inconsistencies in these data center deals, noting that companies like OpenAI and Anthropic make these announcements without a formal contract in place, companies like Oracle get a stock bump off of the news, and then we all find out from the mainstream press months later that the deal was never done and in fact may not even be happening anymore.
That's not really behavior you'd expect to see from a vehemently anti-tech press. They're happily making news to boost stock prices short-term, essentially acting as mouthpieces for large shareholders.
I've listened to his podcast a couple of times. I never use this term but for Ed I make an exception: he is a hater. He is making a living out of being a hater.
I have been burned in the past by siding with people who give kitchen-sink type of arguments. So I would not bet my money on things he says.
That being said. Since COVID there seems to be an ongoing and worsening DOS attack. Everybody who has access to media is lying. And we know they are lying! The craziest part is not only that they are getting away with it (so far at least), but that this is becoming embraced, standardized and legalized. Which is fucking crazy.
I like listening to Ed's interviews, mainly because he is DOSing back.
I've had to reread the first tweet a bunch, but I don't think Ed is wrong there.
As far as I can tell, in February Anthropic projected their 2026+ annual revenue at $14 billion, based on a month-long period. If you add up the numbers presented for those three years, you end up at $6 billion of revenue.
But a month later, in a court document, they only mention "exceeding $5 billion dollars". For the entire time the company has been in business.
Additionally, the month-long period with ~$1B would account for a fifth of that total revenue. That's eyebrow-raising.
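The arithmetic being questioned here is the gap between an annualized run-rate and cumulative revenue. A quick sketch using the comment's own (unverified) figures:

```python
# Figures taken from the comment above; none are audited numbers.
projected_annual = 14e9                   # February projection for 2026+ revenue
monthly_revenue = projected_annual / 12   # implies ~$1.17B in the measured month

# An annualized run-rate just extrapolates one month to twelve:
run_rate = monthly_revenue * 12

# Cumulative lifetime revenue is a different quantity entirely. A company
# can have earned ~$5-6B ever while its latest month annualizes to $14B.
lifetime = 6e9

print(f"implied monthly revenue: ${monthly_revenue / 1e9:.2f}B")
print(f"one month as a share of lifetime revenue: {monthly_revenue / lifetime:.0%}")
```

That last figure, roughly a fifth, is the eyebrow-raising ratio: it is only consistent if revenue grew very steeply very recently.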
I'm at a loss as to how some of these projects got funded in the first place. Anyone funding these should have had the perspective to see that there isn't enough power for them. Anyone funding them should have had the perspective to see that by the time power could come online for even a significant fraction of them, the depreciation and interest costs should have murdered the company trying to do it, especially if their solution to that problem is the oh-so-21st century solution of "solving" the problem of losing money by levering up. It does no good to go out of business entirely in 2027 to make the phat buxx in 2030, which seems to be the best case scenario for this space as a whole.
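To put rough numbers on that depreciation-and-interest squeeze (every figure below is an invented assumption, not any company's actuals):

```python
# Toy model: hardware committed up front, revenue delayed by power shortages.
capex = 10e9             # assumed GPU + datacenter spend
dep_years = 4            # GPUs are commonly depreciated over roughly 4-6 years
interest_rate = 0.08     # assumed cost of the debt if the build is levered up

annual_depreciation = capex / dep_years   # value burning off every year
annual_interest = capex * interest_rate   # carry cost of the leverage

# If power delays push first meaningful revenue out by `delay_years`,
# the hardware depreciates the whole time while earning nothing:
delay_years = 2
dead_weight = (annual_depreciation + annual_interest) * delay_years
print(f"cost of waiting {delay_years} years: ${dead_weight / 1e9:.1f}B on a ${capex / 1e9:.0f}B build")
```

Under these made-up numbers, two idle years burn about a third of the original build cost before a single token is sold.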
The other question I have is... who exactly is doing all of 1. Using AI right now 2. Making substantial money on it or getting real value and 3. Capacity constrained? Who is actually going to productively soak up all this capacity? It seems to me that bringing all this stuff online can't really make things much cheaper than they are now because the fixed costs aren't going anywhere, and if anything, trying to jam so many projects through all at once just raises those fixed costs even higher. It's not like they triple data center capacity (and increasing AI capacity by, what, 10x? 20x?), stick them full of AI systems, and into that 10x+ greater AI capacity they can sell it at the prices they are now. Higher capacity would crash the selling price but the costs would be as high or higher than now.
I am at a complete loss as to how the numbers are supposed to work here. You can't build a company in 2026 on the economy and tech infrastructure of 2036 any more than it worked to build a company in 1999 on the economy and tech infrastructure of 2019, no matter how rosy the projections look when they conveniently ignore the fact that the company passes through "death" in a year and a half. Everything promised in 1999 happened, but trying to artificially accelerate it onto Wall Street's timeline burned money by the billions. I'm sure 2036 will have lots of AI in it, but you can't just spend money to bring it forward 10 years by sheer force of will. It has to happen at its own pace.
> The other question I have is... who exactly is doing all of 1. Using AI right now 2. Making substantial money on it or getting real value and 3. Capacity constrained?
Almost all enterprise users, for one. At least from what I have seen, it is a massive productivity boost for coding and general research. If the costs were ~4x lower, we would be able to do much, much more with them. Building datacenters will reduce the cost because increasing supply would reduce the cost.
> It's not like they triple data center capacity (and increasing AI capacity by, what, 10x? 20x?), stick them full of AI systems, and into that 10x+ greater AI capacity they can sell it at the prices they are now. Higher capacity would crash the selling price but the costs would be as high or higher than now.
This is false. Part of the costs are unit costs which are really high margin. I think the margins are around 50% to 60%. By increasing the capacity, they are bound to make even more profit.
But the other part of the price reflects the current lack of capacity.
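The disagreement in this subthread can be made concrete with a toy profit model. All numbers below are invented; the point is only that "high unit margin" and "more capacity means more profit" both depend on the selling price holding up:

```python
# Toy model of datacenter economics; every number here is made up.
def profit(capacity, price, unit_cost, fixed_costs):
    """Gross profit: per-unit margin times volume, minus fixed costs."""
    return capacity * (price - unit_cost) - fixed_costs

# Today: constrained capacity, ~55% unit margin.
today = profit(capacity=1.0, price=1.00, unit_cost=0.45, fixed_costs=0.4)

# The optimistic view: 10x capacity sold at today's price.
optimist = profit(capacity=10.0, price=1.00, unit_cost=0.45, fixed_costs=2.0)

# The pessimistic view: 10x capacity crashes the price while fixed costs rise.
pessimist = profit(capacity=10.0, price=0.50, unit_cost=0.45, fixed_costs=2.0)

print(round(today, 2), round(optimist, 2), round(pessimist, 2))
```

Both positions are internally consistent; they just assume different demand curves at 10x supply.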
"Building datacenters will reduce the cost because increasing supply would reduce the cost."
That's great for us users but I'm talking from the point of view of the people trying to make money on the data centers.
"This is false. Part of the costs are unit costs which are really high margin."
Can you explain how everybody throwing their money at nVidia lowers the costs? When they are already apparently at max capacity?
Everybody trying to build a data center at once raises the cost of every data center. Everyone competing for power has already raised power prices, and we've barely begun bringing this stuff online. Everyone demanding multiples of what nVidia is producing means nVidia isn't going to reduce prices any time soon.
Your use of "even more profit" also implies that you think that the AI world is making lots of money? nVidia is making lots of money. To a first approximation, everybody else involved has lost billions. Maybe not Apple. But everyone else you can name is deep in the negative on AI.
> That's great for us users but I'm talking from the point of view of the people trying to make money on the data centers.
Why wouldn't they make money if they are the ones the money is being thrown at?
> Can you explain how everybody throwing their money at nVidia lowers the costs? When they are already apparently at max capacity?
Increasing supply lowers the cost, I'm unsure which part of this is surprising.
> Your use of "even more profit" also implies that you think that the AI world is making lots of money? nVidia is making lots of money. To a first approximation, everybody else involved has lost billions. Maybe not Apple. But everyone else you can name is deep in the negative on AI.
The companies using AI are making money from it. OpenAI will make money in the future but is losing it now because of R&D and training costs.
In xAI's case, they've gotten gas turbines installed on site to make up the electricity generation shortfall. It's unclear exactly how long that short-term solution is going to be there, but probably quite a while.
It's probably more correct to say that there are some people who project that 240GW of additional power will be required by data centers in the near future.
Yes, that number is absurd, and data centers will certainly need to make do with less, regardless of actual requirements.
That also fails to take efficiency and cost optimization into account.
Just parsing out salutations and please/thank you from AI requests reduces utilization and that’s not really even intense optimization.
Earlier today on the radio I heard Houston TX was 20 GW at peak load.
Texas is doing its best to build as many datacenters & power plants as possible. They were describing it as "Texas will have more datacenters than anyplace else in the world." This was public radio, but everybody's taking a hit on the ol' AI pipe nowadays.
I’ve never thought of it in terms of “how many new metropolises are being added”, but it seems like a decent unit of measure. If we use the average of 6 GW, we’re adding essentially 40 NYCs.
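The conversion is simple enough to check, taking the thread's figures at face value (the 240 GW projection, an assumed 6 GW average for NYC, and the 20 GW Houston peak from the radio):

```python
# All inputs are the thread's own rough figures, not vetted data.
projected_dc_demand_gw = 240   # projected additional data center demand
nyc_average_load_gw = 6        # the commenter's assumed NYC average load
houston_peak_load_gw = 20      # Houston's peak load, per the radio figure

print(projected_dc_demand_gw / nyc_average_load_gw)    # "NYCs" being added
print(projected_dc_demand_gw / houston_peak_load_gw)   # "Houstons" being added
```

So 40 NYCs, or 12 Houstons, with the caveat that comparing an average load to a peak load mixes units somewhat.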
Tractors didn't get it because, about the time they became useful for most farmers, WWII was pushing the need for fewer men on the farm so they could go to war. There were tractors before then, but the earlier ones had big negatives unless your farm was much larger than most were at the time.
I don't understand the question. (maybe the question mark is a mistake?) Assuming it's a statement, my guess is the rapacious capitalists would disagree with that claim.
Comment section isn’t nuanced enough to have this conversation and I am on a phone, but that is the way that the industry slandered the luddites as the parent claims.
The truth was that the machines produced worse quality goods and were less safe, not that people couldn’t skill up to use them and not that there wasn’t enough demand to keep everyone employed. It was quality and safety.
You should look into the issue further. I had your opinion too, until I soberly looked at what the Luddites were really arguing for: it wasn’t the end of looms, it was quality standards and fair advertising to consumers.
Every party in the dispute was acting out of economic self-interest: the manufacturers wanted cheaper labour and higher margins, Parliament wanted industrial growth.
Only the workers are getting framed as though self-interest invalidates their position. The Luddites’ arguments about quality standards and consumer fraud were correct on the merits regardless of their motivation for raising them.
“More affordable clothes” that fall apart in a month aren’t more affordable.
And the choice was never mechanisation versus no mechanisation… it was whether the transition would include basic labour and quality standards. With regulation, you’d still have got mechanisation and cheaper clothing in the end… just without the fraudulent goods and wage suppression. Framing it as “society versus a few jobs” is exactly the manufacturer’s argument from the 1810s, which is very effective propaganda reaching through centuries.
“After a few bumps”, mate, people were transported to penal colonies and fucking hanged for asking for quality standards and fair wages.
Parliament made frame-breaking a capital offence to protect manufacturer profits. Saying it all worked out eventually doesn’t justify the process, any more than cheap cotton justified the conditions under which it was produced. And frankly, look at modern fast fashion: cheap clothing that falls apart in weeks, produced under appalling conditions overseas. We’re still living with the consequences of the principle that cheapness trumps everything else.
Trying to keep all of labor's sweat as capitalist's own cash is bad actually.
Making clothing production more efficient by employing children in dangerous factories is bad actually (which is what happened in the original factories, and now in fast fashion).
Of course you would enjoy that when every single externality involved has conveniently been exported elsewhere and you have been handily trained over generations to accept piss-poor quality clothing as normal.
Perhaps in a couple of centuries when a tube of nutrient slurry is the standard meal, people will be equally proud of not spending 15% of their salary on food...if salaries even exist by then.
> Of course you would enjoy that when every single externality involved has conveniently been exported elsewhere and you have been handily trained over generations to accept piss-poor quality clothing as normal.
Lots of countries, like India and Pakistan, credit the clothing industry with raising their standard of living and economic prosperity.
Anyone can make the choice to spend a similarly large amount of their income on clothing the way people did 200 years ago. In fact, it will be even higher quality than people had access to since we have much more advanced materials and techniques than existed back then. But, almost no one does that. Maybe you consider it brainwashing, but I consider it people just making a rational economic choice.
And yes, I can see a world where, if tasteless nutrient slurry was essentially free and perfect nutrition for the body, then people would gladly consume that for most meals, and maybe splurge every now and then on an "old school" meal. I don't really see a problem with that.
> Anyone can make the choice to spend a similarly large amount of their income on clothing the way people did 200 years ago
You really can't. That price/quality point basically does not exist anymore
What's worse is that we have "designer brands" that charge the higher price point but are the exact same low quality as the lower price point stuff. Actual midrange quality just plain does not exist
Sure it does, you just need to get something custom/bespoke/made to measure.
Take your yearly clothing expenditure and multiply it by 10. Then, just like people 200 years ago, be content with 2 to 4 complete outfits. And stop buying clothes yearly; go to more of a 10+ year cycle, where you use your funds to mend clothes instead of replacing them.
Even if you only spend $300 on clothes per year, doing it the old-school way (10x the budget over a 10-year cycle, so $30,000 total) means you can spend about $15,000 on 2-4 outfits and save the other $15,000 for mending and cleaning over the next 10 years.
I guarantee you you can find a high quality custom outfit for $5000.
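The arithmetic behind those figures, spelled out (all inputs are the commenter's own assumptions):

```python
# The "old school" clothing budget from the comment above.
yearly_spend = 300          # assumed modern yearly clothing budget
multiplier = 10             # spend 10x, like people did 200 years ago
cycle_years = 10            # replace the wardrobe on a 10+ year cycle

total_budget = yearly_spend * multiplier * cycle_years  # dollars over the cycle
outfit_budget = total_budget / 2                        # half up front on outfits
upkeep_budget = total_budget / 2                        # half for mending/cleaning

outfits = 3                 # middle of the "2 to 4 complete outfits" range
print(outfit_budget / outfits)   # per-outfit budget, in line with the $5,000 claim
```

At three outfits, the per-outfit budget lands right on the $5,000 custom-outfit figure above.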
There used to be a social contract, but now there are so many people that the lack of work for the displaced is a real problem. The imbalance between the very small number of people with vast amounts of capital and the very large number of people with almost none is a societal dynamic that has existed before, and there is historical precedent for what follows. It's probably worth paying very close attention to what comes next if you are a very wealthy person pushing against all forms of wealth redistribution.
An analysis of datacenter commitments and GPU purchasing, viewed through how much power they will demand versus how much is available.
As someone who only has a passing interest, there isn't anything distilled enough in this article for me to comment on as the central point. Everyone seems to be reporting impossible numbers, and buying dramatically more hardware than they can install in a reasonable timeframe given the pace of the industry.
I have already written a comment here, apologies. However, I have something to say about Ed Zitron that is more than a hot take:
I believe that Ed Zitron plays a very important gadfly role in all of this.
However, if you look at his subreddit, it appears that he has created a following of 100% AI deniers. My gut makes me worry for them, but I wonder where the truth really lies.
For those of us involved with code, Sonnet 3.5 was a revelation, and Opus 4.5 scared the crap out of many, and converted some of us to believers in "the exponential."
Now, in other verifiable output fields like finance/spreadsheets in general, Claude is scaring more people.
I really do respect Ed, but I feel like his schtick might make too many people complacent, thinking that this is all fake. Also, I could be wrong.
I see another impassioned, fervent cry daily about how it’s all going to collapse like a house of cards (as if smart money doesn’t know it and some podcaster is the first to realize data centers take a long time to build).
But unless I missed something, I didn’t see him disclose any financial positions that would indicate him betting on the collapse he is so clearly calling for.
I think that should be required to take any of these articles seriously - if your portfolio doesn’t reflect your stated opinions, your stated opinions aren’t what you really believe.
I see him brag on Bluesky about how long these astroturf joints are. It had been a while since I'd tried to actually scroll through one, and it makes sense now: so much more space to spawn subscription promos. Better Offline indeed.
My current model for how AI will scale out is that we'll move through the following choke points:
AI chip makers -> Data center infra and construction -> regional power companies
Right now we're firmly in the "AI chip makers" part of the expansion, with everything else in the beginning stages. AI is useful, but whether it's hyped or not, it's hard to deny that not being able to build and power data centers will impact how this plays out.
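That choke-point sequence can be sketched as a pipeline whose throughput is set by its slowest stage; the stage names and capacities below are placeholders, not real data:

```python
# Effective buildout is bounded by the tightest of the three choke points.
stages = {
    "chip_makers": 100,        # currently the binding constraint
    "dc_construction": 140,    # looser today, tightens as chips ship
    "regional_power": 160,     # the expected long-run constraint
}

bottleneck = min(stages, key=stages.get)   # stage with the smallest capacity
effective_capacity = stages[bottleneck]
print(bottleneck, effective_capacity)
```

As each stage expands, the min() shifts to the next one, which is the "moving through choke points" dynamic described above.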
But I wouldn't hold my breath waiting for introspection from that camp. It seems that AI maximalists, like so many other players these days, see it as end-game time. There are no bounds or rules: pick a side, and go. And then eat the rest.
Sure, not everyone sees it this way. There are highly competent, human actors working in their joy toward a better way forward with all of it. But I don't think you'll find that spirit unbridled inside any profit-seeking corporation of any significant standing (though I would be happy to be proven wrong). If it existed there, it is being choked out by selfishness and survivalism.
And then there's Thiel and ilk waxing eschatological, adding a whole other layer to the scheme.
Railroads, e-commerce, and AI - all useful, all were (or may be) credit/stock bubbles. Railroads however have a much better depreciation schedule than GPUs.
He isn't arguing that AI is useless, only that Nvidia is propping up a massive financial house of cards and that all the giant numbers being tossed around are fantasies.
The “iPhone moment” wasn’t a result of one thing, but a collection of different bits that formed an obvious whole — one device that did a bunch of things really, really well.
LLMs have no such moment, nor do they have any one thing they do well, let alone really well. LLMs are famous not for their efficacy but for their inconsistency, with even ardent AI cultists warning people not to trust their output.
The obvious answer to the power problem will be to have the AI design massively parallel exercise bicycle/electrical generator plants that can be powered by all of the people laid off by the AI.
The article takes an odd turn in the second half and seems to veer from a very interesting deep-dive into how a lot of backlogged US data center production may correlate with GPU "slippage" via questionable resellers and GPU rental outfits to China
Not sure it qualifies as an “LLM prediction,” but he was adamant that Nvidia would not come through with the $100 billion funding round, and sure enough they did not.
To Ed's credit, he's coming with real numbers. Much of his reporting is based on quarterly earnings reports, press releases, correlating reports from outlets like The Information, etc.
Contrast that with hyperscalers no longer reporting AI revenue separately, making bold claims about long term growth with no evidence to back it up, and a tech media apparatus that has largely avoided asking founders hard questions.
I know just as well as you how this is all going to turn (which is to say, nobody really knows). But I'll take the person doing the math over the person trying to hide numbers all day long.
See this [1] for how he comes up with numbers. I think he says a lot of things without understanding them, and not many serious participants in the area take him seriously.
Feel free to point out where the numbers are wrong in this article. If you're right about his ability to math, then you'll have no problem identifying concrete aspects of this piece that are wrong.
This is the crutch of everyone unwilling to actually put their money on the line and bet.
“I think AI is a bubble and it’s going to collapse”
“Ok, then there are ways to bet against it if you’re really sure”
“Oh I’m sure I’m right, it’s just that the market might take a long time to realize I’m right”
Come on… there was no subtlety in the article at all. He said it’s all a house of cards, it’s all going to collapse, it’s all a grift… surely someone that certain should be willing to put something in…
"If you keep predicting market crash every single day of your life you will be the greatest predictor in the history of mankind because markets do eventually crash a little"
Anyone with a single drop of common sense knows that Sam Altman is a grifter. If you don't see that, you are quite simply not bothering to apply critical thinking.
His entire role in it is the grifting part (raising money based on BS). His job is, and has been at this and other companies, grifting. You loop him in if you need a hype-man who'll say any crazy thing to bring in a buck.
[1] https://x.com/binarybits/status/2034376359909130249
[2] https://www.wheresyoured.at/never-forget-what-theyve-done/
Edit: here's another example https://x.com/blader/status/2031216372169191678
I get that people make mistakes, but it really does seem like there are no principles behind the guy. It seems like he can write whatever.
> It seems like he can write whatever.
Not incidentally, he's a PR guy by trade--who still runs his own PR firm! And that firm has done PR for AI companies!
https://archive.ph/2025.10.27-195752/https://www.wired.com/s...
I briefly listened to one of his podcasts, but the over-the-top, worst-interpretation-possible coverage was just... bleh.
You make good points at the end, but I don't see why it's acceptable to be unprincipled about it.
I disagree. I find popular media to be grossly negligent in their lack of skepticism. They love regurgitating pie-in-the-sky claims for clicks.
Empirically false, based on some quick research:
> Overall, this sample skews anti-tech: 62 anti-tech, 26 neutral, 12 pro-tech.
What is true is that they love regurgitating how bad AI is and the harms that come with it.
https://chatgpt.com/share/69c2e910-41a0-800b-ac8b-f7b93c005c...
All I see is:
Can't load shared conversation 69c2e910-41a0-800b-ac8b-f7b93c005c
fixed.
I also disagree with you. Here's my proof: I sampled 100 articles and classified them as pro or anti tech.
> Overall, this sample skews anti-tech: 62 anti-tech, 26 neutral, 12 pro-tech.
It is very clear that mainstream outlets bias anti-tech rather than pro. I know this is not the most foolproof method, but it's at least better than vibes. If you disagree, please do an unbiased sampling and show me otherwise.
https://chatgpt.com/share/69c2e910-41a0-800b-ac8b-f7b93c005c...
I think you should focus on the claims in this article. There are plenty of principles espoused within.
Smearing his character without directly addressing those just stinks the place up.
There's throwing mud and then there's pointing out the mud already flung all over the walls
I meant this one for arithmetic https://x.com/binarybits/status/2034378777745141799
I'm at a loss as to how some of these projects got funded in the first place. Anyone funding these should have had the perspective to see that there isn't enough power for them. Anyone funding them should have had the perspective to see that by the time power could come online for even a significant fraction of them, the depreciation and interest costs should have murdered the company trying to do it, especially if their solution to that problem is the oh-so-21st century solution of "solving" the problem of losing money by levering up. It does no good to go out of business entirely in 2027 to make the phat buxx in 2030, which seems to be the best case scenario for this space as a whole.
The other question I have is... who exactly is doing all of 1. Using AI right now 2. Making substantial money on it or getting real value and 3. Capacity constrained? Who is actually going to productively soak up all this capacity? It seems to me that bringing all this stuff online can't really make things much cheaper than they are now because the fixed costs aren't going anywhere, and if anything, trying to jam so many projects through all at once just raises those fixed costs even higher. It's not like they triple data center capacity (and increasing AI capacity by, what, 10x? 20x?), stick them full of AI systems, and into that 10x+ greater AI capacity they can sell it at the prices they are now. Higher capacity would crash the selling price but the costs would be as high or higher than now.
I am at a complete loss as to how the numbers are supposed to work here. You can't build a company in 2026 on the economy and tech infrastructure of 2036 anymore than it worked to build a company in 1999 on the economy and tech infrastructure of 2019, no matter how rosy the numbers look on the projections based on conveniently ignoring the fact the company passes through "death" in a year and half. Everything promised in 1999 happened, but trying to artificially accelerate it onto Wall Street's time line burned money by the billions. I'm sure 2036 will have lots of AI in it, but you can't just spend money to bring it forward 10 years by sheer force of will. It has to happen at its own pace.
> The other question I have is... who exactly is doing all of 1. Using AI right now 2. Making substantial money on it or getting real value and 3. Capacity constrained?
Almost all enterprise users, for one. At least from what I have seen, it is a massive productivity boost for coding and general research. If the costs were ~4x lower, we would be able to do much, much more with them. Building data centers will reduce the cost by increasing supply.
> It's not like they can triple data center capacity (increasing AI capacity by, what, 10x? 20x?), stick the data centers full of AI systems, and sell that 10x+ greater AI capacity at the prices they charge now. Higher capacity would crash the selling price but the costs would be as high or higher than now.
This is false. Part of the price reflects unit costs, which carry really high margins. I think the margins are around 50% to 60%. By increasing capacity, they are bound to make even more profit.
The other part of the price reflects the lack of capacity.
"Building data centers will reduce the cost by increasing supply."
That's great for us users but I'm talking from the point of view of the people trying to make money on the data centers.
"This is false. Part of the price reflects unit costs, which carry really high margins."
Can you explain how everybody throwing their money at Nvidia lowers the costs? When they are already apparently at max capacity?
Everybody trying to build a data center at once raises the cost of data centers. Everyone competing for power has already raised power prices, and we've barely begun bringing this stuff online. Everyone demanding multiples of what Nvidia is producing means Nvidia isn't going to reduce prices any time soon.
Your use of "even more profit" also implies that you think the AI world is making lots of money. Nvidia is making lots of money. To a first approximation, everybody else involved has lost billions. Maybe not Apple. But everyone else you can name is deep in the negative on AI.
> That's great for us users but I'm talking from the point of view of the people trying to make money on the data centers.
Why wouldn't they make money if they're the ones the money is being thrown at?
> Can you explain how everybody throwing their money at Nvidia lowers the costs? When they are already apparently at max capacity?
Increasing supply lowers the price; I'm unsure which part of this is surprising.
> Your use of "even more profit" also implies that you think the AI world is making lots of money. Nvidia is making lots of money. To a first approximation, everybody else involved has lost billions. Maybe not Apple. But everyone else you can name is deep in the negative on AI.
The companies using AI are making money out of it. OpenAI will make money in the future but is losing it now because of R&D and training costs.
In xAI's case, they've installed gas turbines on site to make up the electricity generation shortfall. It's unclear exactly how long that short-term solution will be there, but probably quite a while.
Someone please correct my math.
The article says 240 Gigawatts of capacity is allocated for AI datacenters.
New York City draws about 10 Gigawatts in the hottest months of the year due to extra load from AC use.
So am I understanding correctly that these people want to foist upon the power grid 24 NYCs?
It's probably more correct to say that there are some people who project that 240GW of additional power will be required by data centers in the near future.
Yes, that number is absurd, and data centers will certainly need to make do with less, regardless of actual requirements.
That also fails to take efficiency and cost optimization into account. Just stripping salutations and please/thank-yous out of AI requests reduces utilization, and that's not really even intense optimization.
Earlier today on the radio I heard Houston TX was 20 GW at peak load.
Texas is [d]oing its best to build as many datacenters & power plants as possible. They were describing it as "Texas will have more datacenters than anyplace else in the world." This was public radio, but everybody's taking a hit on the ol' AI pipe nowadays.
Which, running 24/365, would correspond to about half of the total electricity consumption of the USA...
At a cost of $0.01 per kWh that would be $21 billion a year... and electricity generally is not that cheap, everything considered...
I’ve never thought of it in terms of “how many new metropolises are being added”, but it seems like a decent unit of measure. If we use NYC’s average draw of 6 GW, we’re essentially adding 40 NYCs.
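A quick back-of-envelope check of the figures in this subthread (assuming 240 GW of projected AI data center demand, ~10 GW NYC peak load, ~6 GW NYC average load, and roughly 4,000 TWh of annual US electricity consumption):

```python
# Assumed inputs, taken from the comments above.
projected_gw = 240     # projected AI data center demand
nyc_peak_gw = 10       # NYC peak summer load
nyc_avg_gw = 6         # NYC average load
us_annual_twh = 4000   # approx. total US electricity consumption per year

nycs_at_peak = projected_gw / nyc_peak_gw   # 24 "peak NYCs"
nycs_at_avg = projected_gw / nyc_avg_gw     # 40 "average NYCs"

# Running 24/365: GW * hours per year -> TWh per year.
annual_twh = projected_gw * 8760 / 1000     # ~2,102 TWh
share_of_us = annual_twh / us_annual_twh    # ~0.53, roughly half

# At $0.01/kWh (1 TWh = 1e9 kWh), annual cost in billions of dollars.
cost_billion = annual_twh * 1e9 * 0.01 / 1e9

print(nycs_at_peak, nycs_at_avg, round(annual_twh), round(share_of_us, 2),
      round(cost_billion, 1))  # 24.0 40.0 2102 0.53 21.0
```

So the "24 NYCs", "40 NYCs", "about half of US consumption", and "$21 billion" figures are all internally consistent, given those assumptions.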
Something I heard a person say recently:
> Isn't it weird how there is no huge industry pushback on all this new AI datacenter power need, as there was about electrifying vehicles?
Almost as if that "industry pushback" argument was not made in good faith? I wonder who would be against electric vehicles?
> I wonder who would be against electric vehicles?
The fossil fuel industry ?
And car companies, who made the classic “holding back the tide” strategic blunder and basically created Tesla, BYD, and others.
Turns out the market routes right around slow movers.
What!? As if to suggest! An assertion most improper!
Was there a big industry push back against looms, tractors, computers, or the internet?
Tractors didn't get pushback because, around the time they became useful for most farmers, WWII was creating a need for fewer men on the farm so they could go to war. There were tractors before then, but the earlier ones had big negatives unless your farm was much larger than most were at the time.
Famously against looms, yes. That's how we got the term Luddite, which rapacious capitalists redefined as a negative.
I was going to cite that too but it's not exactly industry pushback, it's labor pushback.
EV on the other hand does have some obvious industrial adversaries.
At that time, the laborers WERE the industry?
I don't understand the question. (maybe the question mark is a mistake?) Assuming it's a statement, my guess is the rapacious capitalists would disagree with that claim.
Trying to prevent goods and services from being produced more efficiently is bad actually.
Comment section isn’t nuanced enough to have this conversation and I am on a phone, but that is the way that the industry slandered the luddites as the parent claims.
The truth was that the machines produced worse quality goods and were less safe, not that people couldn’t skill up to use them and not that there wasn’t enough demand to keep everyone employed. It was quality and safety.
You should look into the issue further, because I had your opinion too until I soberly looked at what the luddites really were arguing for, it wasn’t the end of looms, it was quality standards and fair advertising to consumers.
The mainstream conclusion is that the luddites were mainly speaking for their own economic safety, along with other things.
Every party in the dispute was acting out of economic self-interest: the manufacturers wanted cheaper labour and higher margins, Parliament wanted industrial growth.
Only the workers are getting framed as though self-interest invalidates their position. The Luddites’ arguments about quality standards and consumer fraud were correct on the merits regardless of their motivation for raising them.
Everyone's interests should not be viewed as the same. More affordable clothes is more important for society than a few people's jobs.
“More affordable clothes” that fall apart in a month aren’t more affordable.
And the choice was never mechanisation versus no mechanisation… it was whether the transition would include basic labour and quality standards. With regulation, you’d still have got mechanisation and cheaper clothing in the end… just without the fraudulent goods and wage suppression. Framing it as “society versus a few jobs” is exactly the manufacturer’s argument from the 1810s, which is very effective propaganda reaching through centuries.
To drive the point home even more clearly:
The clothes did get dramatically more affordable after adjusting for quality (after a few bumps).
“After a few bumps”, mate, people were transported to penal colonies and fucking hanged for asking for quality standards and fair wages.
Parliament made frame-breaking a capital offence to protect manufacturer profits. Saying it all worked out eventually doesn’t justify the process, any more than cheap cotton justified the conditions under which it was produced. And frankly, look at modern fast fashion: cheap clothing that falls apart in weeks, produced under appalling conditions overseas. We’re still living with the consequences of the principle that cheapness trumps everything else.
I don't condone killing obviously.
But on quality: I found this an interesting read https://marginalrevolution.com/marginalrevolution/2025/05/ha...
Trying to keep all of labor's sweat as capitalist's own cash is bad actually.
Making clothing more efficient by employing children in dangerous factories is bad actually (what happened in the original factories and now at fast fashion).
Given the absolute slop that passes as clothing nowadays, the Luddites had very good points actually.
Personally, I enjoy not spending 15% of my salary on clothing and textiles.
Of course you would enjoy that when every single externality involved has conveniently been exported elsewhere and you have been handily trained over generations to accept piss-poor quality clothing as normal.
Perhaps in a couple of centuries when a tube of nutrient slurry is the standard meal, people will be equally proud of not spending 15% of their salary on food...if salaries even exist by then.
> Of course you would enjoy that when every single externality involved has conveniently been exported elsewhere and you have been handily trained over generations to accept piss-poor quality clothing as normal.
Lots of countries credit the clothing industry with raising standards of living and economic prosperity. Like India and Pakistan.
Not even just piss-poor quality, much of our clothing is actually poisoning us with PFAS and microplastics.
Anyone can make the choice to spend a similarly large amount of their income on clothing the way people did 200 years ago. In fact, it will be even higher quality than people had access to since we have much more advanced materials and techniques than existed back then. But, almost no one does that. Maybe you consider it brainwashing, but I consider it people just making a rational economic choice.
And yes, I can see a world where, if tasteless nutrient slurry was essentially free and perfect nutrition for the body, then people would gladly consume that for most meals, and maybe splurge every now and then on an "old school" meal. I don't really see a problem with that.
> Anyone can make the choice to spend a similarly large amount of their income on clothing the way people did 200 years ago
You really can't. That price/quality point basically does not exist anymore
What's worse is that we have "designer brands" that charge the higher price point but are the exact same low quality as the lower price point stuff. Actual midrange quality just plain does not exist
The simple reason is that there isn't a market for it. If there were, we would see it.
Sure it does, you just need to get something custom/bespoke/made to measure.
Take your yearly clothing expenditure and multiply it by 10. Then, just like people 200 years ago, be content with 2 to 4 complete outfits. Then stop buying clothes yearly and go to more of a 10+ year cycle, where you use your funds to mend clothes instead of replacing them.
Even if you only spend $300 on clothes per year, doing it the old school way means you can spend about $15,000 on 2-4 outfits and save the other $15,000 for mending and cleaning over the next 10 years.
I guarantee you can find a high quality custom outfit for $5000.
There used to be a social contract, but now there are so many people that the lack of work for the displaced is a real problem. The leverage gap between the very small number of people with vast amounts of capital and the large number of people with very little capital or leverage is a societal dynamic that has existed before in the world. There is historical precedent for this, and it's probably worth paying very close attention to what comes next if you are a very wealthy person pushing against all forms of wealth redistribution.
An analysis of datacenter commitments and GPU purchasing through how much power they will demand vs how much is available.
As someone who only has a passing interest, there isn't anything distilled enough in this article for me to comment on as the central point. Everyone seems to be reporting impossible numbers, and buying dramatically more hardware than they can install in a reasonable timeframe given the pace of the industry.
I have already written a comment here, apologies. However, I have something to add beyond a hot take, about Ed Zitron:
I believe that Ed Zitron plays a very important gadfly role in all of this. However, if you look at his subreddit, it appears that he has created a 100% AI-denier following. My gut makes me worry for them, but I wonder where the truth really lies.
For those of us involved with code, Sonnet 3.5 was a revelation, and Opus 4.5 scared the crap out of many, and converted some of us to believers in "the exponential."
Now, in other verifiable output fields like finance/spreadsheets in general, Claude is scaring more people.
I really do respect Ed, but I feel like his schtick might make too many people complacent, thinking that this is all fake. Also, I could be wrong.
> My gut makes me worry for them, but I wonder where the truth really lies.
Why worry?
Also I'm pretty sure I have seen a similar comment before
I think these schizophrenics are stuck in a loop.
Sigh.
I see another impassioned, fervent cry daily about how it’s all going to collapse like a house of cards (as if smart money doesn’t know it and some podcaster is the first to realize data centers take a long time to build).
But unless I missed something, I didn’t see him disclose any financial positions that would indicate him betting on the collapse he is so clearly calling for.
I think that should be required to take any of these articles seriously - if your portfolio doesn’t reflect your stated opinions, your stated opinions aren’t what you really believe.
i see him brag about how long these astroturf joints are on bluesky. had been a while since i'd tried to actually scroll through one and it makes sense now: so much more space to spawn subscription promos. better offline indeed
Very good points.
My current model of how AI will scale out is that we'll move through the following choke points:
AI chip makers -> Data center infra and construction -> regional power companies
Right now we're firmly in the "AI chip makers" part of the expansion, with everything else in the beginning stages. AI is useful, but whether it's hyped or not, it's hard to deny that not being able to build and power data centers will impact how this plays out.
Pointed and excellent.
But I wouldn't hold my breath waiting for introspection from that camp. It seems that AI maximalists, like so many other players these days, see it as end-game time. There are no bounds or rules: pick a side, and go. And then eat the rest.
Sure, not everyone sees it this way. There are highly competent, human actors working in their joy toward a better way forward with all of it. But I don't think you'll find that spirit unbridled inside any profit-seeking corporation of any significant standing (though I would be happy to be proven wrong). If it existed there, it is being choked out by selfishness and survivalism.
And then there's Thiel and ilk waxing eschatological, adding a whole other layer to the scheme.
Everybody’s lying to me. Haven’t you heard: lying is a virtue now.
Railroads, e-commerce, and AI - all useful, all were (or may be) credit/stock bubbles. Railroads however have a much better depreciation schedule than GPUs.
He isn't arguing that AI is useless. Only that Nvidia is propping up a massive financial house of cards and that all the giant numbers being tossed around are fantasies.
It’s supply and demand, as long as the demand is there the numbers can be maintained
> He isn't arguing that AI is useless.
This is what he said in 2024
-----
The “iPhone moment” wasn’t a result of one thing, but a collection of different bits that formed an obvious whole — one device that did a bunch of things really, really well.
LLMs have no such moment, nor do they have any one thing they do well, let alone really well. LLMs are famous not for their efficacy, but their inconsistency, with even ardent AI cultists warning people not to trust their output
https://www.wheresyoured.at/never-forget-what-theyve-done/
The obvious answer to the power problem will be to have the AI design massively parallel exercise bicycle/electrical generator plants that can be powered by all of the people laid off by the AI.
A literal "virtuous cycle", if you will.
The article takes an odd turn in the second half, veering away from a very interesting deep dive into how a lot of backlogged US data center production may correlate with GPU "slippage" to China via questionable resellers and GPU rental outfits.
The lying is not even subtle anymore. The gap between the demo and the product has never been wider and people are starting to notice.
I don't think Ed has made a single correct LLM prediction, despite posting in a fury probably monthly since ChatGPT was released. Grifters gonna grift
Not sure it qualifies as an “LLM prediction,” but he was adamant that Nvidia would not come through with the $100 billion funding round, and sure enough they did not.
To Ed's credit, he's coming with real numbers. Much of his reporting is based on quarterly earnings reports, press releases, correlating reports from outlets like The Information, etc.
Contrast that with hyperscalers no longer reporting AI revenue separately, making bold claims about long term growth with no evidence to back it up, and a tech media apparatus that has largely avoided asking founders hard questions.
I know just as well as you how this is all going to turn (which is to say, nobody really knows). But I'll take the person doing the math over the person trying to hide numbers all day long.
See this [1] for how he comes up with numbers. I think he says a lot of things without understanding them, and not many serious participants in the area take him seriously.
[1] https://x.com/binarybits/status/2034376359909130249
Feel free to point out where the numbers are wrong in this article. If you're right about his ability to math, then you'll have no problem identifying concrete aspects of this piece that are wrong.
"The market can stay irrational longer than you can stay solvent"
This is the crutch of everyone unwilling to actually put their money on the line and bet.
“I think AI is a bubble and it’s going to collapse”
“Ok, then there are ways to bet against it if you’re really sure”
“Oh I’m sure I’m right, it’s just that the market might take a long time to realize I’m right”
Come on… there was no subtlety in the article at all. He said it’s all a house of cards, it’s all going to collapse, it’s all a grift… surely someone that certain should be willing to put something in…
"If you keep predicting market crash every single day of your life you will be the greatest predictor in the history of mankind because markets do eventually crash a little"
I don't think Sam Altman has made a single correct AGI prediction, despite saying AGI is a few months off. Grifters gonna grift
[flagged]
You may be a bit emotionally invested in this topic if you feel you're getting a lot of information from that exchange.
Why do you think so?
Because you’ve posted a dozen times here and it seems to be about the only topic you post on.
What topic do you mean?
Anyone with a single drop of common sense knows that Sam Altman is a grifter. If you don't see that, you are quite simply not bothering to apply critical thinking.
Altman is a grifter who is floating on the unexpectedly rapid advances in AI.
He will likely end up like Musk, another grifter who was floating on low hanging fruit in EV's and rocketry for a decade before being revealed.
The guy predicted a world-changing technological revolution 12 years ago and then pioneered it himself. That is the opposite of a grifter.
His entire role in it is the grifting part (raising money based on BS). His job is, and has been at this and other companies, grifting. You loop him in if you need a hype-man who'll say any crazy thing to bring in a buck.
It's a pity how shallow people's worldviews are
For sure.
Lol, he "did" it. Sam did nothing himself. CEOs, glorified spokespeople, don't do anything. Bet you also think Musk "made" electric cars and rockets
That blogpost also didn't really "predict" anything.
lol u think it's marxist to say Sam Altman didn't make ChatGPT when he didn't do a single line of code
Ah yes, now that the rails are all built what could we possibly do next?