For several months now, Michael Burry, the investor featured in the film “The Big Short”, has been at the center of controversy for his posts on X, where he points the finger at the tech giants. In Q3 2025, he disclosed put options against Nvidia and Palantir with a notional value of about $1.3 billion, a bet on the AI bubble bursting.

Recently, he shed light on an accounting maneuver by which big tech companies report lower costs, and therefore higher profits, by stretching the assumed useful life of their Nvidia GPUs and thereby understating annual depreciation. His latest criticism focuses on the enormous energy consumption of these GPUs: he argues that the United States is losing the AI race against China precisely because its national strategy relies on this type of chip without the energy infrastructure adequate to sustain it.
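The depreciation point rests on simple arithmetic. A minimal sketch, using straight-line depreciation and purely hypothetical figures (actual GPU prices and depreciation schedules vary by company):

```python
# Minimal sketch of the depreciation lever described above.
# All figures are hypothetical, for illustration only.

def straight_line_depreciation(purchase_price: float, useful_life_years: float) -> float:
    """Annual expense under straight-line depreciation."""
    return purchase_price / useful_life_years

gpu_cost = 30_000  # hypothetical cost of one data-center GPU, in USD

# Booking the same GPU over 3 years versus stretching it to 6:
expense_3y = straight_line_depreciation(gpu_cost, 3)  # 10,000 USD/year
expense_6y = straight_line_depreciation(gpu_cost, 6)  #  5,000 USD/year

# Every year of extra assumed life lowers the annual expense, and the
# difference flows straight into higher reported operating profit.
print(f"3-year schedule: {expense_3y:,.0f} USD/year")
print(f"6-year schedule: {expense_6y:,.0f} USD/year")
print(f"Reported profit rises by {expense_3y - expense_6y:,.0f} USD per GPU per year")
```

Multiplied across millions of GPUs, the choice of schedule moves reported profits by billions of dollars, which is the heart of Burry’s accusation.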

Burry compares Nvidia to Cisco Systems during the 2000 dot-com bubble. Both companies reached extremely high valuations at their peaks, with Nvidia currently sitting at the top of the global market-capitalization rankings. In Cisco’s case, those valuations proved wrong, built as they were on unrealistic expectations of future demand: its stock collapsed by 78% once it became clear that the enormous spending on infrastructure would not generate the hoped-for profits, because there wasn’t enough demand to sustain it.

Today, the market situation looks very similar. Are the so-called hyperscalers (such as Google, Meta, Amazon, Microsoft, Oracle, and Cloudflare), with their enormous CapEx forecasts for the coming years, making the same mistake? Will the hundreds of billions of dollars they have spent, and will spend, on building ever-larger data centers be economically sustainable?

In China, the picture is reversed. The country has built a structural advantage that could prove decisive: it generates roughly twice as much electricity as the United States.

To understand how central energy consumption is to AI, and how serious the situation is in the United States, consider that the Nvidia A100 chip, introduced in 2020 and still widely used in data centers around the world, has a TDP (Thermal Design Power) of 400 watts. According to International Energy Agency estimates published in April 2025, data centers as a whole already accounted for about 1.5% of global electricity consumption in 2024.

Today, however, the A100 is considered obsolete. The Nvidia H100, the industry’s workhorse and the most widely deployed chip since the AI boom began in 2022, has a TDP of 700 watts, almost double that of its predecessor. Running at full power, a single H100 draws roughly the annual electricity consumption of an average American. Considering that at least 3.5 million H100s are expected to be installed by the end of 2025, we arrive at an aggregate consumption higher than the combined electricity consumption of Guatemala and Lithuania.
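These figures are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes continuous operation at full TDP, which overstates the draw of an underutilized chip but ignores cooling and other data-center overhead, which pushes the other way:

```python
# Back-of-the-envelope check of the H100 figures above.
# Assumes 24/7 operation at full TDP; no utilization discount,
# no cooling/infrastructure overhead.

HOURS_PER_YEAR = 24 * 365  # 8,760 h

h100_tdp_kw = 0.700  # H100 TDP: 700 W

# Annual energy drawn by one H100 at full power:
kwh_per_gpu = h100_tdp_kw * HOURS_PER_YEAR  # ≈ 6,132 kWh/year

# Annual energy for an installed base of 3.5 million H100s
# (1 TWh = 1e9 kWh):
fleet_twh = kwh_per_gpu * 3_500_000 / 1e9  # ≈ 21.5 TWh/year

print(f"One H100, full load: {kwh_per_gpu:,.0f} kWh/year")
print(f"3.5 million H100s:   {fleet_twh:.1f} TWh/year")
```

At about 6,100 kWh per chip per year, a single H100 does land in the range of an individual American’s electricity use, and the fleet-level total comes out around 21 TWh before counting overhead.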

The situation is likely to worsen with Blackwell, Nvidia’s next-generation B100/B200 chips already in the deployment phase, which draw up to 1,000 watts per GPU. In 2024 alone, American data centers consumed over 4% of total national electricity; by 2030, that share is expected to grow by 133%.
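Taking the article’s own figures at face value, the implied 2030 share is straightforward to compute (a sketch; the 4% baseline is the lower bound cited above):

```python
# Rough projection implied by the figures above.
share_2024 = 0.04  # US data centers: "over 4%" of national electricity in 2024
growth = 1.33      # the cited 133% increase expected by 2030

share_2030 = share_2024 * (1 + growth)
print(f"Implied 2030 share: {share_2030:.1%}")  # ≈ 9.3% of US electricity
```

In other words, if the projection holds, data centers would claim nearly one kilowatt-hour in ten on the American grid by the end of the decade.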

In China, energy independence has long been at the center of national development, and as a result the country now holds an almost insurmountable advantage. China’s electricity generation surpassed that of the United States and Europe combined in the early 2020s, and the gap has only widened since.

David Fishman, an expert on China’s energy sector, told Fortune in August 2025 that in China, “electricity isn’t even a question.” Rui Ma, founder of Tech Buzz China, reported after a recent tour of Chinese AI infrastructure that “everywhere we went, people treated energy availability as a given.”

But that’s not all. As Burry emphasizes in his X posts, Chinese electricity production has increased by 600% over the last 25 years, while American production has declined by 10% during the same period. While the US views AI data centers as a threat to national grid stability, China treats data centers as a convenient way to “absorb the oversupply” of electricity.

In the AI race, we’re not just losing — we’ve already been lapped. While we debate electricity bill increases, China is developing its AI future with confidence, having solved the electricity problem decades ago.

In all this, Nvidia, the current near-monopolist in AI chips, will have to reassure investors that demand for its products is sustainable. If the hyperscalers miss profit forecasts because of enormous operating costs and end-user demand that so far looks far from overwhelming, Nvidia stock will suffer considerably, since the hyperscalers are its main customers. Despite the positive Q3 2025 revenue figures, such high valuations are often harbingers of volatility should the company fail to meet earnings expectations in 2026.
