Viewed through an industrial and financial lens, several bearish catalysts are indeed starting to pile up: AI cloud profitability, energy constraints, the scale of CapEx, ecosystem circularity, creative accounting on depreciation, and rising debt. None of these factors calls into question the considerable technological progress that AI represents. Their combination, however, materially shifts the risk/return profile of these tech companies, which are tipping from a historically asset-light model into a heavy-industry logic, accompanied along the way by accounting and narratives that are at times more opportunistic than convincing.
Here the aim is not to be alarmist, but rather to lay out the facts. Because at this stage of the cycle, I sincerely think it will be the most down-to-earth constraints - electricity, hardware obsolescence, industrial execution, and accounting - that will dictate stock returns.
The AI cloud business: a growth engine, but murky profitability
At MarketScreener, our architecture is hybrid. Our platform relies mainly on servers that we own, although we also rent infrastructure - notably for AI projects, where we need those famous GPUs.
For example, a server with a V100 GPU (16 GB of VRAM) rents for around €400 or €500 per month. Given that these chips date from 2017 and cost about €10,000 bare, the investment is certainly paid back for cloud providers. However, today, hardly anyone rents these V100s, as they do not support the latest software required to run the best LLMs efficiently. You are more likely to use an L40 GPU, launched in 2022, or an A100.
And that is the crux: both hardware and software development have accelerated. New chips are arriving faster and faster - Nvidia is on roughly an annual cadence - and the open-source community is continuously innovating on the software layer. The upshot is that infrastructure bought today may be technically OK in three years but commercially obsolete versus the efficiency gains of new generations.
It is getting hard to imagine replaying the V100 scenario: renting a chip bought for €10,000 at €500 per month for more than three years. And again, we are talking bare chips, not a full, operational, cooled, cabled AI rack, etc. Nor are we talking energy costs: an AI rack running 24/7 pulls on the order of 100 kW. Far from negligible.
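To put orders of magnitude on that, here is a quick back-of-envelope in Python using the round numbers above; the electricity price (€0.10/kWh) is my own assumption for illustration, not a quoted rate:

```python
# Back-of-envelope GPU rental economics, using the round figures cited above.
# The electricity price is an assumed placeholder for illustration only.

gpu_price_eur = 10_000          # bare V100-class chip
rent_per_month_eur = 500        # monthly rental income per GPU
rack_power_kw = 100             # an AI rack running 24/7
elec_price_eur_per_kwh = 0.10   # assumed industrial electricity price

# Naive payback on the bare chip, ignoring every other cost
print(f"Payback on the bare chip: {gpu_price_eur / rent_per_month_eur:.0f} months")  # 20 months

# Annual electricity bill for one rack at full duty cycle
rack_kwh_per_year = rack_power_kw * 24 * 365                                          # 876,000 kWh
print(f"Rack electricity bill: {rack_kwh_per_year * elec_price_eur_per_kwh:,.0f} EUR/year")  # ~87,600 EUR
```

The naive 20-month payback only holds for the bare chip; once the rack, cooling, staff, and a five-figure annual power bill are layered on, the margin of error shrinks fast.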
Two things can therefore happen - and we are already starting to see both.
- The open-source community can adapt software to let the latest models run on prior-generation chips, extending their economic life. Pretty cool.
- Hyperscalers will raise prices. And that is even necessary if they want to avoid ending up with a ROIC below 5%. Case in point: OVH's CEO anticipates 5%-10% cloud price hikes by mid-2026, with internal server costs up 15%-25%, notably due to DRAM/SSD pressure and AI hardware.
And you know what? I think a good chunk of AI cloud demand will not follow if prices jump too abruptly. Simply because monetizing an AI project is not that simple. Prototyping is very easy - and I am well placed to say so - but putting a reliable, scalable, and profitable system into production is another story.
For several months our goal has been to make MarketScreener's search bar intelligent, and I can tell you: it is easier said than done if you do not want to spend €0.10 every time a user hits Enter. And if we dare to contemplate this kind of project, it is precisely because GPU rental pricing still looks affordable to us. But if the price doubles or triples, I sincerely think a large part of demand will not absorb those increases.
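To see why price sensitivity matters so much, here is a minimal per-query cost sketch; every figure in it (token counts, per-token prices, traffic) is a hypothetical placeholder, not MarketScreener's actual numbers:

```python
# Rough per-query cost model for an LLM-backed search bar.
# All inputs below are hypothetical placeholders, not real pricing or traffic.

input_tokens = 2_000             # prompt + retrieved context per query (assumed)
output_tokens = 300              # generated answer per query (assumed)
price_in_per_mtok_eur = 2.50     # assumed price per million input tokens
price_out_per_mtok_eur = 10.00   # assumed price per million output tokens
queries_per_day = 50_000         # assumed traffic

cost_per_query = (input_tokens * price_in_per_mtok_eur
                  + output_tokens * price_out_per_mtok_eur) / 1_000_000
print(f"Cost per query: {cost_per_query:.4f} EUR")                    # ~0.008 EUR
print(f"Daily bill:     {cost_per_query * queries_per_day:,.0f} EUR") # ~400 EUR

# Double or triple the per-token prices and the daily bill scales linearly;
# that is exactly the sensitivity described above.
```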
The physical bottleneck
Goldman Sachs estimates data center electricity demand will rise by about 165% by 2030, driven overwhelmingly by AI.
Supply of dispatchable power is not growing anywhere near as fast. The question is no longer even "at what price?" but "will there be enough, in the right place, at the right time?"
In Northern Virginia, the world's top hub, new projects face waits of up to seven years to connect to the grid. And it is not a local exception: US and European grid operators report lengthening queues everywhere AI wants to set up.
Tens of billions of dollars' worth of capacity is being built... that cannot run because it is not hooked up (shortages of transformers, cooling equipment, specialized labor, etc.). And the irony is that these assets depreciate while they wait to be plugged in.
That is the real opportunity cost of AI: not just capex, but industrial downtime.
It is such a mess that hyperscalers are signing or financing SMR nuclear projects (Oklo, X-Energy, etc.). But do not kid yourself: meaningful capacity from those is unlikely to arrive before 2030 either.
This energy bottleneck has two direct effects on hyperscalers:
- Unplanned additional CapEx: they must fund not only data centers but sometimes their own energy solutions.
- ROI delay: potential AI revenues are there, demand is there (for now, and at this price), but deliverable capacity is constrained, so monetization is pushed out.
Investment on a scale never before handled by private companies
Make no mistake: we are in an unprecedented historical regime, with hundreds of billions of CapEx per year concentrated in a handful of companies.
I hardly dare cite exact figures for fear of being out of date. From what I recall, we are talking about over $600bn in annual AI investments by 2026-2027.
According to the Wall Street Journal, US AI investment may have accounted for half of the country's GDP growth over the first six months of the year.
Who can execute such investment programs without destroying value?
The players themselves acknowledge it: none of these companies has ever run a $50bn industrial project - and they are now launching a dozen simultaneously. At this scale, the smallest logistics error becomes a money pit, and delays cost fortunes.
Funding these investments is not trivial either. A genuine shift in capital structure is underway at tech companies we used to know as exceptionally solid financially.
In 2025, more than $120bn of debt was issued by these hyperscalers, a sharp increase versus prior years, and the projected dynamic for 2026-2027 is even stronger.
Oracle is a good example: net debt now exceeds $80bn and leverage (net debt to EBITDA) is above 3x.
Even if sentiment in the US seems to have eased since the end of the shutdown and the publication of macro figures, this step-change matters: less leeway if rates remain high, more sensitivity to the economic cycle, and therefore greater dependence on AI's commercial success.
Ecosystem circularity
One point repeatedly raised by analysts around the globe concerns the circularity of economic flows. This incestuous pattern of flows suggests that part of the growth reported by players in this ecosystem does not represent genuine net value creation.
The scheme is fairly simple:
- Hyperscalers invest massively in data centers.
- A large part of AI demand comes from... AI players themselves: start-ups, labs, platforms, model publishers.
- These players fund their cloud consumption via capital raises in which hyperscalers and semiconductor giants are often shareholders, partners, or exclusive suppliers.
- Cloud revenues are thus inflated in part by a system where the supplier indirectly finances its own customer.
None of this is catastrophic, but it is worth noting that this circularity enables these players to post spectacular growth even though end demand, from the general public or traditional enterprises, has yet to prove that it is keeping pace.
Why is that fragile?
- Because the loop depends on the cost of capital. As long as money is plentiful and valuations remain high, start-ups can consume cloud at a loss. If market appetite slows, the loop contracts quickly.
- Because revenues are correlated. If one link cuts spend (for example, an AI platform slowing its growth), the hyperscaler sees its cloud revenues slow, reducing its ability to reinvest, which weighs on GPU demand... etc.
- Because end value is not yet guaranteed. The ecosystem is spending enormously today on a promise, but if monetization of use cases takes longer than expected, the whole edifice becomes very sensitive to a regime change.
Creative accounting
Some will call it fraud, others creative accounting. To me it is the most concrete point here: hyperscalers have started to assume the useful life of their servers and chips has increased, and are therefore lengthening depreciation schedules even as the product life cycle of AI hardware is shortening. A worrying divergence, suggesting they are booking earnings today at the expense of tomorrow.

Useful life of servers (depreciation schedule) for the three main hyperscalers. The coordinated shift from 3/4 years to 6 years is easy to observe over time. Source: "Why AI factories bend, but don't break, useful life assumptions", SiliconAngle.
In 2023, Alphabet extended estimated useful life for servers from 4 to 6 years and some networking gear from 5 to 6 years. Same for Microsoft.
In 2025, Meta extended to 5.5 years (previously 4-5 years). The same year, Oracle also began depreciating hardware over 6 years.
If you do not grasp the financial mechanism, a quick example:
Suppose a hyperscaler buys $100bn of servers/AI GPUs. If the company deems the useful life of this hardware to be 3 years, it books around $33bn per year of depreciation (assuming straight-line). If it stretches to 6 years, it books only $17bn per year.
Result: +$16bn of operating profit per year in the short term, without cash changing by a cent. It is just income statement timing, because in four years the carrying value of these assets on the balance sheet will be disconnected from economic reality, and the company may be forced to impair the residual value in one go - surprising many investors.
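A minimal sketch of the mechanics, using the same round numbers as the example above:

```python
# Straight-line depreciation of a $100bn hardware purchase,
# over 3 years versus 6 years, as in the example above.

capex_bn = 100

dep_3y = capex_bn / 3   # ~$33bn charged to the P&L each year
dep_6y = capex_bn / 6   # ~$17bn charged each year

print(f"Annual depreciation over 3y: ${dep_3y:.1f}bn")
print(f"Annual depreciation over 6y: ${dep_6y:.1f}bn")
print(f"Operating profit boost, years 1-3: ${dep_3y - dep_6y:.1f}bn/yr")

# Net book value after 4 years: zero under the 3-year schedule,
# but still a third of the purchase price under the 6-year one.
nbv_year4_6y = capex_bn * (1 - 4 / 6)
print(f"Carrying value after 4 years (6y schedule): ${nbv_year4_6y:.1f}bn")
```

If the hardware is economically spent after three or four years, that residual third of the purchase price is exactly what would have to be impaired in one go.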
Michael Burry estimates the scale of under-depreciation of these companies' AI assets will reach around $180bn by 2028. According to him, this accounting choice boosted Oracle's profits by 26.9% and Meta's by 20.8%.
Burry is not the only one going out on a limb. The Economist runs with "the $4 trillion accounting puzzle at the heart of the AI cloud".
By their estimates, if these assets were depreciated over three years instead of the longer lives now used by the companies, annual pre-tax profits would fall by about 8%. And if depreciation truly matched the replacement pace implied by Nvidia's annual release cadence (which is extreme and does not really make sense to me - just look at iPhones), the implied shock to market value could reach $4 trillion.
We can debate the word "fraud," because it is not easy to estimate the useful life of equipment subject to more or less annual innovation. But the economic mechanism is undeniable: pushing the bill into the future while the tech accelerates. That divergence is odd. However, as I said earlier, we are indeed seeing an effort by the open-source community to build software that lets the latest models run on older-generation chips. And it is entirely possible that, in practice, the newest generations of chips prove very resilient, especially in the final phase of their life, for simpler workloads (inference rather than training) with clients like us.
In the end, perhaps these warning signs have kept the AI trade from getting even more crowded than it already is. That, at least, is Bank of America's somewhat bold theory: skepticism benefits the daring by keeping other investors on the sidelines. Needless to say, the US investment bank's semiconductor research team has no doubt about the sector's momentum, despite the noise.