Sam Altman’s chip ambitions may be crazier than anyone feared
Opinion OpenAI CEO Sam Altman’s dream of creating a network of chip factories to fuel the growth of artificial intelligence may be far more difficult to pull off than anyone imagined.
As reported last month, Altman is supposedly seeking billions of dollars in funding from partners including Abu Dhabi-based G42, Japan’s SoftBank, and Microsoft, all to build out neural network accelerators.
Now, a Wall Street Journal report, citing anonymous sources, claims that the ambitious project could involve raising as much as $7 trillion.
That’s a staggering amount, and from this vulture’s perspective, it defies logic.
To put that number in perspective, it’s more than 13 times the total revenue of the entire semiconductor industry last year: according to Gartner, global revenues came to roughly $533 billion in 2023. Despite all the hype around generative AI, analysts expect sales to grow by 17 percent to $624 billion this year.
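As a sanity check, the scale works out like this. A minimal sketch using the figures quoted above (the numbers are the article’s, not audited):

```python
# Back-of-envelope comparison of the reported ask against
# industry revenue. All dollar values in billions of USD.
ask = 7_000            # reported $7 trillion fundraising target
revenue_2023 = 533     # Gartner's 2023 global semiconductor revenue
growth_2024 = 0.17     # analysts' expected growth this year

multiple = ask / revenue_2023
revenue_2024 = revenue_2023 * (1 + growth_2024)

print(f"{multiple:.1f}x 2023 industry revenue")    # 13.1x
print(f"${revenue_2024:.0f}B expected in 2024")    # $624B
```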
But let’s say, for the sake of argument, that Altman and company really are that brave, and could somehow marshal a sum equal to a quarter of 2023 US GDP to fund this endeavor. What does $7 trillion buy you?
That’s enough money to acquire Nvidia, TSMC, Broadcom, ASML, Samsung, AMD, Intel, Qualcomm, and every other chip maker, designer, intellectual property holder, and hardware vendor that matters outright, and still have trillions left over.
Although it would be fun to watch Sam burn through a mountain of cash starting what could be the biggest antitrust fight of the century, pouring that money into fabs and processor packaging plants to boost chip production is probably what he has in mind. In fact, we can think of plenty of better ways to spend that kind of money, but let’s stick with chips for a bit.
Now that’s a lot of fab
No matter how you slice it, $7 trillion is still an enormous amount to spend on chip factories, even on a network of them.
Today’s leading-edge chip fab costs between $10 billion and $30 billion, depending on the size and location of the site. Let’s assume that the facilities Altman envisions ultimately cost about $20 billion on average. At that rate, $7 trillion gets you about 350 foundry sites.
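The fab count follows directly from that assumed average price tag. A quick sketch, with the $20 billion per-site figure taken as given:

```python
# How many leading-edge fabs $7 trillion buys at an assumed
# $20 billion average cost per site.
budget = 7_000e9        # $7 trillion
cost_per_fab = 20e9     # assumed average cost per fab

fabs = budget / cost_per_fab
print(int(fabs))        # 350
```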
The issue then becomes: who will build them? These facilities are among the largest and most complex operations in the manufacturing world, requiring components and materials from countless suppliers and specially trained personnel to install, maintain, and operate them.
For this reason, it is not uncommon for these facilities to take four or more years to become operational, and perhaps much longer to ramp revenues to acceptable levels. There is nothing fast about building fabs correctly.
In the US, we have seen a wave of investment in domestic semiconductor manufacturing and R&D, driven largely by the $53 billion in government support made available under the CHIPS Act. However, foundry operators have already run into serious problems.
As previously mentioned, a shortage of skilled workers has already delayed TSMC’s plant in Arizona. TSMC has gone so far as to fly technicians in from Taiwan in an attempt to get the facility back on track.
Last summer, the Semiconductor Industry Association (SIA) and UK-based Oxford Economics warned that the US semiconductor industry faces a shortage of 67,000 technicians, engineers, and computer scientists by 2030. Intel, which is leading one of the largest fab build-outs in the US, puts the figure at approximately 70,000 to 90,000 over the next few years.
And that is for the relatively small number of fabs under development in the United States. It doesn’t take much imagination to see how 350 additional sites could turn this into a problem on a global scale.
Flooding the market
If that weren’t enough, demand for semiconductors tends to ebb and flow in cycles. Buying sprees are usually followed by long digestion periods, and jumps in computer hardware sales tend to coincide with operating system or software releases.
Let’s assume for a moment that these hundreds of fabs would serve not only OpenAI, or the AI world in general, but everything adjacent to it as well, though it may be that Altman really does just want an endless stream of machine-learning accelerators and related kit.
The memory market is only now recovering from the inventory glut that pushed average selling prices to record lows. Meanwhile, Intel has reportedly pushed back the completion date of its Ohio fabs to late 2026, blaming current weakness in the semiconductor market and delays in obtaining CHIPS Act funding.
Of course, the industry rumors have yet to detail a timeline for Altman’s supposed $7 trillion semiconductor project. It’s safe to assume it won’t happen overnight, and build-outs of this kind have to be paced to avoid expanding too aggressively and flooding the market with chips.
Even spread over the next 25 years, we’re still talking about a huge amount of money: enough for 14 fabs a year at a cost of $280 billion a year. To hit that pace, TSMC, Samsung, and Intel would need to nearly triple their combined capital spending and direct all of it toward chip factories.
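The annualized figures above fall out of the same assumptions. A sketch, keeping the hypothetical $20 billion-per-fab price:

```python
# Pace of construction if the 350-fab build-out is spread
# over a 25-year timeline.
total_fabs = 350
years = 25
cost_per_fab = 20e9      # assumed average cost per fab, USD

fabs_per_year = total_fabs / years
annual_spend = fabs_per_year * cost_per_fab

print(f"{fabs_per_year:.0f} fabs a year")         # 14 fabs a year
print(f"${annual_spend / 1e9:.0f}B a year")       # $280B a year
```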
Granted, that sounds less crazy, but given this theoretical timeline, why would Altman need to raise $7 trillion now? When companies like Intel talk about their foundry roadmaps, they tend to fund only what is immediately in the works.
For example, when the x86 giant announced its plan to invest $100 billion over the next decade in a massive factory complex in Ohio, it actually only committed to building two fabs at an estimated cost of $10 billion apiece. As mentioned, even that has been delayed.
Part of a bigger plan?
So perhaps this $7 trillion is part of a bigger plan to support OpenAI’s ambitions. All those chips will have to go somewhere. That means not only fabs to make the chips, but datacenters to use them, and (hopefully) clean energy to power everything, and that will cost a lot of dollars, too.
The chips used to run AI models are notoriously power-hungry. An Nvidia H100 node with eight GPUs is rated at 10.2 kilowatts. Scale that up to 350,000 GPUs, which is how many H100s Meta says it will deploy this year, and you’re looking at a massive amount of power.
With a budget of $100 billion for GPUs, just 1.4 percent of the $7 trillion, you could buy five million H100s at an average price of $20,000 apiece. For the record, that’s more than double the number Nvidia is expected to ship in all of 2024.
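The GPU and power math works out as follows. A rough sketch, with the hypothetical $100 billion GPU budget and $20,000 average price taken as assumptions:

```python
# GPU purchasing power and accelerator power draw, from the
# figures above.
gpu_budget = 100e9       # hypothetical GPU budget, USD
h100_price = 20_000      # assumed average price per H100, USD
gpus_bought = gpu_budget / h100_price
print(f"{gpus_bought / 1e6:.0f} million H100s")   # 5 million H100s

node_power_kw = 10.2     # eight-GPU H100 node rating
meta_gpus = 350_000      # Meta's claimed deployment this year
nodes = meta_gpus / 8
total_mw = nodes * node_power_kw / 1_000
print(f"{total_mw:.0f} MW")                       # 446 MW
```

That roughly 446 MW figure for Meta’s fleet alone gives a sense of what hundreds of such deployments would demand from the grid.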
Needless to say, power will be an issue. So allocating some money to address this challenge would make sense.
The good news here is that Altman has a long history of backing energy startups. Last year, Oklo, a nuclear fission startup backed by the CEO of OpenAI, announced its plans to go public.
Meanwhile, on the more experimental side of things, Altman has thrown his weight behind Helion Energy, which is working to commercialize a helium-3 fusion power plant. Despite the fact that Helion has yet to demonstrate a working reactor, Altman’s involvement appears to have been enough for Microsoft to sign a power purchase agreement with the startup. The technology is not expected to see deployment until at least 2028, assuming Helion can make it work.
However, this leads your humble vulture to conclude that the $7 trillion figure used to describe the scope of Altman’s ambitions is either a gross exaggeration, or part of a larger, more comprehensive plan. ®