A new global map of the AI future
With help from Steven Overly and Derek Robertson
It’s easy to think of new tech developments as totally virtual — ideas that can spread anywhere, powered by the internet, blossoming wherever human ingenuity lives.
But sometimes tech is rooted in places, in a surprisingly old-fashioned way.
For Vili Lehdonvirta, a Finnish researcher based at Oxford, the AI revolution has a fundamentally physical basis, one that could be as relevant to global power as the geography of proven oil reserves and pipelines has been for decades.
That’s why he’s currently mapping it.
AI’s geography is built around the distribution of “compute,” or raw computing power. The author of 2022’s “Cloud Empires” — a book about the power of big tech platforms — Lehdonvirta, 43, has just completed a geographical survey of the world’s largest cloud computing data centers, which he is set to publish later this year.
Now, he’s turning to graphics processing units, the ultra-fast chips used to develop the most powerful machine learning models. AI isn’t just about cutting-edge programming and good training data: it also requires facilities to house huge clusters of these GPUs.
The world’s most important chips aren’t evenly distributed: A handful of companies dominate the industry, and their facilities are concentrated in a handful of countries, like the U.S., Ireland, Germany and Japan. This map is poised to take on outsized importance as AI becomes a focal point of global economic and geopolitical competition.
DFD caught up with Lehdonvirta, a professor of economic sociology and digital social research at Oxford’s Internet Institute, to talk about the geography of AI, the reversal of the personal computing revolution, and Amazon’s relationship with British intelligence.
Our conversation has been edited and condensed for clarity.
Where did you get the idea to map AI infrastructure?
I was talking to some computer scientists in Oxford, and I was saying to them, repeating the stuff I hear, which is, “Oh, data, that’s the new oil, and that’s the crucial resource and the bottleneck to what you’re doing, right?” And they were like, “Not really. Data is not a problem for us. We have all the data we need. The problem is compute.” And then they told me: “That’s why we need to partner with big tech, because they’re the only ones who have enough compute.”
How does geography come into play?
This is another thing where a conversation with a computer scientist caused another lightbulb to go off. This was October 2022 in Germany. He is somebody who trains large AI models, and he was complaining to me about the fact that when he wants to train a model, he has to send all of his data to Dublin, because that’s where the nearest GPU instances of the type that he needs are. And you know how the Germans, as a rule, are very privacy-conscious — the idea of having to send all of your training data, which could be personal data, to another country, even though it’s still within the EU, was not appealing to this computer scientist.
In most countries of the world, if you’re training large AI models, you’re sending your data abroad. And that might be illegal — it might be against local laws or regulations. Even if it’s legal, it might expose you to risks that aren’t there if the data remains within your jurisdiction, such as the risk of surveillance and espionage.
You’re doing this by making a map. A map is a funny tool for the digital age.
For the past 20 years we’ve been talking about how the future is knowledge-intensive. Industry is immaterial. It’s based on human capital.
And now suddenly it seems like industrial capital has made a comeback. The future is in capital-intensive industries, not knowledge-intensive industries. You need to have these massive, industrial-scale facilities to house your data and your chips, and you need to cool them, and you need to provide them with megawatts of power.
This is kind of not the future we were promised, which is this immaterial one in which just what you know matters and not what you own. Now, the concern is “Can nation-states raise enough capital to be competitive in the 21st century digital economy?”
You’re not only mapping locations. Why is it important that the maps take note of individual tech companies?
It’s not just physically where the computers are, but also who owns them in that location. Because the same German computer scientist was complaining to me about the fact that he has to enter into an agreement with Amazon to train an AI model, and he has to do it under their terms. It’s not a negotiation. Amazon has certain terms of service, and you agree to those. And at the moment they’re not particularly onerous on AI developers. But if you’re a very digital liberty-minded computer scientist, you might not like the fact that you have to accept Amazon’s terms of service whenever you want to create a model. Often people like that are trying to do something to offset the power of large tech companies like Amazon.
What does that look like in practice?
We’ve mapped all the hyperscale data centers belonging to AWS, Azure, Google Cloud, Alibaba, Tencent and Huawei.
We have looked at what I call a country’s cloud infrastructure alignment, which is what percentage of the cloud infrastructure physically located in a given country is U.S. versus Chinese. That gives you a number between 0 percent and 100 percent.
And then I put that into a statistical model with international trade variables and with various variables describing bilateral security relationships between countries, such as alliances and conflicts. What that shows you is how these factors shape this geography of compute.
What can we learn from this sort of map?
The U.S. government’s security relationship with, let’s say, the U.K. means that the U.K. has actually gone all-in on the U.S. cloud. The tax agency in the U.K. bases its digital services entirely on AWS. The spy agencies — MI5, MI6 and GCHQ — store their state secrets on AWS, which I think is really intriguing.
What does that mean for policies about computing infrastructure?
One of the things I’m looking at and wondering about is this: the U.K. chancellor says the government is going to invest £900 million in national AI compute. At the same time, Amazon announces it’s going to invest $30 billion in its Virginia data centers alone, although over a longer period of time.
I’m not sure at this point that medium-sized nation states going head-to-head with big tech in infrastructure investment and capital investment is the way you want to play this game.
Let’s first try to develop some more general models of “What are the factors that shape this geography?” and then develop policy interventions on that basis.
So there’s friction between tech giants and nation-states. Where does that leave regular people?
There’s a sort of reverse personal computing revolution going on right now. When I was young, the personal computer revolution was a big thing because it decentralized computation — and therefore political power — out of mainframes and server rooms and into every office and every home.
What we’re seeing now is a reversal of that, where computation moves back inside closed doors. You’re not allowed to go into the red room of a hyperscale data center (where confidential data is processed). They’re very careful about whom they let in there. I don’t know if a head of state can get in there.
For years, Tristan Harris and Aza Raskin have been urging policymakers to force social media companies to change their business models. Now, the co-founders of the Center for Humane Technology are doing the same with artificial intelligence.
At an AI summit hosted by Senate Majority Leader Chuck Schumer last week, Harris and Raskin warned that tech companies have a financial incentive to release new AI models as quickly as they can make them, pushing the industry toward a pace of technological advancement that they argue society isn’t equipped to handle.
“The faster that they go, the more jobs they’re going to displace,” Harris said on today’s episode of POLITICO Tech. “The faster they go, the more intellectual property issues are going to come up. The faster they go, the more they’re not going to take time to fix discrimination issues and bias issues from the last round of AI that they released.”
To demonstrate the danger inherent in AI, the Center for Humane Technology sought to create a nefarious version of Meta’s open-source AI model known as “Llama 2.” Harris explained that an engineer with $800 was easily able to overcome Meta’s guardrails and strip away safety protections to make “Bad Llama,” then convinced it to provide directions for building biological weapons.
The solution, they propose, is to slow the industry down and hold companies accountable for problems their AI models perpetuate.
“Talking about responsibility and safety is great, but it will always get steamrolled by market dynamics and market competition,” Raskin said. “So, instead, the language of business is liability…. We need to have liability for foreseeable harms.”
Harris and Raskin discuss the consequences if Washington doesn’t act quickly during the full interview on POLITICO Tech. You can subscribe to POLITICO Tech on Apple, Spotify, Google or wherever you get your podcasts. — Steven Overly
The United Kingdom’s flurry of AI activity continues — with the nation’s Competition and Markets Authority now sounding the alarm this morning about industry consolidation.
In a new report (which numbers 129 pages in the full version), the U.K.’s anti-consolidation watchdog warned that with weak competition “people and businesses could be harmed, both immediately, and over the longer term,” and “exposed to significant levels of false information, AI-enabled fraud, or fake reviews… if a handful of firms gain or entrench positions of market power and fail to offer the best products and services and/or charge high prices.”
With the burgeoning industry dominated by players like Google-acquired DeepMind and OpenAI, which has a tight-knit partnership with Microsoft, it’s not difficult to understand why the CMA is sounding a warning bell despite the rapidly growing startup environment. (The report also points to Google’s relationship with Anthropic as an example of vertical integration in the industry.)
The report presents a list of guiding principles to encourage competition, and its authors say they plan to consult with civil society groups, leading foundational model developers like OpenAI and Anthropic, and other regulators to get a fuller picture of how the quickly emerging industry takes shape over the coming months. — Derek Robertson
Stay in touch with the whole team: Ben Schreckinger (email protected); Derek Robertson (email protected); Mohar Chatterjee (email protected); Steve Heuser (email protected); Nate Robson (email protected) and Daniella Cheslow (email protected).
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.