
Martin Beraja, David Yang and Noam Yuchtman conclude that China's exports of facial-recognition technologies are eroding democratic institutions


US President George H.W. Bush once remarked that, “No nation on Earth has discovered a way to import the world’s goods and services while stopping foreign ideas at the border.” In an age when democracies dominated the technological frontier, the ideas Bush had in mind were those associated with America’s own model of political economy.

But now that China has become a leading innovator in artificial intelligence, might the same economic integration move countries in the opposite direction? This question is particularly relevant to developing countries, since many are not only institutionally fragile, but also increasingly connected to China via trade, foreign aid, loans, and investments.

While AI has been hailed as the basis for a “fourth industrial revolution,” it is also bringing many new challenges to the fore. AI technologies have the potential to drive economic growth in the coming years, but also to undermine democracies, aid autocrats’ pursuit of social control, and empower “surveillance capitalists” who manipulate our behavior and profit from the data trails we leave online.

Since China has aggressively deployed AI-powered facial recognition to support its own surveillance state, we recently set out to explore the patterns and political consequences of trade in these technologies. After constructing a database for global trade in facial-recognition AI from 2008 to 2021, we found 1,636 deals from 36 exporting countries to 136 importing countries.

From this dataset, we document three developments. First, China has a comparative advantage in facial-recognition AI. It exports to more countries than the United States (83 versus 57 links) and has more trade deals (238 versus 211). Moreover, its comparative advantage in facial-recognition AI is larger than in its other frontier-technology exports, such as radioactive materials, steam turbines, and lasers and other beam processes.

While different factors may have contributed to China’s comparative advantage, we know that the Chinese government has made global dominance in AI an explicit developmental and strategic goal, and that the facial-recognition AI industry has benefited from the government’s demand for surveillance technology, often receiving access to large government datasets.

Second, we find that autocracies and weak democracies are more likely to import facial-recognition AI from China. While the US predominantly exports the technology to mature democracies (which account for roughly two-thirds of its links and three-quarters of its deals), China exports roughly equal amounts to mature democracies and to autocracies or weak democracies.

Does China have an autocratic bias, or is it simply exporting more to autocracies and weak democracies across all products? When we compared China’s exports of facial-recognition AI to its exports of other frontier technologies, we found that facial-recognition AI is the only technology for which China displays an autocratic bias. Equally notable, we found no such bias when investigating the US.

One potential explanation for this difference is that autocracies and weak democracies might be turning specifically to China for surveillance technologies. That brings us to our third finding: autocracies and weak democracies are more likely to import facial-recognition AI from China in years when they experience domestic unrest.

The data make clear that weak democracies and autocracies tend to import surveillance AI from China – but not from the US – during years of unrest, rather than pre-emptively or after the fact. Imports of military technology follow a similar pattern. By contrast, we do not find that mature democracies import more facial-recognition AI in response to unrest.

A final question concerns broader institutional changes in these countries. Our analysis shows that imports of Chinese surveillance AI during episodes of domestic unrest are indeed associated with a country’s elections becoming less fair, less peaceful, and less credible overall. And a similar pattern appears to hold with imports of US surveillance AI, though this finding is less precisely estimated.

At the same time, we do not find any association between surveillance AI imports and institutional quality among mature democracies. So, rather than interpreting our findings as the causal impact of AI on institutions, we view imports of surveillance AI and the erosion of domestic institutions in autocracies and weak democracies as the joint outcome of a regime’s pursuit of greater political control.

Interestingly, we also find suggestive evidence that autocracies and weak democracies importing large amounts of Chinese surveillance AI during unrest are less likely to develop into mature democracies than peer countries with low imports of surveillance AI. This suggests that the tactics employed by autocracies during times of unrest – importing surveillance AI, eroding electoral institutions, and importing military technology – may be effective in entrenching non-democratic regimes.

Our research adds to the evidence that trade does not always foster democracy or liberalise regimes. Instead, China’s greater integration with the developing world may do precisely the opposite.

This suggests a need for tighter AI trade regulation, which could be modeled on the regulation of other goods that produce negative externalities. Insofar as autocratically biased AI is trained on data collected for the purpose of political repression, it is similar to goods produced from unethically sourced inputs, such as child labour. And since surveillance AI may have negative downstream externalities, such as lost civil liberties and political rights, it is not unlike pollution.

Like all dual-use technologies, facial-recognition AI has the potential to benefit consumers and firms. But regulations must be carefully designed to ensure that this frontier technology is diffused around the world without facilitating autocratisation.


Martin Beraja is Assistant Professor of Economics at MIT. David Y. Yang is Professor of Economics and Director of the Center for History and Economics at Harvard University. Noam Yuchtman is Professor of Political Economy and a fellow of All Souls College at the University of Oxford. Andrew Kao contributed to this commentary. Copyright: Project Syndicate, 2024, published here with permission.


3 Comments

Free with every infrastructure project! (Or else!)


A golden time for humanity when MIT, Harvard and Oxford can come together to declare that 83 is "roughly twice" 57.

Also worth noting that this piece was originally published elsewhere over 9 months ago.


It's worth pointing out that Chinese-made surveillance equipment is now endemic in New Zealand. There's even a Wikipedia article about Mass surveillance in New Zealand. Weak democracy with autocratic tendencies accurately describes the last Labour government, IMHO.
