
Nations with authoritarian governance like the UAE and China have massive advantages over Western counterparts, Angela Huyue Zhang says

OpenAI's Dall-E 3's GenAI take on a giant data centre powered by nuclear energy.

By Angela Huyue Zhang*

Last year, the United Arab Emirates made global headlines with the release of Falcon, its open-source large language model (LLM). Remarkably, by several key metrics, Falcon matched or outperformed the LLMs of tech giants like Meta (Facebook) and Alphabet (Google).

Since then, the UAE has positioned itself as a frontrunner in the global artificial-intelligence race by consistently releasing updates to its powerful model. These efforts have not gone unnoticed: in April, Microsoft acquired a US$1.5 billion minority stake in G42, the UAE’s flagship AI company, underscoring the country’s growing influence.

Analysts often attribute the UAE’s emergence as an AI powerhouse to several factors, including robust state support, abundant capital, and low-cost electricity, all of which are necessary for training LLMs. But another important – and often overlooked – factor is the country’s authoritarian governance model, which enables the government to leverage state power to drive technological innovation.

The UAE is not alone. Authoritarian countries like China have a built-in competitive advantage when it comes to AI development, largely owing to their reliance on domestic surveillance, which fuels demand. Facial-recognition technologies, for example, are used by these regimes not just to enhance public safety but also as powerful tools for monitoring their populations and suppressing dissent.

By contrast, facial recognition has become a source of enormous controversy in the West. The European Union’s AI Act, which came into force on August 1, has effectively banned its use in public spaces, with only a few narrowly defined exceptions.

This provides AI firms in China and the UAE with a massive advantage over their Western counterparts. Research by David Yang and co-authors shows that Chinese AI firms with government contracts tend to be more innovative and commercially successful, owing to procurement practices that provide them with access to vast troves of public and private data for training and refining their models. Similarly, UAE firms have been allowed to train their models on anonymised health-care data from hospitals and state-backed industries.

AI firms seeking access to such data in Western countries would face numerous legal hurdles. While European and American companies grapple with strict compliance requirements and a surge in copyright-infringement lawsuits, firms in China and the UAE operate in a far more lenient regulatory environment.

This is not to suggest that authoritarian countries do not have laws protecting data privacy or intellectual property. But the national goal of promoting AI development often takes precedence, resulting in lax enforcement.

Meanwhile, consumers in authoritarian countries tend to be more supportive of AI. A 2022 Ipsos survey, for example, ranked China and Saudi Arabia – another authoritarian Gulf state with technological ambitions – as the world’s most AI-optimistic countries. These regimes’ widespread use of surveillance tools seems to have accelerated the commercial adoption of emerging technologies, possibly increasing public trust in the companies deploying them.

Moreover, authoritarian governments benefit from the ability to coordinate and direct resources toward innovation, especially through state-owned enterprises and sovereign wealth funds. Both the UAE and China have implemented top-down national strategies aimed at positioning themselves as global AI leaders. As I explained in a recent paper, the Chinese government is not just a policymaker but also a supplier, customer, and investor in this sector.

The UAE has adopted a similar approach. In 2017, it became the first country to appoint a Minister of State for AI, whose primary mission is to facilitate public-private partnerships and provide firms with convenient access to valuable training data. Notably, the Falcon AI model was developed by the Technology Innovation Institute, a state-funded research centre. G42, which is backed by the UAE’s sovereign wealth fund and chaired by the government’s national security adviser, collaborates with various state agencies.

Recognising the vital role of academic research in driving technological progress, the UAE also established the Mohamed bin Zayed University of Artificial Intelligence, the world’s first university dedicated exclusively to AI.

Despite the many similarities between the AI strategies of the UAE and China, one crucial difference stands out: whereas China’s progress in advanced technologies could be impeded by Western restrictions on chip and equipment exports, the UAE enjoys unrestricted access to these essential resources. In 2023, G42 signed a US$100 million deal with the California-based startup Cerebras to build the world’s largest supercomputer for AI training. And earlier this year, the company reportedly engaged in talks with OpenAI chief executive Sam Altman about a potential investment in an ambitious semiconductor venture that could challenge Nvidia’s dominance in the industry.

But the reasons for the UAE’s success are still widely misunderstood. Tellingly, Altman recently suggested that the country could “lead the discussion” on AI policy, acting as a “regulatory sandbox” for the rest of the world. In praising the UAE’s approach, Altman obscures a fundamental point: it cannot be replicated in a democratic environment.


*Angela Huyue Zhang is Professor of Law at the University of Southern California. She is the author of High Wire: How China Regulates Big Tech and Governs Its Economy (Oxford University Press, 2024).


3 Comments

OK, as we all know, AI needs access to voluminous data, so the question is at what cost.

We don't work for AI; it works for us. So it's for each country to figure out the trade-offs it is prepared to accept.

If non-democratic countries don't want so many personal freedoms in order to gain an 'advantage', that's on them.

The critical issue is the sovereignty and security of the data - one for NZ to watch. It could end up that no offshore data centres will ultimately be acceptable.

Mark

That is a naive view: a more powerful AI in the hands of an authoritarian regime is everyone's problem, not just that nation's.


Well yes, but when you design your system of governance so that you plan to racially identify groups and political dissidents, target them and all their family, and then remove them to concentration camps, you tend to have a lot of biometric data on your population already available. When you keep track of every interaction people have in their daily lives so you can punish them and deny their human rights, to the point that they are effectively denied living needs, access to essential functions, and work, you tend to keep a lot of private information that you openly share across industries and organisations. And when you have a firewall and monitor every online conversation so you can moderate and block any mention of historical protest or criticism of the government, or even of sensitive subjects (like the ethnic cleansing), to the point that even the date of an event cannot be mentioned, nor a combination of numbers like the date, then yes, you have a lot of conversational data already captured and stored.

I think the downsides of this are brutally obvious, and since AI has not resulted in massive profits or effective cost savings (aside from those scamming investment capital), it seems like a lot to give up for very negligible effects on AI performance, and for the very minor financial and societal improvements that AI offers to start with.

This article could not be more ignorant or lacking in self-awareness if it tried. If significant chunks of the population are denied work and integration with the community, that is a huge productivity, consumer and wealth hit to a nation, and in the countries in the article the opportunities, the effective allowed productivity and the wealth are very much determined by birth, where many are literally treated as slaves within the nation. Sure, slavery is cost-effective for businesses if you don't maintain a healthy or basic standard of living for the slaves, but let's not kid ourselves that it is sound practice or works long term for the population. Using population data designed to target people and restrict human rights is neither a good thing nor a smart financial decision, and it creates highly flawed AI models equally designed to limit a nation's potential. It harms a nation more to cripple its population's productivity and consumer base. With China, much of the population is much poorer than in most OECD nations, so they rely heavily on being exporters to other nations. With the UAE, already half the population is restricted in work and consumer access; it is really bad when you already cripple your nation's productivity by 50% and rely on the luck of a finite resource beneath the ground, a recipe for future strife.
