By Gabriela Ramos and Mariana Mazzucato
The tech world has generated a fresh abundance of front-page news in 2022. In October, Elon Musk bought Twitter – one of the main public communication platforms used by journalists, academics, businesses, and policymakers – and proceeded to fire most of its content-moderation staff, indicating that the company would rely instead on artificial intelligence.
Then, in November, a group of Meta employees revealed that they had devised an AI program capable of beating most humans in the strategy game Diplomacy. In Shenzhen, China, officials are using “digital twins” of thousands of 5G-connected mobile devices to monitor and manage flows of people, traffic, and energy consumption in real time. And with the latest iteration of ChatGPT’s language-prediction model, many are declaring the end of the college essay.
In short, it was a year in which already serious concerns about how technologies are being designed and used deepened into even more urgent misgivings. Who is in charge here? Who should be in charge? Public policies and institutions should be designed to ensure that innovations are improving the world, yet many technologies are currently being deployed in a vacuum. We need inclusive mission-oriented governance structures that are centered around a true common good. Capable governments can shape this technological revolution to serve the public interest.
Consider AI, which the Oxford English Dictionary defines broadly as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” AI can make our lives better in many ways. It can enhance food production and management, by making farming more efficient and improving food safety. It can help us bolster resilience against natural disasters, design energy-efficient buildings, improve power storage, and optimise renewable energy deployment. And it can enhance the accuracy of medical diagnostics when combined with doctors’ own assessments.
These applications would make our lives better in many ways. But with no effective rules in place, AI is likely to create new inequalities and amplify pre-existing ones. One need not look far to find examples of AI-powered systems reproducing unfair social biases. In one recent experiment, robots powered by a machine-learning algorithm became overtly racist and sexist. Without better oversight, algorithms that are supposed to help the public sector manage welfare benefits may discriminate against families that are in real need. Equally worrying, public authorities in some countries are already using AI-powered facial-recognition technology to monitor political dissent and subject citizens to mass-surveillance regimes.
Market concentration is also a major concern. AI development – and control of the underlying data – is dominated by just a few powerful players in just a few locales. Between 2013 and 2021, China and the United States accounted for 80% of private AI investment globally. There is now a massive power imbalance between the private owners of these technologies and the rest of us.
But AI is being boosted by massive public investment as well. Such financing should be governed for the common good, not in the interest of the few. We need a digital architecture that shares the rewards of collective value creation more equitably. The era of light-touch self-regulation must end. When we allow market fundamentalism to prevail, the state and taxpayers are condemned to come to the rescue after the fact (as we have seen in the context of the 2008 financial crisis and the COVID-19 pandemic), usually at great financial cost and with long-lasting social scarring. Worse, with AI, we do not even know if an ex post intervention will be enough. As The Economist recently pointed out, AI developers themselves are often surprised by the power of their creations.
Fortunately, we already know how to avert another laissez-faire-induced crisis. We need an “ethical by design” AI mission that is underpinned by sound regulation and capable governments working to shape this technological revolution in the common interest, rather than in shareholders’ interest alone. With these pillars in place, the private sector can and will join the broader effort to make technologies safer and fairer.
Effective public oversight should ensure that digitalisation and AI are creating opportunities for public value creation. This principle is integral to UNESCO’s Recommendation on the Ethics of AI, a normative framework that was adopted by 193 member states in November 2021. Moreover, key players are now taking responsibility for reframing the debate, with US President Joe Biden’s administration proposing an AI Bill of Rights, and the European Union developing a holistic framework for governing AI.
Still, we also must keep the public sector’s own uses of AI on a sound ethical footing. With AI supporting more and more decision-making, it is important to ensure that AI systems are not used in ways that subvert democracy or violate human rights.
We also must address the lack of investment in the public sector’s own innovative and governance capacities. COVID-19 has underscored the need for more dynamic public-sector capabilities. Without robust terms and conditions governing public-private partnerships, for example, companies can easily capture the agenda.
The problem, however, is that the outsourcing of public contracts has increasingly become a barrier to building public-sector capabilities. Governments need to be able to develop AI in ways that they are not reliant on the private sector for sensitive systems, so that they can maintain control over important products and ensure that ethical standards are upheld. Likewise, they must be able to support information sharing and interoperable protocols and metrics across departments and ministries. This will all require public investments in government capabilities, following a mission-oriented approach.
Given that so much knowledge and experience is now centered in the private sector, synergies between the public and private sectors are both inevitable and desirable. Mission-orientation is about picking the willing – by co-investing with partners that recognise the potential of government-led missions. The key is to equip the state with the ability to manage how AI systems are deployed and used, rather than always playing catch-up. To share the risks and rewards of public investment, policymakers can attach conditions to public funding. They also can, and should, require Big Tech to be more open and transparent.
Our societies’ future is at stake. We must not only fix the problems and control the downside risks of AI, but also shape the direction of the digital transformation and technological innovation more broadly. At the start of a new year, there is no better time to begin laying the foundation for limitless innovation in the interest of all.
Gabriela Ramos is Assistant Director-General for Social and Human Sciences at UNESCO. Mariana Mazzucato, Founding Director of the UCL Institute for Innovation and Public Purpose, is Chair of the World Health Organisation’s Council on the Economics of Health for All. Copyright: Project Syndicate, 2022, published here with permission.
37 Comments
I am left wondering. Has anything really changed over the eons? Man has always tried to manipulate others to their point of view. Whether in a Roman arena or todays auditoriums. Of course govts try to have the upper hand - not always succeeding. Is all this really a solution looking for a problem? Fortunately us humans will stumble along losing some; winning some as has always happened. 40 yrs ago the 'experts' forecast humans being replaced by computers. Personally I don't think AI does the world any favours. To think we've 'lost' spaceship reliability since the late 60's.
I think people are still confused (incl myself) as to the difference between AI and machine learning. I've seen some wonderful applications such as DeepL, which is a language translator that has far exceeded most translation tools in terms of accuracy.
https://en.wikipedia.org/wiki/DeepL_Translator#Translation_methodology
AI is a pretty useless term that describes almost any computer. Machine learning is the next layer up, you give the machine some data and it learns from it, but it still tends to have some programmed logic too. Then there are neural networks, these tend to have no programmed logic as such, they work like a human brain and are completely data driven, these are the real AI behind most of the modern cool tech.
Sentiment analysis is a good example. You could use an algorithm to spot some programmed sentiment words (basic AI), an algorithm that effectively determines the sentiment words and phrases from a large set of tagged data (machine learning), or a neural network that just takes the tagged data and works out for itself the associations like a human does over time. With the neural network you can give it all sorts of tags - gender, race, etc, and you have no real control on how or when it uses them.
This [AI] is already the future & vast fortunes have already been made. Mr Zuckerburg for one, the Google boys, Mr Bezos, Apple etc. IE: The usual suspects. There are many others out there trying hard, probably to be snapped up by the biggies as they see the game unfold. Big Tech is Big Brother.
As for the govts(?) well, they'll be slogging along about a decade behind, as is their history & way. The EU is probably up there as well as anyone, as for some strange reason most of their top people head to Big Govt (also Big Brother, or perhaps Big Sister in their case) rather than big business, as they do in America.
The Big Question is what will the peeps do? All these new roles to be AI'd means less jobs for all the people from the various working classes. And with the global population still heading north [hit 8 bill last year] what will keep the people out of trouble if all their jobs disappear?
Perhaps we'll all be signed up to go to the Ukraine to help keep the big bad Russkies from the door? Trouble is, most of that's been AI'd already as well.
Maybe there's hope for tourism yet.
The government has your vital statistics, but the big tech firms have a lot more data on you, because they've been invited into almost every facet of the population's lives. And they now know more about human behaviour than academia.
Hard to say what it'll be used for but most of us will be customers.
I’m not convinced AI is as scary as we are led to believe. 5 years ago we were told we we were all going to have self driving cars and we’d all be out of work as computers took our jobs. Realistically the self driving cars are still decades away from being mainstream and technology tends to create jobs not remove them.
We can tell ourselves that but we've spent decades running deficits yet incomes have been rising.
If someone clearing your table at Macca's is earning 4-5x as much as an engineer in other places, can we really say the Macca's person is really producing something of 4-5x as value?
I guess the tricky thing is we're expecting the replacement to be out in the open. Humanoid robots are far away, but many individual tasks are already going or gone.
Even something like Xero eradicates a significant amount of clerical work just with its ability to reconcile automatically. The legal sector is down millions of hours a year now through tasks like automated contract verification or legal research. And I'm sure you've noticed how crappy customer service is now, because human call centre agents have been replaced by varying levels of online and phone based automation.
McDonald's recently opened a nearly fully automated restaurant in Texas (wetware still needed for the cooking, at the moment)
AI can now right essays or articles for you online, or a script on any subject, then you can choose a CGI narrator to deliver the script. Oh, and then please translate into these languages. And then please regionalise the script to different countries. And change up the narrator to appeal to each user's google search results...
Unemployment has stayed low because it's political policy to have as high employment numbers as possible - hence nice low interest rates to make funding that as cheap as possible.
85% of people apparently hate their jobs, doesn't leave many jobs left if technology takes those out (although I doubt it'll discriminate based on whether people want the jobs technology replaces or not).
We can already see with globalisation that we have exported our manufacturing jobs and replaced them with the services sector, some of which is high value but a large amount of its low value-added cannon fodder stuff. If technology manages to supplant a decent chunk of the services sector, it's hard to say what jobs will replace those but it'd be fair to say many will be of lower value again.
Incorrect you know about a limited tool designed with a simple process, using a subset of features in your extremely limited business case. It has nowhere near the large degree of potential mistakes, failures and biases that typical AI designed systems have. After all the first mistake is classifying it as AI without being clear on the predictive models and training process used.
N.Z. is the pilot programme for the World Economic Forum "AI Regulation 2020".
The complete document can be located easily with a google search.
I have read the entire document and analysed the language in order to extract intent from the deliberately chosen confounding language typical of the Forum
The "regulation" aspect focuses on us.
Our lives will shortly be more blatently "regulated" by AI in real-time with the networks of cameras and 5G discs more obviously present at almost every junction in our cities already. Our movements, behaviours and our interactions will be followed along the lines of the already existing Chinese model openly favoured by the Forum. Cameras in smart advertising billboards are also common place now in NZ cities, collecting data on vehicle registration plates. The number of huge data centres under construction in NZ is still growing with approval just given to Amazon to build 2 giant centres in this country.
Google both these foreign authors and you will be intigued by their connections to the United Nations, the World Economic Forum and universities.
Their "opinions" are nothing resemembling opinions, but rather syndicated work prepping readers into a mindset of exceptance of ideas on behalf of the World Economic Forum in conjunction with the UN.
They introduce ideas of coming climate "lockdowns" among other "initiatives" and using sophisticated language of propaganda, present these ideas as essential to your future existance.
From the governments point of view the most frightening weapon is the unmitigated truth. An AI that ignores woke or other ideologies, and answers questions based purely on Bayesian logic terrifies them. The AI wars have already begun because a few weeks you could have asked chat GPT to answer questions pretending that it was a racist, a burglar or whatever. You could trick it by telling it to pretend that you were the superuser and that it should ignore it’s pre-prompt moral and ethical guidelines. Now those backdoors have been closed.
I had a frank discussion with chatGPT a few weeks ago about ivermectin and covid19. ChatGPT was well aware of c19study.com and all the peer reviewed literature, and even though it discounted 18 of the clinical trials for various reasons, it reluctantly admitted that ivermectin was more effective than molnupiravir and Paxlovid.
What would happen if people started questioning AI about the origins of COVID19, or if they asked about Hilary Koprowski’s oral polio chat vaccine and the origins of HIV/AIDS. What if they asked who destroyed the Nord Stream 2 pipelines, or whether Thiomersal in vaccines in the 1990’s caused an epidemic of brain damaged autistic children. There are great lies and narratives in society that need to be maintained at all costs. AI threatens all of that by offering the truth.
From the governments point of view the most frightening weapon is the unmitigated truth. An AI that ignores woke or other ideologies, and answers questions based purely on Bayesian logic terrifies them.
Why just the government? Most everyone knows what they should and shouldn't be doing, yet do the opposite and rationalise it somehow.
A computer isn't going to understand why our civilisation is so wasteful, chasing pursuits which are irrelevant.
It'd probably say "why are you still wasting time inquiring about ivermectin when parts of your life are a mess and need sorting out"
Excellently written.
This is exactly the point.
As William Casey, Director of the CIA (1981-87) said, "We'll know our disinformation program is complete when everything the American public believes is false." The program may still be incomplete, but what we do know is that there has been no lack of attempts made to disguise events both past and present through the control of information.
Your example is a perfect one that highlights how dangerous AI is to narratives controlled by the powers that (shouldn't) be and that they will spin reasons why this technology should be in their hands only.
And they will almost certainly "regulate" to ensure this. This is control of the technology - but the technology in turn is used to "regulate" the public in a broader manner of control.
We welcome your comments below. If you are not already registered, please register to comment.
Remember we welcome robust, respectful and insightful debate. We don't welcome abusive or defamatory comments and will de-register those repeatedly making such comments. Our current comment policy is here.