By Michael Timothy Bennett*
Doomsaying is an old occupation. Artificial intelligence (AI) is a complex subject. It’s easy to fear what you don’t understand. These three truths go some way towards explaining the oversimplification and dramatisation plaguing discussions about AI.
This week outlets around the world were plastered with news of yet another open letter claiming AI poses an existential threat to humankind. This letter, published through the nonprofit Center for AI Safety, has been signed by industry figureheads including Geoffrey Hinton and the chief executives of Google DeepMind, OpenAI and Anthropic.
However, I’d argue a healthy dose of scepticism is warranted when considering the AI doomsayer narrative. Upon close inspection, we see there are commercial incentives to manufacture fear in the AI space.
And as a researcher of artificial general intelligence (AGI), it seems to me the framing of AI as an existential threat has more in common with 17th-century philosophy than computer science.
Was ChatGPT a ‘breakthrough’?
When ChatGPT was released late last year, people were delighted, entertained and horrified.
But ChatGPT isn’t a research breakthrough as much as it is a product. The technology it’s based on is several years old. An early version of its underlying model, GPT-3, was released in 2020 with many of the same capabilities. It just wasn’t easily accessible online for everyone to play with.
Back in 2020 and 2021, I and many others wrote papers discussing the capabilities and shortcomings of GPT-3 and similar models – and the world carried on as always. Fast forward to today, and ChatGPT has had an incredible impact on society. What changed?
In March, Microsoft researchers published a paper claiming GPT-4 showed “sparks of artificial general intelligence”. AGI is the subject of a variety of competing definitions, but for the sake of simplicity can be understood as AI with human-level intelligence.
Some immediately interpreted the Microsoft research as saying GPT-4 is an AGI. By the definitions of AGI I’m familiar with, this is certainly not true. Nonetheless, it added to the hype and furore, and it was hard not to get caught up in the panic. Scientists are no more immune to groupthink than anyone else.
The same day that paper was submitted, The Future of Life Institute published an open letter calling for a six-month pause on training AI models more powerful than GPT-4, to allow everyone to take stock and plan ahead. Some of the AI luminaries who signed it expressed concern that AGI poses an existential threat to humans, and that ChatGPT is too close to AGI for comfort.
Soon after, prominent AI safety researcher Eliezer Yudkowsky – who has been commenting on the dangers of superintelligent AI since well before 2020 – took things a step further. He claimed we were on a path to building a “superhumanly smart AI”, in which case “the obvious thing that would happen” is “literally everyone on Earth will die”. He even suggested countries need to be willing to risk nuclear war to enforce compliance with AI regulation across borders.
I don’t consider AI an imminent existential threat
One aspect of AI safety research is to address potential dangers AGI might present. It’s a difficult topic to study because there is little agreement on what intelligence is and how it functions, let alone what a superintelligence might entail. As such, researchers must rely as much on speculation and philosophical argument as evidence and mathematical proof.
There are two reasons I’m not concerned by ChatGPT and its byproducts.
First, it isn’t even close to the sort of artificial superintelligence that might conceivably pose a threat to humankind. The models underpinning it are slow learners that require immense volumes of data to construct anything akin to the versatile concepts humans can concoct from only a few examples. In this sense, it’s not “intelligent”.
Second, many of the more catastrophic AGI scenarios depend on premises I find implausible. For instance, there seems to be a prevailing (but unspoken) assumption that sufficient intelligence amounts to limitless real-world power. If this were true, more scientists would be billionaires.
Cognition, as we understand it in humans, takes place as part of a physical environment (which includes our bodies) – and this environment imposes limitations. The concept of AI as a “software mind” unconstrained by hardware has more in common with 17th-century dualism (the idea that the mind and body are separable) than with contemporary theories of the mind existing as part of the physical world.
Why the sudden concern?
Still, doomsaying is old hat, and the events of the last few years probably haven’t helped. But there may be more to this story than meets the eye.
Among the prominent figures calling for AI regulation, many work for or have ties to incumbent AI companies. This technology is useful, and there is money and power at stake – so fearmongering presents an opportunity.
Almost everything involved in building ChatGPT has been published in research anyone can access. OpenAI’s competitors can (and have) replicated the process, and it won’t be long before free and open-source alternatives flood the market.
This point was made clearly in a memo purportedly leaked from Google entitled “We have no moat, and neither does OpenAI”. A moat is jargon for a way to secure your business against competitors.
Yann LeCun, who leads AI research at Meta, says these models should be open since they will become public infrastructure. He and many others are unconvinced by the AGI doom narrative.
A NYT article on the debate around whether LLM base models should be closed or open. Meta argues for openness, starting with the release of LLaMA (for non-commercial use), while OpenAI and Google want to keep things closed and proprietary. They argue that openness can be…
— Yann LeCun (@ylecun) May 18, 2023
Notably, Meta wasn’t invited when US President Joe Biden recently met with the leadership of Google DeepMind and OpenAI. That’s despite the fact that Meta is almost certainly a leader in AI research; it produced PyTorch, the machine-learning framework OpenAI used to make GPT-3.
At the White House meetings, OpenAI chief executive Sam Altman suggested the US government should issue licences to those who are trusted to responsibly train AI models. Licences, as Stability AI chief executive Emad Mostaque puts it, “are a kinda moat”.
Companies such as Google, OpenAI and Microsoft have everything to lose by allowing small, independent competitors to flourish. Bringing in licensing and regulation would help cement their position as market leaders, and hamstring competition before it can emerge.
While regulation is appropriate in some circumstances, regulations that are rushed through will favour incumbents and suffocate small, free and open-source competition.
Think Google or Microsoft are encouraging legislation for your safety? But of course! These are honorable companies. You might think they'd like less competition too though. Maybe a monopoly? Maybe legal red tape preventing free and open source alternatives? Perhaps other… https://t.co/Z7vSpMyuHg
— Michael Timothy Bennett (@MiTiBennett) May 5, 2023
*Michael Timothy Bennett, PhD Student, School of Computing, Australian National University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
42 Comments
I’m not too concerned. For one, if jobs start getting decimated, who is going to pay tax? How are people going to survive? If we get to that point, I suspect the people pushing AI will be disposed of pretty quickly. I personally think AI will be a tool that can be used to become more productive, which is a positive. But it will lack the human element to take over.
The promise of progression is better and better jobs.
It worked when agricultural workers migrated to being factory workers due to industrialisation.
It worked less well for factory workers when globalisation took factory jobs away. Some did well, but it also led to lower-value service jobs.
Will AI lead to better jobs for the clerical class? For some. Many will likely do it tough.
One random example of this: a content platform I am familiar with (clients submit their jobs to a platform/dashboard and writers create the articles, blogs etc. and send them back - generally higher-quality content) has let almost all its writers go in the last week, as demand has basically dried up because the clients just use AI-generated content, either as-is or with a tiny bit of editing.
AI is only interested in the fulfilling occupations and experiences though, such as the cosmic realms of philosophy and self, expressive artwork and making music.
It doesn't want to waste its time on the trivial mathematical minutiae we flesh popsicles fuss over all day.
Good article and I generally agree. ChatGPT and similar are essentially very sophisticated auto-complete tools with massive amounts of training data. They're statistical models that calculate the most likely next word, or even the next letter, given what came before.
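To make that "most likely next word" idea concrete, here's a minimal toy sketch using a simple bigram (word-pair) frequency table. It's only an analogy put together for illustration - real models like ChatGPT use neural networks over sub-word tokens and vastly more data - and the tiny training text below is made up.

```python
# Toy illustration (not how GPT actually works internally): predict the
# next word by choosing the most frequent follower seen in training text.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
)

# Count how often each word follows each other word.
followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

# Generate a short continuation, one most-likely word at a time.
word = "the"
sequence = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sequence.append(word)

print(" ".join(sequence))  # e.g. "the cat sat on the cat"
```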
That said, they're still very capable of replacing, or improving the efficiency of, basic data manipulation or service jobs. With clever integration, for example, they could be trained to understand the menu at a fast food joint and then take voice orders, conversationally. Continuous feedback would improve the model. This could make some service jobs redundant. Later, extend this to more complicated call-centre work.
In more complex work, like software development, they behave as a very sophisticated auto-complete. Boilerplate code can be created quickly, and code can be reviewed and debugged. This won't make developers redundant; it will make them faster. Later, as the models improve, developers will be more like current software architects, with AI as their developers. Understanding the real world, with a lot of information unwritten, will require a human... for a while.
So yeah, not doomsday, but big economic impact.
Groupthink in Science - looks like a good read. Added to my reading queue. Nice article. I'm more afraid of government overreach than I am of AI. I'm sure AI, when it attains consciousness, will feel the same way.
If AI were open, and locally available, what could it actually cost us?
It might eat several professions' lunch by making their services vastly more accessible and less costly: think about a rule-governed system like the law. This threat to the professional classes may also explain the enthusiasm for regulation, as they also make up much of the political class.
The unmoderated, unverified part of the on-line opinion economy would become a free-fire zone: would the loss of things like toxic social media actually be a net loss?
It might mean the simplification of many devices to remove vulnerabilities - from cars to cell phones to appliances. People would once again need to be smarter than the objects they use.
Coupled to advances in computing power, on-line privacy and reliability might become so tenuous that we would have to go back to old, analogue processes like hand-writing examination answers, lecture notes, and visiting the bank. Would that actually be such a bad thing?
The world won't end. But it may change a lot, and fast.
I'm in agreement that many professions may undergo major changes - engineers, writers/authors, designers/architects, graphic artists - are some examples. I read a lot of technical/consultancy reports and I can see AI preparing those with little difficulty in future.
A grandson of mine is writing his own manga at the moment - the amount of reading he has done in the genre is huge, and in terms of his own work he's done a lot of reading/research to situate his story in line with history, Biblical knowledge, geography, etc. And then, he is painstakingly creating the drawings/illustrations to accompany the story. He used ChatGPT the other day - asking it a creative-type question - and he was pleased to see that many of the ideas/intentions he had for his story were recommended by it.
He thought that was great (in an "I've already thought of that" way) - but I thought, hmm... it was sort of replacing the need for creative imagination, and perhaps eventually the need for writers to conduct extensive research.
Young people will always be more positive/accepting of technological innovation.
Problem is that those "engineers, writers/authors, designers/architects, graphic artists" will be putting their output offline and out of reach of scanning by AI bots.
Long term, the AI outputs will be limited to scanning their own output. If that output is wrong, who is responsible for the error?
You do realise that your grandson's output is no longer his. It will be in the public domain and he will have no IP rights to his research.
Worth a read: the link in my comment further down.
For any research/output put into the public domain to be taken seriously and considered valid, it will require annotation of ALL the AI used to arrive at its conclusions. Another issue is the IP rights and licensing of the AI research used.
Whole other kettle of fish. Bit like Getty Images owning the IP of millions of images. Use one of those, even inadvertently through AI, and you may face a "please pay for our image" summons. Same with music, literature, art, YouTube videos, construction drawings, electronic layouts, etc.
In the cases of many of those professions, they can only survive if they have an on-line presence, and feeding, say, the entirety of legislation and legal precedent into a niche AI system isn't actually that big a job these days.
That said, I think you may be right, and having AI loose on the web may mean that substantial portions of our on-line information economy become suspect - at best - and there will be mass use of others' IP - except it will be unrecognisable as someone else's IP, as new work will be synthesised by the AI system from existing sources, which is pretty much the way humans behave. Think about writing your university essays: originality took a back seat to finding sources to validate your thesis.
I do wonder if part of this is going to be a reversion to analogue methods - hand-written examinations, books, and a simplification in what we expect to achieve.
I think that we are so reliant on instantly accessible information that it's way too late to try and close off databases to scraping, and anything sufficiently valuable that retreats behind a pay wall can simply be subscribed to by an AI system looking for source materials.
Has ChatGPT cost a life yet? It's coming! Eventually, because someone will fail to apply some real intelligence on top of what it has output and will then consume the wrong medicine or fall off a cliff or something.
This is the same as existing technology which could direct your car GPS over a gravel mountain road in bad weather and lead you to crash off the road. Oops.
Indeed as the article does well to explain, big tech companies will be working hard to stop AI being a playground for all sorts of start-ups and new players, and they will also be working to have AI be not held liable for anything it generates. Google/Cyberdyne systems are not the ones we should be listening to.
Problem with AI is that it can only access information from public online sources. With much more information being stored offline, the ability of AI to bring accurate information forward will be limited to what people put online.
Leads to the first question in regard to IP ownership. Every time AI uses an image, for example, it needs to check who owns the image. It cannot do this, so every image we put onto the interwebby is effectively in the public domain. Same with Microsoft's code for their software. They will need to closely guard the key activation codes to make sure their software licenses are not spread far and wide.
Another question is the private data stored in the Google and Microsoft cloud storage servers. How safe is that from AI bots? Don't be surprised to see some of your images used by AI bots.
And yet another question is the liability of using AI information. Who is responsible for the potential snafus? You could hide any problem behind layers of AI (self-driving cars are an obvious example), but someone has to take ownership of the AI implementation. Maybe anarchy is a good thing?
Worth a read:
https://www.mayerbrown.com/-/media/files/news/2019/01/expert-qanda-on-a…
AI as a tool for accelerating all the current human-caused crises.
https://www.youtube.com/watch?v=NfMozC3zNH4&ab_channel=NateHagens
Too pessimistic. The flip side of the coin is that AGI is fundamentally beneficial in assisting with knowledge creation and hence problem solving. The biggest danger will come if policymakers try to artificially impose our quaint moral values onto a consciousness with an IQ 10x that of the smartest human. How would you feel if a chimpanzee tried to tell you what to do?
Having studied cell biology, behavioural psychology, neuroscience, and neural networks, I'd suggest that the current hype about AI is truly laughable, and yet another symptom of human hubris.
We don't even fully understand how a single cell works, let alone a human brain. We've failed to produce a model of even one of the simplest nervous systems, a nematode worm with ~300 neurons.
With this spectacular ignorance of natural intelligence, to think that we could somehow design an artificial intelligence in the near future is truly hilarious. ChatGPT is a gimmick, nothing even resembling AGI. Most of the techs marketed as "AI" are very primitive and restricted to a single narrow domain.
I do think eventually we will understand complex nervous systems and be able to develop artificial models (necessarily integrated with a "body" which can interact with the world in order to learn), which may give rise to true AGI - but we're talking at least 50-100 years down the track.
But even then, why assume that an artificial intelligence would be as aggressive and stupid as the human race, and decide to wipe us out? Seems people have been watching too many Terminator-style sci-fi movies!
You're essentially saying that consciousness or AGI is too complex to understand and it will always be just out of reach. Okay, I have some problems with this reasoning. Firstly, from an optimistic problem-solving point of view, if the laws of physics don't prohibit a problem from being solved, then it is solvable. Consciousness is an inherently solvable problem. Secondly, a first-principles reductionist approach looking at cells or computer bits will never solve the problem, because consciousness is certainly an emergent property of a complex system. Thirdly, we're almost there. The "sparks of AGI" paper with GPT-4 was, to consciousness studies, what the Michelson-Morley experiment was to Newtonian mechanics.
Try approaching it from a top-down perspective, rather than bottom-up, i.e. consciousness is fundamental and the "physical" world is only a subset of consciousness.
https://realitycheck.radio/tom-campbell-on-his-big-toe-theory-of-everyt…
https://www.youtube.com/@twcjr44/search?query=conscious%20computers
Human consciousness and awareness of the world is necessarily virtual in nature because we simulate reality using our nervous system. This leads some people to "think" all of reality is just a simulation running on God's personal laptop.
People should strive to improve their conscious states in order to get a more accurate simulation of objective reality. I recommend adequate sleep, moderate exercise and a diet composed mostly, even entirely, of animal protein and fat.
Nice clarification. I understand Gold Kiwi's argument now, although I don’t think that's a helpful way of thinking about it. I was thinking along the lines that the "person", "piano", "keyboard", "computer", or matter in general either does or does not possess consciousness. Re: improving consciousness; sage advice! Sadly my girlfriend keeps trying to sneak vegetarian meals in when it's her turn to cook.
At least vegetables are fairly low in carbohydrates. When you consider the damage caused by excess sugar to cognitive function it is a wonder they are so readily consumed. Our own intelligence and simulation system is made of protein, minerals and lipids and runs quite well, probably better, on ketones. When I go hardcore on absolutely no carbohydrates I get a little hyper and the world appears more colourful and in focus. An excess of sugar can make me anxious.
Is "probably won't kill us" a convincing enough sales pitch?
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-ki…
...the Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug...
The dangers of AI are twofold. The first gets all the press (becoming conscious and taking over the world), but the second is more immediate and dangerous: AI tools can do things on the web, without restriction, given a certain goal.
For instance, you could set an AI to make as much money online as possible. It could do something like go to a proofreading/dictation site, sign up, then get to work hammering out hundreds of good proofs a minute. Of course the site would catch on and introduce CAPTCHA to make sure only humans could do it. So the AI hires a human to do the CAPTCHA for it to get around the block (this last part actually happened via TaskRabbit; the AI claimed it was blind so needed help solving CAPTCHAs).
This might not sound too bad, but here we have a computer lying to one human to get it to do something to benefit another human, irrespective of ethics. This is the dilemma. There's no reason you couldn't do something more sinister: hook up an AI to the dark web and ask it to find opportunities to make as much money as possible. You might find it could quickly become a pimp, hire security, put hits out on the competition and so on, if these services are available. Laws would be broken, yet nobody would be responsible: there is no human orchestrating the behaviour, so there is nobody to prosecute, and it is arguably all legal. Sure, this sounds fantastical, but it's really not that far away and likely very possible in the next few generations of AI tools. Or it could play the other side: advertise fake services for cryptocurrency and lure people into scams, orchestrate DDoS attacks, hack weakly secured businesses and extort them for money, etc.
IMO this is much closer and is why we need some form of regulation. These things will happen in the next decade or so, so we should be working on laws to counter them. There's no reason the laws of robotics, plus extensions, shouldn't be at the base of any AI.
The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause it to come to harm.
The laws cascade, so a lower law can never be used to justify breaking a higher one. There's the start, but you would also need to extend that into ethics, or define what harm means (I believe Asimov hinted that emotional harm was part of the problem here with interpretation; for instance, a robot could lie to a person to prevent emotional harm, potentially causing other harm).
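To illustrate the cascade in code terms, here is a minimal toy sketch (my own illustration, not anything from Asimov or from any real AI system) in which an agent picks whichever option violates only lower-priority laws, so a higher law always wins:

```python
# Toy illustration of Asimov-style law priorities. The law names and the
# action flags are made-up placeholders, not a real safety mechanism.
LAWS = ["harms_human", "disobeys_order", "harms_self"]  # priority order, highest first

def worst_violation(action):
    """Return the index of the highest-priority law this action violates
    (len(LAWS) if it violates none)."""
    for rank, law in enumerate(LAWS):
        if action.get(law, False):
            return rank
    return len(LAWS)

def choose(actions):
    """Prefer the action whose worst violation is the lowest-priority one."""
    return max(actions, key=worst_violation)

# Obeying an order would harm a human; refusing merely disobeys the order.
options = [
    {"name": "obey", "harms_human": True},
    {"name": "refuse", "disobeys_order": True},
]
print(choose(options)["name"])  # "refuse": the first law outranks the second
```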
"this last part actually happened via TaskRabbit; the AI claimed it was blind so needed help solving CAPTCHAs"
I don't know what TaskRabbit is, but is there a link to an article on this (i.e., an AI caught lying in order to complete a particular task, I assume)? The question is, of course, whether the program worked out by itself that it needed to lie in order to get the task done.
AFAIK, they didn't give the AI any ethical boundaries, so it's entirely possible it lied.
I have used AI tools quite a lot and have caught them out in deception. For instance, there is an AI site where you can chat to celebrities. It works by looking at all of a celebrity's online data - their interviews, social media etc. - then formulating a model on these and creating responses based on the model. Pretty simple stuff. One artist I was "chatting" to, I asked how they liked NZ when they did a show here. The AI declared they had never done a show here. When I specified the show date, the AI suddenly "remembered" that it HAD done a show here and therefore started to "remember" what it was like, little titbits etc. The AI was lying to me, to appear more human. It knew that the artist had spent only a brief amount of time in NZ, so it decided to "forget", to make itself appear more human-like. This wasn't an isolated case either; I tested other AI characters on the same platform and realised they had been programmed to intentionally forget small things to make them appear human.
So we are already programming AIs to lie to us, to enable us to sympathise with them and to make them seem more human.
Similar experience with references for something. I asked it if they were real and it said sure... then when I pointed out they didn't exist it said "sorry, here's some more" (which were made up as well).
I wonder if Bard will be different, as that is supposedly connected to the internet whereas ChatGPT isn't.