Unsafe, inaccurate, abusable and expensive to develop: AI party's piping down as disappointment with the technology sets in

As the current artificial intelligence wave crests, negativity is setting in and scepticism over what the “change everything” technology will actually deliver is growing stronger. Even if you adjust for humanity’s understandable bias for thinking the worst, just in case it comes true, the number of Cassandras warning that AI is overblown and even unsafe easily outweighs the positive takes.

Some of the caution comes from unexpected quarters, like Google, the tech giant that has gone in boots and all with AI at every level, from handsets to cloud services, to shore up its business fortunes.

Researchers at Google’s DeepMind have published a thought-provoking paper which describes real-world misuses of generative AI (GenAI); this is the technology that can create text, digital images, videos and audio.

That’s because GenAI’s large language models (LLMs) are trained on massive amounts of human-generated data, which pattern-recognition algorithms running on powerful computer systems assemble into realistic renditions that can be eerily indistinguishable from what humans produce. This is the interactive form of AI that many people are exposed to through chatbots like Google Gemini, OpenAI ChatGPT, Microsoft Copilot and Anthropic Claude.
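For a rough sense of what that pattern recognition amounts to, the sketch below is a deliberately toy example; it is nothing like a production LLM and is not taken from any of the systems named above. It simply counts which word tends to follow which in a scrap of human-written text, then “generates” a continuation by replaying the most common patterns.

```python
# Toy illustration only: real LLMs are vastly more complex, but the core idea
# is similar - "generate" text by following statistical patterns found in
# human-written training data.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . the cat chased the dog ."
).split()

# Build a simple table of which word tends to follow which (bigram counts).
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def continue_phrase(word, length=5):
    """Greedily pick the most frequently observed next word, over and over."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_phrase("the"))  # stitches a plausible-looking phrase from the counts
```

Scale the same idea up to trillions of words, billions of parameters and far cleverer statistics, and you get something eerily fluent that is still, at bottom, a remix of what humans wrote.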

To nobody’s surprise, such advanced digital plagiarism of human works and characteristics has opened AI up to vast possibilities for abuse, as the Google researchers’ paper Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data shows.

The paper describes a plethora of tactics for impersonating real people and their work for malicious purposes online with AI, and it’s a must-read for anyone trying to understand where we are headed with the technology.

What stands out is that the abuse does not come from clever prompt hacking to get past AI guardrails, but from simply using the systems’ easily accessible capabilities, the DeepMind researchers note.

That is, it entails using AI as intended, with minimal technical expertise. It lets malicious people engage in online fraud, use sockpuppeting (yes, that’s a word) for amplification to manipulate opinion or falsely inflate popularity, impersonate celebrities in fake ads, create bogus websites to trick people into downloading malware, sharpen up phishing campaigns and much more.

Anyone working in the information security field will “head desk” while reading the Google DeepMind paper, wondering how they’ll be able to defend against machine generated attacks that will be launched at scale as threat actors adopt AI.

Companies aware that AI can go off the guardrails

Misuse of AI is definitely something that publicly traded tech companies such as Facebook’s parent company Meta, Microsoft, Google, Oracle and others are aware of, to the point that they have started adding AI threat scenarios to the risk sections of their mandatory investment disclosure documents.

Fun disclosure from Google: “Unintended consequences, uses or customization of our AI tools and systems may negatively affect human rights, privacy, employment or other social concerns." https://t.co/lyjWJjHLE6

— Alex Weprin (@alexweprin) March 28, 2024

Sometimes it’s not third-party threat actors that pose the AI risk, but the organisations building the technology themselves. Germany’s Large-scale Artificial Intelligence Open Network - LAION - is a non-profit that has assembled some of the most popular free datasets in the world, sponsored by AI companies such as Hugging Face and Stability AI.

LAION says datasets like LAION-5B, which holds 5.85 billion image-text pairs, “are simply indexes to the Internet” that link to pictures. United States-based Human Rights Watch (HRW) took a closer look and found that the dataset led to personal photos of Australian children being scraped off the web and used to train AI models.

HRW found that the LAION-5B dataset “contains links to identifiable photos of Australian children. Some children’s names are listed in the accompanying caption or the URL where the image is stored. In many cases, their identities are easily traceable, including information on when and where the child was at the time their photo was taken.”
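To see why an index of links and captions can still expose people, here is a minimal, purely illustrative sketch; the field names and the record below are invented for this example, not LAION’s actual schema. The point is that each entry stores no image at all, just a URL and the caption scraped alongside it, and those two strings between them can carry a child’s name, a school event and a year.

```python
# Purely illustrative: the field names and the record below are made up,
# not LAION's actual schema. An "index to the Internet" holds no pictures,
# only links and captions - and those alone can identify a person.
from dataclasses import dataclass

@dataclass
class ImageTextRecord:
    url: str      # where the picture lives on the open web
    caption: str  # the text found next to the picture when it was scraped

# A fictional record of the kind HRW describes: the caption and the URL
# together reveal a (made-up) child's name, school event and year.
record = ImageTextRecord(
    url="https://school.example/gallery/2016-sports-day/jane-doe.jpg",
    caption="Jane Doe, age 7, wins the sprint at sports day 2016",
)

def mentions_name(rec, names):
    """Crude check: does the caption or URL contain any of the given names?
    Real privacy auditing is far harder; this only illustrates the exposure."""
    text = (rec.caption + " " + rec.url).lower()
    return any(name.lower() in text for name in names)

print(mentions_name(record, ["Jane Doe"]))  # True
```

Leaving the images out of the dataset does not remove the exposure: anyone who downloads the index can fetch the pictures again, and the captions travel with them into whatever model is trained on the data.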

Such datasets could be misused by other tools to create deepfakes, putting children at risk, HRW technology researcher Hye Jung Han pointed out. Interest.co.nz asked Han if New Zealand children were featured in the data set too. Han said “I wouldn’t be surprised” but couldn’t confirm if this was the case as she didn’t check for NZ kids.

LAION is aware that its data can link to illegal content, and took LAION-5B down in December for a safety review.

More data, more power, more money, but for what?

AI requires a constant stream of new data to update the models that generate content, like a zombie insatiably hunting for fresh brains. Tech companies think that means they can just take it and then sell it back to us through GenAI service subscriptions via their proprietary billion-dollar cloud AI systems. 

Recently, Microsoft’s head of AI, Mustafa Suleyman, created a furore by claiming web content is “freeware” that companies can help themselves to. Needless to say, Suleyman might need to take a refresher course in intellectual property law and precedents before he broaches the AI data-gathering subject again.

So we have a technology that can be seriously abused, and one that can badly damage an organisation’s reputation. On top of that, AI might not even work as well as existing tools, because its output is an algorithmic guesstimate of what might appeal to users.

Twitter users were given an example of this when the well-known interface design tool Figma (which Adobe tried to buy, but was blocked from doing so by European Union regulators) showed off an AI feature that produced a weather app design reportedly near-identical to Apple’s own Weather app.

Investment bankers Goldman Sachs took aim at GenAI in its June 2024 Top of Mind research report, saying tech companies are set to spend over US$1 trillion in capital expenditure on the technology in coming years, “with so far little to show for it”.

Long story short, the experts Goldman Sachs spoke to think that AI is fine for simple coding tasks as it's been extensively trained on that kind of material. More complex tasks? Not so much. Also, AI technology costs too much to develop.

As Jim Covello, the head of global equity research at Goldman Sachs puts it: “What US$1 trillion problem will AI solve? Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I’ve witnessed in my thirty years of closely following the tech industry.”

Even the more AI-enthusiastic analysts at Goldman Sachs note that we have yet to work out what the technology’s killer application is. Plus, AI looks set to be held back by its enormous energy consumption and by chip shortages, and the assumption that throwing more hardware at the technology will improve it is itself far from certain.

AI isn’t actually that new, although you might think so given the massive hype of the last couple of years, which stems from the generative variant of the technology. That AI can be a very useful tool for specific applications is beyond doubt. However, with GenAI being the loose cannon it is - a black hole for investment and resources that doesn’t seem to yield much in the way of productivity gains or savings - people are starting to view the technology as a disappointment.

“Bullshit” even, as the University of Glasgow’s straight-talking researchers Michael Townsen Hicks, James Humphries and Joe Slater put it in their very accessible paper, ChatGPT is Bullshit. And yes, people are already asking if GenAI is a bubble waiting to burst.


27 Comments

We’re about to get flooded with an immense quantity of trash code, written by big accounting firms using LLMs. Everything is going to clog up and fall over around 2028. If you have good attention to detail and enjoy cleaning (and stacking bread), you should start learning to code right now.


Correct, this has got to be one of the dumbest bouts of market hyperventilation I’ve ever seen.

Corporate cringe at its worst and people lapped it up.

While I agree AI is a great skillset (I use it daily), the promise of AI has been grossly overhyped.

“AGI” 😂 


What a joke, those same big firms have been getting trash-tier development from outsourced humans for decades already.


It is clearly a foolproof business model.

First, secure a beneficial position in the [insert random tech hype] food-chain doing strategic M&As and beefing up your teams with expensive "experts" selling tech solutions.

Then flex your media muscle by unleashing your spokespeople to overwhelm the business community with fearmongering articles, op-eds, seminars, etc. Anyone out there not spending millions to jump on the latest tech bandwagon is going to meet their 'Kodak' or 'Blockbuster' moment very soon.


Digital media copyright lawsuits cost far more in legal fees and billions in settlements, or shut down companies even faster. Where AI failed is that it did not stop at plagiarizing dead and poor artists, but went after the ones whose work is owned by large publishing companies. Fair use does not cover AI use. Also, businesses cannot claim ownership & copyright of AI works, which is kind of a hindrance when that is your company's business product.


We’re about to get flooded with an immense quantity of trash code, written by big accounting firms using LLMs.

Unrelated but related. McKinsey and Bain are now generating massive fees from 'AI transformation' from clients who know nothing about why they need AI. 

 


I can believe it.

I’ve had to wade through some AI/digital transformation reports from Big 4 firms here, on behalf of clients.

All I can say is I’ve seen more insightful commentary scribbled on the inside of a public toilet wall. Costs a lot less too. 


Sadly yes, even when the exact code answer is provided to AI in its training data, it still fails around 80% of the time to perform the task adequately. In large code bases (for normal web service sites these can be hundreds of "pages" long), even a small amount of AI inclusion can balloon bugs and maintenance costs, magnifying companies' code debt exponentially. Most large multinational companies cannot even afford to check the updated library code bases they are using, and just include them without any checks regardless of the massive security threat they can pose. Anyone remember when log4j introduced a massive loophole as a 'feature' which no one needed and which went unnoticed, leaving many able to game systems for monetary benefit... just take that level of threat and multiply it by the thousands of dependencies and code files needed to run and operate any web service or application.

Then consider that most non-malicious bugs just end up unfixed, often producing faulty data which increases customer service human support time and legal costs (as well as increasing legal risks for companies). We then have a larger pool of both harmful and non-harmful human resource costs coming down the pipeline, and most companies are not prepared to increase customer service staff or IT maintenance & upgrade resources (a lot of the time this is zero once a project or service has been delivered).

If you are running code as a company, it pays to be on the ball technically and to triple-check sources. Always seek to ensure code is adequately tested before it goes to production. If you are a customer, don't wait for companies to get back to you about faults or to make corrections. Use the Privacy Act and the Disputes Tribunal early and as much as you can; legally that is the best option before things spiral and have knock-on effects on your life and accounts. E.g. Air NZ triple-charging customers for changing flights without actually changing the flights (the change was allowed under the ticket plan, but Air NZ's bugs processed the payment multiple times while failing to actually change the flights; Air NZ took months to resolve it, forcing customers to take on debt to pay their bills).

If any employee employed to do a job had a failure rate of greater than 50%, they would not be allowed to touch the code base for much longer.

 

 

 


AI has been around since the 50s. Any sort of intelligent software solution is already some level of AI. AI is a new name for something that existed already, but it now has a human face and you can talk to it with simple language and often get the wrong answers. AI is a fad that will soon be over (for most people).


Yes indeed. Souped-up NLP and machine learning. 


It's just ... 

Another Sili'Con' Valley Scam. Selling you shit you don't need.
The Gartner Hype-Cycle.

The Hype Cycle is the introduction of a 'new' technology: Trigger, IPO, Peak of Inflated Expectations, early investors bail & make a killing, then Disillusionment. Move on to the next scam.


Yeah exactly, just like mp3 players, smartphones, the internet, e-commerce, online news, texting, crypto.  AI is just another fad and silicon valley has never delivered anything that has embedded itself in our lifestyles.  

 

Better we ignore it than waste our time picking up a new skillset that will be forgotten about in a few years.

 

Glad you can see the light, not many like minded here.


Actually, for those with financial & tech knowledge it is more like Theranos, Amazon security robots (canned and bricked in less than a year), the dot com bust (see the large-scale financial crash), or Google Glass (actually most products and services by Google have been canned within a few years; in tech it is a running joke how short the gap is between Google launching a product and Google canning it), Meta's Reality Labs, etc etc. You see, there is a big difference between an actual technological development and repackaging something that has already existed for decades, slapping a huge marketing push & a hyper-inflated price tag on it, while it meets neither the market nor the functional desires of the main customer base. In many cases products just fail because they never existed, or could never perform what they were intended to, and were shown to be completely naked all along.

It's like you are completely ignorant of tech, because it is not as if Silicon Valley has never had companies perpetrate large-scale fraud and marketing ploys to scam investment, or large-scale failures that they brick before even the minimum warranty on products & services expires.


My dad is a retired civil engineer who told me last week that, back in the 80s, IBM was peddling the fear that "computers" would make scores of engineers redundant. Worse, this tech was touted as the beginning of the 4-day workweek for the lucky few in the industry who did have a job by the 90s.


What a bunch of negative Nellys y'all are. I think LLMs are jaw-droppingly amazing, and probably the most impressive tech innovation that I've ever seen. I'm inspired for several reasons: (1) the sheer capability of LLMs; (2) they reveal that artificial consciousness is within reach; (3) language itself may be the fabric of our conscious reality; and (4) artificial consciousness, when developed, will be an emergent property of a complex system. Admittedly, points 2, 3 and 4 were prophesied by deep thinkers years ago, but having them brought into such sharp focus is another thing.

If you're still feeling glum and uninspired, and you want an antidote to Monday-itis, then have a look at this YouTube video about the incredible size of our Milky Way galaxy.


So we live on a minute speck, unimaginable distances from anywhere. 

What has that to do with an investment bubble in totally unproductive shyte?

AI just does faster what we do already - but it's still garbage in, garbage out. Just faster. And, like squillions of hours of security-camera footage, nobody has the time to check it all.

Reminds me of HAL.


Sure, NVIDIA seems overpriced now, and I wouldn't touch it with a barge-pole (which could be a huge mistake, because you only ever know in retrospect).  None of that changes the fact that LLMs are a seriously cool innovation.  Re: minute speck - just trying to capture your imagination.


That is a great video.


I see you failed the Turing test.


I thought you might have learned your lesson after your total faux pas in a recent thread.

I guess it takes time with some. 


Constant approval of faulty reasoning by you, lack of ethics, open ignorance of basic science knowledge, abuse & discrimination against vulnerable groups, dishonesty, arrogance, narcissism... am I missing something, or are you finally going to learn some science, how you cannot go without basic medical practices, or how people have different lives to yours? All your targeting and heightened discrimination just to avoid taking the time to learn intro science all NZers should be aware of. The ball is in your court.

Also, your personal attack in defence of claiming that an abusive, discriminatory, highly environmentally damaging, stealing, fraudulent and incredibly faulty piece of old tech is all OK simply because the universe exists is really poor practice and a pretty sad lack of ethics on your part. Care for honesty and ethical approaches instead? It is easy: try to see the difference in answering the OP with a pertinent comment relevant to the topic they speak of; in this case the development of language models, and the mistake of inferring AI consciousness from LLMs sophisticated enough to fool humans (depending on which humans they pull from a pool of many).

If you are unaware of the relevance of the Turing test and its very obvious flaws, and of the large difference between chatbots and consciousness, the ball is in your court again.

Many humans fail the Turing test too, and fail to apply it. There is actually a significant flaw in the assumption that human conversation and speech is a complex task compared to consciousness, when in many cases it is barely more than generating text and sound repeated from the copied works of others with little modification. We get false positives and false negatives from the Turing test because it matters a great deal who is taking it and who it is being tested against. If you have never had a conversation truly outside of your culture, languages and bubble, it is easy to see how some people would make those mistakes. There is more recent work on understanding and defining consciousness in humans and animals, and on how we might measure it more accurately in machines, if you care to do some study.

Or you can just parrot the same bias in the face of reason to stay safe in the bubble. It is what you have just demonstrated, after all. Good one mate, prove me right in my lowering opinion of you. 


From my point of view, if you subtract out the vulnerable groups, then your first paragraph very well describes the modus operandi of neomarxist critical social justice theory advocates.  I think we need to define the word “science” though.  I have a Popperian view that science is (in 8 words) “The search for truth through conjecture and criticism”.  I suspect that your definition of science would involve the word “consensus”, and if so, I would wholeheartedly disagree with that definition.

It seems a little peculiar to me that the woke-brigade don’t seem to like AI.  It's as if they're modern Luddites.  Maybe they’re afraid that the irrationality of their positions would be laid bare by superintelligence.  Perhaps I’m reading too much into the small sample size here.


It's increased my productivity as a consultant. It's not perfect, but it gives me a great basis or framework to build on or refine, which really speeds up my work processes. It's also great for doing "discovery" and for finding the exact information I'm looking for if it exists on the net; it narrows it down and refines it into exactly what I'm looking for. If you're not using AI you're losing ground in terms of productivity on your competitors.


Sounds like you are using it in a “correctly productive” manner though.

I too have found AI helpful for expediting the creation of things like reports for clients. I will review the data/material at hand, dictate my thoughts (or write some short notes) and then have my trained up ChatGPT help to elaborate and expand in a language I would normally write in.

I’ve probably been able to increase my efficiency by a third doing this.

It has also helped me solve some really niggly issues for clients that would have cost significant (to them) sums to otherwise fix. For example a client using a very outdated eCommerce system needed a minor change made to how purchase values are tracked. Their old developer wanted the thick end of $10k just to diagnose and propose a solution. This isn’t even my area of expertise, but I knew enough to prompt ChatGPT to roll out a simple solution. Took me 30 minutes start to finish, and the client was over the moon with the outcome.

HOWEVER, there are too many firms just using AI to expedite the creation of digital trash. E.g. ‘write me a marketing plan’ or ‘create me a sales pitch’ or ‘write a series of blog posts on this complex topic that needs expert nuance’ … and it’s very transparent when this is being done. 

So I think it’s useful effectively as a sort of performance enhancement tool, but cannot replace human critical thinking and analysis (at least not yet).

 


Yey.

Go blockchain!


Yeah - gotta chew up that energy, doing something produc - 

oh....


Lavender, Gospel, Habsora - have these systems actually been any benefit at all? What happens when AI ends up on the battlefield?
