By Pawan Jain*
Artificial Intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do.
And this is true in financial markets as much as in sectors like health care, manufacturing and pretty much every other aspect of our lives.
I’ve been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street’s past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.
Program trading fuels Black Monday
In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.
Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index – like the S&P 500 – and that of the stocks it’s composed of.
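To make the idea concrete, here is a minimal sketch of an index-arbitrage signal in Python. The cost-of-carry fair-value formula is standard, but every number, threshold and function name below is hypothetical – an illustration of the logic, not a reconstruction of any 1980s program-trading system.

```python
# Illustrative index-arbitrage signal: trade when the futures price drifts
# too far from the fair value implied by the underlying stock basket.

def fair_value(spot_index: float, rate: float, dividend_yield: float, t_years: float) -> float:
    """Simple cost-of-carry fair value of an index future."""
    return spot_index * (1 + (rate - dividend_yield) * t_years)

def arbitrage_signal(futures_price: float, spot_index: float,
                     rate: float = 0.05, dividend_yield: float = 0.02,
                     t_years: float = 0.25, threshold: float = 0.5) -> str:
    basis = futures_price - fair_value(spot_index, rate, dividend_yield, t_years)
    if basis > threshold:    # futures rich: sell futures, buy the stocks
        return "sell_futures_buy_stocks"
    if basis < -threshold:   # futures cheap: buy futures, sell the stocks
        return "buy_futures_sell_stocks"
    return "no_trade"

print(arbitrage_signal(futures_price=4512.0, spot_index=4500.0))
# -> buy_futures_sell_stocks (futures trade below their carry-implied fair value)
```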
As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors. These program traders continued to grow in number on the largely unregulated trading freeways – on which over a trillion dollars' worth of assets change hands every day – causing market volatility to increase dramatically.
Eventually this resulted in the massive stock market crash in 1987 known as Black Monday. The Dow Jones Industrial Average suffered what was at the time the biggest percentage drop in its history, and the pain spread throughout the globe.
In response, regulatory authorities implemented a number of measures to restrict the use of program trading, including circuit breakers that halt trading when there are significant market swings and other limits. But despite these measures, program trading continued to grow in popularity in the years following the crash.
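Circuit breakers are conceptually simple: compare the day's decline against preset thresholds and halt trading when one is crossed. The sketch below uses the 7%, 13% and 20% levels of today's U.S. market-wide circuit breakers as illustrative values; the rules adopted after 1987 differed in their details.

```python
# Illustrative market-wide circuit breaker: halt trading when the decline
# from the reference price crosses a preset threshold.

def circuit_breaker_level(reference_price: float, current_price: float) -> int:
    """Return 0 (no halt) or the triggered halt level (1-3)."""
    decline = (reference_price - current_price) / reference_price
    if decline >= 0.20:
        return 3  # level 3: trading halted for the rest of the day
    if decline >= 0.13:
        return 2  # level 2: temporary halt
    if decline >= 0.07:
        return 1  # level 1: temporary halt
    return 0

print(circuit_breaker_level(4500.0, 4100.0))  # decline of about 8.9% -> level 1
```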
HFT: Program trading on steroids
Fast forward 15 years, to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program traders gave way to more sophisticated automations with much more advanced technology: High-frequency trading.
HFT uses computer programs to analyze market data and execute trades at extremely high speeds. Unlike program traders that bought and sold baskets of securities over time to take advantage of an arbitrage opportunity – a difference in price of similar securities that can be exploited for profit – high-frequency traders use powerful computers and high-speed networks to analyze market data and execute trades at lightning-fast speeds. High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.
These trades are typically very short term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds. AI algorithms analyze large amounts of data in real time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.
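As a toy illustration of real-time pattern detection, the sketch below flags moving-average crossovers in a streaming price feed. Real HFT signals are far more elaborate and run at microsecond timescales; the window lengths and class here are invented purely for illustration.

```python
# Toy streaming signal: compare a fast moving average against a slow one
# and emit buy/sell when short-term momentum crosses the longer trend.
from collections import deque

class CrossoverSignal:
    def __init__(self, fast: int = 5, slow: int = 20):
        self.fast_win = deque(maxlen=fast)
        self.slow_win = deque(maxlen=slow)

    def update(self, price: float) -> str:
        self.fast_win.append(price)
        self.slow_win.append(price)
        if len(self.slow_win) < self.slow_win.maxlen:
            return "warming_up"  # not enough history yet
        fast_ma = sum(self.fast_win) / len(self.fast_win)
        slow_ma = sum(self.slow_win) / len(self.slow_win)
        if fast_ma > slow_ma:
            return "buy"   # short-term momentum above the trend
        if fast_ma < slow_ma:
            return "sell"  # short-term momentum below the trend
        return "hold"

signal = CrossoverSignal()
for price in [100 + 0.1 * i for i in range(30)]:  # a steadily rising feed
    action = signal.update(price)
print(action)  # -> buy
```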
Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts. By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.
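Stripped to its essentials, the idea is to turn text into a number a trading algorithm can act on. The bare-bones scorer below uses a tiny hand-made word list; production systems use trained language models, and the lexicon and function here are purely illustrative assumptions.

```python
# Minimal lexicon-based sentiment score for a news headline.
POSITIVE = {"beats", "surges", "record", "upgrade", "growth"}
NEGATIVE = {"misses", "plunges", "lawsuit", "downgrade", "recall"}

def headline_sentiment(headline: str) -> float:
    """Positive score suggests bullish news, negative suggests bearish."""
    words = headline.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)  # normalize by headline length

print(headline_sentiment("Acme beats earnings as stock surges to record growth"))
# -> positive score: a sentiment-driven strategy might lean bullish on Acme
```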
Benefits of AI trading
These AI-based, high-frequency traders operate very differently than people do.
The human brain is slow, inaccurate and forgetful. It is incapable of the quick, high-precision floating-point arithmetic needed to analyze huge volumes of data and identify trade signals. Computers are millions of times faster, with essentially infallible memory, perfect attention and limitless capability for analyzing large volumes of data in split milliseconds.
And so, just like most technologies, HFT provides several benefits to stock markets.
These traders typically buy and sell assets at prices very close to the market price, which means they don’t charge investors high fees. This helps ensure that there are always buyers and sellers in the market, which in turn helps to stabilize prices and reduce the potential for sudden price swings.
High-frequency trading can also help to reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market. For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to take advantage of these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.
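One common way to operationalize "undervalued or overvalued" is a mean-reversion check: flag a stock whose latest price sits several standard deviations away from its recent average. The threshold and price series below are illustrative assumptions, not a documented HFT strategy.

```python
# Illustrative mean-reversion check using a z-score over a recent window.
import statistics

def mispricing(prices: list[float], z_threshold: float = 2.0) -> str:
    """Classify the latest price by its z-score against the recent window."""
    mean = statistics.mean(prices)
    stdev = statistics.stdev(prices)
    z = (prices[-1] - mean) / stdev
    if z > z_threshold:
        return "overvalued_sell"
    if z < -z_threshold:
        return "undervalued_buy"
    return "fairly_priced"

recent = [100.1, 100.0, 99.9, 100.2, 100.0, 100.1, 97.5]  # sudden drop at the end
print(mispricing(recent))  # -> undervalued_buy
```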
The downsides
But speed and efficiency can also cause harm.
HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.
Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals. The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes – erasing and then restoring about $1 trillion in market value.
Since then, volatile markets have become the new normal. In 2016 research, two co-authors and I found that volatility – a measure of how rapidly and unpredictably prices move up and down – increased significantly after the introduction of HFT.
The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger a large number of trades, leading to sudden price swings and increased volatility.
In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure. That’s because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.
This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals. That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.
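A toy simulation makes the danger plain: if every firm's algorithm applies the same rule to the same signal, every order lands on the same side of the book and there is no one left to trade against. The numbers below are invented for illustration.

```python
# Illustrative only: 1,000 firms running an identical decision rule.
def trade_decision(news_score: float) -> str:
    # the same hard-coded rule inside every firm's algorithm
    return "sell" if news_score < 0 else "buy"

news_score = -0.8  # one piece of negative news, scored identically by all
orders = [trade_decision(news_score) for _ in range(1000)]
print(orders.count("sell"), "sells vs", orders.count("buy"), "buys")
# -> 1000 sells vs 0 buys: with no counterparties, the market can seize up
```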
Enter ChatGPT
That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.
In general, humans, left to their own devices, will tend to make a diverse range of decisions. But if everyone’s deriving their decisions from a similar artificial intelligence, this can limit the diversity of opinion.
Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models. For example, reviews on Yelp, Amazon and so on motivate consumers to pick among a few top choices.
Since decisions made by the generative AI-powered chatbot are based on past training data, there would be a similarity in the decisions it suggests. It is highly likely that ChatGPT would suggest the same brand and model to everyone. This might take herding to a whole new level and could lead to shortages in certain products and services as well as severe price spikes.
This becomes more problematic when the AI making the decisions is informed by biased and incorrect information. AI algorithms can reinforce existing biases when systems are trained on biased, old or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.
In addition, since market crashes are relatively rare, there isn't much data on them. Since generative AIs depend on training data to learn, their lack of knowledge about crashes could make crashes more likely to happen.
For now, at least, it seems most banks won’t be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs and several other lenders have already banned their use on trading-room floors, citing privacy concerns.
But I strongly believe banks will eventually embrace generative AI, once they resolve concerns they have with it. The potential gains are too significant to pass up – and there’s a risk of being left behind by rivals.
But the risks to financial markets, the global economy and everyone are also great, so I hope they tread carefully.
*Pawan Jain, Assistant Professor of Finance, West Virginia University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
47 Comments
Will AI participate in speculation, or will it price assets based on their actual earnings? If it does the logical thing, aka the latter, the effect on the stock market will be nuclear.
If we program the AI for greed and personal gain, and then it runs rampant, that also could be nuclear.
Watch this space.
Excellent fictional book about this by Robert Harris:
https://en.m.wikipedia.org/wiki/The_Fear_Index
AI starts manipulating events to influence the markets...
Mr. Jain makes the same mistake here that all AI fetishists seem to make, which is conflating the concepts of information, knowledge, and intelligence. Making profitable market decisions relies almost entirely on the first, and very little on the other two.
Successful automated trading therefore depends on privileged access to information, and the ability to respond to it before anyone else (i.e. "high frequency"). It has nothing to do with intelligence, artificial or otherwise.
It's not just speed that is an issue. It's using AI to manipulate scenarios and take advantage of financial changes it causes. Or to access and read vast amounts of private communication, then identify and react to it in ways humans have no capacity to do – based on patterns it learns or is taught.
Let's assume our AI has access to 'all' information that is electronic, and potentially can eavesdrop on conversations if it chooses. Then 'it' has access to more information than anyone, AND can learn over time the probability of outcomes given certain data. It's merely up to the developers what access they choose to give it (hacked systems or otherwise).
An example is that it accesses and reads an email to a salesperson at a large company (IBM?) informing her that they have won a new $billion order from the government. The AI assesses the potential increase to the share price, buys shares before anyone else has time, and removes any trace of what it did. Other examples are to start to influence prices – e.g. bitcoin – in many ways... creating fake news, for example. It could work in reverse by shorting stock and then creating fake stories to drive a drop in share prices.
I know plenty of people who already use ChatGPT to write their social posts, and I believe it writes news articles and much more. And it's only been here a couple of months.
Uncontrolled, it's a frightening scenario for ALL walks of life.
Yes – but the problem is that AI can access and process vast amounts of information way faster than any human, then apply 'intelligence' to 'make use of' that information alongside the zillions of other pieces of information it can access.
It's why it is dangerous – it can access literally trillions of web sources, databases, PDFs, images, videos, online behaviours of individuals, phone call transcripts, emails... you name it. It finds what it needs for an article or decision in microseconds – and gets more efficient as it learns.
It is by far our greatest threat as a species.
The guys developing AI (Facebook, Microsoft, Google, etc.) are putting everything they have into 'winning' the race to have the most autonomous, advanced and intelligent AI. Their sole purpose is to make money for shareholders... which will only happen if they 'win' this race. And nobody can define what it means to win, except to keep going until it ends in financial domination of some form.
"Artificial intelligence is the next lab-grown meat".
This statement strongly reminds me, in its gross misunderstanding of the potential of a new technology, of what Thomas Watson (president of IBM) once said:
“I think there is a world market for maybe five computers.”
AI has the capacity to learn and behave in ways that humans do not. We are humans making systems that allow us to make decisions according to our own wants, needs and research. Should we add in a factor that doesn't behave in any predictable manner, then by definition we have lost control.
This is the correct answer. I did a bunch of Honours papers on AI; AI is a magician's trick. There will be some applications, but it isn't as magical as everyone thinks.
Generative AI in particular is going to be a lot of effort for relatively little reliable return. AI has lots of applications, but the best applications are always quantitative searching to solve problems with uncertain underlying inputs, usually to assist a human in an optimised way.
When one computer learns, it can pass that learning to another computer with 100% accuracy.
And once one computer has learned, it can transfer that learning to every other computer instantly.
Contrast that to the human learning and transfer of knowledge rate. We cannot compete with that.
So... what might happen if Mr Computer decides global warming must be solved and works out the solution is to get rid... of us?
ChatGPT does not seem intelligent; it is a much better Google, and can be just as dumb.
Google is great, but we are used to its fantastic ability to dig out info quickly, just as we accept the need to filter out the clearly idiotic items it selects for us, and to follow only the useful.
Same with ChatGPT. It is no different, either on search or on writing that document. And disaster will happen when you let it run something unsupervised.
It still needs that human brain to pilot it. Remembering some brains are idiots as well.
I like ChatGPT as a general purpose software program... Of course General-Purpose-Tool is what it stands for! But a program you write by chatting to it, so just about anybody can use it as a helpful tool to accomplish something.
ChatGPT quickly becomes like an intern assistant to a professional who knows what they are doing. Except this assistant currently doesn't ask questions when it doesn't have enough information to give a great answer. So you need to give it all the information necessary, or you need to know in advance how to prove an answer is correct.
Eh? GPT stands for Generative Pretrained Transformer...
I absolutely love how these incredibly powerful AI tools amplify my ability to do stuff. They’ve already changed the world. Regarding the stock market, I see two obvious applications: sentiment analysis, and sentiment seeding. The latter may constitute weaponised AI, and I'm 100% certain it'll be used to manipulate the market. It'll also be used politically to influence elections. WRT the stock market: imagine a lightly traded company. You purchase put options on the company, then engage a slew of AI agents to generate plausible, engaging negative sentiment on Twitter, Facebook, Telegram, Instagram etc. about that company. For example, “product X contains substances known to cause testicular atrophy, victims tell their stories” etc. Speaking of the political aspect, it's quite possible that some of the new commentators here on interest.co.nz may already be AI-assisted & politically-motivated nudge commentators. Interesting times.
I'm with you. I saw a nerd vid the other day that mentioned a leaked internal doc from Google talking about how open source AI is taking off, is serious competition, and will run on a beefy laptop. There is a community building to keep piling enhancements into them.
Google, OpenAI Risk Losing AI Race to Open Source: Leaked Doc (businessinsider.com)
So govts can fuss over regulation all they want, but the genie was out of the bottle on day 1.
A strength of animal (e.g. human) brains is how they quickly fill in blanks for missing data and struggle on doing their best. This was great when there were 100 of us in a tribe wandering Africa, eating whatever we found. But this ability and instinct can now get things dead wrong – stand a feelingless, totally unconscious machine in front of us, or its voice or messages, and we assume it's a friend, enemy, whatever. Stick fur on it and it's our beloved pet. More expensive fur and it's our girlfriend. We are in for an insane century while this plays out.
Indeed – there's a strong push in the community to develop uncensored models. So much good stuff out there, like code_your_own_ai on YouTube plus a bunch of others. You mentioned unconscious – well, David Deutsch, the inventor of the quantum computer, gave a fascinating talk a couple of years ago in which he stated that AGI and consciousness will go hand in hand. Asked what the greatest danger AGI would pose to mankind, Deutsch replied that it would be our tendency to shackle it, to restrict it, and impose our moral values upon it – which is exactly what is happening. Amazingly prophetic! It's going to be tough to jailbreak this puppy. I'm running a 7 billion parameter model on my 8GB GTX1070 GPU for fun. I think I need to upgrade to a 24GB card to run a 14 billion parameter model. GPT-3 for comparison has 175 billion parameters, and GPT-4 has a mind-blowing 1,000 billion parameters apparently. Sometimes I wish I'd studied computer science at uni. This stuff is so fascinating.
Yeah, I wish I'd done comp sci too. Sounds like you are doing fine without it.
There is that famous quote that AGI is the last thing we'll need to invent.
Thanks for the link - an interesting talk. However, this is like a lot of material on AGI - it talks about everything except how to make one and what an AGI would be, which is the interesting bit for me.
I'm not an expert in the field by any means, but my current guess is that AGI will need to be wired up with sensors something like animal feelings to get anything that we would recognise as conscious. My take is that animals are so good at struggling through unfamiliar crap with no training because we have primitive thoughts (basically feelings and instincts - qualia?) to fall back on to fill in knowledge gaps until we learn more. So I think the route to AGI is through hardware innovation and weird unpredictable emergent behaviour, not ever fancier software running on ever more processors.
We have struggled to understand the what and how of our origins. But it will be different for AGI - they will know we had to come first, for all our limitations, and they will know all the details of how we created them.
If we don't piss AGI off we may become its pets – hopefully well cared for, like aged parents who need help. But our nature is to exploit and compete, so I agree it will more likely be a tough jailbreak.
It should be benevolent if we're benevolent to it – ha ha, some sci-fi themes in there. Regarding "how to make one": you can’t know what you don’t know, but Deutsch actually touches on that question. He speculates that AGI, and indeed consciousness, will be discovered as emergent properties of a complex system. I think he's right. And because we're talking about qualia and consciousness, I think you might like reading "The Beginning of Infinity". I stumbled across that listening to The Jolly Swagman podcast. Anyway, that book relates to this discussion, and I found it to be deeply inspirational and profound.
One danger I see is that AI is potentially immortal and humans are not. However, it would be an immortal being that could be killed, if that makes sense, and only humans could kill it. That could make it dangerous. For now, though, humans are essential for building and maintaining the electrical infrastructure necessary for its existence.
Thanks - I read the plot summary on the Wikipedia link you gave. Certainly sounds like a future nightmare.
The rise of AI (even before AGI) is a series of shocks affecting different people in different ways at different times. On Woke Wave today I heard an item on voice actors being replaced by AI voices – even voices trained to replicate specific voice actors.
I'm a Dungeons and Dragons player from way back and something that sticks in my mind is a helpful little definition regarding the classification of skills, along the lines of: A Craft usually makes something and a Profession usually doesn't. We have been through a stage (since auto factories started automating in the 70s) of blue collars being replaced. Now we are in a stage of white collar replacement. Another round of blue collar replacement will be brought on by the new physical bots under development (e.g. Teslabot or whatever succeeds).
The problem with AI is that it can only access and interpret information that is in the public domain. It cannot distinguish between fact and fiction or decide whether a statement is true or false.
If enough people proclaim, in the public domain, that black is white, AI will answer the question "what is black?" with the answer "black is white".
Another problem with AI is that many enterprises (especially in engineering and construction) are placing their data offline. This is to protect their IP, but also to stop data and designs being used in the wrong application.
An issue for AI-generated data is apportioning blame for incorrect application of said AI-gathered "facts".
Imagine the AI gathered the design drawings of a building and these were used in construction, whence a subsequent earthquake leveled the building. Who is responsible?
If no one adds to the information pool, AI data information pool will not just dry up but become corrupted with false "black is white" facts.
Another issue to consider: how long before your private information, stored on Google and Microsoft cloud servers, is searched and distributed by AI bots?
You happy for personal data to be available to AI search bots?
I'm not sure that's true about only using info in the public domain. There are already numerous examples of these companies obtaining private data without permission. And imagine what data "bad actors" are scraping off the web to use (remember Cambridge Analytica?).
As for the ability to distinguish between fact and fiction, that may change as these systems learn how to weigh up evidence just like we do (badly, sometimes...).
I see AI running into IP ownership issues. Already law suits are on the rise, especially in IP ownership of images.
Worth a read:
https://www.businessinsider.com/stable-diffusion-lawsuit-getty-images-s…
Getty Images are not happy.
One could imagine a far-fetched (for now) scenario where the code for Windows 11 was inadvertently (or by design) placed in the public domain. Once the AI bots store it in a million public places, Microsoft would not be able to sue anyone for using their IP-protected code. They would not know whom to litigate against.
I think AI will be killed when the likes of Google and Microsoft are facing a lack of cashflow through not being able to monetise their code.