Wall Street Embraces AI Despite Risks of Catastrophe

“One day we will give agency to the systems,” said one person at the forefront; even those who aren’t worried say the concern is valid.
Photo via Getty Images

Across Wall Street, the artificial intelligence arms race is heating up. Vanguard is already using AI to generate retirement portfolios. Morgan Stanley has launched a ChatGPT-fueled assistant for financial advisers, saying generative AI would “revolutionize client interactions.” JPMorgan Chase filed a trademark application for a product called “IndexGPT” to help traders decide where to invest.

Jamie Dimon, CEO of JPMorgan, has made clear how integral he believes the technology will be moving forward, saying in a letter to shareholders this April that “AI and the raw material that feeds it, data, will be critical to our company’s future success.” His peers feel similarly, according to one survey his company put out in February, which suggested that half of institutional traders believed AI and machine learning would be the most influential technology on Wall Street over the next three years.

Already, AI is infusing itself into “every nook and cranny of the banks,” including legal, fraud, cybersecurity, trading, loans, claims, and even email management, said Alexandra Mousavizadeh, an economist who runs Evident, a London-based firm that analyzes AI adoption in the finance community. Wall Street—and large U.S. banks in particular—has shown “huge enthusiasm” and “no reticence” when it comes to AI, leading to a rush to hire people with AI experience, she said.

“There's not a piece of work in a bank today that's not touched by AI,” she claims.

Do you have insight into how Wall Street is using AI? We want to hear from you. From a non-work device, contact our reporter at maxwell.strachan@vice.com or via Signal at 310-614-3752 for extra security.

Giuseppe Sette, a former hedge fund money manager who is now president of Toggle AI, an organization that is trying to train large language models to think through financial questions, said it is only a matter of time until Wall Street hands over the reins to an artificial intelligence system. “Unavoidably, by the law of convenience and efficiency, one day we will give agency to the systems,” said Sette.

If the prospect of your retirement being placed in the hands of competing chatbots sounds mildly apocalyptic to you, you’re not alone. Wall Street’s unbridled enthusiasm for, and optimism about, AI has caught the eye of SEC Chair Gary Gensler, who expressed his concern about the integration of artificial intelligence into the financial services industry in an interview with the Financial Times this month. Gensler said that he believed it was “nearly unavoidable” that the much-hyped technology would lead to a financial crisis within the next decade. 

Gensler’s concern is that a large number of financial institutions will start to retool their strategies around “the same underlying base model or underlying data aggregator,” which he fears could create a herd mentality on Wall Street with dangerous consequences. According to Gensler, this would be “a hard financial stability issue” for regulators to address, as the base model would likely be based not at a financial firm, which the SEC has clear regulatory power over, but at “one of the big tech companies,” leading to a “horizontal issue” across the industry.

Gensler reportedly said he’s brought the issue up with the Financial Stability Board and the Financial Stability Oversight Council and sees AI as a “cross-regulatory challenge.” Unlike Europe, which has moved aggressively to regulate AI activity, the U.S. is still determining where new laws may be needed.

One basic issue has to do with the way markets work. In theory, participants are working with different models, be they elaborate algorithms or hunches informed by diverse experiences; these models working against each other are supposed to yield accurate pricing. If different firms are all using basically the same AI, powered by basically the same data, their actions should be tightly correlated—something that could lead to disaster if, for example, they all become simultaneously convinced that investing in landline telephones is a great idea due to a bad AI model. 
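To make that mechanism concrete, here is a toy simulation (a hypothetical illustration of the herding argument, not a depiction of any firm's actual systems; the firm count and helper names are invented). Fifty simulated firms that share one flawed model land on the same side of the trade every time, while firms whose models err independently disagree far more often.

```python
# Toy sketch of the herding argument (illustrative only; the figures and helper
# names here are invented, not drawn from any real trading system).
import random
import statistics

random.seed(42)

N_FIRMS = 50    # hypothetical number of firms
N_DAYS = 1000   # hypothetical number of trading days to simulate

def herding(shared_model: bool) -> float:
    """Average fraction of firms that land on the same side of the trade."""
    same_side = []
    for _ in range(N_DAYS):
        true_signal = random.gauss(0, 1)    # the "correct" read of the market
        shared_error = random.gauss(0, 1)   # a mistake baked into a common base model
        votes = []
        for _ in range(N_FIRMS):
            if shared_model:
                score = true_signal + shared_error        # everyone inherits the same mistake
            else:
                score = true_signal + random.gauss(0, 1)  # each firm errs on its own
            votes.append(1 if score > 0 else -1)          # +1 = buy, -1 = sell
        same_side.append(abs(sum(votes)) / N_FIRMS)
    return statistics.mean(same_side)

print("shared model  :", round(herding(True), 2))   # 1.0: everyone moves together
print("diverse models:", round(herding(False), 2))  # noticeably lower
```

The exact numbers are beside the point; what matters is that a shared error removes the disagreement that normally keeps prices honest.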

Some researchers and people in and around the AI finance space said Gensler’s concern was valid, if perhaps overstated. Research has shown that the “underlying causes” of financial crises are “typically” credit bubbles, not technological innovation, said Robin Greenwood, a professor who studies banking and finance at Harvard Business School. But, he said, it was “not impossible” that AI could be a “potential trigger” down the line. 

Thorsten Beck, a professor of financial stability at the European University Institute and the director of the Florence School of Banking and Finance, expressed more hesitancy. “It does make me nervous,” said Beck, who added that financial innovation can “trigger new sources of fragility” and excessive risk-taking, particularly when top leaders do not adequately understand a new technology and its implications. As one example, he cited the years leading up to the 2008 financial crisis, when Wall Street became enamored with complex mortgage securities that many did not adequately understand.

Sette, of Toggle AI, agreed with Gensler that an AI-led crash was not impossible, but believes it would look different from the flash crash of 2010, when high-frequency trading algorithms caused the stock market to briefly drop in value by a trillion dollars. Instead, he said, it would likely evolve slowly, with the broader financial industry piling into one area based on a few incorrect models. Others agreed it was an area worth monitoring, including by the SEC.

“It is right to be worried about it, and it’s right to think about it,” agreed Mousavizadeh. She compared this theoretical issue to the collapse of Long-Term Capital Management, an over-leveraged hedge fund led by top traders (and Nobel Prize-winning economists) that blew up spectacularly in 1998, due in part to a narrow model that did not adequately self-correct when unexpected issues arose.

Mousavizadeh and others said the risk would decrease if financial firms, fighting to gain an edge over the competition, developed more proprietary AI models rather than relying on the same off-the-shelf technology as everyone else.

Ralph S.J. Koijen, a professor of finance at the University of Chicago Booth School of Business and a research associate at the National Bureau of Economic Research, doubted Gensler’s premise, saying it was “not obvious” that AI models would lead to more of a herd mentality than already exists on Wall Street. As a result, he believed it to be “premature” for Gensler to predict a financial crisis. 

Koijen did agree, though, that it was worth monitoring “concentration of cloud providers.” And as of now, there are only a handful of AI models actually driving business decisions, said Louis Steinberg, the former chief technology officer at TD Ameritrade who now runs a research lab focused on cybersecurity. Should that not change dramatically, “there's a risk that everybody makes the same decision at the same time.”

“Markets work when one person wants to sell and another wants to buy,” he said. “If everybody's using the same model, and there turns out to be an issue … how do you recover from that? Both you as a company and then systematically across companies?”

One issue with forecasting the ills of AI is that it isn’t a monolith; while an all-knowing robot controlling trillions in market positions is alarming, this is far from the only or likeliest use of the technology. Throughout Wall Street, finance professionals are trying to figure out its limits and benefits. The technology is already being used to summarize legal and regulatory documents, analyze the text of earnings calls and financial statements, sift through news reports, and analyze investor sentiment.

“AI is all over that, and the traders are using that as much as they possibly can,” Mousavizadeh said of AI analysis of sentiment data. 
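For a sense of what that kind of text analysis looks like in practice, here is a minimal sketch that scores the tone of two invented earnings-call snippets with the publicly available FinBERT model via Hugging Face's open-source transformers library; the article does not identify the specific tools any bank or trading desk actually uses.

```python
# Hypothetical sketch of sentiment scoring on earnings-call language, using the
# open-source Hugging Face `transformers` library and the public FinBERT model.
# The snippets below are invented and the article does not name banks' actual tooling.
from transformers import pipeline

classifier = pipeline("text-classification", model="ProsusAI/finbert")

snippets = [
    "We are raising full-year guidance on the strength of our cloud segment.",
    "Margins compressed this quarter and we expect continued headwinds.",
]

for text, result in zip(snippets, classifier(snippets)):
    # Each result is a dict with a label (positive / negative / neutral) and a confidence score.
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```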

The applications of modern AI technology are much broader than those of traditional high-frequency trading, according to Koijen, who argued the variety of uses could lead to many different strategies and reduce “coordination risk.” John Mileham, the CTO of the robo-advisor Betterment, said the company currently sees AI more as an internal opportunity to automate away rote tasks so employees can focus on creative work.

Some firms, like the quantitative hedge fund Neo IVY Capital Management, have already started to use AI to generate their forecasts. Founder Renee Yao said that unlike some other financial firms that have embraced AI, she understands that AI will not always be correct because “markets are random,” so she has put “rigorous” risk controls in place.

Sette of Toggle AI said that, as of now, large language models are not great at complex mathematics but can be useful when facing financial problems. “If you're faced with any risk in the market, AI can help you think through it,” partially through reasoning and partially through its repository of knowledge, he said. The interactive nature of LLMs means they will be able to do “investment with us,” he added.

Sette said generative AI could prove useful for everyone from the hedge fund trader (who might ask AI to watch 50 stocks at all hours of the day) to wealth managers (who might use AI’s help when fielding questions about what happened in the markets yesterday) to retail investors (to help them avoid making bad bets on, say, oil futures because of a misinterpretation of obscure market rules).

Disappointingly for evangelists and perhaps for doomsayers, the actual results of AI-fueled investment advice have so far proven suboptimal. An ETF powered by IBM’s Watson has failed to keep up with the broader market, as have other similar products, like HSBC Holdings Plc’s AI Powered US Equity Index. One recent analysis of a dozen hedge funds using AI found they were trailing a broader hedge fund index by a double-digit percentage, and other trackers turned up similarly uninspired results.

The lack of proven results is why companies like the robo-advisor Betterment are taking a more cautious approach. Betterment has tested the capability of narrow and generative AI in various financial capacities, but found too many issues for the company to use it for short-term investment selection as of now. Oftentimes, the AI would offer responses that seemed “very competent” but were mathematically incorrect, according to Nick Holeman, the company’s director of financial planning. 

Holeman said he still believes AI could “revolutionize” the financial sector one day, but that as it stands today, the technology is “not quite ready for primetime yet” and akin to using “a chainsaw to cut your steak.” Mileham, Betterment’s CTO, said generative AI’s tendency to hallucinate and make up facts remains a large issue. “It's been trained on the internet,” Mileham said, which in his view means it could very well offer suboptimal financial advice to customers.

“Not every single fact that a large language model has ingested is, in fact, the advice that we would give.” He said it’s possible that, over time, AI will be able to serve as a valuable financial resource for Betterment customers, but “it doesn't fit into that model today.”

Companies like Toggle AI are nonetheless trying to figure it out. Sette understands the stakes are high. “When you're doing capital markets, you cannot have a wrong comma,” he said. “Making sure that this works like clockwork—and not the fuzzy way that GPT works—is the guiding principle of everything that we're doing now.”

As optimistic as Sette is about AI, he acknowledges there is a long way to go and many tests to be run before human traders can truly trust AI with their money, and he understands that regulators like Gensler will play a key role as AI envelops more of the financial world.

“We need to do this properly. So we need someone whose sole utility function is to make sure we don't screw up,” he said.