Artificial intelligence will be a revolutionary technology, transforming all manner of business, but S.E.C. chairman Gary Gensler is warning that it also has the potential to wreak havoc on the economy.
In an interview with The New York Times, Gensler said he expects the U.S. to end up with two or three foundational AI models, leaving people reliant on the same information and prone to herd behavior. And that’s where the trouble could start. “This technology will be the center of future crises, future financial crises,” Gensler said. “It has to do with this powerful set of economics around scale and networks.”
Herd mentality in finance has been a big driver of meme stocks. But meme-stock mania was confined largely to retail investors, a small subset of the overall market. If an AI model is giving bad advice, Gensler argues, it could affect a much larger group of people.
That raises questions of responsibility and liability. As of right now, there’s no real legal framework governing AI-driven financial advice, but Gensler made it very clear where he stands on the matter.
“Investment advisers under the law have a fiduciary duty, a duty of care, and a duty of loyalty to their clients,” Gensler said. “And whether you’re using an algorithm, you have that same duty of care.”
This has been on Gensler’s mind for a while. In a speech at the National Press Club last month, the S.E.C. chair said that AI could produce dangerous “monocultures” that “heighten financial fragility,” adding that current regulations are not “sufficient” and will need to be “updated.”
“Many of the challenges to financial stability that AI may pose in the future … will require new thinking on system-wide or macro-prudential policy interventions,” he said, adding that AI could end up being a key feature in “after-action reports” on the next financial crisis. He likened it to earlier technological breakthroughs, specifically the emergence of the internet in the mid-1990s and the invention of the automobile in 1886.
His warnings stretch back even further, before the explosion of ChatGPT brought AI to public consciousness. “Broad adoption of deep learning may … increase uniformity, interconnectedness, and regulatory gaps, leaving the financial system more fragile,” Gensler co-wrote in a 2020 paper about deep learning and its impact on financial markets. “Existing financial sector regulatory regimes – built in an earlier era of data analytics technology – are likely to fall short in addressing the risks posed by deep learning.”
Not everyone is quite as worried about AI’s impact on the markets. Greg Jensen, co-chief investment officer at Bridgewater Associates, the world’s largest hedge fund, has been one of the industry’s leading AI bulls, yet he said last month that using chatbots to trade equities was a fool’s errand.
“If somebody’s going to use large language models to pick stocks, that’s hopeless, that is a hopeless path,” he said.
A Morgan Stanley study takes a more neutral view, finding that while AI financial advisers will bring big changes to the investing world, most clients prefer a human touch for now.
It’s the longer term, of course, that Gensler is warning about, and he has been thinking about it for years.