In my last op-ed, I commended the Securities and Exchange Commission on its effort to protect investors by identifying and penalizing managers who make misleading AI claims.
However, my praise is conditional because I am concerned about possible regulatory overreach that could discourage the adoption of a type of AI capable of delivering better investment outcomes for clients. Specifically, SEC Chairman Gary Gensler has made numerous written and public comments about how investment managers’ deployment of deep learning threatens the financial system’s stability.
Chairman Gensler and MIT professor Lily Bailey write in a joint paper that “Presenting potential benefits of increased efficiency, greater financial inclusion, enhanced user experience, optimized returns, and better risk management, we hypothesize that deep learning, as it moves to a more mature stage of broad adoption, also may lead to increased systemic risk of the financial sector.”
The co-authors of the paper, called “Deep Learning and Financial Stability,” cite specific risk factors related to the “future” use of deep learning that could drive financial instability: “uniformity of data, monocultures of model design, network interconnectedness with data aggregators, ‘AI-as-a-Service’ providers, regulatory gaps in the face of limited explainability, and possible algorithmic coordination.”
I find their argument linking deep learning to financial instability to be a false causal nexus. (The recently released report by the United States Senate Committee on Homeland Security and Governmental Affairs, called “AI in the Real World: Hedge Funds’ Use of Artificial Intelligence in Trading,” contains similar misunderstandings.)
From experience, I know that designing, developing, and deploying a deep learning-based investment model is as much an art as a science.
Individual managers use deep learning to solve specific investment problems, and each deep learning-based solution reflects a manager’s particular culture, history, talent, investment objective, and resources.
More specifically, deep learning is not a monolithic term. Instead, deep learning is a specialized subset of machine learning that uses artificial neural networks with multiple layers (hence “deep”) to model complex patterns in data. There are numerous types of deep learning algorithms — convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer models — that make it unlikely that all managers will use the same algos in their investment processes.
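For readers unfamiliar with the mechanics, the “multiple layers” idea can be sketched in a few lines of Python. This is an illustrative toy, not a trained model: the layer sizes are arbitrary and the weights are random placeholders that a real system would learn from data.

```python
import numpy as np

def relu(x):
    """A common activation function: pass positives, zero out negatives."""
    return np.maximum(x, 0.0)

# Arbitrary placeholder weights for a tiny network:
# 4 inputs -> two hidden layers of 8 units -> 1 output.
rng = np.random.default_rng(42)
W1, W2, W3 = (rng.normal(size=s) for s in [(4, 8), (8, 8), (8, 1)])

def forward(x: np.ndarray) -> np.ndarray:
    h1 = relu(x @ W1)   # first layer extracts simple features
    h2 = relu(h1 @ W2)  # second layer composes them into richer patterns
    return h2 @ W3      # output layer maps to a prediction

y = forward(np.ones((1, 4)))
print(y.shape)  # one prediction for one input row
```

Stacking such layers (the “deep” in deep learning) is what lets these models represent patterns that a single linear formula cannot.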
However, even in the unlikely case that two managers used the same kind of deep learning model and imposed the same constraints on signal generation, trading frequency, and risk parameters to solve the same investment problem, each solution would still be unique. That’s because the managers must make scores of critical design and development decisions that determine and differentiate their outputs.
The most obvious decision is the manager’s choice of traditional and alternative datasets. Managers also need to determine the data’s type, frequency, scope, sources, and structure, and, importantly, the techniques used to preprocess it, such as normalization, feature scaling, and principal component analysis (PCA). All of these potential choices make the “uniformity of data” an unlikely source of financial instability.
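To make the point concrete, here is a minimal sketch, using a hypothetical price series rather than any manager’s actual pipeline, of how two common preprocessing choices turn identical raw data into different model inputs:

```python
import numpy as np

# The same hypothetical daily price series, seen by two managers.
prices = np.array([100.0, 101.5, 99.8, 102.2, 103.0, 101.1])
returns = np.diff(prices) / prices[:-1]  # simple daily returns

# Manager A: z-score normalization (center, then scale by std. dev.).
z_scored = (returns - returns.mean()) / returns.std()

# Manager B: min-max feature scaling into the range [0, 1].
min_max = (returns - returns.min()) / (returns.max() - returns.min())

# Identical raw data, materially different model inputs.
print(np.round(z_scored, 3))
print(np.round(min_max, 3))
```

This is only one of many preprocessing decisions; multiply it by choices of data type, frequency, and source, and uniform inputs across managers become implausible.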
Managers also differentiate models when they choose the specific model architecture and topology, such as the type and size of the neural network and the definition of hyperparameters like learning rates; the model’s training methodology; and the frequency of retraining. The list of human decisions is endless, and each decision will fundamentally differentiate one manager’s deep learning model from another, making a monoculture unlikely.
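The divergence from training decisions alone can be illustrated with a toy example, assuming nothing beyond a shared synthetic dataset: two managers fit the same simple model to the same data but make different initialization and learning-rate choices, and arrive at different parameters.

```python
import numpy as np

def train(seed: int, lr: float, steps: int = 200) -> np.ndarray:
    """Fit a tiny linear model y ~ Xw by gradient descent.
    The seed and learning rate stand in for the many training
    decisions (initialization, hyperparameters) a manager makes."""
    rng = np.random.default_rng(0)  # shared synthetic dataset
    X = rng.normal(size=(64, 3))
    y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.5, size=64)

    w = np.random.default_rng(seed).normal(size=3)  # manager-specific init
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Same data, same model family; different hyperparameter choices.
w_a = train(seed=1, lr=0.05)
w_b = train(seed=2, lr=0.005)
print(w_a, w_b)  # different fitted parameters
```

If a three-parameter linear model already diverges this way, a deep network with millions of parameters and far more design choices will diverge far more.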
Chairman Gensler’s comments about deep learning’s potentially destabilizing effect on financial markets place him on a slippery slope, revealing a blind spot in his regulatory vision.
The five future fear factors that he attributes to deep learning are not unique: They exist in traditional quant investing today, where they have long posed a persistent threat to financial stability:
Uniformity of data: Today, traditional managers generally use the same types of data — price, economic, and financial data at various levels of granularity and temporal slices — as inputs to their models.
Monoculture of model design: Traditional investment managers’ quant models use the same few decades-old, threadbare investment methods (some iteration of multifactor linear regression laden with a heavy dose of mean-variance optimization, or static models like the Black-Scholes-Merton option pricing model). This results in a genuine “monoculture of model design.” The 2007 quant quake is just one historical example of how this monoculture caused the herding behavior Chairman Gensler fears. (Chairman Gensler and Professor Bailey recognize in the paper that this existing homogeneity causes herding, but they fail to acknowledge that, unlike traditional quant models, models using deep neural networks are idiosyncratic, self-learning, and adaptive, all of which significantly reduces the likelihood of such behavior.)
Limited explainability: As I have previously argued, traditional quant and discretionary models have, at best, only incomplete explanations, and often those are simply narratives that lack any causal evidence.
Network interconnectedness with data aggregators and ‘AI-as-a-Service’ providers: The contractual and operational risks of network interconnectedness exist today. Many managers use the same short list of providers because there are only a limited number of telecommunication, Internet, and cloud providers; data aggregators, such as Bloomberg, FactSet and Refinitiv; and Software-as-a-Service contractors. A prime example of this existing “concentrated infrastructure” is BlackRock’s Aladdin. This technology platform is used by more than 1,000 pension funds, asset managers, banks, and corporations around the world for some or all of their investment processes.
I would imagine that Chairman Gensler finds it troubling that BlackRock has launched its first generative AI co-pilots, which are ultimately black boxes built on deep neural networks.
Chairman Gensler calls AI the “most transformative technology of our time, on par with the Internet and mass production of automobiles.” Surely, he’s not talking about classical machine learning algorithms like random forests and support vector machines, which have been around for years. He’s referring to deep neural networks, the same technology that has achieved superhuman results in health sciences, robotics, and transportation.
Some investment managers will undoubtedly deliberately misrepresent their use of deep learning, and Chairman Gensler should use the full force of his agency to protect clients from this and all types of AI washing.
However, hypothesizing how managers’ use of deep learning could destabilize the financial system will discourage the widespread adoption of this truly transformative technology.
Responsible adoption of deep learning, with prudent safeguards in place, is a once-in-a-generation opportunity that could materially improve outcomes for millions of American investors.
Indeed, regulators should monitor and regulate managers’ use of deep learning and AI in general. Yet this requires them to first correctly understand these systems and their use cases.
However, if the SEC truly wants to protect investors and maintain fair markets as its mandate states, it must look beyond deep learning’s hypothetical risks and address quantitative investing’s very real, existing structural weaknesses that have persisted for decades.
Angelo Calvello, Ph.D., is co-founder of Rosetta Analytics, an investment manager that uses deep reinforcement learning to build and manage investment strategies for institutional investors.