On June 13, 2023, the U.S. Securities and Exchange Commission (SEC) acknowledged that it was considering new rules governing the use of artificial intelligence (AI) by brokers interacting with clients.
The SEC’s stance on the use of AI
The SEC’s Office of Information and Regulatory Affairs released the Spring 2023 Unified Agenda of Regulatory and Deregulatory Actions – a semiannual publication of the short- and long-term regulatory actions that various federal agencies are developing. The agenda includes a proposed rule prohibiting “conflicted practices” by broker-dealers using certain technologies. Per the SEC:
“The Division is considering recommending that the Commission propose rules related to broker-dealer conflicts in the use of predictive data analytics, artificial intelligence, machine learning and similar technologies in connection with certain investor interactions.”
“Technology, markets and business models constantly change,” SEC Chair Gary Gensler said in a prepared statement following the agenda’s release. “Thus, the nature of the SEC’s work must evolve as the markets we oversee evolve.”
The SEC also said it is considering proposed amendments to the rule exempting some internet advisors (robo-advisors) from registering as money managers.
Gensler has previously expressed skepticism about AI platforms and urged caution over their potential to cause “fragility” in the U.S. financial system. Speaking at a May 2023 conference hosted by the Financial Industry Regulatory Authority (FINRA), per a Wall Street Journal report, the SEC chair said that future observers might look back and say “the crisis in 2027 was because everything was relying on one base level, what’s called [the] generative AI level, and a bunch of fintech apps are built on top of it.”
Wider restrictions around generative AI technology
The SEC isn’t the only financial agency eyeing restrictions on the use of AI. Bloomberg reported that after the agenda’s release, Consumer Financial Protection Bureau Director Rohit Chopra said “that if left unchecked … AI could usher in more fraud and discrimination in finance.”
In prepared remarks made in April, Chopra discussed an interagency statement calling out other AI issues, such as the potential for discrimination.
“Generative AI, which can produce voices, images and videos that are designed to simulate real-life human interactions, is raising the question of whether we are ready to deal with the wide range of potential harms – from consumer fraud to privacy to fair competition. Today, several federal agencies are coming together to make one clear point: there is no exemption in our nation’s civil rights laws for new technologies that engage in unlawful discrimination. Companies must take responsibility for their use of these tools.”
Financial threats outlined at the time included discrimination in algorithmic appraisals, AI-driven advertising and black-box credit models.