
The AI Risk Blog

Five Reasons We Need the SEC Predictive Analytics Rule

Alec Crawford Founder & CEO of Artificial Intelligence Risk, Inc.


In a world increasingly governed by algorithms and data, financial institutions are rapidly adopting predictive analytics and artificial intelligence (AI) technologies. For example, JP Morgan is rumored to have over 2,000 people working directly on AI. While these technologies promise significant efficiencies and insights, they also carry potential risks that necessitate regulatory oversight. The SEC’s proposed Predictive Analytics Rule seeks to address these concerns by implementing guidelines to clarify the legal use of AI in financial services.



Nevertheless, it does appear that certain aspects of the SEC’s proposed rule (found here) are likely to be dialed back in the final version of the SEC Predictive Analytics Rule, given pushback during the comment period. For example:

 

  • The ability to make certain communications to customers is protected by the First Amendment.

  • The ability to disclose rather than mitigate conflicts of interest should be allowed in many circumstances.

  • A higher bar for materiality may be needed to avoid having even an Excel spreadsheet fall under the rule.

 

Nevertheless, the SEC Chairman and others have recently opined that it is not a matter of whether the rule will be finalized, but when and how. Here are five reasons why this rule is so important and what it does for financial services consumers and other customers.

 

1. Avoiding Conflicts of Interest

One of the primary concerns that the SEC addresses with its Predictive Analytics Rule is the potential for conflicts of interest. AI systems, when not properly monitored, can inadvertently create scenarios where the interests of the financial institution are placed above those of its clients. For example, an AI-driven advisory platform might recommend investment products that generate higher fees for the firm rather than those that best meet the client’s needs. This could happen without the company even realizing the bias, which is why the SEC rule requires extensive testing of AI and predictive analytics systems for conflicts of interest.

 

The SEC's rule mandates that firms set up, maintain, and follow policies and procedures to detect, mitigate, and disclose any potential conflicts of interest created by their AI tools. Transparency in how AI models are developed, trained, and utilized is crucial.

 

2. Protecting Vulnerable Clients

The use of AI in financial services has the potential to manipulate or "nudge" clients in subtle ways that benefit the financial institution. For example, the SEC has expressed concern about the "gamification" of investing, where platforms might encourage excessive trading through game-like features and rewards, potentially impacting vulnerable clients. Such tactics can lead to poor investment decisions and increased costs for investors who may not be fully aware of the risks involved.

 

3. Ensuring Accurate Customer Communications

With AI increasingly being used to communicate with clients through chatbots, emails, and other channels, there is a significant risk of misinformation. The SEC wants firms to disclose when customers are receiving communications from an AI. In addition, the SEC is concerned that AI-driven systems, if not properly managed, might generate communications that are misleading or outright incorrect. We have already seen examples of chatbots selling trucks for $1, so this concern is not far-fetched. (See here.)

 

4. Promoting Transparency

Transparency is a key element of the SEC’s proposed rule, as well as the broader Biden Executive Order on AI. The rule requires firms to be transparent about why an AI system made a particular decision, such as denying a loan application. This transparency may eventually extend to providing clear, actionable feedback to consumers on how to rectify any issues identified by the AI. For example, is your FICO score too low, or is your proposed down payment too small?

 

Transparency is also critical for identifying and mitigating biases in AI algorithms. By scrutinizing the data and testing inputs and outputs on predictive models, firms can uncover and address potential disparities that could lead to biased decision-making. This not only helps in complying with ethical standards but also enhances the fairness and inclusiveness of financial services.

 

5. Maintaining a Compliance Database

To ensure accountability and support regulatory oversight, we believe firms will need to keep a permanent, tamper-proof compliance database for AI. This database should include all interactions with AI, including models, model versions, machine learning model inputs and outputs, user prompts, responses, data employed, and user comments. The ability to maintain and audit this information is critical for identifying patterns, resolving client complaints, and responding to regulatory requests.

 

Such a database ensures that if issues arise, there is a comprehensive record that regulators can examine. This goes beyond simple changeable logs and provides a deeper layer of transparency and accountability.
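One common way to make a log "tamper-proof" in the sense described above is a hash chain: each record embeds a cryptographic hash of the previous record, so altering any past entry invalidates every hash that follows. The rule does not prescribe an implementation, so the sketch below is purely illustrative; the class name, field names, and SHA-256 choice are our assumptions, not part of the SEC proposal.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained log of AI interactions: each record
    stores the hash of the previous record, so any later tampering
    breaks the chain and is detectable on verification."""

    GENESIS = "0" * 64  # placeholder hash for the first record

    def __init__(self):
        self.records = []
        self._last_hash = self.GENESIS

    def append(self, model, model_version, prompt, response):
        """Record one AI interaction and chain it to the prior record."""
        record = {
            "timestamp": time.time(),
            "model": model,
            "model_version": model_version,
            "prompt": prompt,
            "response": response,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.records.append(record)
        self._last_hash = record["hash"]
        return record["hash"]

    def verify(self):
        """Recompute every hash in order; return False if any record
        was altered or the chain linkage is broken."""
        prev = self.GENESIS
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

A real deployment would also need durable, access-controlled storage and would log the additional items named above (model inputs and outputs, data employed, user comments); the hash chain only supplies the tamper-evidence property.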

 

Conclusion

The SEC’s proposed Predictive Analytics Rule is a necessary step towards safeguarding the interests of investors in an increasingly AI-driven financial landscape. By addressing conflicts of interest, protecting vulnerable clients, ensuring accurate communications, promoting transparency, and requiring permanent records, the rule aims to create a more transparent, fair, and accountable financial system. These guidelines not only protect clients but also help in building a more resilient and trustworthy financial industry.

 

We use our AI platform, AIR-GPT, to help write and edit our blog articles.


Copyright © 2024 by Artificial Intelligence Risk, Inc. All rights reserved 

This paper may not be copied or redistributed without the express written permission of the authors.
