
The AI Risk Blog

A place for users to talk about our platform.


AI is the Future for RIAs

Updated: May 17

Alec Crawford, Founder & CEO of Artificial Intelligence Risk, Inc.


AI and large language models (LLMs) like ChatGPT can save RIAs time and money. LLMs provide powerful tools today, and the models are improving every day. LLMs will become foundational to the RIA business within years, if not months. AI can help draft emails, summarize meetings, conduct research, and more. With an enterprise version, AI can become the central hub for your documents, CRM, portfolios, and other applications, giving you an easy way to ask and answer questions. However, the SEC has proposed new rules for RIAs that will require enhanced cybersecurity around AI, as well as more robust compliance recordkeeping requirements that most systems cannot yet handle. We show you how to be prepared.


 

Saving Time and Money with ChatGPT

What you can do and what you should do with LLMs are often two different things. Using public-facing versions of LLMs may violate client confidentiality agreements or expose your data to leakage by a third party, or worse. You should not use an LLM by itself to pick investments today. Nevertheless, once you are using a secure (enterprise) version, you can do amazing things with AI safely and in compliance with SEC rules:

 

  • Summarizing meeting notes and creating action items. 

  • Drafting email responses inside Outlook. 

  • Giving specialized advice for sales, advertising, marketing, and search engine optimization (SEO). 

 

More sophisticated private cloud AI implementations can link into your CRM system, investment portfolios, documents and databases, cutting across different platforms to answer questions such as “Which customers did I forget to call this quarter?” or “What were the action items from my most recent meeting with the Smiths?” or “Who are my largest holders of NVDA?” Additional options might include links to other key software or investment capabilities such as searching the SEC EDGAR database. 
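To make the idea concrete, a question like "Which customers did I forget to call this quarter?" reduces to comparing CRM contact records against the start of the current quarter. The sketch below uses a hypothetical CRM export (the data shape and field names are illustrative, not a real CRM API):

```python
from datetime import date

# Hypothetical CRM export: each client with the date of the last logged call.
crm_records = [
    {"client": "Smith Family", "last_call": date(2024, 1, 8)},
    {"client": "Jones Trust", "last_call": date(2023, 11, 20)},
    {"client": "Lee Holdings", "last_call": date(2024, 2, 14)},
]

def quarter_start(today: date) -> date:
    """First day of the calendar quarter containing `today`."""
    q_month = 3 * ((today.month - 1) // 3) + 1
    return date(today.year, q_month, 1)

def uncalled_this_quarter(records, today: date):
    """Clients whose most recent logged call predates the current quarter."""
    start = quarter_start(today)
    return [r["client"] for r in records if r["last_call"] < start]

# Jones Trust's last call was in Q4 2023, so it is the one flagged here.
print(uncalled_this_quarter(crm_records, date(2024, 2, 20)))  # ['Jones Trust']
```

A real deployment would pull these records live from the CRM and call log rather than a static list, but the join logic is the same.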

 

How to use an “AI Agent” 

An AI agent is an AI designed for a specific job: for example, summarizing meeting notes and listing the action items. An agent is not a general-purpose AI, but it does its specific job well. Examples of other agents useful for RIAs include: 

 

  • Personal assistant 

  • Research assistant 

  • Business strategist 

  • Social media specialist 

  • Marketing strategist 

  • Legal disclaimer writer 

  • Sales consultant 

  • Computer programming assistant 

 

These AI agents can save time and money versus tracking down, asking, and potentially paying a human expert. While not a full substitute for a true expert on a tricky topic, specialized AI agents can nail the answers to everyday questions in seconds for minimal cost. Because they are geared towards a specific area, they will typically give better answers than a generic LLM, like ChatGPT, on the web. 
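Mechanically, many such agents are just an LLM wrapped with a fixed, role-specific system prompt. A toy sketch of that pattern follows; the prompt wording and agent names are illustrative assumptions, and the assembled messages would be passed to whatever LLM API your secure deployment uses:

```python
# Each "agent" is simply a role-specific system prompt paired with the
# user's request; the LLM behind it is the same general model.
AGENT_PROMPTS = {
    "meeting_summarizer": (
        "You summarize meeting notes for a registered investment adviser. "
        "Return a brief summary followed by a numbered list of action items."
    ),
    "disclaimer_writer": (
        "You draft standard legal disclaimer language for RIA marketing "
        "material. Flag anything that needs human legal review."
    ),
}

def build_messages(agent: str, user_request: str):
    """Assemble the chat messages an LLM API would receive for this agent."""
    return [
        {"role": "system", "content": AGENT_PROMPTS[agent]},
        {"role": "user", "content": user_request},
    ]

msgs = build_messages("meeting_summarizer",
                      "Notes from the Smith meeting: ...")
print(msgs[0]["role"])  # system
```

Because the system prompt narrows the model's task, the same underlying LLM answers more reliably than it would with a bare, unscoped question.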

 

What does the SEC have to say about AI and Cybersecurity? 

Some key rules for RIAs are slated to be finalized in the second quarter of 2024, according to the White House Fall 2023 Regulatory Agenda. Some of these may not yet be on the radar for firms that outsource their compliance function. 

 

Rule 1: Predictive Analytics 

Often called the "artificial intelligence" rule, this rule received many comments saying it was too broad. For example, the current version of the proposed rule applies to all RIAs with $5 million or more in AUM, though the SEC may raise that threshold. One way to think about it is that AI is another employee you need to keep track of. 

 

While the initial rule was proposed in July 2023, the final version of the predictive analytics rule is expected to be adopted in the second quarter of 2024, with compliance starting in 2025 for larger institutions and 2026 for smaller ones. 

 

The rule will probably require: 

 

  • Writing policies and procedures for AI and predictive data analytics (PDA). 

  • Testing and recording AI and predictive analytics used to select or research investments. 

  • Checking and recording AI used to communicate with clients. 

  • Identifying and disclosing or possibly mitigating conflicts of interest. 

  • Recording testing and all “communications” with LLMs, including all the model details. 

 

This rule will require software and expertise to facilitate compliance. Best practices also call for a third party to validate the testing and to ensure that communications with the LLM and all tests are recorded and saved in an immutable database. 
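One common way to make such records tamper-evident is a hash-chained, append-only log: each entry stores the hash of the previous entry, so any retroactive edit breaks the chain. The sketch below shows the general technique (a minimal illustration, not any particular vendor's implementation):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log, record):
    """Append an LLM interaction record, chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})
    return log

def verify_chain(log):
    """Recompute every hash; returns False if any entry was altered."""
    prev_hash = GENESIS
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"model": "gpt-4", "prompt": "Summarize meeting", "response": "..."})
append_entry(log, {"model": "gpt-4", "prompt": "Draft email", "response": "..."})
print(verify_chain(log))   # True on an untouched log

log[0]["record"]["prompt"] = "edited after the fact"
print(verify_chain(log))   # False: the tampering is detectable
```

In production the entries would live in write-once storage and the verification would run as part of the compliance review, but the chaining idea is the core of "immutable" recordkeeping.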

 

Rule 2: Cybersecurity Enhancements for Investment Advisers, Registered Investment Companies, and Business Development Companies 

This rule focuses on enhancing cybersecurity for affected organizations. While it applies broadly across the RIA technology stack, we will focus on the AI-specific impact:  

 

  • RIAs must establish written policies and procedures regarding cybersecurity for LLMs. 

  • RIAs must install cybersecurity controls that specifically address LLMs. 

  • RIAs must monitor and manage third-party provider risks, including LLM providers like OpenAI. 

  • RIAs must regularly review cybersecurity practices for increased transparency and disclosure. 

 

Conclusion 

AI will become foundational to the RIA business. We are all busy, and using AI will help make us more efficient, more effective, and more successful at our jobs. However, we need to use these tools safely in order to gain the benefits for people, companies, and society. Artificial Intelligence Risk, Inc. already has software written to comply with these AI rules. Please see https://www.aicrisk.com. We would be more than happy to help. 

 
