The Financial Industry Regulatory Authority (FINRA) has released a comprehensive guidance document addressing the use of generative artificial intelligence (GenAI) and large language models (LLMs) within the financial services industry. The document underscores the importance of regulatory compliance and the need for robust AI governance to safeguard investor interests and ensure market integrity.
As AI technologies rapidly evolve, their integration into financial services offers promising opportunities for enhancing products, services, and operational efficiencies. However, these advancements also bring significant regulatory challenges. FINRA’s new guidance, published on June 27, emphasizes that existing regulatory obligations apply to the use of AI tools just as they do to other technologies. This reiteration aims to maintain a technology-neutral regulatory framework, ensuring that AI implementations comply with federal securities laws and regulations.
FINRA’s guidance outlines several critical areas where member firms must focus their efforts to ensure compliance when deploying AI technologies. Chief among these is technology neutrality: FINRA’s rules apply the same regulatory standards to AI tools as to any other technology. This approach ensures that firms remain subject to existing obligations, including those related to anti-money laundering (AML), communications with the public, customer information protection, cybersecurity, model risk management, and vendor management.
Firms also carry specific regulatory obligations when using AI tools. For instance, they must conduct thorough evaluations of GenAI tools before deployment to confirm they meet compliance standards, addressing issues such as data privacy, accuracy, bias, and intellectual property. In addition, firms are required to maintain robust supervisory systems that incorporate technology governance, model risk management, data privacy and integrity, and the reliability and accuracy of AI models.
The guidance stresses that this pre-deployment evaluation should account for how the technology will actually be used within the firm and should support ongoing compliance with FINRA rules. Because the application of those rules can vary by use case, firms should seek interpretive guidance from FINRA as needed.
FINRA encourages member firms to engage in continuous dialogue with their Risk Monitoring Analysts regarding AI-related issues. This proactive engagement can help firms navigate the complexities of AI compliance and address any ambiguities in the application of FINRA rules. Additionally, firms are invited to provide feedback on how FINRA’s rules might be modernized to better accommodate emerging technologies while ensuring investor protection and market integrity.
The guidance acknowledges the rapid evolution of AI technology and the need for ongoing collaboration between FINRA, member firms, regulators, policymakers, and other stakeholders. This collaborative approach aims to address the potential supervisory and compliance implications of AI, fostering a regulatory environment that supports innovation while safeguarding the interests of investors.
Need Help?
Keeping track of the ever-changing AI landscape can be tough, especially if you have questions and concerns about how it will impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.
Photo by Rafapress on depositphotos.com