New York DFS Adopts Guidance to Prevent AI-Driven Discrimination in Insurance
New York continues to be a hotbed of artificial intelligence (AI) regulations and laws. Adrienne A. Harris, Superintendent of the New York Department of Financial Services (DFS), adopted new guidance aimed at protecting consumers from unfair and unlawful discrimination by insurers using AI. This move underscores New York’s commitment to supporting responsible innovation while ensuring consumer protection in the financial sector.
“New York has a strong track record of supporting responsible innovation while protecting consumers from financial harm,” said Superintendent Harris. “Today’s guidance builds on that legacy, ensuring that the implementation of AI in insurance does not perpetuate or amplify systemic biases that have resulted in unlawful or unfair discrimination, while safeguarding the stability of the marketplace.”
The new guidance specifically addresses the use of external consumer data and information sources (ECDIS) and AI systems (AIS) by insurers. While these technologies can simplify and expedite insurance underwriting and pricing processes, they also carry the potential for significant consumer harm if not properly managed. The guidance mandates that insurers establish robust governance and risk management frameworks to mitigate these risks.
Key Requirements for Insurers
Under the new DFS guidance, all insurers authorized to operate in New York must comply with several core requirements:
1. Prevent Unlawful Discrimination
Insurers must rigorously evaluate how they use ECDIS and AIS to ensure these technologies do not result in unfair discrimination. This includes testing systems for compliance with both state and federal anti-discrimination laws and identifying potential sources of bias.
2. Prove Actuarial Validity
Companies must demonstrate that their AI-driven systems are actuarially sound and based on credible statistical principles. By doing so, insurers can verify that their AI decisions are not only accurate but also fair and justifiable.
3. Strengthen Corporate Governance
Each insurer must implement a corporate governance framework that oversees AI and data use. This structure ensures accountability and transparency while maintaining alignment with company values and DFS regulations.
4. Increase Transparency and Risk Management
Insurers are expected to maintain strong internal controls and manage risks related to third-party vendors. They must also provide clear disclosures to consumers about how personal data is collected, analyzed, and protected. These efforts aim to build trust and prevent misuse of sensitive information.
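The guidance does not prescribe a specific statistical test for bias, but the kind of quantitative evaluation described in requirements 1 and 2 can be illustrated with a common fairness metric. The sketch below computes the adverse impact ratio (the "four-fifths rule" familiar from employment law); the function name, thresholds, and data are hypothetical and are offered only as an example of the sort of outcome testing an insurer might run on underwriting decisions, not as a method endorsed by DFS.

```python
# Hypothetical sketch of one common bias test: the adverse impact ratio.
# The DFS guidance does not mandate this metric; it merely illustrates
# quantitative testing of underwriting outcomes across consumer groups.

def adverse_impact_ratio(approvals_by_group: dict[str, tuple[int, int]]) -> dict[str, float]:
    """For each group, return its approval rate divided by the highest
    group's approval rate. Ratios below ~0.8 are commonly flagged for
    further actuarial and legal review.

    approvals_by_group maps a group label to (approved_count, total_applicants).
    """
    # Per-group approval rates.
    rates = {g: approved / total for g, (approved, total) in approvals_by_group.items()}
    best = max(rates.values())
    # Each group's rate relative to the best-performing group.
    return {g: rate / best for g, rate in rates.items()}

# Entirely illustrative data:
outcomes = {"group_a": (90, 100), "group_b": (63, 100)}
ratios = adverse_impact_ratio(outcomes)
# group_b's ratio of 0.7 falls below the 0.8 benchmark and would
# typically prompt a closer look at the underlying model inputs.
```

A result below the benchmark does not by itself establish unlawful discrimination; under the guidance it would trigger the governance and risk-management processes described above, including a review of the ECDIS inputs driving the disparity.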
Collaborative Development and Industry Engagement
DFS developed this guidance after an extensive public engagement process. The department consulted with regulated entities, trade associations, advisory firms, universities, and the general public. This inclusive approach reflects the agency’s goal of crafting balanced and effective AI oversight that supports both innovation and consumer protection.
Building a Fair and Transparent AI Future
With this guidance, New York continues to lead the national conversation on AI governance in financial services. The framework not only safeguards consumers but also promotes trust in AI-driven insurance models. By prioritizing transparency and accountability, New York sets a strong precedent for other states seeking to modernize their insurance regulations.
Need Help?
If you have questions or concerns about navigating New York’s AI regulations, or any other U.S. and global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you’re informed and compliant.