New Regulation on Algorithmic Discrimination Raises Concerns among Healthcare Providers

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/02/2024

UPDATE — JULY 2025: The article below accurately summarizes the final rule issued by the U.S. Department of Health and Human Services’ Office for Civil Rights (OCR) under Section 1557 of the Affordable Care Act, which addresses algorithmic discrimination in healthcare. The rule was published in the Federal Register on May 6, 2024, and takes effect 300 days later, on March 2, 2025.

The regulation applies to healthcare payers and providers receiving federal financial assistance and explicitly prohibits discriminatory use of clinical decision support tools—whether automated or manual—based on race, color, national origin, sex, age, or disability. Covered entities are required to identify, assess, and mitigate algorithmic bias in tools used for diagnosis, risk scoring, or treatment recommendations.

OCR guidance emphasizes proactive governance, staff training, and documentation of mitigation measures. While smaller entities are afforded some flexibility, larger organizations are expected to implement comprehensive compliance programs, given their greater resources and broader reach.

As of mid-2025, enforcement preparation is ongoing, and the rule has not been delayed or overturned, though industry stakeholders continue to raise concerns about cost, clarity, and compliance burdens—particularly for smaller providers.

However, since returning to office in January 2025, President Trump has issued executive orders reversing several nondiscrimination policies from the Biden administration, including rolling back protections based on gender identity under Section 1557. The legal and regulatory landscape is evolving rapidly, with some of the Biden-era rules—especially those related to gender-affirming care and LGBTQ+ protections—currently stayed or subject to nationwide injunctions.

ORIGINAL NEWS STORY:

New Regulation on Algorithmic Discrimination Raises Concerns among Healthcare Providers

The Office for Civil Rights (OCR) within the Department of Health and Human Services (HHS) recently unveiled its final rule on algorithmic discrimination by healthcare payers and providers, raising significant concerns among industry stakeholders. The rule, grounded in Section 1557 of the Affordable Care Act, is set to take effect 300 days after publication, requiring healthcare organizations to address algorithmic biases within their patient care decision support tools.


Codified in Title 45 of the Code of Federal Regulations, the new regulation, which will be officially published in the Federal Register on May 6, 2024, imposes strict guidelines on the use of patient care decision support tools to prevent discrimination based on race, color, national origin, sex, age, or disability. Healthcare entities are required to identify and mitigate potential risks associated with the use of these tools, both automated and non-automated, within their health programs or activities.


However, industry experts have raised concerns about the rule's potential impact, particularly the disparity in compliance standards between larger, more sophisticated organizations and smaller entities. Critics argue that the rule creates a double standard, placing heavier compliance burdens on larger, better-resourced organizations while potentially overlooking smaller ones.


The final rule covers a wide range of patient care decision support tools used in clinical decision-making and patient care management, including automated algorithms, flowcharts, calculators, and diagnostic tools. It applies to automated and non-automated systems alike, and OCR emphasizes that discriminatory risks must be mitigated across all such tools.


Key Challenge


One of the key challenges highlighted by industry stakeholders is the requirement for covered entities to identify potential discrimination risks within their decision support tools. While OCR acknowledges the complexity of this task, it expects healthcare providers and payers to exercise due diligence in identifying and addressing algorithmic bias. To mitigate these risks, OCR suggests that covered entities develop comprehensive policies and procedures governing the use of clinical algorithms, provide staff training, and implement governance measures to monitor and address potential impacts.

However, critics argue that the rule's broad scope and its lack of prescribed mitigation methods may pose challenges for healthcare organizations, particularly smaller entities with limited resources.
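Neither the rule nor OCR's guidance prescribes a specific method for this kind of risk identification. As one hypothetical illustration of a first-pass screen, the Python sketch below compares a risk-scoring tool's flag rates across demographic groups; the column names, toy data, and 0.8 threshold (borrowed from the familiar four-fifths rule of thumb, which the regulation does not mandate) are all assumptions.

```python
# Hypothetical first-pass bias screen for a risk-scoring tool's output log.
# All names, data, and thresholds here are illustrative assumptions.
import pandas as pd

def selection_rate_by_group(df: pd.DataFrame, group_col: str,
                            flag_col: str) -> pd.Series:
    """Share of patients flagged as high-risk within each demographic group."""
    return df.groupby(group_col)[flag_col].mean()

def impact_ratios(rates: pd.Series) -> pd.Series:
    """Each group's flag rate relative to the most-flagged group."""
    return rates / rates.max()

# Toy stand-in for a tool's output log (entirely fabricated).
log = pd.DataFrame({
    "group":     ["A", "A", "B", "B", "B", "A", "B", "A"],
    "high_risk": [1,   0,   1,   1,   1,   0,   1,   0],
})

rates = selection_rate_by_group(log, "group", "high_risk")
ratios = impact_ratios(rates)
# Flag groups falling below an assumed 0.8 screening threshold
# (the four-fifths rule of thumb; not an OCR-mandated standard).
review_needed = ratios[ratios < 0.8]
print(review_needed)  # -> group A, at a 0.25 ratio in this toy data
```

A screen like this only surfaces disparities for review; whether a given disparity amounts to prohibited discrimination is a separate legal and clinical judgment.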


Furthermore, the rule's emphasis on proactive risk mitigation places greater scrutiny on larger organizations, which are expected to implement specialized compliance programs tailored to addressing algorithmic bias. While OCR allows flexibility in how entities comply, it encourages them to take proactive measures to meet the new requirements.
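What such a compliance program records is likewise left to covered entities. The sketch below shows one hypothetical shape for a documented mitigation record of the kind that could help demonstrate due diligence; the fields are assumptions for illustration, not a schema drawn from the rule or OCR guidance.

```python
# Hypothetical documentation record for an identified algorithmic-bias risk.
# The fields are illustrative assumptions, not an OCR-mandated schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BiasMitigationRecord:
    tool_name: str         # e.g., a readmission-risk calculator
    risk_identified: str   # the potential discriminatory effect found
    protected_basis: str   # race, color, national origin, sex, age, or disability
    assessment: str        # how the risk was evaluated
    mitigation: str        # what was changed or monitored in response
    reviewed_on: date = field(default_factory=date.today)

record = BiasMitigationRecord(
    tool_name="example-risk-score",
    risk_identified="Higher false-negative rate observed for one age band",
    protected_basis="age",
    assessment="Compared error rates across age bands on a held-out validation set",
    mitigation="Recalibrated the threshold per band; scheduled quarterly re-review",
)
print(record)
```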


Conclusion


Despite the challenges posed by the final rule, healthcare organizations are required to comply within the stipulated time frame, with larger providers and payers facing increased pressure to implement comprehensive compliance programs. Such programs are seen as crucial for demonstrating due diligence and mitigating the risks of algorithmic discrimination.


Need Help?


Keeping track of the ever-changing AI landscape can be tough, especially if you have questions and concerns about how it will impact you. Don't hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.
