UPDATE — JULY 2025: The article below accurately describes the final rule issued by the U.S. Department of Health and Human Services’ Office for Civil Rights (OCR) under Section 1557 of the Affordable Care Act. OCR published the rule in the Federal Register on May 6, 2024, and its patient care decision support tool provisions take effect on March 2, 2025, 300 days after publication.
The regulation applies to healthcare payers and providers that receive federal financial assistance. It explicitly bans discriminatory use of patient care decision support tools, whether automated or manual, on the basis of race, color, national origin, sex, age, or disability. Covered entities must identify, assess, and mitigate algorithmic bias in tools used for diagnosis, risk scoring, or treatment recommendations.
OCR guidance stresses proactive governance, staff training, and documentation of mitigation efforts. Larger organizations are expected to create robust compliance programs due to their resources and broader impact, while smaller providers receive more flexibility.
As of mid-2025, preparation for enforcement continues. The rule has not been delayed or overturned, although stakeholders still raise concerns about cost, clarity, and compliance burdens. Smaller providers, in particular, argue the standards remain challenging.
Since returning to office in January 2025, President Trump has issued executive orders that roll back several nondiscrimination protections from the Biden administration, including those tied to gender identity. Court challenges have created further uncertainty, with some Biden-era rules—especially those on gender-affirming care—currently stayed or under injunction.
ORIGINAL NEWS STORY:
New Regulation on Algorithmic Discrimination Raises Concerns Among Healthcare Providers
The Office for Civil Rights (OCR) within the Department of Health and Human Services (HHS) recently unveiled its final rule on algorithmic discrimination by healthcare payers and providers, raising significant concerns among industry stakeholders. The rule, issued under Section 1557 of the Affordable Care Act, takes effect 300 days after publication and requires healthcare organizations to address algorithmic bias in their patient care decision support tools.
The rule, codified at 45 C.F.R. Part 92, creates strict guidelines for tools used in patient care. Healthcare entities must identify and mitigate risks of discrimination in both automated and manual systems. Protected categories include race, color, national origin, sex, age, and disability.
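To illustrate what identifying discrimination risk might look like in practice, here is a minimal sketch in Python. It assumes a hypothetical risk-scoring tool and compares false negative rates across demographic groups; the data, group labels, and disparity tolerance are illustrative assumptions, since the rule itself does not prescribe any particular metric or threshold.

```python
# Illustrative sketch only: a minimal subgroup audit for a hypothetical
# risk-scoring tool. Group names, threshold, and data are assumptions,
# not requirements of the OCR rule, which prescribes no specific metric.
from collections import defaultdict

def false_negative_rate_by_group(records):
    """records: iterable of (group, predicted_high_risk, truly_high_risk)."""
    misses = defaultdict(int)     # truly high-risk patients the tool missed
    positives = defaultdict(int)  # all truly high-risk patients per group
    for group, predicted, actual in records:
        if actual:
            positives[group] += 1
            if not predicted:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives if positives[g]}

def flag_disparities(rates, tolerance=0.05):
    """Flag groups whose miss rate exceeds the best-performing group's
    by more than `tolerance` (an assumed, organization-chosen value)."""
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > tolerance}

# Hypothetical audit data: (group, tool_flagged_high_risk, actually_high_risk)
sample = [("A", True, True), ("A", False, True), ("B", False, True),
          ("B", False, True), ("B", True, True), ("A", True, True)]
rates = false_negative_rate_by_group(sample)
print(rates)                  # e.g. {'A': 0.33..., 'B': 0.66...}
print(flag_disparities(rates))  # group B flagged for a higher miss rate
```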
Industry Concerns
Experts have raised alarms about the rule’s impact. Larger organizations face heavy compliance burdens, while smaller entities may lack the resources to meet expectations. Critics warn the regulation could create a two-tiered system.
The rule covers a broad range of tools: algorithms, calculators, flowcharts, and diagnostic software. OCR makes clear that bias mitigation applies to all systems, not just AI-driven ones.
One major challenge lies in identifying discrimination risks. OCR expects providers and payers to exercise due diligence by developing policies, training staff, and monitoring tools. It recommends comprehensive governance programs to track potential impacts. Critics argue the rule’s scope is too broad and lacks detailed mitigation steps, leaving smaller organizations at risk of falling behind.
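To make the monitoring and documentation expectations more concrete, a governance program could pair each audit run with a written record. Below is a minimal sketch, assuming a JSON-lines audit log; the file name, schema, and tool name are hypothetical and are not requirements drawn from the rule.

```python
# Illustrative sketch only: periodic monitoring with an audit trail, the
# kind of documentation a governance program might keep. The schema and
# log format are assumptions; OCR does not mandate a specific format.
import json
import datetime

AUDIT_LOG = "bias_audit_log.jsonl"  # hypothetical log file

def record_audit(tool_name, metric_name, rates, flagged, mitigation=""):
    """Append one monitoring run, with any mitigation taken, to the log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool_name,
        "metric": metric_name,
        "rates_by_group": rates,
        "flagged_groups": sorted(flagged),
        "mitigation_action": mitigation,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Usage with the audit results from the earlier sketch:
record_audit(
    tool_name="readmission_risk_v2",   # hypothetical tool name
    metric_name="false_negative_rate",
    rates={"A": 0.33, "B": 0.67},
    flagged={"B"},
    mitigation="retrain with reweighted sample; re-audit in 90 days",
)
```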
Compliance and Enforcement
Larger entities face greater scrutiny. OCR expects them to design specialized compliance programs and provide extensive documentation. While the rule offers some flexibility, the emphasis is on proactive risk management.
Despite concerns, all covered entities must comply within the required timeline. Regulators expect organizations to demonstrate due diligence and implement safeguards against algorithmic discrimination.
Need Help?
Keeping track of the ever-changing AI landscape can be tough, especially if you have questions and concerns about how it will impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.

