The Office for Civil Rights (OCR) within the Department of Health and Human Services (HHS) recently unveiled its final rule on algorithmic discrimination by healthcare payers and providers, raising significant concerns among industry stakeholders. The rule, issued under Section 1557 of the Affordable Care Act, gives healthcare organizations 300 days after publication to address algorithmic biases in their patient care decision support tools.
Codified in Title 45 of the Code of Federal Regulations, the new regulation, scheduled for official publication in the Federal Register on May 6, imposes strict guidelines on the use of patient care decision support tools to prevent discrimination based on race, color, national origin, sex, age, or disability. Covered healthcare entities must identify and mitigate risks of discrimination arising from the use of these tools, whether automated or non-automated, within their health programs or activities.
However, industry experts have raised concerns about the rule's potential impact, particularly the disparity in compliance expectations between larger, more sophisticated organizations and smaller entities. Critics argue that the rule creates a double standard, placing heavier compliance burdens on well-resourced organizations while potentially holding smaller entities to a lighter one.
The final rule covers a wide range of patient care decision support tools used in clinical decision-making and patient care management, including automated algorithms, flowcharts, calculators, and diagnostic tools. Because the rule applies to both automated and non-automated systems, OCR emphasizes that discriminatory risks must be mitigated across all such tools, regardless of how they operate.
One of the key challenges highlighted by industry stakeholders is the requirement for covered entities to identify potential discrimination risks within their decision support tools. While OCR acknowledges the complexities involved, it expects healthcare providers and payers to exercise due diligence in identifying and addressing algorithmic biases. To mitigate these risks, OCR suggests that covered entities develop comprehensive policies and procedures governing the use of clinical algorithms, provide staff training, and implement governance measures to monitor and address potential impacts. Critics counter that the rule's broad scope, combined with its lack of specific mitigation guidance, may pose challenges for healthcare organizations, particularly smaller entities with limited resources.
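The rule itself does not prescribe any particular metric or method for this kind of risk identification, but one way to picture what a first-pass check might look like is a simple statistical comparison of a tool's outputs across demographic groups. The Python sketch below computes positive-outcome rates by group and a disparate impact ratio; the group labels, the audit-log format, and the 0.8 review threshold (borrowed from the "four-fifths" convention in employment law, not from this rule) are all illustrative assumptions, not requirements of the regulation.

```python
"""Illustrative sketch only: one possible bias signal (a disparate
impact ratio) computed from a decision support tool's logged outputs.
Nothing here is mandated by the Section 1557 final rule; the metric,
the log format, and the 0.8 threshold are assumptions for this example."""

from collections import defaultdict


def positive_rate_by_group(records):
    """Return the fraction of favorable outcomes per demographic group.

    `records` is an iterable of (group, favorable) pairs, where
    `favorable` is True when the tool recommended the beneficial action
    (e.g., referral to care management).
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}


def disparate_impact_ratio(rates):
    """Ratio of the lowest group rate to the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    # Hypothetical audit log: (self-reported group, tool recommended care?)
    log = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]
    rates = positive_rate_by_group(log)
    ratio = disparate_impact_ratio(rates)
    print(f"positive rates by group: {rates}")
    print(f"disparate impact ratio: {ratio:.2f}")
    # The 0.8 cutoff is shown only as an example trigger for human
    # review; any real threshold would need clinical justification.
    if ratio < 0.8:
        print("flag tool for further review and potential mitigation")
```

In practice, a check like this would be just one input to the policies, training, and governance measures OCR describes, and both the metric and any threshold would need to be justified for the specific clinical context in which the tool is used.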
Furthermore, the rule's emphasis on proactive risk mitigation places greater scrutiny on larger organizations, which are expected to implement specialized compliance programs tailored to address algorithmic biases. While OCR acknowledges the need for flexibility, it encourages covered entities to take proactive steps toward meeting the new requirements.
Despite the challenges the final rule poses, healthcare organizations must comply within the stipulated time frame, and larger providers and payers face increased pressure to implement comprehensive compliance programs. Such programs are seen as crucial to demonstrating due diligence and mitigating the risks of algorithmic discrimination.
Keeping track of the ever-changing AI landscape can be tough, especially if you have questions and concerns about how it will impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.