
Overview of the Section 1557 Final Rule
On January 10, 2025, the U.S. Department of Health and Human Services, Office for Civil Rights (OCR) issued a “Dear Colleague” letter on how covered entities, such as healthcare practitioners and insurers, can safely integrate and use artificial intelligence (AI) tools in their operations. Unlike other federal agencies that regulate the tools themselves, [1] OCR regulates the use of these tools when providers rely on them to make healthcare and benefits decisions. In its letter, OCR recognizes the vast potential benefits of AI, including reducing clinician burnout and increasing access to quality care. Uses of AI in healthcare include, to name a few, screening, risk prediction, diagnosis, prognosis, clinical decision-making, treatment planning, healthcare operations, and resource allocation, all of which affect patient care.
The final rule implementing Section 1557 of the Affordable Care Act [2] prohibits covered entities from discriminating against individuals through patient care decision support tools, [3] including AI. Specifically, the final rule requires covered entities to take reasonable steps to identify and mitigate the risk of discrimination when they use AI and other emerging technologies in patient care that use race, color, national origin, sex, age, or disability as input variables. In OCR’s view, a practical, market-driven approach to AI in healthcare that protects against discrimination and safeguards patient privacy aligns with the values of fairness, accountability, and responsible innovation. To this end, OCR has developed a strategic plan.
Although not the focus of this Article, privacy and security considerations are equally important for overall quality patient care. For this reason, covered entities and business associates must comply with the Health Insurance Portability and Accountability Act (HIPAA) when using AI.
This Article begins by outlining the relevant portions of the Section 1557 final rule and its application to AI in healthcare. It then explains the two core regulatory requirements: reasonably identifying risks of discrimination and taking reasonable steps to mitigate them. This Article explores practical measures highlighted by OCR, such as adopting policies, implementing oversight mechanisms, and training staff to ensure compliance. It concludes with a forward-looking perspective on fostering transparency and accountability to promote equitable and responsible AI use in patient care.
Identifying and Mitigating Risks of Discrimination
The Section 1557 final rule’s general prohibition against discrimination under § 92.210 took effect July 5, 2024, providing that “[a] covered entity must not discriminate based on race, color, national origin, sex, age, or disability in its health programs or activities through patient care decision support tools.” This provision applies longstanding civil rights principles to the use of patient care decision support tools, clarifying that these protections continue to apply even as the technology changes. [4] The final rule’s affirmative requirements to make reasonable efforts to identify and mitigate discrimination risks in the use of patient care decision support tools like AI and other emerging technologies take effect May 1, 2025. OCR encourages all entities to review their use of such tools to ensure compliance with Section 1557 and to implement measures to prevent discrimination.
Whether a covered entity made reasonable efforts to mitigate discrimination risks may differ depending on several factors, including the context in which the tool was used, the steps taken to understand the risks, the size of the entity, and the policies used to address complaints. A covered entity’s mitigation efforts under § 92.210(c) may vary based on the input variable or factor and the purpose of the tool in question. For example, OCR acknowledges that some input variables, such as race, are more suspect and may warrant greater scrutiny than others, such as age. The latter is more likely to have a clinical, evidence-based purpose and may not require as extensive mitigation efforts.
The Section 1557 final rule imposes two regulatory requirements on covered entities using patient care decision support tools. First, it places an ongoing duty on covered entities to make reasonable efforts to identify the risk of discrimination when the tools they use contain inputs that measure race, color, national origin, sex, age, or disability. The rule requires reasonable efforts but does not prescribe which steps must be taken to identify such risks. Second, after covered entities identify a risk of discrimination, the final rule requires them to make reasonable efforts to mitigate that risk. [5] As with the identification requirement, it does not mandate any specific mitigation actions. This Article addresses each requirement in turn.
Reasonable Efforts to Identify Risk of Discrimination
As mentioned, the final rule does not require any specific ongoing efforts to identify discrimination risks when covered entities use tools that contain inputs measuring protected classifications such as race. But OCR does provide the following non-exhaustive list of such efforts:
Review OCR’s discussion of risks in the use of such tools in the Section 1557 final rule, including categories of tools used to assess risk of heart failure, cancer, lung function, and blood oxygen levels; [6]
Research published peer-reviewed studies in medical journals or publications from healthcare professional and hospital associations; [7]
Use, implement, or create AI safety registries developed by nonprofit AI organizations or others, including internal registries used by the covered entity to track use cases within the organization (a minimal registry sketch follows this list); and
Obtain information from vendors about the input variables or factors included in existing patient care decision support tools.
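By way of illustration only, an internal registry entry might capture each tool’s inputs and flag those that measure protected classifications. The final rule does not prescribe any registry format; every field name and value in the following Python sketch is a hypothetical assumption chosen for illustration:

```python
# A minimal, illustrative internal AI registry entry. The Section 1557
# final rule does not prescribe a registry format; all fields here are
# hypothetical assumptions, not regulatory requirements.
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    tool_name: str
    vendor: str
    use_case: str                 # e.g., "heart failure risk scoring"
    input_variables: list[str]    # every input the tool consumes
    # Inputs that measure race, color, national origin, sex, age, or disability.
    protected_inputs: list[str] = field(default_factory=list)
    discrimination_risk_identified: bool = False
    mitigation_steps: list[str] = field(default_factory=list)

# Example entry for a hypothetical vendor tool:
registry = [
    RegistryEntry(
        tool_name="CardiacRisk v2",
        vendor="ExampleVendor",
        use_case="heart failure risk scoring",
        input_variables=["age", "sex", "ejection_fraction", "creatinine"],
        protected_inputs=["age", "sex"],
    )
]
```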
Policies & Procedures to Identify Discrimination Risks
Covered entities should consider implementing policies and procedures to identify whether using a patient care decision support tool risks discrimination. A covered entity could adopt a policy to determine whether it uses any of the patient care decision support tools discussed in the preamble to the Section 1557 final rule (e.g., the race-adjusted estimated glomerular filtration rate (eGFR) equation, pulse oximeters, and Crisis Standards of Care plans). [8] The covered entity’s policy could also require its procurement personnel to obtain information from vendors about the input variables or factors included in existing patient care decision support tools and in tools the entity intends to procure or implement. The policy might indicate a preference for procuring tools whose vendors disclose the input variables and factors included in them. The entity could also provide staff training on its policy and review its uses of patient care decision support tools to ensure nondiscrimination.
For covered entities that internally develop patient care decision support tools, an internal policy could require developers to document whether the tools under development measure any protected classifications. Such a policy could also require appropriate staff to determine whether an existing tool’s output varies depending on the tool’s measurements of race, color, national origin, sex, age, or disability. For technological tools not yet introduced to an entity's production environment, the entity's policy could require IT staff to develop tests to identify whether the tool includes input variables or factors that measure protected classifications. Such a test might be helpful in determining whether the output of a technological tool varies depending on the tool's measurement of a protected classification.
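To make the idea concrete, the following Python sketch shows one hypothetical form such a test could take: hold every other input fixed, vary a single protected classification, and flag the tool for review if its output changes. The tool interface (predict_risk), the patient fields, and the candidate values are all assumptions for illustration, not features of any actual tool:

```python
# An illustrative test of whether a tool's output varies with a protected
# classification when all other inputs are held fixed. "predict_risk", the
# patient fields, and the candidate values are hypothetical assumptions,
# and the tool is assumed to return a numeric risk score.
from copy import deepcopy

PROTECTED_VALUES = {
    "race": ["Black", "White", "Asian", "Hispanic"],
    "sex": ["female", "male"],
}

def output_varies_by(tool, baseline_patient, attribute, values, tolerance=1e-9):
    """Return True if varying only `attribute` changes the tool's output."""
    outputs = []
    for value in values:
        patient = deepcopy(baseline_patient)  # hold all other inputs fixed
        patient[attribute] = value
        outputs.append(tool(patient))
    return max(outputs) - min(outputs) > tolerance

# Usage with a hypothetical tool and patient record:
# for attr, values in PROTECTED_VALUES.items():
#     if output_varies_by(predict_risk, sample_patient, attr, values):
#         print(f"Flag for review: output varies with {attr}")
```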
Reasonable Mitigation of Identified Risk of Discrimination
After covered entities identify a risk of discrimination, the final rule requires them to make reasonable efforts to mitigate the risk posed by using these tools. Again, it does not require any specific actions. Instead, OCR provides a non-exhaustive list of such efforts, which includes:
Establish written policies and procedures on how patient care decision support tools are used in decision-making, as well as governance measures;
Monitor potential impacts and develop ways to address complaints of alleged discrimination;
Maintain an internal AI registry, or reference AI registries developed by nonprofit AI organizations or others, to give the covered entity information about what is being used internally and to facilitate regulatory compliance;
Utilize staff to override and report potentially discriminatory decisions made by a patient care decision support tool, including a mechanism for ensuring a “human in the loop” review of a tool’s decision by a qualified human professional;
Train staff members on how to report results and interpret decisions made by the tool, including any factors required by other Federal rules;
Establish a registry of tools identified as posing a risk of discrimination and review previous decisions made by these tools;
Audit the performance of tools in “real world” scenarios and monitor them for discrimination (a minimal audit sketch follows this list); and
Disclose to patients the use of patient care decision support tools that the entity has identified as posing a risk of discrimination. OCR notes that transparency in how AI systems are developed and deployed can drive patient confidence without stifling innovation.
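As a rough illustration of the auditing item above, a covered entity might periodically compare a tool’s flag rates across the groups recorded in its decision logs. The record schema, field names, and disparity threshold in this Python sketch are hypothetical assumptions; a real audit would add appropriate statistical testing and clinical review:

```python
# An illustrative "real world" audit: compare a tool's flag rates across
# groups recorded in the entity's decision logs. The record schema and the
# disparity threshold are hypothetical assumptions.
from collections import defaultdict

def flag_rate_disparities(decisions, group_field, flagged_field, threshold=0.10):
    """Return per-group flag rates and the group pairs whose rates differ
    by more than `threshold`."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for record in decisions:
        group = record[group_field]
        totals[group] += 1
        flagged[group] += 1 if record[flagged_field] else 0
    rates = {group: flagged[group] / totals[group] for group in totals}
    disparities = [
        (a, b, abs(rates[a] - rates[b]))
        for a in rates for b in rates
        if a < b and abs(rates[a] - rates[b]) > threshold
    ]
    return rates, disparities

# Usage with hypothetical log records:
# rates, disparities = flag_rate_disparities(decision_log, "race", "denied")
```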
In addition to the Section 1557 final rule, OCR recommends reviewing the National Institute of Standards and Technology’s (NIST's) AI Risk Management Framework (AI RMF) [9] and its Generative AI Profile. [10]
Charting a Path Forward: Responsible AI Adoption
Ultimately, the future of AI in healthcare hinges on an intricate balance between innovation and equity. Integrating AI in healthcare has transformative potential to improve clinical outcomes, enhance decision-making, and reduce barriers to quality care. However, as OCR notes, this potential must be harnessed with reasonable caution to uphold civil rights and prevent discriminatory practices. The Section 1557 final rule provides a framework for covered entities to identify and mitigate risks, ensuring that the benefits of AI are equitably distributed.
Compliance with these requirements demands more than meeting regulatory obligations. It calls for a paradigm shift toward accountability, transparency, and ethical innovation. The path forward requires a collective commitment to responsible innovation, ensuring that every patient benefits from the promise of emerging technologies without compromising their rights or dignity. The healthcare sector can help set global standards for ethical AI use by aligning technological advancements with foundational civil rights principles.

[1] See, e.g., Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, Final Rule, 89 Fed. Reg. 1192 (January 9, 2024), https://www.federalregister.gov/d/2023-28857, issued by the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology; https://www.fda.gov/media/166704/download, issued by the Food and Drug Administration; and https://ai.cms.gov/assets/CMS_AI_Playbook.pdf, issued by the Centers for Medicare & Medicaid Services.
[2] On May 6, 2024, OCR published the final rule implementing Section 1557 (“final rule”) (codified at 45 Code of Federal Regulations (C.F.R.) part 92). Section 1557 prohibits discrimination on the basis of race, color, national origin, age, sex, and disability in health programs or activities that receive Federal financial assistance from HHS, health programs or activities established under Title I, such as State-based Exchanges, and HHS-administered health programs or activities, including the Federally-facilitated Exchanges.
[3] A patient care decision support tool is “any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities.” 45 C.F.R. § 92.4. OCR notes that though using patient care decision support tools could also implicate other civil rights laws, such as Titles II and III of the Americans with Disabilities Act and Section 504 of the Rehabilitation Act, it only addresses nondiscrimination obligations under Section 1557 of the ACA.
[4] 45 C.F.R. § 92.210(a).
[5] 45 C.F.R. § 92.210(c).
[6] See 89 Fed. Reg. 37642-51 (further outlining examples of what constitutes reasonable efforts under the rule).
[7] See 89 Fed. Reg. 37642-51 (further outlining examples of what constitutes reasonable efforts under the rule).
[8] See 89 Fed. Reg. 37644, 37645, and 37647.
[9] NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1 (January 2023), https://doi.org/10.6028/NIST.AI.100-1.
[10] NIST, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, NIST AI 600-1 (July 2024), https://doi.org/10.6028/NIST.AI.600-1.