
Navigating HIPAA's New Proposed AI Rule: Key Implications for Health Systems

Writer: Sam Khan
Center for Health AI Regulation, Governance & Ethics

The Department of Health and Human Services (HHS) has proposed critical updates to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security Rule, marking the rule's first major revision since 2013. The updates aim to fortify cybersecurity protections for electronic protected health information (ePHI) in response to evolving healthcare technologies and increasing cyber threats. Notably, this is the first time a HIPAA rulemaking has explicitly addressed artificial intelligence (AI). While the proposed rule covers a wide range of cybersecurity improvements, this article focuses specifically on the implications of these updates for AI systems in healthcare.


Why the Update?

Healthcare systems face a growing number of cyberattacks as they come to rely ever more heavily on digital applications and rapidly advancing technology. HHS has identified inconsistent compliance with the existing Security Rule and aims to establish clearer, more robust regulations to ensure all entities maintain strong cybersecurity defenses. The updates are designed to:

  • Adapt to evolving healthcare delivery technologies.

  • Address rising cybersecurity threats and breach trends.

  • Correct common compliance deficiencies uncovered in investigations.

  • Incorporate modern cybersecurity guidelines and best practices.

  • Reflect court rulings impacting Security Rule enforcement.


AI in the Spotlight: New Compliance Mandates

AI technologies are reshaping healthcare, offering significant benefits in areas such as diagnostic imaging, personalized treatment, and operational efficiency. However, these advancements also introduce new vulnerabilities that require stronger oversight. Recognizing these emerging risks, the proposed HIPAA updates include specific measures designed to mitigate AI-related threats and safeguard ePHI throughout the entire lifecycle of AI systems.


Addressing AI-Specific Risks

Keeping AI systems that handle ePHI safe and secure requires risk mitigation across multiple areas, including cybersecurity, data privacy, and critical infrastructure. AI systems are particularly vulnerable to:

  • Data Poisoning: Manipulation of training data to produce faulty outputs.

  • Adversarial Attacks: Subtly perturbed inputs designed to trick AI systems into incorrect predictions (a toy illustration follows this list).

  • Bias in AI Models: Inherited biases from training data leading to discriminatory outcomes. Section 1557 of the Affordable Care Act provides a regulatory framework for preventing such discrimination.
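
To make the adversarial-attack risk concrete, the toy Python sketch below shows how a small, bounded perturbation can swing the output of a simple linear risk model. Everything here is invented for illustration; the weights, features, and epsilon are assumptions, not anything drawn from the proposed rule.

    import numpy as np

    # Toy linear "risk score" model: p = sigmoid(w @ x + b).
    rng = np.random.default_rng(0)
    w = rng.normal(size=8)   # stand-in for learned model weights
    b = 0.1
    x = rng.normal(size=8)   # a benign input (e.g., normalized vitals)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    print(f"original score:  {sigmoid(w @ x + b):.3f}")

    # FGSM-style perturbation: the gradient of the logit (w @ x + b) with
    # respect to x is simply w, so stepping each feature against sign(w),
    # bounded by epsilon, lowers the score as fast as possible.
    epsilon = 0.5
    x_adv = x - epsilon * np.sign(w)
    print(f"perturbed score: {sigmoid(w @ x_adv + b):.3f}")

In a real imaging or clinical-scoring pipeline, an analogous perturbation can be imperceptible to a human reviewer, which is why risk analyses need to account for such inputs.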


Key AI Compliance Requirements

  • Inclusion of AI in Risk Analyses: 

    • Regulated entities must assess how AI systems interact with ePHI. This includes evaluating the type and amount of ePHI accessed by the AI tool, to whom the data is disclosed, and who receives the AI-generated output.

    • Risk analyses should identify vulnerabilities in AI algorithms and their data inputs, accounting for threats like data poisoning and adversarial attacks.

    • Regular evaluations must ensure AI models are fair, accurate, and secure.

  • Technology Asset Inventory and Network Mapping: 

    • Entities must create and maintain a detailed inventory of all technology assets, including AI software and solutions that interact with ePHI (a sketch of one possible inventory record follows this list).

    • A comprehensive network map must illustrate how ePHI moves through electronic systems, including where AI tools process the data.

    • Inventories and network maps must be reviewed and updated annually or when significant operational changes occur.

  • Verification of Business Associates' AI Safeguards: 

    • Regulated entities must confirm that business associates using AI implement required security measures.

    • Annual written verification of business associates' risk analyses and technical safeguards is required.

  • Implementation of Technical Controls: 

    • Mandatory use of multi-factor authentication (MFA) and encryption to secure AI-driven processes handling or affecting ePHI (an encryption sketch follows this list).

    • Continuous testing and validation of AI models to detect vulnerabilities and ensure data integrity.
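
The risk-analysis and inventory duties above lend themselves to structured records. The minimal Python sketch below shows one hypothetical shape for such a record; the AIAssetRecord class and its field names are illustrative assumptions, not terminology from the proposed rule.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIAssetRecord:
        name: str                     # e.g., "sepsis-prediction-model"
        vendor: str
        ephi_categories: list[str]    # type of ePHI the tool accesses
        ephi_recipients: list[str]    # to whom data and outputs are disclosed
        network_segments: list[str]   # where the tool sits on the network map
        last_risk_analysis: date
        identified_threats: list[str] = field(default_factory=list)

        def review_due(self, today: date) -> bool:
            # The proposed rule calls for review at least annually
            # or when significant operational changes occur.
            return (today - self.last_risk_analysis).days >= 365

    asset = AIAssetRecord(
        name="sepsis-prediction-model",
        vendor="ExampleVendor",
        ephi_categories=["vitals", "lab results"],
        ephi_recipients=["attending clinicians"],
        network_segments=["clinical-data-warehouse"],
        last_risk_analysis=date(2025, 1, 15),
        identified_threats=["data poisoning", "adversarial inputs"],
    )
    print(asset.review_due(date.today()))

Whatever form the record takes, the point is that each element the rule asks about (what ePHI the tool touches, who receives it, and where it sits on the network map) has an explicit, auditable home.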
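As one concrete illustration of the encryption control, the sketch below uses the Python cryptography library's Fernet recipe (AES-based authenticated encryption) to protect a record at rest. The sample record and ad hoc key handling are simplifications for demonstration; a production system would source keys from a managed key service and follow its approved encryption standards.

    from cryptography.fernet import Fernet

    # Demo only: in production, keys come from a managed key service,
    # never generated ad hoc or stored beside the data they protect.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    ephi_record = b'{"patient_id": "12345", "lab_result": "A1C 6.9"}'

    token = fernet.encrypt(ephi_record)   # ciphertext safe to persist
    restored = fernet.decrypt(token)      # requires the same key
    assert restored == ephi_record
    print("ciphertext prefix:", token[:32])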


Adoption of AI Governance Frameworks

Although the proposed rule does not explicitly require one, an effective AI governance structure is crucial for conducting comprehensive risk analyses and complying with the new AI requirements. Health systems would benefit from establishing an oversight committee, potentially led by a Chief AI Officer, along with organizational policies, procedures, and technical and physical safeguards. Training workforce members according to their AI-related responsibilities and enforcing appropriate access controls are also essential. The proposed rule does highlight the NIST AI Risk Management Framework as a valuable resource to help regulated entities understand, measure, and manage AI-related risks, impacts, and harms.


Timeline for Compliance

If finalized, the new rule would take effect 60 days after publication, and most regulated entities would then have 180 days to comply. Additional transition time may be granted for certain requirements, such as updating business associate agreements.


Request for Comment

HHS requests comment on its discussion of how the Security Rule protects ePHI in emerging technologies, including any advantages, disadvantages, or unforeseen effects of its approach. HHS also seeks input on the following specific considerations:

  • Is HHS’s understanding of how the Security Rule applies to new technologies that handle ePHI comprehensive? If not, what additional issues should be considered?

  • Could technologies harm ePHI security and privacy beyond the Security Rule's scope, and if so, what modifications are needed?

  • Are there any additional policies or technical tools to address the security of ePHI in new technologies?


Shaping the Future of AI in Healthcare

HHS's proposed updates represent a significant step forward in securing AI systems within healthcare. While the rule broadly addresses numerous cybersecurity aspects, its focus on AI-specific safeguards highlights the growing importance of responsible AI integration. By adopting these best practices, healthcare organizations can safeguard sensitive data while harnessing the transformative potential of AI. As the healthcare industry moves toward this new regulatory landscape, stakeholders must prioritize AI compliance and actively participate in shaping responsible AI integration. The future of healthcare depends on it.




 




The views shared on this blog belong to the author and should not be taken as legal advice.

© 2025 Talking Health Law. All Rights Reserved.
