CISA releases guidelines on AI Based attacks

The US Cybersecurity and Infrastructure Security Agency (CISA) has published guidelines to protect critical infrastructure against AI-based attacks.

CISA, in collaboration with Sector Risk Management Agencies and regulatory agencies, conducted sector-specific assessments of AI risks to U.S. critical infrastructure.

The analysis grouped AI risks into three categories:

  • Attacks Using AI
  • Attacks Targeting AI Systems
  • Failures in AI Design and Implementation
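The three categories above can be modeled in code for triage or reporting purposes. The sketch below is purely illustrative: the category names come from CISA's analysis, but the keyword-based triage function and its keywords are hypothetical and not part of any CISA methodology.

```python
from enum import Enum

class AIRiskCategory(Enum):
    """The three AI risk categories from CISA's cross-sector analysis."""
    ATTACKS_USING_AI = "Attacks Using AI"
    ATTACKS_TARGETING_AI = "Attacks Targeting AI Systems"
    DESIGN_FAILURES = "Failures in AI Design and Implementation"

def categorize(description: str) -> AIRiskCategory:
    """Toy keyword triage of a reported risk -- illustrative only."""
    text = description.lower()
    if "deepfake" in text or "ai-generated" in text:
        return AIRiskCategory.ATTACKS_USING_AI
    if "poisoning" in text or "evasion" in text or "model extraction" in text:
        return AIRiskCategory.ATTACKS_TARGETING_AI
    # Anything else is treated as a design/implementation failure.
    return AIRiskCategory.DESIGN_FAILURES

print(categorize("data poisoning of a training pipeline").value)
```

A real triage process would rely on analyst judgment rather than keywords; the enum simply shows how the taxonomy could anchor consistent labeling across incident reports.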

AI risk management for critical infrastructure is an ongoing process throughout the AI lifecycle. The guidelines integrate the NIST AI Risk Management Framework (AI RMF) into enterprise risk management programs for critical infrastructure. The AI RMF Core consists of the Govern, Map, Measure, and Manage functions.


The Govern function within the AI RMF establishes an organizational approach to AI risk management within existing enterprise risk management. Recommended actions for addressing risks throughout the AI lifecycle are integrated into the Map, Measure, and Manage functions. These guidelines build on the AI safety and security risk management practices proposed in the NIST AI RMF.
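The relationship between the four AI RMF Core functions can be sketched as a simple mapping. Only the four function names come from the AI RMF; the example activities attached to them below are hypothetical illustrations, not text from the framework or the CISA guidelines.

```python
# Illustrative mapping of the four NIST AI RMF Core functions to the
# kind of activity each covers. The activity descriptions are
# assumptions for illustration, not framework language.
AI_RMF_CORE = {
    "Govern": "Establish AI risk policies within enterprise risk management",
    "Map": "Identify where and how AI is used across operations",
    "Measure": "Assess and track identified AI risks over time",
    "Manage": "Prioritize and act on risks throughout the AI lifecycle",
}

for function, activity in AI_RMF_CORE.items():
    print(f"{function}: {activity}")
```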

The identified risks are context-dependent, which means critical infrastructure operators should consider both sector-specific and context-specific factors when assessing and mitigating AI risks. Specific sectors may need to define their own tailored guidelines for managing AI risk.

Stakeholders should focus on different aspects of the AI lifecycle depending on their sector or role, whether they are involved in the design, development, procurement, deployment, operation, management, maintenance, or retirement of AI systems.

DHS last week revealed a new AI Safety and Security Board whose members include tech sector CEOs Sundar Pichai of Alphabet, OpenAI’s Sam Altman, and Advanced Micro Devices’ Lisa Su.


Critical infrastructure owners and operators can foster a culture of risk management by aligning AI safety and security priorities with their own organizational principles and strategic priorities. This organizational approach follows a secure by design philosophy where leaders prioritize and take ownership of safety and security outcomes and build organizational structures that make security a top priority.
