On 30 April 2026, the Australian Prudential Regulation Authority (APRA) issued a letter calling for a step-change in how banks, insurers and superannuation trustees manage AI-related risks as the technology continues to evolve rapidly. In particular, APRA warns that governance, risk management, assurance and operational resilience practices are not keeping pace with the scale, speed and complexity of AI adoption.

Boards

With respect to boards, APRA observed strong interest in, and pursuit of, AI’s potential benefits and strategic imperatives, but also noted an overreliance on vendor presentations and summaries without sufficient examination of key AI risks, such as unpredictable model behaviour and the impact on critical operations.

APRA expects boards, at a minimum, to:

  • Maintain sufficient understanding and literacy with respect to AI in order to set strategic direction and provide effective challenge and oversight.
  • Oversee an AI strategy that is consistent with the entity’s risk appetite and tolerance settings, supported by effective monitoring and reporting (including for third-party dependencies), with clearly defined triggers aligned to resilience objectives so that timely action can be taken when AI systems are not operating as expected (a simple illustration follows this list).
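
The letter does not prescribe what such triggers should look like in practice. As a purely illustrative sketch, the Python snippet below shows one way a tolerance-based trigger might be expressed; the metric names and threshold values are hypothetical, not drawn from the letter.

```python
# Illustrative only: a monitoring trigger tied to board-approved tolerances.
TOLERANCES = {
    "model_error_rate": 0.02,        # maximum acceptable error rate
    "vendor_api_downtime_mins": 60,  # monthly downtime for a third-party dependency
}

def breached_tolerances(observed: dict[str, float]) -> list[str]:
    """Return the metrics that breach tolerance settings, so that escalation
    and board reporting can be triggered promptly."""
    return [m for m, v in observed.items() if v > TOLERANCES.get(m, float("inf"))]

# Example: the vendor dependency exceeded its downtime tolerance this month.
print(breached_tolerances({"model_error_rate": 0.01, "vendor_api_downtime_mins": 95}))
# -> ['vendor_api_downtime_mins']
```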

APRA expects entities to establish consistent governance arrangements that include, at a minimum:

  • Frameworks (policies, standards, guidance) and reporting lines to promote safe, responsible and sustainable adoption of AI.
  • Ownership and accountability across the AI lifecycle, from design and development through to deployment, monitoring and decommissioning.
  • An inventory of AI tooling and AI use cases (a minimal sketch of such a record follows this list).
  • Human involvement and accountability for high-risk decisions.
  • Training and education of staff on AI use, misuse, limitations and secure practices.
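
APRA does not mandate a format for the inventory. The Python sketch below illustrates the kind of fields such a record might capture; the field names and risk-tier taxonomy are assumptions for illustration, not APRA requirements.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIUseCase:
    """One entry in a hypothetical AI use-case inventory."""
    name: str
    owner: str              # accountable individual or function
    lifecycle_stage: str    # design, development, deployment, monitoring, decommissioned
    risk_tier: RiskTier
    suppliers: list[str] = field(default_factory=list)  # third- and fourth-party dependencies
    supports_critical_operation: bool = False
    human_in_the_loop: bool = False
    last_reviewed: date | None = None

    def requires_escalation(self) -> bool:
        # High-risk use cases without human involvement warrant escalation.
        return self.risk_tier is RiskTier.HIGH and not self.human_in_the_loop
```

Recording lifecycle stage, ownership and supplier dependencies in a single record also supports the accountability and supply-chain expectations discussed below.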

Supplier risk management

APRA observed that some entities are heavily dependent on a single provider for multiple AI use cases.

APRA expects entities to manage supplier risks. This would include, at a minimum:

  • Mapping and maintaining visibility over the full AI supply chain, including material third‑party and fourth‑party dependencies.
  • Contractual and governance arrangements that provide sufficient transparency, auditability and assurance over AI services.
  • The ability to understand model behaviour, material changes, performance issues and outcomes, and risk management practices across the service lifecycle.
  • Active management of concentration risk, including planning for plausible and systemic failure scenarios and assessing the credibility and feasibility of substitution, portability or exit arrangements for critical AI providers (a simple mapping sketch follows this list).
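
Concentration mapping lends itself to simple tooling over the AI inventory. The hypothetical Python sketch below groups use cases by provider and flags any provider supporting several of them; the threshold and example data are invented for illustration.

```python
from collections import defaultdict

def concentration_report(
    supplier_map: dict[str, list[str]], threshold: int = 3
) -> dict[str, list[str]]:
    """supplier_map maps each use case to the suppliers it depends on.
    Returns providers supporting at least `threshold` use cases."""
    by_supplier: dict[str, list[str]] = defaultdict(list)
    for use_case, suppliers in supplier_map.items():
        for supplier in suppliers:
            by_supplier[supplier].append(use_case)
    return {s: ucs for s, ucs in by_supplier.items() if len(ucs) >= threshold}

# Example: three separate use cases all depend on the same hosted-model provider.
deps = {
    "customer_chatbot": ["VendorA", "CloudX"],
    "claims_triage": ["VendorA"],
    "credit_scoring": ["VendorA", "DataCo"],
}
print(concentration_report(deps))
# -> {'VendorA': ['customer_chatbot', 'claims_triage', 'credit_scoring']}
```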

Traditional change management and assurance

AI risks can cut across multiple domains at regulated entities, and APRA has observed that firms’ existing change management and assurance approaches are often fragmented and may not provide sufficient assurance.

APRA expects entities to adopt effective assurance mechanisms and approaches. This would include, at a minimum:

  • Employing globally recognised control frameworks, including control libraries and change control, for AI implementations.
  • Applying integrated assurance across cyber security, data governance, model performance risk, operational resilience, privacy, and conduct risks.
  • Ensuring that second-line risk management and internal audit functions possess the technical capability and tooling to independently assess AI systems, including probabilistic models and agentic workflows.
  • Conducting comprehensive risk and information security assessments prior to deployment and throughout the lifecycle. Monitoring should be continuous and proportionate to the criticality of the use case, including consideration of model purpose, limitations, explainability and potential customer impacts (a minimal deployment-gate sketch follows this list).
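
One way to make pre-deployment assessment non-optional is to encode it as a gate in the release process. The sketch below is a minimal illustration; the assessment categories are assumptions rather than an APRA taxonomy.

```python
# Hypothetical deployment gate; assessment names are illustrative only.
REQUIRED_ASSESSMENTS = {"risk", "information_security", "privacy", "model_performance"}

def deployment_gate(completed_assessments: set[str]) -> None:
    """Raise if any required pre-deployment assessment is outstanding."""
    missing = REQUIRED_ASSESSMENTS - completed_assessments
    if missing:
        raise RuntimeError(f"Deployment blocked; outstanding assessments: {sorted(missing)}")

deployment_gate({"risk", "information_security", "privacy", "model_performance"})  # passes
```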

Cyber

APRA is also engaging across the sector on the potential for increased cyber threats from high-capability frontier AI models such as Anthropic Mythos.

APRA expects entities to actively manage information security vulnerabilities and threats. This would include:

  • Assessing the implications of AI reliance for operational resilience and business continuity. Where AI supports critical operations, credible fallback processes are required.
  • Security controls and capabilities that effectively address AI‑specific threats and attack paths. This would include strong privileged access management, timely patching, hardened configurations, automated vulnerability discovery, penetration testing, and controls over agentic and autonomous workflows.
  • Robust security testing across AI‑generated code, software components and libraries (see the sketch after this list).
  • Ongoing consideration of third-party and concentration implications in relation to common platforms, services, and providers.
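
Security testing of AI-generated code can begin with basic supply-chain hygiene. As a minimal, hypothetical sketch, the Python snippet below flags dependencies in a requirements file that are not pinned to an exact version; a real programme would layer dedicated scanning and penetration testing on top.

```python
import re

# Matches dependencies pinned to an exact version, e.g. "requests==2.31.0".
PINNED = re.compile(r"^[A-Za-z0-9_.\-]+==\d")

def unpinned_requirements(path: str) -> list[str]:
    """Flag entries in a requirements file that are not pinned to an exact
    version: a basic hygiene check when reviewing AI-generated contributions."""
    flagged = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and not PINNED.match(line):
                flagged.append(line)
    return flagged
```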

Next steps

APRA is currently finalising its forward plan for the supervision of AI risks, taking a proportionate approach to entity prudential reviews, thematic activities and AI supplier engagement. APRA will continue to monitor the use of AI to assess potential prudential risks and consider whether further APRA policy action may be needed.

While the letter provides guidance based on current observations, APRA strongly encourages entities to engage early with APRA’s Non-Financial Risk Team via its supervisors on any unexpected or heightened AI-related risk concerns, including where existing risk management approaches may be challenged.