Framing the Future: Should Cyber and AI Be Standalone Strategic Risks?
Cyber threats are escalating and AI regulation is accelerating globally, so Boards are being asked to revisit how these risks are framed. If oversight is to keep pace with the speed and scale of technological change, the way we categorise risk must evolve.
I am working with an organisation that is currently updating its strategic risks for its annual report. This led to an interesting debate about whether cybersecurity and artificial intelligence should be stand-alone risks in their own right, or sub-risks of information governance.
This is a strategic choice for Boards. The way risks are framed shapes visibility, accountability, and ultimately the level of Board oversight.
So, should you elevate cyber and AI to stand-alone risks to signal their importance and current focus, or integrate them into information governance to maintain consistency and avoid increasing the number of strategic risks? There are pros and cons to each approach.
The stand-alone approach
Treating cyber and AI as distinct strategic risks allows them to receive greater attention and prioritisation from the Board, a level of focus frequently required by regulators, auditors, and investors. Accountability is also elevated within the organisation, and these specific risk areas receive dedicated focus rather than being “diluted” by broader information governance concerns.
However, Boards can feel swamped if every emerging risk becomes a stand-alone strategic risk. There is also potential duplication with information governance, data protection, and operational resilience unless boundaries are clearly defined, and assurance and controls will need strong coordination to avoid gaps or inconsistencies.
The integrated approach
When cyber and AI are incorporated as sub-risks of information governance, you get an integrated view of the data lifecycle. Cyber and AI risks often stem from weaknesses in data quality, access, storage, and governance, so keeping them together creates consistency and allows streamlined reporting. Controls, policies, and assurance activities can be consolidated under one framework.
However, there are disadvantages. Given the current climate, integrating the risks can mean cyber and AI become “buried” within the broader category, and the level of systemic risk across the organisation may be underestimated. Cyber and AI failures can cause enterprise-wide disruption, reputational damage, and regulatory action, consequences that often extend beyond the scope of a traditional information governance risk.
Distinct risks, distinct strategies
Remember, cyber and AI are not the same. They are frequently grouped as if they represent identical risks requiring the same mitigation strategies. Cyber risk focuses on protecting systems, networks, and data from malicious attacks and technical vulnerabilities, while AI risk centres on how automated decision-making can create ethical, operational, safety, and regulatory issues even when no attacker is involved.
So how do Boards decide?
The Board needs to satisfy itself that the categorisation will result in the right level of visibility and accountability, and allow the right level of assurance to meet stakeholder and regulatory expectations. Three factors are normally considered:
- Materiality – if cyber and AI could threaten organisational viability, they belong as stand-alone strategic risks.
- Maturity – organisations with a CISO, an AI governance lead, or a comprehensive digital transformation programme may choose to align cyber and AI risks with the operating model.
- Clarity of assurance – if assurance and controls become “muddied” or duplicated, separating the risks often improves clarity and oversight.
Hybrid model
Many organisations now use a hybrid model to keep visibility high whilst avoiding unnecessary fragmentation (a short illustrative sketch follows the list):
- Cybersecurity as a stand-alone strategic risk
- AI governance either as a sub-risk of cyber, or as a stand-alone risk if AI is material to operations, safety, or ethics
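To make the framing concrete, here is a minimal sketch of how the hybrid model might be encoded in a machine-readable risk register. Everything here is a hypothetical assumption for illustration: the `StrategicRisk` structure, its field names, the `Chief Data Officer` owner, and the `ai_is_material` flag are not a prescribed taxonomy or any specific organisation's register.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: field names and owners are illustrative
# assumptions, not a standard risk-register schema.

@dataclass
class StrategicRisk:
    name: str
    owner: str                        # accountable executive, e.g. the CISO
    standalone: bool                  # True = reported to the Board in its own right
    sub_risks: list[str] = field(default_factory=list)

# Hybrid model: cybersecurity stands alone; AI governance sits beneath it
# until AI is judged material to operations, safety, or ethics.
register = [
    StrategicRisk(
        name="Cybersecurity",
        owner="CISO",
        standalone=True,
        sub_risks=["AI governance"],
    ),
    StrategicRisk(
        name="Information governance",
        owner="Chief Data Officer",   # illustrative owner
        standalone=True,
        sub_risks=["Data protection", "Records management"],
    ),
]

ai_is_material = True  # a Board judgement, not a computed value

if ai_is_material:
    # Elevate AI governance from sub-risk to stand-alone strategic risk.
    register[0].sub_risks.remove("AI governance")
    register.append(
        StrategicRisk(name="AI governance",
                      owner="AI governance lead",
                      standalone=True)
    )
```

The point of the sketch is simply that the hybrid model is a reversible framing decision: promoting or demoting a risk changes where it sits and who reports it, not whether the underlying controls exist.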
Evolution
We all know risk frameworks are not static. As cyber threats evolve and AI regulation accelerates, Boards may need to revisit their positioning. Remember, what matters most is not the label, but whether oversight is strong enough to protect the organisation and its stakeholders.
Our Services
GRC Catalyst supports organisations by shaping strategic risk positioning, designing governance and operating models, developing accessible cyber and AI policies, mapping assurance to highlight gaps and duplication, building board capability, and preparing organisations for emerging regulatory expectations. Our focus is on clarity, proportionality, and practical implementation.
Disclosure
The concepts and ideas in this article are mine or have been referenced; I developed the body of the text and conducted the final editorial check. I used AI as a tool for research, to improve the flow and grammar of the article, and to check for factual inaccuracies.