AI risk isn't just internal - it's outsourced
The risks associated with artificial intelligence (AI) are not just confined to your organisation. Most organisations rely increasingly on a wide range of third parties to provide goods or services that are critical to your business success. Every partner, supplier, and vendor could be embedding AI into their processes and services creating the new dimension of “shadow AI”; systems that impact you indirectly but are outside your direct control.
Your organisation is only as safe as the AI your partners use.
Third party AI risk could take many formats. Think of the following examples. Could they happen to your organisation?
- Data exposure: third party AI models may be trained on your sensitive data. This data could be leaked unintentionally or misused to support another client. Or your third-party chatbot could leak sensitive user information or hallucinate, giving incorrect answers, eroding trust in your brand.
- Regulatory: suppliers using AI may not comply with evolving AI regulations and best practices including the EU AI Act, the US AI Bill of Rights, and NIST AI Risk Management Framework. This could put your organisation at risk. For example, if a supplier’s AI-based quality assessment model mis-classifies defects, it could lead to product failures and costly recalls for your company.
- Model bias and discrimination: vendors’ AI systems could introduce bias in areas where they support your organisation e.g. hiring, customer service, or other data processing services. You organisation remains legally and reputationally accountable for the outcomes of these biased models. Imagine an AI screening tool in a recruitment agency discriminates against certain groups, leading to reputational and legal fallout for you as their client.
- Security vulnerabilities: third-party generative AI integrations can inadvertently expose your organisation to attack. Poorly secured AI tools may be vulnerable to prompt injection (where malicious inputs trick a model into revealing sensitive information) or data exfiltration through insecure APIs (application programming interfaces) that connect to your systems. Weaknesses in a vendor’s AI implementation can expose your data, users, or infrastructure to risk.
- Dependence and opacity: “black box” AI from vendors limits transparency. It is difficult to assess how their models make decisions that affect your customers or operations. This lack of visibility makes detecting errors, bias, or malicious manipulation more challenging.
Managing Third Party AI Risk
So, what can you do to minimise the risk to your organisation? Visibility and accountability are essential. You cannot govern what you cannot see. Contracts and due diligence need to evolve. “AI use clauses” are set to become as common as data protection clauses. Your due diligence questionnaires and reviews will expand to AI and AI governance.
- Map your AI supply chain.
- Identify where third parties use AI in your operations or to support your operations, even if they use it indirectly.
- Update your third-party risk register to include “AI dependency” as an assessment category. Identify core business processes that rely on AI and assess the availability of manual or alternative processes should AI systems fail.
- Deepen Due Diligence.
- What AI systems are they using that could impact your data, your customers, or influence your decision-making?
- How are their models trained, tested, and governed. Specifically, are they using your information for training purposes, and are you sharing proprietary information with them?
- Are they compliant with relevant AI governance standards (e.g. ISO/IEC 42001, NIST AI RMF)? External certifications provide crucial assurance that products and services meet the required standards.
- Update Contracts and Service Level Agreements.
- Make sure your contracts require AI transparency, data handling disclosures, and incident reporting. These clauses safeguard you and build trust in the third-party organisation.
- Include the right to audit and/or require that AI assurance documentation is shared. This provides independent confirmation that appropriate controls are in place moving beyond simple vendor assurances.
- Establish Shared Accountability.
- Treat AI governance like cybersecurity and apply a shared responsibility model. Define what protections and controls each party must have in place.
- Encourage your vendors to align to your AI principles and ethics policies. Assess this alignment early in the selection process and reassess during onboarding.
- Continuous Monitoring and Oversight.
- Require periodic updates from vendors about AI changes or new deployments. These reports ensure you are aware of any changes that may impact your risk profile.
- Include AI-specific elements in vendor risk assessments and audits. It’s important to go beyond traditional compliance checks and evaluate how AI is being deployed, governed, and controlled.
Looking Ahead: AI Assurance and Digital Trust
As AI becomes embedded in every service and system, AI assurance will become the foundation of digital trust. Organisations that build robust AI supply chain governance proactively will not only protect regulatory compliance and embed resilience but will also secure a competitive advantage by building customer confidence.
Where are you on the journey?
Our Services
GRC Catalyst offers specialised Governance, Risk, and Compliance (GRC) Advisory Services designed to help organisations navigate complexity, including challenges like AI adoption. In relation to third-party AI risk, we help clients embed robust governance frameworks, perform GRC maturity assessments, and ensure strategic alignment. Our expertise enables clients to address regulatory scrutiny and stakeholder demands proactively, supporting the establishment of the clarity and accountability needed to govern outsourced “Shadow AI” effectively.
Disclosure
The concepts and ideas in this article are mine or have been referenced; I developed the body of the text and conducted the final editorial check. I used AI as a tool for research, to improve the flow and grammar of the article, and to check for factual inaccuracies.