Artificial-intelligence capability is no longer an experimental add-on; it now drives fraud detection, clinical triage, market analytics and countless day-to-day tasks. Gartner predicts that 40% of enterprise apps will feature task-specific AI agents by 2026, up from less than 5% in 2025.[1] Yet many commercial agreements still assume a world of deterministic code, not probabilistic models that learn, drift and occasionally hallucinate.
Whether you are supplying or procuring AI systems in Northern Ireland (and the wider UK market), the contract you enter into with your counterpart will remain the most important line of defence against the legal, commercial and reputational risks that flow from this technology. It should address both the unique, probabilistic nature of AI systems and the evolving regulatory landscape.
At present, there is no single Northern Irish or UK statute that expressly regulates AI. However, existing legislative touchpoints (most notably the UK GDPR and the Data Protection Act 2018, the Consumer Rights Act 2015, Northern Ireland’s own equality legislation, the tort of negligence and sector-specific regulatory frameworks) already create duties that will be triggered whenever an AI system is trained, procured, customised or placed into service.
In addition, while the EU Artificial Intelligence Act does not currently have direct legal effect in Northern Ireland – unless and until the Windsor Framework is interpreted to include it – it is nonetheless likely to exert de facto influence on cross-border transactions, supply chains and customer expectations. This is particularly relevant for providers whose services straddle the Great Britain–Northern Ireland border or are ultimately consumed within the EU single market.
A well-drafted contract will help ensure that rights, responsibilities and risks are clearly allocated, reducing the likelihood of disputes and supporting compliance as the legal framework develops.
In this article we consider eight clauses that require particular attention in contracts relating to the provision of AI systems and provide practical drafting tips for both vendors and customers.
1. Definitions
The starting point in any contract which involves the provision of AI systems is to ensure that the definition of “Artificial Intelligence” is clear and technology-agnostic. Incorporating a broad concept aligned, for example, to the definition proposed by the OECD (“a machine-based system that is designed to make predictions, recommendations or decisions influencing real or virtual environments”) avoids the need for iterative renegotiation each time the underlying technology evolves.
The contract should also clearly distinguish between “Provider AI” (systems owned or licensed by the provider), “Third-Party AI” (systems incorporated or embedded within the services but supplied by an external vendor) and “Customer AI” (tools furnished or specified by the client). This categorisation will help underpin the allocation of responsibilities elsewhere in the contract.
2. Performance obligations
Traditional warranties (“accurate”, “error-free” and the like) rarely suit probabilistic systems. Modern clauses should focus instead on process integrity and responsible deployment. Customers are likely to want vendors to warrant that the system has been trained on representative and appropriately curated datasets, that performance has been independently validated using suitable metrics and that any limitations or confidence intervals in the output will be disclosed to the customer.
To guard against discriminatory or biased outcomes (an issue particularly relevant under Northern Ireland’s unique equality legislation), customers are also likely to want to impose a continuing obligation on the provider to monitor model drift, re-train when bias is detected and implement a human-in-the-loop escalation pathway for decisions carrying legal or similarly significant effects.
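In practical terms, “monitoring model drift” usually means comparing the system’s live performance against a baseline agreed at acceptance and escalating to a human reviewer once it degrades beyond an agreed tolerance. The following is a minimal sketch only; the threshold values, function name and escalation mechanism are all hypothetical and would in practice be matters for the contract’s service levels:

```python
# Minimal, hypothetical sketch of a periodic drift check with
# human-in-the-loop escalation. All names and thresholds are illustrative,
# not drawn from any particular product or standard.

BASELINE_ACCURACY = 0.92   # accuracy recorded at acceptance testing
DRIFT_TOLERANCE = 0.05     # maximum agreed degradation before escalation

def check_for_drift(live_accuracy: float) -> bool:
    """Return True if live performance has drifted beyond the agreed tolerance."""
    return (BASELINE_ACCURACY - live_accuracy) > DRIFT_TOLERANCE

# Example: a scheduled monitoring job measures live accuracy at 0.85.
if check_for_drift(live_accuracy=0.85):
    # Contractually agreed response: route affected decisions to a human
    # reviewer and trigger the re-training obligation.
    print("Drift detected: escalate to human review and schedule re-training")
```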
3. Intellectual Property
Ownership of AI deliverables is seldom binary, and relying on legacy IP provisions can create uncertainty and disputes.
In most cases, the contract should confirm that, save for pre-existing IP retained by each party, all rights in any AI-generated output vest in the customer upon payment.
At the same time, providers will want to preserve ownership of the underlying model, algorithms and training data so they can reuse those assets in future engagements.
Where an outright assignment of this kind is not practicable, a commonly accepted compromise is for the customer to acquire a perpetual, royalty-free licence of the generated output, while the provider retains all rights in the underlying model and training pipelines (subject to confidentiality restrictions).
Where third-party large language models or foundation models are integrated, providers will need to ensure that they pass through any upstream licence restrictions, thereby preventing inadvertent infringement and ensuring the customer understands any residual obligations (such as a prohibition on reverse engineering or the requirement to include attribution notices).
4. Data Protection
Data protection and governance provisions warrant particular attention. Most AI systems are extremely data-hungry, but the processing of personal data must still be justified under one of the lawful bases enumerated in Article 6 of the UK GDPR, and any special category data must satisfy an Article 9 condition. Providers should also ensure they have explicit contractual authorisation from the customer to process personal data for training, improvement and support purposes, and should require the customer to obtain any necessary consents from data subjects.
A well-drafted data processing schedule, incorporating the UK Information Commissioner’s standard contractual clauses, remains essential, but it should now also include an annex addressing AI-specific matters, such as the categories of data used for model fine-tuning and the technical and organisational measures deployed to prevent re-identification.
5. Liability allocation
AI has the potential to generate harm at scale. As a result, liability provisions that were suitable for the procurement of a traditional IT system are unlikely to be adequate for AI systems. In each contract, those provisions should be stress-tested against scenarios in which the AI makes an erroneous or discriminatory decision that causes economic loss, personal injury or regulatory sanction.
Customers may wish to introduce a dedicated liability cap for AI-generated harm, potentially linked to the provider’s professional indemnity insurance limits. Providers are likely to want to reduce or exclude their liability where the customer’s data, prompts or configuration causes or contributes to the harm.
Customers are likely to press for indemnities or other protection to limit liabilities associated with infringement of third-party IP resulting from the output of the AI system.
Carve-outs for indirect or consequential losses now need careful consideration given the scale and unpredictability of AI-related losses. Some high-impact AI errors (such as mass mispricing) could be classified as indirect losses and therefore fall outside recovery altogether if standard exclusions are applied unamended.
6. Audit and transparency rights
Regulators and end-users are increasingly demanding visibility into AI systems. As such, audit and transparency rights are rapidly becoming standard in agreements for the supply of AI systems.
To demonstrate regulatory compliance and build trust, providers are often required to commit to maintaining documentation (often referred to as “model cards” or “system cards”) describing the model’s architecture, training data provenance, accuracy benchmarks, bias testing methodology and ongoing monitoring results.
Customers will typically seek a contractual right, exercisable on reasonable written notice, to review this documentation or commission an independent audit. Providers will want to ensure that any such rights are subject to their confidentiality obligations and security requirements.
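By way of illustration only, such documentation can be thought of as a structured record of the matters listed above. The sketch below uses hypothetical field names and values; real model cards should follow an established template and the documentation schedule agreed in the contract:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative sketch of provider documentation; all fields hypothetical."""
    model_name: str
    architecture: str                # e.g. "gradient-boosted trees", "transformer"
    training_data_provenance: str    # sources, licences and collection periods
    accuracy_benchmarks: dict = field(default_factory=dict)  # metric -> value
    bias_testing_methodology: str = ""   # how discriminatory impact was assessed
    monitoring_results: list = field(default_factory=list)   # periodic drift reports

# A customer exercising its audit right would expect to review a completed record:
card = ModelCard(
    model_name="claims-triage-v2",
    architecture="gradient-boosted decision trees",
    training_data_provenance="anonymised claims data, 2019-2024, licensed from the insurer",
    accuracy_benchmarks={"AUC": 0.91, "false_positive_rate": 0.04},
    bias_testing_methodology="outcome parity tested across protected characteristics",
)
print(card.model_name, card.accuracy_benchmarks)
```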
7. Termination and transition rights
Termination clauses should be updated to recognise AI-specific triggers. Material breach may include the provider’s failure to remediate identified bias within a specified timeframe or a breach of data protection obligations arising from the operation of the AI system.
Providers that rely upon third-party models delivered “as a service” may require a force-majeure-style clause addressing cessation of access to the underlying platform. Customers, on the other hand, are likely to want providers to migrate them to a functionally equivalent alternative if such a situation arises.
Customers are also likely to want the provider to be required to supply model weights, training data (where legally permissible) and export tools to facilitate migration to an alternative solution.
8. Sub-contracting
AI supply chains are often multilayered. As a result, sub-contracting provisions in AI contracts need careful consideration. Customers will typically want the provider to flow down all AI, data protection and confidentiality obligations to its sub-contractors, and to accept responsibility for their acts and omissions.
They are also likely to want to be notified of any planned material change in the identity of an AI sub-contractor, affording them an opportunity to conduct their own due diligence. In high-risk sectors, they may also want the ability to object to the appointment of new sub-contractors.
Conclusion
While the legislative environment is currently playing catch-up with the risks and opportunities afforded by AI systems, providers and customers in Northern Ireland can, and should, act now to future-proof their contracts. Doing so will not only avert costly disputes tomorrow, but will also signal to regulators, partners and end-users that their approach to AI is transparent, ethical and built to last.
For tailored guidance on managing the legal and commercial risks of AI in your contracts, please reach out to Mark Thompson, Partner, Keith Dunn, Senior Associate, Carrie McMeel, Senior Associate, or a member of our Commercial Contracts team.
Date published: 10 September 2025
[1] Gartner press release, 26 August 2025: https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025