More and more, organisations are realising the potential for artificial intelligence-based systems (“AI”), machine learning (as a sub-category of AI) (“ML”) and, more broadly, the exploitation of ‘big data’ to help optimise business processes and gain valuable, and previously unthinkable, insight into customer and trading patterns, preferences and future trends.
Although some may regard this trend as the latest self-inflating bubble in the cycle of Silicon Valley-inspired ‘disruptive tech’, and some may disregard the seemingly ubiquitous promotion of any remotely analytical or insight-providing software as ‘AI’, it is difficult to see the increased adoption of AI/ML (in whatever form) across all aspects of business and life going away anytime soon. With the increasing sophistication of the technology itself, the greater commerciality and business-needs awareness of its developers, and organisations’ increasing openness to exploring these new technologies, the use of AI/ML across all industries looks here to stay.
Many organisations will find themselves looking to procure AI/ML solutions from third party suppliers, often with limited knowledge or understanding of the specific technologies on offer or the primary legal and commercial considerations which should be factored into the arrangements being put in place with those suppliers. Given the unprecedented speed and scale of the adoption of AI/ML, it is unsurprising that it strains the ability of technology contractors, and their existing contracting models, to address the equally fast-paced changes in the risks that those technologies carry and that should be catered for in the relevant agreements. As it stands, in Ireland, the EU and most other jurisdictions, there is no general statutory or other framework regulating the development, supply or use of AI/ML technologies. Although this position will probably change shortly (see our previous note on the European Commission’s proposed ‘AI Act’ [here]), it means that for the most part, AI service providers and their customers are operating in an effective legal vacuum, with the boundaries of the relationship left to the parties themselves to define.
This note sets out, at a very high level, the primary matters that should be at the forefront of the mind of anyone contracting with AI/ML suppliers for the receipt of services in the field. However, given the particular challenges AI/ML poses to previously tried-and-tested contract-based models, consulting with a suitably experienced technology lawyer is the best step to get the comfort you need (something with which RDJ can assist).
Perhaps just as important as the terms in place with your AI/ML service provider is the preliminary and background diligence done by you, as the procurer of the AI/ML services, on both the service provider and the technology itself. In the case of the latter, as you are likely coming to the table with far less understanding of the sophisticated software, algorithms and techniques underpinning the solution being procured, it is important to get as much of an insight into the technical aspects of the service as possible, to appreciate both where the risks to your business may arise and the specific solutions it can offer to your business needs. Transparency from a service provider should be a key prerequisite to entry into any relationship involving complex technology solutions and imbalanced technical knowledge.
Equally, a supplier that can clearly explain their product’s design, process and functioning will garner greater trust from its customers.
Appropriate diligence on the service will lead naturally to a better understanding of the output you should be expecting from it, and therefore to better provision for performance standards in the contract. If the AI/ML is designed to bring about a particular result or particular improvement (such as increased revenues, improved customer engagement, etc.), then measurable indicators and concrete outcomes should be set out, with, if desired, consequences for failure to reach agreed standards. In the same way as a traditional services agreement will provide for service levels, KPIs and consequences for failures, AI/ML service providers should remain equally accountable notwithstanding the heightened complexity of the service itself.
This also works both ways – a supplier that can demonstrate what is, and what isn’t, within its control can more effectively set standards that do not leave it liable for outcomes that it could not prevent.
Remember also that audit rights and other controls are vital to not only achieving the service’s intended outcomes, but also to ensuring that your business’s ethical principles and regulatory requirements (including any impending legal controls on AI) are being met.
At the end of the day, AI/ML is still software, albeit more complex and often involving more component parts. Previous principles employed by technology contractors should not simply be abandoned in the face of seemingly opaque and complex new technologies. Consider how the service is being provided – is it cloud-based, in which case principles of SaaS contracts will be relevant? What licences, if any, are required to access and use the product? What support and maintenance requirements will your business have?
One of the most discussed topics in the field of AI contracting is the allocation of liability between parties. Given that, again, we remain without any clear legislative provision, it is up to the formal contract between the parties to determine who would be legally responsible for acts or omissions of the AI/ML service. To an extent this tracks with common principles of liability allocation in contracts (in short, both parties looking to transfer as much potential liability to the other side as possible), with each party’s relative negotiating strength playing a role here. However, once again the customer’s diligence of the AI/ML solution, and its abilities and limitations, will be important.
From the customer’s perspective, the service provider should not seek to exclude liability for matters which are actually in the service provider’s control, either in whole or in part, and for which it should therefore be liable. Determining the extent of what is within the service provider’s control is the difficult piece here, due to the very nature of AI and the ‘black box’ problem it creates when a system can develop its own logic (logic which, indeed, is often based on the data provided by the customer, and others). In the absence of any all-encompassing solution here, customers should prioritise understanding the technology in order to back up their position when seeking to attribute liability to the service provider in the agreement.
Another significant touchpoint with AI/ML, for which current legal principles do not adequately provide, is the ownership of intellectual property in what goes into, and what comes out of, AI/ML systems. As with any software contract, clear provision should be made for ‘who owns what’ in terms of what is provided by both parties, and what is created. In particular, the data/other outputs generated by the service will, from the customer’s perspective, be something that the customer should own, but the nature of the service may mean that the customer is restricted in its ability to use that data outside the confines of the AI/ML system itself.
In addition, there is a more fundamental question around who owns IP which is effectively created by the AI/ML system itself, without human involvement (such as where an underlying platform is utilised by a customer by way of algorithms developed by the platform by way of ML). Current copyright laws cannot adequately address this problem, leaving a gap which needs to be filled contractually and clearly in order to avoid future disputes. We will be exploring the application of current IP principles to AI/ML in more detail later in this series.
If it is the case that personal data is likely to form part of data inputs and outputs, then the ever-present GDPR will need to form part of your diligence and contracting processes. On the former, your understanding of what actions, specifically, are being undertaken with the data by the AI/ML solution, as well as of the service provider’s own data protection and security practices, is a fundamental first step. Also bear in mind that the GDPR makes specific provision for decisions based solely on ‘automated processing’ that produce legal effects, as well as for the provision of information to data subjects on any automated decision-making.
Just as fundamentally, the customer, as the (likely) data controller, will bear sole responsibility for compliance with its obligations under data protection law. Being able to explain to individuals how and why their data is being processed is vital, as is complying with the provisions of the GDPR specific to automated processing as discussed above. Ensuring an effective legal basis for the processing is an obvious requirement, including for any future uses of data arising from the AI/ML services.
If the customer is subject to particular regulatory requirements (such as CBI regulation, operating in the medical technology field, etc.), then ensuring ongoing compliance with those requirements throughout that customer’s use of the AI/ML service is crucial. Such customers should be thinking not only about how their compliance regimes may be affected by the new technology, but also whether or not it remains within that customer’s control to achieve compliance or if, alternatively, it is reliant on the service provider to achieve/maintain that compliance. In the case of the latter, clear contractual obligations will be necessary. This topic will also be discussed in more detail later in this series.
Of course, the above is a list of only some of the primary issues, designed to get businesses and their technology procurement teams/lawyers thinking about what to address in order to get full value from the AI/ML service being considered. There will always be others, both general (such as choice of law, potentially implied warranties and other terms, impending future laws, etc.) and circumstance-specific. Although some may hate the term, from a contract perspective AI/ML certainly has been ‘disruptive’ in its effects on traditional IT contracting models, making early and ongoing engagement, careful diligence and comprehensive legal provisions all the more important, both for service users and service providers.