
23 April 2024

Part 15: What you should legally consider when retaining AI providers

The majority of our clients use generative AI primarily in the form of online services. There are a large number of providers on the market, all making bold promises, but their contracts are often insufficient. So what should you look out for in such contracts? We cover this in Part 15 of our AI blog series.

To get straight to the point: we have compiled the most important aspects to look out for in contracts with providers of AI services in a free checklist. It can be found here in English (and here in German).

The list of relevant aspects is rather long because it needs to cover different setups and topics. Not all requirements play a role in every project, nor are they equally important. We have marked with an asterisk the most important points that should usually be taken into account as a bare minimum.

If a company decides to use an AI service, it must first be aware of the areas in which it could be legally exposed through the use of AI. We roughly differentiate between four areas:

  1. Data protection: Data protection is relevant whenever the service processes personal data, i.e. information about employees, employees of customers, consumers and other natural persons who can be identified directly or indirectly. In these cases, a company must conclude at least a Data Processing Agreement (DPA) in accordance with the GDPR or the Swiss Data Protection Act, as applicable. Some providers offer this as standard, while others do not, or only for larger corporate customers. If a provider that acts as a processor does not offer such a contract, this is a sign of a lack of maturity and understanding of (European) data protection, and caution is advised. That said, such a contract is usually not required if the login and account data are the only personal data processed by the provider; in that case, the provider is usually not a processor. Nevertheless, a company may still want to contractually protect the processing of such data, for example to prevent the misuse of data generated by the use of the service. Other typical issues that should be addressed in such a data handling contract are the transfer of data to other countries (especially the USA or other countries without adequate data protection) and whether the provider may use the customer's data for its own purposes or even sell it to third parties (e.g. for training its own or third-party AI models). Companies will usually want to prohibit the latter. A data processing agreement must always include adequate information security measures. Once these measures are in place, it is usually acceptable under data protection law to use such a provider, even where it processes personal data. After all, AI service providers are ultimately no different from the other cloud and SaaS providers on the market, and the risks are generally similar (e.g. substandard security, a provider's right to use data for its own purposes, a provider's lack of contractual compliance). Nevertheless, we see a reluctance on the part of companies to allow their employees to use such AI services with personal data, even though they have no second thoughts about other cloud and SaaS providers. We assume this is a consequence of the general uncertainty that still exists in the market with regard to AI services. Partly, the providers themselves are to blame, because many of them are unfortunately not very transparent or clear about their services and data processing, and do not provide adequate contracts; partly, companies simply do not yet trust their employees to use AI services properly. Regardless of whether the contract with the provider is suitable from a data protection perspective, it is always necessary to consider whether the service itself is permissible, what data protection measures need to be taken, and what risks are involved. With regard to the risk assessment, we refer you to Part 4 of our blog series and our free tool "GAIRA", which can be used to carry out such risk assessments.
  2. Secrets: Companies need to protect not only any personal data they process with AI services, but also any other confidential information involved. These secrets may be their own trade secrets, but also those of third parties entrusted to the company. Certain companies are also bound by traditional professional secrecy, such as banks, lawyers, telecommunications service providers or hospitals, as are many of the third parties they commission to provide their IT infrastructure. It is therefore important that contracts with AI service providers include appropriate confidentiality obligations in addition to the usual data protection clauses (such as the DPA referred to above). Unfortunately, this is less common than one might expect. Most providers promise their customers certain technical and organisational measures for data security (so-called TOMs), at least in the context of data processing agreements (where they act as processors). However, this should not be confused with the (more far-reaching) contractual obligation to keep secrets: not to disclose them to third parties, not to process them for the provider's own purposes or those of third parties, and to protect them from third-party access. The provider must commit to these obligations as well, and they must apply not only to the provider's employees but also to the provider itself (and its subcontractors) and must survive the termination of the contract, which is not standard in many contracts. In the case of secrets that are subject to professional (or official) secrecy, additional clauses must be agreed, such as a "defend-your-data" provision and a commitment that the provider's employees will not normally access the data in plain text. Furthermore, confidentiality clauses do not belong in data processing agreements, because such agreements generally apply only to personal data and only to controller-processor setups; in practice, however, other data must also be kept confidential, including data that the provider processes under its own responsibility under data protection law.
  3. Ownership, IP Rights: We covered this topic in detail in Part 14 of our blog series, including the contractual issues. It covers the protection of the company's own intellectual property as well as proprietary third-party content that the company may wish to use as part of the AI services. As with data protection and professional secrecy, care must be taken to ensure that the provider does not use this content for other purposes (e.g. its own training or sale to third parties) and keeps it confidential. It is also important to check in the contract whether the output of the AI service, e.g. the generated texts, images, videos and sounds, can be used freely by the company or whether the provider imposes restrictions on its customers. Ideally, the provider should state that these AI outputs belong to the customer and may be used freely. However, it is not uncommon for the provider to impose certain rules of proper conduct (an "acceptable use policy") specifying which inputs and outputs are not tolerated, even if the customer manages to generate them. Please note: some providers allow the outputs to be used for any purpose but prohibit using them to train one's own AI systems or for other automated uses.
  4. AI regulations: Even though the provisions of the EU AI Act do not yet apply at the time of this article, companies should prepare for them, especially if they conclude contracts with providers of AI services whose services and products may be covered by the Act. Here, too, a number of things can already be regulated in the contract today. On the one hand, this means obliging the provider to fulfil the obligations that will one day be imposed on it. Even if the customer is not responsible for the provider's compliance, a breach of the EU AI Act, where applicable, could still have an impact on the provider and the availability of the service or product; even a customer to whom the AI Act does not apply will thus have an interest in its supplier adhering to the Act. On the other hand, the customer itself may also be subject to obligations under the AI Act, for which it may in turn depend on the support of the provider. This applies in particular if it turns out that the customer is itself considered a "provider" under the AI Act. A company can already prepare for this today when concluding its contracts.

In practice, the above points, or the points on the checklist, can be incorporated into contracts in a variety of ways. We have already seen the first (larger) companies require their suppliers to sign an "AI Annex" that obliges them in particular to disclose their use of AI upfront. There is a certain logic to this: companies are obliged to comply with the relevant requirements for AI systems, e.g. under the AI Act, even if they are not aware that they are using such systems. Please refer to our detailed article on the EU AI Act in Part 7 of our blog series. It therefore makes sense for companies to take steps vis-à-vis their suppliers to understand which of the services and products they purchase contain "AI", i.e. systems that work wholly or partly on the basis of training rather than programming.

Another development we are seeing in practice is the sharp increase in the number of companies that do not themselves operate the AI they offer, but have outsourced it instead. There are good reasons for this: it is now much easier for many companies to buy basic AI functionality from one of the large hyperscalers and have it run in their cloud. From a contractual and compliance perspective, however, this makes things more complicated, as a vendor will only want to make contractual promises to its customers that its subcontractor has confirmed in turn, i.e. promises that apply "back-to-back". Yet such vendors (e.g. independent software vendors, ISVs) will only be able to negotiate their contracts with these large subcontractors to a limited extent. Each additional subcontractor is also another potential point of failure in the delivery of the services.

The third development we are observing is the increasing use of vendors who develop AI solutions and then implement them in their customers' IT infrastructure. In some cases, only the customers operate these systems; in others, the vendors do. From a legal perspective, however, the customer is the (primary) provider of the AI solution. The classic example is the chatbot on a company's website that is used to take load off its customer service. In these cases, there is sometimes debate about the role of these vendors and the extent to which they can be held liable. In our view, it makes sense to agree on compliance with the legal framework and corresponding contractual obligations, even if these vendors act only as technology suppliers and implementation partners and not as service providers. This includes information on the AI models used, on how the solutions are protected against misuse, and on whether the solutions comply or will comply with future requirements, e.g. regarding the labelling of generated text, images and video.

Finally, the checklist also addresses the case where a third-party company acts not as a traditional service provider, but as a partner in a joint AI project. This includes situations where a company wants to integrate a specific solution from another vendor into its own system environment, but where that vendor sees itself not as a processor but as an independent controller. Here, too, having corresponding contractual arrangements is a diligent measure.

In any case, the topic of AI, and the interest in using data obtained from customers for AI training, is not limited to providers of AI services and products. Sooner or later, most companies will consider whether and how they can use the data from their customer projects for AI purposes. Corresponding contractual provisions (as listed in the checklist) can therefore also be useful in other partner and supplier contracts. A basic level of protection is already provided by a confidentiality clause that also states that confidential information of the contractual partner may not be used for purposes other than the performance of the contract.

David Rosenthal

This article is part of a series on the responsible use of AI in companies.

We support you with all legal and ethical issues relating to the use of artificial intelligence. We don't just talk about AI; we also use it ourselves. You can find more of our resources and publications on this topic here.
