We are a Swiss law firm, dedicated to providing legal solutions to business, tax and regulatory matters.
SWISS LAW AND TAX
23 April 2024
Most of our clients use generative AI primarily in the form of online services. There are many providers on the market, all making bold promises, yet their contracts are often insufficient. So what should you look out for in such contracts? We cover this in part 15 of our AI blog series.
To get straight to the point: we have compiled the most important aspects to look out for in contracts with providers of AI services in a free checklist, available here in English (and here in German).
The list of relevant aspects is rather long because it needs to cover a range of setups and topics. Not every requirement plays a role in every project, nor are they all equally important. We have marked with an asterisk the points that should usually be taken into account as a bare minimum.
If a company decides to use an AI service, it must first be aware of the areas in which it could be legally exposed through the use of AI. We roughly differentiate between four areas:
In practice, the above points, or the points on the checklist, can be incorporated into a contract in a variety of ways. We have already seen the first (larger) companies require their suppliers to sign an "AI Annex", obliging them in particular to disclose any use of AI upfront. There is a certain logic to this: companies must comply with the relevant requirements for AI systems, e.g. under the EU AI Act, even if they are not aware that they are using such systems (see our detailed article on the EU AI Act in Part 7 of our blog series). It therefore makes sense for companies to take steps vis-à-vis their suppliers to understand which of the services and products they purchase contain "AI", i.e. systems that work wholly or partly on the basis of training rather than programming.
Another development we are seeing in practice is a sharp increase in the number of companies that do not themselves operate the AI they offer, but have outsourced it instead. There are good reasons for this: it is now much easier for many companies to buy basic AI functionality from one of the large hyperscalers and run it in their cloud. From a contractual and compliance perspective, however, this makes things more complicated, as a vendor will only want to make contractual promises to its customers that its subcontractor has confirmed, i.e. promises that apply "back-to-back". Yet such independent software vendors (ISVs) will only be able to negotiate their contracts with these large subcontractors to a limited extent. Each additional subcontractor is also another point of failure in the delivery of the services.
The third development we are observing is the increasing use of vendors who develop AI solutions and then implement them in their customers' IT infrastructure. In some cases only the customers operate the systems; in others the vendors do. From a legal perspective, however, it is the customer that is the (primary) provider of the AI solution. The classic example is the chatbot on a company's website used to take load off its customer service. In these cases there is sometimes debate about the role of these vendors and the extent to which they can be held liable. In our view, it makes sense to agree on compliance with the legal framework and corresponding contractual obligations, even if these vendors are acting only as technology suppliers and implementation partners rather than as service providers. This includes information on the AI models used, how the solutions are protected against misuse, and whether the solutions comply, or will comply, with future requirements, e.g. regarding the labelling of generated text, images and video.
Finally, the checklist also addresses the case where a third-party company acts not as a traditional service provider but as a partner in a joint AI project. This includes situations where a company wants to integrate a specific solution from another vendor into its own system environment but sees itself not as a processor but as an independent controller. Here, too, corresponding contractual arrangements are a diligent measure.
In any case, the topic of AI, and the interest in using data obtained from customers for AI training, is not limited to providers that offer AI services and products. Sooner or later, most companies will consider whether and how they, too, can use data from customer projects for AI purposes. Corresponding contractual provisions (as listed in the checklist) can therefore also be useful in other partner and supplier contracts. A basic level of protection is already provided by a confidentiality clause that also states that confidential information of the contractual partner may not be used for purposes other than the contract.
David Rosenthal
This article is part of a series on the responsible use of AI in companies.
We support you with all legal and ethical issues relating to the use of artificial intelligence. We don't just talk about AI, we also use it ourselves. You can find more of our resources and publications on this topic here.
Category: Data & Privacy