There is no shortage of ideas for applications, especially with generative AI. But are the legal requirements and risks adequately taken into account? In this article, we explain how corresponding risk and compliance assessments can be carried out. We have also developed a free tool for smaller and larger AI projects, which we will discuss here. This is part 4 of our AI series.
We are hearing from a number of companies that legal departments, data protection departments and compliance officers are being overwhelmed with ideas for new projects and applications with generative AI that they are asked to assess on short notice. In some cases, it is obvious what needs to be done: If a provider is used and granted access to personal data, for example, a data processing agreement is required – as in all cases where data processors are used. But what other points, especially those specific to AI, should be checked for compliance purposes in the case of an AI application? Will the EU's AI Act even play a role? And can this be assessed uniformly for all applications?
Our experience in recent months shows that many of the AI projects being implemented in companies do not yet entail really high AI risks. Certain homework has to be done, as with any project, but the applications are usually not particularly dangerous. For reasons of practicability alone, a distinction must therefore be made when assessing AI projects as to how many resources should be invested in their evaluation. As in the area of data protection, an in-depth risk assessment is only necessary if an AI project is likely to entail high risks for the company. It is true that AI projects still raise many new questions with which compliance and legal departments in particular often have no experience. However, this does not necessarily mean that such projects are particularly risky. Our clients have therefore asked us for criteria that can be used for triage.
The following questions can help to identify such high-risk projects:
If the answer to one of these questions is "yes", an in-depth review of the risks should be carried out or at least considered. In all other cases, at least a summary risk assessment of the project should be undertaken. We describe both approaches below.
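By way of illustration only, this triage logic can be expressed as a simple decision rule. The sketch below is not part of our tools; the screening questions shown are placeholders, not the actual criteria listed above:

```python
# Illustrative sketch only, not part of GAIRA: a minimal triage rule based on
# yes/no answers to a set of high-risk screening questions. The questions below
# are placeholders; the actual criteria are those listed above.

HIGH_RISK_QUESTIONS = [
    "Placeholder question 1 (e.g. significant effects on individuals?)",
    "Placeholder question 2 (e.g. large amounts of sensitive personal data?)",
]

def triage(answers: dict[str, bool]) -> str:
    """Recommend the depth of review based on yes/no screening answers."""
    if any(answers.get(question, False) for question in HIGH_RISK_QUESTIONS):
        return "in-depth risk assessment"
    return "summary risk assessment"

# Example: a single "yes" answer is enough to trigger the in-depth review.
print(triage({"Placeholder question 1 (e.g. significant effects on individuals?)": True}))
```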
For projects where high risks for the company are unlikely – we would refer to them as "normal" projects – we recommend that, in addition to the usual review for data protection compliance, they be subjected to a review of how confidential content or content requiring special protection (e.g. copyrighted works) is handled, plus a review of the typical risks associated with the use of generative AI. As most projects work with third-party AI models, the various challenges relating to the creation and development of AI models are not relevant here, which simplifies matters.
We have formulated 25 requirements that companies can use to check the risk situation with regard to the use of generative AI in such "normal" projects (depending on the application, there may be even fewer):
If a company can confirm each of these points for a normal project, the most important and most common risks relating to the use of generative AI appear to be under control according to current knowledge. This is, of course, without prejudice to special legal and industry-specific requirements and issues.
If a requirement cannot be confirmed, this does not mean that the planned application is not permitted. However, the implementation of adequate technical or organizational measures should be considered to avoid or mitigate the respective risk – and the internal owner of the application must assume the residual risk.
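Purely as an illustration of this logic (and not as part of GAIRA), the outcome of such a requirement check could be recorded along the following lines; the field names are our own assumptions:

```python
# Illustrative sketch, not part of GAIRA: recording the outcome of the
# requirement check for a "normal" project. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class Requirement:
    text: str                            # one of the requirements to be confirmed
    confirmed: bool = False              # can the company confirm it?
    mitigation: str = ""                 # technical/organizational measure, if not confirmed
    residual_risk_accepted_by: str = ""  # internal owner who accepts the residual risk

def open_items(requirements: list[Requirement]) -> list[Requirement]:
    """Requirements neither confirmed nor mitigated with an accepted residual risk."""
    return [r for r in requirements
            if not r.confirmed and not (r.mitigation and r.residual_risk_accepted_by)]
```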
Of course, depending on the project, applicable law and the company's own requirements (see our previous article on the 11 principles in this series) may give rise to further requirements and risks that need to be taken into account. In view of the large number of possible topics, the relevant rules are unfortunately not easy to identify – and with additional AI regulation such as the AI Act (which we will cover separately), it will become even more difficult to ensure compliance, especially for those offering AI-based products.
As part of data protection compliance, it must also be checked whether it is necessary to amend the existing privacy notices and the records of processing activities (ROPA), and whether a data protection impact assessment (DPIA) must be carried out. Where service providers are relied upon, their role (controller, processor) has to be determined and their contracts assessed (see, for example, part 2 of our blog series on what we found out with regard to popular AI tools).
Whether a DPIA is necessary can be checked using the checklist here. This checklist is based on the traditional requirements of the Swiss Data Protection Act and the GDPR as well as the recommendations of the Article 29 Working Party (the predecessor of the European Data Protection Board), as there is currently no better established rule-set (although we believe it should be overhauled, as it creates too many false positives). In our experience, a DPIA will only be necessary where personal data is systematically collected for the purpose of having it processed by an AI for purposes related to particular data subjects (i.e. not for statistical or other non-personal purposes), where large amounts of sensitive personal data are being processed, where an AI solution could have significant consequences for the persons whose personal data is processed according to the purpose of the application, where people rely on the output generated by the system (e.g. chatbots on sensitive topics), or where there are considerable risks as per the criteria described above.
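For illustration only (this is not the checklist referred to above), the DPIA triggers we describe amount to a simple "any of the following" test; the flag names are ours:

```python
# Illustrative sketch, not the referenced checklist: the DPIA triggers described
# in the text, expressed as a simple "any of the following" test. Flag names are ours.
def dpia_recommended(*, systematic_collection_for_individual_purposes: bool,
                     large_scale_sensitive_data: bool,
                     significant_consequences_for_data_subjects: bool,
                     reliance_on_system_output: bool,
                     high_risk_per_triage: bool) -> bool:
    """Return True if, per the criteria above, a DPIA should be carried out."""
    return any([systematic_collection_for_individual_purposes,
                large_scale_sensitive_data,
                significant_consequences_for_data_subjects,
                reliance_on_system_output,
                high_risk_per_triage])
```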
For the implementation of a DPIA itself, we recommend the free template we developed for the Swiss Association of Corporate Data Protection (VUD), which is available here (another template is integrated in both the GAIRA Comprehensive and GAIRA Light worksheet, see below).
If you are looking for assistance in applying the Swiss Data Protection Act to AI, the VUD provides guidance here (only in German).
Personal data is not the only area in which rules need to be observed and risks kept in mind. Anyone implementing an AI project, especially when using external providers and their solutions, should also check whether (other) confidential or otherwise legally protected content is being used, in particular as input for an AI.
To that end, it should be checked whether the company has contractually undertaken not to include the content at issue in an AI application (and thus eventually disclose it to third-party AI service providers) or not to use it in a particular manner. Such contractual obligations concerning AI are currently rather rare, but we expect to see them more frequently, for example in license agreements, and less frequently in confidentiality clauses and non-disclosure agreements (NDAs). The reason for this is that companies in the business of making content available will want to protect themselves against such content being used to train third-party AI models. Conventional confidentiality agreements will usually not prevent the use of information for AI purposes or its disclosure to an AI service provider. However, if service providers are used, it must be checked whether their contracts contain the necessary confidentiality obligations and whether the providers may use the input and output of their services for their own purposes. The latter may often already violate current licensing agreements if such AI solutions are used with copyrighted works, as most license agreements permit companies to use licensed content only for their own internal purposes and, thus, not for the training of third-party AI models.
It is also necessary to check whether data is used that is subject to official or professional secrecy or a comparable duty of confidentiality, for which special precautions are necessary, especially if such data must be disclosed to a service provider. In the latter case, special contractual and possibly also technical precautions will be necessary.
In a separate post in this series, we will discuss the copyright aspects of AI in more detail, including the question of to what extent the use of AI-generated content bears the risk of violating third-party copyrights. Usually, machine-generated content is not copyrighted, but where such content happens to include original works of humans, the situation is different. And, of course, AI content generators can be used for copyright infringement or violations of other laws (e.g. trademark law, unfair competition) much like any other content creation tool. The more difficult question to be assessed is who in the value chain becomes liable for such infringements.
For "normal" AI projects or AI applications, we recommend the following three steps:
Once these three steps have been successfully completed, the owner of the project or application can decide on its implementation, save for any other preconditions that need to be fulfilled. We will discuss the governance aspects of this in a separate post in our series.
We have developed a template for carrying out and documenting the above process. It is available for free here as an Excel worksheet under the name "GAIRA Light" (in English and German). In addition, we have also created a one-page questionnaire (with the questions for identifying high-risk applications and the 25 requirements for "normal" projects shown above) as well as a quick guide for GAIRA Light (available here).
Here are some practical tips for filling out GAIRA Light (the tool itself also contains instructions and explanatory notes):
If you need support, we are happy to help you.
The assessment of such AI projects and applications is understandably more complex and time-consuming, and will not be possible without some form of expert support. Nevertheless, we recommend a systematic approach for quality and efficiency purposes.
We recommend the following six steps:
We have also developed a suitable template for carrying out and documenting the above process. It is available for download here free of charge as an Excel worksheet under the name "GAIRA Comprehensive" and contains an example (based on an imaginary AI project). The template also contains a worksheet for checking the legal compliance of AI projects. This, however, is optional and, in our view, does not need to be completed, as most of these questions will in any case be addressed during the risk assessment (when discussing the technical and organizational measures) or as part of the usual compliance procedures.
In practice, experience has shown that the following points must be observed when using GAIRA Comprehensive (although it also contains instructions and explanatory notes):
We are happy to support you in carrying out such risk assessments and especially the workshops mentioned above. This can be particularly helpful when you do them for the first time.
We hope that these explanations will help in practice to efficiently and effectively assess small and large AI projects with regard to their legal and other risks. GAIRA is open source and benefits from feedback from the tool's users. We are happy to receive such feedback, as well as suggestions for improvements – and we thank all those who have already contributed to making the tool better for the benefit of the entire community. However, it is currently only available in English.
We also want to point out other initiatives that can help organizations better manage the risks of using AI, such as the widely cited AI Risk Management Framework of NIST, which is available here for free. It has a different, broader scope than GAIRA and is, in essence, a blueprint for establishing an AI risk management system (whereas GAIRA is, in essence, a checklist of risks and points to consider in a specific project and a tool to document the outcome). They do not compete, but work hand in hand. The AI RMF of NIST is, however, very comprehensive and complex and will, in our experience, overwhelm many organizations. We have seen this with GAIRA as well: it is simpler and more limited, but its initial version (today's GAIRA Comprehensive) was too much for many projects, which is why we created GAIRA Light. And some would still consider it quite detailed.
In the next article in our blog post series, we will discuss recommendations for effective governance of the use of AI in companies.
Your contact: David Rosenthal
This article is part of a series on the responsible use of AI in companies:
We support you with all legal and ethical issues relating to the use of artificial intelligence. We don't just talk about AI, we also use it ourselves. You can find more of our resources and publications on this topic here.