We are a Swiss law firm, dedicated to providing legal solutions to business, tax and regulatory matters.
29 January 2024
Companies need rules when using AI. But what should they be? We have worked through the most important requirements of current law and future AI regulations and drafted 11 principles for a legally compliant and ethical use of AI in the corporate environment. We have also created a free one-page AI policy. This is part 3 of our AI series.
Many call for more regulation of artificial intelligence, even if opinions differ as to what really makes sense in that regard. You should be aware, though, that a lot of rules already exist and further regulations are to be expected. If a company wants to use AI in a legally and reputationally sound manner, it should take into consideration the following three elements:
We have condensed the most important points from all three areas into 11 principles. These cover not only how AI can be used correctly in the company, but also measures to ensure compliance with the rules and to manage the related risks. (We will address the assessment of the risks of AI projects and questions of AI governance, i.e. the procedures, tasks, powers and responsibilities a company should regulate to keep developments and the related risks under control, in further articles in this series.)
These 11 principles are:
Each of these principles is set out above only in abbreviated form. In a separate paper for our clients, we explain in more detail what each principle means in practice, how the company should act on it, and why. We have annotated the underlying legal sources with references to applicable Swiss law, the GDPR and the draft AI Convention, so that it is clear what already applies today and what is still to come. References to the AI Act will follow.
The paper can be obtained from us. We use it in workshops with our clients to develop their own AI frameworks and regulations, i.e. it serves as a basis for an internal discussion about the guidelines the company wants to set for its own use of AI, both in everyday work and in larger projects. For such a discussion, we recommend a workshop with the various stakeholders in the company in order to establish a common understanding of how AI should be used. This matters because, in our experience, clear guidelines are needed both for implementing AI projects and for giving predictability to those who want to carry them out. Our paper can then be incorporated, with the necessary adjustments, directly into a more detailed directive or strategy paper. This usually happens in three steps:
Of course, these standards and rules cover not only substantive issues (i.e. which behavior, functions or other aspects should be permitted in AI projects), but also formal ones (i.e. how AI projects can be assessed and approved as quickly as possible in terms of law, security and ethics). Both are already covered by our 11 principles.
Alongside these rules for AI projects, an organization should not simply prohibit its employees from using AI tools; where possible, it should actively make suitable tools available. This allows it to control and regulate their use much more effectively. As we showed in part 2 of our blog series (click here), even the popular tools come in very different flavours, for example in terms of data protection, and a company must check very carefully what it offers to its employees.
In our view, an AI policy does not have to be complicated. To inspire people, we have created a template AI directive for all employees that takes up just one page. It summarizes the 11 principles and instructs employees which AI tools they may use in the company, for which purposes and, above all, with which categories of data (here, data protection, confidentiality obligations, the company's own confidentiality interests and copyright restrictions on the use of content must all be taken into account). Alongside this, we give employees only three core rules for using these AI tools:
This policy complements the short video that we presented in Part 1 of this series, which is available for free in three languages (see here).
Click here for the PowerPoint file
At the bottom right of the policy, to ensure proper governance, we also stipulate that new AI applications are only permitted if they have an owner who ensures compliance, ultimately with the 11 principles (or whatever the company has defined as its principles).
We provide our one-page policy as a PowerPoint file (in English but also here in German) so that each company can adapt it to its own needs, in particular to indicate which AI tools are permitted internally and for what purpose. We have included some examples. If you need advice here, we will of course be happy to help.
The policy also provides room for indicating the internal contact point responsible for handling inquiries on AI compliance topics. Where no one has yet been appointed, we suggest this be either the data protection officer or the legal or compliance department. Even if the law does not formally prescribe such a body, it makes sense for a company to appoint a person or group of persons to serve as this contact point and take care of the compliant use of AI (see also part 4 of this blog series).
Employees should not only be instructed to have the conformity of new AI projects and tools checked in advance of using or implementing them; we also recommend that relevant mishaps and errors be reported, especially for sensitive applications. This helps a company improve its use of AI and keep its risks under control.
We further recommend that our clients maintain a register of AI applications, similar to the record of processing activities many companies already keep under the revised Data Protection Act and the GDPR; the two registers can also be combined. This point is included in principle no. 1, which deals with (internal) responsibilities for the use of AI.
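As an illustration only, such a register can be very simple. The sketch below (in Python, with field names that are our own assumption rather than any prescribed format) shows the kind of information worth capturing for each AI application: a designated owner, the purpose, the categories of data involved and whether the application has been checked and approved:

```python
from dataclasses import dataclass, field

@dataclass
class AIApplicationEntry:
    """One entry in an internal register of AI applications.

    The fields are illustrative; they can be aligned with, or merged
    into, an existing record of processing activities kept under the
    revised Data Protection Act or the GDPR.
    """
    name: str                         # tool or project name
    owner: str                        # person responsible for compliance
    purpose: str                      # what the application is used for
    data_categories: list[str] = field(default_factory=list)
    approved: bool = False            # set once conformity has been checked

# Hypothetical example entry
register = [
    AIApplicationEntry(
        name="Translation assistant",
        owner="Jane Doe (Legal)",
        purpose="Translating internal documents",
        data_categories=["internal documents, no personal data"],
        approved=True,
    ),
]

# A basic governance check: every registered application must have an owner
assert all(entry.owner for entry in register)
```

Whether such a register is kept in a spreadsheet, a compliance tool or code is a matter of taste; what matters is that every application has an owner and that the entries are kept up to date.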
In our next article, we will look at how risks are assessed and documented when using AI, in both smaller and larger projects. Thereafter, we will explain the governance a company should provide for AI.
David Rosenthal, Elias Mühlemann, Jonas Baeriswyl
This article is part of a series on the responsible use of AI in companies:
We support you with all legal and ethical issues relating to the use of artificial intelligence. We don't just talk about AI, we also use it ourselves. You can find more of our resources and publications on this topic here.