
About a year ago, we published an overview of the most common AI tools in part 2 of our series on the responsible use of AI. Since then, a lot has changed, which is why we have updated our overview. The bottom line: there are solutions from well-known providers that are suitable for use in companies. However, this does not apply to all AI tools and not to all data in the company. The key is to use a solution that is designed for the business environment and is suitable for the corresponding data. This post is part 25 of our blog series on the responsible use of AI in the company.
The use of AI tools in the company
AI tools continue to spring up like mushrooms. New products come onto the market every day, while the functions and possibilities of existing tools are also constantly changing. As a result, companies are increasingly spoilt for choice. In addition to questions of suitability and cost, legal aspects and the associated review of contracts always play a central role. The latter is particularly challenging, owing on the one hand to the legal issues involved and on the other to the complexity of the contracts themselves – some of which could better be described as “confusing” or even “deficient”.
If a company processes personal data with such a tool, a data processing agreement ("DPA") that meets the applicable data protection requirements is required. In addition, confidentiality obligations for the provider, as well as the rights to the data and its use for the provider's own purposes, may play a role. If personal or confidential data is to be processed in the tool, use of that data for the provider's own purposes (e.g. training or further development of its models) is generally not acceptable from the company's perspective. Free services or those intended for private customers usually do not meet these requirements, which is why companies should stay away from them.
To process data protected by official or professional secrecy (e.g. attorney-client, doctor-patient or bank-client confidentiality) in such a tool, additional contractual assurances are required that go beyond the standard contracts and must be negotiated with, or additionally requested from, the providers. In these cases, the provider's employees must also be prevented from reviewing prompts and outputs, which providers reserve the right to do in certain cases (in particular to combat abuse). Furthermore, it is relevant where (geographically) the AI tools, or the models used in them, are operated. With a view to legal protection against lawful access by foreign authorities, data protected by Swiss official and professional secrecy must generally be hosted in Switzerland. This is not possible with all providers, AI tools and AI models, but from a legal point of view it is not mandatory in all cases either.
There is more to compliance assessment than just choosing the AI tool. Equally relevant is which data is processed when the tool is used, and which version is licensed. This situation makes it clear that companies need to clearly regulate the use of AI tools, including those that are freely available. Ultimately, it must be clear to the company and the people working for it whether an AI tool may be used and, if so, with which data and for which purposes. To this end, companies should issue a policy that regulates these questions. You can find more about this in part 3 of our blog series, including a sample policy for employees; we will also be publishing a blog update on this shortly.
Update 2025
Not only have the AI tools developed functionally over the last year, but there have also been adjustments to almost all of the providers' agreements. Where agreements have changed, companies need to check whether the tool can continue to be used as before. Encouragingly, the contracts have generally not deteriorated as a result of these adjustments and, from the customer's point of view, have in some cases even improved. However, the situation remains confusing overall and unfortunately keeps changing. The fact that the names of the services are constantly changing does not make things any easier.
Click here for a version with clickable links.
Unlike in the last article, we will not explain the individual differences for each provider again below, but will only cover the most important new developments. Further details can be found in the previous article.
OpenAI ChatGPT – nothing new from a legal perspective
In principle, nothing has changed legally with ChatGPT and the OpenAI API – but nothing has improved either. In a business environment, the professional offerings should be used, although we have not yet seen a solution that is also suitable for use with data protected by official or professional secrecy. Further details can be found in part 2 of our series.
More new names in Microsoft Copilot
Microsoft has made some improvements to the agreements, but overall the picture is still somewhat chaotic. This is not helped by the fact that Microsoft has changed the naming again. Even before the last blog post, Bing Chat Enterprise became Microsoft Copilot with commercial data protection, which was then sometimes referred to as Microsoft Copilot with enterprise data protection. This offering is now called Microsoft 365 Copilot Chat. While Microsoft 365 Copilot Chat is included in the M365 offerings for businesses, it is different from Microsoft 365 Copilot, which in turn can only be used with an additional licence. At the same time, the old terminology can still be found in some Microsoft contracts and service descriptions, which makes the language less consistent and further hinders comprehension.
Microsoft 365 Copilot Chat and Microsoft 365 Copilot
While a year ago the "Data Processing Addendum" (DPA) and other contractual components for business customers were explicitly excluded from the scope of the former Microsoft "Copilot with commercial data protection" and use in companies was only possible to a limited extent, Microsoft has now changed this: the agreements for business customers, such as the "Microsoft Customer Agreement" (MCA) and the DPA, now also apply to Microsoft 365 Copilot Chat. From a data privacy perspective, both Microsoft 365 Copilot and Microsoft 365 Copilot Chat can therefore be used in companies.
There is one important exception, however: if web search queries are made in the context of Microsoft 365 Copilot Chat or Microsoft 365 Copilot, the DPA and the agreements for business customers no longer apply. Instead, the service contract for private customers applies in this case, just as it does when using the Bing search engine. We find such limitations outdated and unnecessary; other providers manage without them.
Although Microsoft does not send the entire prompt to Bing (except for very short prompts), the search query is automatically created by Microsoft based on the user's input, so the user has no control over which data ends up being processed outside the contractual agreements for business customers and the DPA. As a result, Microsoft 365 Copilot Chat and Microsoft 365 Copilot should not be used with personal data or confidential company data if web access is activated, as Microsoft may no longer process this data as a processor, but as a controller (i.e. without the protection of the DPA) and thus possibly for its own purposes.
Companies can protect themselves from this unwanted disclosure by disabling web access. While it is possible to block web access by default for all business users, this can restrict useful functions, as web access can be helpful for certain queries. Ideally, users themselves should be able to decide when the web is accessed, but this is only possible with the paid tool Microsoft 365 Copilot, and not with Microsoft 365 Copilot Chat, which is included in Microsoft 365 for business customers. In addition, even with Microsoft 365 Copilot, the setting options for web queries are not immediately apparent in the input field, which increases the risk of unintentionally using web searches (and thus unwanted disclosure of data).
Microsoft Copilot for private users
Microsoft's offerings for private users are not suitable for companies, and care must be taken to ensure that Copilot is only used with a business M365 account. We have noticed that employees sometimes suddenly find themselves logged out, for example when using Copilot in Edge. In that case, use is subject to the terms of the offering for private use and not to those of Microsoft Copilot for companies, with the result that the contracts intended for business customers, with their better data protection, do not apply.
Microsoft has also started (without asking customers) to integrate Copilot into its Microsoft 365 offerings for private customers, combined with significant price increases. In our view, it is not really clear what happens to user data under these offerings. On an info page (i.e. without this being part of the contractual agreements), Microsoft promises: "Prompts, answers and your file content when using Copilot in Microsoft 365 apps will not be used to train base models" (see here). In the Copilot AI Experiences Terms, however, which Microsoft makes an integral part of the contract, Microsoft reserves a comprehensive licence to the input and output, which in particular authorises Microsoft to use the corresponding content for its own business purposes – including the right to copy, reproduce, publish, edit and translate it and to sublicense these rights to other providers. These offerings can therefore generally not be used in a compliant manner, and even private users must ask themselves whether they can and want to accept the corresponding clauses. It is unclear whether and how Microsoft will ever exercise these extensive rights, especially in view of the statements on the information page mentioned above. What is clear is that customers should consider what is written in the contract to be what was agreed, not what is described on information pages. This applies all the more in the event of a legal dispute. A clearer starting point, for example through clear contractual assurances, would be desirable from the customer's point of view and is not too much to ask.
Microsoft Azure OpenAI Service
Another option for using AI services in a company is to use tools that access large language models ("LLMs") from certain providers via interfaces (known as APIs). One example of this is the AI office assistant "Red Ink", which we developed for ourselves but are currently making available to others free of charge; it is discussed in more detail below. With the Azure OpenAI Service, Microsoft also offers a solution that provides access to generative AI models from OpenAI. It can be obtained under the contracts for business customers, including a DPA, which is why it can be used in a company with personal data and confidential business data.
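To illustrate what such API access looks like in practice, here is a minimal sketch of a chat request to the Azure OpenAI Service using the official openai Python package. The endpoint, API key, API version and deployment name are placeholders that depend on your own Azure setup and are not taken from this article:

```python
from openai import AzureOpenAI

# Placeholder values – replace with your own Azure resource details.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # hypothetical resource
    api_key="YOUR_AZURE_OPENAI_KEY",
    api_version="2024-06-01",
)

# "model" refers to the name of your own Azure deployment, not the model itself.
response = client.chat.completions.create(
    model="my-gpt-4o-deployment",
    messages=[{"role": "user", "content": "Summarise the key points of this memo: ..."}],
)
print(response.choices[0].message.content)
```

The legal point is independent of the code: whether such a call is covered by the business contracts and the DPA depends on the service and licence under which the endpoint is provided, not on how it is called.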
If the Azure OpenAI Service is also to be used with data subject to official or professional secrecy, certain agreement addenda are required (as is also the case with Copilot). However, Microsoft reserves the right to carry out "abuse monitoring" with human review, for which purpose the prompts and outputs are stored and checked. While such controls may be acceptable for many companies, they are generally incompatible with the requirements of official or professional secrecy. One solution to this problem is to have abuse monitoring disabled by submitting a request to Microsoft. However, this option is currently only available to large companies, not to most SMEs. We have not been granted this exemption and therefore do not use the Azure OpenAI Service for our purposes in the firm; furthermore, GPT-4o, which we consider to be the most important OpenAI model to date, is not offered on servers in Switzerland. Should Microsoft change its policy, we will be happy to use its services in this area as well.
Google Bard becomes Gemini
A lot has happened at Google since the last overview. For example, the AI model is no longer called Google Bard, but Gemini. The overview has been updated to include the new AI tools. From a customer perspective, the challenges with regard to product names and contract design are similar to those faced with other providers: due to similarities among the offers, it is important to pay close attention here as well to which tool is used and on which contractual basis. And given the complexity of the agreements, customers also have to invest a considerable amount of effort here if they want to get an overview and carefully compare their requirements with the clauses actually agreed.
Google Gemini for Google Workspace and Gemini API in Vertex AI
Google Workspace is a productivity solution for collaboration in companies. Google has integrated its Gemini AI model into Google Workspace. Its use is governed by the contracts suitable for business use. In contrast to Microsoft, however, Google also grants the contractual assurances for business customers when web search is accessed. At least, we could not find any corresponding exception in the contracts, which is why it can be assumed that the business contracts, including the DPA, also apply when users make a web query with Gemini in Google Workspace.
In addition, Google offers an API service for Gemini models on the Vertex AI platform for companies. This can also be obtained under the terms of the agreements for business customers, which include the DPA, and is therefore generally suitable for use in companies. The API service can also access Google Search and thus supplement the output with results and knowledge from the internet ("Google Search Grounding"). Google likewise provides for an exception to the business contracts and the DPA here, but it only applies once the user clicks on a result from the AI, for example to access the source, and not when the query is made and the answer is generated.
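The following is a minimal sketch of how Search Grounding works as an explicit opt-in when calling the Gemini API via the Vertex AI Python SDK; the project ID, region and model name are placeholders, and the exact class names may differ between SDK versions:

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

# Placeholder project and region – use your own Google Cloud settings.
vertexai.init(project="my-project", location="europe-west6")

model = GenerativeModel("gemini-1.5-pro")

# Grounding via Google Search must be attached explicitly as a tool;
# without it, no web search is involved in answering the prompt.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

response = model.generate_content(
    "What are the latest developments on this topic?",
    tools=[search_tool],
)
print(response.text)
```

The design point: unlike a web search that is on by default, grounding only enters the picture when the developer deliberately attaches the tool to the request, which makes the contractual exception easier to control.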
If Google Workspace or the Gemini API in Vertex AI is to be used with data subject to official or professional secrecy, Google contract addenda are necessary and an exception to human review by Google is required. We were able to conclude agreements with Google for the use of their AI models (which are also hosted in Switzerland) that meet our requirements as a law firm, including for data subject to professional secrecy.
Google AI Studio
In addition to the API in Vertex AI, Google offers a Gemini API in Google AI Studio. Google AI Studio is an environment for developing apps with Gemini. It can be used to create API keys for accessing Google's language model Gemini so that Gemini can be used in your own apps and environments. A different contract, concluded with the US company Google LLC, applies to Google AI Studio and the use of this Gemini API. According to the information on Google's websites, Google AI Studio is intended in particular for hobby users, students and developers. It can be used free of charge to a certain extent (unpaid service). In this case, Google reserves the right to use the data comprehensively for its own purposes and there is no DPA (since Google acts as the controller when processing the data). At least Google explicitly points out, in a highlighted passage in the Gemini API Additional Terms, that users should not process confidential data or personal data within the scope of the unpaid services.
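For context, using a Gemini API key created in Google AI Studio typically looks like the following minimal sketch with the google-generativeai Python package; the key and prompt are placeholders:

```python
import google.generativeai as genai

# API key created in Google AI Studio – a placeholder here.
genai.configure(api_key="YOUR_AI_STUDIO_KEY")

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "Explain the difference between a DPA and an NDA in two sentences."
)
print(response.text)
```

Note that nothing in such a call indicates which contractual regime applies; whether the paid or unpaid terms govern the processing depends on the billing setup behind the key, which is exactly why the distinction discussed next matters.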
The situation is somewhat different for the paid API services. Here, Google contractually agrees to process prompts and responses in accordance with the DPA and not to use them to improve its own products. This is a different DPA from the one for the Workspace and Vertex AI offerings mentioned above. Users in the European Economic Area, Switzerland and the United Kingdom benefit from these more favourable provisions for the protection of customer data even when using the unpaid services.
Care should be taken when using certain functions with personal data and confidential data: if users make use of "Google Search Grounding" via Google AI Studio, the DPA and the assurances regarding the prohibition of use do not fully apply. When Search Grounding is used, the contract permits Google to use the data to troubleshoot and test the supporting systems – and thus for its own purposes. Although a DPA continues to apply to such processing as part of the paid Google AI Studio service, we believe that the fact that the processing can (also) be carried out for Google's own purposes is decisive, and problematic with regard to confidential data and personal data in the company. Accordingly, a company can use Google AI Studio with personal data at least to a limited extent, provided that the paid service is used without Search Grounding.
It is important to note in this context that while the DPA contains provisions requiring Google to impose confidentiality obligations on the persons involved, there is no confidentiality obligation for Google itself (i.e. for it as a provider). Furthermore, there is no general confidentiality obligation for data other than personal data. Considering this, we advise against using Google AI Studio with confidential data.
Google Gemini Apps and Google Gemini Advanced
The principle that offerings for private users are not suitable for companies also applies to Google's Gemini Apps and to Google Gemini Advanced. On the one hand, the necessary contractual agreements are missing; on the other, Google generally reserves the right to use prompts and outputs for training and service improvements with these offerings, although this can be deactivated.
Unlike Microsoft, Google makes it clearer that it is using the data for its own purposes. For example, the following note appears in the Gemini chat for private customers:
"Reviewers look at some of the stored chats. This is to improve Google AI. If you don't want this to happen with future chats, disable the activities in Gemini apps. However, if this setting is enabled, don't enter any information that you don't want reviewers to see or use."
We appreciate this transparency, but that does not change the fact that these offers are generally not suitable for use in a company.
Red Ink
Red Ink is an add-in for Microsoft Office on Windows that provides access to large language models directly in Word, Excel and Outlook. We originally developed this tool for ourselves so that we could use AI in our daily work without having to rely on tools like ChatGPT or Copilot; Copilot was also not good enough for us in terms of functionality (see our demo video). This saves us a lot of money, because accessing the models via API is much cheaper for our applications. A typical price is $1-3 per million input tokens and $5-10 per million output tokens (one token corresponds to about one short word), while subscriptions to AI services today typically cost $15-30 per user per month (not including expensive deep research subscriptions). At the same time, we use the open source product "Open WebUI" to offer a classic chat service via our API access (to Google and OpenAI). This means that we have more functionality for less money, and yet our data is well protected. User feedback is positive.
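To make the comparison concrete, here is a rough back-of-the-envelope calculation in Python; the monthly token volumes are our own illustrative assumptions, not figures from the article:

```python
# Assumed mid-range API prices from the text (USD per million tokens).
INPUT_PRICE = 2.0    # within the quoted $1-3 range
OUTPUT_PRICE = 7.5   # within the quoted $5-10 range

# Hypothetical heavy user: ~1M input and ~0.3M output tokens per month.
input_mtok, output_mtok = 1.0, 0.3

api_cost = input_mtok * INPUT_PRICE + output_mtok * OUTPUT_PRICE
subscription_cost = 20.0  # within the quoted $15-30 per user/month range

print(f"API usage:    ${api_cost:.2f} per month")       # -> $4.25
print(f"Subscription: ${subscription_cost:.2f} per month")
```

Even under fairly generous usage assumptions, per-token billing stays well below a flat subscription for typical office workloads, which is the economic point made above.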
Red Ink is currently available free of charge to everyone. The source code is published on GitHub. Red Ink can basically be used with any advanced LLM that offers an API endpoint, and it can also be self-hosted. It also works with the new thinking models, such as those from Perplexity, where Red Ink's deep research function can then be used directly in Word, for example. Using an API has the advantage that you can use the AI providers and models that best suit your own needs.
Lucian Hunger, Jonas Baeriswyl
We support you with all legal and ethical issues relating to the use of artificial intelligence. We don't just talk about AI, we also use it ourselves. You can find more of our resources and publications on this topic here.