25 February 2025 Part 23: Switzerland: What to expect in terms of AI regulation

Some praised it, others criticised it: the Federal Council has proposed how Switzerland should regulate AI in the future. It chose the middle way: no "AI Act", but targeted adjustments to existing laws. But what exactly can we expect now? The reporting on the decision has so far said little about this. We explain it in detail in this part of our AI series.

As usual, the regulation of artificial intelligence in Switzerland is proceeding at a moderate pace: until the beginning of this year, research was carried out into how other countries regulate AI, what the status is in individual sectors of the economy and how current law deals with AI. The findings were set out in three analyses – the legal analysis alone running to over 180 pages – and summarised in a 38-page report for the attention of the national government. Three possible paths were presented for selection:

  • A mini variant that essentially consists of continuing as before and at most making a few adjustments to Swiss law here and there.
  • A midi variant, which essentially consists of the adoption of the Council of Europe's AI Convention adopted last year with Switzerland's participation, with more or less ambitious adjustments to Swiss law where necessary.
  • A maxi variant, which also consists of the enactment or adoption of the EU AI Act and possibly other regulations on the topic of AI.

As expected, the Federal Council opted for the middle option and instructed the Federal Department of Justice and Police (FDJP) to submit, together with the Federal Department of the Environment, Transport, Energy and Communications (DETEC) and the Federal Department of Foreign Affairs (FDFA), a consultation draft by the end of 2026 setting out the necessary amendments to Swiss law. Beyond that, the Federal Council's press release on its decision was limited to the usual generalisations with few concrete statements:

  • The AI Convention of the Council of Europe should be ratified.
  • Adjustments should be focussed on sectors wherever possible.
  • General (i.e. horizontal) regulation only in "key areas relevant to fundamental rights, such as data protection".
  • Non-binding self-declaration agreements and industry solutions are to be used.
     

Council of Europe Convention on AI

In essence, the adaptation of Swiss law will therefore be about what the AI Convention requires in terms of amendments to Swiss law. However, the AI Convention itself is formulated in very general terms and leaves states a lot of room for manoeuvre when it comes to implementation (see extract from the AI Convention for an illustration). We are already familiar with this concept from data protection: the revision of the Swiss Data Protection Act (DPA) followed the same pattern, i.e. it was also an implementation not of the EU's GDPR, but of the Council of Europe's Convention on Data Protection, which required the contracting states to implement the principles set out in the Convention in national law in a similarly general manner.

The very generic provisions of the AI Convention have led to corresponding criticism. The same applies to the fact that, in addition to certain research and development applications, the areas of national security and national defence are completely excluded. In terms of content, the AI Convention has certain overlaps with the EU AI Act, which means that the EU member states may also have to make further adjustments to their legislation in order to fulfil the requirements. The AI Act is a product safety law that specifically regulates the placing on the market and use of certain forms of AI, while the AI Convention provides the contracting states with general guidelines for the use of AI where it affects human rights, democracy and the rule of law. However, the definition of AI in the EU AI Act and the AI Convention is very similar – even if it is not really practicable in either case (which we will discuss separately).

Implementation in Switzerland: private sector only affected to a limited extent

It is not yet clear how ambitiously Switzerland intends to implement the AI Convention. However, it is clear that many of the requirements are already fulfilled in Swiss law, with its usually technology-neutral, principles-based enactments, meaning that no adaptation is necessary. It is also clear that these provisions of Swiss law, which are already applicable to AI today, also largely cover the areas excluded from the AI Convention. For example, the DPA also applies in principle to the Federal Intelligence Service and the Swiss military.

It is clear that the AI Convention must be implemented for the public sector. In addition to the Federal Government, which only speaks for itself, the cantons will therefore also have to carry out an analysis if Switzerland ratifies the convention as expected. The most burning question, however, will be the extent to which the implementation of the AI Convention leads to new rules for the private sector. This is only necessary where the use of AI entails risks to human rights, democracy or the rule of law, which at first glance eliminates many applications. The Federal Government's legal analysis concludes that the Convention must (only) be applied in the private sector "where a direct or indirect horizontal impact on fundamental rights exists or will be recognised in the future" (p. 23); the AI Convention itself does not require such an extension to the private sector. Examples of such impacts are the duty of equal pay in the employment relationship, the provisions on racial discrimination, the regulations on data protection and those on surveillance in the workplace. Many areas, however, will not be affected at all. The legal analysis cites faulty robotic lawnmowers or smart fridges whose AI sets the temperature incorrectly as examples of applications that are not covered, and assumes that cases of economic damage due to faulty AI do not a priori constitute a violation of fundamental rights and are therefore not affected.

No sanctions planned?

In many areas, the legal analysis assumes that no adjustments will be necessary. For example, the AI Convention also requires privacy protection in the area of AI use. However, the Federal Government assumes that this is already "relatively well covered" by the existing DPA, even if the report (erroneously and without in-depth clarification) assumes that the development of AI in particular "very often conflicts with the general principles" of the DPA.

The AI Convention allows bans on AI, such as those provided for in the EU AI Act, but does not require them. In Switzerland, new rules imposing such bans are likely to be discussed in individual areas at most: certain applications, such as state social scoring on the Chinese model, are already prohibited in Switzerland today. The legal analysis merely raises the question of whether Switzerland wants to ban, in line with the EU AI Act, AI applications that are not yet prohibited in principle; emotion recognition in the workplace is the only example it cites.

With regard to the supervision of AI, the statements in the report to the Federal Council suggest that there are no major ambitions: supervision is to be carried out by the existing players (FINMA, FDPIC, ComCom, etc.), who are, however, to be given additional powers of investigation to the extent necessary in the area of AI; additional sanctioning or decision-making powers, on the other hand, are apparently not intended.

Adjustments we have to expect

The Federal Government's legal analysis assumes that Swiss law already covers many of the AI Convention's requirements. However, the analysis identifies a specific need for adaptation in the following cases in particular:

  • The provisions of the DPA on automated individual decisions should be extended to semi-automated decisions. The DPA currently stipulates that individuals must be informed when a machine makes an important individual decision. They also have the right to request that this decision be reviewed by a human, and the right to information is also affected. Even before the Federal Government's report, there were political calls to extend these regulations (which are not very relevant in practice) to cases in which a decision is made by a human but is largely prepared by a machine. One example is the case in which an AI makes an analysis or pre-selection, which a human then confirms or – to put it negatively – merely "rubber stamps". Today, this falls through the cracks of the special rules restricting machine-based decisions (which, of course, do not only apply to AI-supported systems). We assume that such a regulation will be introduced. We also assume that an obligation to justify such decisions could be introduced where it does not already exist. In the area of private law, however, this is only likely to occur in the aforementioned cases of direct or indirect effect on fundamental rights, for example in certain areas of labour law.
  • In any case, we expect an obligation to report and keep a register of the use of AI in the public sector: everyone should know where state actors are using AI. Some such projects already exist. The canton of Zurich, for example, is working on a register of "algorithmic decision-making systems", which, incidentally, is a much more sensible way to frame problematic use cases than focussing on the unclear term "AI" and thus creating regulatory gaps. If the Federal Council wants to implement the AI Convention ambitiously, it will propose an obligation to keep an AI register or even a reporting obligation for individual areas of the private sector. While keeping a register of relevant AI is already part of good governance, at least in larger or regulated companies, and is already practised in some cases in fulfilment of the DPA (where personal data is concerned), an actual AI reporting obligation for private individuals would be overstepping the mark. We do not expect this, or at most in a few areas.
  • The introduction of an obligation to identify at least certain AI-generated content is to be expected. The legal analysis identifies gaps here, as Swiss law does not generally provide for an identification or disclosure obligation. The EU AI Act, in contrast, provides for various obligations in this area, although these go quite far and are therefore likely to prove impractical in this breadth. Certain manifestations, such as deep fakes of real people, are already covered by existing Swiss law, such as Art. 179decies Swiss Penal Code, albeit with a different objective. We could imagine that Art. 3 Unfair Competition Act would be expanded to include a further special offence. However, such a deep fake identification requirement would not apply to disinformation in the political sphere, for example; a regulation would have to be created elsewhere, for example in criminal law, if there is a political will to do so.
  • An obligation to carry out an impact assessment when using AI should be introduced, i.e. an obligation to assess the possible undesirable negative effects of an AI project on affected persons in advance and to consider whether and how these risks can be further reduced or eliminated with appropriate measures. In this context, there is also sometimes talk of "fundamental rights impact assessments". Conceptually, this always means roughly the same thing; the EU AI Act also provides for this in certain cases of high-risk AI systems. The DPA also recognises this type of risk assessment (Data Protection Impact Assessments, DPIA), albeit with a focus on the processing of personal data. This is not the same, but it is similar and, in our experience, can be combined well (we have already provided such an impact assessment for larger or riskier AI projects in our GAIRA tool). The legal analysis also identifies a need for coordination with the DPA so that the obligated bodies are not overburdened. We assume that such an impact assessment will be mandatory for state actors and that, depending on the ambition of the legislator, certain private sector projects will also be subject to such an obligation. This also seems sensible to us; risk impact assessments are now part of the standard practice for the good governance of relevant projects, and they can very easily be expanded to include the dimension of the consequences for affected persons (in addition to those for the company), insofar as they are not already being assessed.
  • Protection against discrimination is likely to be a hot potato because Swiss law does not provide for general protection against discrimination in the private sector. The legal analysis is correspondingly cautious in this regard and points out that it must be examined whether there is the political will to adapt Swiss law to that end; this would not only affect the use of AI. The analysis points out that one measure to protect against discrimination in view of the AI Convention's requirements could be to make discrimination a mandatory topic of the aforementioned impact assessment and thus address it without turning the basic concepts of Swiss law upside down.
  • It can be assumed that an obligation to document AI will be introduced that goes further than what is currently required. The AI Convention requires such documentation and even if Swiss law provides for isolated documentation obligations (such as the register of processing activities in the DPA), these cannot fulfil the requirements if the AI Convention is ratified. While it seems clear to us that state actors must be prepared for a corresponding obligation, it is still unclear whether and to what extent private organisations must also document their AI systems. This is under discussion in particular in cases where the use of AI can have a significant impact on the human rights of affected individuals. There will not be very many cases; it will probably be proposed for certain applications that are considered high risk under the EU AI Act (such as AI assessments by employers).

The legal analysis also mentions sandboxes, i.e. in simple terms a legal regulation that exempts pilot projects from certain restrictions and sanctions under the rules that would otherwise apply to them. Such exemptions already exist in the (albeit more strictly regulated) state sector, but hardly at all in the private sector. Even if the idea appears to make sense as an instrument for promoting innovation at first glance (the EU AI Act also provides for it), in our experience these instruments have little relevance in practice. Unfortunately, as in the case of the EU AI Act, they are often just a smokescreen intended to make otherwise excessive regulation – and the associated burden for start-ups, SMEs and innovators – appear softer at first glance. They also require expertise, a willingness to provide support and courage on the part of the supervisory authorities, which in our experience is unfortunately regularly lacking.

Relevance of the EU AI Act

The legal analysis also examines the EU AI Act and its impact on Switzerland with regard to the need for corresponding adjustments to Swiss law. In particular, it sees trade barriers for Swiss companies that want to supply AI products to the EU and require a conformity assessment and a representative in the EU, at least in the area of high-risk applications. As the update of the agreement between Switzerland and the EU on the mutual recognition of conformity assessments (MRA) has been blocked by the EU for several years for political reasons, Swiss suppliers will have to obtain any necessary conformity assessments from the EU instead of Switzerland.

In our view, however, this is not really a problem as long as these providers do not also need a conformity assessment for Switzerland and therefore face double the workload. In our current assessment, there will be no future requirement for a conformity assessment in Switzerland simply because AI is involved, and where a conformity assessment is required anyway (e.g. for medical devices), the existing unsatisfactory situation will continue for the time being.

If Switzerland were to extend the MRA to include AI, it would logically have to adopt the corresponding product requirements of the EU AI Act, but this does not seem to make much sense and is not planned by the Federal Council. It therefore remains the case that providers who want to launch their high-risk AI system on the market or use it in the EU will have to comply with the AI Act. Those who only do so in Switzerland or other countries will not. However, while many companies will want to enter the EU market with their product, in the majority of cases they will not be offering high-risk AI systems, which is why the problem can be put in perspective.

Other areas of law

The legal analysis also sheds light on other areas of law, although not much is to be expected here overall:

  • Copyright: The mountain laboured and brought forth a mouse. The analysis comes to the conclusion that many questions remain unresolved (by the courts) and that there are correspondingly different positions. However, no concrete recommendations or options for adaptation are formulated. Everything remains open, and at the same time the impression is created that there is no real will to change current copyright law, for example to align the research exemption with the EU's TDM exemption. The sectoral analysis, in turn, shows that the topic of AI is to be excluded from the current revision of the ancillary copyright law because the majority does not want the two topics mixed up.
  • Patent law: The analysis sees no need for adjustment here.
  • Liability law: The analysis looks at the EU's AI Liability Directive but comes to the conclusion that it remains to be seen how this will develop. In any case, the European Commission seems to have buried it (at least for the time being) anyway. There are therefore no signs of any adjustments of Swiss law here either. This also applies to the occasionally discussed amendment of the Code of Obligations with regard to the classification of AI as an "associate" within the meaning of Art. 101 CO. This proposal is rejected in the legal analysis. Product liability for software is also addressed and reference is made to the discussion that has been ongoing for some time, but the issue is ultimately left open.
  • Criminal law: The analysis sees no need for adjustment here.
  • Labour law: In general, the analysis only identifies a need for selective adjustments. Among other things, it discusses the EU's efforts to regulate "platform work" (i.e. online platforms on which employers and jobseekers come together), but this seems to us to be a side issue. The analysis also raises the question of whether the ban on biometric emotion recognition in the workplace and possibly the rules on applications classified as high-risk, in which applicants and employees are assessed by AI, should be adopted from the EU AI Act. We do not expect this. In fact, the requirements of the EU AI Act, which primarily affect the providers of corresponding systems, will also have an impact in Switzerland: Swiss companies are unlikely to buy products that were not also made for the EU market and therefore do not have to be EU AI Act-compliant, and Swiss providers will have to comply with the Act for the reasons already mentioned.
     

The Federal HR Office's plans and other findings of the sectoral analysis

In addition to these general, horizontal adjustments, the Federal Government's sectoral analysis also lists various other adjustments to adapt current law where necessary with regard to the use of AI or to identify where this is necessary in the first place. Some examples:

  • A survey by the Swiss Federal Office of Energy indicates, among other things, that the AI definition in the AI Act is too broad and that guidance is required, especially for the application of existing law.
  • Regulations on the authorisation of systems for autonomous driving, although the legal framework still needs to be developed internationally.
  • In the financial sector, the State Secretariat for International Financial Matters (SIF) is still in the process of drafting a report, whereby the aim is not to regulate technology but to (continue to) work with technology-neutral principles. FINMA is currently developing its AI supervisory practice based on existing law.
  • Among other things, the Federal Tax Administration wants to create new legal bases for profiling, i.e. automated assessments of aspects of natural persons, because this is no longer possible without such a legal basis.
  • Among other things, the Federal Price Monitoring Office is considering to what extent price fixing can be carried out using AI applications and whether there is a need for action here.
  • The Federal Office of Police is addressing AI applications in connection with two projects, one in the area of automated fingerprint identification and one in the area of police information systems, for example in relation to the comparison of images.
  • The endeavour to create a Swiss version of the Digital Services Act is also being sold as an AI topic because the large platform providers being targeted, such as Google or Meta, also use AI on their platforms.
  • The Federal Office of Public Health is still in the process of drawing up an overview for the healthcare sector.
  • An "insider tip" for curious observers seems to be the Federal HR Office. It wants to amend the Federal Personnel Act in order to be able to carry out profiling and high-risk profiling in employment relationships using AI, for example, to determine whether a person is suitable for a particular job. It should even be possible to include information from social networks; a data protection impact assessment has already been carried out. It is also of the opinion that an automated selection of applicants for an initial interview is not an automated individual decision. So, it clearly has big plans for AI and is positioning itself in preparation.

The overview shows how different the sectors and parts of the administration are.

What companies in Switzerland need to do

The Federal Council's announcement does not entail any need for action in the short or medium term for the private sector. It will be several years before the above-mentioned or other adjustments are implemented. However, the use of AI is already not taking place in a legal vacuum, and in regulated sectors in particular, the authorities – above all FINMA – are already working hard to establish the necessary guidelines and ensure a certain level of order and accountability based on existing law.

While the call for the "ethical" use of AI was still being propagated in the initial phase of the current "AI hype" (probably mostly out of sheer uncertainty), many companies have since come down to earth: at most, there is still talk of "responsible" AI, and even this is mainly for reputational reasons or because companies are eager to have principles that cover more than just one jurisdiction. In addition, AI projects, like any other project, are scrutinised for their compliance with applicable law and foreseeable regulations, in particular the EU AI Act. The main difficulty we see here in practice is the poor quality of the EU AI Act, which leads to numerous unanswered questions and contains rules that are sometimes not well thought through and therefore not practical, forcing companies to take risk decisions they would rather avoid. The initial serious concerns in other areas of law that are particularly affected by the use of AI, such as data protection, have meanwhile subsided somewhat.

However, it is also clear that many companies are still struggling to bring a certain order and discipline to their AI projects, to gain an overview and to properly review the performance of their suppliers and internal development teams. Those who do this conscientiously will already fulfil several of the requirements discussed above, namely (i) the list of AI applications, (ii) the documentation of AI applications that are at least somewhat riskier and (iii) impact assessments, which we are already carrying out for several clients today.

The regulation of semi-automated decisions could be far-reaching, as there are many of them and the distinction between relevant and irrelevant cases will still pose some challenges in practice. It also has the potential to become the generally unwanted (excessive) "Swiss finish", because the EU does not yet have such a regulation. A company can prepare for this issue by already trying to identify and regulate such decisions in its projects and subjecting them to quality control.

To the critics of the Federal Council's cautious approach, it should be pointed out that even today there is no legal vacuum in the field of artificial intelligence. The issues that arise in practice today can in many cases already be well addressed by existing law, since Swiss law has always been principles-based (as opposed to certain foreign laws). In the discussion, this is all too often ignored or confused with enforcement deficits. Data protection in particular is of great importance. Admittedly, it only protects individuals from AI where the processing of their data is concerned, but this is often the case with issues of discrimination, the accuracy of the basis for decisions and partially automated decisions. Where it is not, for example when it comes to deception by AI-generated content, other provisions of applicable law may apply, such as those of unfair competition law. The EU AI Act, on the other hand, will protect us much less from negative developments than is generally assumed, because it is simply not a general AI regulation. For example, it does not help against a growing technological dependency on individual market players; it sometimes even encourages this by raising market entry barriers. It therefore seems sensible to gain a little more experience and reflect on whether and where this technology should be regulated on top of what we already regulate.

David Rosenthal

This article is part of a series on the responsible use of AI in companies:

We support you with all legal and ethical issues relating to the use of artificial intelligence. We don't just talk about AI, we also use it ourselves. You can find more of our resources and publications on this topic here.
