Regulating AI à la Suisse: Between innovation and uncertainty - No comprehensive AI Act

The key points of the decision are as follows:
- The AI Convention of the Council of Europe will be incorporated into Swiss law.
- Where legislative amendments are necessary to achieve this goal, these should be sector-specific as far as possible. General, cross-sector regulation shall be limited to central areas relevant to fundamental rights, such as data protection.
- In addition to legislation, legally non-binding measures shall be developed to implement the convention. These may include self-declaration agreements or industry solutions.
The decision was based on various studies and reports on the current legal situation, existing or planned sectoral activities and an analysis of the planned or enacted AI regulations in other countries.
The most interesting and relevant of these is certainly the document titled "Auslegeordnung zur Regulierung von künstlicher Intelligenz" ("Overview of the Regulation of Artificial Intelligence", dated 12 February 2025; https://www.bakom.admin.ch/dam/bakom/en/dokumente/KI/Auslegeordnung%20zur%20Regulierung%20von%20%20k%C3%BCnstlicher%20Intelligenz_def.pdf.download.pdf/Overview.pdf). It is a comprehensive report by the Swiss Federal Office of Communications (BAKOM), prepared in collaboration with the Federal Department of Foreign Affairs, which provides an in-depth analysis of possible regulatory approaches for artificial intelligence (AI) in Switzerland.
The report outlines three possible regulatory options:
1. Status Quo (Sectoral Regulation)
a) Continue current sector-specific regulations.
b) No overarching federal AI regulation.
c) Risks lack of coordination on cross-sectoral issues.
2. Ratification of the Council of Europe’s AI Convention
a) With minimal or extensive implementation.
b) Addresses rights protection, transparency, and risk assessment.
c) Stronger obligations for the public sector, potential soft regulation for private actors.
3. Adoption Aligned with the EU AI Act
a) Introduce a risk-based regulatory framework.
b) Harmonizes with the EU, easing market access.
c) Implies significant regulatory complexity and cost.
The Federal Council has decided to follow the second option. The main goals of the chosen approach are:
- Strengthen Switzerland’s innovation landscape.
- Safeguard fundamental rights and economic freedoms.
- Build public trust in AI systems.
Open questions and legal uncertainty
Although the Federal Council's approach is supported by many stakeholders in Switzerland, there are voices, including the authors of this article, who emphasize that implementing this approach will be more complex than it appears on paper.
One fundamental problem is that AI regulation is not merely a matter of product safety law. AI systems must be safe, but that is not all. If it were only a matter of product safety, technical safety standards could simply be established in line with other product legislation, so that compliance with these standards would be presumed to ensure legal conformity. However, AI regulation also concerns fundamental rights and other ethical considerations. These cannot simply be dealt with by technical standards. Building public trust requires a certain degree of regulation. Consumer trust is not solely based on rationality. Why do more people still trust a doctor than a medical robot, even though the doctor may be exhausted or have personal problems and might therefore not be fully focused?
As a result, even with light regulation, companies will still have to weigh up the risks of their AI systems. Companies often ask for a risk-based or principle-based approach, but then, for reasons of legal certainty, demand very detailed guidance from the authorities, which ultimately leads back to a rule-based approach. Consequently, legal certainty requires a certain degree of regulation. It is therefore unclear how large the differences between the Swiss approach and more comprehensive regulation will actually be, once the guidelines issued by authorities within the framework of sectoral regulation are also taken into account.
Another problem is the Federal Council's silence on harmonization with specific EU regulations and on the further development of the Mutual Recognition Agreement (MRA) between Switzerland and the EU, which ensures harmonized conformity assessment. It is true that the adoption of the EU AI Act alone would not guarantee easier market access; without harmonization, however, easier market access will definitely not materialize. The EU has harmonized various product regulations relevant to the Swiss economy with the EU AI Act (e.g., the Machinery Ordinance, the Product Safety Ordinance), and certain technical standards for machinery have been supplemented, or are still being supplemented, by AI-relevant standards. Without the adoption of these standards and revisions, an update of the MRA is out of the question. This is already causing problems, as the MRA will soon no longer apply to some revised regulations, such as the Machinery Ordinance. As a result, Swiss companies may have to carry out two conformity assessment procedures because the EU no longer recognizes the Swiss ones. This, too, leads to legal uncertainty. It may be that the studies did not comment on this because the cooperation between the EU and Switzerland will in any event be clarified politically in the coming years: Swiss voters will vote on whether to accept the framework agreement concluded between the EU and Switzerland, which regulates, among other things, the harmonization of product regulations. The framework agreement is highly controversial and the outcome will certainly be close. If the framework agreement is rejected, (mutual) harmonization will be off the table anyway.
Legal uncertainty also arises from the fact that the implementation and use of AI in Switzerland must currently comply with legal provisions that were not specifically enacted for AI. On the one hand, it is therefore difficult to assess which regulations need to be observed. On the other hand, even where the applicable laws are clear, their established interpretation does not necessarily carry over to AI. Typical examples are existing tort law – it is, for instance, difficult to establish the causal link between the use of AI and a financial loss, and equally difficult to determine who the wrongdoer is – and intellectual property law: is the training of AI with third-party IP-protected works permitted? How can companies prevent the output of an AI system from containing third-party works? It is also widely discussed that reconciling data privacy laws – in particular the data processing principles – with the training, implementation and use of AI systems is not straightforward.
Certain authorities are attempting to impose requirements on regulated companies. For example, the Swiss Financial Market Authority (FINMA) has issued guidance on the use of AI. FINMA has opted for a principle-based approach and highlighted important issues (e.g., data quality) that financial companies must take into account. However, FINMA has not issued any detailed or specific guidelines on how financial companies should implement these basic principles. Regulation therefore exists in a gray area between entrepreneurial freedom and legal uncertainty.
For lawyers and in-house counsel advising companies on AI projects under Swiss law, this legal uncertainty usually means that risk assessments for clients are based on the risk criteria set out in the EU AI Act, even where it is not applicable – and it should be emphasized that many AI use cases of Swiss companies may well be subject to the EU AI Act. The risk approach of the EU AI Act is not followed slavishly, but its assessments serve as a benchmark: with a view to public trust and to the protection of fundamental rights, use cases prohibited under the EU AI Act are generally not welcome under Swiss law either. At the same time, Swiss companies take additional ethical considerations into account and weigh potential reputational and liability risks. Nor do Swiss companies slavishly follow the EU AI Act when it comes to implementing specific obligations (such as transparency obligations), especially since it is still quite unclear how certain obligations in the EU AI Act are to be implemented. Many current client use cases are not for high-risk purposes. The focus of legal assessments and measures is therefore on compliance with existing laws, i.e., whether data protection can be guaranteed, whether sensitive data may be entered, and how matters stand from an intellectual property law perspective.
Next steps in the implementation of the Swiss AI approach
- The responsible Federal Offices will prepare a draft for public consultation by the end of 2026. This aims to implement the AI Convention of the Council of Europe by defining the necessary legal measures, particularly in the areas of transparency, data protection, non-discrimination and supervision.
- The consultation might take some time. In addition, the parliamentary deliberation of new legislative projects often takes two to four years. The new statutes or revised provisions in existing statutes will most likely not enter into force before 2030. This means that, until then, companies will need to cope with the legal uncertainties mentioned above.
- By the end of 2026, the Federal Offices should also draw up an implementation plan for the other measures not laid down in legislation. According to the press release, this ‘will also take particular account of the compatibility of the Swiss approach with those of its main trading partners.’
Read the full press release here: news.admin.ch/en/nsb?id=104110
Read more information about the strategy, including links to the various studies, here: Artificial Intelligence
Article provided by INPLP member: Michael Reinle and Lukas Bühlmann (MLL Legal Ltd, Switzerland)
Discover more about the INPLP and the INPLP-Members
Dr. Tobias Höllwarth (Managing Director INPLP)