12 March 2024

Part 9: AI in marketing: 4 unfair competition law pitfalls

Artificial intelligence can help companies automate their marketing processes and make them more efficient. It can also be used to improve the customer experience online; examples include chatbots as digital sales consultants and personalised advertising. However, there are various unfair competition law pitfalls to consider when using AI in marketing. Part 9 of our AI series provides an assessment under Swiss law with a special focus on chatbots.

Artificial intelligence (AI) has long since made its way into companies' marketing departments, and it is about more than just automating processes to increase efficiency. According to Forbes, predictive analytics, and the use of the resulting trends and patterns to optimise marketing strategies, will remain important this year. At the same time, the interaction between humans and AI is becoming more important: decision-makers in companies should avoid relying solely on "set it and forget it" automation strategies and instead adopt a more collaborative approach.

AI-supported data analysis (tools for target group analysis such as media monitoring or social listening) can be used to better understand and anticipate customer behaviour. Such systems create the basis for displaying personalised advertising and prices to e-commerce customers fully automatically. Finally, there are also AI-supported tools for analysing key performance indicators (KPIs) that can be used to measure the success of advertising campaigns.

However, the use of AI in marketing goes beyond data analysis. Customer communication can also be automated with AI-supported chatbots. Ideally, customers receive a better experience; companies, for their part, first and foremost save costs. Chatbots also allow companies to address customers in a more targeted and subtle way than, for example, advertising banners, and to persuade them to make a purchase. In the course of a conversation, the chatbot can explain the features of a particular product in more detail and suggest additional or alternative products that the customer may like as much or even better. The company ultimately gains an even better and more authentic picture of its customers.

To what extent may the use of AI in marketing influence customers, and where does Swiss law draw the line at unfair behaviour?

What does Swiss (competition) law say?

The Swiss Federal Act against Unfair Competition (UCA) does not (yet) specifically regulate the use of AI in commercial communication. However, anyone who thinks that marketing and advertising using AI are therefore not yet covered by the law is mistaken: even without specific AI regulation, many cases are already covered by existing provisions.

The unfair competition law issues that arise, however, have so far received little discussion.

In contrast, data protection and intellectual property aspects in connection with the use of AI have already been and are being discussed more intensively, and a certain sensitivity can already be observed within companies in this regard. These topics are the subject of other articles in our AI series (see part 2, part 4 and part 10 of our AI series).


Pitfall 1: AI-generated statements

You use a chatbot to address customers directly. You can (and should) ensure that the chatbot accesses a data set that is always up to date. However, you do not have any actual control over the statements it generates.

In terms of competition law, this can be problematic insofar as anyone who makes false or misleading business-related statements is acting unfairly (art. 3 para. 1 letter b UCA). The fact that there is always a risk of inaccurate statements with a chatbot is now generally recognised and must ultimately be accepted by users. To clarify this, it is advisable to include a corresponding disclaimer in a suitable place.

However, the boundary to unfair behaviour is crossed if the company has demonstrably designed the chatbot to make false or misleading statements. This could be the case, for example, if a company does not connect the chatbot to its own inventory management system, but instead feeds it with false or outdated data so that it then falsely states, for instance, that only one unit of a certain product is still in stock.

In order to minimise the risk of the company being bound by certain statements made by the chatbot, it should be made clear to users/customers in the terms of use what the chatbot may – and what it may not – serve as (e.g. chatbot as a marketing tool, but not as a source of legal information).
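The two recommendations above (a visible disclaimer and a clearly communicated scope for the chatbot) could be enforced in the chatbot front end itself. The following is a minimal, purely illustrative sketch; all names (`DISCLOSURE`, `DISCLAIMER`, `wrap_reply`) are hypothetical, and the actual wording of any disclosure or disclaimer should of course be drafted by counsel, not copied from here.

```python
# Hypothetical sketch: wrap every chatbot reply with an AI disclosure
# (shown once, at the start of the conversation) and a standing
# accuracy disclaimer. Names and wording are illustrative only.

DISCLOSURE = "You are chatting with an AI assistant, not a human."
DISCLAIMER = (
    "Answers may be inaccurate and serve marketing purposes only; "
    "they are not legal or binding product information."
)

def wrap_reply(reply: str, first_message: bool) -> str:
    """Prepend the AI disclosure on the first message of a conversation
    and append the accuracy disclaimer to every reply."""
    parts = []
    if first_message:
        parts.append(DISCLOSURE)
    parts.append(reply)
    parts.append(DISCLAIMER)
    return "\n".join(parts)

print(wrap_reply("This product is currently in stock.", first_message=True))
```

Keeping the disclosure and disclaimer in the delivery layer, rather than relying on the language model to produce them, ensures they appear regardless of what the model generates.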

Ultimately, however, users/customers themselves also bear responsibility here: anyone who deliberately misuses the chatbot to mislead it into making false, misleading or otherwise problematic statements will not be able to hold the company that operates it liable for such statements.

Pitfall 2: Advertising nature and transparency obligation

If a chatbot or a computer-generated image is used for marketing purposes, the advertising nature must be clearly recognisable. If the customer is misled about said nature, this would be an act of unfair competition (art. 2 UCA).

In practice, however, there are hardly any conceivable cases in which this character would not be apparent from the circumstances (so that the advertising would have to be explicitly labelled). If, for example, the chatbot in e-commerce is designed as a type of digital sales consultant or the computer-generated image is associated with the presentation of a new product, the advertising character is evident.

Beyond this, the question arises as to whether companies need to be transparent about the fact that they are using AI.

There is currently no such transparency obligation under Swiss competition law. However, behaviour that is generally deceptive or otherwise in breach of the principle of good faith and that influences the relationship between competitors or between suppliers and customers is unfair (art. 2 UCA). When designing a chatbot, it is therefore important to avoid giving users the false impression that they are communicating with a human being. The same applies if a computer-generated image is used for advertising purposes: think, for example, of a hotel advert showing a computer-generated, dreamlike landscape that is designed to look realistic but differs significantly from the actual surroundings of the hotel, thus making the hotel appear more attractive than it really is.

Whether a corresponding reference to the AI used (e.g. "image AI-generated") is required from a competition law perspective depends on the circumstances of the individual case. As a rule of thumb, such a reference is required if the use of AI is suitable (or even intended) to have an unfair impact on market conditions. In the hotel advert example above, this would be the case, as the advertiser obviously wants to use the computer-generated image to present itself as more attractive than it actually is. In case of doubt, the use of AI should be made transparent.

Regulation in the EU is already at an advanced stage: the so-called AI Regulation, which is expected to enter into force in the course of 2024 (see the EU's legislative tracker and part 7 of our AI series), contains more specific transparency provisions. Article 52 of the current draft provides for transparency obligations for certain AI systems. For example, there will be a legal labelling obligation in the EU for chatbots (unless it is obvious that the user is not communicating with a human) and for computer-generated images that resemble real people, objects or places and falsely appear to be real or truthful (so-called deep fakes).

Pitfall 3: Borrowing from competitors and exploiting the work of others

If AI is used to create adverts, the public may think the style and presentation of the advert is that of a particular company, when in fact it is an advert from a competitor.

It is conceivable that the advertiser deliberately strives for such imitation. However, the imitation may also be unintentional, namely if the AI system used repeatedly draws on similar or even the same design elements as those already found in the advertising of others (with similar or even identical results). The advertiser must also answer for such unintentional imitation, especially since it is in the nature of AI that it learns from and builds on existing elements and is not "creative" on its own.

Under unfair competition law, this can be problematic in three respects. Firstly, anyone who generates their advertising (exclusively) with the help of AI risks creating an unfair likelihood of confusion with competitors and their market presence (art. 3 para. 1 letter d UCA). Secondly, they risk making unfair comparisons with competitors (art. 3 para. 1 letter e UCA). Thirdly, the output generated by the AI system may come very close to a market-ready work result of a third party without reasonable effort of the advertiser's own. Depending on the specific circumstances, this could constitute unfair exploitation of another person's work within the meaning of art. 5 letter c UCA. The extent to which the processing of content as part of AI training can be regarded as "reasonable own effort" will have to be determined in practice.

In order to minimise the risk of such infringements of unfair competition law, fully AI-generated advertising should be avoided. This also reduces other risks at the same time, namely the risk of infringing third-party copyrights and the risk of inappropriate (e.g. sexist or racist) advertising that exists with certain AI systems which are prone to stereotyping.

Pitfall 4: AI-supported targeting

A chatbot acting as a digital sales consultant can be designed in such a way that it subliminally steers the customer: the purchase decision is then based not on the service in question, but on the sales method used, which makes the customer feel compelled to conclude a contract. One may think here of AI-supported data analysis, which can be used to better understand and anticipate customer behaviour and to show the customer personalised advertising on that basis.

Depending on the circumstances, such targeting may constitute a "particularly aggressive sales method" within the meaning of art. 3 para. 1 letter h UCA, which unfairly impairs the customer's freedom of choice. In practice, however, cases in which AI-supported targeting is particularly aggressive in this sense are likely to be rare (for example, where the chatbot impairs the customer's freedom of choice by surprising, pressurising, coercing or harassing them, such as through so-called dark patterns).

The chatbot should be designed in such a way that the risk of such an impairment of the user's freedom of choice is minimised.

Who is liable for competition law infringements and what are the consequences?

If AI is used in marketing, the question arises as to who is liable if unfair competition occurs.

The advertiser is liable under unfair competition law if it uses AI as a tool in marketing. Individual employees can also be held liable for claims for injunctive relief, removal, determination, correction and publication of judgement (so-called negatory claims), provided that misconduct can be proven in individual cases (art. 11 UCA e contrario).

Anyone who commits an unfair competition law infringement exposes themselves to claims under civil, administrative or even criminal law from third parties such as competitors, consumers or authorities.

Actions for injunctive relief and removal are the most relevant in practice. However, competitors in particular are increasingly resorting to criminal proceedings. In the case of intentional offences, a prison sentence of up to three years or a monetary penalty may be imposed, the amount of which depends on the culpability and the personal and financial circumstances of the offender (maximum CHF 540,000). In principle, the person responsible for the competition law infringement (e.g. the head of marketing) is liable to criminal prosecution, and the company only in exceptional cases. A conviction also results in an entry in the Swiss criminal register (art. 16 para. 1 letter a in conjunction with art. 18 para. 1 Swiss Federal Criminal Register Act).

Various unfair competition law pitfalls lurk when using AI for advertising purposes. Advertisers are well advised to keep an eye on these pitfalls and, in particular, to design the chatbots used in such a way that the risks associated with them are minimised as far as possible.

Author: Jonas D. Gassmann

This article is part of a series on the responsible use of AI in companies:

We support you with all legal and ethical issues relating to the use of artificial intelligence. We don't just talk about AI, we also use it ourselves. You can find more of our resources and publications on this topic here.