4 March 2025

Part 24: Five trigger questions for better AI compliance

If companies want to keep their AI compliance under control, they need to be informed about new or changed AI applications. But how can employees without in-depth legal knowledge tell when things are getting tricky? In this blog post no. 24 of our series on the responsible use of AI in companies, we have put together five trigger questions for exactly this purpose.

From a legal perspective, when using AI it is particularly important to check whether the relevant rules in the areas of data protection, confidentiality, intellectual property and AI regulation are complied with (see our blog post no. 18). This applies in principle to every AI application, even if it is carried out with existing tools that are authorised within the company. For example, anyone who uses ChatGPT to analyse an applicant's CV will automatically make their employer the provider of a high-risk AI system as soon as the corresponding provision of the EU AI Act becomes applicable. This entails numerous obligations that an employer certainly does not want.

Where are the red lines?

Companies must therefore ensure that their employees know when they may be crossing a red line when using AI and when an in-depth review of the application is necessary. One option is to provide them with questionnaires that cover the cases regulated by law. There are also dedicated checking tools, such as the free "AI Act Check" in our AI risk tool GAIRA. However, this may be too much to ask of many employees.

We have therefore developed five trigger questions that employees can use to recognise whether their AI use case may require further clarification, even without in-depth knowledge of AI:

  • Will we use a new provider and process personal data or our own or third-party secrets?
  • Will we use third-party intellectual property to train AI or to let AI generate output we will use externally?
  • Will we use AI to identify or analyse people based on their features or behaviour?
  • Will we use AI to assess people at work or in education or influence people unknowingly?
  • Will we use AI for decisions or functions that could impact people's lives or safety?

If the answer to one of these questions is "yes", the use case itself should be examined more closely, even when existing tools authorised in the company are used (and all the more so if tools are to be used that have not been approved beforehand). The questions are designed with the GDPR, the Swiss Data Protection Act, copyright law, the protection of professional and business secrets and the EU AI Act in mind.
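To make the triage concrete: below is a minimal sketch in Python of how such a five-question intake check could be wired into an internal form or tool. The labels, the function name needs_review and the overall structure are our hypothetical illustration of the rule above (a single "yes" is enough to escalate); this is not part of GAIRA or any actual product.

```python
# Minimal sketch of the five trigger questions as an intake check.
# Labels and structure are hypothetical illustrations, not GAIRA.

TRIGGER_QUESTIONS = {
    "new_provider_sensitive_data": "New provider plus personal data or own/third-party secrets?",
    "third_party_ip": "Third-party IP for training or for externally used output?",
    "identify_analyse_people": "Identify or analyse people by features or behaviour?",
    "assess_or_influence_people": "Assess people at work/in education, or influence them unknowingly?",
    "impact_lives_or_safety": "Decisions or functions that could impact lives or safety?",
}

def needs_review(answers: dict[str, bool]) -> bool:
    """A single 'yes' triggers an in-depth review of the use case,
    even when only approved tools are involved."""
    missing = TRIGGER_QUESTIONS.keys() - answers.keys()
    if missing:
        raise ValueError(f"Unanswered trigger questions: {sorted(missing)}")
    return any(answers[key] for key in TRIGGER_QUESTIONS)

if __name__ == "__main__":
    # Example: an employee plans to use AI to assess applicants' CVs.
    answers = {
        "new_provider_sensitive_data": False,
        "third_party_ip": False,
        "identify_analyse_people": False,
        "assess_or_influence_people": True,  # assessing people at work
        "impact_lives_or_safety": False,
    }
    if needs_review(answers):
        print("Escalate: have legal/compliance review this use case.")
    else:
        print("No trigger hit: standard rules for approved tools apply.")
```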

The last three questions are drafted so that they roughly cover the prohibited and high-risk AI applications under the AI Act. The relevant details can be found in the latest edition of our GAIRA tool, which contains further explanations for each question as well as the mapping to the individual cases regulated in the AI Act. GAIRA also contains a further question aimed at the cases in which the AI Act prescribes certain transparency obligations. We have deliberately omitted this additional question here because, in our experience, these cases are not of practical relevance in most companies or are already covered by existing transparency requirements.

Further rules are necessary

Of course, these five trigger questions alone are not enough. The company will have to impose a few more basic rules on its employees. The three most important are:

  • Only use approved AI tools, and only with the kind of content for which they have been authorised.
  • Check the accuracy of AI results – each employee remains responsible for them as for their own work results.
  • Ensure appropriate transparency when using AI, for example when letting third parties interact with AI or when using deepfakes (but transparency is not required in every case, see our blog post no. 16).

In our blog series, we will soon be publishing a free template for a suitable directive for small and medium-sized companies that covers these points.

David Rosenthal

This article is part of a series on the responsible use of AI in companies.

We support you with all legal and ethical issues relating to the use of artificial intelligence. We don't just talk about AI, we also use it ourselves. You can find more of our resources and publications on this topic here.
