5 March 2024

Part 8: AI in financial institutions – opportunities and challenges

AI can bring many advantages for financial institutions. Yet it should be remembered that the requirements for its utilization are higher than in non-regulated sectors. In this blog post, we explain what is important and what you should look out for. This is part 8 of our AI series.

Financial institutions deal with a lot of structured and unstructured data on a daily basis. The use of AI can make it easier to deal with this large amount of data or provide access to new data sources. AI promises to evaluate data in a much shorter time and reveal correlations that might otherwise have been impossible or very costly to uncover. As the Financial Stability Board pointed out in a 2017 study, the range of possible areas of application for financial companies is very broad. These include, for example:

  • Risk management and compliance
  • Investment and trading strategies
  • Creation of standard documents such as brochures and KIDs
  • Interaction with customers
  • Optimization of capital investment

AI in Risk Management and Compliance

Compliance requirements are increasing and tying up considerable financial and human resources. Financial institutions are therefore all the more interested in implementing cost-effective and efficient processes. AI-based applications can help to monitor changes in regulation or conduct compliance training.

Transaction monitoring is also an important use case for AI-based applications. This applies, for example, to the monitoring of transactions to combat and detect insider trading and market manipulation. They can also play a particularly important role in the area of fraud detection and prevention, e.g. credit card fraud and payments fraud.

In the area of anti-money laundering, banks are obliged to implement IT-supported monitoring. The effort required to check the resulting alerts is very high. Despite the great effort involved, there is a considerable risk that relevant correlations will be overlooked. AI-based applications can help here.
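To illustrate the kind of automated check on which such monitoring builds, here is a deliberately simplified sketch: a plain z-score rule that flags transactions deviating strongly from a customer's payment history. Real AML systems use far more sophisticated models, and the function name and threshold here are hypothetical, chosen only for illustration.

```python
# Illustrative sketch only: a toy transaction-monitoring rule.
# It flags transactions whose amount deviates strongly from the
# historical pattern; real AML systems combine many such signals.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of amounts more than `threshold` sample standard
    deviations away from the mean (a simple z-score rule)."""
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

history = [120, 95, 110, 130, 105, 98, 5000]  # one outlier payment
print(flag_anomalies(history, threshold=2.0))  # → [6]
```

Even in a real system, a flagged transaction is only an alert: it still has to be reviewed by a human analyst, which is exactly the support role of AI described above.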

In risk management and compliance, it is particularly important that processes are set up well from the outset before AI-based applications are used. In addition, employees must understand how these systems work. Otherwise, the process will not be optimized; on the contrary, there is a risk that results will be misinterpreted or that people will trust that "everything" has been checked.

It is also important to understand the limits of AI. AI-based applications are trained on specific data and only access certain sources. They therefore cannot completely replace case-by-case assessments based on individual clarifications by employees. AI-based applications can only support humans, not replace them.

AI in Trading and Investment Strategies

The European Securities and Markets Authority (ESMA) has examined how AI is used in the securities market in the EU. It concludes that there are numerous promising use cases, even if implementation still varies greatly. AI-based applications are increasingly being used, in particular to evaluate large volumes of unstructured data that would otherwise not be available to support decision-making processes. Examples include:

  • Evaluation of data in order to find and assess investment opportunities
  • Evaluation of public announcements to assess the ESG status of companies

In its report, ESMA emphasizes that financial institutions have so far been reluctant to disclose to customers that they are using AI. Experience shows that the black-box nature of AI prompts critical reactions. As a result, financial institutions hesitate to use such applications for more than back-office support. Alongside the question of profitability, the lack of customer trust is one of the key factors limiting the use of AI-based applications in robo-advisory.

AI in Supervision

In addition, financial market supervisory authorities have also begun to use AI to fulfill their own tasks. The Swiss Financial Market Supervisory Authority FINMA, for example, uses AI in data-based supervision, in particular in automated data evaluation for carrying out anomaly analyses. In its 2021 annual report, FINMA stated that it would use this, for example, in its Asset Management Division in relation to certain investment instruments.

Regulatory Requirements for the Use of AI in Financial Institutions

To date, there are no regulatory provisions in Swiss financial market law that explicitly deal with the use of AI. The general rules therefore apply. For banks in particular, the following is relevant:

FINMA expects supervised institutions to treat the risks associated with the use of AI appropriately. It examines the use of AI by supervised institutions using a risk-based approach and the principle of proportionality. It is therefore not the use of AI per se that leads to increased requirements for financial institutions, but the use of AI in areas in which it leads to additional risks.

Additional risks may arise especially from the following:

  • Complexity of algorithms, which makes it difficult to understand and verify the results
  • High scalability, which also scales the impact of possible errors
  • Combination of self-learning algorithms and new data, which means that models have to be recalibrated at short intervals
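The recalibration point can be made concrete with a toy drift check: a model trained on one data distribution should be re-examined once live data drifts away from it. This is an illustrative sketch only; the function name and threshold are hypothetical, and production systems use proper statistical tests (e.g. population stability index or Kolmogorov–Smirnov) rather than a bare mean comparison.

```python
# Illustrative only: a toy data-drift check of the kind that might
# prompt model recalibration. Threshold and names are hypothetical.
from statistics import mean

def needs_recalibration(train_values, live_values, rel_tol=0.25):
    """Signal drift when the live mean deviates from the training mean
    by more than `rel_tol` (relative difference)."""
    base = mean(train_values)
    if base == 0:
        return False
    return abs(mean(live_values) - base) / abs(base) > rel_tol
```

For example, `needs_recalibration([10, 10, 10], [20, 20, 20])` returns `True`, signalling that the model should be reviewed against the new data.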

FINMA's focus in its supervisory work is on data management, governance and the control environment of applications. According to the 2023 Risk Monitor, FINMA sees particular challenges in connection with:

  • The responsibility for AI decisions
  • The reliability of AI applications
  • The transparency and explainability of AI decisions, as well as
  • The equal treatment of customers.

The use of AI-based applications particularly requires the following:

  • The board of directors and management must have the appropriate know-how to decide on the strategies and guidelines for the use of AI-based applications and to monitor their implementation.
  • Control functions such as compliance, risk management and internal audit must have the necessary expertise.
  • Transparent communication within the financial institution, which not only addresses opportunities but also limits and risks in a way that is appropriate for the target group, must be promoted.
  • Probabilities and the potential extent of damage must be analyzed and risk-mitigating measures defined.
  • Sufficient documentation so that the use, including results, errors and error correction, can be traced internally, but also by the audit firm or FINMA.
  • Sufficient involvement of employees of the financial institution in the interpretation and implementation of AI-based results.
  • Emergency measures must be possible for business-critical applications.

AI as a Challenge for Financial Stability?

As mentioned previously, AI is not only associated with opportunities, but also with challenges. This applies not only to individual financial institutions, but also to financial markets as a whole. The Financial Stability Board has therefore issued a call for papers on the topic of "AI in Finance and its Financial Stability Implications" (deadline March 10, 2024). The Central Bank Research Association will address this topic at its annual conference in 2024.

The FSB recognizes that AI-based applications can lead to more efficient information processing and thus to a more efficient financial system. The FSB also sees AI as an opportunity to improve compliance and supervision. Risks to financial market stability could result in particular from the following (see FSB Study 2017):

  • Market concentration - the costs of new technologies can mean that many financial institutions (have to) rely on (a few) third-party providers.
  • AI-based applications can lead to unintended consequences.
  • Unintentional interconnections - If independent financial institutions access the same external data sources as a basis for AI applications, an error or an important change in this data source can influence the behavior of all these financial institutions at the same time. This results in herding behavior, which could have a significant impact on the financial markets due to scalability.


AI offers opportunities for financial institutions, but also presents challenges. Even though the barriers are higher than in non-regulated sectors, the use of AI-based applications is also possible for financial institutions, provided that data management, governance and the control environment are sufficiently established. However, this requires know-how within the company. It is therefore time to promote the testing of AI-based applications to build up the necessary expertise at all levels within the company. This will also strengthen customer confidence. We further recommend that financial institutions establish internal rules for the use of AI-based applications and we would be happy to support you with this.  

If you have any further questions or require more in-depth advice, please do not hesitate to contact us.

Jana Essebier and Maximilian Riegel

This article is part of a series on the responsible use of AI in companies:

We support you with all legal and ethical issues relating to the use of artificial intelligence. We don't just talk about AI, we also use it ourselves. You can find more of our resources and publications on this topic here.