We are a Swiss law firm, dedicated to providing legal solutions to business, tax and regulatory matters.
SWISS LAW AND TAX
AI can bring many advantages to financial institutions. Yet it should be kept in mind that the requirements for its use are stricter than in non-regulated sectors. In this blog post, we explain what matters and what you should look out for. This is part 8 of our AI series.
Financial institutions deal with a lot of structured and unstructured data on a daily basis. The use of AI can make it easier to deal with this large amount of data or provide access to new data sources. AI promises to evaluate data in a much shorter time and reveal correlations that might otherwise have been impossible or very costly to uncover. As the Financial Stability Board pointed out in a 2017 study, the range of possible areas of application for financial companies is very broad. These include, for example:
Compliance requirements are increasing and tying up considerable financial and human resources. Financial institutions are therefore all the more interested in implementing cost-effective and efficient processes. AI-based applications can help to monitor changes in regulation or conduct compliance training.
Transaction monitoring is also an important use case for AI-based applications, for example to detect and combat insider trading and market manipulation. Such applications can also play a particularly important role in fraud detection and prevention, e.g. credit card and payment fraud.
In the area of anti-money laundering, banks are obliged to implement IT-supported monitoring. The effort required to check the resulting alerts is very high. Despite the great effort involved, there is a considerable risk that relevant correlations will be overlooked. AI-based applications can help here.
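To make the alert-triage idea concrete, here is a deliberately simplified, hypothetical sketch of statistical transaction flagging, in which an outlier score supplements a fixed rule to prioritize alerts. The threshold, cutoff and function names are illustrative assumptions only; this is not any institution's actual system and no substitute for a compliant AML monitoring setup with human review.

```python
from statistics import mean, stdev

def score_transactions(amounts, threshold_chf=10_000, z_cutoff=3.0):
    """Flag transactions via a fixed amount rule and a simple statistical
    outlier score (z-score). Purely illustrative: real AML monitoring
    combines many features, models and mandatory human review."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    alerts = []
    for i, amount in enumerate(amounts):
        # z-score: how far this amount deviates from the account's typical behavior
        z = (amount - mu) / sigma if sigma else 0.0
        if amount >= threshold_chf or z >= z_cutoff:
            alerts.append({"index": i, "amount": amount, "z_score": round(z, 2)})
    return alerts

# Example: one large transfer stands out against routine payments
history = [120, 80, 95, 150, 110, 75, 25_000]
for alert in score_transactions(history):
    print(alert)
```

The point of combining a rule with a score is the one made above: rules alone generate many alerts that must all be checked by hand, while a ranking signal helps reviewers focus on the transactions most likely to be relevant.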
In risk management and compliance, it is particularly important that processes are set up well from the outset before AI-based applications are used. In addition, employees must understand how these systems work. Otherwise, the process will not be optimized; on the contrary, there is a risk that results will be misinterpreted or that people will trust that "everything" has been checked.
It is also important to understand the limits of AI. AI-based applications are trained on specific data and access only certain sources. They therefore cannot fully replace case-by-case assessments based on individual clarifications by employees; AI-based applications can only support humans, not replace them.
The European Securities and Markets Authority (ESMA) has examined how AI is used in the securities market in the EU. It concludes that there are numerous promising use cases, even if implementation still varies greatly. AI-based applications are increasingly being used, in particular to evaluate large volumes of unstructured data that would otherwise not be available to support decision-making processes. Examples include:
In its report, ESMA emphasizes that financial institutions have so far been reluctant to disclose to customers that they are using AI. Experience shows that the "black box" character of AI provokes critical reactions. As a result, financial institutions hesitate to use such applications for more than back-office support. Alongside the question of profitability, this lack of customer trust is one of the key factors limiting the use of AI-based applications in robo-advisory.
In addition, financial market supervisory authorities have also begun to use AI to fulfill their own tasks. The Swiss Financial Market Supervisory Authority FINMA, for example, uses AI in data-based supervision, in particular automated data evaluation to carry out anomaly analyses. In its 2021 annual report, FINMA stated that it would use this, for example, in its Asset Management division in relation to certain investment instruments.
Regulatory Requirements for the Use of AI in Financial Institutions
To date, Swiss financial market law contains no regulatory provisions that explicitly address the use of AI. The general rules therefore apply. For banks in particular, the following are relevant:
FINMA expects supervised institutions to treat the risks associated with the use of AI appropriately. It examines the use of AI by supervised institutions using a risk-based approach and the principle of proportionality. It is therefore not the use of AI per se that leads to increased requirements for financial institutions, but the use of AI in areas in which it leads to additional risks.
Additional risks may arise especially from the following:
FINMA's focus in its supervisory work is on data management, governance and the control area of applications. According to the 2023 Risk Monitor, FINMA sees particular challenges in connection with:
The use of AI-based applications particularly requires the following:
As mentioned previously, AI is not only associated with opportunities, but also with challenges. This applies not only to individual financial institutions, but also to financial markets as a whole. The Financial Stability Board has therefore issued a call for papers on the topic of "AI in Finance and its Financial Stability Implications" (deadline March 10, 2024). The Central Bank Research Association will address this topic at its annual conference in 2024.
The FSB recognizes that AI-based applications can lead to more efficient information processing and thus to a more efficient financial system. The FSB also sees AI as an opportunity to improve compliance and supervision. Risks to financial market stability could result in particular from the following (see FSB Study 2017):
AI offers opportunities for financial institutions, but also presents challenges. Even though the barriers are higher than in non-regulated sectors, the use of AI-based applications is also possible for financial institutions, provided that data management, governance and the control environment are sufficiently established. However, this requires know-how within the company. It is therefore time to promote the testing of AI-based applications to build up the necessary expertise at all levels within the company. This will also strengthen customer confidence. We further recommend that financial institutions establish internal rules for the use of AI-based applications and we would be happy to support you with this.
If you have any further questions or require more in-depth advice, please do not hesitate to contact us.
Jana Essebier and Maximilian Riegel
This article is part of a series on the responsible use of AI in companies:
We support you with all legal and ethical issues relating to the use of artificial intelligence. We don't just talk about AI, we also use it ourselves. You can find more of our resources and publications on this topic here.
Attorney at Law
Solicitor (admitted in Ireland (non-practising)) Law Society of Ireland