article / 10 Jun 2020

Fair treatment of customers in financial services in the new technology era


Introduction

Algorithmic decision-making is widely used across a vast range of businesses, both public and private. Many bank customers expect immediate answers, for instance when applying for a loan through the bank’s online banking services. The decision of whether or not to grant the customer a loan will be based on that individual customer’s financial data. To meet this demand, banks need to assess risks and make decisions with the help of new technology, typically some form of algorithmic decision-making incorporated into Artificial Intelligence (AI) technology. The customer’s financial data is entered into the system and, using such algorithms, the bank can provide the requested answer immediately.

When are algorithmic decision-making and AI technology used in financial services?

The basic process of AI technology is to feed training data to a learning algorithm. An algorithm is, in short, a sequence of instructions used to solve a problem: it organises large amounts of data according to certain instructions and rules. The technology can then, following those algorithms, produce a result or output based on the data it has been given.

Using algorithms via AI technology enables financial service providers to make rapid, real-time decisions, such as granting loans, without human involvement.(1) For example, a customer requests a loan through their online banking service and enters the relevant financial data, and the bank’s algorithm immediately tells them whether the loan will be granted and at what suggested interest rate.(2)
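The kind of automated decision described above can be illustrated with a minimal sketch. Note that this is a hypothetical, rule-based example: the function name, the input fields and the thresholds and interest rates are all invented for illustration, and real credit-scoring models are far more complex (and often machine-learned).

```python
# Hypothetical sketch of an automated loan decision. All thresholds, fields
# and rates are invented for illustration and do not reflect any real
# bank's scoring model.

def decide_loan(income: float, existing_debt: float, amount: float) -> dict:
    """Return an immediate automated decision for a loan application."""
    # A simple affordability ratio: requested amount plus current debt,
    # relative to yearly income (illustrative rule, not a real credit model).
    ratio = (existing_debt + amount) / income if income > 0 else float("inf")

    if ratio < 1.0:
        return {"granted": True, "interest_rate": 0.04}   # low risk
    if ratio < 3.0:
        return {"granted": True, "interest_rate": 0.08}   # higher risk, higher rate
    return {"granted": False, "interest_rate": None}      # application denied


# The customer's financial data goes in; a decision comes out in real time.
application = decide_loan(income=40_000, existing_debt=10_000, amount=20_000)
```

The point of the sketch is that the decision is produced instantly and entirely by the rules encoded in the function, with no human involvement, which is precisely why the legal questions discussed below arise.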

Legal risks from a data protection law and anti-discrimination perspective

AI technology is being used in an increasing number of sectors, both private and public. Banking and finance is just one example of the fields where profiling is carried out to facilitate decision-making.(3) While the use of AI has the potential to lead to efficient, rapid and accurate decisions, legal risks may arise under national as well as EU laws. For instance, although algorithmic decision-making can seem neutral and fair, it can also lead to unwanted discrimination.(4)

Most national legal systems have no AI-specific legislation, but a number of ethical AI guidelines have been published, such as the Ethics Guidelines for Trustworthy AI from the European Commission’s High-Level Expert Group on AI. This article will, however, only provide a brief overview of the legal aspects from a data protection law and a non-discrimination law perspective and will not touch upon AI from an ethical perspective.(5)

According to EU data protection law, corporations should use personal data in an open and transparent way. The General Data Protection Regulation (GDPR) contains specific rules for certain types of automated individual decision-making, with the goal of protecting people against unfair or illegal discrimination. The GDPR prohibits certain fully automated decisions; an example could be a decision by a bank to deny a loan to a customer. There are, however, exceptions to this prohibition: it does not apply if the individual has consented to the automated decision, if the decision is necessary for the performance of a contract, or if it is authorised by law. Time will tell what the effects of the GDPR will be,(6) but it is certain that banks will need to adapt in order to meet its requirements.

Non-discrimination law is another legal instrument that can help protect customers against algorithmic discrimination. As mentioned above, AI technology can perpetuate existing stereotypes by grouping individuals into categories and thereby restricting them to their suggested preferences, and in some cases it can lead to inaccurate predictions. Under European law, unfair discrimination is prohibited, for instance through the European Convention on Human Rights. Still, non-discrimination law has several weaknesses when applied to algorithms. The main issue is that taking a discrimination case to court is a burdensome and, in most cases, expensive process for an individual, and it may be very difficult to provide sufficient proof in court.(7)

Conclusion

To conclude, financial services providers that use AI technology in their decision-making should not forget that certain legal risks remain with the new technology. Specifically, such companies should consider how to comply with the requirements set out in legislation. Decisions generated by AI technology may be unfair, for example by denying people access to loans, insurance or other financial products. Data protection law and non-discrimination law prohibit many discriminatory effects of algorithmic decision-making, but the current legislation has weaknesses. Banks, and other companies using AI technology, should therefore be attentive and give thorough thought to the advantages of, and justifications for, using a particular data set or algorithm.

In the quest for a fair and balanced approach to automated decision-making, the legislator should keep in mind that even before AI technology was introduced as part of decision-making in financial services, there was a risk of human bias. To meet regulatory requirements to treat customers fairly, banks must think carefully about the use of data and AI technology where decisions are based only on algorithms. Financial service providers must ensure that the risk of biases embedded in data sets is mitigated in order to avoid unfair AI-generated decisions, and must focus on the quality of their algorithms and data sets to ensure that they use diverse information that provides a fair and balanced view of the customer.

It is important to keep in mind that decisions by algorithms may lead to unwanted discriminatory effects, but AI technology, and the use of algorithms in decision-making, is not necessarily discriminatory. Algorithms are still likely to be more efficient and accurate than human decision-makers. So the question remains: will AI technology reduce or increase the risk of human bias?

(1) Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, published by the Article 29 Data Protection Working Party.
(2) Example from the European Commission: https://ec.europa.eu/info/law/law-topic/data-protection/reform/rights-citizens/my-rights/can-i-be-subject-automated-individual-decision-making-including-profiling_en
(3) Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, published by the Article 29 Data Protection Working Party.
(4) See “Strengthening legal protection against discrimination by algorithms and artificial intelligence”, International Journal of Human Rights.
(5) See for instance (i) the report “Discrimination, artificial intelligence, and algorithmic decision-making”, published by the Council of Europe in 2018; and (ii) “Strengthening legal protection against discrimination by algorithms and artificial intelligence”, International Journal of Human Rights.
(6) For more information, see “Strengthening legal protection against discrimination by algorithms and artificial intelligence”, International Journal of Human Rights.
(7) For more information, see “Strengthening legal protection against discrimination by algorithms and artificial intelligence”, International Journal of Human Rights.
