article / 06 Dec 2023

EU Artificial Intelligence Act: potential implications for healthcare AI


Background

Technological developments over the past decade have led to an unprecedented increase in data about various aspects of our lives, including healthcare. With the increasing availability of data, artificial intelligence (AI) is rapidly moving into healthcare, making remarkable advances in a range of applications, such as scan and image diagnostics, therapy development, patient triage and care planning.

In April 2021, the European Commission (EC) presented the initial proposal for harmonised rules on AI in the European Union (EU). On 8 December 2023, after a three-day negotiation in the final “trilogue” legislative phase, the European Commission, the Council and the Parliament reached a provisional agreement on the proposal. The text is now expected to be finalised within the next few weeks and submitted to the Member States for approval. After its official adoption, the regulation will become applicable 24 months later, most likely in the first half of 2026. In this article, we review the proposed AI Act to identify when health-related AI systems could be covered by the regulation and what this would entail for the developers of such systems.

 

Introduction to the AI Act

In April 2021, the EC presented its Proposal for a Regulation Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) (COM/2021/206 final). Since then, the text has been amended several times, and the provisional agreement differs substantially from the original proposal. In this article, the provisional agreement of 8 December 2023 is referred to as the AI Act or AIA.

In the AI Act, an AI system is defined broadly, as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments”. The majority of the obligations in the AI Act are imposed on “operators” of AI systems, a term that covers the provider, the deployer, the authorised representative, the importer and the distributor of an AI system. The regulatory requirements for high-risk AI systems constitute the main part of the AI Act.

 

A risk-based system

The AI Act delineates three distinct risk levels for AI systems. The highest risk level covers AI systems that are regarded as presenting an unacceptable risk, and consequently are prohibited.
The second risk level, particularly relevant to health-related AI, pertains to AI systems that could negatively affect health, safety and fundamental rights. Such systems must conform to various legal requirements before they can enter the EU market. These requirements range from technical standards, detailing mandatory features of any high-risk AI system, to the establishment of organisational structures to oversee the risks linked to the system. The provider is expected to comply with these requirements before the system is placed on the market, meaning that the provider must analyse and anticipate the risks associated with each system and its intended applications prior to launch.
The third, and lowest, risk level forms a baseline that applies to all other AI systems. At this level, the AI Act merely establishes a set of guiding principles that apply to all AI systems. These systems are considered adequately regulated by other legal frameworks at EU or national level, such as the General Product Safety Regulation.

 

Health-related AI systems considered high risk

The proposed regulation contains classification rules for high-risk AI systems. Under Art. 6(1) of the AI Act, an AI system is high risk if it is intended to be used as a safety component of a product, or is itself a product, covered by the EU legislation listed in Annex II and is required to undergo a third-party conformity assessment of its health and safety risks under that legislation. In addition, Art. 6(2) of the AI Act extends the definition to AI systems falling under one or more of the critical areas and use cases referred to in Annex III, which are also considered high risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons.

 

High-risk systems as defined in Annex II

The Annex II legislation relevant to the health sector comprises the EU regulations on medical devices, i.e. Regulation (EU) 2017/745 on medical devices (MDR) and Regulation (EU) 2017/746 on in vitro diagnostic medical devices (IVDR). In essence, health-related AI systems that fall under the MDR or IVDR and require third-party conformity assessment are automatically deemed high risk. For the remainder of this article, the MDR is used as the example, given the strong structural similarity between the two regulations.

 

High-risk systems as defined in Annex III

With relevance to the health sector, Annex III mentions AI systems for evaluating and prioritising emergency calls, dispatching first response services, and patient triage systems. To be deemed high risk, a system mentioned in Annex III must pose a significant risk of harm to health, safety or fundamental rights. This risk is determined by evaluating the severity, intensity, probability and duration of the potential harm, along with its impact on individuals or groups. The EC will issue guidelines specifying when AI systems listed in Annex III pose such a significant risk. Exactly which health-related AI systems will be categorised as high risk under Annex III will therefore depend on the EC’s guidelines and is likely to require case-by-case evaluation. What is certain, however, is that any AI system used in the applications outlined in Annex III will need to be assessed under the AI Act.

 

AI systems that qualify as medical devices and require third-party conformity assessment under the MDR

As regards the AI systems deemed high risk under Annex II, the next question is: Which AI systems qualify as medical devices and require third-party conformity assessment under the MDR?

 

Medical device

The MDR defines a medical device as:
any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes:

  • diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease,
  • diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability,
  • investigation, replacement or modification of the anatomy or of a physiological or pathological process or state,
  • providing information by means of in vitro examination of specimens derived from the human body, including organ, blood and tissue donations,

and which does not achieve its principal intended action by pharmacological, immunological or metabolic means, in or on the human body, but which may be assisted in its function by such means.

In addition, devices for the control or support of conception, and products intended for the cleaning, disinfection or sterilisation of devices, qualify as medical devices.
The first step in assessing whether an AI system is high risk under Art. 6(1) AIA is thus to determine whether it qualifies as a medical device. AI systems that qualify as medical devices typically do so as software. Software qualifies as a medical device when the manufacturer specifically intends it to be used for one or more of the medical purposes set out in the definition above. Software for general purposes, even when used in a healthcare setting, and software intended for lifestyle and well-being purposes, do not qualify as medical devices.

 

Third-party conformity assessment

For an AI system that qualifies as a medical device to be high risk under Art. 6(1) AIA, it must also be subject to third-party conformity assessment. However, the MDR is not fully adapted to the risks posed by AI, and while both the AIA and the MDR are risk-based, their assessment criteria differ. The MDR categorises medical devices into four classes: Class I (low risk), Classes IIa and IIb (medium risk) and Class III (high risk). For Class I devices, the manufacturer bears sole responsibility for the conformity assessment procedure. For Class IIa, IIb and III devices, however, the MDR mandates that the conformity assessment, in whole or in part, be performed by a notified body. As the notified body is a third party, all medical devices in Class IIa and above are subject to a “third-party conformity assessment” and are therefore deemed high risk under the AIA.
The notified bodies responsible for assessing the conformity of medical devices are expected to also assess the supplementary requirements imposed by the AI Act when a medical device incorporates AI and is categorised as high risk.
In conclusion, AI systems that qualify as medical devices and are categorised as Class IIa, IIb or III are considered high risk under Art. 6(1) AIA.

 

Determining the risk level of an AI system qualifying as a medical device

Another relevant question is how to determine the MDR risk class of an AI system that qualifies as a medical device. As stated above, AI systems that qualify as medical devices typically do so as software, and the risk classification of a medical device is determined by its intended purpose. For software medical devices, the following classification typically applies:

  1. Software intended for decision-making in diagnosis or therapy is typically classified as Class IIa. However, if such decisions may lead to death or irreversible health deterioration, the software is classified as Class III. If the decisions may cause serious health deterioration or require surgery, the software is classified as Class IIb.
  2. Software designed to monitor physiological processes is generally classified as Class IIa. If it monitors vital physiological parameters and variations may immediately endanger the patient, the software is classified as Class IIb.
  3. Software intended for contraception is classified as Class IIb.
  4. Any other software falls within Class I.

Under these rules, many health-related AI systems will qualify as medical devices and fall within a risk class requiring third-party conformity assessment, thus qualifying as high risk under the AI Act.
On the other hand, numerous health-related AI systems do not fit within this classification. AI systems that are not intended to be used for human beings for one of the specific medical purposes listed in the MDR may still be health-related, but they will not qualify as medical devices and thus, in many cases, will not be considered high risk under the AI Act. Conceivable examples include AI for drug discovery, clinical trial support, drug promotion, or hospital capacity and workforce planning.
Moreover, because the classification of a medical device depends on its intended purpose, the same system could end up in either group. For instance, an application that calculates the user’s fertility status based on basal body temperature and menstrual days in order to track and predict ovulation, with the purpose of assisting conception, would probably be classified as Class I. If its intended purpose is contraception, however, the application would probably be classified as Class IIb and therefore considered high risk under the AI Act. Consequently, the AI Act leaves a significant number of health-related AI application areas outside its high-risk regime.
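To make the logic of this section more concrete, the sketch below (in Python, purely for illustration) encodes the classification rules paraphrased above, i.e. the software rules of MDR Annex VIII, Rule 11 and the Annex II route into Art. 6(1) AIA. The field names and the encoding are hypothetical simplifications: in practice, classification is a legal assessment of the device’s intended purpose and documentation, not a software check.

  from dataclasses import dataclass

  # Illustrative sketch only: a rough encoding of the software classification rules
  # paraphrased above (MDR Annex VIII, Rule 11) and of the Annex II route into
  # Art. 6(1) AIA. Field names are hypothetical; real classification is a legal
  # assessment of the device's intended purpose, not a software check.
  @dataclass
  class SoftwareDevice:
      informs_diagnosis_or_therapy: bool = False        # decisions for diagnostic or therapeutic purposes
      may_cause_death_or_irreversible_harm: bool = False
      may_cause_serious_harm_or_surgery: bool = False
      monitors_physiological_processes: bool = False
      monitors_vital_parameters: bool = False           # variations could immediately endanger the patient
      intended_for_contraception: bool = False

  def mdr_class(d: SoftwareDevice) -> str:
      # Rules 1-3 of the list above, falling through to Class I otherwise.
      if d.informs_diagnosis_or_therapy:
          if d.may_cause_death_or_irreversible_harm:
              return "III"
          if d.may_cause_serious_harm_or_surgery:
              return "IIb"
          return "IIa"
      if d.monitors_physiological_processes:
          return "IIb" if d.monitors_vital_parameters else "IIa"
      if d.intended_for_contraception:
          return "IIb"
      return "I"

  def high_risk_under_aia(d: SoftwareDevice) -> bool:
      # Classes IIa, IIb and III require third-party conformity assessment under
      # the MDR, which triggers "high risk" under Art. 6(1) AIA via Annex II.
      return mdr_class(d) in {"IIa", "IIb", "III"}

  conception_aid = SoftwareDevice()                                  # fertility tracking to assist conception
  contraceptive = SoftwareDevice(intended_for_contraception=True)    # same app, contraceptive intended purpose
  print(mdr_class(conception_aid), high_risk_under_aia(conception_aid))  # I False
  print(mdr_class(contraceptive), high_risk_under_aia(contraceptive))    # IIb True

Applied to the example above, the sketch reproduces the distinction drawn in this section: the conception-assistance app lands in Class I and outside the high-risk category, while the same app with a contraceptive intended purpose lands in Class IIb and is high risk under the AI Act.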

 

Regulatory requirements that apply to high-risk AI systems

For the health-related AI systems that are considered high risk, Chapter 2 of the AI Act sets out the regulatory requirements that apply. Most of these requirements are imposed on the providers of high-risk AI systems. They are not detailed in this article, but include the establishment of risk and quality management systems, conformity assessment prior to placing the AI system on the market, post-market surveillance and registration of the AI system in an EU database. Further, deployers of high-risk AI systems are required to conduct a fundamental rights impact assessment before putting the system into use. Even from this brief overview, it is clear that the requirements applicable to high-risk AI oblige the provider to take organisational measures in order to comply.

In addition, the AI Act imposes certain obligations on deployers of high-risk AI systems, including using the systems in accordance with the instructions for use, monitoring their functioning, reporting malfunctions to providers, and conducting a data protection impact assessment where applicable.

 

The connection to the GDPR

Lastly, the relationship between the AI Act and the GDPR is noteworthy. “Privacy and data governance” are among the general principles that AI system operators must adhere to, meaning that AI systems must comply with current privacy and data protection regulations while processing data that meets high standards in terms of quality and integrity. Thus, the AI Act expressly acknowledges its connection to the GDPR.
In health-related AI systems, a significant proportion of the relevant datasets relate to the health of individuals, and their processing is therefore subject to the specific requirements set out in Art. 9 of the GDPR. The increased need for high-quality and unbiased datasets in high-risk AI systems, together with the additional processing requirements under the GDPR, means that the provider of such an AI system must comply with both sets of requirements simultaneously.

 

Concluding remarks

The proposed AI Act, on which a provisional agreement has now been reached in the trilogue, categorises AI systems into three risk levels, with the majority of the regulatory requirements imposed on high-risk AI systems. Among health-related AI systems, those categorised as Class IIa, IIb or III medical devices require third-party conformity assessment and are therefore classified as high risk under the AI Act, along with systems for evaluating emergency calls, dispatching first response services and patient triage. When the AI Act enters into force, health-related AI systems that qualify as high risk will operate in a highly regulated environment. However, the parallels between the MDR and the AI Act, both rooted in the risk-based approach typical of EU legislation, suggest potential synergies in providers’ compliance measures. Consequently, despite facing new requirements under the AI Act, many health-related AI systems that qualify as high risk should be able to coordinate their compliance measures operationally. Setterwalls are closely monitoring any further developments and are available to assist you in preparing for and navigating the AI Act.
