Article | 23 January 2025

Part III: Key Considerations for the Life Sciences Sector

Introduction

In the December 2023 issue of Setterwalls’ Life Sciences Report, we discussed the potential implications of the AI Act on the life sciences sector, with a particular focus on healthcare AI. At that time, the AI Act was still at the proposal stage. Since then, the final version of the AI Act has been adopted and the integration of AI into healthcare products, both commercially available and at the development stage, has increased significantly. In this final article of our three-part series, we revisit the topic to explore what we have learned since then.[1]

High-risk Healthcare AI

In essence, the final version of the AI Act closely mirrors the initial proposal as to which healthcare-related AI systems are considered high-risk. Specifically, healthcare AI systems that fall under Regulation (EU) 2017/745 on medical devices (MDR) or Regulation (EU) 2017/746 on in vitro diagnostic medical devices (IVDR) and are required to undergo a third-party conformity assessment under the respective regulation are considered high-risk. A more detailed overview of how this can be assessed is provided in the previous article linked above.

Additionally, AI systems intended for evaluating and classifying emergency calls, or for dispatching or prioritizing the dispatch of emergency first response services, including medical aid, are classified as high-risk according to Annex III of the AI Act. Likewise, AI systems used for patient triaging in emergency healthcare settings are considered high-risk.

Key considerations for developers of healthcare AI

Introduction

With the final version of the AI Act now adopted, it is evident that providers (i.e., developers) of high-risk AI systems will have to take action to ensure that their systems are designed to comply with the AI Act. As explained in Part I of this article series, the classification of an AI system as high-risk under the AI Act is closely linked to the classification of the product as high-risk under other product safety legislation. Healthcare AI that is considered high-risk under the AI Act is therefore typically also classified as a class IIa or higher medical device under the MDR.

Many requirements under the two regulations are equivalent or overlapping. While practice on how this overlap should be managed is still developing and will be shaped by upcoming guidelines and recommendations, we outline below some key considerations for developers of healthcare AI.

Risk management

Providers of high-risk AI systems are required to establish a risk management system that covers the entire life cycle of the high-risk AI system and enables the identification of known and foreseeable risks that may arise when the AI system is used as intended or under conditions of reasonably foreseeable misuse. Appropriate measures must be implemented to address these identified risks. In a healthcare setting, the potential risks associated with a healthcare-related AI system should be considered at an early stage of development so that the system can be designed to effectively mitigate these risks.

The MDR also mandates the implementation of a risk management system, which aligns with the system required under the AI Act in several key areas. Both regulations adopt a lifecycle approach and require the risk management system to include risk identification, analysis, and control measures, consider residual risk acceptability, and establish requirements for documentation and post-market surveillance. Despite these common principles, the practical application of the risk management systems differs due to the different objectives and contexts of the two pieces of legislation. The MDR focuses on ensuring the safety and performance of medical devices in healthcare settings. In contrast, the AI Act addresses broader and more diverse risks associated with AI systems, including data governance, cybersecurity, and the protection of fundamental rights.

The AI Act allows for the integration of necessary testing, reporting processes, and documentation into existing procedures required under other applicable product safety legislation. Therefore, providers must consider how the risk management systems required under the MDR and the AI Act respectively interact and complement each other, to effectively organize their compliance efforts.

Quality management systems

The AI Act mandates that providers implement a Quality Management System (“QMS”) to ensure compliance. The required AI QMS aims to ensure ongoing quality and compliance and integrates many of the requirements for high-risk systems, such as design, documentation, conformity assessments, duty of information, and post-market monitoring.

Similarly, a QMS is also required under the MDR.

Under the AI Act, providers of high-risk AI systems that are subject to QMS obligations under other legislation may fulfill the AI Act’s requirements as part of their existing QMS. Consequently, medical device manufacturers will be permitted to incorporate the AI Act’s requirements into their existing medical device QMS. However, while a QMS for medical devices aims to maintain quality throughout the lifecycle of the device, the QMS under the AI Act is broader and can be seen as a system for compliance with the entire AI Act, as it shall also include “a strategy for regulatory compliance”.

Often, the risk management system is implemented as part of the QMS; for medical devices, this is typically the case. Regulatory standards, such as ISO 13485 and ISO 14971 for medical devices, provide detailed requirements on how risk management can be conducted, both in general and within a QMS. As a result, a single QMS may in some cases cover the requirements for both quality management and risk management under both the MDR and the AI Act. It should, however, be noted that a provider of a high-risk AI system will need to consider several processes and procedures for design, documentation, etc. that will likely require more consideration of data and how it is handled than is usually the case for medical devices.

Access to and management of data

Providers of high-risk AI systems must ensure that training, validation, and testing datasets are relevant and sufficiently representative. At the same time, they must also guarantee the protection of personal data and adhere to the principles of, inter alia, data minimization and data protection by design and by default. The AI Act envisages that, in the healthcare sector, the European Health Data Space (“EHDS”) will facilitate access to health data, allowing AI algorithms to be trained on these datasets in a privacy-preserving, secure, timely, transparent, and trustworthy manner, with appropriate institutional governance. However, at the time of writing, the EHDS is still in its early stages, and developers working within healthcare AI today are likely dependent on data collected from other sources.

As such, consideration must be given to the AI-specific requirements related to data, which include assessing the availability, quantity, and suitability of the datasets needed, and evaluating which possible biases might affect the health and safety of individuals. This requires an understanding of how dependent the AI system is on the data it uses, and of what is required for the data to be used in AI development. In many instances, the data must meet the quality demands of the AI Act while also being processed in compliance with the GDPR. The GDPR classifies health-related data as a special category of personal data that requires specific safeguards, and its use is particularly restricted. When using health data in the development of AI, it is therefore advisable to pay careful attention to how such data can be used in a compliant manner from a data protection perspective.

Concluding remarks

In the life sciences sector, the requirements imposed by the AI Act, in conjunction with those already established by regulations such as the MDR and the GDPR, necessitate a practical and hands-on approach by developers to ensure compliance.

Medical device manufacturers that include AI in their medical devices need to determine if and to what extent the AI Act applies. The required risk management and quality management systems impact organizational and technical setups, making it advisable to start considering their implementation in healthcare AI development.

Including AI elements in a medical device will mean an additional regulatory burden. Since the two frameworks contain overlapping provisions, manufacturers should consider how the additional work can be minimised. This does not, however, mean that all concerns are resolved. Manufacturers and providers should monitor standardization activities and authority guidance, as questions remain unanswered that will probably need to be addressed through standards and guidance.

There are several crucial dates to keep in mind regarding compliance with the AI Act. Certain AI systems within the healthcare industry may be classified as high-risk under Annex III. This classification applies when the system affects access to healthcare or emergency first response services. A common example of such systems is AI used for triaging patients. For these AI systems, the compliance deadline is 2 August 2026, when the AI Act becomes generally applicable. Another key date is 2 August 2027, which is the deadline for high-risk systems, as defined in Annex I of the AI Act, to comply with its requirements. This date is particularly important for AI medical devices whose risk level is determined on the basis of Annex I, i.e. depending on their risk classification according to the MDR. This means, for example, that providers must ensure they have a certificate of conformity from a notified body to place devices categorised as high-risk AI systems on the EU market by these dates.

With extensive experience at the intersection of data protection, life sciences, and AI, Setterwalls is well-equipped to support you in your development!

[1] For the sake of clarity, it should be noted that the previous article states, as the proposal for the AI Act did, that most of the obligations in the AI Act are imposed on “operators” of AI systems, including the provider, user, authorised representative, importer and distributor of an AI system. As such, the determining factor was whether the system as such was considered to be high-risk. As explained in Part I, the final version of the AI Act distinguishes between different roles and clarifies who is responsible for fulfilling the obligations (i.e. primarily the provider of a high-risk AI system).
