Article | 29 May 2024
The new Product Liability Directive and AI Liability Directive – implications for medical devices with AI functionalities
Back in September 2022, the European Commission published a proposed package of regulatory measures in the field of product liability. The package will have significant implications for manufacturers and providers of medical devices featuring artificial intelligence (AI), including standalone software such as Software as a Medical Device (SaMD) and embedded software with AI functionalities. The package comprises: (i) the proposed revisions to the Product Liability Directive (PLD) and (ii) the proposed AI Liability Directive (AILD).
According to the European legislator, the current liability rules are ill-suited to handling claims for damage caused by products and services featuring AI. The difficulties stem from the unique characteristics of AI, including its complex, autonomous and opaque nature (the so-called “black box” effect). The objective of the new PLD and the AILD is to close the gaps in existing legislation and to ensure that individuals who suffer damage caused by the use of AI systems can be appropriately compensated. The new legislation is also intended to give businesses developing or using AI greater legal certainty about their potential exposure to liability.
Important changes to the PLD and AILD
The new PLD will, among other things, bring all types of software within the scope of the directive, both stand-alone and embedded, as well as digital services that affect the functionality of a product. This means that products such as drones, robots, in-car navigation systems and software in medical devices will be covered by the new PLD. Furthermore, the new PLD establishes that manufacturers can be held liable for defects arising after a product has been made available on the market, for example through software updates, unaddressed cybersecurity vulnerabilities or changes made by adaptive systems (e.g. machine learning). Even a failure to supply software updates could render a product defective under the PLD, placing a greater obligation on manufacturers to monitor the performance of their products once in use.
Under the proposal, both natural persons and legal entities that make substantial changes to a product outside the control of the original manufacturer will be considered manufacturers under the new PLD. Such actors may thus be held liable for safety deficiencies connected with the change they made, a point that was previously unclear in some jurisdictions. Additionally, the revised PLD obliges the manufacturer to disclose relevant evidence in court proceedings once a claimant has presented facts and evidence sufficient to support the plausibility of a claim for compensation. The purpose of this obligation is to address the difficulty of establishing that a product (especially one featuring AI) is defective where technical complexity and the opacity of the AI system conceal the source of the malfunction. A similar disclosure obligation exists under the AILD for AI systems classified as high-risk under the EU AI Act (AIA). This will be important for manufacturers of medical devices, as AI systems used in medical devices will largely be classified as high-risk under the AIA.
Both the AILD and the PLD also introduce new rules on when defectiveness shall be presumed (i.e. where the burden of proving that a product is not defective shifts to the provider/manufacturer). For example, the PLD introduces two circumstances in which a defect shall be presumed:
- where there is non-compliance with relevant EU product safety legislation, making it critical for manufacturers of medical devices to ensure compliance with regulations such as the Medical Device Regulation (MDR), and
- where the product is “scientifically or technically complex”, making it excessively difficult for the claimant to prove defectiveness; medical devices and AI systems are given as examples of such complexity.
With respect to AI systems, it may furthermore be challenging to establish a link between non-compliance with the AIA (or other national laws applicable to AI systems) and the output produced by an AI system that gave rise to the relevant damage. To address this challenge, the AILD establishes a rebuttable “presumption of causality” in respect of such a link, meaning that the court will assume that the non-compliance caused the damage unless the provider of the system proves otherwise. However, in the case of high-risk AI systems, the AILD provides an exception from this presumption: it does not apply where the provider demonstrates that the claimant has reasonable access to sufficient evidence and expertise to prove the causal link themselves. This exception is intended to incentivise providers of AI systems to comply with their documentation, record-keeping and transparency obligations under the AIA.
Further consequences for medical devices with AI functionalities
The implications of these legislative changes for medical devices involving AI systems are significant. Should a medical device cause harm, the manufacturer may be required to disclose relevant evidence upon an order of the court. Furthermore, manufacturers of medical devices featuring AI are particularly at risk of being subject to the presumption rules described above. These implications may expose medtech companies to a greater risk of litigation and liability claims. Additionally, companies should be aware of the risk of trade secrets or other confidential information being disclosed in the course of such litigation.
To address these risks, it will be important for manufacturers of medical devices featuring AI to invest in robust risk assessment and risk management strategies to ensure compliance with the AIA and other relevant product safety regulations, such as the MDR. Furthermore, given the expanded concept of ‘defect’, they must consider the risk of deterioration in device functionality over time, including after a product has been placed on the market.
What is next?
While the proposal for the AILD has yet to be negotiated and adopted, the provisionally approved text of the PLD was published earlier this year and now has to be formally approved before it enters into force. Member States will then have a 24-month transition period to implement the directive in national law.
Stakeholders in the medtech industry should stay informed about, and engaged with, the evolving legal landscape. Setterwalls can help you navigate the field of product liability and its implications for medical devices and AI.