Article | 11 Nov 2022
The EU’s proposed AI Act will constitute further significant regulation of innovative digital solutions. How can our experience of the GDPR help us adapt to new requirements in the life sciences sector?
New digitalisation opportunities offer significant benefits in the life sciences sector, both for patients and for healthcare in general. However, the implementation of such advances must balance potential technical progress against other important interests, such as data protection and similar aspects of fundamental rights. For these reasons, the EU has previously introduced major safeguarding regulations, such as the GDPR, and is now in the process of introducing additional fundamental regulation on the use and development of artificial intelligence (‘the AI Act’). This new regulation is expected to be of major importance for almost all digitalisation initiatives in the life sciences sector.[1] The question is whether the industry is sufficiently prepared for these new regulatory requirements. This article takes a closer look at how recent years’ work with the GDPR could be relevant to the implementation of, and adaptation to, the AI Act.
Background
Last year we wrote about a decision by the Swedish Authority for Privacy Protection regarding access to health data by IT support staff in a country outside the EU, and related issues concerning access to personal data by persons in countries outside the EU/EEA.[2] That decision is illustrative of the various GDPR compliance challenges that the life sciences sector has had to deal with in recent years.
AI can provide key competitive advantages for the life sciences sector and patient care, but it also poses substantial risks and potential harm to public interests and fundamental rights protected under EU law. For example, seemingly unconnected, rudimentary data from health apps could be assembled using AI systems to create health profiles for unsolicited and potentially high-risk purposes. The EU has therefore identified a need for a framework of harmonised rules on the use of AI, and the EU Commission has subsequently proposed a groundbreaking new regulation on AI. The AI Act introduces detailed rules on prohibited AI use, together with substantial administrative penalties and sanctions for non-compliance. The similarities to the GDPR indicate that this new regulation will likely be widely applicable and have a significant impact on the software market and on life sciences businesses. This means that life sciences businesses that use or provide advanced digital technologies will have to consider and adapt to the legal frameworks for both the technology used for data processing (such as the AI Act) and the data that is processed using such technology (such as the GDPR).
What is AI and how is it relevant to life sciences?
The AI Act introduces a broad legal definition of artificial intelligence (AI): software that, for a given set of human-defined objectives, can generate output such as content, predictions, recommendations and decisions that influence the environments the system interacts with. This applies provided that the software has been developed using one or more of the techniques and approaches listed in the AI Act, such as machine learning, logic- and knowledge-based approaches and statistical approaches.[3]
Given this broad definition, much of the software used today within life sciences, such as systems for health records, health trackers and systems integrated into other software products or components, such as medical devices, is likely to be affected by the regulation.
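To illustrate how far this definition reaches, consider the following minimal sketch. It is a hypothetical example of ours, not drawn from the Act itself: even a few lines of off-the-shelf machine learning code that predict a health outcome would arguably satisfy both the ‘machine learning’ limb of the definition and the requirement of generating predictions that influence the system’s environment.

```python
# Hypothetical illustration: even a minimal statistical model like this
# arguably meets the AI Act's broad definition of AI, i.e. software
# developed with a machine learning / statistical approach that, for a
# human-defined objective (predicting a health risk), generates
# predictions influencing its environment (a clinician's decision).
from sklearn.linear_model import LogisticRegression

# Invented training data: [age, resting heart rate] -> elevated-risk flag
X_train = [[45, 62], [61, 80], [38, 70], [72, 91], [55, 75], [29, 58]]
y_train = [0, 1, 0, 1, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# Output that "influences the environment it interacts with":
# a recommendation that a clinician schedule a follow-up.
new_patient = [[67, 88]]
if model.predict(new_patient)[0] == 1:
    print("Recommendation: schedule follow-up examination")
```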
The AI Act focuses on four specific types of use or practice for an AI system:
- prohibited AI practices and systems, such as exploitative AI;
- high-risk AI, such as systems used as safety components in critical infrastructure or components of products that are regulated by harmonised EU legislation, such as components of medical devices;
- AI systems characterised by transparency issues, such as AI systems intended to interact with humans (chatbots);
- non-risk AI practices, which are covered by the AI Act but not subject to its mandatory requirements.[4]
High-risk AI systems, such as components of medical devices, may only be used after a conformity assessment procedure, including review by a notified body, through which the system is granted a CE marking.[5] A CE marking is only granted if the system and, to some extent, its users implement and maintain risk management throughout the AI system’s lifecycle, including record-keeping, risk monitoring and human oversight, with a strong focus on robustness, accuracy of data and security.[6]
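The record-keeping element, in particular, lends itself to concrete illustration. The sketch below is our own hypothetical example of automatic event logging in a high-risk system (the field names and override mechanism are our assumptions, not terminology from the Act); it records each output so that it can later be traced, monitored and reviewed by a human.

```python
# Hypothetical sketch of automatic record-keeping ("logging") for a
# high-risk AI system; all field names are illustrative assumptions only.
import hashlib
import json
from datetime import datetime, timezone

def log_prediction(logfile, model_version, inputs, output, overridden_by=None):
    """Append a traceable record of a single AI output to an audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw inputs, to limit personal data in logs
        "input_hash": hashlib.sha256(json.dumps(inputs).encode()).hexdigest(),
        "output": output,
        # Human oversight: record when a person overrides the system
        "human_override": overridden_by,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

log_prediction("audit.log", "risk-model-1.3", {"age": 67, "rhr": 88},
               "follow-up recommended")
```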
In light of these requirements, below we look more closely at the similarities to the GDPR and how lessons learned from adaptation to the GDPR could benefit compliance with the AI Act.
What lessons have been learned from the GDPR?
Similar to the GDPR, the AI Act will introduce new regulatory requirements applicable throughout the supply chain of life sciences products and digital services. It is not a foregone conclusion that the AI Act will create a wave of adaptation similar to that seen in the wake of the GDPR: many within the life sciences sector could be better prepared this time, as the work done for GDPR compliance can serve as a foundation for compliance with the AI Act (both materially and systematically), particularly since the AI Act and the GDPR impose similar types of requirements and restrictions, in that:
- businesses must assess their data processing;
- businesses must apply a risk-based approach to data processing;
- businesses must sufficiently document and monitor data processing;
- businesses must ensure protection throughout the processing cycle, including when data is processed outside the EU/EEA.
Below we address these four similarities between the AI Act and the GDPR to establish what lessons we can draw from the work carried out for the GDPR.
Assessment of application
An important first step is to assess whether a business is affected by the regulatory requirements. A key issue that life sciences businesses faced during the implementation of the GDPR was understanding when and how the GDPR became applicable in each individual case.[7] This has been a key component of risk management in relation to the GDPR. The AI Act imposes a similar responsibility on businesses and will most likely require them to analyse relevant software (including its various components) to determine whether it will be affected by the AI Act. By doing so, businesses can identify software and technologies that approach prohibited or high-risk practices and implement solutions, or ensure that the systems are used in accordance with the instructions given by the provider of the AI and with the AI Act. A simple illustration of such an inventory exercise is sketched below.
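By way of illustration, such an analysis could start from a structured register of software components mapped against the risk tiers described above. The sketch below is a hypothetical example of ours; the component names and classifications are invented, and any real assessment would of course require legal analysis of each system.

```python
# Hypothetical sketch of an AI Act software inventory; the risk tiers
# mirror the four categories above, and all entries are invented examples.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk (e.g. medical device component)"
    TRANSPARENCY = "transparency obligations (e.g. chatbot)"
    MINIMAL = "no mandatory requirements"

@dataclass
class SoftwareComponent:
    name: str
    uses_ai_technique: bool  # e.g. machine learning, statistical approaches
    tier: RiskTier

inventory = [
    SoftwareComponent("diagnosis-support module", True, RiskTier.HIGH_RISK),
    SoftwareComponent("patient-facing chatbot", True, RiskTier.TRANSPARENCY),
    SoftwareComponent("inventory spreadsheet macro", False, RiskTier.MINIMAL),
]

# Flag everything that needs conformity work or further legal review
for c in inventory:
    if c.uses_ai_technique and c.tier in (RiskTier.PROHIBITED, RiskTier.HIGH_RISK):
        print(f"Review required: {c.name} ({c.tier.value})")
```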
Risk-based approach
The use of AI will be subject to a risk-based scheme under the AI Act: the more risk a system poses, the more onerous the restrictions that apply. This is similar to the risk-based approach under the GDPR, which requires data controllers to perform impact assessments for certain personal data processing associated with risk.[8] In essence, the more risk that is identified for the intended processing, the less likely it is that the processing will be deemed lawful. Maintaining the risk-based mindset from the GDPR could therefore also aid compliance with the AI Act.
Businesses must sufficiently document and monitor their systems
Under the AI Act, users must maintain risk management throughout the AI system’s lifecycle, including record-keeping, risk monitoring and human oversight. The GDPR similarly prescribes obligations to keep a written register of personal data processing in certain cases, to ensure data protection by design and by default,[9] and to conduct an impact assessment for high-risk personal data processing.[10] Both regimes require sufficient knowledge of the products and of internal processing, as well as established internal procedures, structures and workflows to facilitate efficient day-to-day compliance. It is likely that the lessons learned and structures implemented by life sciences businesses for GDPR compliance will also inform compliance with similar requirements under the AI Act.
Third-country application
The AI Act will have certain third-country applicability where the output produced by an AI system is used in, or affects persons in, the EU. A typical scenario could be a business processing EU patients’ data in a healthtech AI system running on servers outside the EU, where the decisions made by the AI affect these EU patients. Before outsourcing such processing, the business will have to conduct an impact assessment of the intended use of AI. The GDPR prescribes a similar third-country application: personal data may only be transferred to countries that guarantee adequate personal data protection, and this must be assessed on a case-by-case basis.[11] After the AI Act comes into force, it is likely that any outsourcing of data processing to a country outside the EU will require a thorough impact assessment covering both data protection and the use of AI. Hopefully, synergies between these assessments can be found to streamline the burden of impact assessments under both regimes.
What will this entail going forward?
Today, an ever-increasing number of life sciences companies are looking for ways to apply smart digital solutions to their core business, many of which would likely fall under the AI Act’s broad definition of AI. As a result, many digitalisation initiatives in the life sciences sector will likely face substantial regulatory requirements, not only under the GDPR but also under the AI Act, with parallel sanction regimes under both regulations. To avoid being caught off guard amid the likely significant work required to adapt to the new regulations, the industry needs to start preparing for the new regulatory landscape. In doing so, lessons and insights can be drawn from the work carried out under the GDPR.
[1] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final (‘the Proposal’).
[2] Setterwalls Life Sciences Report November 2021; Sweden’s requirements for processing health data in the cloud – key takeaways from the recent IMY decision https://setterwalls.se/en/article/life-sciences-report-november-2021/
[3] Article 3 of, and Annex I to, the Proposal.
[4] Titles II–IV of the Proposal.
[5] Articles 19 and 49 of the Proposal.
[6] Faktapromemoria 2020/21:FPM109 p. 5.
[7] Articles 5 and 6 of the GDPR.
[8] Article 35(1) of the GDPR.
[9] Data protection by design entails the controller considering privacy and data protection when designing IT systems, policies and other internal procedures in order to comply with data protection law.
[10] Articles 25 and 35 of the GDPR.
[11] Article 45 of the GDPR and the Judgment of the Court (Grand Chamber) of 16 July 2020, C-311/18, Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems (‘Schrems II’). In practice, this requirement means that data processors in such countries must be subject to either EU data protection law or laws that provide an equivalent level of protection of personal data.