Article | 23 Jan 2025

Navigating the EU AI Act, Part I: The “AI Act Roadmap”


Introduction

Having entered into force on 1 August 2024, the Artificial Intelligence Act, Regulation (EU) 2024/1689 (“the AI Act” or “the Act”), represents the first harmonised legal framework for AI systems in the EU. With the first obligations of the AI Act starting to apply in February 2025 (with more coming into force systematically over the next 30 months thereafter), now is the time to start preparing your organisation and development projects for compliance. Failure to comply with the AI Act can result in significant fines of up to EUR 35 million, or 7 per cent of annual worldwide turnover, whichever is the higher.
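To illustrate the “whichever is the higher” mechanism for the most serious infringements, the cap can be expressed as a simple calculation (an illustrative sketch only; lower fine tiers apply to other infringements and are not shown here):

    def maximum_fine_eur(annual_worldwide_turnover_eur: float) -> float:
        # Cap for the most serious infringements: EUR 35 million or
        # 7 per cent of annual worldwide turnover, whichever is the higher.
        return max(35_000_000.0, 0.07 * annual_worldwide_turnover_eur)

    # Example: EUR 1 billion in turnover -> the 7 per cent cap (EUR 70 million) applies.
    print(maximum_fine_eur(1_000_000_000))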

In this initial article of a three-part series, representatives from Setterwalls’ Life Sciences team present an “AI Act roadmap”, which provides guidance on identifying one’s role under the AI Act, classifying the risk level of an AI system, and understanding the obligations involved.

This is an article series by Setterwalls’ life sciences team. The remaining articles in the series will be published during January 2025.

Scope

The AI Act applies to “AI systems”, defined as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This definition is relatively broad to avoid the AI Act becoming outdated. However, the fact that an AI system must operate with a degree of “autonomy” and be able to “infer” how to produce an output from the input it receives distinguishes AI systems from simpler software, where the output is typically predetermined by the algorithm (“if x then y”).

Moreover, in response to the sudden availability of general-purpose AI models such as ChatGPT, the final version of the AI Act was amended from its initial proposal to include specific rules for these models. A general-purpose AI model is defined as an “AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications (…)”.

The AI Act has an extraterritorial scope, meaning that it applies to providers and deployers of AI systems that are established or located in a third country, if the output produced by the AI system is used within the EU. Additionally, the obligations of the AI Act extend to providers that place AI systems or general-purpose AI models on the EU market, or put AI systems into service within the EU, regardless of where those providers are established.

Certain AI systems and AI models are excluded from the scope of the Regulation, for example AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.

Hereafter, this article will primarily focus on AI systems rather than general-purpose AI models.

The “AI Act Roadmap”

If the software at hand qualifies as an AI system, the next step is to identify the role each actor assumes under the AI Act and determine the risk classification of the AI system. The AI Act takes a “risk-based approach” – the higher the risk of an AI system or model, the stricter the rules. The actors will have different obligations under the AI Act depending on the risk level of the system or model involved.

Roles

The AI Act applies to different actors in different ways. Depending on whether a company is considered a provider, deployer, importer or distributor of an AI system, it has different obligations under the Act.

Providers

The most heavily regulated subjects under the AI Act are “providers” of AI systems. The providers bear the overall responsibility for ensuring the compliance and safety of AI systems. Typically, the developer of the AI system is considered the provider. A company is classified as a provider under the AI Act if it meets the following criteria:

  • The company develops an AI system or has an AI system developed; and
  • The company places the AI system on the market or puts it into service in the EU.

These actions must be carried out under the company’s own name or trademark. Whether these actions are performed for payment or free of charge is irrelevant. As mentioned above, providers do not need to be located within the EU for the AI Act to apply to them.
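Purely as an illustration, and assuming the simplified yes/no inputs below (our own shorthand rather than terms from the Act), the provider test can be sketched as follows:

    def is_provider(develops_or_has_developed: bool,
                    places_on_eu_market_or_puts_into_service: bool,
                    under_own_name_or_trademark: bool) -> bool:
        # Both criteria must be met, and the actions must be carried out under
        # the company's own name or trademark. Whether this is done for payment
        # or free of charge is irrelevant, and the provider need not be in the EU.
        return (develops_or_has_developed
                and places_on_eu_market_or_puts_into_service
                and under_own_name_or_trademark)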

Deployers

A “deployer” of an AI system is a user of the AI system. Deployers have the critical responsibility of ensuring the safe and compliant use of the AI systems they deploy. Under the AI Act, a company is considered a deployer if it uses an AI system under its authority, except where the system is used in the course of a personal, non-professional activity. The exception for personal, non-professional use aims to disqualify mere end-users from being considered deployers. Deployers are natural or legal persons, public authorities or other bodies that deploy an AI system in a professional capacity. For instance, if a company in the EU purchases a licence for an AI system to assist with recruitment, it will be regarded as a deployer of that AI system.

Importers

A company is considered an “importer” if it is located or established in the EU and places on the EU market an AI system that bears the name or trademark of a natural or legal person established outside the EU. For example, EU-based subsidiaries of foreign corporate groups will be regarded as importers when they place AI systems developed by the foreign holding company on the EU market, to the extent such systems bear the name of the foreign holding company.

Distributors

Under the AI Act, “distributors” are companies that are part of the supply chain that makes an AI system available on the EU market, but are neither providers nor importers. For example, if a company based in the US develops an AI system that is imported to the EU by a subsidiary based in France, such subsidiary acts as the importer. If the French subsidiary then uses its own subsidiary in Spain to market the AI system in Spain, the Spanish company would be classified as a distributor.

Risk classifications

As stated above, the AI Act applies a risk-based approach, similarly to the model used in other EU product safety legislation. Thus, in addition to the different roles, the obligations for a particular company are determined by the risk classification of the AI system in question. Under the AI Act, AI systems may be classified into four risk categories:

  • unacceptable risk (prohibited AI practices/systems),
  • high risk (high-risk AI systems),
  • limited risk (AI systems intended to interact with individuals), and
  • minimal and/or no risk (all other AI systems, which are not subject to specific obligations under the AI Act).
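For a schematic overview, the four tiers can be summarised as a simple enumeration (an illustrative simplification; the legal classification itself follows the Articles and Annexes discussed below):

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited AI practices (Article 5)"
        HIGH = "high-risk AI systems (Article 6, Annexes I and III)"
        LIMITED = "transparency obligations (Article 50)"
        MINIMAL = "no specific obligations under the AI Act"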

Unacceptable risk

According to Article 5 of the AI Act, certain AI systems are prohibited within the EU. The prohibitions are in place to ensure that AI systems do not violate fundamental rights and to prevent unacceptable risks to individuals. The AI systems that are banned include, for example, systems used for social scoring, cognitive behavioural manipulation and emotion recognition in the workplace and in educational institutions.

High risk

The majority of the provisions in the AI Act apply to “high-risk” AI systems, that is, AI systems that pose a significant risk to the health, safety, or fundamental rights of individuals. According to Article 6 of the AI Act, an AI system may be classified as a high-risk system with reference to two different annexes of the Act: Annex I and Annex III. In the following, systems categorized as high-risk under each of the annexes will be referred to as Annex I and Annex III systems, respectively.

Annex I systems

Annex I systems include AI systems (i) that are either products or safety components of products covered by any EU legislation listed in Annex I (“Annex I legislation”), and (ii) that are required to undergo a third-party conformity assessment under the applicable Annex I legislation. This means that, to determine whether an AI system is considered high-risk under Annex I, two steps are required.

First, it must be assessed whether the system is a product or component covered by Annex I legislation. The legislation covers products such as machinery, toys, lifts, equipment and protective systems used in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices and in vitro diagnostic medical devices, as well as products in the automotive and aviation sectors.[1]

Second, it must be determined whether the system requires a “third-party conformity assessment” as per the applicable Annex I legislation. Annex I legislation, like the AI Act, consists of product safety regulations that outline conformity assessment procedures for each product category. Typically, all products must undergo a conformity assessment, which normally includes testing, inspection and certification to ensure that the product meets all legislative requirements before being placed on the market. In the case of higher-risk products, the conformity assessment must involve a designated conformity assessment body, known as a notified body, rather than being performed by the manufacturer itself. A notified body is an independent third party, which is why a conformity assessment procedure involving a notified body is referred to as a “third-party conformity assessment.”

When determining whether an AI system is considered high-risk under Annex I in the AI Act, the risk level and the associated conformity assessment requirements under Annex I legislation are the deciding factors. Consequently, the level of risk under the AI Act and the level of risk under the applicable Annex I legislation are interrelated, as high-risk products under the latter are typically those requiring third-party conformity assessment.
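Schematically, and purely as an illustration of the two-step test described above (the parameter names are our own shorthand), the Annex I assessment can be sketched as:

    def is_high_risk_under_annex_i(is_product_or_safety_component_covered: bool,
                                   requires_third_party_conformity_assessment: bool) -> bool:
        # Step 1: the AI system is a product, or a safety component of a product,
        #         covered by the EU legislation listed in Annex I.
        # Step 2: that legislation requires a conformity assessment involving a
        #         notified body (a third-party conformity assessment).
        return (is_product_or_safety_component_covered
                and requires_third_party_conformity_assessment)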

The detailed assessment under each part of Annex I legislation is beyond the scope of this article. Setterwalls’ life sciences team has extensive experience in product safety and will be happy to assist in classifying your product.

Annex III systems

Annex III systems include AI systems with a use case listed in Annex III. Such systems include: [2]

  • Non-banned biometric systems: Biometric AI systems, provided that their use is permitted.
  • Critical infrastructure systems: Systems used in critical infrastructure, such as critical digital infrastructure, road traffic management and/or the supply of water, gas, heating or electricity.
  • Education and employment systems: Systems used for educational or employment purposes.
  • Essential services systems: Systems affecting access to and enjoyment of essential public and private services, such as healthcare, emergency first response services, life and health insurance and creditworthiness.
  • Law enforcement and border control systems: Systems used in law enforcement, migration, asylum and border control management, provided that their use is permitted.
  • Legal and democratic systems: Systems used in legal or democratic contexts, such as dispute resolution or elections.

The use cases in Annex III may be exempted from being considered high-risk in certain circumstances, for example if the system performs a narrow procedural task, improves the outcome of a previously completed human activity or performs preparatory tasks relevant to the use cases in Annex III. However, AI systems will always be considered high-risk if they profile individuals by automatically processing personal data to assess aspects such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement. It should also be noted that the Commission, according to Article 112 of the AI Act, shall evaluate the need to amend, inter alia, Annex III. Consequently, additional use cases may be added later on.
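The logic of the Annex III classification, including the exemptions and the profiling override described above, can be sketched as follows (again an illustrative simplification using our own shorthand, not a substitute for the legal assessment):

    def is_high_risk_under_annex_iii(use_case_listed_in_annex_iii: bool,
                                     exemption_applies: bool,
                                     profiles_natural_persons: bool) -> bool:
        if not use_case_listed_in_annex_iii:
            return False
        # Profiling of individuals is always high-risk, even if an exemption
        # (e.g. a narrow procedural task or a merely preparatory task) would
        # otherwise apply.
        if profiles_natural_persons:
            return True
        return not exemption_applies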

Limited risk

Limited-risk AI systems are not explicitly defined in the AI Act. Rather, limited-risk AI systems are those that are subject to transparency obligations under Article 50 of the AI Act. These systems are unlikely to cause significant harm or violate fundamental rights. AI systems that interact directly with humans, such as chatbots, are, for example, considered limited-risk AI systems. In addition, AI-generated and AI-manipulated content, such as synthetic audio, image, video or text content and deep fakes, is considered limited risk. A “deep fake” is defined as manipulated image, audio or video content that closely resembles existing persons, objects, places, entities or events, and could falsely appear authentic or truthful to a person.

Minimal and/or no risk

Minimal or no-risk AI systems are those that do not fall into any of the risk categories described above.  Their use is considered to have minimal potential to cause harm or infringe upon fundamental rights. Consequently, such systems are not subject to the obligations imposed on higher risk categories. Examples of minimal or no-risk AI systems include AI-enabled video games and spam filters.

However, it is important to note that the risk classification of an AI system may change over time due to various factors such as its use, development or expansion (including as a result of the Commission’s evaluation and review mentioned above). Governance and oversight should be in place for all AI systems, irrespective of their initial risk classification, in order to determine if they transition to a higher risk category and if so to ensure that they remain compliant.

Obligations depending on classification and role

Providers of high-risk AI systems

A majority of the obligations are imposed on providers (in practice, typically developers) of high-risk AI systems. Perhaps most prominently, these obligations influence how high-risk AI systems are permitted to be designed and operated. First and foremost, providers must ensure that their high-risk AI systems comply with all the requirements set out in Chapter III, Section 2 of the AI Act. This includes inter alia designing the AI system to allow for human oversight and to achieve appropriate levels of accuracy, robustness and cybersecurity. In addition, high-risk AI systems must be designed to technically allow for the automatic recording of events (logs) throughout the system’s lifecycle.

Providers are required to establish a risk management system that spans the entire lifecycle of the high-risk AI system. This system should identify known and foreseeable risks that may arise when the AI system is used as intended, or under conditions of reasonably foreseeable misuse. Providers must also adopt appropriate measures to address these identified risks.

Providers must ensure that training, validation and testing datasets are relevant, sufficiently representative, and, to the greatest extent possible, free of errors and complete in view of the intended purpose. While managing these datasets, providers must also ensure that any personal data is handled in accordance with the GDPR. The potential associated challenges are further discussed in the second article of this series.

Providers are subject to significant obligations not only to ensure, but also to demonstrate, compliance with the AI Act, both on their own behalf and throughout the supply chain. They must draft technical documentation to demonstrate compliance with the AI Act and provide the competent authorities with the information necessary to assess such compliance. This includes an obligation to provide instructions for use to downstream deployers to enable their compliance. Additionally, providers must establish a quality management system that includes written policies, procedures and instructions aimed at ensuring regulatory compliance. As with other product safety legislation (e.g. Annex I legislation), the AI Act requires providers of high-risk AI systems to ensure that their system undergoes the appropriate conformity assessment procedure before it is placed on the market or put into service.

The above obligations are not exhaustive, and providers of high-risk AI systems are advised to assess the relevant obligations on a case-by-case basis.

Deployers, importers and distributors of high-risk AI systems

Deployers of high-risk AI systems are subject to certain obligations, although these are fewer compared to those of providers. Deployers must implement technical and organisational measures to ensure that they use high-risk AI systems in accordance with the instructions for use provided. Additionally, deployers are required to assign human oversight to individuals who have the necessary competence, training and authority to oversee the AI systems.

If the deployer has control over the input data, which is typically the case when the AI system is applied to the deployer’s own operations, the deployer must ensure that the input data is relevant and sufficiently representative for the system’s intended purpose. Furthermore, if a deployer intends to use a high-risk AI system in a workplace, it is mandatory to inform employees’ representatives and the affected employees that they will be subject to the use of a high-risk AI system.

For importers and distributors, the obligations in connection with high-risk AI systems are primarily focused on ensuring that the responsibilities of actors preceding them in the value chain have been met and that their own actions do not compromise the AI system’s compliance with the AI Act. Importers are required to verify that the provider has fulfilled its obligations. Distributors must verify that both the provider and the importer have fulfilled their obligations. Both importers and distributors must also ensure that storage or transport conditions do not jeopardise the system’s compliance with the requirements of the AI Act while the high-risk AI system is their responsibility.

Deployers, importers and distributors are all subject to monitoring obligations. This includes reporting to the preceding actor in the value chain and to the competent authority if they have reason to believe that the AI system does not comply with the requirements for high-risk AI systems, such as the establishment of a risk management system, data governance, record keeping and transparency, etc., as set out in Chapter III, Section 2 of the AI Act.

Finally, it is important to note that a distributor, importer or deployer can transition into a provider if they modify a high-risk AI system that has already been placed on the market. This may happen if they put their name or trademark on the high-risk AI system, make a substantial modification to the system or modify the intended purpose of the AI system.
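As an illustration of this transition rule (using our own simplified inputs), any one of the three triggers is enough:

    def becomes_provider(puts_own_name_or_trademark_on_system: bool,
                         makes_substantial_modification: bool,
                         changes_intended_purpose: bool) -> bool:
        # A distributor, importer or deployer of a high-risk AI system already
        # placed on the market steps into the provider role if any trigger applies.
        return (puts_own_name_or_trademark_on_system
                or makes_substantial_modification
                or changes_intended_purpose)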

Providers and deployers of limited-risk AI systems

Limited-risk AI systems are subject to lighter transparency obligations.

Providers must ensure that AI systems intended to directly interact with natural persons, such as chatbots, are designed to inform individuals that they are interacting with an AI system. This requirement is waived if it is obvious to a reasonably well-informed, observant and circumspect natural person, taking the circumstances and context of use into account. When providing this information, providers should consider the characteristics of natural persons in vulnerable groups, such as those affected by age or disability, especially if the AI system is intended to interact with these groups. Providers of AI systems, including general-purpose AI systems, that generate synthetic audio, image, video or text content, must ensure that the outputs of these systems are labelled and detectable as artificially generated or manipulated.

Deployers of AI systems that generate deep fakes must disclose that the content has been artificially generated or manipulated. Deployers must notify natural persons when they are exposed to AI systems that process biometric data to identify or infer emotions or intentions, or to assign them to specific categories. These categories can include aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin, personal preferences and interests. Deployers of AI systems that generate or manipulate text published to inform the public about matters of public interest must disclose that the text has been artificially generated or manipulated.

Concluding remarks

The AI Act will have a significant impact on companies developing and marketing AI systems in the EU. Given that providers bear most of the responsibilities under the AI Act, a correct classification of the respective roles of, for example, provider and deployer is vital for managing legal exposure in the event of regulatory challenges. Companies should not assume, for example, that because they engage a third party to help them develop an AI system, the third party will be the provider (and consequently bear all the responsibilities). The third party may not be a provider, or both companies may be separate providers. Another issue for providers will be to ensure that the contractual terms with third-party suppliers of components etc. clearly specify and allocate the roles and responsibilities between the parties with respect to the AI system.

While most of the provisions are aimed at providers of high-risk systems, other actors will also need to prepare for the requirements that the AI Act will impose. For example, deployers will be responsible for verifying their provider’s compliance with the AI Act and the AI system’s performance, and they should also ensure that the provider offers sufficient assistance to support their own compliance with the AI Act. Preparing for the requirements is important not only because obligations will be imposed on different actors in the value chain, but also because the role and classification of an AI system are not fixed and may change depending on the actions of the company, the development of the AI system and, last but not least, the outcome of the Commission’s assessment of the need to amend Annex III.

Although most of the provisions will not apply until 2026, it is advisable to start mapping out the requirements now, especially for developers of high-risk systems. This is because many of the provisions have features that make it difficult to ensure compliance retrospectively, such as requirements for the technical design of the system and the quality of the training data used.

At Setterwalls, we have extensive experience of working with highly regulated sectors and are well equipped to advise on how to navigate compliance with the AI Act and other product safety legislation. Don’t miss the next part of this article series, where we explore the relationship between GDPR and the AI Act.

[1] Please refer to Annex I of the AI Act for the full list of legislation.

[2] Please refer to Annex III of the AI Act for the full list of use cases.
