Article | 9 October 2025

New requirements for the use of AI tools – how can employers turn the regulatory framework into a competitive advantage?


Many are likely aware that the EU’s AI Act[1] has entered into force. The requirements are being implemented in stages, with some already in effect and others coming into force over the next two years. In this article, we review some of the issues employers should consider when using AI tools in their operations and conclude with a checklist for AI compliance. With the right AI usage and strategy, the AI Act can be seen as a mark of quality and a competitive advantage, rather than a compliance burden.

Brief overview of the AI Act

The regulation aims, among other things, to create harmonized rules for the development and use of AI within the EU. The Act classifies AI systems according to their risk level: unacceptable, high, limited, and minimal risk. In general, the higher the risk a system poses, the stricter the requirements. Different requirements also apply to different actors: providers (those who develop AI systems and place them on the market) are subject to different rules than deployers (those who use them).

The following discussion assumes the employer is a deployer (user) of an AI system. If you want to know what applies to providers, we recommend reading this article. Note that an employer may itself come to be regarded as a provider, for example by participating in the development of features or similar aspects during the tool’s development phase.

Key considerations in the workplace

For employers, it is important to be aware that many HR-related AI systems or applications are considered high-risk AI systems. Examples include tools for:

  • Recruitment (e.g., posting targeted job ads, analysing and filtering applications, and evaluating candidates).
  • Performance management (e.g., if the system is intended to make decisions on promotions or evaluate employee performance).
  • Task allocation based on behaviour, personality traits, or characteristics.

The common denominator for the above tools, and the reason they are considered high-risk systems, is that they can significantly impact employees’ future career prospects and livelihoods.

For this reason, employers must inform workers’ representatives and the affected employees that they will be subject to the use of a high-risk AI system. This information must be provided before the system is put into service or used in the workplace.

It should also be noted that the regulation does not prevent Member States from maintaining or introducing laws that are more favourable to employees regarding the protection of their rights in relation to employers’ use of AI systems, or from encouraging or allowing the application of collective bargaining agreements that are more favourable to employees. Therefore, it is not sufficient to only review the requirements of the AI Act; employers must also stay informed about any additional rules.

What requirements apply when using high-risk AI systems?

As noted above, many systems may fall into the high-risk category, which entails extensive requirements for both processes and documentation. These requirements include, among others, that the employer, in addition to the information obligation mentioned above, must:

  • Take appropriate technical and organizational measures to ensure that the AI systems used are operated in accordance with the provider’s instructions for use.
  • Assign human oversight of the AI system to individuals with the necessary competence, training, and authority.
  • Ensure that the data input into the AI system is relevant and sufficiently representative for the intended purpose of the AI system (to the extent the employer controls this).
  • Monitor the operation of the AI system and report serious incidents or risks to the provider, distributor, and authorities.
  • Retain automatically generated logs from the AI system for at least six months (to the extent the employer controls these logs and unless otherwise provided by law).
  • Conduct a data protection impact assessment where necessary (applies if personal data may be processed in connection with the use of the system).
  • Cooperate with relevant authorities during investigations and supervision.

Checklist for AI compliance

To ensure compliance with the AI Act, we recommend that you take the following steps:

  • Step 1: Inventory and classify your AI tools (high risk, minimal risk, etc.), map each area of use and purpose, and identify the risks associated with each system.
  • Step 2: Determine your role in relation to each tool (i.e., whether you are a deployer or may be considered a provider), as this affects which requirements apply.
  • Step 3: Conduct a gap analysis by comparing your current routines, processes, and systems with the regulation’s requirements. Note where you fall short, e.g., regarding information to employees, data quality, or human oversight of the system. Focus initially on the systems and tools you have classified as high risk.
  • Step 4: Minimize risks and address deficiencies identified in previous steps, e.g., by establishing and implementing necessary governance documents, processes, and routines. Examples include an AI strategy outlining how you intend to work with AI and how you ensure it is used ethically and correctly. You should also develop internal instructions on how AI may be used in the workplace, routines for human oversight of your AI systems, incident reporting procedures, and procedures for log retention.
  • Step 5: Train your organization and ensure that your staff have sufficient training and understanding of the AI regulatory framework, the technology, and its risks. Note that the requirement for AI competence has already entered into force and that it is important to tailor training to the different parts of the organization to ensure relevance.
  • Step 6: Ensure transparency and information by informing your staff about AI use and their rights.

By working proactively with AI compliance according to the checklist above, you can strengthen both your organization’s competitiveness and your employer brand. A structured and well-developed AI strategy serves as a mark of quality: it not only reduces your risk exposure, but can also help you build stronger trust with current and future employees, improve the data quality in the tools you use (and thus the accuracy of your decision-making), and create a differentiated position with customers and investors who increasingly demand responsible AI.

We therefore recommend that you start your work now. At Setterwalls, we are happy to help you set up a structure for your AI work or review your existing structure. We can also assist with risk and classification assessments as well as the development or review of governance documents, routines, and other information. If you would like to know more about the AI Act or would like us to give a presentation for you and your staff about the rules, you are of course very welcome to contact us.

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
