article / 25 Apr 2023

Setterwalls’ Tech Regulatory News Series – AI Act


What: The EU AI Act (AI Act) is a proposed EU regulation that aims to establish a legal framework for artificial intelligence (AI) that is trustworthy, safe, and respects fundamental rights.

Who: The AI Act will apply to all AI systems that are 1) developed in the EU, 2) placed on the EU market, and/or 3) used in the EU (i.e. it also applies to AI developed outside the EU if it is used in the EU).

When: The European Commission proposed the AI Act in April 2021, and it is currently being reviewed by the European Parliament and the Council of the European Union. Q3/Q4 2024 is the earliest that the AI Act’s requirements can become applicable.

Actions: Some of the key actions that companies covered by the AI Act will likely need to undertake include:

  1. Classification: Companies should start taking an inventory of their AI systems as soon as possible and classify them in accordance with the AI Act.
  2. Conducting risk assessments: Companies must perform risk assessments for AI systems that are deemed high-risk according to the AI Act. This assessment will evaluate the potential impact of the AI system on fundamental rights, including safety and privacy, and determine whether additional safeguards are necessary.
  3. Ensuring transparency: The AI Act requires companies to ensure transparency in the development and use of AI systems, including disclosing the types of data used, the algorithm used, and how decisions are made by the AI system.
  4. Establishing human oversight: High-risk AI systems must have human oversight, and the possibility of human intervention, to ensure that the AI system operates safely and accurately.
  5. Complying with technical standards: The AI Act establishes technical standards for AI systems, and businesses must ensure that their systems comply with these standards.
  6. Reporting incidents: Companies must report any incidents or malfunctions of AI systems that could impact fundamental rights or safety.
  7. Ensuring data quality: Companies must ensure that the data used to train AI systems is accurate, representative, and free from bias.
  8. Providing information to users: The AI Act requires businesses to provide clear information to users about how AI systems operate, their limitations, and any potential risks associated with their use.

Enforcement: If an organisation violates the AI Act, it may face fines of up to EUR 30 million or 6% of its total worldwide annual turnover, whichever is higher (more than the maximum GDPR fine for a serious violation). The AI Act also provides for prohibiting the use of an AI system that violates the regulation, and the company may be required to take corrective action to address the violation. In some cases, a product recall may be required if the AI system poses a significant risk to health, safety, or fundamental rights.

Do a Setterwalls EU AI Act Gap Analysis and find out what your organisation needs to do to be compliant.
