article / 14 Nov 2023

Generative AI is here – what companies should consider when using generative AI technology

Introduction

In recent years, artificial intelligence (AI) has generated significant interest and ignited debate amongst experts as well as the general public. For many, AI is a concept of the future, commonly associated with advanced technologies. More recently, AI has become associated with chatbots such as ChatGPT, an example of an AI tool falling within the realm of generative AI.

In this report, we take a closer look at generative AI, shedding light on its applications, risks, and potential legal implications. Our goal is not only to enhance the reader’s understanding of what generative AI encompasses and its diverse applications, but also to provide insights into managing potential risks from both commercial and legal standpoints. Additionally, we aim to offer practical approaches to navigating these challenges whilst staying up to date with the rapidly evolving landscape of AI generally, and generative AI specifically. But in order to understand generative AI, one must first establish a foundational understanding of AI in a broader context.

What is AI?

In the European Commission’s proposal for a new Artificial Intelligence Act (AI Act) (as amended by the European Parliament on 14 June 2023), the term “AI systems” is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”.

An immediate observation is that this EU legal definition of AI is arguably broader than what is commonly associated with the term. For instance, the definition likely covers numerous IT solutions already on the market, going beyond advanced technologies and robots to also encompass systems used in everyday business and daily life.

Generative AI
What is generative AI?

Generative AI constitutes a subset of AI that involves creating models capable of producing new data, such as text, images, or sound. These models learn patterns and structures from existing datasets and subsequently employ this knowledge to create original content. A common example of generative AI is GPT (Generative Pre-trained Transformer), a series of AI models developed by OpenAI that includes GPT-3. These models can generate cohesive text based on the content they were exposed to during training.
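For readers who wish to see the idea in practice, below is a minimal sketch in Python using the open-source Hugging Face transformers library and GPT-2, a small, freely available generative model in the same family as the GPT-3 models mentioned above. The prompt and generation settings are arbitrary choices for illustration.

    # Minimal text-generation sketch using the Hugging Face "transformers" library.
    # GPT-2 is a small, openly available generative language model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Generative AI can help companies to"
    outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

    # The model continues the prompt with new text, reproducing patterns
    # it has learned from its training data.
    print(outputs[0]["generated_text"])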

Challenges with generative AI

With a foundational understanding of generative AI in place, we can now explore the specific issues and challenges that may arise in its various use cases.

Potential intellectual property issues

A highly debated issue concerns the commercial utilization of AI-generated content produced by training AI on potentially copyrighted material obtained through web scraping. This practice has faced criticism, particularly where AI tools are trained on copyrighted internet content without proper authorization. At the same time, training AI tools on a diverse range of works is vital for generating results effectively. Consequently, the practice has prompted numerous legal disputes, involving entities engaged in data collection as well as in the sale of AI-generated content. One discussion in this context is whether using legally acquired images from databases that are not subject to scraping prohibitions for AI training conforms to existing copyright law.

Another aspect of generative AI and copyright pertains to ownership of the output produced by AI models. Whilst this matter remains far from resolved, the prevailing consensus today suggests that neither the AI model, the creator/user, nor the company that owns the AI model can claim copyright over, for instance, an image generated by the AI. Under Swedish law, for example, such a copyright claim would not be possible: no independent creative decisions are involved (user rights are instead dictated by the terms and conditions established by the company that owns the AI model), and a copyright holder must be a natural person.

Improper training of AI and additional commercial and legal implications

Another highly debated issue revolves around the challenge of preventing the intentional and improper training of generative AI. At the time of writing this article, there are few, if any, safeguards in place to prevent a group, nation, or company from intentionally training an AI system in a deceptive manner to further their own objectives. This scenario highlights the need to explore potential countermeasures. In theory, prospective solutions might encompass a variety of measures, such as:

  (i) automatically “ranking” the reliability of the sources of information in an AI model;
  (ii) restricting the AI model to gathering data solely from certain “authorized” sources and individuals; and
  (iii) manually reviewing all information before allowing the AI model to utilize it as training data.

However, these suggested solutions are not without limitations. Solution (i) raises feasibility concerns from a privacy/GDPR perspective: a substantial volume of data may be necessary to effectively and realistically evaluate the trustworthiness of the AI’s information sources, and such data might include personal data, which would in turn make the GDPR applicable. Solution (ii) could, in some respects, seem counterintuitive and contradictory to the overarching objective of AI systems, which is to collect and process vast amounts of information; restricting an AI model to limited factual data, even of high quality, would thus most likely carry competitive drawbacks compared to a model trained in a more open manner. Solution (iii) may not be practically feasible, as actors might not be inclined to allocate the necessary resources, nor see the value in manually reviewing all information before the AI utilizes it as training data.
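To make solution (ii) somewhat more concrete, the sketch below shows one simple way such a restriction could be expressed in code: a hypothetical allowlist filter that keeps only training documents originating from pre-approved sources. The TrainingDocument structure and the domain names are illustrative assumptions, not features of any real system.

    from dataclasses import dataclass
    from urllib.parse import urlparse

    @dataclass
    class TrainingDocument:
        """A candidate item for the training corpus (hypothetical structure)."""
        url: str
        text: str

    # Hypothetical allowlist of "authorized" sources (illustrative domains only).
    AUTHORIZED_DOMAINS = {"example-news.org", "example-archive.gov"}

    def filter_authorized(docs: list[TrainingDocument]) -> list[TrainingDocument]:
        """Keep only documents whose source domain is on the allowlist,
        mirroring solution (ii): training restricted to approved sources."""
        return [
            doc for doc in docs
            if urlparse(doc.url).netloc.lower() in AUTHORIZED_DOMAINS
        ]

    corpus = [
        TrainingDocument("https://example-news.org/articles/1", "Vetted article text."),
        TrainingDocument("https://unknown-forum.net/posts/99", "Unverified post."),
    ]
    print([doc.url for doc in filter_authorized(corpus)])  # only the authorized source remains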

AI legislation

Although AI is currently largely unregulated, there are regulatory initiatives in progress. Foremost amongst these is the European Commission’s proposal for a new Artificial Intelligence Act (the “AI Act”).

The AI Act

The AI Act’s objective is, amongst other things, to address potential AI-related risks and establish a stronger framework for the use of AI in the EU. The regulation will apply to the provision of AI systems to the EU, regardless of whether the system providers are established within the EU or in a third country (i.e., any country outside the EU or the EEA). It will also cover the use of AI system output in the EU even if the providers and users of the AI systems are located in a third country. In this sense, the AI Act resembles the far-reaching scope of the GDPR.

The expansive reach of the AI Act is designed to prevent potential circumvention of the regulation, such as scenarios where personal data is collected within the EU, processed by a high-risk AI system in a third country, and then imported back for use in the EU as resulting output, all without the AI system being implemented within the EU market.

The regulation will apply not only to standalone AI systems, but also to AI systems integrated into other software products. Much of the software that is generally used on the market today may thus be covered by the AI Act to some extent.

Thus, if adopted as proposed, the AI Act can be expected to have a large impact on the market for the provision of both advanced and everyday IT solutions and software.

Different rules depending on risk

The AI Act adopts a risk-based approach, emphasizing various categories of AI system uses or practices over specific techniques for AI system operation. This approach can be likened to a four-tiered pyramid. Within this pyramid, the fourth and uppermost tier categorizes and prohibits several AI practices considered unacceptable. These include the exploitation of vulnerabilities of specific individuals, based on factors such as age or physical and mental capabilities.

The third tier encompasses specific practices associated with high-risk AI applications. Whilst not explicitly banned, these practices are recognized as high-risk and subject to particular requirements. Examples of AI systems falling into this category include those used as safety components within critical infrastructure, such as road traffic, water, and electricity supply, as well as AI systems used in the administration of justice and democratic processes.

When assessing a generative AI tool’s compliance with the AI Act, a company should thoroughly evaluate all risk levels. For generative AI, however, the second tier, which underscores transparency requirements, is typically the most relevant. At this level, AI systems must, for instance, disclose that content was AI-generated. Further transparency requirements particularly applicable to generative AI involve designing the model so that it does not produce illegal content, and publishing concise summaries of the copyrighted data used for training.
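As a simple illustration of the disclosure requirement, the snippet below shows one way an application could label AI-generated output before presenting it to a user. The label wording and function name are purely hypothetical; the AI Act does not prescribe any particular phrasing.

    def with_ai_disclosure(generated_text: str) -> str:
        """Prepend a disclosure notice to AI-generated content.
        The notice wording is a hypothetical example, not a legal standard."""
        return "[Notice: this content was generated by an AI system.]\n" + generated_text

    print(with_ai_disclosure("Summary of the quarterly figures: ..."))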

Additionally, the AI Act will enable EU member states to implement voluntary codes of conduct for AI systems not falling within the aforementioned risk pyramid.

Like the GDPR, the AI Act outlines substantial penalties for violations, with administrative fines ranging from EUR 10 million to EUR 30 million, or 2-6% of the entity’s total global annual revenue, whichever is higher.
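To illustrate how a “whichever is higher” cap works, the short calculation below computes the theoretical maximum fine for a hypothetical company, using the upper end of the proposed ranges (EUR 30 million or 6% of global annual revenue). The revenue figure is invented for the example.

    # Hypothetical illustration of the "whichever is higher" penalty cap.
    fixed_cap_eur = 30_000_000                  # upper end of the proposed fixed range
    revenue_share = 0.06                        # upper end of the proposed range (6%)
    global_annual_revenue_eur = 2_000_000_000   # invented figure for illustration

    max_fine_eur = max(fixed_cap_eur, revenue_share * global_annual_revenue_eur)
    print(f"Theoretical maximum fine: EUR {max_fine_eur:,.0f}")
    # -> EUR 120,000,000 (the revenue-based cap exceeds the fixed cap here)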

Conclusions and takeaways

The development of AI is fast-paced, dynamic, and, for some, either thrilling or daunting. As with many technological advancements, expecting the law to adapt promptly and effectively to AI’s progress is likely unrealistic. Despite regulatory efforts and lawmakers’ aims to maintain technology-neutral legislation, AI’s development is likely to outpace legal adaptation. Consequently, unregulated gaps will inevitably emerge, and uncertainties surrounding the application of existing laws will persist. In summary, these circumstances give rise to significant challenges and risks, such as those described above.

However, there are some practical measures that can be incorporated into daily routines in order to mitigate risk. To preserve confidentiality and protect sensitive data, it is essential to avoid feeding such information into AI systems. Additionally, it is important to recognize that AI-generated output may contain errors and thus requires additional verification and scrutiny. Actors looking to implement or use an AI tool should ensure that routines are established both before and after implementation. Understanding both the AI system and the applicable regulations is further crucial for effective system training and compliance. These considerations should guide the decision-making process when contemplating the use of an AI tool. Finally, experience has taught us the significance of developing a thorough understanding of the technology itself, as well as the acute need to stay updated on regulatory developments. The AI Act may enter into force as soon as 2024 and can be expected to extend beyond advanced technologies to encompass systems used in everyday business and daily life. Staying on top of developments in this sector will therefore be vital to remaining competitive.
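As a concrete illustration of the first of these measures (keeping sensitive data out of AI systems), the sketch below shows a naive pre-submission check that blocks prompts containing patterns resembling sensitive identifiers before they are sent to an external AI service. The patterns and the notion of “sensitive” used here are simplified assumptions; a real deployment would require far more robust detection.

    import re

    # Naive, illustrative patterns for data that should not leave the organization.
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{6}[-+]?\d{4}\b"),              # e.g. Swedish personal identity numbers
        re.compile(r"\b(?:\d[ -]?){13,19}\b"),           # e.g. payment card numbers
        re.compile(r"\bconfidential\b", re.IGNORECASE),  # crude keyword check
    ]

    def is_safe_to_submit(prompt: str) -> bool:
        """Return False if the prompt appears to contain sensitive data."""
        return not any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)

    print(is_safe_to_submit("Summarize our public press release."))     # True
    print(is_safe_to_submit("Draft a memo about client 850101-1234."))  # False

It should be stressed that such a filter only mitigates accidental disclosure; organizational routines and training remain the primary safeguards.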
