act legal Germany, 29 July 2024

The European Regulation on Artificial Intelligence (AI Act)

In order to shape Europe’s digital future, the European Commission has prepared and issued a substantial body of legislation that forms the legal framework for the digital transformation. In our last newsletter we gave you an overview of the most important innovations. We would now like to introduce you to the Artificial Intelligence Regulation (AI Act) and explain how it will impact your business. If you have any questions, please feel free to contact us.

Introduction

Regulation (EU) 2024/1689 (the Artificial Intelligence Regulation) was published on 12 July 2024 and will enter into force on 1 August 2024. The AI Regulation applies directly in all EU Member States.

Some of the transitional periods are quite short. For example, the provisions in Chapter I (General Provisions) and Chapter II (Prohibited AI Practices) will apply from 2 February 2025. Chapter V (General-Purpose AI Models), Chapter VII (Governance), Chapter XII (Penalties) and Art. 78 (Confidentiality), among others, will apply from 2 August 2025. The longest transitional period is for the provisions in Art. 6(1) (Classification rules for high-risk AI systems) and the corresponding obligations under the AI Regulation, which will apply from 2 August 2027.

Since the AI Regulation contains complex legal and technical requirements, you should already be taking them into account and applying them in all AI planning and implementation within your business.

What are the core elements?

The AI Regulation takes a risk-based approach whereby AI systems are assessed according to the risk they pose to human safety, health and fundamental rights. The higher the risk, the more comprehensive the obligations. A distinction is made between four risk levels, each of which entails different requirements:

AI systems with an unacceptable risk are generally prohibited (Art. 5 AI Regulation). These include, for example, AI applications for the social evaluation of individuals and employees (social scoring), the untargeted creation of facial recognition databases, and emotion recognition in the workplace.

AI systems that are used in the areas of health or security or in areas that are sensitive to fundamental rights (e.g. biometric identification, critical infrastructure, employment and human resource management, law enforcement, migration and border control) are considered high-risk AI systems (Art. 6 AI Regulation). These systems are bound by strict requirements, such as the establishment of a risk management system, data governance to avoid possible bias, the preparation of technical documentation, comprehensive transparency and information obligations, and the establishment of human oversight and cybersecurity measures.

For systems that are not regarded as high-risk AI systems but are intended to interact with natural persons or generate content (e.g. chatbots for customer information, applicant selection, talent management), specific transparency and information requirements apply. Users must be informed that they are interacting with an AI system (Art. 50 AI Regulation), and the use of such systems must be necessary and appropriate.

A newly created European supervisory authority, the European AI Office, will monitor compliance with the rules, and penalties may be imposed in the event of non-compliance.

To whom do the rules apply?

The AI Regulation is aimed at providers that place AI systems on the market or put them into service in the EU, as well as at deployers, importers and distributors of AI systems.

Companies that use AI systems can be both deployers and providers within the meaning of the AI Regulation.

For example, a company that has its own chatbot developed and then uses it on its corporate website may act as both provider and deployer.

What are the risks?

In the event of infringements of the AI Regulation, companies face fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. Lawsuits brought by competitors and claims for damages by those affected are also possible.

What should be done?

As providers or deployers, companies are responsible for ensuring that their AI systems are used lawfully. Before developing or deploying AI systems, all businesses must examine and document in detail which AI Regulation risk level applies to their systems and which legal requirements result from it. Regulatory requirements, especially mandatory transparency and information rules, must be observed before the AI systems are implemented.

If third-party AI systems are used, contracts must be concluded to protect company data and ensure compliance with legal requirements.

In addition to planning, continuous monitoring and improvement of the measures taken is also advisable to ensure comprehensive AI compliance management.

Furthermore, according to Art. 4 of the AI Regulation, businesses must ensure that employees who deal with AI systems have sufficient AI literacy. This includes clear internal company guidelines and training.

Other legal aspects, such as data protection, IT security, protection of trade secrets and know-how, industrial property rights and employment-law provisions, will regularly come into play when operating and using AI systems. We therefore recommend that you always draw up internal company guidelines (an AI policy) and prepare a data protection impact assessment to ensure that AI is handled in accordance with the law. This also helps to avoid the associated liability risks, both for the company and for its responsible personnel.

If you have any questions, please feel free to contact us at any time!

For more information, please contact

Dr. Florian Wäßle, LL.M.

Attorney at law
act legal Germany Frankfurt, Germany
Phone: +49 69 24 70 97 46

Dr. Thomas Block, MBA

Attorney at law
act legal Germany Frankfurt, Germany
Phone: +49 69 24 70 97 36