act legal Publications
Status quo vadis: Limits on the use of artificial intelligence
A concrete legal framework for the use of artificial intelligence in Europe does not yet exist. Nevertheless, there are legal hurdles to consider when using it.
What do companies have to consider when using artificial intelligence?
„The imagination knows no bounds.” When it comes to the use of artificial intelligence (AI), technology, too, seems to set hardly any limits to the imagination. This is the impression given by the daily media reports about ChatGPT, a generative AI that can write texts and poems, conduct quick research and pass exams at elite American universities.
Many entrepreneurs are asking themselves how they can put the seemingly limitless technical capabilities of „thinking” software and algorithms to meaningful use in their companies, especially in working life. Four areas of application stand out: AI for „people analytics” is used for performance evaluation and aptitude testing in the search for candidates or the further qualification of talent. AI for „algorithmic management” is aimed at planning and controlling employee activities. AI for „task automation” takes on simpler tasks independently. Finally, employees increasingly use ChatGPT to obtain templates for texts, programming and the like. But do imagination and technology really know no bounds, or are there legal hurdles that make their use risky?
A concrete legal framework for the use of artificial intelligence in Europe does not yet exist. What is notable is that in April 2021 the EU Commission presented a first draft regulation on the use of artificial intelligence (AI Regulation), which – as things stand at present – could be adopted this year. It is worth taking a closer look at this draft in order to understand the direction in which the legislative efforts are going. For example, the legislator uses a very broad definition of AI as a basis. According to this, even simple automation processes could fall within the scope of the AI Regulation, and a large number of systems that are already in use would have to be reviewed against it.
The AI Regulation defines four classes of risk (unacceptable, high, low and minimal), which are subject to varying degrees of regulation. AI practices that are considered unacceptable, for example because they violate fundamental values of the EU, are prohibited (Art. 5 AI Regulation); an example is the evaluation of social behaviour (social scoring). For high-risk AI systems, minimum requirements apply (Art. 8 ff. AI Regulation), which providers and users of the systems must fulfil (Art. 16 ff. AI Regulation). In addition, irrespective of the risk class, transparency requirements apply in particular (Art. 52 AI Regulation). AI systems with a low or minimal risk, on the other hand, are not subject to any special regulation.
Providers of such systems may voluntarily adhere to codes of conduct (Art. 69 AI Regulation). Forward-looking entrepreneurs could already validate planned and existing uses of AI against these rules, which should make such use legally more secure. The AI Regulation provides important and up-to-date guidance on how the EU Commission envisages the use of AI in legal terms.
But what legal hurdles currently apply to the seemingly limitless possible uses of AI? The most relevant are the General Data Protection Regulation (GDPR), anti-discrimination rules, regulations on consumer protection and product safety, and co-determination. For example, if a company uses the speech analysis software from Precire to automatically analyse the suitability of applicants on the basis of their language, or relies on Hirevue’s video analysis function to create personality profiles, a prior data protection analysis and its documentation are recommended. Under Section 26 of the German Federal Data Protection Act (BDSG), which may be in breach of EU law, data processing must be (i) suitable to achieve the predefined purpose, (ii) necessary, i.e. there must be no milder interference with rights, and (iii) the interests of the parties involved must be weighed against each other.
GDPR and anti-discrimination
As a rule, the creation of personality profiles is inadmissible unless it is justified in individual cases on the basis of specific requirements in the job profile. According to Art. 22 GDPR, the software may not make personnel decisions (e.g. hiring, promotion, dismissal) itself, but may only provide assistance in the decision-making process. In order to ensure the non-discriminatory use of analytics software, the software provider should explain what precautions it has taken to avoid risks under the General Equal Treatment Act (AGG). Finally, the recent rulings of the ECJ of 30.03.2023 and 04.05.2023 must be taken into account in order to ensure the correct legal basis for the processing of personnel data in each individual case.
Threat of fines in the millions
In conclusion, it can be said that, alongside imagination, technology is setting ever fewer limits on the use of AI. However, any use of AI should be well thought out beforehand in order to profit from technical progress in the long term; otherwise there is a risk of unwelcome mail from the supervisory authorities. Against the background of the fines in the millions that have already been imposed for data protection violations in Germany, it is urgently recommended to carry out and document a data protection impact assessment before using artificial intelligence.