Status quo vadis: Limits on the use of artificial intelligence

What do companies have to consider when using artificial intelligence?

“The imagination knows no bounds.” When it comes to the use of artificial intelligence (AI), technology also seems to place hardly any limits on the imagination. This is the impression given by daily media reports about ChatGPT, a generative AI that can write texts and poems, conduct quick research and pass exams at elite American universities.

Many entrepreneurs are asking themselves how they can put the seemingly limitless technical benefits of “thinking” software and algorithms to meaningful use in their companies, especially in working life. Four areas of application stand out. AI for “people analytics” is used for performance evaluation and for testing suitability for new roles, whether in the search for candidates or in the further qualification of talent. AI for “algorithmic management” is aimed at planning and controlling employee activities. AI for “task automation” takes on simpler tasks independently. Finally, employees increasingly use ChatGPT to obtain templates for texts, programming and the like. But do imagination and technology really know no bounds, or are there legal hurdles that make their use risky?

There is still no concrete legal framework for the use of artificial intelligence in Europe. What is exciting is that in April 2021 the EU Commission presented a first draft regulation on the use of artificial intelligence (AI Regulation), which – as things stand at present – could be adopted this year. It is worth taking a closer look at this draft in order to understand the direction in which the legislative efforts are heading. For example, the legislator uses a very broad definition of AI as a basis. According to this, even simple automation processes could fall within the scope of the AI Regulation. A large number of systems that are already in use would therefore have to be reviewed against the AI Regulation.

The AI Regulation defines four classes of risk (unacceptable, high, low and minimal), which are subject to varying degrees of regulation. AI practices that are considered unacceptable, for example because they violate fundamental values of the EU, are prohibited (Art. 5 AI Regulation). Example: evaluation of social behaviour (social scoring). For high-risk AI systems, minimum requirements apply (Art. 8 ff. AI Regulation), which providers and users of the systems must fulfil (Art. 16 ff. AI Regulation). In addition, irrespective of the risk class, transparency requirements apply in particular (Art. 52 AI Regulation). AI systems with a low or minimal risk, on the other hand, are not subject to any special regulation.

Providers of such systems may voluntarily adhere to codes of conduct (Art. 69 AI Regulation). Forward-looking entrepreneurs could already validate planned and existing uses of AI against these provisions, which should make such use legally more secure. The AI Regulation provides important and up-to-date guidance on how the EU Commission envisages regulating the use of AI.

But what legal hurdles currently apply to the seemingly limitless possible uses of AI? The most relevant are the General Data Protection Regulation (GDPR), anti-discrimination rules, regulations on consumer protection and product safety, and co-determination. For example, if a company uses Precire’s speech analysis software to automatically analyse the suitability of applicants on the basis of their language skills, or relies on Hirevue’s video analysis function to create personality profiles, a prior data protection analysis and documentation is recommended. Under Section 26 of the German Federal Data Protection Act (BDSG), which may be in breach of EU law, data processing must be (i) suitable to achieve the predefined purpose, (ii) necessary, i.e. there must be no milder interference with rights, and (iii) the interests of the parties involved must be weighed against each other.

GDPR and anti-discrimination

As a rule, the creation of personality profiles is inadmissible unless it is justified in individual cases on the basis of specific requirements in the job profile. According to Art. 22 GDPR, the software may not make personnel decisions (e.g. hiring, promotion, dismissal) itself, but may only provide assistance in the decision-making process. To ensure the non-discriminatory use of analytics software, the software provider should explain what precautions it has taken to avoid risks under the German General Equal Treatment Act (AGG). Finally, the recent ECJ rulings of 30.03.2023 and 04.05.2023 must be taken into account to ensure the correct legal basis for the processing of personnel data in each individual case.

Fines in the millions threatened

In conclusion, it can be said that, in addition to imagination, technology is setting ever fewer limits on the use of AI, and its use is becoming ever more practical. However, the use of AI should be well thought out beforehand in order to profit from technical progress in the long term. Otherwise there is a risk of unwelcome mail from the supervisory authorities. Against the background of the fines in the millions that have already been imposed for data protection violations in Germany, it is urgently recommended to carry out and document a data protection impact assessment before using artificial intelligence.

Update on international data protection: new standard contractual clauses for international data transfers

Almost a year ago, the ECJ declared the EU-US Privacy Shield invalid (Schrems II) and also raised some questions regarding the EU Standard Contractual Clauses (SCCs), the most important instrument for international data transfers. The EU Commission has now adopted new SCCs, adapting the previous SCCs to the General Data Protection Regulation (GDPR) and also taking the ECJ’s requirements into consideration. However, it quickly becomes clear that the EU Commission has not created a carte blanche for data exchange with the new SCCs. Companies that transfer personal data to third countries such as the USA on the basis of SCCs now have some work to do.


How R U CEE? All you need to know about real estate in CEE

As a result of the collaboration of BNP Paribas Real Estate with act legal, Hays, the French-Polish Chamber of Commerce, the Dutch-Polish Chamber of Commerce and the Belgian Chamber of Commerce, the report provides an overview of the commercial real estate market, broken down into specific segments, from the perspective of developers and investors in Central and Eastern Europe.

act legal professionals from Poland, the Czech Republic, Hungary and Romania share their views on current trends in the CEE commercial property market and legal aspects of COVID-19’s impact, while also commenting on recent amendments to legal and tax regulations, which are relevant for investors.

Handbook for Supervisory Board Members

Dr. Thomas Altenbach provides a comprehensive guide to the key issues for supervisory boards. In particular, he also takes into account the current draft legislation on association sanctions and whistleblower protection. The handbook provides a comprehensive presentation of the tasks of the supervisory board as well as the rights and duties of its members and imparts the knowledge necessary for the diligent performance of office.

The topics range from the election of supervisory board members to the competencies and duties of the supervisory board, the work of the supervisory board in detail, remuneration and reimbursement of expenses, internal investigations, conflicts of interest, and liability and damages, including D&O insurance.

Esports – a booming industry with unique legal needs

Through its rapid growth in the last few years, esports has moved into the public focus and become a relevant new industry, generating revenues of $1.1 billion globally in 2019.

Home to one of the esports leagues, ESL – Electronic Sports League, as well as other relevant esports companies, leading teams and players, Germany has played a vital role in this development. Besides its rapid growth, the esports ecosystem has also matured in terms of professionalism, both economically and legally.

A significant milestone in this regard was the establishment of the German esports association ESBD – ESPORT-BUND DEUTSCHLAND e.V. in 2017 which was initiated in the offices of act legal Germany. Since then, major legal steps have been taken, such as the introduction of the German esports visa in 2020, a dedicated visa category for esports.

When robots decide!

What if your boss were a robot?

“My boss is a robot.” That is what the employees of the Hong Kong investment firm Deep Knowledge Ventures must have thought when a computer algorithm was appointed a full member of the board of directors. And that is exactly how it is: robots are gaining ground in the world of work at record speed. They order goods, work side by side with their human colleagues on assembly lines, distribute products over the Internet and dictate the rules of high-frequency trading. Technological progress is enabling employers to use intelligent systems not only to complement human labour but, in some sectors, also to replace it, including in supervisory positions. It is therefore high time to look at the legal framework (employment law).

Admittedly, the appointment of a computer algorithm to the supervisory board or the management board of a German stock corporation is not (yet) possible (Sections 76(3) sentence 1, 100(1) sentence 1 of the German Stock Corporation Act – AktG). However, a robot taking over a supervisory function is not such a remote prospect. The Japanese multinational Hitachi, for example, already has work instructions issued by intelligent systems in its German subsidiaries. From a legal point of view, nothing prevents the right to issue instructions from being delegated from a human being to a machine, provided the employer ensures, through appropriate programming, that the essential circumstances of each individual case are weighed before the instruction is issued and that the interests involved are taken into account appropriately (Section 315 of the German Civil Code – BGB, Section 106 of the German Trade Regulation Act – GewO). The only apparent limit for the “robot boss” lies in Art. 22(2) GDPR, according to which decisions that have “legal effects” or “similarly significant effects” on the data subject must not be based on automated processing of personal data. The robot boss therefore cannot (yet) function. What is already a reality, and legally permissible after a thorough assessment under data protection law, is for humans to take personnel decisions prepared by algorithms, especially in the case of fully automated application procedures and selection tools.

So yes, robots are replacing workers. But can they also be classified as employees within the meaning of employment and corporate law? Do they count towards the thresholds that determine the number of employees required for establishing a co-determined supervisory board, the size of the works council or the application of the German Protection Against Dismissal Act? Must robots be placed on an equal footing with human employees in a social selection? Even though some authors call for this, the answer must be no in every respect. All thresholds (still) presuppose a natural person, and the social selection would be reduced to absurdity: instead of two employees being dismissed, the “young” robot with no maintenance obligations would probably be the one less worthy of social protection. This would nullify the constitutionally protected entrepreneurial freedom to optimise work processes by using modern technologies. Robots are things, and they remain so.

How “reliable” are machines?

Consequently, even humanoid robots are not themselves liable for injuries to employees or customers and/or for damage they cause to the company’s reputation. Identifying the responsible party depends on whether the damage was caused by faulty programming or by faulty operation of the robot. The legislator has not yet taken action; it is currently being discussed whether the use of intelligent systems should be made subject to a statutory obligation to take out compulsory insurance. The following precautions are recommended in particular: carrying out a risk assessment (Section 5 of the German Occupational Health and Safety Act – ArbSchG), complying with the safety measures set out in the German Ordinance on Industrial Safety and Health as well as the ISO 10218-2011 standard, and involving the works council (Sections 90, 87(1) No. 7 of the German Works Constitution Act – BetrVG).

A robot policy is called for

But how can a management board or supervisory board control self-learning systems that operate on the principle of trial and error and modify themselves automatically? How can a management board act on the basis of “appropriate information” if it has no insight into the black box that sits between the input and output levels? The company’s management must weigh the potential risks in each individual case, closely monitor the development of the robots it uses, and implement compliance systems to protect itself against (legal) violations. It also appears legally necessary to set out the above legal framework conditions in a written “robot policy”. The company’s management is well advised to communicate the use of artificial intelligence transparently, especially to investors, employees and business partners. No one is prevented from investing their money in risky projects, working with robots and/or trading with them. But they must know about it.

New guidance from the US Department of Justice – Requirements for a Compliance Management System

On April 30, 2019, the U.S. Department of Justice (DOJ) published new guidance on the requirements for compliance management systems. It is a revised and expanded version of a document of the same name published in February 2017 and collects more than 120 test questions identifying the criteria that DOJ prosecutors should use to assess compliance programs when determining penalties or entering into a deferred or non-prosecution agreement. This article provides a brief overview.

#MeToo? – Relevant answers

Social interaction at work or sexual harassment?
What employers must do and when.

A brief, too-tight birthday hug, a quick follow-up mail after a nice lunch, an accidental touch while looking at the screen together: are these scenes examples of normal social interaction at work or sexual harassment? What employers must do, and when.

Bridge part-time: the right to return to full-time work

The legislator has once again provided companies with a provision whose unclear wording is likely to end up being contested in court in practice. In addition to “normal” part-time employment, bridge part-time employment will be available from 2019.