
The Evolution of the Perception of Artificial Intelligence in the EU: The Case of Judicial Administration

Kalliopi Terzidou

Abstract

Efficiency of judicial administration is one of the priorities of justice systems; it acts as a means to achieve effective administration of justice and wider access to courts through minimum spending of resources. One element associated with a satisfactory level of court efficiency is the integration and use of digital technologies by judicial staff. Artificial Intelligence (AI) stands out as a superior alternative to traditional digital technologies due to its use of Machine Learning (ML) to achieve designated goals. This article traces the evolution of EU policymakers’ understanding of AI in the context of EU Member States’ courts integrating AI systems to efficiently automate their judicial administration. By comparing AI definitions provided by EU bodies, specifically referencing the proposed AI Act, this article highlights the commonly accepted characteristics of AI. Additionally, it examines arguments put forth by leading computer scientists regarding the interpretation of “intelligence” in artificial artifacts. We will find that AI systems are perceived as systems employing ML and logic and knowledge-based approaches that are capable of mimicking basic human cognitive functions to autonomously automate manual tasks. These findings are followed by remarks on the necessary steps for the integration of AI-based applications in EU justice systems.


1. Introduction*

1

The advent of the COVID-19 pandemic functioned as a magnifying glass into the internal operation of courts and their inefficiencies in handling incoming applications and ongoing proceedings. Questions of prioritization of cases, selection of judges, and realization of (online) hearings had to be considered by national authorities competent for the organization of courts. Important factors for consideration included the protection of the rights of individuals, the resources available to courts for technical equipment, and the training of judicial staff to learn how to use digital systems. [1] Due to the suspension of physical presence in courthouses, the use of digital technologies was important in ensuring that the judicial branch would remain accessible to citizens applying for court proceedings.

2

This response to the health crisis highlighted not only the contribution of digital technologies to the effective administration of justice but also the lack of their systematic integration and use by judicial staff. Firstly, digital systems were not tailored to the remote conduct of judicial administration and hearings. Courts preferred online videoconferencing platforms, such as Zoom or Skype, over their own systems to conduct virtual hearings due to the former’s user-friendliness, despite the risks of data protection breaches. [2] Secondly, judicial staff did not always possess the necessary digital skills to operate the systems due to their lack of training, resorting instead to paper-based processes that might have been inadequate for remote proceedings during the health crisis. Thirdly, the digital systems currently in use by courts are not interoperable enough to enable the exchange of information among national or even international judicial authorities. However, there are efforts to enhance interoperability among European states’ justice systems: the e-CODEX project (e-Justice Communication via Online Data Exchange) was launched to facilitate the secure cross-border exchange of judicial information. This is achieved through the communication of encrypted data between connected gateways installed in the legal authorities of Member States, including a validation tool for electronic signatures. [3] Currently, though, these projects may not be as widely employed as necessary to achieve a satisfactory level of interoperability throughout the EU.

3

Artificial Intelligence (AI) is a digital technology that is considered superior to traditional alternatives in automating manual tasks. Artificial agents have been characterized as autonomous in optimizing their performance, interactive with their environment by receiving input data and producing output values, and adaptive by altering their parameters to adjust to their current environment. [4] These characteristics can compensate for the disadvantages of traditional digital systems by offering customized digital solutions for judicial staff and interoperability with external systems, further enhancing the efficiency of courts.

4

"Efficiency" is an economic concept that can be applied to courts to indicate the successful accomplishment of their objectives, particularly the administration of justice within a specific society, while utilizing minimal financial resources, time, and effort. Automation of tasks through technological means theoretically allows for minimum processing time of cases and administrative tasks, leading to less efforts by judicial staff in the execution of manual tasks. But this might not necessarily be the case, especially when considering the significant funds required for the procurement, purchase, installment, monitoring, and maintenance of the system, along with the training sessions necessary for the staff to familiarize themselves with its operation. Pending empirical studies, this article considers automation of judicial administration through the integration of AI systems as something that improves courts’ efficiency.

5

The article will trace changes in the perception of AI technology by EU bodies over time, in particular regarding attempts to increase the efficiency of judicial administration through the introduction of AI applications. This is achieved by collecting and comparing selected definitions of AI produced by EU bodies to determine the common understanding of the technology’s characteristics, as well as some of its applications in the judicial administrative field. In this context, the proposed AI Act will be reviewed with a focus on the regulatory provisions on AI systems posing a high risk to the safety and fundamental rights of EU citizens. To further delineate the characteristics that render AI technology a factor towards a more efficient judicial administration, the meaning of “intelligence” is explored through a review of arguments made by leading authorities in the computer science field. The article concludes with thoughts on the successful integration of AI systems in EU Member States’ courts.

2. DEFINING AI IN THE JUSTICE FIELD

6

There is no single definition of AI. Many actors, including international bodies, private corporations, and civil society organizations, have attempted to provide a definition to inform their policies, develop their products, or pursue their mandate, respectively. However, no matter the type of actor, a working definition is important to ensure a common perception of AI systems by all members of the given organization. Especially at the international level, policies to regulate the development and use of AI must define early on what this technology entails, so that Member States entering into relevant agreements are aware of the scope of the regulations and can align their interests accordingly. This section begins with an overview of AI definitions given by EU bodies to determine the general understanding of the technology’s features, before moving to an overview of AI-based applications for the automation of judicial administration.

2.1. Understanding of AI by EU Policymakers

7

EU bodies are becoming gradually more interested in regulating aspects of AI use in the public and private sectors, considering not only the growing use of its applications but also its reported risks. AI systems have been criticized, most notably, for the “black box” effect, owing to the opaqueness of their internal processes and/or the inability to explain these processes in an intelligible manner. Another observable risk is the production of biased outputs that lead to discrimination against certain protected groups in society, either due to the use of biased data for the training of the system or the correlation of data that can indirectly reveal information on protected grounds, such as race or religion. During policymaking processes, EU bodies define the subject-matter of the legal act, resulting in diverse definitions of AI.

8

The High-Level Expert Group on Artificial Intelligence (“The Group”) of the European Commission published a definition of AI in 2018, with the aim of establishing a common understanding of the term that can serve as a starting point for future AI policies on an EU level. The Group states that:

“Artificial intelligence (AI) refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”  [5]

9

This definition is an oversimplification of the technical nature of AI, but it still offers an insight into the characteristics of the technology. The Group places an emphasis on the process that AI systems follow to achieve the goal set by the developer. The algorithmic system is designed to perform a specific task, constituting its goal, and the developer must then train the algorithmic system with input data so it can provide an output. This process can be achieved through different AI techniques. The definition refers to a non-exhaustive list, including “machine learning,” “machine reasoning,” and “robotics” techniques. An important technique that is not mentioned, but might be implied, is Natural Language Processing (NLP), which concerns the analysis of text or speech (Automatic Speech Recognition – ASR) training data, so that tasks such as the filing of court documents or the transcription of a trial can be performed. NLP techniques fall under the wider spectrum of AI technology and can employ ML techniques for advanced statistical analysis, for example, to perform pattern recognition for the searchability of court documents. [6] They can also use Deep Learning (DL) approaches, which are even less dependent on human intervention and can allow for the processing of larger sets of unstructured data to determine the distinctive features among different categories of data. [7] Another issue is that robotics is a branch of engineering that does not necessarily involve the use of AI for the execution of commands. Hence, it may not be considered a distinct category of techniques that specifically involves AI.

10

In 2021, the European Commission published the Proposal for an AI Act to regulate the placing on the market, putting into service, and use of AI systems in the EU, including rules on transparency, monitoring, and market surveillance (Article 1). [8] Article 3 (1) of the Proposal defines AI systems as:

“…software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

11

Annex I of the Proposal further specifies the techniques used for the development of AI software, being (i) ML approaches, including DL; (ii) logic and knowledge-based approaches, including knowledge representation and reasoning and expert systems; and (iii) statistical approaches, Bayesian estimation, and search and optimization methods. This definition is differentiated from The Group’s attempt in that it does not provide a high-level explanation of how AI systems function to achieve a certain goal, making it difficult for a person without a basic computer engineering background to familiarize themselves with the subject matter of the Proposal. In addition, the Proposal’s definition provides more concrete examples of AI techniques, excluding “robotics” and distinguishing between logic and knowledge-based approaches on the one hand, and search and optimization methods on the other. In The Group’s definition, these two approaches coexisted under the category “machine reasoning.” Their separation might be attributed to the fact that search and optimization methods might rely more on machine learning than machine reasoning, according to The Group’s distinction. Logic and knowledge-based approaches seek to represent information (i.e. processed data) in a machine-readable manner, so the system can complete complex tasks, possibly using reasoning techniques that resemble human logic. However, machine reasoning approaches, such as ontologies, can be employed in search-related tasks, most notably to offer a repository of legal terms that are represented not only under their syntactic but also their semantic meaning, acting as available keywords in search queries. [9]

12

Pending the joint adoption of the Proposal by the EU Parliament and the Council of the EU, the latter body has released several political agreements (“General Approaches”), establishing certain amendments to the text of the Proposal. In December 2022, the Council recommended an alternative definition for AI systems. [10] Article 3 (1) defines an AI system as:

“…a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.”

13

The most notable difference from the Proposal’s definition is the exclusion of the category of statistical approaches, placing the Council’s definition in line with definitions provided by other international organizations. [11] These AI techniques might be considered more traditional in comparison with ML and logic or knowledge-based approaches, thus not posing the same challenges that require the regulatory interventions established in the Proposal, including risks to the safety and fundamental rights of EU citizens. Another reason might be the intention to establish a sufficiently wide regulatory sandbox for the promotion of innovation and the creation of an attractive environment for business and investment within the EU. This is important if the Union is to become competitive with the U.S. and Chinese jurisdictions regarding the development and dissemination of AI systems in the market.

14

An interesting feature of the Council’s definition is the mention of “generative AI systems,” in relation to content production. Generative AI systems are generally regarded as general-purpose AI systems. According to Article 3 (1b) of the General Approach, a General Purpose AI System (GPAIS) “…is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems.” The main difference between AI systems and GPAIS seems to be that while GPAIS are intended to be part of multiple AI systems and apply to multiple domains, traditional AI systems are stand-alone and designed for a specific goal (“…for a given set of human-defined objectives…”). However, the indication in the definition that GPAIS “may” be used in multiple contexts and as a part of multiple AI systems implies that they might also be designed for a specific context and to fit a specific AI system, putting into question the generality of their nature. [12]

15

This distinction is important since Article 4b of the General Approach states that GPAIS may be used as “high-risk” AI systems or as their components. High-risk AI systems are regulated under Title III of the Proposal and denote systems that pose a high risk to the health and safety or fundamental rights of natural persons, depending on the performed function, purpose, and intended modalities of the system. These systems must be developed according to a set of requirements prescribed in Articles 8-15 of the Proposal. These requirements concern accountability, transparency, and technical safety goals, ranging from record-keeping (Article 12) to the provision of information to users (Article 13) and human oversight (Article 14). Apart from high-risk AI systems, the Proposal establishes different levels of risk, namely unacceptable (prohibited practices that contravene Union values and are likely to manipulate users’ subconscious or take advantage of vulnerable groups), limited (slight risk of manipulation of users in not realizing that they do not interact with a machine, necessitating transparency obligations), and minimal (not considerable). [13]

16

The common elements of the EU bodies’ definitions of AI are that the systems pursue specific goals through certain techniques, namely ML and logic or knowledge-based approaches. It is evident that EU representatives started with a wider approach and gradually narrowed down the definition of AI systems, to the point of excluding statistical and related approaches. Despite the restriction of the scope of AI systems to ML and logic or knowledge-based techniques, the Council’s definition might still be considered technologically neutral to the extent that these techniques encompass a broad field of AI sub-techniques, functionalities, and applications. This renders the Proposal applicable to a variety of AI systems developed in the EU and/or addressed to EU citizens, guaranteeing the safety and rights of users throughout the entire lifecycle of the AI system.

2.2. The Use of AI Systems in Judicial Administration

17

“Judicial administration” or “administration of courts” represents the sum of tasks necessary for the internal organization of courts. These tasks can be purely managerial in nature, encompassing back-office duties for the operation of the courthouse and the management of personnel. At the same time, they can be ancillary to the adjudicatory work of judges, in other words assisting them with the systematization of case management and decision-making. Judicial administration is carried out by judicial staff, including judges, prosecutors, judicial assistants, and administrative personnel or clerks. AI systems designed to automate judicial administrative tasks have been classified in various ways throughout recent academic literature.

18

Sourdin makes a distinction among supportive, replacement, and disruptive technologies, under which AI technology may be used, respectively, to support online information services on justice processes, to replace physical court proceedings with online proceedings using videoconferencing tools, and to inform judges’ decisions by applying prediction models. [14] Reiling distinguishes between three main categories of AI uses: the organization of information through the recognition of patterns in documents and files to discover information, the provision of advice to individuals on possible solutions to their problem, and the “prediction” of the outcome of court proceedings. [15] Terzidou reviews AI uses according to the stage of proceedings they contribute to, namely pre-trial, hearing, and post-sentencing proceedings. [16] Examples include, respectively, the provision of information on court proceedings using chatbots, the transcription of the courtroom procedure, and the anonymization of court decisions. A major part of the reviewed technologies has a managerial character in automating tasks that concern back-office duties, with the exceptions of document discovery and predictive models, which represent the advisory potential of AI applications for judges’ decision-making process.

19

To illustrate the use of AI applications in an advisory role: predictive analytics are engineered into such systems to predict defendants’ future behavior or the court’s most probable decision outcome based on previous patterns. In the former scenario, algorithmic systems are reportedly used to measure the risk of convicted people reoffending, in order to decide whether they are eligible for parole. The COMPAS system determines the risk of defendants reoffending in the future based on a risk score derived from their responses to a 137-question survey, complemented by information from their criminal record. [17] In the latter case, AI systems predict the whole or part of the outcome of the hearing proceedings. Aletras et al. used ML and NLP techniques to predict decisions of the European Court of Human Rights in cases concerning Articles 3, 6, and 8 of the European Convention on Human Rights, mainly relying on the facts of the case to reveal patterns in the case law documents. [18] Additionally, the DataJust project, led by the French Ministry of Justice, aims to offer the public indicative benchmarks for compensation in cases of physical harm, by processing court decisions to extract and exploit data concerning “the amounts requested and offered by the parties to the proceedings, the assessments proposed within the framework of procedures for the amicable settlement of disputes and the amounts allocated to victims by the courts.” [19]
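To make the latter scenario concrete, the following is a minimal, purely illustrative sketch of the kind of text-classification pipeline used in studies such as Aletras et al.: n-gram features are extracted from the facts of a case and fed to a linear classifier that labels the likely outcome. The miniature corpus, labels, and example case below are hypothetical stand-ins for a real dataset of annotated judgments.

```python
# Illustrative outcome-prediction sketch in the spirit of Aletras et al.:
# n-gram features from case facts feed a linear support vector classifier.
# The corpus and labels are hypothetical (1 = violation found, 0 = none).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

facts = [
    "The applicant was detained without access to a lawyer for ten days.",
    "The applicant's correspondence with counsel was monitored in prison.",
    "The court heard the applicant promptly and in public.",
    "The applicant received a reasoned judgment within a reasonable time.",
]
outcomes = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),  # unigram-to-trigram features
    LinearSVC(),                          # linear classifier over those features
)
model.fit(facts, outcomes)

# Predicting the outcome of an unseen, hypothetical case.
new_case = ["The applicant was held incommunicado and denied legal assistance."]
print(model.predict(new_case))  # e.g. [1] -> predicted violation
```

On a real corpus, such a model only captures correlations between the wording of the facts and past outcomes; it does not reason about the law, which is precisely why its output can at most inform, rather than replace, judicial assessment.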

20

It is important to note that the above systems merely inform judges’ decision-making by providing further grounds for their reasoning, or assist individuals in deciding whether to resort to courts for the resolution of their case. In Europe, there is no application that replaces the role of judges in awarding binding and enforceable judgments. In 2019, a magazine article was released concerning the design of a robot judge for the adjudication of small claims disputes based on the analysis of information uploaded by the parties, a project allegedly coordinated by the Estonian Ministry of Justice. [20] This report, however, was subsequently characterized as “misleading” by the Ministry, which stated that it does not pursue such a project. [21] The replacement of judges by AI systems automating the decision-making process would likely undermine the legitimacy of the trial and the acceptance of the final judgment, given that such systems cannot currently replicate the reasoning of judges, characterized by well-structured arguments on how legislative provisions and/or case law apply to the facts of the case. [22] The machine’s logic in adhering to its pre-programmed rules cannot be compared with such reasoning, because it can be expressed only in technical terms that are not humanly intelligible and must be treated by developers in order to circumvent the “black box” effect and derive some kind of explainability. Nevertheless, there are techniques that attempt to enhance algorithmic transparency and mimic human reasoning. These approaches are explored in the next section.

2.3. The Interest of the EU in AI-Assisted Judicial Administration

21

In the EU, Member States’ courts express a preference for the development of AI-based applications with a managerial role, automating administrative tasks for the efficiency of the courthouse. National competent authorities are prioritizing the development of AI systems automating, fully or partially, the anonymization or pseudonymization of judgments, the searchability of court documents for legal research, the analysis of evidence, the filing of court documents, the transcription of the trial, the translation of court documents, and internal and external communications. [23]

22

The interest of Member States in integrating AI-based systems in their judiciaries is further reflected in Recital 40 of the Proposal for an AI Act, which states that AI systems “…intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts…” should be qualified as high-risk, not including AI systems “…intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases…” The Recital provides examples of such purely ancillary administrative activities, namely the anonymization of court documents, the communication between personnel, and the allocation of resources. This differentiation of administrative tasks validates the distinction marked above between AI applications automating tasks related to back-office duties and those concerning the decision-making process, while highlighting the importance that the European Commission places on the high level of risk posed by AI systems used to research, interpret, and apply the law.

23

An illustration of a high-risk AI system used by judges for the purpose of retrieving legislative and case law resources in preparation for a hearing would be OpenAI’s chatbot ChatGPT. ChatGPT is, in fact, built on a language model fine-tuned with Reinforcement Learning techniques, upon which OpenAI developed a chatbot that reacts to users’ prompts in a conversational manner and generates suitable responses. [24] There have already been reports of judges using the chatbot, admittedly outside the EU, posing questions regarding the rules applicable to a given legal issue to facilitate their decision-making process, albeit also taking into consideration past case law to arrive at their final decision. [25] Even if the output of the chatbot is not the sole or main basis of the judge’s final decision, these generative AI systems can be characterized as high-risk due to the challenges they pose to case management prior to and during the trial. It is possible that chatbots are not trained with sufficient or domain-specific input data, or are trained with data collected through sources of misinformation, thus providing judges with insufficient and/or inaccurate legal information that might lead them to misapply legislation and jurisprudence in a given case. Therefore, generative AI systems must be carefully designed and developed by developers and providers alike, in accordance with the Proposal’s requirements on high-risk AI systems.

24

The review of the general understanding of AI through the EU bodies’ definitions and of AI applications in the justice field revealed that AI systems are primarily considered to be based on ML and logic or knowledge-based approaches, applied in judicial administration to automate back-office tasks and assist judges with their decision-making process. The following section expands upon the concept of “intelligence” in relation to artificial artifacts as a further step in determining the components of AI systems that are most conducive to raising the efficiency of judicial administration in EU Member States’ courts.

3. THE INTELLIGENCE OF AI SYSTEMS IN JUDICIAL ADMINISTRATION

25

“Intelligence” is an abstract concept that is normally associated with human beings. Yet, it is the second component of the term “Artificial Intelligence,” hinting at the ability of machines to mimic the cognitive functions of human beings. This section attempts to understand what “intelligence” means in relation to artificial artifacts through a review of arguments by leading computer scientists and of the operation of selected AI applications.

3.1. Perspectives on the Intelligence of Artificial Artifacts

26

The Cambridge Dictionary defines “intelligence” as “the ability to learn, understand, and make judgments or have opinions that are based on reason,” [26] competences generally associated with human beings. In the computer science field, John McCarthy claimed that intelligence is “the computational part of the ability to achieve goals in the world,” specifying that AI does not have to restrict itself to biologically observable methods but can also involve computational methods that are not found in human beings. [27] He then explains that these computational methods cannot generally be characterized as intelligent because humans themselves do not yet understand all the mechanisms of intelligence.

27

Earlier work attempted to establish machines’ potential to display intelligence by mimicking human reasoning. Turing devised a test, called the “Imitation Game,” to determine whether machines, that is digital computers, can think or operate as a human would. The test requires three participants: a human interrogator, a human respondent, and a machine respondent; if the interrogator cannot tell the difference between her interactions with the human and the machine, then the machine passes the test. [28] In the same paper, Turing mentioned two objections to his thesis: Lady Lovelace’s argument that a machine does not originate an act but can only perform based on pre-programmed orders, and Professor Jefferson’s view that a machine is not driven by thoughts and emotions to perform a task, nor can it be emotionally affected by its accomplishments or failures.

28

In light of the above, AI applications for judicial administration could be viewed as “thinking” agents in that they carry out previously manual tasks in a way that humans would, but only because they are originally programmed to do so by human developers. Accordingly, AI systems cannot be considered fully autonomous, since there is always a human in the loop operating the system, even if it alleviates much of the effort spent on the performance of a judicial task. For instance, speech-to-text systems are used to transcribe the trial by transforming recorded speech files uploaded to the server into text. [29] The clerk, however, has to upload these files to the system and remains in control of the application by verifying the accuracy of the transcribed text with her signature, while technical issues can be communicated to the IT expert, who can make any necessary adjustments to the system.
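This human-in-the-loop character can be sketched as a simple workflow: the system produces a draft, and nothing becomes official until the clerk has verified and signed it. In the sketch below, the transcribe function is a hypothetical placeholder for whatever speech-to-text engine a court actually deploys; only the control flow is the point.

```python
# Sketch of the human-in-the-loop transcription workflow described above:
# the system drafts, the clerk verifies and signs off. transcribe() is a
# hypothetical stand-in for the court's actual speech-to-text engine.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transcript:
    text: str
    verified_by: Optional[str] = None  # clerk's signature, set only after review

def transcribe(audio_path: str) -> Transcript:
    # Placeholder: a real system would run ASR over the uploaded recording.
    return Transcript(text=f"[draft transcript of {audio_path}]")

def clerk_review(draft: Transcript, corrected_text: str, clerk: str) -> Transcript:
    # The clerk remains in control: the draft becomes part of the record
    # only once a human has checked it and attached their signature.
    draft.text = corrected_text
    draft.verified_by = clerk
    return draft

draft = transcribe("hearing_2023_04_12.wav")
final = clerk_review(draft, corrected_text="(verified text)", clerk="Clerk A. Example")
assert final.verified_by is not None  # unverified drafts are never filed
```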

29

The autonomy of an AI system is better perceived in its ability to interact with and adapt to its environment through the improvement of its performance over time, being continually trained with new data inputs to build on its past performance. AI systems for information retrieval, which assist judges in finding legislation and jurisprudence by searching structured documents and files, can always improve their accuracy by being trained with larger datasets. The challenge in optimizing AI systems trained with legal data is that legal documents are long, they display a complex structure and legal terminology, and datasets with domain-specific documents are rare. [30]
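As an illustration of the retrieval task, the sketch below ranks a hypothetical collection of provisions against a query using TF-IDF vectors and cosine similarity; production systems would be built on far larger, domain-specific corpora and more sophisticated text representations.

```python
# Minimal retrieval sketch: rank provisions against a query by TF-IDF
# cosine similarity. The three "documents" are a hypothetical collection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Everyone is entitled to a fair and public hearing within a reasonable time.",
    "No one shall be subjected to torture or to inhuman or degrading treatment.",
    "Everyone has the right to respect for his private and family life.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)   # index the collection

query_vector = vectorizer.transform(["right to respect for private life"])
scores = cosine_similarity(query_vector, doc_vectors)[0]

print(documents[scores.argmax()])  # retrieves the private-life provision
```

Re-indexing the collection as new documents arrive is the simple sense in which such a system “improves” with more data.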

30

AI systems could also demonstrate their “thinking” ability by mimicking more complex cognitive tasks. Research projects are focusing on the reproduction of legal reasoning by artificial agents, a process that otherwise requires considerable time and effort from legal professionals. It has to be noted, however, that AI systems perform legal reasoning in a computational or mathematical manner: the concepts argued are closed-ended rather than open-ended, the context of argumentation is similarly well-defined rather than consisting of incomplete information, and the conclusions are objective and definite rather than subjective and open to further discussion and amendment. [31] As a result, the mechanical analysis of legal texts is distinct from the reasoning of legal professionals on abstract legal concepts and might render the relevant AI systems unsuited for case management in the criminal branch, where judges must often deal with legal terms and concepts that are open to interpretation and difficult to computerize.

3.2. Intelligent AI Applications for the Automation of Judicial Administration

31

The “intelligence” of AI systems in (semi-)autonomously completing previously manual tasks through the imitation of basic cognitive features can be demonstrated in several judicial applications. Taking the example of AI systems for the anonymization or pseudonymization of judgments in compliance with personal data protection rules, NLP techniques might be employed for the annotation of entities and their replacement with labels in a consistent manner, so that the same entity is assigned the same label throughout the text. [32] There is some mimicking of human intelligence in the processing of textual data to find personal information and replace it with the designated labels. However, human input is still needed to verify and, if needed, correct the output of the algorithm, especially in cases where there is a lack of consistency in the anonymization of the same entity throughout the text. [33]
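A minimal sketch of this consistent-labelling step is given below, assuming spaCy and its small English model (en_core_web_sm) are installed; real anonymization pipelines use models trained on court documents and, as noted, human verification of the output.

```python
# Sketch of NER-based pseudonymization: each entity is mapped to one
# label and that mapping is reused, so the same entity receives the
# same label throughout the text.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def pseudonymize(text: str) -> str:
    doc = nlp(text)
    mapping = {}    # entity text -> consistent label, e.g. "[PERSON_1]"
    counters = {}   # per-entity-type counters
    for ent in doc.ents:
        if ent.label_ in ("PERSON", "ORG", "GPE") and ent.text not in mapping:
            counters[ent.label_] = counters.get(ent.label_, 0) + 1
            mapping[ent.text] = f"[{ent.label_}_{counters[ent.label_]}]"
    for name, label in mapping.items():
        text = text.replace(name, label)  # same entity, same label everywhere
    return text

judgment = "John Smith sued Acme Corp. The court heard John Smith in Luxembourg."
print(pseudonymize(judgment))
# e.g. "[PERSON_1] sued [ORG_1]. The court heard [PERSON_1] in [GPE_1]."
# (exact output depends on the NER model's entity spans)
```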

32

Regarding examples of computational legal reasoning, compliance checking applications automate the assessment of a real-world incident in terms of its compliance with a norm, which in this context means the way a provision is applied. This can be achieved through ontologies expressed in languages such as OWL, used for knowledge modeling in the Semantic Web, where real-world incidents are represented as ontological individuals and norms are represented as restrictions on ontological properties, reflecting the legal restraints that individuals must comply with. [34] Legal reasoning automated through ontologies further enables the explainability of AI systems, that is “…the ability to explain both the technical processes of an AI system and the related human decisions…” in a humanly understandable way, [35] without resorting to ML methods whose workings can only be viewed in numerical terms. Explainable processes can lead to accountability for algorithmic outcomes and to redesigning in cases of malfunctions or necessary updates.
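A minimal sketch of the norm-as-restriction idea follows, using the owlready2 Python library for OWL modeling; the ontology IRI, class names, and the norm itself are invented for illustration, and a real pipeline would run a description-logic reasoner over a curated legal ontology.

```python
# Sketch of compliance checking with an OWL ontology (owlready2):
# a norm is modeled as a restriction defining the class of compliant
# individuals. The ontology and norm below are purely illustrative.
from owlready2 import get_ontology, Thing, DataProperty, FunctionalProperty

onto = get_ontology("http://example.org/compliance.owl")  # hypothetical IRI

with onto:
    class Judgment(Thing):
        pass

    class anonymized(DataProperty, FunctionalProperty):
        domain = [Judgment]
        range = [bool]

    # The norm "published judgments must be anonymized" as a defined class:
    # compliant judgments are exactly those with anonymized = true.
    class CompliantJudgment(Judgment):
        equivalent_to = [Judgment & anonymized.value(True)]

# A real-world incident represented as an individual of the ontology.
case_123 = Judgment("case_123")
case_123.anonymized = True

# A DL reasoner (e.g. owlready2's sync_reasoner(), which requires Java)
# would classify case_123 under CompliantJudgment automatically; here
# the restriction is checked by hand for illustration.
print(case_123.anonymized is True)  # True -> the incident satisfies the norm
```

Because the norm lives in the ontology as an explicit restriction, the reason an incident is classified as compliant or non-compliant can be read off directly, which is the explainability advantage noted above.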

33

Continuing the discussion of the COMPAS system, an ontology could be created to represent the concept of “recidivism,” accompanied by different properties representing the indicators mentioned by the provider Northpointe, such as criminal history, criminal associates, and drug involvement. [36] The conceptualization of “recidivism” as an ontology and the tagging of its distinct properties would allow users, in this case judges, to infer logical similarities among these properties in an explainable manner. In this way, they could understand how each indicator contributed to the predicted risk score, so as to detect instances of adverse bias where indicators based on protected grounds, such as race or religion, have contributed to the algorithmic output more than permitted by the threshold established by competent authorities.
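The kind of threshold check this would enable can be sketched in a few lines. Since COMPAS is proprietary, the indicators, contribution shares, and permitted cap below are all hypothetical; the point is only the explainable test that tagged properties make possible.

```python
# Hypothetical contribution check over tagged "recidivism" indicators:
# flag any indicator that may proxy for a protected ground and exceeds
# the share permitted by a competent authority. All figures are invented.
contributions = {                       # share of the risk score per indicator
    "criminal_history": 0.40,
    "criminal_associates": 0.30,
    "drug_involvement": 0.15,
    "neighborhood_of_residence": 0.15,  # may indirectly proxy a protected ground
}

protected_proxies = {"neighborhood_of_residence"}
threshold = 0.10                        # hypothetical cap set by the authority

for indicator, share in contributions.items():
    if indicator in protected_proxies and share > threshold:
        print(f"Review: {indicator} contributes {share:.0%}, above the {threshold:.0%} cap")
```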

34

The “thinking” process of AI systems is still of a mathematical nature and realized within the strict limits of the goals set by developers, confirming Lady Lovelace’s argument on the inability of machines to originate an action. Nor are machines conscious in recognizing the reasons behind their actions and taking pride in their accomplishments, in line with Professor Jefferson’s view; instead, they act upon their programmed rules. Nevertheless, machines can still perform an action that could be realized by a human, mimicking minimal cognitive capabilities. Placing such a system under Turing’s test, the human interrogator might not be able to distinguish between the machine and the human participant completing a manual task, suggesting that AI systems are intelligent in this restricted fashion. Combined with their autonomous character, though not autonomous enough to replace their users, AI systems could theoretically yield efficiencies in judicial administration by automating a considerable number of judicial tasks, thus minimizing the time and effort spent on back-office duties and, ultimately, disposition time. In addition, AI predictive systems can improve the quality of the adjudication process by providing judges with additional grounds for their decisions, consisting of the systems’ outputs, which can be assessed for possible adverse biases or other defects through techniques, such as ontologies, that render AI systems explainable.

4. FINAL REMARKS

35

This paper highlighted the evolution of the understanding of AI by EU policymakers and its perceived efficiencies for the judicial administration of EU Member States’ courts. In the first section, it was shown that the definition of AI systems by EU bodies has been gradually narrowed to refer to ML and logic or knowledge-based techniques. The literature review revealed that AI applications in judicial administration can be categorized into AI systems automating managerial, back-office tasks and those assisting judges in legal research or in predicting post-sentencing parameters, including the amount of compensation to be attributed to the injured party. AI systems assisting judges during the decision-making process are considered high-risk systems by the Proposal for an AI Act and must be developed in compliance with certain requirements of a technical and governance nature. In the second section, AI systems were claimed to be “intelligent” in terms of their computational ability to achieve the goal set by human developers, mimicking basic cognitive functions, and of their autonomy in improving their performance over time by being trained on new data inputs and “learning” from past performances.

36

Certain steps must be taken to ensure the successful integration of AI systems in judicial administration and, consequently, the realization of the potential efficiencies in time and effort management. More specifically, AI applications must adhere to the relevant legal requirements, be securely developed, and follow specific rules for their sound integration and systematic use in courts. The use of AI systems in the justice field must primarily adhere to the right to a fair trial, meaning that they must support access to courts and safeguard the independence and impartiality of the judiciary, along with the fairness of court proceedings. [37] Further legal requirements include the protection of personal information during the training and performance of the algorithm, so that processing is done in a lawful and transparent manner, for clearly stated purposes, and to the extent necessary, retaining the data in an updated form and for no longer than necessary. [38]

37

Moreover, AI systems must be technically secure and robust throughout their design, development, use, and possible redesign. The High-Level Expert Group on AI states that AI systems must adhere to several standards, including human oversight (continuous human control), technical robustness and safety (accuracy, reliability, and safety from cyberattacks), transparency (documentation and communication of the technical processes in a humanly understandable manner for accountability purposes), and non-discrimination (no reproduction of discrimination based on protected grounds, such as gender). [39] The Proposal for an AI Act further develops these standards according to the level of risk that the AI system presents, ranging from data management and documentation for high-risk systems to transparency measures for limited-risk systems.

38

Finally, the process of integrating AI applications in courts must be regulated so that AI systems can produce legal effects and accountability can be attributed when checking the outputs of AI systems against the existing legal certification. [40] On a national level, few policy or legal documents exist for the regulation of the use of AI in the judiciary; [41] however, national courts in Europe have ongoing AI projects for the automation of their judicial administration that, once concluded, will need to be formalized by a state act or equivalent to be integrated into national justice systems. On a regional level, the Proposal for an AI Act proves that EU bodies and Member States are interested in the uniform regulation of AI systems in the public sector, including the judicial branch, even in the case of high-risk AI applications, which must conform with harmonized standards to be introduced into national courts.

* By Kalliopi Terzidou, LL.M.; Doctoral Researcher; Faculty of Law, Economics, and Finance; University of Luxembourg. The present paper has been written in the context of the author’s doctoral research, funded under the PRIDE funding program (DILLAN) of the Fonds National de la Recherche Luxembourg.



[1] Council of Europe, ‘The Functioning of Courts in the Aftermath of the Covid-19 Pandemic’ (2020) <https://rm.coe.int/the-functioning-of-courts-in-the-aftermath-of-the-covid-19-pandemic/16809e55ed> accessed 15 August 2022.

[2] Anne Sanders, ‘Video-Hearings in Europe Before, During and After the COVID-19 Pandemic’ (2021) International Journal for Court Administration <https://doi.org/10.36745/ijca.379>, 12-14.

[3] E-CODEX Website, ‘Technical Solutions’ <https://www.e-codex.eu/technical-solutions> accessed 16 August 2022.

[4] Luciano Floridi and J.W. Sanders, ‘On the Morality of Artificial Agents’ (2004) Minds and Machines 14, no. 3 <https://doi.org/10.1023/B:MIND.0000035461.63578.9d> 357–362.

[5] High-Level Expert Group on Artificial Intelligence, ‘A Definition of AI’ (2018) <https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_definition_of_ai_18_december_1.pdf> 7.

[6] Gokul Prasath, ‘Difference between Machine Learning, Artificial Intelligence and NLP’ (2019) Medium (blog) <https://medium.com/@cs.gokulprasath98/difference-between-machine-learning-artificial-intelligence-and-nlp-d82ba64a7f32>.

[7] IBM, ‘What Is Machine Learning?’ (2021) <https://www.ibm.com/cloud/learn/machine-learning> accessed 27 April 2022.

[8] European Commission, ‘Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ (2021) <https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021PC0206> Recital 40.

[9] Joost Breuker, Andre Valente and Radboud Winkels, ‘Legal Ontologies in Knowledge Engineering and Information Management’ (2004) Artificial Intelligence and Law 12 <https://doi.org/10.1007/s10506-006-0002-1> 269-273.

[10] General Secretariat of the Council, ‘Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts - General approach’ (6 December 2022) <https://data.consilium.europa.eu/doc/document/ST-15698-2022-INIT/en/pdf>.

[11] See, for example, UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (2021) <https://unesdoc.unesco.org/ark:/48223/pf0000381137> 10; and OECD, ‘Scoping the OECD AI Principles: Deliberations of the Expert Group on Artificial Intelligence at the OECD (AIGO)’ (2019) <https://doi.org/10.1787/d62f618a-en> 7.

[12] Philipp Hacker, Andreas Engel and Theresa List, ‘Understanding and Regulating ChatGPT, and Other Large Generative AI Models: With input from ChatGPT’ (Verfassungsblog, 20 January 2023) <https://verfassungsblog.de/chatgpt/> accessed 7 March 2023.

[13] European Commission, ‘Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts,’ Preamble 5.2.2. – 5.2.4.

[14] Tania Sourdin, ‘Judge v Robot? Artificial Intelligence and Judicial Decision-Making’ (2018) UNSW Law Journal 41, no. 4 <https://www.unswlawjournal.unsw.edu.au/article/judge-v-robot-artificial-intelligence-and-judicial-decision-making/> 1117-1119.

[15] A. D. (Dory) Reiling, ‘Courts and Artificial Intelligence’ (2020) 11(2) International Journal for Court Administration 8 <https://papers.ssrn.com/abstract=3736411> 3-6.

[16] Kalliopi Terzidou, ‘The Use of Artificial Intelligence in the Judiciary and Its Compliance with the Right to a Fair Trial’ (2022) 31 Journal of Judicial Administration <https://orbilu.uni.lu/handle/10993/51591> 157-158.

[17] Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ‘Machine Bias’ (ProPublica, 23 May 2016) <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing> accessed 24 January 2022.

[18] Nikolaos Aletras et al., ‘Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective’ (2016) PeerJ Computer Science 2 <https://doi.org/10.7717/peerj-cs.93> 6-15.

[19] Justice.Fr, ‘DataJust’ <https://www.justice.fr/donnees-personnelles/datajust> accessed 25 January 2022.

[20] Eric Niiler, ‘Can AI Be a Fair Judge in Court? Estonia Thinks So’ Wired <https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/> accessed 18 August 2022.

[21] Ministry of Justice of Estonia, ‘Estonia Does Not Develop AI Judge | Justiitsministeerium’ <https://www.just.ee/en/news/estonia-does-not-develop-ai-judge> accessed 20 June 2022.

[22] Jasper Ulenaers, ‘The Impact of Artificial Intelligence on the Right to a Fair Trial: Towards a Robot Judge?’ (2020) Asian Journal of Law and Economics 11, no. 2 <https://doi.org/10.1515/ajle-2020-0008> 27-28.

[23] Directorate-General for Justice and Consumers (European Commission) and Trasys International, ‘Study on the Use of Innovative Technologies in the Justice Field: Final Report’ (2020) LU: Publications Office of the European Union, <https://data.europa.eu/doi/10.2838/585101> 111-142.

[24] OpenAI, ‘Introducing ChatGPT’ <https://openai.com/blog/chatgpt> accessed 7 March 2023.

[25] Luke Taylor, ‘Colombian Judge Says He Used ChatGPT in Ruling’ The Guardian (3 February 2023) <https://www.theguardian.com/technology/2023/feb/03/colombia-judge-chatgpt-ruling> accessed 8 March 2023.

[26] Cambridge Dictionary, ‘Definition of “Intelligence”’ <https://dictionary.cambridge.org/dictionary/english/intelligence> accessed 19 August 2022.

[27] John McCarthy, ‘What Is Artificial Intelligence?’ <http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html> accessed 19 August 2022, 2.

[28] A. M. Turing, ‘I.—Computing Machinery and Intelligence’ (1950) Mind LIX, no. 236 <https://doi.org/10.1093/mind/LIX.236.433> 433-451.

[29] Tanel Alumäe, ‘Transcription System for Semi-Spontaneous Estonian Speech’ (2012) Human Language Technologies – The Baltic Perspective <https://doi.org/10.3233/978-1-61499-133-5-10> 10-11.

[30] Diego Collarana et al., ‘A Question Answering System on Regulatory Documents’ (2018) Legal Knowledge and Information Systems <https://doi.org/10.3233/978-1-61499-935-5-41> 42.

[31] T. J. M. Bench-Capon and Paul E. Dunne, ‘Argumentation in Artificial Intelligence’ (2007) Artificial Intelligence, Argumentation in Artificial Intelligence, 171, no. 10 <https://doi.org/10.1016/j.artint.2007.05.001> 619-621.

[32] Diego Garat and Dina Wonsever, ‘Automatic Curation of Court Documents: Anonymizing Personal Data’ (2022) Information 13, no. 1 <https://doi.org/10.3390/info13010027> 5-6.

[33] See, for example, Alan Akbik, ‘The Flair NLP Framework’ Institut für Informatik <https://www.informatik.hu-berlin.de/en/forschung-en/gebiete/ml-en/Flair> accessed 11 July 2022.

[34] Enrico Francesconi and Guido Governatori, ‘Patterns for Legal Compliance Checking in a Decidable Framework of Linked Open Data’ (2022) Artificial Intelligence and Law <https://doi.org/10.1007/s10506-022-09317-8> 6-7.

[35] High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’ (2019) Publications Office of the EU <https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1> 18.

[36] Northpointe, ‘Practitioner’s Guide to COMPAS Core’ (2015) <https://s3.documentcloud.org/documents/2840784/Practitioner-s-Guide-to-COMPAS-Core.pdf> 27.

[37] Terzidou, 158-163.

[38] See Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) <http://data.europa.eu/eli/reg/2016/679/oj> Article 5.

[39] High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’ 15-20.

[40] Francesco Contini, ‘Artificial Intelligence and the Transformation of Humans, Law and Technology Interactions in Judicial Proceedings’ (2020) Law, Technology and Humans 2, no. 1 <https://doi.org/10.5204/lthj.v2i1.1478> 8-9.

[41] See, for example, Ministry of Justice of Austria, ‘Digital Justice Strategy’ <https://www.justiz.gv.at/home/service/digitale-justiz.955.de.html> accessed 22 August 2022.


License

Any party may pass on this Work by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing License. The text of the license may be accessed and retrieved at http://www.dipp.nrw.de/lizenzen/dppl/dppl/DPPL_v2_en_06-2004.html.
