Disappearing Authorship: Ethical Protection of AI-Generated News from the Perspective of Copyright and Other Laws

Alžběta Krausová, Ph.D.
Václav Moravec, Ph.D.

Abstract

Artificial intelligence (AI) has been widely recognized as an important game-changer in our digital society. With the help of AI, we can now automate a number of tasks, including the creation of visual, musical, or textual content. An ethical approach to the design, development and utilization of AI systems, as well as their legal compliance and robustness, are defined as prerequisites for building trust in the technology and for its adoption. In this paper, we analyze whether the law supports ethics in the specific domain of automated journalism by examining the principles of accountability, responsibility, and transparency (the ART principles) from the perspective of the legal interests protected by copyright and other laws. Other factors influencing the ethical decision-making process, namely the specificities of a business model and the perception of authorship, are also taken into account. We present the results of a recent pilot qualitative study illustrating that the perception of authorship is closely related to the perception of agency and responsibility. Our findings show that current Czech law incentivizes neither the implementation of the ART principles nor the perception of agency in relation to AI systems for automated journalism. A perception of disappearing authorship may thus also lead to a perception of disappearing responsibility. To solve these problems, we suggest introducing new legal obligations and adapting existing personal rights to protect the authors involved in the design of AI systems.


1. Introduction*

1

Artificial intelligence (AI) has been widely recognized as an important game-changer in our digital society. Some even call it “the new electricity” [1] with the potential to completely transform the way our society functions. With the help of AI, we can now automate a number of tasks, including the creation, selection and recommendation of visual, musical, or textual content, as well as tailoring that content to the individual needs or preferences of those who consume it.

2

The extent to which society deploys and uses AI systems depends on the level of people's trust in these systems. [2] An ethical approach to designing, developing and utilizing AI systems is considered one of the prerequisites for building such trust and should therefore apply to AI systems developed in the European Union. The other prerequisites are compliance with the law and robustness. [3] Policymakers presume that an ethical approach and legal compliance go hand in hand and cannot be contradictory. This presumption, however, should be subjected to scrutiny.

3

In this paper, we examine a particular case of the interplay between ethical principles and requirements for AI systems on the one hand and legal norms protecting copyright on the other, from the perspective of the interests of those who design, deploy and use these systems. As an example, we examine one of the most prominent applications of AI content creation, which is also a specific subset of the discussions on copyright protection of authorship – the field of automated journalism (also known as algorithmic journalism or robot journalism).

4

The purpose of automated journalism is to create AI-based software capable of generating textual news from machine-readable data. Such software aims to replace the routine work of journalists, who are often forced to simply describe facts in a manner that does not require original creative thinking. This is mainly applicable in areas such as sports news, weather news, or reports on changes in financial markets.
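In its simplest form, such software fills journalist-defined patterns with values from a structured data feed. The following minimal sketch illustrates that principle only; the template wording, field names, and sample record are hypothetical and are not taken from any of the systems discussed in this paper.

```python
# Illustrative sketch of template-based news generation.
# The template wording and all field names are hypothetical.
from string import Template

# A pattern predefined by a journalist, with slots for machine-readable data.
NEWS_TEMPLATE = Template(
    "The PX index of the Prague Stock Exchange $direction by $change_pct "
    "percent on $date, closing at $close_value points."
)

def generate_news(record: dict) -> str:
    """Fill the journalist-defined pattern with one data record."""
    direction = "rose" if record["change_pct"] >= 0 else "fell"
    return NEWS_TEMPLATE.substitute(
        direction=direction,
        change_pct=abs(record["change_pct"]),
        date=record["date"],
        close_value=record["close_value"],
    )

print(generate_news({"date": "3 May 2021", "change_pct": 0.8, "close_value": 1091.5}))
```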

5

We chose the area of automated journalism because there are general doubts about the “creativity” of AI and about whether “works” generated by this technology can even qualify for copyright protection. With regard to journalism, there are also constraints on copyright protection, and some routine texts produced by human journalists may not be protected. At the same time, a rigorous ethical approach in this sphere is necessary, as automated journalism has great potential to influence the public sphere and democracy. There are various applications that might, for instance, help journalists verify facts, find appropriate sources, etc. Automated journalism also poses interesting questions, such as how readers perceive the credibility of content written with the help of AI. However, the existing research is contradictory: it suggests that some people trust computer authors less than human authors, [4] some consider texts written by robots more credible, [5] and some attribute higher credibility to the combined authorship of humans and robots. [6] Automated journalism is also sometimes described as a “social process” in which news is communicated between humans and machines. [7]

6

In our opinion, the case of automated journalism illustrates that some ethical requirements can be difficult to achieve when they are confronted with particular legal regulation. We analyze the ethical principles of accountability, responsibility and transparency from the perspective of individual legitimate interests and argue that the current Czech copyright law and other related laws do not fully support these ethical principles. The analysis of this conflict is conducted from the perspective of EU activities on AI ethics and from the perspective of Czech law, which is based on principles common to most European countries. As achieving ethical AI systems depends on the actions of the involved stakeholders and actors, we also examine the decision-making process and the factors influencing it.

7

Apart from ethical guidelines and the respective law, the crucial factors to consider are also a) how journalists themselves perceive their role in creating these AI-based systems and b) how they perceive the protection of their intellectual property in relation to such systems. The perception of their authorship, as one of the factors in the decision-making process, can influence their ethical considerations regarding the design and outcomes of systems for automated journalism. Therefore, we support our overall analysis with a recent pilot qualitative study that describes the practice of news automation in the Czech News Agency (CNA) and in the daily economic newspaper E15, as well as the views of the journalists who co-design and use these intelligent systems. The study illustrates that the perception of authorship is closely related to the perception of agency and responsibility.

8

In order to align the existing ethical principles and copyright law, the paper examines existing models of potential legal regulation and then suggests legal measures that would promote ethical behavior in the design and use of AI systems. These measures aim to remedy the identified shortcomings in the law and to utilize the protection of personal rights in order to increase individuals' own perception of agency and responsibility and, thus, ethical behavior.

2. Ethical Design and Utilization of AI Systems

9

The ethical design and utilization of AI systems is one of the priorities of the European Union. In order to stay competitive with the rest of the world, the EU aims to ensure users' trust in AI systems by guaranteeing that AI systems developed within the EU will protect their users' human rights. In particular, “the EU seeks to remain faithful to its cultural preferences and its higher standard of protection against the social risks posed by AI – in particular those affecting privacy, data protection and discrimination rules – unlike other more lax jurisdictions.” [8]

2.1. Trustworthy AI

10

As its core principle, the EU has defined a “human-centric approach” to AI. This approach should ensure that AI systems are used “in the service of humanity and the common good, with the goal of improving human welfare and freedom.” [9] In this regard, the EU promotes the development and use of so-called “trustworthy AI”. This concept requires AI systems to be lawful (i.e. compliant with laws), ethical (i.e. adhering to ethical principles and values), and robust (i.e. safe, secure and reliable) at every stage of their life cycle. [10]

11

Trustworthy AI systems must follow four basic principles – respect for human autonomy, prevention of harm, fairness, and explicability. In practice, trustworthy AI systems also need to meet seven key requirements, which should be ensured by both technical and non-technical methods: “(1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being and (7) accountability.” [11] These requirements need to be operationalized at every stage of an AI system's life cycle. This means that the relevant stakeholders need to be actively involved in assessing whether a particular AI system continues to fulfill the requirements and whether the adopted solutions as well as the outcomes of its operation are in line with the above-mentioned principles and requirements.

2.2. Responsible AI

12

The general concept of a human-centric approach to AI needs to be developed further, as the use of AI in society is a complex challenge. One approach to facing this multi-faceted reality as comprehensively as possible is the concept of “responsible AI”.

13

Responsible AI is based on three main principles – Accountability, Responsibility, and Transparency. [12] These principles correspond to specific characteristics of AI systems – interaction with the environment, autonomy, and adaptability.

14

The principle of accountability means that a system itself is able to explain its own actions and that its designers are able to explain the rationale (moral values and social norms) behind the system's design. The principle thus refers to the ability to explain moral reasons. The principle of responsibility means that although AI systems are autonomous to a certain degree, we cannot avoid human responsibility for both the design and the actions of these systems. The principle thus refers to the obligations of various stakeholders to behave in a certain way when developing, manufacturing, selling and using AI systems. The principle of transparency means that although AI systems adapt and develop, stakeholders need to be able to “describe, inspect and reproduce the mechanisms through which AI systems make decisions and learn to adapt to their environment” and need to be “explicit and open about choices and decisions concerning data sources and development processes and stakeholders.” [13] The principle thus refers to the ability to explain the particular technical solution, including the data and algorithms used, the rationale for the design process, and who the involved stakeholders are and what their interests are.

15

These principles show that responsible AI is oriented towards three levels at which an ethical approach must be adopted: a) ethics by design, i.e. the ethical reasoning capabilities of an AI system – accountability; b) ethics in design, i.e. the utilization of methods allowing for the assessment of ethical implications – transparency; and c) ethics for design, i.e. codes of conduct for the involved stakeholders – responsibility. [14]

3. Ethical Decision-Making of Involved Stakeholders and Actors

16

The ethical design and deployment of AI systems require that the involved stakeholders and individual actors act in an ethical manner. As what is moral can be perceived differently by each of these subjects, institutions, organizations and enterprises in practice draw up codes of ethical conduct. These usually serve as guidelines on how to behave in certain situations and what principles to keep in mind when making decisions in a particular field.

17

The ethical decision-making process of the involved stakeholders (organizations) and actors (their employees) is very complex. Viewed from the perspective of a person, the decision process takes into account various factors, such as the individual attributes of the person, her personal environment, her professional environment (including codes of conduct), her work environment (including corporate policy), the respective legal environment, and her social environment (including religious, humanistic, cultural, and societal values). These factors are considered through a cognitive process in which the relevant acquired information is weighed for rewards and losses. [15] From a psychological perspective, the cognitive process of an individual can be influenced by factors such as “past experiences, a variety of cognitive biases, an escalation of commitment and sunk outcomes, individual differences, including age and socioeconomic status, and a belief in personal relevance.” [16] These factors can also be reflected in the collective decision-making of an organization.

18

As illustrated, many factors come into consideration in a decision-making process, and psychology also plays an important role. In general, the theory of ethical decision-making distinguishes rationalist-based models, which are based primarily on moral reasoning, from non-rationalist-based models, which are based primarily on intuition and emotion. [17] Which group an individual belongs to (i.e. her thinking style) also influences whether she is more likely to make selfish or altruistic decisions and how she reacts to other people and their interests. [18] It has been evidenced that personal values are also reflected in work-related strategic decision-making. [19]

19

Some of these factors (namely individual attributes) relate to the inner characteristics and preferences of a person, i.e. to her intrinsic motivation, while other factors are determined from the outside – namely by laws, policies, codes of conduct, and values formulated by society. The latter represent extrinsic motivation, i.e. “the motivation to do something in order to attain some external goal or meet some externally imposed constraint.” [20] Ethics and law are typical examples of extrinsic motivation, as there is typically a punishment for non-compliance. Therefore, ethical and legal factors should be considered more influential in the decision-making process than some other factors. Society requires that stakeholders and actors act in compliance with both systems.

20

In an ideal world, law and ethics would go hand in hand and support each other. However, the relationship between these two normative systems is rather complicated and has, as such, been the subject of extensive study. Law depends to a high degree on values adopted from other systems, such as ethics and religion, and provides them with a special status requiring obedience from the members of society. [21] The degree to which law is identified with ethics varies, though: on the one hand, we can find legal norms that are fully identical with ethical norms; on the other hand, some legal norms have no ethical dimension at all. [22] In general, it is considered ethical to comply with the law. However, in some situations people break the law because they think they have moral reasons to do so, considering what the law demands to be unethical. [23]

21

Despite the societal pervasiveness of these two normative systems and the motivation to act in line with them, which is reinforced by various sanctions, in practice many stakeholders and actors obviously do not behave ethically or in compliance with the law. The reasons vary – be it ignorance, incapacity, emotional rather than rational behavior, different personal ethical standards, or an (un)calculated risk. With regard to legal compliance, a calculated risk may lie in identifying shortcomings in the law and in legal procedures in order to circumvent the system and avoid otherwise applicable sanctions. In a business context, such behavior is sometimes referred to as “evasive entrepreneurship”, a term denoting “profit-driven business activity in the market aimed at circumventing the existing institutional framework by using innovations to exploit contradictions in that framework.” [24] To facilitate legal non-compliance, stakeholders sometimes adopt practices that silence employees' criticism by withholding information and restricting dialogue. [25]

22

These cases show that not all stakeholders (and potentially actors) are interested in acting ethically, and some exploit the law to serve their own purposes. The manner in which these subjects do so depends on their business model or personal interests.

23

Business models and personal interests are, however, not always only selfish. Some business models are built on adopting rigorous ethical codes of conduct and strict legal compliance, building a good reputation, and using it as a competitive advantage. Some individuals may also opt for altruism and the higher good, i.e. prosocial behavior.

4. Copyright Protection of Interests of Involved Stakeholders and Actors

4.1. Complexity of the Ecosystem Related to Automated Journalism

24

When assessing a regulatory environment, in the form of both ethical and legal rules, the complexity of the regulated environment and of the particular ecosystem needs to be taken into account.

25

As has been shown, various business models reflect various values and indicate which particular interests stakeholders will take into account and how they will protect them. In order to assess the respective law, its use, and its relationship to the ART ethical principles, one needs to know the context in which automated journalism operates in the Czech Republic.

26

Current research on robotisation in the news media [26] distinguishes three areas in which such robotisation applies – content creation, news gathering (gathering, sorting, and verifying information from sources), and news distribution (personalized news and advertising). In the Czech Republic, the application of AI has been developing mostly in the area of content creation, namely news; AI applications in news gathering and news distribution are less common. Compared to media in English-speaking countries, the automation of journalism in the Czech Republic lags behind by approximately a decade. This is mainly due to the complexity of the Czech language compared to English; in particular, there is an insufficient amount of suitable datasets for training neural networks. Moreover, there has been a lack of investment in the development and application of robotic journalism.

27

The pioneers of automated journalism in the Czech Republic are the Czech News Agency (CNA), a national press agency with the nature of a public service medium, and the daily economic newspaper E15. In 2018, the CNA started to develop a platform for automated election news. Using patterns predefined by journalists, simple algorithms co-created the final news on the results of the municipal and senate elections. [27] In 2020, the CNA implemented a robotic journalist covering the Prague Stock Exchange in its own editorial system. The robotic journalist was developed within a joint research project of Charles University, the Czech Technical University, the University of West Bohemia, and the CNA. [28] Within a few seconds of the close of the exchange, this robotic journalist generates news on the results of the trading day without human intervention. [29] In July 2020, the CNA also started to automate news on fuel prices. In mid-2020, the economic newspaper E15 started to use the robotic journalist in its own editorial system, making it possible to study the differences between implementing automated journalism at a public service medium (the CNA) and at a commercial medium (E15).

28

Current research suggests that such software can save around 30–40 minutes of a journalist's work for each automatically created news item. Editors especially appreciate the speed of automated journalism. For instance, before implementing the robotic journalist, E15 was not able to publish results from the end of the trading day, as the Prague Stock Exchange provides the final results at 16:35 each day and E15's editorial deadline is at 17:00. Moreover, editors also appreciate the functionality of automated journalism, as it relieves human journalists of creating routine news texts and allows them to use their creativity for more complicated texts (such as analyses or commentaries). This Czech experience is similar to experience from abroad, where journalists perceive AI systems as tools that assist them in their work. [30]

29

Current experience indicates that there is a difference between the implementation of automated journalism in a public service medium (the CNA) and in a commercial medium (E15). The CNA does not publish any automated text without the control of a human editor; it thus uses a hybrid model of AI application in journalism, in which the outputs are always checked by a human. E15, on the other hand, publishes the results of trading at the Prague Stock Exchange immediately after they are generated by the software, without any editorial control. This is done despite two past errors caused by the software being fed erroneous input data from the Prague Stock Exchange's API. E15 thus uses an autonomous model of AI application in journalism, in which there is no intermediary between the software and the recipients of the automatically generated news. The research clearly shows how a particular business model influences practice.
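The difference between the two models comes down to the presence or absence of a human check between generation and publication. The sketch below is purely illustrative; all function names and the validation rule are hypothetical and are not taken from the CNA or E15 systems. It shows why, in the autonomous model, machine-side validation of the input data (such as the erroneous API feed mentioned above) becomes the only safeguard.

```python
# Illustrative sketch of the hybrid vs. autonomous publication models.
# All names are hypothetical; 'generate' stands for the news-generating software.
from typing import Callable, Optional

def validate(record: dict) -> bool:
    """Basic sanity check on the input data; an erroneous API feed
    should be caught here rather than published."""
    return record.get("close_value", 0) > 0 and "date" in record

def hybrid_publish(record: dict, generate: Callable[[dict], str],
                   editor_approves: Callable[[str], bool]) -> Optional[str]:
    """Hybrid model (CNA-style): a human editor checks every output."""
    text = generate(record)
    return text if editor_approves(text) else None

def autonomous_publish(record: dict,
                       generate: Callable[[dict], str]) -> Optional[str]:
    """Autonomous model (E15-style): no editorial intermediary, so
    machine-side validation is the only safeguard."""
    return generate(record) if validate(record) else None
```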

4.2. Copyright Protection and Interests in Relation to AI Systems

30

The question of copyright protection for works created by computers, and AI in particular, appeared in the legal literature as early as the 1960s. [31] Since then, a large body of literature has analyzed the question from various perspectives. Currently, there are two legislative models in the world: a) specific protection of computer-generated works (adopted, for instance, by the United Kingdom), [32] and b) a requirement of original creative activity performed by a human. [33] The latter model is applicable in the Czech Republic.

31

As the second (prevalent) model has not been considered satisfactory, the main question the professional literature has traditionally focused on is “Who shall be the author of works produced by, or with the help of, AI systems?” The answer to this question determines the subject to whom copyright protection of the results generated by AI systems shall be granted. It is an important question, but at the same time it is future-oriented. The proposed models and their appropriateness for the future will be assessed in chapter 6.1. The purpose of this chapter is different: to assess the current copyright protection and describe which interests the law currently protects with regard to AI systems, i.e. to answer the question “Who is granted authorship of AI systems and of the results generated by them?”

32

The two business models of the CNA and the economic daily E15 described above show that there are, in general, two levels we need to analyze – the level of the stakeholders (the CNA and E15) and the level of their employees, who in fact contribute to the creation of the AI system for automated journalism. [34] Both the stakeholders and their employees have their own interests regarding copyright.

33

Like many other European copyright acts, the Czech Copyright Act (2000) [35] defines what can be considered a copyrightable “work”. With regard to AI systems, we need to break them down into their elements in order to determine the respective legal protection. Simply put, AI systems based on machine learning are algorithms derived from the data provided to the systems for learning. [36]

34

Algorithms are not protected by copyright. As algorithms are representations of procedures, principles, methods or formulas, the law does not grant them copyright protection, so as not to prevent others from using these procedures. What is protected by copyright, however, is the source code and the resulting computer program, to the extent that the source code is original. [37] Czech law also grants copyright protection only if the author is a human (not an AI system). The question of whether an AI system can be considered a tool for the creative creation of another AI system has not yet been resolved and would most probably be assessed individually in each case.

35

Data can be of various natures. Data are not granted copyright protection per se, as they usually consist of simple information, numbers, etc. However, there are situations in which AI systems learn from other works – typically, natural language processing systems learn from texts that can be copyrighted. [38] A collection of data can also be protected as a database, either by sui generis rights, which provide special entitlements to the person who compiled the database, or by copyright.

36

Given the complicated nature of AI systems, trade secrets are sometimes considered an appropriate tool for protecting economic investment in cases where copyright protection cannot be fully effective. [39]

37

With regard to authorship, the law distinguishes two types of rights that correspond to different interests: moral rights and economic rights. Moral rights are typically granted to natural persons as an expression of the connection between their personality and the intellectual activity leading to the creation of a copyrightable work. Moral rights typically include the right to associate the author's name with the work and the right to object to modifications of the work. Economic rights refer to the right to use a copyrighted work in various forms, to request remuneration from others for the use of the work, and to prohibit others from using it. Granting economic rights is meant to motivate people (and companies) to invest in the intellectual creative process.

38

With regard to AI systems created by employees for their employer, there are specific provisions protecting the economic investment of employers. Employers exercise the economic rights on behalf of their employees if the respective work was created in fulfillment of obligations under the employment contract. However, the employees keep the moral rights and are entitled to additional remuneration for their work if the salary paid for it becomes obviously disproportionate to the profit the employer makes out of the work. Regarding authorship, the law presumes that an employer can publish the work under its own name, unless otherwise agreed with the employee.

4.3. Special Case of Copyright in Automated Journalism

39

Journalism represents a specific subfield of copyright law, as some news is not protected by copyright at all. For instance, the Berne Convention for the Protection of Literary and Artistic Works (1979) [40] does not provide protection to “news of the day or to miscellaneous facts having the character of mere items of press information” (Art. 2 par. 8 of the Convention). The same holds in Czech law: the Czech Copyright Act excludes “daily news or other data per se” from copyright protection (Art. 2 par. 6 of the Act). The reason for this is to prevent the abuse of copyright law for the monopolization of information. [41] Simple information or news cannot be considered ‘creative’ and, thus, protected by copyright. However, there are other legal instruments allowing stakeholders (news agencies or newspapers) to protect their investment in producing news, such as protection against unfair competition or unjust enrichment. The design of AI systems for automated journalism can also be protected as know-how under trade secret law.

4.4. Perception of Authorship

40

The perception of authorship by journalists involved in designing and using AI systems that facilitate automated journalism, and in its outcomes, is important when assessing their perception of their own control, agency, and responsibility. These are also factors that influence ethical decision-making.

41

The first experience of Czech journalists with automated journalism allowed us to conduct a pilot qualitative study on how authorship is reflected upon in Czech editorial offices that utilize AI. The study was conducted in the form of in-depth semi-structured interviews focusing on the perception of the notion of authorship in traditional and robotic journalism. In September 2020, ten contributing editors, including the editor-in-chief, from the daily economic newspaper E15 participated in the study. In February 2021, ten journalists, including the editor-in-chief, from the CNA participated. The research sample was designed to include contributing editors, editors, and editors-in-chief with experience in automated journalism; the number of journalists from both newsrooms involved in the research was therefore limited.

42

The main research questions were:

1) How do you perceive the notion of authorship in traditional journalism?

2) How has the notion of authorship changed with the use of robotic journalism, in situations where journalists contribute their knowledge to the development of this software and the AI systems learn from their knowledge and texts?

3) How should we present the authorship of automatically generated texts to recipients of these texts?

43

As to the first research question, all respondents stated that they consider authorship in journalism important, as each author has their own style in terms of language use, richness of vocabulary, arrangement of facts, etc. Authorship is perceived more strongly in relation to opinion journalism (analyses, commentaries, features, or essays) than in relation to common news making. This corresponds to the rationale of copyright protection described above. Moreover, the respondents from the CNA often stated that the perception of authorship is even weaker at the CNA than in other types of media, as the CNA is a supplier of (mainly) news content to other media.

44

As to the second research question, the replies suggested that journalists, regardless of the nature of the medium where they work (the news agency or the daily economic newspaper), do not think deeply about the transformation of the concept of authorship after the deployment of robotic journalism. Only when asked the additional questions of who the author of an automatically created news item is and who should be stated as the author did the respondents begin to suggest possible authors of such texts.

45

The replies of the respondents from the daily economic newspaper E15 can be divided into three groups of almost equal size. The first group (4 respondents) was of the opinion that the authorship is collective, i.e. each person who contributed to the development of the software should be considered an author – programmers, software developers, or the journalists who prepare the patterns and training datasets. However, these respondents were not sure how to name the author when such an automatically generated text is presented to readers. [42] The second group (3 respondents) stated that it is impossible to determine the authorship of automatically generated texts. The third group (3 respondents) stated that the editor who works with the automatically generated texts should be considered the author, because she is responsible for the final text. The respondents thus clearly indicated that they perceive a connection between authorship and responsibility.

46

The replies of the respondents from the CNA were homogeneous and indicated the notion of collective authorship. At the same time, the respondents expressed that they did not feel like authors and that they had “nothing in common with the text”. This might be related to the weaker perception of authorship of agency news illustrated above. The respondents also expressed that the collective authorship should be attributed to the news agency itself. Unlike at E15, the replies of the respondents from the CNA did not divide into distinct groups.

47

As to the third research question, it is necessary to mention how the automated outputs are presented by E15 and the CNA. E15 places an icon of a robot next to the automatically generated news on the results of trading at the Prague Stock Exchange and states under the text that “the news is generated by software with help of artificial intelligence”. The CNA labels the automatically generated news with the abbreviation “rur”. This refers to the theatre play R.U.R. by the Czech journalist and playwright Karel Čapek, in which the word “robot” was used for the first time in 1920. The interviews with journalists from both media suggested that they prefer the transparent labelling of automatically generated texts, as it strengthens the credibility of the news media. Two respondents from the CNA indicated that there had been a discussion about whether admitting the robotisation of journalistic outputs might lead buyers of the news to pressure the CNA to reduce its prices. In the end, the CNA opted for transparency towards its clients.

5. Confronting Stakeholders’ and Actors’ Interests and Their Legal Protection with ART Principles

48

The previous chapters have shown that when it comes to the interests of the stakeholders (news agencies) and actors (their employees), the situation is rather complex. In practice, there are two business models of automated journalism – the hybrid model of AI application and the autonomous model of AI application. The hybrid model, involving human checks, is used by the public service medium, the CNA, while the autonomous model, without human checks, is used by a commercial medium, E15. E15's interests are to utilize the speed of AI and to reduce the cost of news production by allocating editors to work other than the routine description of facts and numbers. The CNA's interests, on the other hand, are to behave diligently and keep a good reputation so that it can sell its products (news) to other media. With regard to copyright protection, it is in the interest of both media to prohibit others from using news generated by their AI systems without paying for such use. From the economic perspective, they need to be able to use the work of their employees without any hindrances, and the more they automate that work, the more efficient they can be.

49

The obvious interests of the actors (employees of the media), which are also protected by law, are to be remunerated for their work and to be attributed authorship. The law also protects the dignity and personality of employees through fundamental human rights. From the economic perspective, employees need to keep their jobs and, when changing employers, need to be able to show what they did in their previous job, i.e. to show their authorship. Moreover, if their salary becomes obviously disproportionate to the profit the employer makes out of the AI systems, it is in the employee's interest to be able to prove her own role in designing the system.

5.1. Principle of Accountability

50

The principle of accountability requires that stakeholders and actors be able to explain the moral decisions they took when designing and operating an AI system. With regard to automated journalism, the law does not per se require either stakeholders or actors to be able to explain themselves; as such, this is more of an ethical issue.

51

From the perspective of a stakeholder, it might be beneficial to explain the reasons for designing an AI system for automated journalism if those reasons are the prevention of routine and repetitive work, the allocation of more resources to quality journalism, and the timely delivery of information. On the other hand, the law does not prohibit purely economic motivation without an ethical dimension. Although the law requires each subject to respect good morals, it is very hard (and sometimes impossible) to prove the motivation behind, and the potential overall negative consequences of, purely economically motivated behavior on society. This is caused by the nature of law, which is primarily oriented towards overt behavior, not internal motivation. [43] Nor does the law require a prior ethical assessment. In practice, AI journalism systems are first developed, and only afterwards can their ethical aspects be assessed based on the systems' operation.

52

Copyright law does not presume the ability to explain moral considerations; all it considers when granting protection is the originality of the work. A specific case, however, is when the ability to explain is used to explain one's own role in the design of a system, which would result in being granted authorship and, consequently, moral and economic rights to the system. This might be interesting for an employee attempting to get higher remuneration for her work if the profits gained by the employer become obviously disproportionate to what has been paid to her.

53

However, it is also possible to identify a general incentive for stakeholders and actors not to be able to explain their role in the system's design – namely, in cases when an AI system causes harm. The inability to explain one's own moral decisions can lead to a lack of evidence and, consequently, to avoiding liability in cases where fault needs to be proven. [44] Moreover, a subject can claim that it was impossible to make any ethical consideration beforehand, as the negative consequences could not be foreseen due to the novelty of the technology and its unpredictability when confronted with society. Such a claim can also lead to avoiding liability. It needs to be justified and examined thoroughly, but again, the lack of evidence may in the end favor the subject who claims to be unable to provide an explanation (or a detailed one).

54

The principle of accountability also needs to be examined from the perspective of the authorship of those who contributed to the creation of the system. The study on the perception of authorship shows that the journalists participating in the design and later use of the respective AI system mostly do not feel like authors. This concept of disappearing authorship, however, also results in disappearing agency in the sense of one's own control over the system. The lack of agency is then reflected in the application of the two remaining principles.

5.2. Principle of Responsibility

55

The principle of responsibility, i.e. attributing liability to stakeholders and actors instead of AI systems and obliging them to behave in a certain way when developing and using AI systems, is highly relevant in relation to law. However, just like with the principle of accountability, copyright law does not expressly state any obligation to prevent harm. Copyright law protects (in line with the freedom of speech) even works that are controversial, shock people, or even cause harm; only the originality of the work matters for granting copyright and providing economic and moral rights. Stakeholders can therefore take calculated risks and behave not only unethically but also contrary to law, as the operation of an AI system can bring them benefits higher than the potential fines imposed under the relevant administrative law. Whether this is a shortcoming of copyright law is an open question. Not granting copyright to controversial, shocking, or even harmful content could, on the one hand, amount to a special form of censorship and, on the other hand, lead to much more intensive exploitation of the unprotected content.

5.3. Principle of Transparency

56

The principle of transparency requires that stakeholders and actors be able to explain the technical functioning of an AI system. This principle is highly relevant for the sphere of automated journalism, as the way it is adopted is also influenced by the business model for communicating with the content's recipients.

57

From the perspective of copyright law, however, transparency can prove problematic. AI systems represent a competitive advantage. Parts of AI systems (such as data or an algorithm) are not protected by copyright per se, so the other efficient means of protection is a trade secret. Complete transparency, in the sense of making the algorithm or data public, could threaten a stakeholder's investment in the production of the system, as someone might request the publication of the respective datasets and later use them to train their own system.

58

If an AI system causes any harm, such as publishing a text that contains a public offence, discredits the status of a public official, ridicules a person, or discriminates, the stakeholder operating the respective AI system needs to provide an explanation if such an act is investigated in administrative proceedings. In such a case, the principle of accountability and the ability to explain come into play. As shown above, an inability to explain can work to the benefit of both stakeholders and actors.

6. Revisiting Models of Copyright Protection from the Perspective of AI Ethics

59

The previous chapters have shown that the current set-up of legal rules does not fully support an ethical approach to the design and use of AI systems. Not implementing the principle of accountability may in some cases result in avoiding liability. With regard to the principle of responsibility, stakeholders may opt to take a calculated risk and act unethically and contrary to law as long as they profit from the operation of a copyrighted AI system. The principle of transparency goes directly against the economic interests of stakeholders and their employees, as trade secrets represent an appropriate and efficient tool for protecting their investments.

60

Moreover, the pilot empirical study shows that the actors involved in the design and utilization of AI systems in automated journalism perceive their role as authors as diminishing. The majority of respondents expressed that authorship should be collective but that they personally did not feel like authors despite their contribution to the AI system. As the respondents clearly linked responsibility to authorship, their perception of disappearing authorship can also result in a perception of disappearing responsibility. This is, however, not desirable from the perspective of AI ethics. Therefore, copyright law should strengthen the protection of authorship in order to also strengthen a responsible approach.

6.1. Proposed Models of AI Copyright Protection

61

Chapter 4.2 described how the current law protects the authorship of AI systems themselves. At the same time, it indicated that the current copyright protection has not been deemed sufficient for determining the authorship of works generated by AI systems. Although there are a few countries [45] that grant authorship to “programmers”, [46] most legal systems have only general rules and require that an author be a human.

62

In order to resolve this legal uncertainty, numerous analyses have been conducted in various jurisdictions to find the best way to determine and grant authorship of AI-generated works. [47] Authorship in the specific field of AI-generated news has been examined as well, [48] including questions of liability. [49]

63

Simply put, the common methodology for determining who should be considered the author of AI-generated content is to identify the subjects involved in the ecosystem of AI-generated content and then to choose, with justification, which of these subjects should be granted copyright protection. The subjects are typically programmers, people training the systems, data providers (proprietors), data clerks and people who prepare and label datasets, or the users of the systems who initiate their operation. Some authors also analyze the option of granting authorship directly to an AI system, or of reinterpreting the notion of employment so that AI systems would be considered employees. This, however, presumes a certain “subjectivity” of the AI system, which is not acceptable in the context of European values and policies that promote a “human-in-command” approach. [50] AI-generated content can also end up uncopyrighted and free for use in the public domain.

64

Joint authorship is an approach that has been frequently argued for. It also corresponds to the Czech pilot study, in which the majority of respondents considered the model of collective authorship the most appropriate and fair. With regard to automated journalism, a suggestion was made to attribute collective authorship to a corporate entity and to drastically shorten the duration of copyright protection. [51]

65

The proposed models, however, do not solve the shortcomings of the law that we have identified in our research. Their main motivation is to find the best way to protect economic interests and incentivize further investment in the development of AI systems.

6.2. A Complex Approach to Regulation and Legal Protection Supporting AI Ethics

66

Copyright protection has traditionally proven to be a valuable regulatory tool. However, when challenged by disruptive technologies such as AI, unprecedented questions arise. Given the global and pervasive impact of AI on our society, law now more than ever needs to become more supportive of ethical behavior.

67

Our paper has identified certain shortcomings that cannot be solved by copyright law as it stands. Therefore, a more complex approach is necessary. In light of the case of automated journalism and the perception of authorship, we propose a two-level solution: a) the introduction of new legal obligations, and b) the adaptation of existing personal rights to protect the actors involved in the design of AI systems.

68

The new legal obligations should mitigate the shortcomings identified for each of the ART principles. As to the principle of accountability, the law could introduce an obligation to conduct a prior ethical assessment of the intentions and motivations for setting up an AI ecosystem. This ethical assessment would define control mechanisms for the identification of potentially harmful effects. In fact, this instrument would be an equivalent of the data protection impact assessment set out in the General Data Protection Regulation. [52] As to the principle of responsibility, the law should make sure that a calculated risk does not pay off – for instance, by increasing fines for breaches of law. As to the principle of transparency, the law needs to introduce safeguards for the inspection of systems.

69

The protection of personal rights entails the protection of an individual's identity. Copyright law in fact contests that AI-generated news contains the original intellectual creative activity of a human, as the procedure for compiling the news has been derived from datasets, and the resulting news items merely replicate principles that were hidden in the original training texts. However, it is important to note that the training texts authored by humans contain elements of the unique personalities of their authors. What a machine learning system does is, in fact, a distillation of certain elements of the original authors' identities. In a wider context, authorship can also be understood as creating one's own identity, which entails developing distinctive ways of thinking and solving problems. In the past, identity was a rather intangible concept. Nowadays, given the pervasive technology recording almost everything we do, identity is becoming quite tangible.

70

Making the actors involved in the design of AI systems aware of how their personality contributes to shaping those systems would probably increase their perception of authorship and, therefore, also of responsibility. However, further study is needed in this respect. Utilizing the concept of personal rights with regard to the authorship of AI systems and their output is also fully in line with the promoted human-centric approach to AI.

7. Conclusion

71

Our research has shown that integrating ethical principles and legal regulation is a rather complex task that needs to take into account a number of factors, including the specificities of business models and psychological aspects. Using the case of automated journalism, we illustrated how different business models and their underlying motivations result in the adoption of different models of AI application – hybrid or autonomous. Moreover, we have shown that, despite being involved in the design and use of AI systems, actors feel that their role in the production of routine daily news is diminishing due to collective authorship. Given the nature of journalistic work and the lowered copyright protection, the perception of disappearing authorship is accepted quite well. On the other hand, it also entails a perception of disappearing responsibility. This phenomenon can then contribute to behavior in which the law is circumvented. In that regard, we proposed the introduction of new legal obligations to support the adoption of the ART ethical principles in practice. Moreover, we proposed an adapted utilization of personal rights protecting an individual's identity as a protection parallel to copyright law. This model will be developed in our further research.

*by Mgr. Alžběta Krausová, Ph.D., LL.M., Researcher at the Institute of State and Law, Czech Academy of Sciences, Prague, Czech Republic. E-mail: alzbeta.krausova@ilaw.cas.cz. ORCID 0000-0002-1640-9594; and PhDr. Václav Moravec, Ph.D. et Ph.D., Faculty of Social Sciences, Charles University, Prague, Czech Republic. E-mail: vaclav.moravec@fsv.cuni.cz. ORCID 0000-0002-3349-0785.

This paper was supported by the Technology Agency of the Czech Republic under grant No. TL03000152 “Artificial Intelligence, Media, and Law.”



[1] Catherine Jewell, ‘Artificial intelligence: the new electricity’ (WIPO Magazine, June 2019) <https://www.wipo.int/wipo_magazine/en/2019/03/article_0001.html> accessed 1 April 2021.

[2] Alan F. T. Winfield and Marina Jirotka, ‘Ethical governance is essential to building trust in robotics and artificial intelligence systems’ (2018) Philosophical Transactions of the Royal Society A <https://doi.org/10.1098/rsta.2018.0085> accessed 1 April 2021.

[3] High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’ (European Commission, 8 April 2019) <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai> accessed 15 January 2022.

[4] T. F. Waddell, ‘A Robot Wrote This?’ (2018) 6(2) Digital Journalism <https://doi.org/10.1080/21670811.2017.1384319> accessed 1 April 2021.

[5] B. Liu and L. Wei, ‘Machine Authorship In Situ’ (2019) 7(5) Digital Journalism <https://doi.org/10.1080/21670811.2018.1510740> accessed 2 March 2021.

[6] E. C. Tandoc Jr., L. J. Yao and S. Wu, ‘Man vs. Machine? The Impact of Algorithm Authorship on News Credibility’ (2020) 8(4) Digital Journalism <https://doi.org/10.1080/21670811.2020.1762102> accessed 1 April 2021.

[7] S. C. Lewis, A. L. Guzman and T. R. Schmidt, ‘Automation, Journalism, and Human–Machine Communication: Rethinking Roles and Relationships of Humans and Machines in News’ (2019) 7(4) Digital Journalism <https://doi.org/10.1080/21670811.2019.1577147> accessed 2 March 2021.

[8] T. Madiega, ‘EU guidelines on ethics in artificial intelligence: Context and implementation’ (European Parliament, September 2019) <https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/640163/EPRS_BRI(2019)640163_EN.pdf> accessed 2 March 2021.

[9] High-Level Expert Group on Artificial Intelligence (n 3) 4.

[10] ibid 5.

[11] ibid 2.

[12] Virginia Dignum, ‘D1.3 Humane AI Ethical Framework. HumanE AI: Toward AI Systems that Augment and Empower Humans by Understanding Us, our Society and the World Around Us’ (Humane AI Net, 12 November 2019) <https://www.humane-ai.eu/wp-content/uploads/2019/11/D13-HumaneAI-framework-report.pdf> accessed 27 February 2021.

[13] ibid 7–8.

[14] Virginia Dignum, ‘Ethics in artificial intelligence: introduction to the special issue’ (2018) 20(1) Ethics and Information Technology <https://doi.org/10.1007/s10676-018-9450-z> accessed 2 March 2021.

[15] M. Bommer, C. Gratto, J. Gravander and M. Tuttle, ‘A Behavioral Model of Ethical and Unethical Decision Making’ (1987) 6(4) Journal of Business Ethics <https://link.springer.com/article/10.1007/BF00382936> accessed 2 March 2021.

[16] C. Dietrich, ‘Decision Making: Factors that Influence Decision Making, Heuristics Used, and Decision Outcomes’ (2010) 2(2) Inquiries Journal <http://www.inquiriesjournal.com/articles/180/decision-making-factors-that-influence-decision-making-heuristics-used-and-decision-outcomes> accessed 2 March 2021.

[17] M. Schwartz, ‘Ethical Decision-Making Theory: An Integrated Approach’ (2016) 139 Journal of Business Ethics <https://doi.org/10.1007/s10551-015-2886-8> accessed 1 April 2021.

[18] F. Liang, Q. Tan, Y. Zhan, X. Wu and J. Li, ‘Selfish or altruistic? The influence of thinking styles and stereotypes on moral decision-making’ (2021) 171 Personality and Individual Differences <https://doi.org/10.1016/j.paid.2020.110465> accessed 2 March 2021.

[19] S. Lichtenstein, G. Lichtenstein and M. Higgs, ‘Personal values at work: A mixed-methods study of executives’ strategic decision-making’ (2017) 43(1) Journal of General Management <https://doi.org/10.1177/0306307017719702> accessed 2 March 2021.

[20] B. Hennessey, S. Moran, B. Altringer and T. M. Amabile, ‘Extrinsic and Intrinsic Motivation’, Wiley Encyclopedia of Management (2014).

[21] Y. Dror, ‘Values and the Law’ (1957) 17 The Antioch Review 440.

[22] Mark S. Blodgett, ‘Substantive Ethics: Integrating Law and Ethics in Corporate Ethics Programs’ (2011) 99 Journal of Business Ethics <https://link.springer.com/article/10.1007/s10551-011-1165-6> accessed 2 March 2021. See p. 40.

[23] K. Greenawalt, Conflicts of Law and Morality (Oxford University Press, Inc. 1989).

[24] N. Elert and M. Henrekson, ‘Evasive entrepreneurship’ (2016) 47 Small Business Economics <https://doi.org/10.1007/s11187-016-9725-x> accessed 2 March 2021. See p. 96.

[25] E. Hickland, N. Cullinane, T. Dobbins, T. Dundon and J. Donaghey, ‘Employer silencing in a context of voice regulations: Case studies of non‐compliance’ (2020) 30(4) Human Resource Management Journal <https://onlinelibrary.wiley.com/doi/10.1111/1748-8583.12285> accessed 15 January 2022.

[26] Francesco Marconi, Newsmakers. Artificial Intelligence and the Future of Journalism (Columbia University Press 2020).

[27] Václav Moravec, Proměny novinářské etiky (Academia 2020).

[28] This system is based on machine learning techniques.

[29] V. Moravec, V. Macková, J. Sido and K. Ekštein, ‘The Robotic Reporter in the Czech News Agency: Automated Journalism and Augmentation in the Newsroom’ (2020) 11 Communication Today 36.

[30] A. K. Schapals and C. Porlezza, ‘Assistance or Resistance? Evaluating the Intersection of Automated Journalism and Journalistic Role Conceptions’ (2020) 8(3) Media and Communication <https://doi.org/10.17645/mac.v8i3.3054> accessed 1 April 2021.

[31] R. C. Lawlor, ‘Copyright Aspects of Computer Usage’ (1964) 11 Bulletin of the Copyright Society of the U.S.A. 380.

[32] The UK’s Copyright Designs and Patents Act 1988 defines a computer-generated work in Section 178 as “generated by computer in circumstances such that there is no human author of the work” and “the author of a computer-generated work is deemed to be the person ‘by whom the arrangements necessary for the creation of the work are undertaken’” (Smith, 2017).

[33] A. Guadamuz, ‘Artificial intelligence and copyright’ (WIPO Magazine, October 2017) <https://www.wipo.int/wipo_magazine/en/2017/05/article_0003.html> accessed 2 March 2021.

[34] For the sake of simplicity we do not take into account contractual relationships between partners of the joint research project and their individual contributions.

[35] Act No. 121/2000 of the Collection of Laws of the Czech Republic, on copyright law and on rights related to copyright and on the amendment of certain laws (Copyright Act).

[36] Ethem Alpaydin, Machine Learning: the New AI (The MIT Press 2016).

[37] If copyright is contested, originality needs to be assessed case by case.

[38] Utilization of copyrighted materials for machine learning can be problematic even despite the new EU directive 2019/790 that set out exceptions from copyright protection for the purposes of text and data mining for training AI systems. For more details see E. Rosati, ‘Copyright as an obstacle or an enabler? A European perspective on text and data mining and its role in the development of AI creativity’ (2019) 27(2) Asia Pacific Law Review <https://doi.org/10.1080/10192557.2019.1705525> accessed 1 April 2021.

[39] H. Hammoud, ‘Trade Secrets and Artificial Intelligence: Opportunities & Challenges’ (SSRN, 29 December 2020) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3759349> accessed 2 March 2021.

[40] Berne Convention for the Protection of Literary and Artistic Works (as amended on September 28, 1979).

[41] I. Telec and P. Tůma, Autorský zákon. Komentář (C. H. Beck 2019).

[42] Some respondents who suggested collective authorship, however, also mentioned that the robot/software is the author.

[43] Dror (n 21) 443.

[44] Fault is in this case understood as an internal psychological relationship of a person to consequences of her action.

[45] Examples are Hong Kong, India, Ireland, New Zealand, and the United Kingdom.

[46] Guadamuz (n 33).

[47] See for instance K. Hristov, ‘Artificial intelligence and the copyright dilemma’ (2017) 57 IDEA: The Journal of the Franklin Pierce Center for Intellectual Property 431; A. Kasap, ‘Copyright and Creative Artificial Intelligence (AI) Systems: A Twenty-First Century Approach to Authorship of AI-Generated Works in the United States’ (2019) 19 Wake Forest Journal of Business and Intellectual Property Law 335; B. Schafer, D. Komuves, J. M. N. Zatarain and L. Diver, ‘A fourth law of robotics? Copyright and the law and ethics of machine co-production’ (2015) Artif Intell Law <https://doi.org/10.1007/s10506-015-9169-7> accessed 1 April 2021; C. Weyhofen, ‘Scaling the meta-mountain: Deep reinforcement learning algorithms and the computer-authorship debate’ (2019) 87 UMKC Law Review 979; J. M. N. Zatarain, ‘The role of automated technology in the creation of copyright works: the challenges of artificial intelligence’ (2017) 31(1) International Review of Law, Computers & Technology <http://dx.doi.org/10.1080/13600869.2017.1275273> accessed 1 April 2021.

[48] See for instance J. Díaz-Noci, ‘Artificial Intelligence Systems-Aided News and Copyright: Assessing Legal Implications for Journalism Practices’ (2020) 12(5) Future Internet <https://doi.org/10.3390/fi12050085> accessed 2 March 2021; T. Montal and Z. Reich, ‘I, Robot. You, Journalist. Who is the Author?’ (2017) 5(7) Digital Journalism <https://doi.org/10.1080/21670811.2016.1209083> accessed 2 March 2021; L. Weeks, ‘Media Law and Copyright Implications of Automated Journalism’ (2014) 4 New York University Journal of Intellectual Property and Entertainment Law 67.

[49] S. C. Lewis, A. K. Sanders and C. Carmody, ‘Libel by Algorithm? Automated Journalism and the Threat of Legal Liability’ (2019) 96(1) Journalism & Mass Communication Quarterly <https://doi.org/10.1177/1077699018755983> accessed 2 March 2021.

[50] C. Muller, ‘Opinion of the European Economic and Social Committee on ‘Artificial intelligence — The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society (2017/C 288/01)’ (EUR-Lex, 31 August 2017) <https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52016IE5369> accessed 2 March 2021.

[51] Díaz-Noci (n 48).

[52] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance) [2016] OJ L119/1.


License

Any party may pass on this Work by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing License. The text of the license may be accessed and retrieved at http://www.dipp.nrw.de/lizenzen/dppl/dppl/DPPL_v2_en_06-2004.html.
