Special Issue on Law and Governance in the Digital Era

Regulating Internet Hate: A Flying Pig?

Natalie Alkiviadou

Abstract

This paper will assess the regulation of hate speech expressed through the internet. To do so, it will provide a definitional framework of hate speech and an overview of the internet’s role in the ambit of hate speech, and will consider the challenges of legally regulating online hate speech through a discussion of relevant case-law as well as of the Additional Protocol to the Cybercrime Convention. The jurisprudential analysis will allow for a comparison of the stances adopted by the ECtHR and the national courts of European countries, on the one hand, and the courts of the United States, on the other, in the sphere under consideration. By looking at regional and national case-law and at the initiative of the Council of Europe in the form of the Additional Protocol to the Cybercrime Convention, the paper seeks to provide an overview of the current state of affairs in the realm of regulating online hate, but also to demonstrate that such regulation, as it stands to date, is dysfunctional, predominantly due to the vast divergence between US and European approaches to the issue of free expression both online and offline. It is argued that, due to the very nature of the internet as a borderless and global entity, this normative divergence cannot be overcome so long as traditional approaches to the issue of regulation continue to be taken. The paper’s analysis will emanate from the premise that there exists a need to strike an equitable balance between the freedom of expression, on the one hand, and the freedom from discrimination, on the other.

1. Introduction*

The internet is one of the most powerful contemporary tools used by individuals and groups to express ideas and opinions and to receive and impart information. [1] It “magnifies the voice and multiplies the information within reach of everyone who has access to it.” [2] Notwithstanding the positive aspects of this development in the realm of free speech and the exchange of ideas, the internet also provides a platform for the promotion and dissemination of hate. [3] In fact, the internet has seen a sharp rise in the number of extreme-right websites and in extreme-right activity. [4] As well as facilitating the promotion of hate, the internet has also strengthened the far-right movement more generally by bringing hate groups together and bridging previous lines of fragmentation, thereby contributing to the creation of a “collective identity that is so important to movement cohesiveness.” [5] This has occurred on an international level, “facilitating a potential global racist subculture.” [6] Although hate existed long before the creation of the internet, this technological advancement has provided an effective and accessible means of communication and expression for hate groups and individuals, whilst simultaneously adding a new dimension to the problem of regulating hate, [7] particularly due to the nature of the internet as a global and, to an extent, anonymous medium. It is the anonymity of the internet which deeply hampers the implementation of traditional legal procedures and the enforcement of traditional laws, [8] as the perpetrator cannot readily be identified; whilst the global nature of the internet means that, even if a perpetrator can be identified, bringing him or her to justice may not be possible due to jurisdictional limitations. [9] Thus, technological advances in the form of the internet have altered our conceptualisation of the State, which traditionally had jurisdiction over the activities occurring within its boundaries. To put it simply, this medium knows no borders.

In light of the significant role of the internet vis-à-vis the promotion and dissemination of hate, this paper will look at the issue of regulating the internet in the ambit of hate speech as digitally expressed by individuals and groups. To do so, it will provide a definitional framework of hate speech and an overview of the internet’s role in the ambit of hate speech, and will consider the challenges of legally regulating online hate speech through a discussion of relevant case-law as well as of the Additional Protocol to the Cybercrime Convention. The paper’s analysis emanates from the premise that, if the internet is to be dealt with in a manner which reflects an adherence to principles such as non-discrimination and equality, “a new template for addressing cross-border contracts” [10] is urgently required. To this end, a comprehensive and unified multijurisdictional approach must be adopted. However, this has proved difficult to date, particularly given the stark contrast between the approach vis-à-vis free speech adopted by the United States of America (USA), on the one hand, and that of Europe, on the other. Essentially, as will be reflected hereinafter, it is the conceptual understanding of the scope of the freedom of expression which deeply hampers the creation of an effective regulatory framework for internet hate speech.

2. Definitional Framework: Hate Speech

Hate speech does not enjoy a universally accepted definition, [11] with most States and institutions adopting their own definitions, [12] notwithstanding that the term is often incorporated in legal, policy and academic documents. [13] Although non-binding, one of the few documents which has sought to define hate speech is the Recommendation of the Council of Europe Committee of Ministers on hate speech. [14] It states that this term is to be “understood as covering all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including: intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people of immigrant origin.” Interestingly, this definition incorporates the justification of hatred as well as its spreading, incitement and promotion, allowing a broad spectrum of intentions to fall within its ambit. However, it leaves out characteristics such as sexual orientation, gender identity and disability. Hate speech has also been mentioned, but not defined, by the European Court of Human Rights (ECtHR). For example, the Court has referred to hate speech as “all forms of expression which spread, incite, promote or justify hatred based on intolerance including religious intolerance.” [15] In Vejdeland v Sweden, in the framework of homophobic speech, the Court held that it is not necessary for the speech “to directly recommend individuals to commit hateful acts”, [16] since attacks on persons can be committed by “insulting, holding up to ridicule or slandering specific groups of the population” [17] and that speech used in an irresponsible manner may not be worthy of protection. [18] Through this case, the Court drew the correlation between hate speech and the negative effects it can have on its victims, demonstrating that it is not merely an abstract notion, but one with the potential to cause harm. In addition, the Fundamental Rights Agency (FRA) of the European Union has offered two separate definitions of hate speech, the first being that it “refers to the incitement and encouragement of hatred, discrimination or hostility towards an individual that is motivated by prejudice against that person because of a particular characteristic.” [19] In its 2009 Report, the FRA held that the term hate speech, as used in the particular section, “includes a broader spectrum of verbal acts including disrespectful public discourse.” [20] The problematic part of this definition is the broad reference to disrespectful public discourse, especially since institutions such as the ECtHR extend the freedom of expression to ideas that “shock, offend or disturb.” [21] The Council Framework Decision 2008/913/JHA of 28 November 2008 on Combating Certain Forms and Expressions of Racism and Xenophobia does not directly define hate speech, but instead prohibits different forms of expression and acts that fall within the framework of “Offences Concerning Racism and Xenophobia.” [22] More specifically, Article 1 therein holds that each Member State shall punish the public incitement to violence or hatred directed against a group of persons, or a member of such a group, defined by reference to race, colour, religion, descent or national or ethnic origin; the commission of such an act through the public dissemination of material; as well as the acts of publicly condoning, denying or grossly trivialising particular crimes such as genocide.
This definition could be used in the realm of hate speech but is limited only to particular groups, leaving out others such as sexual minorities. In addition, the threshold of this definition is set at hatred or violence, and does not integrate other “softer” elements of hate speech, such as discrimination. No particular reference to internet hate was made in the above document; however, nothing in its wording prevents it from being used for cases of internet hate. In 2016, the “Code of Conduct on Countering Illegal Hate Speech Online” was signed by several IT companies and the European Commission. This document underlines that the aforementioned Council Framework Decision must be enforced by Member States in online, as well as offline, environments. In the framework of academic commentary, a plethora of definitions has been put forth to describe hate speech. Mari Matsuda offers a tripartite definition of hate speech, namely that the message is “of racial inferiority, the message is directed against historically oppressed groups and the message is persecutory, hateful and degrading.” [23] McGonagle offers a broad interpretation of hate speech, under which “virtually all racist and related declensions of noxious, identity-assailing expression could be brought within the wide embrace of the term.” [24] Alexander Tsesis has described it as a “societal virus”, [25] while Rodney Smolla refers to the lack of contribution hate speech makes to the development of society since it “cannot contribute to a societal dialogue and therefore can be ethically curtailed.” [26] Scholars such as Kent Greenawalt have pointed to the damaging consequences of such speech, arguing that “epithets and slurs that reflect stereotypes about race, ethnic group, religion and gender may reinforce prejudices and feelings of inferiority in seriously harmful ways.” [27] In discussing bans on racist speech, Post examines several arguments that have been put forth as justifications for such bans, including the “intrinsic harm of racist speech” [28] insofar as there is an “elemental wrongness” [29] to such expression, and the infliction of harm on particular groups and individuals, as well as on the marketplace of ideas. [30]

From the above definitions and the variations therein, although some common elements can be discerned, it could be argued that “hate speech seems to be whatever people choose it to mean.” [31] For the purpose of this paper, and taking into consideration that there is no one universal definition of hate speech, a broad definitional basis is embraced. As such, hate speech is hereinafter considered to mean speech that is targeted towards individuals due to their particular characteristics, such as race, ethnic origin, nationality, religion, language, sexual orientation, gender identity and/or disability.

3. The Role of the Internet vis-à-vis Hate Speech

The significant role of the internet in any modern society was recognised by the European Court of Human Rights in Times Newspapers Ltd v UK:

“in light of its accessibility and its capacity to store and communicate vast amounts of information, the Internet plays an important role in enhancing the public’s access to news and facilitating the dissemination of information generally.” [32]

However, as noted above, the internet can also host harmful expression, a reality which began to surface predominantly during the 1990s. In 1994, the UN Secretary-General noted that new technologies such as computer programmes, video games and the Minitel system in France were being used to disseminate anti-Semitic ideas. [33] In 1995, the UN Special Rapporteur on contemporary forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance recorded the growing use of electronic media for purposes of international communication between far-right groups. [34] In 1996, the UN Secretary-General officially recognised that the internet and electronic mail were increasingly being used by racist organisations to spread their ideology. [35] In 1997, the aforementioned Rapporteur noted that “the Internet has already captured the imagination of people with a message, including purveyors of hate, racists and anti-Semites.” [36] The first racist website to enter the online world was Stormfront.org, set up by a former Ku Klux Klan member and launched in 1995. [37] The dramatic rise of internet hate is reflected by the figures gathered by the Simon Wiesenthal Centre which, in 1995, recorded only one racist website, [38] whereas by 2011 its Digital Terrorism and Hate Report had found 14,000 websites, forums and social networks which promoted hate. [39] However, these figures must be treated with a degree of caution, since monitoring is complicated by the fact that websites surface and re-surface at a very fast pace. [40] Nor are websites the only sub-tool of the internet: forums, blogs, social networking sites, emails, newsletters, chat rooms and online games are also used and abused by extremist groups. Social networking sites have become “breeding grounds for racist and far-right extremist groups to spread their propaganda”, [41] with the sheer number of users, the accessibility of such platforms and the lack of pre-screening of posts or of the establishment of, inter alia, Facebook groups, rendering the prospect of regulation a daunting one. In 2008, following a complaint lodged by Martin Schulz, Facebook banned several pages used by Italian extremists to promote violence against Roma. [42] It must be noted that those who spread hate speech may use the internet to harass the victims of their rhetoric directly, to communicate amongst themselves and build up a “sense of belonging and social identity” [43] within a unified movement, but also to recruit new members through the dissemination of their ideology to unsuspecting users who may be confronted with such speech through, amongst others, web links or emails. [44] Further, hate groups attract new members, particularly young people, through the use of innovative methods such as online hate games, including “Ethnic Cleansing” and “Shoot the Blacks”.

Thus, the internet, which has been named the “network of networks”, [45] offers endless possibilities for hate groups to communicate with each other, recruit new members and harass their victims, due to its vastness, its accessibility and its nature as a boundary-free entity governed by no single institution or State. It is the very nature of the internet, and the fact that its effective regulation is contingent upon a common universal approach, which has made its regulation a particularly challenging problem for law-makers.

4. Regulation of Online Hate: An Overview

Commencing in the 1990s, several calls were made for more to be done to regulate online hate speech. In 1996, the European Commission against Racism and Intolerance (ECRI) requested Council of Europe States to ensure that expression disseminated through the internet which incites discrimination, hate or violence against racial, ethnic, national or religious groups be classed by national law as a criminal offence, and that such offences also cover the production, dissemination and storage for distribution of harmful material. [46] In 2000, ECRI issued a general policy recommendation on combating the dissemination of racist, xenophobic and anti-Semitic material via the internet, recommending that States ensure that relevant national laws also apply to material uploaded on the internet and that the perpetrators of relevant offences be prosecuted. [47] ECRI also recommended the clarification of the responsibility of content hosts, content providers and site publishers in the framework of the dissemination of racist, xenophobic and anti-Semitic content over the internet. [48] In 2001, the Declaration and Programme of Action of the Third World Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance noted that States must “implement legal sanctions, in accordance with relevant international human rights law, in respect of incitement to racial hatred through new information and communication technologies, including the internet.” [49] In 2003, partly as a response to the fact that the ECRI recommendations on internet hate regulation were not adhered to by the Member States, the Council of Europe took the first and only concrete step towards a harmonised approach to the regulation of online hate speech, through its Additional Protocol to the Cybercrime Convention. It must be noted that, although there is a general consensus amongst organisations such as the Council of Europe, the United Nations, the European Union and the Organisation for Security and Co-operation in Europe that internet hate should be regulated, [50] the UN Special Rapporteur on Freedom of Expression held that excessive regulation of the internet in order to “preserve the moral fabric and cultural identity of societies is paternalistic.” [51] However, no extrapolation was made as to what could fall within the framework of excessive regulation and, thus, no further conclusions can be drawn therefrom.

It is common practice for States to take the position that “what is illegal and punishable in an offline format must also be treated as illegal and punishable online.” [52] However, as the internet is owned by nobody and everybody and knows no physical, electronic, cyber, abstract or other boundaries, it allows its users to transmit messages beyond any such boundaries, thereby rendering control, censorship and regulation a difficult task.

5. The Jurisdictional Problem of Regulating Online Hate

It is generally accepted that the complexities created by jurisdictional issues constitute the biggest challenge in the sphere of regulating online hate. [53] This is because the internet is not marked by boundaries and cannot be comprehensively controlled or censored by individual States in particular situations; as a consequence, questions are raised as to “which law should apply and how to delimit competing jurisdictions.” [54] This results in States finding it “difficult to govern and control the flow of information inside and outside their nation states.” [55] More specifically, for purposes of the present discussion, regulation problems arise where hateful material is created within a State which does not prohibit the dissemination of xenophobic or racist material on the internet but, due to the boundary-free nature of the internet, is accessible to, or indeed uploaded by, persons residing in a State where such material is prohibited. In fact, on a technical level, it is a relatively simple task for individuals who wish to publish information that may be prohibited in some countries, including their own, to go “forum shopping” by choosing internet service providers located in countries which permit such content, so as to ensure that, notwithstanding potential restrictions in some jurisdictions, the material will remain available online. [56] Forum shopping results in the establishment of hate havens, which individuals and groups conveniently choose as hosts for their material. This is the situation in the USA, which is a haven for hate websites. [57] The majority of hate websites are based in the USA, including those which seek to avoid anti-hate legislation in their own country. [58] Countries such as Spain have sought to overcome the consequences of such havens by allowing the judiciary to block internet sites that do not adhere to Spanish law. [59] However, this method presupposes the continuous monitoring of internet sites in violation of national law, a huge and complex task which cannot possibly be carried out efficiently. Either way, the availability of havens essentially erodes the weight and role of national laws that seek to restrict online hate, as they can be circumvented by carrying out internet activity in, for example, countries which place more emphasis on the freedom of expression than on the negative effects of hate. This results in a “de facto extraterritorial application of the laws of some countries known for their robust protection of freedom of expression”, [60] which in turn contributes to the weakening of the more general principle of State sovereignty vis-à-vis the regulation of internet hate.

In Perrin v UK, the ECtHR was confronted with the question of jurisdiction in the sphere of the internet. It interpreted jurisdiction in a broad sense, holding that the fact that the material was uploaded by the applicant on a website operated, and legal, in the USA did not free him of his responsibilities under UK law, which prohibited such material. [61] As such, the Court considered itself to have competence ratione loci regarding material uploaded [62] on the internet, notwithstanding the location in which the material was uploaded. Furthermore, certain countries have also sought to overcome the issue of jurisdiction on a national level. For example, in Germany the Federal Court held that all material uploaded on the world wide web is answerable to German anti-hate legislation regardless of the country in which it was created, the only element of any significance being its accessibility to German internet users. [63]

The variation in the approaches of different States to hateful expression lies at the heart of jurisdictional limitations in the ambit of regulating internet hate. A State’s approach to restricting forms of expression will be affected by its own “political, moral, cultural, historical and constitutional values” [64] and it is, in fact, the sharp divergence of legal culture in the realm of speech between the USA and Europe which has hindered the efficacy of any regulatory measures and which has rendered the issue of jurisdiction a serious obstacle thereto. As will be reflected in the discussion on the Additional Protocol to the Cybercrime Convention, the variation in approaches between the USA and Europe has limited any formulation of a functioning regulatory framework for online hate, simply because the former adheres to an almost absolutist protection of free speech as per the First Amendment, whilst the latter accepts restrictions for purposes of ensuring that other fundamental rights and freedoms, such as that of non-discrimination, can be exercised. More specifically, in the USA, hate speech can be proscribed if it constitutes a “true threat”, a standard developed in the case of Brandenburg v Ohio, which underlines that free speech can only be limited insofar as advocacy of the use of force “is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” [65] However, the possibility of this test’s application in the sphere of the internet is doubtful, since the internet’s “impersonal contact cannot be seen as readily meeting the true threat requirement of being likely to incite imminent lawless action.” [66] As well as the true threat test, “fighting words” can also be prohibited under US law, a restriction developed in the case of Chaplinsky v New Hampshire. There, the Court held that expression can be restricted if it is made up of fighting words, deemed to be those which “inflict injury or tend to incite an immediate breach of the peace”. [67] This doctrine was considered within the framework of racist expression in R.A.V. v City of Saint Paul, in which the Supreme Court held that an ordinance which criminalised the placing of symbols such as a burning cross or a Nazi swastika on public or private property was unconstitutional under the First Amendment. [68] The Court held the ordinance unconstitutional because it “imposes special prohibitions on those speakers who express views on the disfavored subjects of race, color, creed, religion or gender” [69] and because St Paul’s objective to “communicate to minority groups that it does not condone the group hatred of bias-motivated speech does not justify selectively silencing speech on the basis of its content.” [70] Thus, given that the Supreme Court was willing to find that the burning of a cross on a black family’s lawn did not fall within the ambit of fighting words and was thus acceptable speech, it seems improbable that the threshold of the fighting words test could be met by racist or other hateful speech that takes place online.

Comparatively speaking, freedom of expression in Europe is more readily limited for purposes of preventing not only violence, but also discrimination and hate targeted at an individual or a group with a particular characteristic. This is reflected, firstly, by the fact that the freedom of expression, as protected by Article 10 of the European Convention on Human Rights, is marked by a series of limitation grounds. Subsequently, the ECtHR developed a multi-fold test to be applied in considering whether a limitation on the freedom of expression is justified, including: ascertaining whether the limitation has a legitimate aim; is proportional to the aim pursued; is necessary in a democratic society; and is effectuated for purposes of, inter alia, protecting the rights and freedoms of others. To date, the Court’s jurisprudence shows no tolerance for hate speech, as reflected in a variety of cases, [71] one of which will be discussed further below. When seeking to restrict hateful expression, the Court usually opts to use the limitation grounds found in Article 10, but has in some situations [72] applied the ECHR’s prohibition of the abuse of rights. As noted by the ECtHR, States must “fight against abuses, committed in the exercise of freedom of speech, that openly target democratic values.” [73] Thus, even though this freedom is undoubtedly significant, it must nevertheless coexist harmoniously with other rights and freedoms, with democracy having the duty militantly to protect itself from the abuse of expressive freedom.

Although it is beyond the scope of this paper to carry out an extensive comparative analysis of the stances adopted by the USA and by Europe, both as an entity in the form of the Council of Europe and as individual countries which have harmonised their legislation with the European Convention on Human Rights, it is evident that the two approaches are oceans away from each other. With such oceans constituting dividing lines, and taking into account the necessity of coherence vis-à-vis online regulation given the nature of the internet, the question of jurisdiction remains a key issue.

6. Case-Law on Regulating Internet Hate

6.1. European Court of Human Rights

In a recent case on online defamation, the Court acknowledged that although “important benefits can be derived from the internet in the exercise of freedom of expression, it is also mindful that liability for defamatory or other types of unlawful speech must, in principle, be retained.” [74] Although this statement was made within the framework of defamatory speech, reference was made to other unlawful speech, thereby incorporating hate speech as well. Moreover, it is clear that the Court is mindful that this medium can establish a framework through which unacceptable speech can emanate. Cases have arisen in which the Court made some significant observations on the general nature of the internet and the consequences arising therefrom. For example, in K.U. v Finland, which dealt with a minor who was the subject of an advertisement of a sexual nature on an internet dating site, the Court held that the anonymous character of the internet, which could be used by individuals to commit criminal offences, meant that the State has a positive obligation to provide a legal framework through which anonymous perpetrators can be identified and prosecuted. [75] It was also noted that the sheer vastness of the internet means that regulation is a tricky task and could potentially affect the rights and freedoms found in the ECHR. More specifically, in Perrin v UK, which dealt with obscene material, the Court noted that:

“the electronic network, serving billions of users worldwide is not and potentially will never be subject to the same regulations and control. The risk of harm posed by content and communications on the internet to the exercise and enjoyment of human rights and freedoms… is certainly higher than that posed by the press. Therefore, the policies governing reproduction of material from the printed media and the internet may differ. The latter undeniably have to be adjusted according to technology’s specific features in order to secure the protection and promotion of the rights and freedoms concerned.” [76]

In relation to how the Court has considered the question of the internet and the freedom of expression, it could be argued that, although the ECtHR has looked at several free speech cases interrelated with the internet, it has not yet established a coherent and all-encompassing approach to the issue. The Court has been faced with just one case relevant to the theme of internet hate, in which it replicated the positions and stances developed in its general Article 10 case-law. Specifically, in Féret v Belgium, the Court dealt with racist and xenophobic statements of the leader of the Belgian far-right party Front National, which were transmitted by him during his party’s election campaign through leaflets and posters as well as being posted on his internet site. The Court applied already established principles, such as the principle that political speech which stirs hatred based on religious, ethnic or cultural prejudices is a threat to social peace and political stability in democratic States. [77] Thus, the fact that the internet was used as one of the communication mediums did not affect the Court’s stance on hate speech, nor did the Court make particular distinctions as to the effect of the internet on the dissemination of these ideas. [78] However, it is too soon to draw concrete conclusions on the Court’s stance on online hate and whether it will, in fact, continue simply transposing its Article 10 reasoning without any further qualifications as to the relevance of the nature of the medium used. As noted, “as the wide picture of internet related issues is still unfolding, it is too early to evaluate the Court’s position in this regard.” [79]

6.2. National Case-Law

Some cases have dealt with the regulation of online hate, particularly in the form of anti-Semitic material. These cases have demonstrated the difficulty of regulating such material given the antithetical approaches adopted by European countries, on the one hand, and the USA, on the other. This has resulted in problems in the realm of the jurisdictional and technical implementation of regulatory orders.

In 1999, Frederick Toben was arrested during a visit to Germany for violating German law through the anti-Semitic material he had uploaded onto his website. A lower court found that Germany could not regulate the website as it was based in Australia, but this was later reversed by Germany’s High Court, which held that “German authorities may take legal action against foreigners who upload content that is illegal in Germany – even though the Websites may be located elsewhere.” [80] This case reflected the broad interpretation adopted by German courts of the notion of jurisdiction in the realm of the internet. However, had Toben chosen not to travel to Germany, he would not have been arrested; and even following his arrest, his website continued to run as it was hosted on a foreign server.

In the case of Yahoo! Inc. v La Ligue Contre Le Racisme et L’Antisemitisme et al, [81] two French organisations [82] commenced proceedings against Yahoo! for allegedly violating French law by offering Nazi memorabilia for auction on its website. Yahoo! contended that its activities did not fall within French jurisdiction as the content was uploaded in the USA, where such conduct and material are permitted under the First Amendment. However, this was not accepted by the French Court, which “applied an effects-based jurisdictional analysis and granted prescriptive jurisdiction describing the sale of Nazi paraphernalia.” [83] As such, the French Court held that Yahoo! was liable for its effects in France and particularly for violating Article R.645-1 of the French Criminal Code, which outlaws the sale, exchange or display of Nazi-related materials or Third Reich memorabilia. The Court required Yahoo!: to ensure that French citizens could not access the auctions of Nazi objects; to eliminate their access to web pages on Yahoo.com displaying text, extracts or quotations from Mein Kampf and the Protocols of the Elders of Zion; to post a warning to French citizens on Yahoo.fr that any search through Yahoo.com may lead to sites containing material prohibited under Article R.645-1 of the French Criminal Code and that viewing such material may result in legal action against the internet user; and to remove such material from all browser directories accessible in the French Republic. The order subjected Yahoo! to a penalty of 100,000 Francs for each day it failed to comply. [84] The order concluded that Yahoo! must “take all necessary measures to dissuade and render impossible any access via Yahoo.com to the Nazi artifact auction service and to any other site or service that may be construed as constituting an apology for Nazism or a contesting of Nazi crimes”. [85] Although Yahoo! took certain steps, such as including the required warning regarding the French Criminal Code on its Yahoo.fr website and amending its auction policy, it did not conform to all the orders and instead sought a declaratory judgment from a US court that the French order could not be enforced in the USA. In granting the judgment, the Court considered the issue of jurisdiction and the approach to free expression, underlining that:

“what is at issue here is whether it is consistent with the Constitution and laws of the United States for another nation to regulate speech by a United States resident within the United States on the basis that such speech can be accessed by Internet users in that nation.” [86]

It further held that the First Amendment does “not permit the government to engage in viewpoint-based regulation of speech absent a compelling governmental interest”, which compelling interest was not present in this case. [87]

The case of Yahoo! demonstrates the technical and legal consequences of jurisdictional issues vis-à-vis the regulation of the internet. The material was uploaded in a State which permitted its sale, but was accessible to citizens of another State where such sale was a criminal offence. Given the borderless nature of the internet, this can happen easily and readily. Although the French Court viewed jurisdiction in a broad sense, basing its interpretation on the effects of the material on its own citizens and on its own laws, and notwithstanding the issuance of a court order, this was deemed by the USA to be invalid since it could not be constitutionally justified in that country. This subsequently demonstrates that, given the vast divergence of opinion and approaches between Europe and the USA in the realm of free speech, and given that jurisdiction in the ambit of internet regulation is anything but a lucid notion, those States which seek to impose restrictions on expression and material available online will meet both legal and technical obstacles, whilst their previous ideals pertaining to national sovereignty and the conservation of their own legal culture become increasingly diluted. In fact, as noted by one commentator, “the judicial impasse of the Yahoo! case exemplifies the cultural tension inherent in attempts to regulate online speech extraterritorially”, [88] with, of course, the notion of territory taking on a different meaning in the digital era.

Further, Ernst Zündel, a German living in Canada, was “one of the world’s most prominent distributors of revisionist neo-Nazi propaganda.” [89] In 1997, the Canadian Human Rights Tribunal considered a complaint brought against Zündel and his website, the Zundelsite, [90] which was registered on a US server, on the grounds that it promoted hatred or contempt of Jews. [91] In 2002, the Tribunal decided that hate could not be tolerated on the internet or in other media and ordered Zündel to cease and desist from publishing hate messages on his website. [92] In 2005, Zündel was deported to Germany on security grounds, where he was found guilty of inciting racial hatred, libel and disparaging the dead, and in 2007 was sentenced to five years in prison. [93] Notwithstanding that the ideas and messages disseminated through his website led to the decision of the Canadian Tribunal and to his subsequent imprisonment in Germany, the Zundelsite is still running on a US server. This demonstrates that the technological nature of the internet, in addition to the divergence marking the US and European approaches to free speech, has essentially nullified one of the purposes of the Canadian and German proceedings, namely the removal of what they considered to be hate speech from the internet. The difference in approach was further manifested on a technical level, particularly following a request from Germany to the internet service provider Deutsche Telekom to prevent users from accessing Zündel’s site. Deutsche Telekom complied and, in response, users based in the USA created mirror sites, thereby making the content available to German users in alternative ways. [94] Thus, even the regulation of ISPs cannot actually ensure the prevention of access to material which a State seeks to limit.

In the 2002 case of Warman v Kyburz, which dealt with the anti-Semitic content of Kyburz’s website, the Canadian Human Rights Tribunal considered the problems posed for cease and desist orders, such as that issued in Zündel’s case, by the nature of the internet as a borderless medium and by the possibility of creating mirror sites. [95] Either way, the Tribunal found that “despite these difficulties and technical challenges, a cease and desist order can have both a practical and symbolic effect”, [96] as it prevents the ongoing publishing of hateful material and demonstrates public dismay at such hate. In relation to the former, it could safely be said that this is not the case since, for example, even following the Tribunal’s decision regarding the Zundelsite, material continued, and continues, to be uploaded to it through its US server.

The above cases reflect the difficulties of regulating online hate given the notion of jurisdiction, which is unclear in the realm of the internet and its borderless nature. Both the European Court of Human Rights and the national courts of States such as Germany and France have interpreted this notion broadly. However, as seen from Yahoo!, American courts are ready and willing to limit any effect that restrictive orders may have on internet users in the USA, always in the spirit of the First Amendment.

7. The Additional Protocol to the Convention on Cybercrime

The Council of Europe’s Convention on Cybercrime is the first multilateral treaty that aims to combat crimes committed through computer systems and has, to date, been ratified by 47 countries. [97] This Convention was signed and ratified not only by Council of Europe States, but also by the USA which, although not a member of this entity, has observer status. Interestingly, however, the USA acceded to the Convention only after the issue of online hate was removed from the table of discussions. [98] This reality demonstrates that “fundamental disagreements remain as to the most appropriate and effective strategy for preventing dissemination of racist messages on the Internet”, [99] disagreements which subsequently contribute to the weakening or even nullification of regulatory measures that may be adopted by particular States, given that internet regulation requires co-operation for both technical and legal reasons, as discussed above. To fill the resulting gaps, the Council of Europe subsequently developed the Additional Protocol to the Cybercrime Convention, which has been ratified by 24 countries. [100] The Council of Europe recognised the limitations of a unilateral approach to the issue of online hate in the form of racist or xenophobic hate and thereby sought to ensure a common set of standards for participating States and to promote co-operation amongst them in the criminalisation of relevant acts. [101] This document is seen as a “supplement” [102] to the Convention, ensuring that the latter’s procedural and substantive provisions encompass racism and xenophobia online. Thus, a series of the Convention’s articles apply mutatis mutandis to the Protocol under consideration, including, amongst others, Article 13 on sanctions and measures and Article 22 on jurisdiction.

However, even at first sight, this document comes with several significant limitations, which will be discussed hereinafter. Firstly, as its title demonstrates, the Protocol tackles only racist and xenophobic hate, completely disregarding other forms of hate on grounds including, but not limited to, sexual orientation, gender identity and disability, although religion is considered a protected characteristic within the definitional framework set out by Article 2. Thus, there seems to be an unjustified prioritisation of certain forms of online hate, with the Council of Europe almost arbitrarily seeking to regulate the effects of racism and xenophobia online, leaving victims of other types of hate without a corresponding legal framework.

The Additional Protocol to the Cybercrime Convention defines what is meant by racist and xenophobic material, underlines the measures to be taken at national level in relation to the dissemination of such material, [103] and prohibits racist and xenophobic threats and insults professed through computer systems [104] as well as the denial, gross minimisation, approval or justification of genocide or crimes against humanity. [105] The Protocol also renders the intentional aiding and abetting of any of the above a criminal offence. It must be noted that, unlike Article 9 of the Cybercrime Convention, which deals with child pornography, the Protocol does not criminalise the possession and procurement of racist and xenophobic material. [106] As noted in the Explanatory Report to the Protocol, in order to amount to an offence, racist and xenophobic material, insults and revisionist rhetoric must occur at a public level, a point which has been incorporated for purposes of adhering to Article 8 of the European Convention on Human Rights. [107]

In relation to the acts that are to be deemed offences, it becomes clear that the freedom of expression is “the sacred cow against which the legislation seeks to justify its apparent encroachment for the sake of providing a measure to prohibit cybercrimes motivated by race hate.” [108] To illustrate this, one can turn to Article 3 on the dissemination of racist and xenophobic material through computer systems, Part 1 of which provides that:

“each party shall adopt such legislative and other measures as may be necessary to establish as criminal offence under its domestic law, when committed intentionally and without right, the following conduct: distributing or otherwise making available, racist and xenophobic material to the public through a computer system.”

However, Part 3 holds that a party may reserve the right not to apply the above paragraph to those cases of discrimination for reasons of upholding free expression. Thus, the Protocol, as an initiative to combat online hate, has been “thwarted through the compromise they have made to concerns about freedom of expression”, [109] with much less regard evidently being given to freedoms such as that of non-discrimination. It could thus be argued that the Protocol undermines itself by its approach, in that the Council of Europe has placed an unequal and unjustifiable emphasis on expression rather than on non-discrimination and equality. [110]

In relation to the general limitations that may be imposed on the applicability of Article 3, Part 2 therein holds that a State may choose not to attach criminal liability to the conduct referred to in Part 1 where it does not promote violence or hatred, provided that other effective remedies are available. This is notwithstanding the fact that the Protocol’s title refers to the criminalisation of racist and xenophobic acts committed through computer systems. Whilst Article 4 on racist and xenophobic threats offers no option to disregard parts of its provisions, Article 5 on racist and xenophobic insults provides that a State has the right not to apply, in whole or in part, Part 1 of that Article, which sets out the legislative and other measures that may be adopted to criminalise racist and xenophobic insults. Although no direct reference to free expression is made here as the justification for such a limitation, it can implicitly be assumed that concerns regarding the freedom of expression led to the formulation of the aforementioned reservation, available to those States which want it. The right to reserve the non-application of a particular provision is also incorporated into the provision on the denial, gross minimisation, approval or justification of genocide or crimes against humanity. Many of the States which ratified the Protocol took the opportunity to incorporate reservations. It generally appears that Article 4 on racist and xenophobic threats is the provision granted the most protection, as it extends to private as well as public communications, unlike the other acts found in the Protocol, and gives no opt-out possibility as the others do.

The issue of intent is also significant when seeking to appraise the Protocol. This document renders the dissemination of material, threats, insults and revisionist rhetoric, as well as the aiding and abetting thereof, offences only in the event that such acts and/or expressions are effectuated and/or uttered intentionally. This is particularly significant in the realm of the liability of internet service providers, which simply constitute the platform through which problematic speech may arise. The Explanatory Report to the Additional Protocol to the Cybercrime Convention holds that the precise meaning of “intentionally” should be interpreted at the national level. [111] However, it clearly stipulated that an internet service provider which simply constitutes the host of the material cannot be found guilty of any of the Protocol’s offences if the intent required under domestic law did not exist. [112] Thus, on the one hand, the Protocol does limit the liability of unknowing ISPs, but it leaves the general conceptualisation of intent uncertain and contingent on national positions. Moreover, the Protocol does not regulate or prohibit the finding of permissive intent in the event that an ISP is made aware of racist or xenophobic material or expression and does not take the necessary measures to remove it, thereby leaving some doors open for finding potential liability in the inaction of ISPs. Such permissive intent is found, for example, in Germany’s Information and Communications Services Act of 1997, which underlines the liability of ISPs in the event that they knew of hateful content, had the ability to block it, but chose not to. [113] Further, in the realm of ISPs, the Protocol remained silent on the very significant question of jurisdiction in the event of a conflict of laws between the hosting country and another. [114] Although, for EU countries, the Directive on Electronic Commerce [115] is applicable, with Article 3 therein providing that ISPs are governed by the laws of the Member State in which they are established, [116] the situation is not clear in the event that a non-EU country is involved in a particular dispute. [117]

Although the Protocol may contribute to promoting harmonisation around agreed-upon principles and to procedural, technical and legal co-operation amongst States, it remains problematic. This is the case not only due to its inherent limitations as described above, but also due to the fact that the USA is not party to it. This, in addition to the absence of any extradition treaties between the USA and other countries in the sphere of online hate speech, deeply restricts the efficacy of the Protocol’s aims and objectives. Moreover, it may well appear that the Protocol has sought to achieve the lowest possible common denominator, perhaps for purposes of maximising ratification. Either way, the aforementioned limitations may serve as stumbling blocks to meeting the objectives of the Protocol. Furthermore, beyond the limitations resulting from an over-emphasis on the freedom of expression, it could be argued that the Protocol constitutes an ineffective base through which online hate can be restricted, since it adopts traditional conceptions of State boundaries and of State sovereignty on issues such as the freedom of expression mentioned above and, more generally, treats the issue of online hate like any issue arising through traditional means of communication, throwing in the concept of international co-operation without effectively and pragmatically considering the challenges of the internet. However, “the Internet is a very different animal from that we are used to, which requires handling in a different way”, [118] and this has not been taken on board.

8. Conclusion

Hate and hateful expression existed before the creation of the internet and will continue to exist even if tight regulation of online activity were to be achieved. [119] However, the internet has brought about “socio-technological and legal dilemmas that are difficult to handle from a legal point of view”. [120] Moreover, the issue of online hate is moving in new dimensions, with those who disseminate hate speech finding before them an array of possibilities to use and abuse the internet for purposes of communication, recruitment and victimisation. Notwithstanding that some case-law has developed at the national, transnational and regional levels, and even though the Additional Protocol to the Cybercrime Convention has been formulated, no pragmatically significant steps towards online regulation have essentially been taken. Firstly, the Protocol itself is lacking in scope, as it is arbitrarily limited to racist and xenophobic speech, whilst simultaneously limiting its own efficacy for purposes of giving particular protection to the freedom of expression. Secondly, the normative divergence between US and European understandings of free expression has dramatically affected the regulation that Europe seeks to achieve. As noted by one commentator, the global, boundary-free nature of the internet, in conjunction with the absolutist approach to expression adopted by the USA, means that “like chasing cockroaches, squashing one does not solve the problem when there are many more waiting behind the walls – or across the border”. [121] More particularly, even if a website is shut down in Germany, for example, it may almost immediately pop up again through an American host. At the same time, American courts are not ready to enforce court orders issued in European countries insofar as these are considered contrary to the First Amendment. Thus, at the heart of these differences lie fundamental conflicts of legal thought on speech. Interestingly, in the case of Yahoo!, the US court recognised that, given that no international treaty or standards were available in the realm of tackling issues of internet speech, it was bound by the First Amendment. Following the Yahoo! judgment, the USA finally had the opportunity to be part of such an agreement; however, it not only opted out of the Additional Protocol to the Cybercrime Convention, but also made its accession to the Convention contingent on the exclusion of this theme from that instrument. In brief, there is at present no intent on the part of the USA to be part of such international collaboration in the field of free speech, simply because this State’s understanding of free speech does not endorse the regulation of hate unless the high and immediacy-based thresholds discussed above are met. The result of this approach is that, due to the technical nature of the internet, the First Amendment has now taken up the position of a “default standard for free speech on the Internet”, [122] whether other States like it or not. Thus, for the moment, it is safe to say that realistic prospects of internet regulation seem unlikely, especially if traditional and purely legal methods are adopted for this purpose.

* Natalie Alkiviadou is a PhD Candidate at the VU Amsterdam and a Lecturer at the University of Central Lancashire Cyprus.



[1] The number of Internet users for 2015 was 3,185,996,155: http://www.internetlivestats.com/internet-users/ [Accessed 28 June 2016].

[2] Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, David Kaye (22 May 2015) A/HRC/29/32, para 11.

[3] Fernne Brennan, ‘Legislating against Internet Race Hate’ (2009) 18 Information and Communications Technology Law 2, 123.

[4] James Banks, ‘Regulating Hate Speech Online’ (2010) 24 International Review of Law, Computers & Technology 3, 233.

[5] Barbara Perry & Patrick Olsson, ‘Cyberhate: The Globalization of Hate’ (2009) 18 Information and Communications Technology Law 2, 185.

[6] Ibid.

[7] Dragos Cucereanu, ‘Aspects of Regulating Freedom of Expression on the Internet’ (Intersentia 2008) 7.

[8] James Banks, ‘Regulating Hate Speech Online’ (2010) 24 International Review of Law, Computers & Technology 3, 233.

[9] Christopher D. Van Blarcum, ‘Internet Hate Speech: The European Framework and the Merging American Haven’ (2005) 62 Washington and Lee Law Review 2, 783.

[10] Michael L. Rustad & Tomas H. Koenig, ‘Harmonizing Internet Law: Lessons from Europe’ (2006) 9 Journal of Internet Law 11, 3.

[11] European Court of Human Rights, Fact Sheet on Hate Speech, 2013, 1.

[12] Council of Europe Committee of Experts for the Development of Human Rights 2007, Chapter IV, pg.123, para.4.

[13] Tarlach McGonagle, ‘The Council of Europe against Online Hate Speech: Conundrums and Challenges’ Expert Paper, Institute for Information Law, Faculty of Law, http://hub.coe.int/c/document_library/get_file?uuid=62fab806-724e-435a-b7a5-153ce2b57c18&groupId=10227 [Accessed 15 August 2015] 3.

[14] Council of Europe Committee of Ministers Recommendation No. R (97) 20 on “Hate Speech”.

[15] Gündüz v Turkey, App. No 35071/97 (ECHR, 4 December 2003) para. 40; Erbakan v Turkey, App. No 59405/00 (ECHR, 6 July 2006) para. 56.

[16] Vejdeland and Others v Sweden, App. No 1813/07 (ECHR, 9 February 2012) para. 54.

[17] Ibid. para.55.

[18] Ibid.

[19] Hate Speech and Hate Crimes against LGBT Persons, Fundamental Rights Agency, 1.

[20] Homophobia and Discrimination on Grounds of Sexual Orientation and Gender Identity in the EU Member States: Part II - The Social Situation, Fundamental Rights Agency, 44.

[21] The Observer and The Guardian v The United Kingdom, App. No 13585/88 (ECHR, 26 November 1991) para. 59.

[22] Article 1, Council Framework Decision 2008/913/JHA of 28 November 2008 on Combating Certain Forms and Expressions of Racism and Xenophobia.

[23] Mark Slagle, ‘An Ethical Exploration of Free Expression and the Problem of Hate Speech’ (2009) 24 Journal of Mass Media Ethics, 242.

[24] Tarlach McGonagle, ‘Wresting Racial Equality from Tolerance of Hate Speech’ (2001) 23 Dublin University Law Journal 21, 4.

[25] Mark Slagle, ‘An Ethical Exploration of Free Expression and the Problem of Hate Speech’ (2009) 24 Journal of Mass Media Ethics, 242.

[26] Mark Slagle, ‘An Ethical Exploration of Free Expression and the Problem of Hate Speech’ (2009) 24 Journal of Mass Media Ethics, 242.

[27] Kent Greenawalt, ‘Speech, Crime and the Uses of Language’ (New York: OUP 1989), Chapter 2.

[28] Robert C. Post, ‘Racist Speech, Democracy and the First Amendment’ (1990-1991) 32 William and Mary Law Review 267, 272.

[29] Robert C. Post, 'Racist Speech, Democracy and the First Amendment' (1990-1991) 32 William and Mary Law Review 267, 272, quoting Wright, 'Racist Speech and the First Amendment' (1988) 9 Miss. C.L. Rev. 1.

[30] Ibid. 273.

[31] Roger Kiska, 'Hate Speech: A Comparison between the European Court of Human Rights and the United States Supreme Court Jurisprudence' (2012) 25 Regent University Law Review 107, 110.

[32] Times Newspapers Ltd (Nos 1 & 2) v UK, App. nos 3002/03 and 23676/03 (ECHR, 10 March 2009) para. 27.

[33] Secretary-General, Elimination of Racism and Racial Discrimination, UN GA, 48th Sess., UN Doc A/49/677 (1994).

[34] Maurice Glele-Ahanhanzo, Implementation of the Programme of Action for the Second Decade to Combat Racism and Racial Discrimination – Report of the UN Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, CHR Res.1994/64, UN ESCOR, 51st Sess., UN Doc. E/CH.4/1995/78 (1995).

[35] Secretary-General, Elimination of Racism and Racial Discrimination: Measures to Combat Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, UN GA, 51st Sess., UN Doc. A/51/301 (1996).

[36] Maurice Glele-Ahanhanzo, Implementation of the Programme of Action for the Second Decade to Combat Racism and Racial Discrimination – Report of the UN Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, CHR Res.1996/21, UN ESCOR, 53rd Sess., UN Doc. E/CN.4/1997/71 (1997).

[37] Simon Wiesenthal Report: Online Terror and Hate – The First Decade: http://www.wiesenthal.com/atf/cf/%7BDFD2AAC1-2ADE-428A-9263-35234229D8D8%7D/IREPORT.PDF pg. 7.

[38] Simon Wiesenthal Report: Online Terror and Hate – The First Decade: http://www.wiesenthal.com/atf/cf/%7BDFD2AAC1-2ADE-428A-9263-35234229D8D8%7D/IREPORT.PDF pg. 3.

[39] Simon Wiesenthal Report: Digital Terrorism and Hate Report 2011.

[40] Barbara Perry & Patrick Olsson 'Cyberhate: The Globalization of Hate' (2009) 18 Information and Communications Technology Law 2, 188.

[41] James Banks, 'Regulating Hate Speech Online' (2010) 24 International Review of Law, Computers & Technology 3, 234.

[42] Dragos Cucereanu ‘Aspects of Regulating Freedom of Expression on the Internet’ (Intersentia 2008), 16.

[43] Barbara Perry & Patrick Olsson 'Cyberhate: The Globalization of Hate' (2009) 18 Information and Communications Technology Law 2, 192.

[44] Priscilla Marie Meddaugh & Jack Kay ‘Hate Speech or Reasonable Racism? The Other Stormfront’ (2009) 24 Journal of Mass Media Ethics: Exploring Questions of Media Morality 4, 252.

[45] Dragos Cucereanu 'Aspects of Regulating Freedom of Expression on the Internet' (Intersentia 2008) 22.

[46] ECRI General Policy Recommendation Number 1 on Combating Racism, Xenophobia, Anti-Semitism and Intolerance (4 October 1996), CRI(96) 43 rev.

[47] ECRI General Policy Recommendation N°6:  Combating the Dissemination of Racist, Xenophobic and Antisemitic Material via the Internet (15 December 2000) CRI(2001)1.

[48] Ibid. pg.5.

[49] Durban Declaration and Programme of Action: http://www.un.org/WCAR/durban.pdf [Accessed 25 October 2015].

[50] Dragos Cucereanu ‘Aspects of Regulating Freedom of Expression on the Internet’ (Intersentia 2008) 67.

[51] Report of the Special Rapporteur, Mr. Abid Hussain, submitted pursuant to Commission on Human Rights resolution 1997/26 (28 January 1998) E/CN.4/1998/40, para. 45.

[52] Yaman Akdeniz 'Racism on the Internet' (Council of Europe Publishing 2009) 21.

[53] Fernne Brennan, ‘Legislating against Internet Race Hate’ (2009) 18 Information and Communications Technology Law 2, 276.

[54] Dragos Cucereanu ‘Aspects of Regulating Freedom of Expression on the Internet’ (Intersentia 2008) 254.

[55] Ibid. 22.

[56] Ibid.354.

[57] Christopher D. Van Blarcum, 'Internet Hate Speech: The European Framework and the Emerging American Haven' (2005) 62 Washington and Lee Law Review 2, 822.

[58] Agence France-Presse, 'Neo-Nazi websites reported to flee Germany' N.Y. Times (August 21, 2000).

[59] Christopher D. Van Blarcum, 'Internet Hate Speech: The European Framework and the Emerging American Haven' (2005) 62 Washington and Lee Law Review 2, 784.

[60] Dragos Cucereanu ‘Aspects of Regulating Freedom of Expression on the Internet’ (Intersentia 2008) 254.

[61] Perrin v UK, App. No 5446/03 (ECHR, 18 October 2005).

[62] Nina Vajic & Panayiotis Voyatzis, 'The Internet and Freedom of Expression: A Brave New World and the ECtHR's Evolving Case-Law.' In 'Freedom of Expression: Essays in Honour of Nicolas Bratza' (eds 2012 Wolf Legal Publishers) 401.

[63] Fernne Brennan, ‘Legislating against Internet Race Hate’ (2009) 18 Information and Communications Technology Law 2, 263.

[64] Dragos Cucereanu ‘Aspects of Regulating Freedom of Expression on the Internet’ (Intersentia 2008) 16.

[65] Brandenburg v Ohio, 395 U.S. 444, 447-49 (1969).

[66] Christopher D. Van Blarcum, 'Internet Hate Speech: The European Framework and the Emerging American Haven' (2005) 62 Washington and Lee Law Review 2, 810.

[67] Chaplinsky v New Hampshire, 315 U.S. 568, 572 (1942).

[68] R. A. V. v. St. Paul, 505 U.S. 377, 378 (1992).

[69] R. A. V. v. St. Paul, 505 U.S. 377, 391-393 (1992).

[70] R. A. V. v. St. Paul, 505 U.S. 377, 393 (1992).

[71] See, inter alia, Norwood v UK, App. no. 23131/03 (ECHR, 16 November 2004) and Féret v Belgium, App. no. 15615/07 (ECHR, 16 July 2009).

[72] Norwood v UK, App. no. 23131/03 (ECHR, 16 November 2004).

[73] Jean-François Flauss, ‘The European Court of Human Rights and the Freedom of Expression’ (2009) 84 Indiana Law Journal, 809, 837.

[74] Delfi AS v Estonia, App. no. 64569/09 (ECHR, 16 June 2015) para. 110.

[75] K.U. v Finland, App. no. 2872/02 (ECHR, 2 March 2009) paras 48-49.

[76] Perrin v UK, App. No 5446/03 (ECHR, 18 October 2005) para. 63.

[77] Féret v Belgium, App. no. 15615/07 (ECHR, 16 July 2009) para. 73.

[78] Nina Vajic & Panayiotis Voyatzis, ‘The Internet and Freedom of Expression: A Brave New World and the ECtHR’s Evolving Case-Law.’ In ‘Freedom of Expression: Essays in Honour of Nicolas Bratza’ (eds 2012 Wolf Legal Publishers) 396.

[79] Nina Vajic & Panayiotis Voyatzis, ‘The Internet and Freedom of Expression: A Brave New World and the ECtHR’s Evolving Case-Law.’ In ‘Freedom of Expression: Essays in Honour of Nicolas Bratza’ (eds 2012 Wolf Legal Publishers) 405.

[80] Christopher D. Van Blarcum, 'Internet Hate Speech: The European Framework and the Emerging American Haven' (2005) 62 Washington and Lee Law Review 2, 804.

[81] Yahoo!, Inc. v. La Ligue Contre Le Racisme et L'Antisemitisme, et al., 145 F. Supp. 2d 1168, Case No. C-00-21275JF (N.D. Cal., September 24, 2001).

[82] The League Against Racism and Anti-Semitism and The Union of Jewish Students of France.

[83] James Banks, 'Regulating Hate Speech Online' (2010) 24 International Review of Law, Computers & Technology 3, 235.

[84] Yahoo!, Inc. v. La Ligue Contre Le Racisme et L'Antisemitisme, et al., 145 F. Supp. 2d 1168, Case No. C-00-21275JF (N.D. Cal., September 24, 2001).

[85] Ibid.

[86] Ibid.

[87] Ibid.

[88] James Banks, 'Regulating Hate Speech Online' (2010) 24 International Review of Law, Computers & Technology 3, 235.

[89] Dragos Cucereanu ‘Aspects of Regulating Freedom of Expression on the Internet’ (Intersentia 2008) 67.

[90] Institute for Historical Review: The Importance of the Zündel Hearing in Toronto: http://www.ihr.org/jhr/v19/v19n5p-2_Weber.html [Accessed 23 October 2015].

[91] Ibid.

[92] Nathan Hall, Abbee Corb, Paul Giannasi, John G.D. Grieve, ‘The Routledge International Handbook on Hate Crime’ (eds. Routledge 2015).

[93] Dragos Cucereanu ‘Aspects of Regulating Freedom of Expression on the Internet’ (Intersentia 2008) 68.

[94] James Banks, 'Regulating Hate Speech Online' (2010) 24 International Review of Law, Computers & Technology 3, 281.

[95] Warman v Kyburz, 2003 CHRT 18 (9 May 2003) para. 81.

[96] Ibid. para. 82.

[98] James Banks, 'Regulating Hate Speech Online' (2010) 24 International Review of Law, Computers & Technology 3, 236.

[99] Dragos Cucereanu ‘Aspects of Regulating Freedom of Expression on the Internet’ (Intersentia 2008) 17.

[101] James Banks, 'Regulating Hate Speech Online' (2010) 24 International Review of Law, Computers & Technology 3, 236.

[102] Article 1, Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems.

[103] Additional Protocol, Article 3.

[104] Additional Protocol, Article 4 and Article 5.

[105] Additional Protocol, Article 5.

[106] Dragos Cucereanu 'Aspects of Regulating Freedom of Expression on the Internet' (Intersentia 2008) 50.

[107] Explanatory Note to the Additional Protocol, para. 29.

[108] Fernne Brennan, ‘Legislating against Internet Race Hate’ (2009) 18 Information and Communications Technology Law 2, 124.

[109] Ibid. 123.

[110] Ibid. 126.

[111] Explanatory Note to the Additional Protocol, para. 25.

[112] Ibid. para. 25.

[113] Christopher D. Van Blarcum, 'Internet Hate Speech: The European Framework and the Emerging American Haven' (2005) 62 Washington and Lee Law Review 2, 796.

[114] Ibid. 801.

[115] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce').

[116] It must be noted that the Directive provides for an exception to that choice of law where derogation by the receiving country is necessary for the prevention, investigation, detection and prosecution of criminal offences, including the 'fight against any incitement to hatred on grounds of race, sex, religion or nationality, and violations of human dignity concerning individual persons.'

[117] Christopher D. Van Blarcum, 'Internet Hate Speech: The European Framework and the Emerging American Haven' (2005) 62 Washington and Lee Law Review 2, 801.

[118] Fernne Brennan, ‘Legislating against Internet Race Hate’ (2009) 18 Information and Communications Technology Law 2, 141.

[119] James Banks, 'Regulating Hate Speech Online' (2010) 24 International Review of Law, Computers & Technology 3, 233.

[120] Barbara Perry & Patrick Olsson 'Cyberhate: The Globalization of Hate' (2009) 18 Information and Communications Technology Law 2, 196.

[121] James Banks, 'Regulating Hate Speech Online' (2010) 24 International Review of Law, Computers & Technology 3, 237.

[122] Dragos Cucereanu ‘Aspects of Regulating Freedom of Expression on the Internet’ (Intersentia 2008) 232.

