INTERNET-DRAFT                                                N. Elkins
Intended Status: Informational                                     EDCO
                                                               B. Shein
                                                  Software Tool and Die
                                                             V. Bertola
                                                           Open-Xchange
Expires: April 20, 2019                                October 17, 2018


           Human Rights Considerations of Internet Filtering
                     draft-elkins-hrpc-ifilter-00
Abstract
This document surveys the filtering of Internet content. The focus is
on the human rights involved, as cited in the Universal Declaration of
Human Rights, one of the foundational documents for HRPC. Recent years
have seen an increase in content filtering for a variety of reasons:
to further the aims of governments that wish to maintain their rule
and suppress dissent, but also to enforce cultural norms, protect
human rights and ensure compliance with the law. Filters also exist
for security (botnets, malware, etc.), user-defined policies (parental
control, corporate blocking of social networks during work time,
etc.), spam control, the upload of copyrighted material and other
reasons. This document is based on several real-world considerations:
the existence of national and regional sovereignty; the Internet
Service Providers (ISPs) and Content Distribution Networks (CDNs) that
provide connectivity and content hosting services; and the Over-the-
top providers (OTTs) and Content Delivery Platforms (CDPs) that play a
disproportionate role in capturing the attention and "eyeballs" of
many of the users of the Internet.
Status of this Memo
This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as
Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/1id-abstracts.html
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html
Copyright and License Notice
Copyright (c) 2018 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
   1  Introduction
   2  Content Filtering by States and Public Authorities
      2.1  Filtering to Prevent Freedom of Assembly or Information
      2.2  Filtering to Enforce Cultural Norms
      2.3  Filtering to Prevent Violence
      2.4  Child Pornography
      2.5  Unauthorized Gambling and Illegal E-Commerce
      2.6  User Generated Content (UGC)
   3  Content Filtering by Internet Service Providers
      3.1  Filtering for Network and Computer Security
      3.2  Filtering on Behalf of the User
      3.3  Filtering for Commercial Reasons
   4  Content Filtering by Platforms Providing Content and Services
      4.1  Enforcing Cultural Norms
      4.2  Blocking Extremist Activity
      4.3  Blocking Activity Inciting Violence
      4.4  Copyright Protection
      4.5  Filtering for Network and Computer Security
      4.6  Content Filtering by End-Users
   5  Security Considerations
   6  IANA Considerations
   7  References
      7.1  Normative References
      7.2  Informative References
   Authors' Addresses
1 Introduction
This document explores the use cases and history of the filtering of
content at the protocol and category level, grouping them by type of
entity. The focus is on the human rights involved, as cited in the
Universal Declaration of Human Rights [UDHR], one of the foundational
documents for HRPC. However, any case of content blocking has an
impact on online expression; this document therefore tries to provide
a complete picture of the reasons and mechanisms that lead to the
filtering and removal of content from the Internet.
Recent years have seen an increase in content filtering for a variety
of reasons. States, through different legal instruments and public
authorities, require the blocking of Internet content with different
aims: undemocratic governments may wish to maintain their rule and
suppress dissent, but democratic governments also use blocking to
enforce cultural norms, protect human rights and ensure compliance
with the law. Filters are also widely used by network operators and
Internet access providers for security (stopping botnets, malware,
etc.), to implement user-defined policies (parental control, corporate
blocking of social networks during work time, etc.), to reject spam
and for other reasons.
Over-the-top providers (OTTs) [WikiOTT] and Content Delivery Platforms
(CDPs) - providers like Facebook, Google (YouTube) and Twitter that
distribute streaming media or other content and services as a
standalone product directly over the Internet, bypassing
telecommunications and connectivity providers - implement filters to
prevent the upload of copyrighted material or other content that
infringes their policies; in some countries, such filters are mandated
by law. End users may also want to apply content filters or content
classification schemes at the edge of the network, for example to
protect underage users of the local network or to prevent the risk of
reaching dangerous and inappropriate websites by error.
While filtering usually attracts the most attention, there are other
ways to discriminate against content that can lead to similar results.
For example, an access provider may isolate the traffic directed to a
specific website or service and slow it down, or apply additional fees
to it, up to the point where users give up trying to connect to that
destination. Content tagging can also constitute a weaker content
discrimination mechanism; even if the content remains accessible,
marking it as dangerous or unsafe with prominent advance warnings will
discourage users from accessing it.
Some call all content filtering "censorship". For example, the
Internet-Draft [Censorship] defines it as follows:
"Censorship is where an entity in a position of power - such as a
government, organization, or individual - suppresses communication
that it considers objectionable, harmful, sensitive, politically
incorrect or inconvenient. (Although censors that engage in
censorship must do so through legal, military, or other means, this
document focuses largely on technical mechanisms used to achieve
network censorship.)"
We find that the use of the word "censorship" in this context carries
purely negative connotations. That is, using the word "censorship"
implies that the filtering of content is always "bad", or that it
always acts against important human rights. In reality, content is
filtered for many reasons, including several that may be regarded as
"good": acting to preserve human rights either directly, when hateful
and violent content is removed, or indirectly, in the case of security
filters, by providing safer Internet access that can encourage users
to spend more time and energy online and enjoy all the resulting
opportunities for education, free speech and assembly.
All in all, a balancing of rights is often at stake, and the right to
free expression of a content creator is not the only right that has
to be considered and protected. We thus feel that the entire subject
needs a more nuanced and careful examination, trying to establish
principles, guidelines and technical protocols that can increase
transparency and user control over these practices, allowing users to
distinguish between the bad and the good uses of content
classification and filtering schemes.
2 Content Filtering by States and Public Authorities
States may filter content through several legal instruments and by
order of different public authorities.
In democratic countries, cases of content blocking are usually defined
by law and justified by an appropriate balancing of rights and
needs/benefits (see section 2.2). Depending on the country, the
law may delegate to a specific public authority - either independent,
or part of the government - the power to order blocking of websites
and other content, or such power may be deferred to court orders
following due legal process. In authoritarian countries, such legal
basis and processes are often missing, and the blocking is more
focused on protecting the authority of the ruler (see section 2.1).
In technical terms, the filtering can be applied either at the IP
address level, via firewall rules or routing alterations, or at the
DNS level, by altering the results of queries for the blocked names.
The latter method is more precise, since it avoids blocking all the
other websites and services hosted on the same IP address, but it is
also easier for end users to circumvent; thus, democratic countries
usually prefer the latter method, while undemocratic ones generally
prefer the former.
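As a purely illustrative sketch, the following Python fragment
contrasts the two approaches. The blocklist entries are hypothetical,
and real deployments implement these checks in DNS resolvers and
routers rather than in application code:

   # Illustrative sketch only: DNS-level versus IP-level blocking.
   # The blocklist entries below are hypothetical examples.
   import ipaddress
   import socket

   DNS_BLOCKLIST = {"blocked.example"}      # names withheld by the resolver
   IP_BLOCKLIST = [ipaddress.ip_network("192.0.2.0/24")]  # dropped by routers

   def resolve(name):
       """DNS-level filter: behave like a resolver that returns no
       answer for blocked names and resolves everything else."""
       if name in DNS_BLOCKLIST:
           return None  # comparable to an NXDOMAIN or policy response
       return socket.gethostbyname(name)

   def ip_permitted(dst):
       """IP-level filter: the coarser method; every service sharing
       a blocked address or prefix becomes unreachable as collateral
       damage."""
       addr = ipaddress.ip_address(dst)
       return not any(addr in net for net in IP_BLOCKLIST)

   print(resolve("blocked.example"))  # None: filtered at the resolver
   print(ip_permitted("192.0.2.10"))  # False: filtered at the router

The IP-level predicate makes the precision trade-off visible: it can
only accept or reject an address, regardless of how many unrelated
services share it.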
The procedure to apply the filters usually involves the appropriate
public authority sending a list of the blocked IP addresses and/or DNS
names to all of the country's Internet access providers, requiring
them to implement it on their routers and/or DNS resolvers. Providers
that do not comply with these requests are usually subject to fines,
to the cancellation of their license to operate (in countries where
such a license exists) or even to the criminal prosecution of their
legally responsible managers.
2.1 Filtering to Prevent Freedom of Assembly or Information
What is sometimes informally called "censorship" has to do with the
action of some governments to block websites that promote dissent and
counter-information or organize protest actions and assemblies against
the government, or even entire platforms, such as Facebook or Twitter,
that might enable dissidents to organize protests.
Other filtering is done to suppress knowledge of people who
participated in protest movements being harassed, jailed or even
killed. Some governments actually shut down the Internet altogether to
prevent anyone from witnessing these actions.
These activities may all be regarded as acting against the basic human
rights enumerated in [UDHR].
2.2 Filtering to Enforce Cultural Norms
Some filtering is done via legislation to enforce cultural norms,
such as blocking sites which promote totalitarian and violent
ideologies or falsify history and news in ways that attack and
endanger certain parts of society.
For example, in several countries the advocacy of totalitarian
regimes such as nazi-fascism and communism, or of racist ideas and
practices against religious or ethnic minorities (Holocaust denial,
racism against people of African origin, etc.), is forbidden by law.
While websites located inside the country can be physically taken
down, the groups promoting these ideas often use anonymous hosting
services in foreign countries, making blocking at the Internet
provider level the only instrument available to these countries to
enforce these laws.
In the general balancing of rights, this type of content - which may
be seen as disinformation, and is generally used to promote
undemocratic practices and discrimination against specific minorities
and ethnic groups - is often considered extremely harmful to the
safety and rights of the affected minorities and to democracy and
public order in general, up to the point of overcoming the free
speech rights of the content authors.
This kind of rights balancing also depends on cultural norms, with
countries such as the United States giving priority to free speech
rights, even those of hateful authors, and countries in Europe and
Asia giving priority to general safety and social peace. Thus, the
related filtering practices have to be applied country by country,
depending on the nationality of the end user and on the applicable
jurisdiction.
2.3 Filtering to Prevent Violence
As an extension of the previous case, filtering often also applies to
content inciting violence and promoting terrorism, or making violence
easier. Its objective is to protect the right to safety of the
general population.
The EU wishes to fine Facebook, Google and others for failing to
remove problematic content. [BBCTECH] reads in part:
"If authorities flag content that incites and advocates extremism,
the content must be removed from the web within an hour, the proposal
from the EU's lead civil servant states. Net firms that fail to
comply would face fines of up to 4% of their annual global turnover."
In the United States, a federal court has issued a temporary
injunction against publishing plans for 3D-printed guns on the
Internet.
2.4 Child Pornography
Another type of content which is often blocked is child pornography,
as a way to discourage the exploitation of children for sexual
reasons and protect their safety.
A number of countries obtain the IP addresses of visitors to child
pornography sites and attempt to tie those addresses to actual human
beings so that the visitors can be prosecuted [Child-Porn].
2.5 Unauthorized Gambling and Illegal E-Commerce
In most countries, certain services are regulated and thus a license,
often connected to the payment of specific taxes and fees, is
required before being allowed to offer them online. While there may
also be an economic motivation to this, such regulation is generally
justified as protecting the safety and health of the population.
Among the most commonly regulated businesses are:
- Gambling
- Weapons
- Medical products and drugs requiring a doctor's prescription
- Alcohol
- Cigarettes and tobacco
Some countries - for example Italy [ITALY-REG] - use content filtering
to prevent access to websites offering these products for sale without
meeting the country's regulations and/or without having paid the
appropriate taxes and fees.
2.6 User Generated Content (UGC)
Legislation attempting to ensure that User Generated Content (UGC)
does not violate copyright laws has been proposed [EUCOPY]. Its
summary reads:
   "Tech giants must pay for work of artists and journalists which
   they use

   Small and micro platforms excluded from directive's scope

   Hyperlinks, 'accompanied by individual words', can be shared freely

   Journalists must get a share of any copyright-related remuneration
   obtained by their publishing house"
3 Content Filtering by Internet Service Providers
Internet Service Providers (ISPs) provide access to the Internet to
the general public. As such, they are usually required to apply any
State-mandated filters, depending on the applicable jurisdiction, as
described in section 2.
However, there are additional cases in which ISPs implement filtering,
or weaker content discrimination methods, on their own; these are
described in this section.
3.1 Filtering for Network and Computer Security
Most of the common threats to the security of the Internet - both to
network security and to the security of end users and their devices -
are based on connections to unsafe websites and services: either
services designed for malicious purposes from the beginning, or
legitimate services that have been cracked and infected with malicious
software.
For example, phishing relies on leading the user's browser to a
forgery of the website of one of the user's suppliers, such as a bank
or a utility provider. Malware, such as ransomware and viruses, is
commonly spread by connecting the user's browser to an infected
website that downloads the executable to the user's device and
launches it. Botnets rely on stable connections between the clients on
user devices (often millions of them) and one or more "command and
control" hosts, which move over time.
To counter these attacks and protect their users and their network,
ISPs often acquire timely lists of malicious hosts from specialized
providers and make them inaccessible by filtering them at the
connection level, either by IP address or by DNS name.
This practice is becoming even more common and more useful as the so-
called Internet of Things (IoT) gains adoption. IoT devices are
usually strongly automated but have very little computing power and
few security features or update capabilities, making them very
vulnerable to exploits and takeovers. Thus, protecting the home
network, rather than each individual device, becomes the most viable
solution for the security of the Internet.
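As an illustration of the feed-driven blocking described above, the
following Python sketch ingests a threat feed and builds the block
sets that a resolver or firewall would then enforce. The feed format
and all host names are invented for the example; real deployments use
commercial feeds and provision routers or DNS resolvers rather than
in-memory sets:

   # Illustrative sketch: ingest a (hypothetical) threat feed and
   # produce connection-level block entries, by IP and by DNS name.
   import ipaddress

   FEED = """\
   ip,198.51.100.7        # hypothetical botnet command-and-control host
   domain,phish.example   # hypothetical phishing landing page
   domain,malware.example # hypothetical malware distribution site
   """

   def parse_feed(text):
       blocked_ips, blocked_names = set(), set()
       for line in text.splitlines():
           entry = line.split("#")[0].strip()  # drop comments/whitespace
           if not entry:
               continue
           kind, value = entry.split(",", 1)
           value = value.strip()
           if kind == "ip":
               blocked_ips.add(ipaddress.ip_address(value))
           elif kind == "domain":
               blocked_names.add(value)
       return blocked_ips, blocked_names

   ips, names = parse_feed(FEED)
   print("phish.example" in names)  # True: connection would be refused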
3.2 Filtering on Behalf of the User
In some cases, end users actually want some content to be filtered out
and made inaccessible, so that they cannot reach it even by mistake.
Three common cases are:
-Security filters: The user explicitly asks the ISP to filter out
malicious websites, as per the previous section.
-Parental control filters: The user asks the ISP to block content
which is not deemed safe for children [UK-Controls]. This block is
usually customizable by each user, depending on their own desires, and
is requested by families with children accessing the Internet from
their home network. In some countries, the provision of this service
by ISPs is either mandated by law or required by industry self-
regulation efforts.
-Productivity filters: The user - typically a corporate network
administrator - asks the ISP to block content which is inappropriate
or disallowed in the workplace, as it would endanger the corporate
network or reduce productivity. This content usually includes social
networks, sports and leisure websites, etc.
These filters can be provided for free as part of the Internet access
service, or can constitute a specific additional service requiring
opt-in and the payment of an additional fee. Services of this type are
commonly available in several European countries, often with millions
of customers.
3.3 Filtering for Commercial Reasons
Some ISPs provide limited Internet access services that only allow
access to specific types of applications (instant messaging, for
example) or exclude access to specific types of applications (video
streaming, for example). In these cases, connections to the disallowed
content are blocked or slowed down significantly. This kind of
filtering can also depend on specific partnerships - for example, an
ISP may encourage its users to use a specific search engine by slowing
down connections to the other ones, in exchange for monetary
compensation from the preferred search engine.
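The "slowing down" mentioned above can be illustrated with a simple
token bucket, sketched below in Python. The rates and the scenario are
hypothetical, and real ISPs shape traffic in network equipment rather
than in application code:

   # Illustrative sketch of throttling rather than blocking: a token
   # bucket that caps throughput toward a disfavored destination.
   import time

   class TokenBucket:
       def __init__(self, rate_bytes_per_s, burst):
           self.rate, self.capacity = rate_bytes_per_s, burst
           self.tokens, self.last = burst, time.monotonic()

       def throttle(self, nbytes):
           """Delay the caller until nbytes may pass at the rate."""
           now = time.monotonic()
           self.tokens = min(self.capacity,
                             self.tokens + (now - self.last) * self.rate)
           self.last = now
           if nbytes > self.tokens:
               # the user-visible delay that discourages use
               time.sleep((nbytes - self.tokens) / self.rate)
               self.tokens = 0
           else:
               self.tokens -= nbytes

   # A disfavored video site capped at 50 KB/s makes streaming
   # impractical without ever returning an explicit "blocked" error.
   shaper = TokenBucket(rate_bytes_per_s=50_000, burst=50_000)
   shaper.throttle(100_000)  # sleeps about one second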
Due to concerns over the market and competition impact of these
practices, including potential limitations of user rights, they have
been made illegal in some countries, upholding the so-called "network
neutrality" principle.
4 Content Filtering by Platforms Providing Content and Services
In addition to the filters at the edge of the Internet enforced by
ISPs, either on behalf of the State or on their own, those that manage
the content and its delivery inside the network filter content as
well. Again, this may happen because of their own decisions, or
because these companies are incorporated to do business under the laws
of one or more nation-states and are therefore subject to the
regulations of those nation-states.
This kind of filtering happens in several forms. For over-the-top and
content delivery platforms (OTTs/CDPs), content may be examined and
blocked, often automatically, when the user uploads it onto the
platform, or may be verified and removed following a request by other
users or a court order. In some cases, for example in search engine
results, the content will not be blocked, but will be marked as unsafe
with a prominent warning discouraging the user from proceeding with
the connection, or will not be shown unless the user disables the
default "safe" mode.
Many of these platforms also employ policies that lead to the
exclusion of a user from the platform after a certain number of
breaches of the acceptable content guidelines, thus silencing the user
permanently (users may try to open a new account, but they lose all
their existing followers and connections).
Content Delivery Networks (CDNs) and hosting providers also have the
option of taking websites down entirely by shutting down their web
service (see section 4.3 for an example). Similarly, domain name
registries and registrars may make content temporarily inaccessible by
discontinuing the domain name registration for the hostname used in
its URLs, though, unlike CDNs and OTTs, they cannot actually remove
the content from the Internet.
While some of these filters depend on applicable laws, in most cases
the content guidelines are self-imposed, and may err on the side of
content restriction to reduce the legal risk for the platform, at the
cost of reducing the user's opportunities to speak. In some cases
these filters are managed by algorithms and artificial intelligence
applications, making it hard for the user even to understand why the
content has been blocked; often, no explanation or appeal mechanism is
provided, or the appeal is untimely and ineffective.
Even when laws apply, given the global nature of these platforms, the
applicable laws are often not those of the user's own country, and it
is almost impossible for the user to exercise any legal rights or
obtain due judicial process.
Additionally, the more a specific service is globally consolidated in
the hands of a few big competing players, the more impactful these
filters become; particularly in the case of OTT social networks, a
terminated account often cannot be adequately replaced by a new
account, whether on a competing service or on the same one.
Some examples of such situations follow.
4.1 Enforcing Cultural Norms
The following article from the Guardian [FBNorms] captures the
situation so well that we quote a lengthy passage from it:
"Facebook allows people to live-stream their suicide attempts "as
long as they are engaging with viewers" but will remove footage "once
there's no longer an opportunity to help the person". Pledges to kill
oneself through hashtags or emoticons or those that specify a fixed
date "more than five days" in the future shouldn't be treated as a
high priority.
These are tiny snippets from a cache of training materials that
Facebook content moderators need to absorb, in just two weeks, before
policing the world's largest social network.
The guidelines also require moderators to learn the names and faces
of more than 600 terrorist leaders, decide when a beheading video is
newsworthy or celebratory, and allow Holocaust denial in all but four
of the 16 countries where it's illegal - those where Facebook risks
being sued or blocked for flouting local law.
The documents detail what is and is not permitted on the platform,
covering graphic violence, bullying, hate speech, sexual content,
terrorism and self-harm. For the first time the public has a glimpse
of the thought process behind some of the company's editorial
judgements that go beyond the vague wording of its community
standards or statements made in the wake of a live-streamed murder."
The article goes on to posit that this may be the "most important
editorial guide sheet the world has ever created".
This use case raises an issue we may wish to consider. There is no
reason that Facebook, as a private company, needs to share its
filtering methodology with anyone. However, considering Facebook's
enormous impact, it is in the public interest to know that
methodology. In short, Facebook may be considered a public utility.
4.2 Blocking Extremist Activity
According to [BBCTECH], some of the content providers on the Internet
are acting to remove content pertaining to potential extremist
activity:
"In 2017, Google said it would dedicate more than 10,000 staff to
rooting out violent extremist content on YouTube
YouTube said staff had viewed nearly two million videos for violent
extremism from June to December 2017
YouTube said more than 98% of such material was flagged
automatically, with more than 50% of the videos removed having fewer
than 10 views
Industry members have worked together since 2015 to create a database
of "digital fingerprints" of previously identified content to better
detect extremist material. As of December 2017, it contained more
than 40,000 such "hashes"
In 2017, Facebook claimed that 99% of all Islamic State and al Qaeda-
related content was removed before users had flagged it. The social
network said that 83% of the remaining content was identified and
removed within an hour
Between August 2015 and December 2017, Twitter said that it had
suspended more than 1.2 million accounts in its fight to stop the
spread of extremist propaganda. It said that 93% were flagged by
internal tools, with 74% suspended before their first tweet."
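The shared database of "digital fingerprints" mentioned in the quote
can be illustrated as follows. The sketch uses a cryptographic hash as
a deliberate simplification - real systems use perceptual fingerprints
that survive re-encoding - and the sample database entry is
hypothetical:

   # Illustrative sketch of hash-based matching against previously
   # identified content.
   import hashlib

   KNOWN_EXTREMIST_HASHES = {
       # hex digest of previously flagged material (sample value:
       # the SHA-256 of the bytes b"test", used here for demonstration)
       "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
   }

   def fingerprint(data):
       return hashlib.sha256(data).hexdigest()

   def matches_known_content(upload):
       """True when the upload is byte-identical to flagged material;
       a real perceptual hash would also catch near-duplicates."""
       return fingerprint(upload) in KNOWN_EXTREMIST_HASHES

   print(matches_known_content(b"test"))  # True: matches the sample

The gap between this sketch and production systems is exactly why the
industry maintains a shared database: a cryptographic hash changes
completely if a single byte of the file changes, whereas the "hashes"
cited above are designed to identify re-uploads of known material.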
4.3 Blocking Activity Inciting Violence
The United Nations report on the genocide of the Rohingya Muslims ties
it to posts on Facebook [Myanmar]. Facebook had very few moderators
who could read Burmese, so posts were not reviewed. Posts by the
Myanmar military, intended to incite violence, indeed did so, and
there was wholesale killing of Rohingya Muslims. Facebook now removes
such posts and has hired many Burmese speakers.
In August 2017, Cloudflare, one of the leading global CDNs, terminated
the account of the Daily Stormer, a website advocating white supremacy
and antisemitism, thus removing the website from the Internet
[DailyStormer]. At the same time, several domain name registrars
(GoDaddy, Tucows, Namecheap) discontinued the domain names used by the
website. In the end, the website became accessible again by finding
registries, registrars and hosters that would accept it, but in
practice it was almost unavailable for several weeks.
4.4 Copyright Protection
Another reason for content filtering by OTTs, CDNs and hosting
services is copyright protection.
This has become a particularly active area since the EU adopted its
negotiating position on digital copyright rules (i.e., still at an
early stage) on 2018-09-12. Such rules would require all online
platforms to implement automated content control at upload time,
screening the content for copyrighted material [EU-DIGCOPY].
We may wish to study how the music industry has evolved copyright
protection over the past 100+ years in the US and elsewhere.
In brief, in the US the industry relies on designated third-party
agencies (such as BMI, ASCAP and the Harry Fox Agency) to provide
licensing, collect royalties and distribute them back to copyright
owners. Statutory fees were set by the US Congress. Private agreements
are, of course, also possible and common.
The music industry has developed a sophisticated ecosystem that,
rather than relying first on threats of criminal prosecution (which is
possible in extreme cases), tries to convert as much of the problem as
possible into civil claims ("you used my work, you owe me money").
This is in stark contrast to the EU directive, which approaches the
problem via fines and the like, and seems to create none of that
infrastructure.
4.5 Filtering for Network and Computer Security
Like ISPs, OTTs and CDNs also try to keep the network secure by
making malicious or infected websites inaccessible. Search engines
will mark results as unsafe; online platforms will disable links;
hosting services and CDNs will terminate the web service.
Some of the considerations in section 3.1 also apply here. However,
while effective filtering measures at the Internet access point fully
protect the end user, obtaining the same effectiveness by acting at
the core of the network would require all the OTTs, hosting services
and CDNs on the planet to take down malicious content in a timely
manner. Currently, this effectiveness varies; even a few "rogue"
players that are uncooperative with abuse and security takedown
requests are enough to provide safe havens for attackers.
4.6 Content Filtering by End-Users
Finally, users themselves may want to block or mark content for
several reasons. The content filtering types and purposes are the same
as those described in section 3.2, but rather than relying on the
ISP's infrastructure, users deploy appropriate software on their own
devices. This also includes user-controlled content classification
mechanisms that avoid blocking content entirely, but still allow end
users to preselect what they want to see or avoid on the Internet.
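As a final illustrative sketch, the following Python fragment shows
user-controlled classification at the edge: the user preselects which
categories to avoid, and a local filter on the device applies that
choice. The category names and site mappings are hypothetical; real
products ship curated classification databases:

   # Illustrative sketch of category-based filtering on the user's
   # own device. All names below are invented for the example.
   SITE_CATEGORIES = {
       "news.example": "news",
       "casino.example": "gambling",
       "videos.example": "streaming",
   }

   def make_filter(blocked_categories):
       """Return a per-user predicate reflecting that user's choices."""
       def allowed(hostname):
           category = SITE_CATEGORIES.get(hostname, "uncategorized")
           return category not in blocked_categories
       return allowed

   # A parent might block gambling; a corporate administrator might
   # also block streaming, per section 3.2.
   home_filter = make_filter({"gambling"})
   print(home_filter("casino.example"))  # False: blocked by the user
   print(home_filter("news.example"))    # True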
5 Security Considerations
No new security vulnerabilities are introduced as a result of this
document.
6 IANA Considerations
No IANA actions are requested by this document.
7 References
7.1 Normative References
7.2 Informative References
   [BBCTECH]      BBC News, https://www.bbc.com/news/technology-45495544,
                  September 2018.

   [Censorship]   Hall, J., Aaron, M., Jones, B., and N. Feamster, "A
                  Survey of Worldwide Censorship Techniques", Work in
                  Progress, https://tools.ietf.org/html/draft-hall-
                  censorship-tech-05, May 2018.

   [Child-Porn]   https://gizmodo.com/fbis-disturbing-hacking-powers-
                  challenged-in-court-over-1794885187, May 2017.

   [DailyStormer] Prince, M., "Why We Terminated Daily Stormer",
                  https://blog.cloudflare.com/why-we-terminated-daily-stormer/,
                  August 2017.

   [EU-DIGCOPY]   European Parliament, http://www.europarl.europa.eu/news/en/press-
                  room/20180906IPR12103/parliament-adopts-its-position-on-digital-
                  copyright-rules, September 2018.

   [EUCOPY]       European Parliament, http://www.europarl.europa.eu/news/en/press-
                  room/20180906IPR12103/parliament-adopts-its-position-on-digital-
                  copyright-rules, September 2018.

   [FBNorms]      https://www.theguardian.com/news/2017/may/22/facebook-
                  moderator-guidelines-extreme-content-analysis, May 2017.

   [ITALY-REG]    https://www.adm.gov.it/portale/lagenzia/monopoli-
                  comunica/contrasto-illegalita, January 2007.

   [Myanmar]      https://www.theguardian.com/technology/2018/aug/27/facebook-removes-
                  accounts-myanmar-military-un-report-genocide-rohingya, August 2018.

   [UDHR]         United Nations, "Universal Declaration of Human Rights",
                  http://www.un.org/en/universal-declaration-human-rights/, 1948.

   [UK-Controls]  https://www.ispreview.co.uk/index.php/2017/10/uk-gov-
                  softens-stance-mandatory-isp-filters-adult-internet-content.html,
                  October 2017.

   [WikiOTT]      Wikipedia, "Over-the-top media services",
                  https://en.wikipedia.org/wiki/Over-the-top_media_services,
                  October 2018.
Authors' Addresses
Nalini Elkins
Enterprise Data Center Operators (EDCO)
EMail: nalini.elkins@e-dco.com
Barry Shein
Software Tool and Die
EMail: bzs@theworld.com
Vittorio Bertola
Open-Xchange
EMail: vittorio.bertola@open-xchange.com