<?xml version="1.0" encoding="US-ASCII"?>
<!-- This template is for creating an Internet Draft using xml2rfc,
which is available here: http://xml.resource.org. -->
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY RFC1122 PUBLIC "" "http://xml.resource.org/public/rfc/bibxml/reference.RFC.1122.xml">
<!ENTITY RFC2775 PUBLIC "" "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2775.xml">
<!ENTITY RFC3724 PUBLIC "" "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3724.xml">
<!ENTITY RFC4033 PUBLIC "" "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4033.xml">
<!ENTITY RFC4084 PUBLIC "" "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4084.xml">
<!ENTITY RFC4301 PUBLIC "" "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4301.xml">
<!ENTITY RFC4924 PUBLIC "" "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4924.xml">
<!ENTITY RFC5246 PUBLIC "" "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5246.xml">
<!ENTITY RFC5782 PUBLIC "" "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5782.xml">
<!ENTITY RFC6480 PUBLIC "" "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6480.xml">
]>
<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>
<!-- used by XSLT processors -->
<!-- For a complete list and description of processing instructions (PIs),
please see http://xml.resource.org/authoring/README.html. -->
<!-- Below are generally applicable Processing Instructions (PIs) that most I-Ds might want to use.
(Here they are set differently than their defaults in xml2rfc v1.32) -->
<?rfc strict="yes" ?>
<!-- give errors regarding ID-nits and DTD validation -->
<!-- control the table of contents (ToC) -->
<?rfc toc="yes"?>
<!-- generate a ToC -->
<?rfc tocdepth="4"?>
<!-- the number of levels of subsections in ToC. default: 3 -->
<!-- control references -->
<?rfc symrefs="yes"?>
<!-- use symbolic references tags, i.e, [RFC2119] instead of [1] -->
<?rfc sortrefs="yes" ?>
<!-- sort the reference entries alphabetically -->
<!-- control vertical white space
(using these PIs as follows is recommended by the RFC Editor) -->
<?rfc compact="yes" ?>
<!-- do not start each main section on a new page -->
<?rfc subcompact="no" ?>
<!-- keep one blank line between list items -->
<!-- end of list of popular I-D processing instructions -->
<rfc category="info" docName="draft-barnes-blocking-considerations-01.txt"
ipr="trust200902">
<!-- category values: std, bcp, info, exp, and historic
ipr values: full3667, noModification3667, noDerivatives3667
you can add the attributes updates="NNNN" and obsoletes="NNNN"
they will automatically be output with "(if approved)" -->
<!-- ***** FRONT MATTER ***** -->
<front>
<!-- The abbreviated title is used in the page header - it is only necessary if the
full title is longer than 39 characters -->
<title abbrev="Blocking Considerations">Technical Considerations for
Internet Service Blocking</title>
<!--add 'role="editor"' below for the editors if appropriate -->
<author fullname="Richard Barnes" initials="R." surname="Barnes">
<organization>BBN Technologies</organization>
<address>
<postal>
<street>1300 N. 17th St</street>
<!-- Reorder these if your country does things differently -->
<city>Arlington</city>
<region>VA</region>
<code>22209</code>
<country>USA</country>
</postal>
<phone>+1 703 284 1340</phone>
<email>rbarnes@bbn.com</email>
<!-- uri and facsimile elements may also be added -->
</address>
</author>
<author fullname="Alissa Cooper" initials="A." surname="Cooper">
<organization>Center for Democracy &amp; Technology</organization>
<address>
<email>acooper@cdt.org</email>
</address>
</author>
<author fullname="Olaf Kolkman" initials="O." surname="Kolkman">
<organization>NLnet Labs</organization>
<address>
<email>olaf@nlnetlabs.nl</email>
</address>
</author>
<date day="16" month="July" year="2012"/>
<!-- If the month and year are both specified and are the current ones, xml2rfc will fill
in the current day for you. If only the current year is specified, xml2rfc will fill
in the current day and month for you. If the year is not the current one, it is
necessary to specify at least a month (xml2rfc assumes day="1" if not specified for the
purpose of calculating the expiry date). With drafts it is normally sufficient to
specify just the year. -->
<!-- Meta-data Declarations -->
<area>General</area>
<workgroup>Network Working Group</workgroup>
<!-- WG name at the upperleft corner of the doc,
IETF is fine for individual submissions.
If this element is not present, the default is "Network Working Group",
which is used by the RFC Editor as a nod to the history of the IETF. -->
<abstract>
<t>The Internet is structured to be an open communications medium. This
openness is one of the key underpinnings of Internet innovation, but it
can allow communications that may be viewed as either desirable or
undesirable by different parties. Thus, as the Internet has grown, so
have mechanisms to limit the extent and impact of abusive or allegedly
illegal communications. Recently, there has been an increasing emphasis
on "blocking", the active prevention of abusive or allegedly illegal
communications. This document examines several technical approaches to
Internet content blocking in terms of their alignment with the overall
Internet architecture. In general, the approach to content blocking that
is most coherent with the Internet architecture is to inform endpoints
about potentially undesirable services, so that the communicants can
avoid engaging in abusive or illegal communications.</t>
</abstract>
</front>
<middle>
<section title="Introduction">
<t>The original design goal of the Internet was to enable communications
between hosts. As this goal was met and people started using the
Internet to communicate, however, it became apparent that some hosts
were engaging in arguably undesirable communications. The most famous
early example of undesirable communications was the Morris worm, which
used the Internet to infect many hosts in 1988. As the Internet has
evolved into a rich communications medium, so have mechanisms to
restrict undesirable communications.</t>
<t>Efforts to restrict or deny access to Internet resources have evolved
over time. As noted in <xref target="RFC4084"/>, some Internet service
providers impose restrictions on which applications their customers may
use and which traffic they allow on their networks. These restrictions
are often imposed with customer consent, where customers may be
enterprises or individuals. Increasingly, however, both governmental and
private sector entities are seeking to block access to certain content,
traffic, or communications without the knowledge or agreement of
affected users. Where these entities do not directly control networks,
they aim to make use of intermediary systems to effectuate the
blocking.</t>
<t>Entities may seek to block Internet content for a diversity of
reasons, including defending against security threats, restricting
access to content thought to be objectionable, and preventing illegal
activity. While blocking remains highly contentious in many cases, the
desire to restrict access to content will likely continue to exist.</t>
<t>This document aims to clarify the technical implications and
trade-offs of various blocking strategies and to identify the potential
for different strategies to come into conflict with the Internet's
architecture or cause harmful side effects ("collateral damage"). The
strategies broadly fall into three categories:</t>
<t><list style="numbers">
<t>Control by intermediaries</t>
<t>Manipulation of authoritative data</t>
<t>Reputation and authentication systems</t>
</list>Examples of blocking or attempted blocking using the DNS, HTTP
proxies, domain name seizures, spam filters, and RPKI manipulation are
used to illustrate each category's properties.</t>
<t>Whether particular forms of blocking are lawful in particular
jurisdictions raises complicated legal questions that are outside the
scope of this document.</t>
</section>
<section title="Architectural Principles">
<t>To understand the implications of different blocking strategies, it
is important to understand the key principles that have informed the
design of the Internet. While much of this ground has been well trod
before, this section highlights four architectural principles that have
a direct impact on the viability of content blocking: end-to-end
connectivity, layering, distribution and mobility, and locality and
autonomy.</t>
<section title="End-to-End Connectivity and &quot;Transparency&quot;">
<t>The end-to-end principle is "the core architectural guideline of
the Internet" <xref target="RFC3724"/>. Adherence to the principle of
vesting endpoints with the functionality to accomplish end-to-end
tasks results in a "transparent" network in which packets are not
filtered or transformed en route <xref target="RFC2775"/>. This
transparency in turn is a key requirement for providing end-to-end
security features on the network. Modern security mechanisms that rely
on trusted hosts communicating via a secure channel without
intermediary interference enable the network to support e-commerce,
confidential communication, and other similar uses.</t>
<t>The end-to-end principle is fundamental for Internet security, and
the foundation on which Internet security protocols are built.
Protocols such as TLS and IPsec <xref target="RFC5246"/><xref
target="RFC4301"/> are designed to ensure that each endpoint of the
communication knows the identity of the other endpoint, and that only
the endpoints of the communication can access the secured contents of
the communication. For example, when a user connects to a bank's web
site, TLS ensures that the user's banking information is communicated
to the bank and nobody else.</t>
<t>Some blocking strategies require intermediaries to insert
themselves within the end-to-end communications path, potentially
breaking security properties of Internet protocols. In these cases it can be difficult or
impossible for endpoints to distinguish between attackers and the
entities conducting blocking.</t>
<t>A similar notion to the end-to-end principle is the notion of
"transparency," that is, the idea that the network should provide a
generic connectivity service between endpoints, with minimal
interaction by intermediaries aside from routing packets from source
to destination. In "Reflections on Internet Transparency" <xref
target="RFC4924"/>, the IAB assessed the relevance of this
principle and concluded that "far from having lessened in relevance,
technical implications of intentionally or inadvertently impeding
network transparency play a critical role in the Internet's ability to
support innovation and global communication".</t>
</section>
<section title="Layering">
<t>Internet applications are built out of a collection of
loosely-coupled components or "layers." Different layers serve
different purposes, such as routing, transport, and naming (see <xref
target="RFC1122"/>, especially Section 1.1.3). The functions at these
layers are developed autonomously and almost always operated by
different entities. For example, in many networks, physical and
link-layer connectivity is provided by an "access provider", while IP
routing is performed by an "Internet service provider" -- and
application-layer services are provided by a completely separate
entity (e.g., a web server). Upper-layer protocols and applications
rely on combinations of lower-layer functions in order to work. As a
consequence of the end-to-end principle, functionality at higher
layers tends to be more specialized, so that many different
specialized applications can make use of the same generic underlying
network functions.</t>
<t>As a result of this structure, actions taken at one layer can
affect functionality or applications at higher layers. For example,
manipulating routing or naming functions to restrict access to a
narrow set of resources via specific applications will likely affect
all applications that depend on those functions.</t>
<t>In a similar manner, physical distances grow as one moves up the
stack. A host must be physically connected to a link-layer access
provider network, and its distance from its ISP is limited by the
length of a link, but Internet applications can be delivered by a host
anywhere in the world.</t>
<t>Thus, as one considers changes at each layer of the stack, changes
at higher layers become more specific in terms of application, but
more broad in terms of impact. Changes to an access network will only
affect a relatively small, well-defined set of users (namely, those
connected to the access network), but can affect all applications for
those users. Changes to an application service can affect users across
the entire Internet, but only for that specific application.</t>
</section>
<section title="Distribution and Mobility">
<t>The Internet is designed as a distributed system both
geographically and topologically. Resources can be made globally
accessible regardless of their physical location or connectivity
providers used. Resources are also highly mobile -- moving content
from one physical or logical address to another can often be easily
accomplished.</t>
<t>This distribution and mobility underlies a large part of the
resiliency of the Internet. Internet routing can survive major outages
such as cuts in undersea fibers because the distributed routing system
of the Internet allows individual networks to collaborate to route
traffic. Application services are commonly protected using distributed
servers. For example, even though the 2010 earthquake in Haiti
destroyed almost all of the Internet infrastructure in the country,
the Haitian top-level domain name (.ht) had no interruption in service
because it was also accessible via servers in the United States,
Canada, and France.</t>
<t>Undesirable communications also benefit from this resiliency --
resources that are blocked or restricted in one part of the Internet
can be reconstituted in another part of the Internet, creating a
"water balloon" effect. If a web site is prevented from using a domain
name or set of IP addresses, the web site can simply move to another
domain name or network.</t>
</section>
<section title="Locality and Autonomy">
<t>The basic unit of Internet routing is an "Autonomous System" -- a
network that manages its own routing internally. The concept of
autonomy is present in many aspects of the Internet, as is the related
concept of locality, the idea that local changes should not have a
broader impact on the network.</t>
<t>These concepts are critical to the stability and scalability of the
Internet. With millions of individual actors engineering different
parts of the network, there would be chaos if every change had impact
across the entire Internet.</t>
<t>Locality implies that the impact of technical changes made to
realize blocking will only be within a defined scope. As discussed
above, this scope might be narrow in one dimension (set of users or
set of applications affected) but broad in another. Changes made to
effectuate blocking are often targeted at a particular locality, but
result in blocking outside of the intended scope.</t>
</section>
</section>
<section title="Examples of Blocking">
<t>As noted above, systems to restrict or block Internet communications
have evolved alongside the Internet technologies they seek to restrict.
Looking back at the history of the Internet, there have been several
such systems deployed, with varying degrees of effectiveness.</t>
<t><list style="symbols">
<t>Firewalls: Firewalls are a very common form of service blocking,
employed at many points in today's Internet. Typically, firewalls
block according to content-neutral rules, e.g., blocking all inbound
connections or outbound connections on certain ports. Firewalls can
be deployed either on end hosts (under user control), or at network
boundaries.</t>
<t>Web Filtering: HTTP and HTTPS are common targets for blocking and
filtering, typically targeted at specific URLs. Some enterprises use
HTTP blocking to block non-work-appropriate web sites, and several
nations require HTTP and HTTPS filtering by their ISPs in order to
block illegal content. HTTPS is a challenge for these systems,
because the URL in an HTTPS request is carried inside the secure
channel. To block access to content made accessible via HTTPS, filtering systems must therefore either block based only on IP
address, or else obtain a trust anchor certificate that is trusted
by endpoints (and thus act as a man in the middle).</t>
<t>Spam Filtering: Spam filtering is one of the oldest forms of
service blocking, in the sense that it denies spammers access to
recipients' mailboxes. Spam filters evaluate messages based on a
variety of criteria and information sources to decide whether a
given message is spam. For example, DNS Reverse Black Lists use the
reverse DNS to flag whether an IP address is a known spam source
<xref target="RFC5782"/>. Spam filters are typically either
installed on user devices (e.g., in a mail client) or operated by a
mail domain on behalf of users.</t>
<t>Domain name seizure: In recent years, US law enforcement
authorities have been issuing legal orders to domain name registries
to seize domain names associated with the distribution of
counterfeit goods and other allegedly illegal activity <xref
target="US-ICE"/>. When domain names are seized, DNS queries for the
seized names are typically redirected to resolve to U.S. government
IP addresses that host information about the seizure. Domain name seizures
conflict with the DNS security architecture <xref target="RFC4033"/> (since they involve manipulation
of authoritative DNS data), layering (since it is the content that
is the target, not the name itself), mobility (since the allegedly
illegal activity can easily relocate to a different domain name),
and locality (since content is blocked not only within the
jurisdiction of the seizure, but globally, even when it may be
affirmatively legal elsewhere <xref target="RojaDirecta"/>).</t>
<t>Safe Browsing: Modern web browsers provide some measures to prevent users from accessing
malicious web sites. For instance, before loading a URL, current
versions of Google Chrome and Firefox web browsers use the Google
Safe Browsing service to determine whether or not a given URL is
safe to load <xref target="SafeBrowsing"/>. The DNS can also be used
to mark domains as safe or unsafe <xref target="RFC5782"/>.</t>
<t>Interference with routing and addressing data: Governments have recently intervened in the management of IP addressing and routing information in order to maintain control over a specific set of DNS servers. As part of
an internationally coordinated response to the DNSChanger malware, a
Dutch court ordered the RIPE NCC to freeze the accounts of several
resource holders as a means to limit the resource holders' ability to use certain
address blocks <xref target="GhostClickRIPE"/>. These
actions have led to concerns that the resource certification system and
related secure routing technologies developed by the IETF SIDR
working group might be subject to government manipulation as well
<xref target="RFC6480"/>, potentially for the purpose of denying targeted networks access to the Internet.</t>
</list></t>
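<t>As an illustration of the DNSBL mechanism mentioned above, the name
to query is formed by reversing the octets of an IPv4 address and
appending the list's zone; a listed address conventionally returns an A
record in 127.0.0.0/8 <xref target="RFC5782"/>. The sketch below shows
only the name construction (the zone name is a placeholder, and no
actual DNS query is performed):</t>
<figure><artwork><![CDATA[
```python
def dnsbl_query_name(ipv4, zone="dnsbl.example"):
    """Build the DNSBL query name for an IPv4 address (RFC 5782).

    "dnsbl.example" is a placeholder zone, not a real list.
    """
    octets = ipv4.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(reversed(octets)) + "." + zone

# A client would then look up A records for this name; an answer
# of 127.0.0.x conventionally means "listed".
```
]]></artwork></figure>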
</section>
<section title="Blocking Design Patterns">
<t>Considering a typical end-to-end Internet communication, there are
three logical points at which blocking mechanisms can be put in place:
the middle and either end. Mechanisms based in the middle usually involve an
intermediary device in the network that observes Internet traffic and
decides which communications to block. At the service end of a communication,
authoritative databases (such as the DNS) and servers can be manipulated
to deny or alter service delivery. At the user end of a communication,
authentication and reputation systems enable user devices (and users) to make
decisions about which communications should be blocked.</t>
<t>In this section, we discuss these three "blocking design patterns"
and how they align with the Internet architectural principles outlined
above. In general, the third pattern -- informing user devices of which
services should be blocked -- is the most coherent with the Internet
architecture.</t>
<section title="Intermediary-Based Blocking">
<t>A common goal for blocking systems is for the system to be able to
block communications without the consent or cooperation of either
endpoint to the communication. Such systems are thus implemented using
intermediary devices in the network, such as firewalls or filtering
systems. These systems inspect user traffic as it passes through the
network, decide based on the content of a given communication
whether it should be blocked, and then block or allow the
communication as desired.</t>
<t>Common examples of intermediary-based filtering are firewalls and
network-based web-filtering systems. For example, web filtering
devices usually inspect HTTP requests to determine the URL being
requested, compare that URL to a list of black-listed or white-listed
URLs, and allow the request to proceed only if it is permitted by
policy (or at least not forbidden). Firewalls perform a similar
function for other classes of traffic in addition to HTTP. Note that
this class does not cover cases where the intermediary is authorized
by the endpoints to act on an endpoint's behalf (e.g., mail servers),
since these involve the cooperation of at least one affected
endpoint.</t>
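<t>The URL-filtering decision described above can be sketched as
follows. This is a minimal illustration, not any real product: it
extracts the requested URL from a cleartext HTTP request and compares it
against a hypothetical blacklist. Note that the filter needs the
request, not the response, to make its decision.</t>
<figure><artwork><![CDATA[
```python
# Hypothetical policy list for illustration only.
BLACKLIST = {"blocked.example/forbidden.html"}

def filter_http_request(raw_request):
    """Return "block" or "allow" for one cleartext HTTP request."""
    lines = raw_request.split("\r\n")
    method, path, _version = lines[0].split(" ")
    host = ""
    for header in lines[1:]:
        if header.lower().startswith("host:"):
            host = header.split(":", 1)[1].strip()
    url = host + path
    return "block" if url in BLACKLIST else "allow"
```
]]></artwork></figure>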
<t>Accomplishing blocking in this way conflicts with the end-to-end
and transparency principles noted above. The very goal of such blocking
is to impede transparency for particular content or
communications. For this reason, such systems run into several technical
issues that limit their viability in practice. In particular, many
issues arise from the fact that an intermediary needs to have access
to a sufficient amount of traffic to make its blocking determination.</t>
<t>The first challenge to obtaining this traffic is simply gaining
access to the constituent packets. The Internet is designed to deliver
packets from source to destination -- not to any particular point
along the way. In practice, inter-network routing is often asymmetric, and for
sufficiently complex local networks, intra-network traffic flows can
be asymmetric as well.</t>
<t>This asymmetry means that an intermediary will often see only one
half of a given communication (if it sees any of it at all), limiting
its ability to make decisions based on the content of the
communication. For example, a URL-based filter cannot make blocking
decisions if it only has access to HTTP responses (not requests).
Routing can sometimes be forced to be symmetric within a given
network using routing configuration or layer-2 mechanisms (e.g.,
MPLS), but these mechanisms are frequently brittle, complex, and
costly -- and often reduce network performance relative to asymmetric
routing.</t>
<t>If an intermediary blocking device can access the packets that
constitute a communication, then the next question is whether the
intermediary can access the application content within these packets.
If the application content is encrypted with a security protocol
(e.g., IPsec or TLS), then the intermediary will require the ability to decrypt the packets to examine application content. Since security
protocols are designed to provide end-to-end security (i.e., to
prevent intermediaries from examining content), the intermediary would need
to masquerade as one of the endpoints, breaking the authentication in
the security protocol, reducing the security of the users and
services affected, and interfering with private communication.</t>
<t>If the intermediary is unable to decrypt the security protocol,
then its blocking determinations for secure sessions can only be based
on unprotected attributes, such as IP addresses and port numbers. Some
blocking systems today still attempt to block based on these
attributes, for example, blocking TLS traffic to known proxies that
could be used to tunnel through the blocking system.</t>
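<t>Such attribute-based blocking amounts to matching the unprotected
remnant of the flow against a list of known endpoints, as in the
following sketch (the address is from documentation space and is a
purely illustrative "known proxy" entry):</t>
<figure><artwork><![CDATA[
```python
# When payloads are encrypted, an intermediary can match only
# unprotected attributes such as the destination IP and port.
BLOCKED_ENDPOINTS = {("198.51.100.7", 443)}  # hypothetical proxy

def classify_flow(dst_ip, dst_port):
    """Return "block" or "allow" based only on unprotected attributes."""
    if (dst_ip, dst_port) in BLOCKED_ENDPOINTS:
        return "block"
    return "allow"
```
]]></artwork></figure>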
<t>However, as the Telex project recently demonstrated, if an endpoint
cooperates with a server, it can create a TLS tunnel that is
indistinguishable from legitimate traffic <xref target="Telex"/>. For
example, if a banking website operated a Telex server, then a blocking
system would be unable to distinguish legitimate encrypted banking
traffic from Telex-tunneled traffic to that server (potentially
carrying content that the blocking system would have blocked).</t>
<t>Thus, in principle it is impossible to prevent tunnelling through
an intermediary device without blocking all secure traffic. (The only
limitation in practice is the requirement for special software on the
client.) In most cases, blocking all secure traffic is an unacceptable
consequence of blocking, since security is often required for services
such as online commerce, enterprise VPNs, and management of critical
infrastructure. If governments or network operators were to force these services to use
insecure protocols so as to effectuate blocking, they would expose their users to the various
attacks that the security protocols were put in place to prevent.</t>
<t>It may be tempting for those operating blocking systems to assume that
tunneling through intermediaries is sufficiently difficult that the average
user will not attempt it. Under that assumption, one might decide that there
is no need to control secure traffic, and thus that intermediary-based blocking
is an attractive option. However, the longer such blocking systems
are in place, the more likely it is that efficient and easy-to-use tunnelling
tools will become available. The proliferation of the Tor network, for example, and its
increasingly sophisticated blocking-avoidance techniques demonstrate that
there is energy behind this trend <xref target="Tor"/>.</t>
<t>Blocking via intermediaries is thus only effective in a fairly
constrained set of circumstances. First, the routing structure of the
network needs to be such that the intermediary has access to any
communications it intends to block. Second, the blocking system needs an
out-of-band mechanism to mitigate the risk of secure protocols being
used to avoid blocking (e.g., human analysts identifying IP addresses
of tunnel endpoints), which may be resource-prohibitive, especially if tunnel endpoints begin to change frequently. If the network is sufficiently complex, or the
risk of tunneling too high, then intermediary-based blocking is
unlikely to be effective.</t>
</section>
<section title="Server-Based Blocking">
<t>Internet services are driven by physical devices such as web
servers, DNS servers, certificate authorities, or WHOIS databases.
These devices control the structure and availability of Internet
applications by providing data elements that are used by application
code. For example, changing an A or AAAA record on a DNS server will
change the IP address that is bound to a given domain name;
applications trying to communicate with the host at that name will
then communicate with the host at the new address.</t>
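<t>The dependency described above can be modeled in a few lines.
This is a toy model with hypothetical zone contents, not real DNS code:
because applications resolve a name at connection time, whoever controls
the authoritative record controls where clients end up.</t>
<figure><artwork><![CDATA[
```python
# Hypothetical authoritative zone data.
ZONE = {"www.example.com": "192.0.2.10"}

def connect(name):
    address = ZONE[name]  # stand-in for a real DNS lookup
    return "connecting to " + address

ZONE["www.example.com"] = "203.0.113.5"  # operator changes the A record
# Subsequent connections follow the new record automatically.
```
]]></artwork></figure>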
<t>As physical objects, the servers that underlie Internet
applications exist within the jurisdiction of governments, and their operators are
thus subject to certain local laws. It is thus possible for laws to be
structured to facilitate blocking of Internet services operated within a
jurisdiction, either via direct government action or by allowing
private actors to demand blocking (e.g., through lawsuits).</t>
<t>The "seizure" of domain names discussed above is an example of this
type of blocking. Government officials required the operators of the
parent zones of a target name (e.g., "com" for "example.com") to direct
queries for that name to a set of government-operated name servers.
Users of services under a target name would thus be unable to locate
the correct servers for that name, denying them the ability to access
these services. The action of the Dutch police against the RIPE NCC is
of a similar character, limiting the ability of certain ISPs to manage
their Internet services by controlling their WHOIS information. </t>
<t>Blocking services by disabling or manipulating servers does respect
the end-to-end principle, since the affected server is one end of the
blocked communication. However, its interactions with layering,
resource mobility, and autonomy can limit its effectiveness and cause
undesirable consequences. </t>
<t>The layered architecture of the Internet means that there are
several points at which access to a service can be blocked. The
service can be denied Internet access (via control of routers), DNS
services (DNS servers), or application-layer services (application
servers, e.g., web servers). Blocking via these channels, however, is
both amplified and limited by the global nature of the Internet.</t>
<t>On the one hand, the global nature of Internet resources amplifies
blocking actions, in the sense that it increases the risk of
overblocking -- collateral damage to legitimate use of a resource. A given network or
domain name might host both legitimate services and services that governments desire to block. A service hosted under a domain name and operated
in a jurisdiction where it is considered undesirable might be
considered legitimate in another jurisdiction; a blocking action in
the host jurisdiction would deny legitimate services in the other.</t>
<t>On the other hand, the distributed and mobile nature of Internet resources limits
the effectiveness of blocking actions. Because an Internet service can
be reached from anywhere on the Internet, a service that is blocked in
one jurisdiction can often be moved or re-instantiated in another
jurisdiction. Likewise, services that rely on blocked resources can
often be rapidly re-configured to use non-blocked resources. For
example, the technique of "snowshoe spamming" is already widely used
to spread spam generation across a variety of resources and
jurisdictions to prevent spam blocking from being effective.</t>
<t>The efficacy of server-based blocking is further limited by the
autonomy principle discussed above. If the Internet community
realizes that a blocking decision has been made and wishes to counter
it, then local networks can "patch" the authoritative data to avoid
the blocking. For example, in 2008, Pakistan Telecom attempted to deny
access to YouTube within Pakistan by announcing bogus routes for YouTube
address space to peers in Pakistan. YouTube was temporarily denied
service on a global basis due to a route leak, but service was
restored in approximately two hours because network operators around the world
re-configured their routers to ignore the blocking routes <xref
target="RenesysPK"/>. In the context of SIDR and secure routing, a
similar re-configuration could be done if a resource certificate were
to be revoked in order to block routing to a given network.</t>
<t>In the DNS context, similar work-arounds are available. If a domain
name were blocked by changing authoritative records, network operators
can restore service simply by extending TTLs on cached pre-blocking
records in recursive resolvers, or by statically configuring resolvers
to return un-blocked results for the affected name. Indeed these
techniques are commonly used in practice to provide service to domains
that have been disrupted, such as the .ht domain during the 2010
earthquake in Haiti <xref target="EarthquakeHT"/>.</t>
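<t>As an illustration of the second work-around, an operator of a
recursive resolver could statically answer for the affected name with
a configuration of the following shape (unbound.conf syntax; the
domain name and address are placeholders, not data from any real
incident):</t>

```
# Hypothetical unbound.conf fragment: serve a static, pre-blocking
# answer for a name whose authoritative records have been altered.
server:
    local-zone: "blocked.example." transparent
    local-data: "www.blocked.example. 86400 IN A 192.0.2.10"
```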
<t>Server-based blocking also has a variety of non-technical
implications. The considerations discussed in ISOC's whitepaper on DNS
filtering <xref target="ISOCFiltering"/> also apply to other global
Internet resources.</t>
<t>In summary, server-based blocking can sometimes be used to
immediately block a target service by removing some of the resources
it depends on. However, such blocking actions often have harmful
side effects due to the global nature of Internet resources. The
global mobility of Internet resources, together with the autonomy of
the networks that comprise the Internet, means that the effects of
server-based blocking can be quickly negated. To adapt a quote by
John Gilmore, "The Internet treats blocking as damage and routes
around it".</t>
</section>
<section title="Endpoint-Based Blocking">
<t>Internet users and their devices make thousands of decisions every
day as to whether to engage in particular Internet communications.
Users decide whether to click on links in suspect email messages;
browsers advise users on sites that have suspicious characteristics;
spam filters evaluate the validity of senders and messages. If the
hardware and software making these decisions can be instructed not to
engage in certain communications, then the communications are
effectively blocked because they never happen.</t>
<t>There are several systems in place today that advise user systems
about which communications they should engage in. As discussed above,
several modern browsers consult with "Safe Browsing" services before
loading a web site in order to determine whether the site could
potentially be harmful. Spam filtering is one of the oldest blocking
systems in the Internet; modern blocking systems typically make use of
one or more "reputation" or "blacklist" databases in order to make
decisions about whether a given message or sender should be blocked.
These systems typically have the property that many blocking systems
(browsers, MTAs) share a single reputation service.</t>
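<t>The DNSBL query convention documented in <xref target="RFC5782"/>
illustrates how such a shared reputation service is consulted: the
client reverses the octets of the sender's IPv4 address, appends the
list's zone, and interprets any A record in 127.0.0.0/8 as a listing.
A minimal sketch follows; the zone name "dnsbl.example" is
illustrative, and the actual DNS lookup is elided:</t>

```python
import ipaddress
from typing import Optional

def dnsbl_query_name(ip: str, zone: str) -> str:
    # RFC 5782: reverse the IPv4 octets and append the DNSBL zone,
    # e.g. 192.0.2.99 against dnsbl.example -> 99.2.0.192.dnsbl.example
    octets = str(ipaddress.IPv4Address(ip)).split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(answer: Optional[str]) -> bool:
    # A DNSBL signals a listing by answering with an address in
    # 127.0.0.0/8; NXDOMAIN (here None) means the sender is not listed.
    return (answer is not None and
            ipaddress.IPv4Address(answer) in
            ipaddress.ip_network("127.0.0.0/8"))
```

<t>RFC 5782 reserves 127.0.0.2 as a test entry that every conforming
DNSBL lists, which operators use to verify that lookups work.</t>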
<t>This approach to blocking is consistent with the Internet
architectural principles discussed above: it works well with the
end-to-end principle, layering, mobility, and locality/autonomy.</t>
<t>Much like server-based blocking, endpoint-based blocking is
performed at one end of an Internet communication, and thus avoids the
problems related to end-to-end security mechanisms that
intermediary-based blocking runs into. Endpoint-based
blocking also lacks some of the limitations of server-based blocking: While
server-based blocking can only see and affect the portion of an
application that happens at a given server (e.g., DNS name
resolution), endpoint-based blocking has visibility into the entire
application, across all layers and transactions. This visibility
provides endpoint-based blocking systems with a much richer set of
information on which to make blocking decisions.</t>
<t>In particular, endpoint-based blocking deals well with adversary
mobility. If a blocked service relocates resources or uses different
resources, a server-based blocking approach may not be able to affect
the new resources. An intermediary-based blocking system may not even
be able to tell whether the new resources are being used, if the
blocked service uses secure protocols. By contrast, endpoint-based
blocking systems can detect when a blocked service's resources have
changed (because of their full visibility into transactions) and
adjust blocking as quickly as new blocking data can be sent out
through a reputation system.</t>
<t>Finally, in an endpoint-based blocking system, blocking actions are
performed autonomously, by individual endpoints or their delegates.
The effects of blocking are thus local in scope, minimizing the
effects on other users or other, legitimate services. </t>
<t>The primary challenge to endpoint-based blocking is that it
requires the cooperation of endpoints. Where endpoints cooperate
willingly, the barrier is fairly low, requiring only reconfiguration
or a software update. Where they do not, it can be
challenging to enforce cooperation for large numbers of endpoints. If
cooperation can be achieved, endpoint-based blocking can be much more
effective than other approaches because it is so consistent with
the Internet's architectural principles. </t>
</section>
</section>
<section title="Summary of Trade-offs">
<t>Intermediary-based blocking is a relatively low-cost blocking
solution in some cases, but a poor fit with the Internet architecture,
especially the end-to-end principle. It thus suffers from several
limitations.</t>
<t><list style="symbols">
<t>Examples: Firewalls, web filtering systems.</t>
<t>A single intermediary device can be used to block access by
many users to many services.</t>
<t>Intermediary blocking can be done without the cooperation of either endpoint
to a communication (although having that cooperation makes it more likely to be effective).</t>
<t>Intermediaries often lack sufficient information to make blocking
decisions, due to routing asymmetry or encryption.</t>
<t>Intermediary blocking sometimes involves breaking end-to-end security assurances.</t>
<t>Tunneling through blocking is difficult to prevent without
preventing legitimate secure services.</t>
</list></t>
<t>Server-based blocking can provide rapid effects for resources under
the control of the blocking entity, but can have limited effects due to
the global, autonomous nature of Internet resources and networks.</t>
<t><list style="symbols">
<t>Examples: Domain name seizures, WHOIS account freezing, RPKI
certificate revocation.</t>
<t>Internet services that depend on specific resources can be
blocked by disabling those resources.</t>
<t>Blocked resources can often be easily relocated or reinstantiated in a
location where they will not be blocked.</t>
<t>Resources used by undesirable services are often also used by
legitimate services, resulting in collateral damage.</t>
<t>Autonomy of Internet networks and users allows them to "route around"
blocking.</t>
</list></t>
<t>Endpoint-based blocking matches well with the overall design of the
Internet. </t>
<t><list style="symbols">
<t>Examples: Safe browsing, spam filtering, enterprise HTTPS
proxies.</t>
<t>Endpoints block services by deciding whether or not to engage in
a given communication.</t>
<t>Blocking system has full visibility into all layers involved in a
communication.</t>
<t>Adversary mobility can be quickly observed so that blocking
systems can be updated to account for it.</t>
<t>Requires cooperation of endpoints.</t>
</list></t>
<t>Because it agrees so well with Internet architectural principles,
endpoint-based blocking is the most effective form of Internet service
blocking, and the least harmful to the Internet.</t>
</section>
<section title="IANA Considerations">
<t>This document makes no request of IANA.</t>
</section>
<section title="Security Considerations">
<t>The primary security concern related to Internet service blocking is
the effect that it has on the end-to-end security model of many Internet
security protocols. When blocking is enforced by an intermediary with
respect to a given communication, the blocking system may need to obtain
access to confidentiality-protected data to make blocking decisions.
Mechanisms for obtaining such access typically require the blocking
system to defeat the authentication mechanisms built into security
protocols.</t>
<t>For example, some enterprise firewalls will dynamically create TLS
certificates under a trust anchor recognized by endpoints subject to
blocking. These certificates allow the firewall to authenticate as any
website, so that it can act as a man-in-the-middle on TLS connections
passing through the firewall.</t>
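<t>The dynamic certificate generation step can be sketched as follows.
This is an illustrative fragment using the Python "cryptography"
library, not a description of any particular firewall product; the CA
name and hostname are hypothetical. Given a CA key whose certificate
the endpoints trust, the interception device mints a fresh leaf
certificate for whatever server name the client requested:</t>

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def make_firewall_ca():
    # The interception CA; its certificate must be installed as a
    # trust anchor on every endpoint subject to blocking.
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                         "Example Firewall CA")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                           critical=True)
            .sign(key, hashes.SHA256()))
    return key, cert

def mint_leaf(ca_key, ca_cert, hostname):
    # Minted on the fly for the server name the client asked for,
    # letting the firewall authenticate as that site.
    key = ec.generate_private_key(ec.SECP256R1())
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(
                NameOID.COMMON_NAME, hostname)]))
            .issuer_name(ca_cert.subject)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=7))
            .add_extension(
                x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                critical=False)
            .sign(ca_key, hashes.SHA256()))
    return key, cert
```

<t>Because the leaf is signed by a trust anchor the endpoint already
accepts, the client's certificate validation succeeds and the
interception is invisible to ordinary users.</t>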
<t>Modifications such as these make the firewall itself a
point of weakness. An attacker who gains control of the firewall or
compromises the key pair used by the firewall to sign certificates
gains access to the plaintext of all TLS sessions for all users
behind that firewall, in a way that is undetectable to those users.</t>
<t>When blocking systems are unable to inspect and block secure
protocols, it is tempting to simply block those protocols. For example,
a web blocking system that is unable to hijack HTTPS connections might
simply block any attempted HTTPS connection. However, since Internet
security protocols are commonly used for critical services such as
online commerce and banking, blocking these protocols would block access
to these services as well, or worse, force them to be conducted over
insecure protocols.</t>
<t>Security protocols can, of course, also be used as a mechanism for
blocking services. For example, if a blocking system can insert invalid
credentials for one party in an authentication protocol, then the other
end will typically terminate the connection based on the authentication
failure. In practice, however, it is usually simpler to block secure
protocols outright than to exploit them for service blocking.</t>
</section>
</middle>
<!-- *****BACK MATTER ***** -->
<back>
<references title="Informative References">
&RFC1122;
&RFC2775;
&RFC3724;
&RFC4033;
&RFC4084;
&RFC4301;
&RFC4924;
&RFC5246;
&RFC5782;
&RFC6480;
<reference anchor="RojaDirecta"
target="http://www.techdirt.com/articles/20110201/10252412910/homeland-security-seizes-spanish-domain-name-that-had-already-been-declared-legal.shtml">
<front>
<title>Homeland Security Seizes Spanish Domain Name That Had Already
Been Declared Legal</title>
<author fullname="Mike Masnick" initials="M.M." surname="Masnick">
<organization>TechDirt</organization>
</author>
<date year="2011"/>
</front>
</reference>
<reference anchor="US-ICE"
target="http://www.ice.gov/doclib/news/library/factsheets/pdf/operation-in-our-sites.pdf">
<front>
<title>Operation in Our Sites</title>
<author>
<organization>U.S. Immigration and Customs
Enforcement</organization>
</author>
<date year="2011"/>
</front>
</reference>
<reference anchor="SafeBrowsing"
target="https://developers.google.com/safe-browsing/">
<front>
<title>Safe Browsing API</title>
<author>
<organization>Google</organization>
</author>
<date year="2012"/>
</front>
</reference>
<reference anchor="GhostClickRIPE"
target="http://www.ripe.net/internet-coordination/news/about-ripe-ncc-and-ripe/ripe-ncc-blocks-registration-in-ripe-registry-following-order-from-dutch-police">
<front>
<title>RIPE NCC Blocks Registration in RIPE Registry Following Order
from Dutch Police</title>
<author>
<organization>RIPE NCC</organization>
</author>
<date year="2012"/>
</front>
</reference>
<reference anchor="Telex" target="https://telex.cc/">
<front>
<title>Telex: Anticensorship in the Network Infrastructure</title>
<author fullname="Eric Wustrow" initials="E." surname="Wustrow">
</author>
<author fullname="Scott Wolchok" initials="S." surname="Wolchok">
</author>
<author fullname="Ian Goldberg" initials="I." surname="Goldberg">
</author>
<author fullname="J. Alex Halderman" initials="J.A."
surname="Halderman">
</author>
<date month="August" year="2011"/>
</front>
</reference>
<reference anchor="RenesysPK"
target="http://www.renesys.com/blog/2008/02/pakistan_hijacks_youtube_1.shtml">
<front>
<title>Pakistan hijacks YouTube</title>
<author fullname="Martin A. Brown" initials="M." surname="Brown">
<organization>Renesys</organization>
</author>
<date month="February" year="2008"/>
</front>
</reference>
<reference anchor="EarthquakeHT"
target="http://www.apricot.net/apricot2010/__data/assets/pdf_file/0019/19018/Lightning-Talk_03_Gaurab-Upadhaya-dotht-apricot-lightning.pdf">
<front>
<title>.ht: Recovering DNS from the Quake</title>
<author fullname="Gaurab Raj Upadhaya" initials="G."
surname="Raj Upadhaya">
<organization>PCH</organization>
</author>
<date month="March" year="2010"/>
</front>
</reference>
<reference anchor="ISOCFiltering"
target="http://www.internetsociety.org/what-we-do/issues/dns/finding-solutions-illegal-line-activities">
<front>
<title>DNS: Finding Solutions to Illegal On-line Activities</title>
<author fullname="" initials="" surname="">
<organization>Internet Society</organization>
</author>
<date month="" year="2012"/>
</front>
</reference>
<reference anchor="Tor"
target="https://www.torproject.org/">
<front>
<title>Tor Project: Anonymity Online</title>
<author fullname="" initials="" surname="">
</author>
<date month="" year="2012"/>
</front>
</reference>
</references>
</back>
</rfc>