Congestion Exposure T. Moncaster, Ed.
Internet-Draft L. Burness
Intended status: Informational BT
Expires: April 3, 2010 M. Menth
University of Wuerzburg
J. Araujo
UCL
S. Blake
Extreme Networks
R. Woundy
Comcast
September 30, 2009
The Need for Congestion Exposure in the Internet
draft-moncaster-congestion-exposure-problem-00
Status of This Memo
This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
This Internet-Draft will expire on April 3, 2010.
Copyright Notice
Copyright (c) 2009 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents in effect on the date of
publication of this document (http://trustee.ietf.org/license-info).
Moncaster, et al. Expires April 3, 2010 [Page 1]
Internet-Draft Congestion Exposure Problem September 2009
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document.
Abstract
Over the past decades, TCP's congestion control algorithm has allowed
the Internet to grow enormously whilst saving it from congestion
collapse. However, TCP is applied on a voluntary basis with bandwidth
shared among flows instead of users. This causes problems,
especially at peak times when the network becomes saturated, and this
leads some ISPs to police traffic to alleviate congestion. However,
since congestion on the downstream path of a flow is not visible,
these approaches are blind to the true impact of the traffic being
policed and are not effective enough.
We propose congestion exposure as a possible solution. This means
that a flow reveals an estimate of the congestion it causes on its
remaining downstream path. Congestion exposure gives many benefits
including meaningful policing at network ingresses, congestion-based
accounting between ISPs, fairer bandwidth sharing among users,
increased trust in the congestion-responsiveness of end-systems, and
possibly congestion-dependent load balancing and routing. In short,
congestion exposure leads to a more efficient and fairer Internet.
This document motivates the need for congestion exposure and
illustrates its usefulness in different use cases. We conclude that
actions should be taken to implement a simple form of congestion
exposure in the Internet.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4
2. The Problem . . . . . . . . . . . . . . . . . . . . . . . . . 5
3. Why Now? . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
4. Solutions That Exacerbate the Problem . . . . . . . . . . . . 7
4.1. Passive Measurement . . . . . . . . . . . . . . . . . . . 7
4.1.1. Volume Accounting . . . . . . . . . . . . . . . . . . 7
4.1.2. Rate Measurement . . . . . . . . . . . . . . . . . . . 7
4.2. Active Discrimination . . . . . . . . . . . . . . . . . . 7
4.2.1. Bottleneck Rate Policing . . . . . . . . . . . . . . . 8
4.2.2. DPI and Application Rate Policing . . . . . . . . . . 8
5. Towards a Proper Solution . . . . . . . . . . . . . . . . . . 9
5.1. The Impact of Congestion . . . . . . . . . . . . . . . . . 9
5.2. Requirements for a Solution . . . . . . . . . . . . . . . 10
5.3. Explicit Congestion Notification . . . . . . . . . . . . . 11
5.3.1. So Is ECN the Solution? . . . . . . . . . . . . . . . 12
6. A Strawman Congestion Exposure Protocol . . . . . . . . . . . 12
7. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 13
8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13
9. Security Considerations . . . . . . . . . . . . . . . . . . . 13
10. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 14
11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 14
12. References . . . . . . . . . . . . . . . . . . . . . . . . . . 14
12.1. Normative References . . . . . . . . . . . . . . . . . . . 14
12.2. Informative References . . . . . . . . . . . . . . . . . . 14
1. Introduction
One of the strengths of the Internet is its ability to share capacity
and avoid congestion collapse through the use of distributed
algorithms such as TCP running on its end-hosts. Although the
resulting capacity allocation appears fair, it is not well-suited to
the conflicting needs of the many different stakeholders in the
commercial Internet.
Worse, it only needs a small number of end-hosts "gaming the system"
to cause enough congestion to severely limit the user-experience of
many other end-hosts. Faced with this problem, some ISPs are seeking
to reduce what they regard as "heavy usage" regardless of the
underlying congestion situation, thereby eroding the trust which the
original Internet design assumed.
ISPs have a variety of measures available to them to skew access to
capacity amongst their users, ranging from the complex (per-user fair
queuing, volume-based billing) to the crude (per-application
prioritization or rate capping). A problem with such mechanisms is
that they control the wrong quantity: it is neither the volume nor
rate of traffic that causes a problem to other users; rather, it is
the congestion that traffic causes that is the real problem.
Current work at the IETF ([LEDBAT], [ALTO]) and IRTF
[CC-open-research] is looking at new approaches for controlling bulk
data transfer rates. But ISPs have no means for enforcing their use
and indeed may even be hindering their deployment through their crude
attempts at traffic control which take no account of the congestion
attributable to a given application. The ISPs still have an
incentive to deploy the flawed tools at their disposal in an attempt
to manage capacity and reduce congestion. Using such mechanisms that
monitor the wrong information embeds assumptions about desirable
behaviour into the core of the network. This stifles innovation and
leads to stagnation of the network. What ISPs lack is a way to
measure or even see the congestion caused by a user's traffic on an
end-to-end basis. Such a measure could be used to build trust
between operators and end-users and between different tiers of
operators, thus leading to cooperative behaviour. In short, the
Internet lacks a system for accountability.
Currently, congestion information is visible only at end-hosts, and
is concealed from the rest of the network. We propose that this
information should be made visible at the IP layer. Once congestion
information is exposed in this way, it is then possible to use it to
measure the true impact of any traffic on the network. This means
operators will now be able to measure the congestion attributable to
a given application or user and will thus be able to incentivise the
use of protocols such as [LEDBAT] which aim to reduce the congestion
caused by bulk data transfers.
The rest of this document is organised as follows. Section 2 looks
in detail at the problem of resource sharing in the Internet.
Section 3 considers the question of "Why Now?" - what has changed in
the Internet that now makes this a critical problem to solve.
Section 4 looks briefly at some of the techniques currently used to
control resource sharing and shows how these are inadequate.
Section 5 gives some requirements that we believe need to be met by
any solution and examines how far Explicit Congestion Notification
goes towards meeting them. Section 6 explores a simple strawman
protocol that reveals downstream congestion at the IP layer of each
packet, where it is visible to every node that forwards that packet.
Section 7 describes some use cases where we think a solution can be
usefully deployed.
2. The Problem
The Internet owes its success to its ability to share capacity among
billions of hosts through the use of distributed algorithms such as
TCP running on its end-hosts. This has allowed the network to expand
from a few tens of hosts connected at kbps to millions of hosts
connected at Gbps. TCP has undoubtedly been the hero of this piece,
but its limitations are beginning to show [Fairness]. TCP is very
good at sharing bottlenecks equally between flows, so-called TCP-
fairness, but it fails to take proper account of how many flows an
end-host may have or how long they have persisted, since historically
such parameters only varied over small ranges. The resulting
capacity allocation is fair in one narrow sense, but it is unable to
provide for the conflicting needs of all the stakeholders in today's
commercial Internet.
The problem is that the Internet has changed over recent years. It
has gone from being a place of cooperation between scientific
researchers, to a battleground between rival commercial and
sociological interests. In this environment an end-host that manages
to take advantage of the system can rapidly cause sufficient
congestion to impact the user-experience of many other end-hosts.
TCP will still ensure there is no congestion collapse but that is of
scant comfort to those users seeing their connection starved by
overly aggressive competition for bandwidth.
At the same time, application writers and content providers are
involved in a battle to win the most consumers. Naturally they are
happy to explore the full range of behaviour of TCP in an effort to
seize a competitive advantage. Some of them choose to use other
transports and congestion response algorithms that are better suited
to inelastic traffic. The IETF has even recognised this need and
proposed TCP-Friendly Rate Control [RFC3448] as a possible solution.
ISPs are placed in a quandary - they know that any increases in
capacity will be of most benefit to the most aggressive users. At
the same time, increasing competition is squeezing their profit
margins and driving the Internet towards commoditisation. Faced with
these problems, some ISPs are seeking to reduce what they regard as
"heavy usage" in order to improve the service experienced by the
majority of their customers. Unfortunately they are only able to see
limited information about the traffic they forward. Thus they are
forced to use the only information they do have available, which leads
to myopic control that has scant regard for the actual impact of the
traffic or the underlying network conditions. This sets these ISPs
on a direct collision course with consumer rights groups and even
regulators.
3. Why Now?
LEDBAT is a transport-area WG that will focus on broadly applicable
techniques that allow large amounts of data to be consistently
transmitted without substantially affecting the delays experienced by
other users and applications. In doing so it tries to ensure that a
range of applications can co-exist on the Internet.
The solutions seem to be based around providing a background or low-
congestion transport of some form. The issue is that these efforts
will only solve part of the problem. Whilst users will now have an
option to put their bulk traffic into a background class, enabling
interactive traffic to go faster, this only solves the general
problem if everyone does it. It is not enough to stop operators from
volume capping as well, because the operator cannot verify for
themselves that the traffic really is running in a careful background
mode. Thus, for as long as some users continue to use aggressive
TCP rather than background transport, other users will still see a
lack of bandwidth, and to overcome that, operators will throttle all
traffic.
The information that distinguishes background traffic from TCP is how
the traffic responds to congestion. If a network could see the
congestion as well as the data rate of a flow, it could safely
determine which traffic was aggressive TCP and which was non-
aggressive, because it could see which traffic was responding
strongly to the congestion signals. If the traffic is responding
strongly to such signals and moving itself out of the way of
interactive traffic there is no need to cap it.
We need to show that congestion exposure gives ISPs the information
they need to be able to discriminate in favour of such low-congestion
transports...
4. Solutions That Exacerbate the Problem
Existing approaches intended to address the problems outlined above
can be broadly divided into two groups - those that passively monitor
traffic and can thus measure the apparent impact of a given flow of
packets and those that can actively discriminate against certain
packets, flows, applications or users based on various
characteristics or metrics.
4.1. Passive Measurement
Passive measurement of traffic relies on using the information that
can be measured directly or is revealed in the IP header of the
packet. Architecturally, passive measurement is cleaner since it
better fits with the idea of the hourglass design of the Internet
[RFC3439]. This asserts that "the complexity of the Internet belongs
at the edges, and the IP layer of the Internet should remain as
simple as possible."
4.1.1. Volume Accounting
Volume accounting is a passive technique that is often used to
discriminate between users. The volume of traffic sent by a given
user or network is one of the easiest pieces of information to
monitor in a network. Measuring the size of every packet and adding
the sizes up is a simple operation; to make it even easier, every IP
packet carries its overall size in the header. Consequently this has
long been a favoured measure used by operators to control their
customers. Many broadband contracts include volume limits either
explicitly (e.g. you can transfer 10 GB per month) or implicitly in
the fair use policy (e.g. you are allowed to transfer as much as you
like but exceptionally heavy users will be penalised in some
fashion).
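The volume accounting described above can be sketched as a toy meter
(class names and the 10 GB cap are invented for illustration, not
taken from any operator's practice):

```python
# Toy per-user volume meter: sum the IP total-length field of every
# packet a user sends, then compare the monthly total against an
# explicit volume cap.  All names here are illustrative.
from collections import defaultdict

MONTHLY_CAP_BYTES = 10 * 1024**3      # e.g. an explicit 10 GB cap

class VolumeMeter:
    def __init__(self):
        self.bytes_sent = defaultdict(int)   # user -> bytes this month

    def on_packet(self, user, ip_total_length):
        # The packet's overall size is read straight from its IP header.
        self.bytes_sent[user] += ip_total_length

    def over_cap(self, user):
        return self.bytes_sent[user] > MONTHLY_CAP_BYTES

meter = VolumeMeter()
for _ in range(3):
    meter.on_packet("alice", 1500)    # three full-size packets
# meter.bytes_sent["alice"] is 4500, well under the cap
```

Note that the meter sees only bytes, not the congestion those bytes
cause; that blindness is exactly the problem this document describes.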
4.1.2. Rate Measurement
Traffic rates are often used as the basis of accounting at borders
between ISPs. For instance a contract might specify a charge based
on the 95th centile of the peak rate of traffic crossing the border
every month. Such bulk rate measurements are relatively easy to
make. With a little extra effort it is possible to measure the rate
of a given flow by using the 3-tuple of source and destination IP
address and protocol number.
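Such 95th-centile accounting can be illustrated with a short sketch
(sample values invented; real border billing typically uses 5-minute
rate samples collected over a month):

```python
# Hypothetical 95th-centile billing calculation: sort the month's
# rate samples and bill for the highest one after discarding the
# top 5% of samples.
def percentile_95(rate_samples_mbps):
    ordered = sorted(rate_samples_mbps)
    # Index of the 95th-centile sample; the top 5% are ignored.
    idx = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[idx]

# 95 quiet samples at 100 Mbps plus 5 bursts at 900 Mbps: the bursts
# fall within the discarded top 5%, so the billed rate is 100 Mbps.
samples = [100] * 95 + [900] * 5
billed = percentile_95(samples)
```

Like volume accounting, this measures only how fast traffic was sent,
not whether the link was congested at the time.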
4.2. Active Discrimination
[RFC5290] seeks to reinforce [RFC3439] by stating that
"...differential treatment of traffic can clearly be useful..." but
adding that such techniques are only useful "...as *adjuncts* to
simple best-effort traffic, not as *replacements* of simple best-
effort traffic." We fully agree with the authors that the network
should be built on the concept of simple best effort traffic.
However, as this section shows, a number of approaches have emerged
over recent years that explicitly differentiate between different
traffic types, applications and even users.
4.2.1. Bottleneck Rate Policing
Bottleneck rate policers such as [XCHOKe] and [pBox] have been
proposed as approaches for rate policing traffic without the benefit
of whole path information. But they must be deployed at bottlenecks
in order to work. Unfortunately, a large proportion of traffic
traverses at least two bottlenecks (in two access networks),
particularly with the current traffic mix where peer-to-peer file-
sharing is prevalent. If these bottleneck policers were widely
deployed, the Internet would find itself with one universal rate
adaptation policy embedded throughout the network. Given TCP's
congestion control algorithm is approaching its scalability limits
and new algorithms are being developed for high-speed congestion
control, embedding TCP policing into the Internet would make
evolution to new algorithms extremely painful. If a source wanted to
use a different algorithm, it would have to first discover then
negotiate with all the policers on its path, particularly those in
the far access network. The IETF has already traveled that path with
the Intserv architecture and found it constrains scalability
[RFC2208].
4.2.2. DPI and Application Rate Policing
Some operators use deep packet inspection (DPI) and traffic analysis
to identify certain applications believed to have an excessive impact
on the network. These so-called heavy applications are generally
things like peer-to-peer or streaming video. Having identified a
flow as belonging to a heavy application, the operator is able to use
standard Diffserv [RFC2475] approaches such as policing and traffic
shaping to limit the throughput given to that flow. This has fuelled
the on-going battle between application developers and DPI vendors.
When operators first started to limit the throughput of P2P, it soon
became common knowledge that turning on encryption could boost your
throughput. The DPI vendors then improved their equipment so that it
could identify P2P traffic by the pattern of packets it sends. This
risks becoming an endless cycle - an arms race that neither side can
win. Furthermore such techniques may put the operator in direct
conflict with the customers, regulators and content providers.
5. Towards a Proper Solution
To understand why current attempts at solving the problem are
inadequate we have to better understand what is really happening in
the network. The key thing is to realise that, in an increasingly
commercial world, networks are subject to the laws of economics and
so economics can provide some of the answers.
5.1. The Impact of Congestion
As the Internet has grown, the impact of congestion has tended to
reduce because access rates trail those in the core. However over recent
years this has started to change. Increasingly large numbers of
people now access the network via broadband connections and the speed
they can get is steadily increasing. Alongside this have gone
significant changes in traffic patterns. We have been through a boom
in large-scale data transfer by peer to peer networks and now are
seeing an even larger boom in streaming media with applications such
as the BBC iPlayer becoming increasingly popular. The main effect of
this has been that users now routinely see their network connections
running slow in the evenings [http://community.plus.net/blog/2008/07/
17/more-record-breaking-streaming-and-the-latest-iplayer-news/].
The response of many operators has been to use techniques such as
volume caps or application rate limiting to try to squeeze out the
traffic associated with these applications and thus hopefully improve
the quality of experience for the majority of their customers.
However the problem is they are unable to see the true impact each
customer is actually causing on other customers. By design,
congestion is concealed from all but the end-hosts in the network.
Most network economists accept that congestion represents the shadow
price of network usage [Kelly]. This is actually quite an easy
concept to understand - if a network link is empty then it doesn't
matter how fast you send traffic through it, you will never have an
impact on anyone else. But if that same link is at full capacity
then any extra traffic entering the link will have a significant
impact on everyone sharing the link. In economic terms, this impact
is known as a negative externality and the classic solution is to
convert it to a shadow price that is used to determine the marginal
cost of using the network.
This means congestion is the only fair metric to use to differentiate
between the behaviour of different users. The IETF is already aware
of this, hence such efforts as LEDBAT [LEDBAT] which seeks to
minimise the congestion caused by bulk data transfers in order to
free up the network for more urgent data. However this has no
benefit to the user if the operator is unable to see that they are
behaving in an altruistic manner. The operator is still only able to
use the information that is visible to control the traffic. We
believe the obvious solution to this is to reveal congestion in the
network itself.
5.2. Requirements for a Solution
Before we look at requirements it is important to define two terms:
Upstream congestion is defined as the congestion that has already
been experienced by a packet as it travels along its path. In
other words it is the congestion between the current point and the
source of the packet.
Downstream congestion is defined as the congestion that a packet
still has to experience on the remainder of its path. In other
words it is the congestion between the current point and the
destination of the packet.
This section lists the main requirements for any solution to this
problem. Not every requirement is equally important and they are not
listed in any particular order. However we believe that a solution
that meets most or all these requirements is likely to be better than
one that doesn't.
o Allow both upstream and downstream congestion to be visible at the
IP layer -- visibility at the IP layer allows congestion to be
monitored in the heart of the network without deploying
complicated and intrusive equipment such as DPI boxes. This gives
several advantages:
1. It enables policing of flows based on the congestion they are
actually going to cause in the network.
2. It allows the flow of congestion across ISP borders to be
monitored.
3. It supports a diversity of intra-domain and inter-domain
congestion management practices.
o Support the widest possible range of transport protocols for the
widest range of data types (elastic, inelastic, real-time,
background, etc) -- don't force a "universal rate adaptable
policy" such as TCP-friendliness [RFC3448].
o Be responsive to real-time congestion in the network.
o Avoid making assumptions about the behavior of specific
applications (e.g. be application agnostic).
o Allow incremental deployment of the solution and ideally permit
permanent partial deployment to increase chances of successful
deployment.
o Support integrity of congestion notifications; that is, make it
difficult to generate false positives and false negatives in
congestion notifications.
o Be robust in the face of DoS attacks aimed at either congestion
exposure itself, or at the network elements implementing
congestion exposure.
Many of these requirements are by no means unique to the problem of
congestion exposure. Incremental deployment for instance is a
critical requirement for any new protocol that affects something as
fundamental as IP. Being robust under attack is also a pre-requisite
for any protocol to succeed in the real Internet and this is covered
in more detail in Section 9.
5.3. Explicit Congestion Notification
Explicit Congestion Notification [RFC3168] allows routers to
explicitly tell end-hosts that they are approaching the point of
congestion. ECN builds on Active Queue Management mechanisms such as
random early discard (RED) [RFC2309] by allowing the router to mark a
packet with a Congestion Experienced (CE) codepoint, rather than dropping
it. The probability of a packet being marked increases with the
length of the queue and thus the rate of CE marks is a guide to the
level of congestion at that queue. This CE codepoint travels forward
through the network to the receiver which then informs the sender
that it has seen congestion. The sender is then required to respond
as if it had experienced a packet loss. Because the CE codepoint is
visible in the IP layer, this approach reveals the upstream
congestion level for a packet.
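To illustrate the mechanism described above, a toy model of CE
marking follows; the fixed 2% marking probability stands in for a
real AQM's queue-length-dependent probability:

```python
# Toy model of ECN marking (assumed fixed 2% marking probability; a
# real AQM varies this with queue length).  The fraction of CE-marked
# packets observed at any node approximates the congestion level of
# the queues upstream of that node.
import random

random.seed(1)

def mark_ce(n_packets, mark_prob):
    # Each ECN-capable packet is marked CE with the router's current
    # marking probability instead of being dropped.
    return [random.random() < mark_prob for _ in range(n_packets)]

stream = mark_ce(10000, mark_prob=0.02)
ce_fraction = sum(stream) / len(stream)
# ce_fraction approximates the 2% upstream congestion level
```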
The choice of two ECT code-points in the ECN field [RFC3168]
permitted future flexibility, optionally allowing the sender to
encode the experimental ECN nonce [RFC3540] in the packet stream.
This mechanism has since been included in the specifications of DCCP
[RFC4340]. The ECN nonce is an elegant scheme that allows the sender
to detect if someone in the feedback loop - the receiver especially -
tries to claim no congestion was experienced when in fact congestion
led to packet drops or ECN marks. For each packet it sends, the
sender chooses between the two ECT codepoints in a pseudo-random
sequence. Then, whenever the network marks a packet with CE, if the
receiver wants to deny congestion happened, she has to guess which
ECT codepoint was overwritten. She has only a 50:50 chance of being
correct each time she denies a congestion mark or a drop, which
ultimately will give her away.
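The nonce arithmetic above can be checked with a short sketch (packet
counts and field names are invented for illustration):

```python
# Sketch of the ECN nonce check: the sender picks one of the two ECT
# codepoints pseudo-randomly per packet, and a CE mark overwrites it.
# A receiver that denies congestion must therefore guess each hidden
# codepoint, and is caught with probability 1 - 0.5**k after
# concealing k marks.
import random

random.seed(42)

N = 20
sender_nonce = [random.choice(("ECT0", "ECT1")) for _ in range(N)]
marked = set(range(0, N, 4))          # every 4th packet is CE-marked

# A cheating receiver substitutes a guess for each concealed mark; a
# single wrong guess exposes the cheat to the sender.
caught = any(random.choice(("ECT0", "ECT1")) != sender_nonce[i]
             for i in marked)

# Concealing k marks survives detection with probability 0.5**k:
p_undetected = 0.5 ** len(marked)     # 5 marks -> 0.03125
```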
5.3.1. So Is ECN the Solution?
In a word, no. ECN does allow downstream nodes to measure the
upstream congestion for any flow, but this is not enough to allow
fairer control of traffic. That can only come with knowledge of the
downstream congestion level for which you need additional information
that is still concealed from the network by design. Some approaches
use ECN information to try to select which flows are suitable to be
dropped at a
bottleneck, but these are just variations on bottleneck policing
which was discussed in Section 4.2.1.
6. A Strawman Congestion Exposure Protocol
In this section we are going to explore a simple strawman protocol
that would solve the congestion exposure problem. This protocol
neatly illustrates how a solution might work. A practical
implementation of this protocol has been produced and both
simulations and real-life testing show that it works. The protocol
is based on a concept known as re-feedback [Re-fb] and assumes that
routers can measure their congestion level precisely.
Re-feedback, standing for re-inserted feedback, is a system designed
to allow end-hosts to reveal to the network information they have
received via conventional feedback (for instance RTT or congestion
level). In IP, information always flows in one direction round the
feedback loop and so nodes upstream are unable to see as much
information as nodes further downstream. By using re-feedback to re-
insert the congestion feedback signaled by the receiver into the
forward path, we can correct this information asymmetry and close the
feedback loop.
In our strawman protocol we imagine that packets have two
"congestion" fields in their IP header. One field records the
upstream congestion level and routers indicate their current
congestion level by changing this field in every packet. So as the
packet traverses the network it builds up a record of the overall
congestion along its path in this field. This data is then sent back
to the sender who uses it to determine its transmission rate. Using
re-feedback the sender now inserts this congestion value in the
second whole path congestion field on every packet it sends out.
Thus at any node downstream of the sender you can see the upstream
congestion for the packet (the congestion thus far), the whole path
congestion (with a time lag of one RTT) and can calculate the downstream
congestion by subtracting one from the other.
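A toy model of this strawman protocol, with invented field names,
shows the subtraction a downstream node would perform:

```python
# Toy model of the strawman protocol (field names invented for
# illustration).  Routers accumulate local congestion into
# `upstream`; the sender re-inserts into `whole_path` the figure it
# learned from receiver feedback, so any node can subtract the two
# fields to estimate downstream congestion.
class Packet:
    def __init__(self, whole_path):
        self.upstream = 0.0            # congestion experienced so far
        self.whole_path = whole_path   # re-inserted feedback (1 RTT old)

def forward(pkt, local_congestion):
    """A router adds its own congestion level; the return value is
    the congestion this node expects on the rest of the path."""
    pkt.upstream += local_congestion
    return pkt.whole_path - pkt.upstream

# The sender learned 5% whole-path congestion from earlier feedback.
pkt = Packet(whole_path=0.05)
downstream = forward(pkt, local_congestion=0.02)  # this hop adds 2%
# downstream is now ~0.03: 3% congestion expected further along
```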
The downstream congestion information can now be used for a number of
things. It allows an ISP to accurately identify which traffic is
having the greatest impact on the network and either police directly
on that basis or use it to determine which users should be policed.
It can form the basis of inter-domain contracts between operators.
It could even be used as the basis for inter-domain routing, thus
encouraging operators to invest appropriately in improving their
infrastructure.
Summing up, exposing congestion both upstream and downstream can be
achieved by coupling congestion notification from routers with the
re-insertion of this information by the sender. This establishes an
information symmetry between users and network providers which opens
the door for the evolution of new congestion responses which are not
bound by a "universal rate adaptable policy".
7. Use Cases
From Rich Woundy: "I would add a section about use cases. The
primary use case would seem to be an "incentive environment that
ensures optimal sharing of capacity", although that could use a
better title. Other use cases may include "DDoS mitigation", "end-
to-end QoS", "traffic engineering", and "inter-provider service
monitoring". (You can see I am stealing liberally from the
motivation draft here. We'll have to see whether the other use cases
are "core" to this group, or "freebies" that come along with re-ECN
as a particular protocol.)"
My take on this is we need to concentrate on one or two major use
cases. The most obvious one is using this to control user-behaviour
and encourage the use of "congestion friendly" protocols such as
LEDBAT. {Comments from Louise Burness} Simply say that operators MUST
turn off any kind of rate limitation for LEDBAT traffic, and what that
might mean for the amount of bandwidth such users see compared to a
throttled customer? You could then extend that to say how it leads
to better QoS differentiation under the assumption that there is a
broad traffic mix any way? Not sure how much detail you want to go
into here though?
8. IANA Considerations
This document makes no request to IANA.
9. Security Considerations
This section needs to briefly cover the obvious security aspects of
any congestion exposure scheme: Source address spoofing, DoS,
integrity of signals, honesty. It might also be the place to mention
the possible reluctance to reveal too much information to the whole
network (some ISPs view congestion level as a commercially sensitive
concept).
10. Conclusions
11. Acknowledgements
A number of people have provided text and comments for this memo.
The document is being produced in support of a BoF on Congestion
Exposure as discussed extensively on the <re-ecn@ietf.org> mailing
list.
12. References
12.1. Normative References
12.2. Informative References
[ALTO] Seedorf, J. and E. Burger, "Application-Layer
Traffic Optimization (ALTO) Problem Statement",
draft-ietf-alto-problem-statement-04 (work in
progress), September 2009.
[CC-open-research] Welzl, M., Scharf, M., Briscoe, B., and D.
Papadimitriou, "Open Research Issues in Internet
Congestion Control", draft-irtf-iccrg-welzl-
congestion-control-open-research-05 (work in
progress), September 2009.
[Fairness] Briscoe, B., Moncaster, T., and A. Burness,
"Problem Statement: Transport Protocols Don't
Have To Do Fairness",
draft-briscoe-tsvwg-relax-fairness-01 (work in
progress), July 2008.
[Kelly] Kelly, F., Maulloo, A., and D. Tan, "Rate control
for communication networks: shadow prices,
proportional fairness and stability", Journal of
the Operational Research Society 49(3) 237--252,
1998,
<http://www.statslab.cam.ac.uk/~frank/rate.html>.
[LEDBAT] Shalunov, S., "Low Extra Delay Background
Transport (LEDBAT)",
draft-shalunov-ledbat-congestion-00 (work in
progress), March 2009.
[RFC2208] Mankin, A., Baker, F., Braden, B., Bradner, S.,
O'Dell, M., Romanow, A., Weinrib, A., and L.
Zhang, "Resource ReSerVation Protocol (RSVP)
Version 1 Applicability Statement Some Guidelines
on Deployment", RFC 2208, September 1997.
[RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B.,
Deering, S., Estrin, D., Floyd, S., Jacobson, V.,
Minshall, G., Partridge, C., Peterson, L.,
Ramakrishnan, K., Shenker, S., Wroclawski, J.,
and L. Zhang, "Recommendations on Queue
Management and Congestion Avoidance in the
Internet", RFC 2309, April 1998.
[RFC2475] Blake, S., Black, D., Carlson, M., Davies, E.,
Wang, Z., and W. Weiss, "An Architecture for
Differentiated Services", RFC 2475,
December 1998.
[RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The
Addition of Explicit Congestion Notification
(ECN) to IP", RFC 3168, September 2001.
[RFC3439] Bush, R. and D. Meyer, "Some Internet
Architectural Guidelines and Philosophy",
RFC 3439, December 2002.
[RFC3448] Handley, M., Floyd, S., Padhye, J., and J.
Widmer, "TCP Friendly Rate Control (TFRC):
Protocol Specification", RFC 3448, January 2003.
[RFC3540] Spring, N., Wetherall, D., and D. Ely, "Robust
Explicit Congestion Notification (ECN) Signaling
with Nonces", RFC 3540, June 2003.
[RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram
Congestion Control Protocol (DCCP)", RFC 4340,
March 2006.
[RFC5290] Floyd, S. and M. Allman, "Comments on the
Usefulness of Simple Best-Effort Traffic",
RFC 5290, July 2008.
[Re-fb] Briscoe, B., Jacquet, A., Di Cairano-Gilfedder,
C., Salvatori, A., Soppera, A., and M. Koyabe,
"Policing Congestion Response in an Internetwork
Using Re-Feedback", ACM SIGCOMM CCR 35(4)277--
288, August 2005, <http://www.acm.org/sigs/
sigcomm/sigcomm2005/techprog.html#session8>.
[XCHOKe] Chhabra, P., Chuig, S., Goel, A., John, A.,
Kumar, A., Saran, H., and R. Shorey, "XCHOKe:
Malicious Source Control for Congestion Avoidance
at Internet Gateways", Proceedings of IEEE
International Conference on Network Protocols
(ICNP-02) , November 2002,
<http://www.cc.gatech.edu/~akumar/xchoke.pdf>.
[pBox] Floyd, S. and K. Fall, "Promoting the Use of End-
to-End Congestion Control in the Internet", IEEE/
ACM Transactions on Networking 7(4) 458--472,
August 1999,
<http://www.aciri.org/floyd/end2end-paper.html>.
Authors' Addresses
Toby Moncaster (editor)
BT
B54/70, Adastral Park
Martlesham Heath
Ipswich IP5 3RE
UK
Phone: +44 7918 901170
EMail: toby.moncaster@bt.com
Louise Burness
BT
B54/77, Adastral Park
Martlesham Heath
Ipswich IP5 3RE
UK
EMail: louise.burness@bt.com
Michael Menth
University of Wuerzburg
room B206, Institute of Computer Science
Am Hubland
Wuerzburg D-97074
Germany
Phone: +49 931 888 6644
EMail: menth@informatik.uni-wuerzburg.de
Joao Taveira Araujo
UCL
Steven Blake
Extreme Networks
Pamlico Building One, Suite 100
3306/08 E. NC Hwy 54
RTP, NC 27709
US
EMail: slblake@petri-meat.com
Richard Woundy
Comcast
Comcast Cable Communications
27 Industrial Avenue
Chelmsford, MA 01824
US
EMail: richard_woundy@cable.comcast.com
URI: http://www.comcast.com
Moncaster, et al. Expires April 3, 2010 [Page 17]