Internet Engineering Task Force Phil Karn
INTERNET DRAFT Aaron Falk
Joe Touch
Marie-Jose Montpetit
Jamshid Mahdavi
Gabriel Montenegro
Dan Grossman
Gorry Fairhurst
File: draft-ietf-pilc-link-design-03.txt July, 2000
Expires: January, 2001
Advice for Internet Subnetwork Designers
Status of this Memo
This document is an Internet-Draft and is in full conformance with
all provisions of Section 10 of RFC2026.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
Abstract
This document provides advice to the designers of digital
communication equipment, link layer protocols and packet switched
subnetworks (collectively referred to as subnetworks) who wish to
support the Internet protocols but who may be unfamiliar with the
architecture of the Internet and the implications of their design
choices on the performance and efficiency of the Internet.
This document represents an evolving consensus of the members of the
IETF Performance Implications of Link Characteristics (PILC) working
group.
Introduction and Overview
The Internet Protocol [RFC791] is the core protocol of the world-wide
Internet that defines a simple "connectionless" packet-switched
network. The success of the Internet is largely attributed to the
simplicity of IP, the "end-to-end principle" on which the Internet is
based, and the resulting ease of carrying IP on a wide variety of
subnetworks not necessarily designed with IP in mind.
But while many subnetworks carry IP, they do not necessarily do so
with maximum efficiency, minimum complexity or minimum cost. Nor do
they implement certain features to efficiently support newer Internet
features of increasing importance, such as multicasting or quality of
service.
With the explosive growth of the Internet, IP is an increasingly
large fraction of the traffic carried by the world's
telecommunications networks. It therefore makes sense to optimize
both existing and new subnetwork technologies for IP as much as
possible.
Optimizing a subnetwork for IP involves three complementary
considerations:
1. Providing functionality sufficient to carry IP.
2. Eliminating unnecessary functions that increase cost or
complexity.
3. Choosing subnetwork parameters that maximize the performance of
the Internet protocols.
Because IP is so simple, consideration 2 is more of an issue than
consideration 1. I.e., subnetwork designers make many more errors of
commission than errors of omission. But certain enhanced Internet
features, such as multicasting and quality-of-service, rely on
support from the underlying subnetworks beyond that necessary to
carry "traditional" unicast, best-effort IP.
A major consideration in the efficient design of any layered
communication network is the appropriate layer(s) in which to
implement a given function. This issue was first addressed in the
seminal paper "End-to-End Arguments in System Design" [SRC81]. This
paper argued that many functions can be implemented properly *only*
on an end-to-end basis, i.e., at the higher protocol layers, outside
the subnetwork. These functions include ensuring the reliable
delivery of data and the use of cryptography to provide
confidentiality and message integrity.
These functions cannot be provided solely by the concatenation of
hop-by-hop services, so duplicating these functions at the lower
protocol layers (i.e., within the subnetwork) can be needlessly
redundant or even harmful to cost and performance.
However, partial duplication of functionality in a lower layer can
*sometimes* be justified by performance, security or availability
considerations. Examples include link layer retransmission to improve
the performance of an unusually lossy channel, e.g., mobile radio;
link level encryption intended to thwart traffic analysis; and
redundant transmission links to improve availability. Duplication of
protocol function should be done only with an understanding of system
level implications, including possible interactions with higher-layer
mechanisms.
The architecture of the Internet was heavily influenced by the end-
to-end principle, and in our view it was crucial to the Internet's
success.
The remainder of this document discusses the various subnetwork
design issues that the authors consider relevant to efficient IP
support.
Maximum Transmission Units (MTUs) and IP Fragmentation
IP packets (datagrams) vary in size from 20 bytes (the size of the IP
header alone) to a maximum of 65535 bytes. Subnetworks need not
support maximum-sized (64KB) IP packets, as IP provides a scheme that
breaks packets that are too large for a given subnetwork into
fragments that travel as independent packets and are reassembled at
the destination. The maximum packet size supported by a subnetwork is
known as its Maximum Transmission Unit (MTU).
Subnetworks may, but are not required to, indicate the length of each
packet they carry. One example is Ethernet with the widely used DIX
(not IEEE 802.3) header, which lacks a length field to indicate the
true data length when the packet is padded to the 60 byte minimum.
This is not a problem for uncompressed IP because it carries its own
length field.
If optional header compression [RFC1144] [RFC2507] [RFC2508] is used,
however, it is required that the link framing indicate frame length
as it is needed for the reconstruction of the original header.
In IP version 4 (current IP), fragmentation can occur at either the
sending host or in an intermediate router, and fragments can be
further fragmented at subsequent routers if necessary.
In IP version 6, fragmentation can occur only at the sending host; it
cannot occur in a router.
Both IPv4 and IPv6 provide a "Path MTU Discovery" procedure [RFC1191]
[RFC1435] [RFC1981] that allows the sending host to avoid
fragmentation by discovering the minimum MTU along a given path and
reducing its packet sizes accordingly. This procedure is optional in
IPv4 but mandatory in IPv6 where there is no router fragmentation.
The Path MTU Discovery procedure (and the deletion of router
fragmentation in IPv6) reflects a consensus of the Internet technical
community that IP fragmentation is best avoided. This requires that
subnetworks support MTUs that are "reasonably" large. The smallest
MTU that IPv4 can use is 28 bytes, but this is clearly unreasonable;
because each IP header is 20 bytes, only 8 bytes per packet would be
available to carry transport headers and application data.
If a subnetwork cannot directly support a "reasonable" MTU with
native framing mechanisms, it should internally fragment. That is, it
should transparently break IP packets into internal data elements and
reassemble them at the other end of the subnetwork.
This leaves the question of what is a "reasonable" MTU. Ethernet (10
and 100 Mb/s) has an MTU of 1500 bytes, and because of its ubiquity
few Internet paths have MTUs larger than this value. This severely
limits the utility of larger MTUs provided by other subnetworks. But
larger MTUs are increasingly desirable on high speed subnetworks to
reduce the per-packet processing overhead in host computers, and
implementers are encouraged to provide them even though they may not
be usable when Ethernet is also in the path.
The increasing popularity of "tunneling" schemes, such as IP Security
[RFC2406], has increased the difficulty of avoiding IP fragmentation.
These schemes treat IP as another subnetwork for IP. By adding their
own encapsulation headers, they can trigger fragmentation even when
the same physical subnetworks (e.g., Ethernet) are used on both sides
of the IP router.
Choosing the MTU in Slow Networks [Stevens94, RFC1144]
In slow networks, the largest possible packet may take a considerable
time to send. Interactive response time should not exceed the well-
known human factors limit of 100 to 200 ms. This includes all sources
of delay: electromagnetic propagation delay, queueing delay, and the
store-and-forward time, i.e., the time to transmit a packet at link
speed.
At low link speeds, store-and-forward delays can dominate total end-
to-end delay, and these are in turn directly influenced by the
maximum transmission unit (MTU). Even when an interactive packet is
given a higher queuing priority, it may have to wait for a large bulk
transfer packet to finish transmission. This worst-case wait can be
set by an appropriate choice of MTU.
For example, if the MTU is set to 1500 bytes, then an MTU-sized packet
will take about 8 milliseconds to send on a T1 (1.536 Mb/s) link.
But if the link speed is 19.2kb/s, then the transmission time becomes
625 ms -- well above our 100-200ms limit. A 256-byte MTU would lower
this delay to a little over 100 ms. However, care should be taken not
to lower the MTU excessively, as this will increase header overhead
and trigger frequent IP fragmentation (if Path MTU discovery is not
in use).
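These numbers can be checked directly. A sketch (the function name is
ours) computing store-and-forward delay for a given MTU and link speed:

```python
def tx_time_ms(mtu_bytes, link_bps):
    """Serialization (store-and-forward) time for one MTU-sized packet."""
    return mtu_bytes * 8 / link_bps * 1000

# 1500-byte MTU on a T1 (1.536 Mb/s): about 8 ms
print(round(tx_time_ms(1500, 1_536_000), 1))   # 7.8
# 1500-byte MTU at 19.2 kb/s: 625 ms, far above the 100-200 ms target
print(round(tx_time_ms(1500, 19_200)))         # 625
# 256-byte MTU at 19.2 kb/s: a little over 100 ms
print(round(tx_time_ms(256, 19_200)))          # 107
```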
One way to limit delay for interactive traffic without imposing a
small MTU is to preempt (abort) the transmission of a lower priority
packet when a higher priority packet arrives in the queue. However,
the link resources used to send the aborted packet are lost, and
overall throughput will decrease.
Another way is to implement a link-level multiplexing scheme that
allows several packets to be in progress simultaneously, with
transmission priority given to segments of higher priority IP
packets. ATM (asynchronous transfer mode) is an example of this
technique. However, ATM is generally used on high speed links where
the store-and-forward delays are already minimal, and it introduces
significant (~9%) additional overhead due to the addition of 5-byte
frame headers to each 48-byte data frame.
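The ~9% figure quoted for ATM follows directly from the cell format
(5 header bytes on every 53-byte cell):

```python
# ATM carries 48 payload bytes per cell, behind a 5-byte cell header.
header, payload = 5, 48
overhead = header / (header + payload)
print(f"{overhead:.1%}")   # 9.4% of link capacity goes to cell headers
```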
To summarize, there is a fundamental tradeoff between efficiency and
latency in the design of a subnetwork, and the designer should keep
this in mind.
Framing on Connection-Oriented Subnetworks
IP needs a way to mark the beginning and end of each variable-length,
asynchronous IP packet. Some examples of links and subnetworks that
do not provide this as an intrinsic feature include:
1. leased lines carrying a synchronous bit stream;
2. ISDN B-channels carrying a synchronous octet stream;
3. dialup telephone modems carrying an asynchronous octet stream;
and
4. Asynchronous Transfer Mode (ATM) networks carrying an asynchronous
stream of fixed-sized "cells"
The Internet community has defined packet framing methods for all
these subnetworks. The Point-To-Point Protocol (PPP) [RFC1661] is
applicable to bit synchronous, octet synchronous and octet
asynchronous links (i.e., examples 1-3 above). ATM has its own
framing methods described in [RFC2684] [RFC2364].
At high speeds, a subnetwork should provide a framed interface
capable of carrying asynchronous, variable-length IP datagrams. The
maximum packet size supported by this interface is discussed above in
the MTU/Fragmentation section. The subnetwork may implement this
facility in any convenient manner.
In particular, IP packet boundaries may, but need not, coincide with
any framing or synchronization mechanisms internal to the subnetwork.
When the subnetwork implements variable sized data units, the most
straightforward approach is to place exactly one IP packet into each
subnetwork data unit (SDU), and to rely on the subnetwork's existing
ability to delimit SDUs to also delimit IP packets. A good example
is Ethernet. But some subnetworks have SDUs of one or more fixed
sizes, as dictated by switching, forward error correction and/or
interleaving considerations. Examples of such subnetworks include
ATM, with a single frame size of 48 bytes plus a 5-byte header, and
IS-95 digital cellular, with two "rate sets" of four fixed frame
sizes each that may be selected on 20 millisecond boundaries.
Because IP packets are variable sized, they may not necessarily fit
into an integer multiple of fixed-sized SDUs. An "adaptation layer"
is needed to convert IP packets into SDUs while marking the boundary
between each IP packet in some manner.
There are several approaches to the problem. The first is to encode
each IP packet into one or more SDUs, with no SDU containing pieces
of more than one IP packet, and padding out the last SDU of the
packet as needed. Bits in a control header added to each SDU
indicate where it belongs in the IP packet. If the subnetwork
provides in-order, at-most-once delivery, the header can be as simple
as a pair of bits to indicate whether the SDU is the first and/or the
last in the IP packet. Or only the last SDU of the packet could be
marked, as this would implicitly mark the next SDU as the first in a
new IP packet. The AAL5 (ATM Adaptation Layer 5) scheme used with ATM
is an example of this approach, though it adds other features,
including a payload length field and a payload CRC.
In AAL5, the 1-bit per segment flag, carried in the ATM header,
indicates the end of a PDU. The PDU control information (trailer) is
located at the end of the segment. Placing the trailer in a fixed
position may simplify hardware reassembly.
Another framing technique is to insert per segment overhead to
indicate the presence of a segment option. When present, the option
carries a pointer to the end of the PDU. This differs from AAL5 in
that it permits another PDU to follow within the same segment.
MPEG-2 [EN301] [ISO13181] supports this style of fragmentation: each
transport stream packet either carries part of only one PDU (with
padding), or allows a second PDU to start within it (no padding).
A third approach is to insert a special flag sequence into the data
stream between each IP packet, and to pack the resulting data stream
into SDUs without regard to SDU boundaries. The flag sequence can
also pad unused space at the end of an SDU. If the special flag
appears in the user data, it is escaped to an alternate sequence
(usually larger than a flag) to avoid being misinterpreted as a flag.
The HDLC-based framing schemes used in PPP are all examples of this
approach.
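The flag/escape technique can be sketched as follows, using the octet
values from PPP's HDLC-like framing (0x7E flag, 0x7D escape, XOR with
0x20). This is a simplification: real PPP framing also negotiates an
async control character map and appends an FCS, both omitted here.

```python
FLAG, ESC, XOR = 0x7E, 0x7D, 0x20

def frame(packet: bytes) -> bytes:
    """Delimit one packet with flags, escaping flag/escape octets."""
    out = bytearray([FLAG])
    for b in packet:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ XOR])   # escape expands one octet to two
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def unframe(data: bytes) -> bytes:
    """Recover the packet from a single flag-delimited frame."""
    body = data.strip(bytes([FLAG]))
    out, esc = bytearray(), False
    for b in body:
        if esc:
            out.append(b ^ XOR)
            esc = False
        elif b == ESC:
            esc = True
        else:
            out.append(b)
    return bytes(out)

pkt = bytes([0x45, 0x7E, 0x7D, 0x01])
assert unframe(frame(pkt)) == pkt
```

Note how each escaped octet expands to two on the wire, which is the
data-dependent overhead mentioned later in this section.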
All three adaptation schemes introduce overhead; how much depends on
the distribution of IP packet sizes, the size(s) of the SDUs, and in
the HDLC-like approaches, the content of the IP packet (since flags
occurring in the packet must be escaped, which expands them). The
designer must also weigh implementation complexity in the choice and
design of an adaptation layer.
Connection-Oriented Subnetworks
IP has no notion of a "connection"; it is a purely connectionless
protocol. When a connection is required by an application, it is
usually provided by TCP, the Transmission Control Protocol, running
atop IP on an end-to-end basis.
Connection-oriented subnetworks can be (and are) widely used to carry
IP, but often with considerable complexity. Subnetworks with a few
nodes can simply open a permanent connection between each pair of
nodes, as is frequently done with ATM. But the number of connections
is equal to the square of the number of nodes, so this is clearly
impractical for large subnetworks. A "shim" layer between IP and the
subnetwork is therefore required to manage connections in the latter.
These shim layers typically open subnetwork connections as needed
when an IP packet is queued for transmission and close them after an
idle timeout. There is no relation between subnetwork connections and
any connections that may exist at higher layers (e.g., TCP).
Because Internet traffic is typically bursty and transaction-
oriented, it is often difficult to pick an optimal idle timeout. If
the timeout is too short, subnetwork connections are opened and
closed rapidly, possibly over-stressing its call management system
(especially if it was designed for voice traffic holding times). If the
timeout is too long, subnetwork connections are idle much of the
time, wasting any resources dedicated to them by the subnetwork.
The ideal subnetwork for IP is connectionless. Connection-oriented
networks that dedicate minimal resources to each connection (e.g.,
ATM) are a distant second, and connection-oriented networks that
dedicate a fixed amount of bandwidth to each connection (e.g., the
PSTN, including ISDN) are the least efficient. If such subnetworks
must be used to carry IP, their call-processing systems should be
capable of rapid call set-up and tear-down.
Bandwidth on Demand (BoD) Subnets (Aaron Falk)
Wireless networks, including both satellite and terrestrial, may use
Bandwidth on Demand (BoD). Bandwidth on demand, which is implemented
at the link layer by Demand Assignment Multiple Access (DAMA) in TDMA
systems, is currently one of the proposed mechanisms to efficiently
share limited spectrum resources amongst a large number of users.
The design parameters for BoD are similar to those in connection
oriented subnetworks; however, the implementations may be very
different. In BoD, the user typically requests access to the shared
channel for some duration. Access may be allocated in terms of a
period of time at a specific rate, a certain number of packets, or
until the user chooses to release the channel. Access may be
coordinated through a central management entity or through a
distributed algorithm amongst the users. The resource shared may be a
terrestrial wireless hop, a satellite uplink, or an end-to-end
satellite channel.
Long delay BoD subnets pose problems similar to the Connection
Oriented networks in terms of anticipating traffic arrivals. While
connection oriented subnets hold idle channels open expecting new
data to arrive, BoD subnets request channel access based on buffer
occupancy (or expected buffer occupancy) on the sending port. Poor
performance will likely result if the sender does not anticipate
additional traffic arriving at that port during the time it takes to
grant a transmission request. It is recommended that the algorithm
have the capability to extend a hold on the channel for data that has
arrived after the original request was generated (this may be done by
piggybacking new requests on user data).
There are a wide variety of BoD protocols available and there has
been relatively little comprehensive research on the interactions
between the BoD mechanisms and Internet protocol performance. A
tradeoff exists balancing the time a user can be allowed to hold a
channel to drain port buffers with the additional imposed latency on
other users who are forced to wait to get access to the channel. It
is desirable to design mechanisms that constrain the BoD imposed
latency variation. This will be helpful in preventing spurious
timeouts from TCP.
Reliability and Error Control
In the Internet architecture, the ultimate responsibility for error
recovery is at the end points. The Internet may occasionally drop,
corrupt, duplicate or reorder packets, and the transport protocol
(e.g., TCP) or application (e.g., if UDP is used) must recover from
these errors on an end-to-end basis. Error recovery in the
subnetwork is therefore justified only to the extent that it can
enhance overall performance. It is important to recognize that a
subnetwork can go too far in attempting to provide error recovery
services in the Internet environment. Subnet reliability should be
"lightweight", i.e., it only has to be "good enough", *not* perfect.
In this section we discuss how to analyze characteristics of a
subnetwork to determine what is "good enough". The discussion below
focuses on TCP, which is the most widely used transport protocol in
the Internet. It is widely believed (and is in fact a stated goal
within the IETF community) that non-TCP transport protocols should
attempt to be "TCP-friendly" and have many of the same performance
characteristics. Thus, the discussion below should be applicable
even to portions of the Internet where TCP may not be the predominant
protocol.
CRCs, Checksums and Error Detection
The TCP, UDP and IPv4 protocols all use a simple 16-bit 1's
complement checksum algorithm to detect corrupted packets. The IP
checksum protects only the IP header, while the TCP and UDP checksums
protect both the TCP/UDP header and any user data.
IP version 6 (IPv6) does away with the IP header checksum and relies
on end-to-end checking to detect IP header errors.
These checksums are not very strong from a coding theory standpoint.
But they are easy to compute in software, and various proposals to
replace them with stronger checksums have failed. Experience has
shown that nearly all subnetworks that carry the Internet protocols
have a very low undetected error rate, so the existing Internet
checksums are rarely called upon to catch an error. Most of the
errors caught by the Internet checksum are the result of hardware or
software failure rather than of a subnetwork transmission error.
This lowers the motivation to overcome the performance and especially
the backward compatibility problems of a new, stronger Internet
checksum.
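For reference, the 16-bit one's-complement checksum used by TCP, UDP
and IPv4 can be sketched in a few lines, following the algorithm
described in RFC 1071:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, then complemented."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold in end-around carry
    return ~total & 0xFFFF

# When the checksum field itself carries this value, re-summing the
# whole datagram (data plus checksum) yields zero, i.e., the check passes.
data = b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7"
chk = internet_checksum(data)
print(internet_checksum(data + chk.to_bytes(2, "big")))   # 0
```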
The typical IP subnetwork implements an internal error detection code
that discards packets failing this check instead of delivering them
to the IP layer. A 16-bit cyclic redundancy check (CRC) is usually
the minimum, and this is known to be considerably stronger than the
16-bit standard Internet checksum. The Point-to-Point Protocol
[RFC1662] requires support of a 16-bit CRC, with a 32-bit CRC as an
option. (Note that PPP is often used in conjunction with a dialup
modem, which provides its own error control). Other subnetworks,
including 802.3/Ethernet, AAL5/ATM, FDDI, Token Ring and PPP over
SONET/SDH all use a 32-bit CRC that is considerably stronger. In
addition, many subnetworks (notably dialup modems, mobile radio and
satellite channels) also incorporate forward error correction, often
in hardware.
Any new subnetwork designed to carry IP should therefore provide
error detection at least as strong as the 32-bit CRC specified in
[ISO3309].
While this will achieve a very low undetected packet error rate, it
will not (and need not) achieve a very low packet loss rate as the
Internet protocols are better suited to dealing with lost packets
than with corrupted packets.
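The 32-bit CRC of [ISO3309] uses the same generator polynomial as
Ethernet and PPP's 32-bit FCS option, and is exposed by Python's zlib
module; a small sketch of receiver-side checking:

```python
import zlib

frame = b"an IP packet as carried in a subnetwork frame"
fcs = zlib.crc32(frame)        # 32-bit CRC, ISO 3309 / IEEE 802.3 polynomial

# A receiver recomputes the CRC and discards the frame on mismatch,
# so corrupted packets are normally dropped below the IP layer.
corrupted = bytearray(frame)
corrupted[0] ^= 0x01           # a single flipped bit is always detected
print(zlib.crc32(bytes(corrupted)) == fcs)   # False
print(zlib.crc32(frame) == fcs)              # True
```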
How TCP Works
One of TCP's functions is end-host based congestion control for the
Internet. This is a critical part of the overall stability of the
Internet, so it is important that link layer designers understand
TCP's congestion control algorithms.
TCP assumes that, at the most abstract level, the network consists of
links and queues. Queues provide output-buffering on links that are
momentarily oversubscribed. They smooth instantaneous traffic bursts
to fit the link bandwidth.
When demand exceeds link capacity long enough to fill the queue,
packets must be dropped. The traditional action of dropping the most
recent packet ("tail dropping") is no longer recommended (see
[RED93]), but it is still widely practiced.
TCP uses sequence numbering and acknowledgements (ACKs) on an end-to-
end basis to provide reliable, sequenced, once-only delivery. TCP
ACKs are cumulative, i.e., each one implicitly ACKs every segment
received so far. If a packet is lost, the cumulative ACK will cease
to advance.
Since the most common cause of packet loss is congestion, TCP treats
packet loss as a network congestion indicator. This happens
automatically, and the subnetwork need not know anything about IP or
TCP. It simply drops packets whenever it must, though RED shows that
some packet-dropping strategies are more fair than others.
TCP recovers from packet losses in two different ways. The most
important is by a retransmission timeout. If an ACK fails to arrive
after a certain period of time, TCP retransmits the oldest unacked
packet. Taking this as a hint that the network is congested, TCP
waits for the retransmission to be ACKed before it continues, and it
gradually increases the number of packets in flight as long as a
timeout does not occur again.
A retransmission timeout can impose a significant performance
penalty, as the sender will be idle during the timeout interval and
restarts with a congestion window of 1 following the timeout. To
allow faster recovery from the occasional lost packet in a bulk
transfer, an alternate scheme known as "fast recovery" was introduced
[ref?].
Fast recovery relies on the fact that when a single packet is lost in
a bulk transfer, the receiver continues to return ACKs to subsequent
data packets, but they will not actually ACK any new data. These are
known as "duplicate acknowledgments" or "dupacks". The sending TCP
can use dupacks as a hint that a packet has been lost, and it can
retransmit it without waiting for a timeout. Dupacks effectively
constitute a negative acknowledgement (NAK) for the packet whose
sequence number is equal to the acknowledgement field in the incoming
TCP packet. TCP currently waits until a certain number of dupacks
(currently 3) are seen prior to assuming a loss has occurred; this
helps avoid an unnecessary retransmission in the face of out-of-
sequence delivery.
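The dupack-counting rule can be sketched as below (a simplification:
real fast retransmit/fast recovery also adjusts cwnd, which is
omitted here):

```python
DUPACK_THRESHOLD = 3

def should_fast_retransmit(acks):
    """Return True once 3 duplicate ACKs of the same value are seen."""
    dupacks, last = 0, None
    for ack in acks:
        if ack == last:
            dupacks += 1
            if dupacks >= DUPACK_THRESHOLD:
                return True
        else:
            dupacks, last = 0, ack
    return False

# Segment 1000 was lost: later segments each elicit an ACK for 1000.
print(should_fast_retransmit([1000, 1000, 1000, 1000]))  # True
# Mild reordering produces fewer than 3 dupacks: no retransmission.
print(should_fast_retransmit([1000, 1000, 2000]))        # False
```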
A new technique called "Explicit Congestion Notification" (ECN)
allows routers to directly signal congestion to hosts without
dropping packets. This is done by setting a bit in the IP header.
Since this is currently an optional behavior (and, longer term, there
will always be the possibility of congestion in portions of the
network which don't support ECN), the lack of an ECN bit MUST NEVER
be interpreted as a lack of congestion. Thus, for the foreseeable
future, TCP MUST interpret a lost packet as a signal of congestion.
The TCP "congestion avoidance" [RFC2581] algorithm is the end-system
congestion control algorithm used by TCP. This algorithm maintains a
congestion window (cwnd), which controls the amount of data which TCP
may have in flight at any given point in time. Reducing cwnd reduces
the overall bandwidth obtained by the connection; similarly, raising
cwnd increases the performance, up to the limit of the available
bandwidth.
TCP probes for available network bandwidth by setting cwnd at one
packet and then increasing it by one packet for each ACK returned
from the receiver. This is TCP's "slow start" mechanism. When a
packet loss is detected (or congestion is signalled by other
mechanisms), cwnd is set back to one and the slow start process is
repeated until cwnd reaches one half of its previous setting before
the loss. Cwnd continues to increase past this point, but at a much
slower rate than before. If no further losses occur, cwnd will
ultimately reach the window size advertised by the receiver.
This is referred to as an "Additive Increase, Multiplicative
Decrease" (AIMD) algorithm. The steep decrease in response to
congestion provides for network stability; the AIMD algorithm also
provides for fairness between long running TCP connections sharing
the same path.
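A toy trace makes the slow start/AIMD behavior described above
concrete (units are packets, one step per round trip, and the
ssthresh handling is idealized):

```python
def cwnd_trace(rtts, loss_at, rwnd=64):
    """Toy cwnd trace: slow start, loss, restart, congestion avoidance."""
    cwnd, ssthresh, trace = 1, rwnd, []
    for t in range(rtts):
        trace.append(cwnd)
        if t in loss_at:                       # loss: multiplicative decrease
            ssthresh, cwnd = max(cwnd // 2, 2), 1
        elif cwnd < ssthresh:                  # slow start: exponential growth
            cwnd = min(cwnd * 2, ssthresh)
        else:                                  # congestion avoidance: additive
            cwnd = min(cwnd + 1, rwnd)
    return trace

print(cwnd_trace(12, loss_at={5}))
# [1, 2, 4, 8, 16, 32, 1, 2, 4, 8, 16, 17]
```

The trace shows the exponential climb, the reset to one on loss, and
the much slower additive growth past half the previous window.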
TCP Performance Characteristics
Caveat
In this section, we present the current "state-of-the-art"
understanding of TCP performance. This analysis attempts to
characterize the performance of TCP connections over links of varying
characteristics.
Link designers may wish to use the techniques in this section to
predict what performance TCP/IP may achieve over a new link layer
design. Such analysis is encouraged. Because this is relatively new
analysis, and the theory is based on single stream TCP connections
under "ideal" conditions, it should be recognized that the results of
such analysis may be different than actual performance in the
Internet. That being said, we have done the best we can to provide
information which will help designers get an accurate picture of the
capabilities and limitations of TCP under various conditions.
The Formulae
The performance of TCP's AIMD Congestion Avoidance algorithm has been
extensively analyzed. The current best formula for the performance
of the specific algorithms used by Reno TCP is given by Padhye,
et.al. [PFTK98]. This formula is:
MSS
BW = --------------------------------------------------------
RTT*sqrt(1.33*p) + RTO*p*[1+32*p^2]*min[1,3*sqrt(.75*p)]
In this formula, the variables are as follows:
MSS is the segment size being used by the connection
RTT is the end-to-end round trip time of the TCP connection
RTO is the packet timeout (based on RTT)
p is the packet loss rate for the path
(e.g., .01 if there is 1% packet loss)
This is currently considered to be the best approximate formula for
Reno TCP performance. A further simplification to this formula is
generally made by assuming that RTO is approximately 5*RTT.
TCP is constantly being improved. A simpler formula, which gives an
upper bound on the performance of any AIMD algorithm which is likely
to be implemented in TCP in the future, was derived by Ott, et
al. [MSMO97][OKM96]:
MSS 1
BW = 0.93 --- -------
RTT sqrt(p)
Assumptions of these formulae
Both of these formulae assume that the TCP Receiver Window is not
limiting the performance of the connection in any way. Because
receiver window is entirely determined by end-hosts, we assume that
hosts will maximize the announced receiver window in order to
maximize their network performance.
Both of these formulae allow BW to become infinite if there is no
loss. This cannot happen in practice, because an Internet path will
drop packets at bottleneck queues if the load is too high. Thus, a
completely lossless TCP/IP network can never occur (unless the
network is being underutilized).
The RTT used is the average RTT including queuing delays.
The formulae are calculations for a single TCP connection. If a path
carries many TCP connections, each will follow the formulae above
independently.
The formulae assume long running TCP connections. For connections
which are extremely short (<10 packets) and don't lose any packets,
performance is driven by the TCP slow start algorithm. For
connections of medium length, where on average only a few segments
are lost, single connection performance will actually be slightly
better than given by the formulae above.
The difference between the simple and complex formulae above is that
the complex formula includes the effects of TCP retransmission
timeouts. For very low levels of packet loss (significantly less
than 1%), timeouts are unlikely to occur, and the formulae lead to
very similar results. At higher packet losses (1% and above), the
complex formula gives a more accurate estimate of performance (which
will always be significantly lower than the result from the simple
formula).
Note that these formulae break down as p approaches 100%.
Analysis of Link Layer Effects on TCP Performance
Link layer designers who are interested in understanding the
performance of TCP over these links can use these formulae to
estimate it. Consider the following example:
A designer invents a new wireless link layer which, on average, loses
1% of IP packets. The link layer supports packets of up to 1040
bytes, and has a one-way delay of 20 msec.
If this link layer were used in the Internet, on a path which
otherwise had a round trip time of 80 msec, you could compute an upper
bound on the performance as follows:
For MSS, use 1000 bytes (remove the 40 bytes for TCP/IP headers,
which do not contribute to performance).
For RTT, use 120 msec (80 msec for the Internet part, plus 20 msec
each way for the new wireless link).
For p, use .01. For the constant C in the simple formula, assume 1
(instead of 0.93), for simplicity.
The simple formula gives:
BW = (1000 * 8 bits) / (.120 sec * sqrt(.01)) = 666 kbit/sec
The more complex formula gives:
BW = 402.9 kbit/sec
If this were a 2 Mb/s wireless LAN, the designers might be somewhat
disappointed.
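The example above can be reproduced numerically. A sketch, taking RTO
as 5*RTT per the simplification noted earlier (the complex-formula
result is sensitive to the RTO estimate) and C = 1 in the simple
formula, as in the example:

```python
from math import sqrt

def bw_simple(mss_bits, rtt, p, c=1.0):
    """Upper-bound AIMD formula [MSMO97], in bits/sec."""
    return c * mss_bits / (rtt * sqrt(p))

def bw_reno(mss_bits, rtt, p, rto=None):
    """Padhye et al. [PFTK98] Reno formula, in bits/sec."""
    rto = 5 * rtt if rto is None else rto
    denom = (rtt * sqrt(1.33 * p)
             + rto * p * (1 + 32 * p ** 2) * min(1.0, 3 * sqrt(0.75 * p)))
    return mss_bits / denom

mss_bits, rtt, p = 1000 * 8, 0.120, 0.01
print(int(bw_simple(mss_bits, rtt, p) / 1000))                   # 666 kbit/sec
print(bw_reno(mss_bits, rtt, p) < bw_simple(mss_bits, rtt, p))   # True
```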
Some observations on performance:
1. We have assumed that the packet losses on the link layer are
interpreted as congestion by TCP. This is a "fact of life" which
must be accepted.
2. Note that the equations for TCP performance are all expressed in
terms of packet loss. Many link-layer designers think in terms of
bit-error rate. *If* there were a uniform random distribution of
errors, then the probability of a packet being corrupted would be:
p = 1 - ([1 - BER]^[MSS * 8])
(Here we assume MSS is represented in bytes). If the inequality
BER * MSS * 8 << 1
holds, p can be approximated by:
p = BER * MSS * 8
These equations can be used to apply BER to the performance equations
above.
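Under the uniform-error assumption, the conversion from BER to packet
loss probability can be sketched as follows (Python; function names
are illustrative):

```python
def packet_loss_exact(ber, mss_bytes):
    """p = 1 - (1 - BER)^(MSS*8), assuming independent, uniformly
    distributed bit errors (MSS in bytes)."""
    return 1.0 - (1.0 - ber) ** (mss_bytes * 8)

def packet_loss_approx(ber, mss_bytes):
    """Approximation p = BER * MSS * 8, valid when BER*MSS*8 << 1."""
    return ber * mss_bytes * 8

# At BER = 1e-6 and MSS = 1000 bytes, both give p of about 0.008,
# confirming that the approximation holds in this regime.
```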
Note that links with Forward Error Correction (FEC) generally have
very non-uniform bit error distributions. The distribution is a
strong function of the types and combinations of FEC algorithms used.
In such cases these equations cannot be used to apply BER to the
performance equations above. If the error distribution under the FEC
scheme is known, one could apply the same
type of analysis as above, using the correct distribution function
for the BER. It is more likely in these FEC cases, however, that
empirical methods will need to be used to determine the actual packet
loss rate.
3. Note that the packet size plays an important role. Larger packet
sizes will allow for improved performance at the same *packet loss*
rate. Assuming constant, uniform bit-errors (instead of packet
errors), and assuming that the BER is small enough for the
approximation [p=BER*MSS*8] to apply, a simple derivation will show
that larger packet sizes still result in increased TCP performance.
For this reason (and others) it is advisable to support larger packet
sizes where possible.
To derive this, simply substitute p = BER*MSS*8 into the simple
formula for performance. The result is BW = O(sqrt(MSS)): throughput
grows with the square root of the packet size.
If the approximation p = BER*MSS*8 breaks down, and in particular if
the BER is high enough that BER*MSS approaches (or exceeds) 1, the
packet loss rate p will tend to 100%, resulting in zero throughput.
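This scaling can be checked numerically. A minimal sketch, assuming
the approximation p = BER*MSS*8 and the simple formula (names
illustrative):

```python
def bw_at_uniform_ber(mss_bytes, rtt_sec, ber):
    """Simple-formula throughput (bit/s) when packet loss comes
    from uniform bit errors: p = BER * MSS * 8."""
    p = ber * mss_bytes * 8
    return (mss_bytes * 8) / (rtt_sec * p ** 0.5)

# Doubling the MSS improves throughput by sqrt(2), not by 2:
ratio = bw_at_uniform_ber(2000, 0.120, 1e-6) / bw_at_uniform_ber(1000, 0.120, 1e-6)
```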
4. We have chosen a specific RTT which might occur on a wide-area
Internet path within the USA. In the Internet, it is important to
recognize that RTT varies considerably.
For example, in a wired LAN environment, RTTs are typically less than
10 msec. International connections (between hosts in different
countries) may have RTTs of 200 msec or more. Modems and other low-
capacity links can add considerable delay to the overall RTTs
experienced by the end hosts due to their long packet transmission
times.
Links running over geostationary repeater satellites have one-way
times of around 250ms (125ms up to the satellite, 125ms down) so the
RTT of an end-to-end TCP connection that includes such a link can be
expected to be greater than 250ms.
Heavily congested links may have queues which back up, increasing
RTTs. Finally, VPNs and other forms of encryption and tunneling can
add significant end-to-end delay to network connections.
Increased delay decreases the overall performance of TCP at a given
loss rate. A good rule of thumb is that nothing can be done about the
laws of physics: the propagation delay cannot be changed. Many link
layer designers, however, face a tradeoff in which additional delay
can be spent to reduce the probability of packet loss (through FEC,
ARQ, or other methods). Increasing the delay somewhat in order to
decrease packet loss is probably a worthwhile investment; delay
increases up to a factor of two, or the addition of 10-20 msec on a
very low delay link, won't have much effect on a typical Internet
path.
Recovery from Subnetwork Outages
Some types of subnetworks, particularly mobile radio, are subject to
frequent temporary outages. For example, an active cellular data user
may drive or walk into an area (such as a tunnel) that is out of
range of any base station. No packets will be successfully delivered
until the user returns to an area with coverage.
The Internet protocols currently provide no standard way for a
subnetwork to explicitly notify an upper layer protocol (e.g., TCP)
that it is experiencing an outage, as distinguished from severe
congestion. Under these circumstances TCP will, after each
unsuccessful retransmission, wait even longer before trying again;
this is its "exponential backoff" algorithm. And since there is also
currently no way for a subnetwork to explicitly notify TCP when it is
again operational, TCP will not discover this until its next
retransmission attempt. If TCP has backed off, this may take some
time. This can lead to extremely poor TCP performance over such
subnetworks.
It is therefore highly desirable that a subnetwork subject to outages
not silently discard packets during an outage. Ideally, it should
define an interface to the next higher layer (i.e., IP) that allows
it to refuse packets during an outage, and to automatically ask IP
for new packets when it is again able to deliver them. If it cannot
do this, then the subnetwork should hold onto at least some of the
packets it accepts during an outage and attempt to deliver them when
the subnetwork comes back up.
Note that it is *not* necessary to avoid any and all packet drops
during an outage. The purpose of holding onto a packet during an
outage, either in the subnetwork or at the IP layer, is so that its
eventual delivery can implicitly notify TCP that the subnetwork is
again operational. This is to enhance performance, not to ensure
reliability -- a task that as discussed earlier can only be done
properly on an end-to-end basis.
Only a single packet per TCP connection need be held in this way to
cause TCP to recover from the additional losses once the flow
resumes.
Because it would be a layering violation for IP or a subnetwork to
look at the TCP headers of the packets it carries (which would in any
event be impossible if IPSEC encryption is in use), it would be
reasonable for the IP or subnetwork layers to choose, as a design
parameter, some small number of packets that it will retain during an
outage.
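The "hold a few packets" behavior described above can be sketched as
follows (Python; the class, its interface, and the default of three
held packets are all illustrative assumptions, not part of any
standard):

```python
from collections import deque

class OutageTolerantLink:
    """Retain a small, fixed number of packets offered during an
    outage; deliver them when the link recovers, so their arrival
    implicitly tells TCP the subnetwork is operational again."""
    def __init__(self, held_packets=3):
        self.up = True
        self.held = deque(maxlen=held_packets)  # oldest silently dropped

    def send(self, packet, transmit):
        if self.up:
            transmit(packet)
        else:
            self.held.append(packet)  # keep only the most recent few

    def recover(self, transmit):
        self.up = True
        while self.held:
            transmit(self.held.popleft())  # restarts the TCP flow
```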
Quality of Service, Fairness vs Performance, Congestion signalling
The IP header includes a 1-byte TOS (type of service) field. This
field is broken into two parts: a 3-bit precedence field that
indicates the relative importance of this packet compared to others,
and a 4-bit "qualitative tradeoff" field to indicate if low delay,
high throughput, high reliability or low cost is qualitatively more
important for this particular packet. One TOS bit is reserved.
Although the TOS field has not been heavily used in the past, it is
becoming increasingly important to the provision of Quality of
Service (QoS) features necessary for services such as Voice over IP
(VoIP).
Many subnetwork designers are faced with inherent tradeoffs between
delay, throughput, reliability and cost. When this is the case, the
subnetwork should provide an interface to IP that allows the IP TOS
bits to control these tradeoffs on a per-packet basis. For example,
when a subnetwork implements a hop-by-hop retransmission scheme to
improve reliability, this invariably comes at the cost of increased
delay variance. An IP packet presented to this subnetwork with the
"low delay" bit set in its TOS field should disable hop-by-hop
retransmission within the subnetwork for that particular packet.
Another example is link forward error correction, which can increase
reliability at the expense of throughput without significantly
affecting delay.
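The TOS byte layout described above (3-bit precedence, four
qualitative tradeoff bits, one reserved bit) can be decoded as in the
following sketch. The bit positions follow RFC 1349; the dictionary
keys are descriptive, not normative:

```python
def parse_tos(tos_byte):
    """Decode the 1-byte IPv4 TOS field: bits 0-2 precedence,
    bits 3-6 the qualitative tradeoff bits, bit 7 reserved."""
    return {
        "precedence":       (tos_byte >> 5) & 0x07,
        "low_delay":        bool(tos_byte & 0x10),
        "high_throughput":  bool(tos_byte & 0x08),
        "high_reliability": bool(tos_byte & 0x04),
        "low_cost":         bool(tos_byte & 0x02),
    }

# A packet with precedence 5 and the low-delay bit set (0xB0) would
# ask this subnetwork to disable hop-by-hop retransmission.
flags = parse_tos(0xB0)
```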
[fairness & performance]
Delay Characteristics
[self clocking TCP, (re)transmission shaping]
Bandwidth Asymmetries
Some subnetworks may provide asymmetric bandwidth and the Internet
protocol suite will generally still work fine. However, there is a
case when such a scenario reduces TCP performance. Since TCP data
segments are ``clocked'' out by returning acknowledgments, TCP senders
are limited by the rate at which ACKs can be returned [BPK98].
Therefore, when the ratio of the bandwidth of the subnetwork carrying
the data to the bandwidth of the subnetwork carrying the
acknowledgments is too large, the slow return of the ACKs directly
impacts performance. Since ACKs are generally smaller than data
segments, TCP can tolerate some asymmetry, but as a general rule
designers of subnetworks should avoid large differences in the
incoming and outgoing bandwidth.
One way to cope with asymmetric subnetworks is to increase the size
of the data segments as much as possible. This allows more data to
be sent per ACK, and therefore mitigates the slow flow of ACKs.
Using the delayed acknowledgment mechanism [Bra89], which reduces the
number of ACKs transmitted by the receiver by roughly half, can also
improve performance by reducing the congestion on the ACK channel.
These mechanisms should be employed in asymmetric networks.
Several researchers have introduced strategies for coping with
bandwidth asymmetry. These mechanisms generally attempt to reduce
the number of ACKs being transmitted over the low bandwidth channel
by limiting the ACK frequency or filtering out ACKs at an
intermediate router [BPK98]. While these solutions mitigate the
performance problems caused by asymmetric subnetworks they do have
some cost and therefore, as suggested above, bandwidth asymmetry
should be minimized whenever possible when designing subnetworks.
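A rough feel for when the ACK channel becomes the bottleneck can be
had from a back-of-the-envelope calculation. The sketch below assumes
40-byte ACKs and, with delayed acknowledgments, one ACK per two data
segments (the function and its defaults are illustrative):

```python
def ack_limited_rate(ack_bw_bps, mss_bytes, segs_per_ack=2, ack_bytes=40):
    """Maximum data rate (bit/s) the ACK channel can clock out:
    each ACK of ack_bytes acknowledges segs_per_ack segments."""
    acks_per_sec = ack_bw_bps / (ack_bytes * 8.0)
    return acks_per_sec * segs_per_ack * mss_bytes * 8

# A 9600 bit/s return channel with 1460-byte segments and delayed
# ACKs can clock out about 700 kbit/s of forward data.
rate = ack_limited_rate(9600, 1460)
```

Note how both mitigations in the text show up here: larger segments
(mss_bytes) and fewer ACKs per segment (segs_per_ack) both raise the
sustainable forward rate.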
Buffering, flow & congestion control (Dan Grossman)
Many subnets include multiple links with varying traffic demands and
possibly different transmission speeds. At each link there must be a
queuing system, including buffering, scheduling and a capability to
discard excess subnet packets. These queues may also be part of a
subnet flow control or congestion control scheme.
For the purpose of this discussion, we talk about packets without
regard to whether they refer to a complete IP datagram or a
subnetwork packet. At each queue, a packet experiences a delay that
depends on competing traffic and the scheduling discipline, and is
subjected to a local discarding policy.
In addition, some subnets may have flow control or congestion control
mechanisms in addition to packet dropping and reliance on TCP
behavior. Such mechanisms can operate on components in the subnet
layer, such as schedulers, shapers or discarders, and can affect the
operation of IP forwarders at the edges of the subnet. However, with
the exception of RFC2481 explicit congestion notification (which
will be discussed below), IP has no way to pass explicit congestion
or flow control signals to TCP, and TCP would not react to such
signals if they were available.
TCP traffic, and especially aggregated TCP traffic, is bursty. As a
result, instantaneous queue depths can vary dramatically, even in
nominally stable networks. For optimal performance, packets should
be dropped in a controlled fashion, not just when buffer space is
unavailable. How much buffer space should be supplied is still a
matter of debate, but as a rule of thumb, each node should have
enough buffering to hold one bandwidth*delay product's worth of data
for each TCP connection sharing the link.
This is often difficult to estimate, since it depends on parameters
beyond the subnetwork's control or knowledge, and Internet nodes
generally do not implement admission control policies. In general,
it is wise to err in favor of too much buffering rather than too
little. It may also be useful for subnets to incorporate mechanisms
for measuring propagation delay, to assist in buffer sizing
calculations.
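The rule of thumb above translates directly into a sizing
calculation. A sketch (Python; it assumes the bandwidth*delay product
is taken against the link's own bandwidth and a representative RTT):

```python
def buffer_bytes(link_bw_bps, rtt_sec, tcp_connections):
    """One bandwidth*delay product of buffering per TCP connection
    sharing the link, per the rule of thumb above."""
    bdp_bytes = link_bw_bps * rtt_sec / 8.0
    return bdp_bytes * tcp_connections

# A 2 Mb/s link with a 120 msec RTT and 10 connections: 300 kB.
size = buffer_bytes(2_000_000, 0.120, 10)
```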
There is a rough consensus in the research community that active
queue management is important to improving fairness, link utilization
and throughput [RFC2309]. Although there are questions and concerns
about the efficacy of active queue management (e.g., see [MBDL99]),
it is widely considered an improvement over tail-drop discard
policies.
One well-known example of an active queue management algorithm is
Random Early Detection (RED) [RED93]. RED maintains an exponentially
weighted moving average of the queue depth. When this average queue
depth is between a minimum threshold min_th and a maximum threshold
max_th, packets are dropped with a probability proportional to the
amount by which the average queue depth exceeds min_th. When
this average queue depth is equal to max_th, the drop probability is
equal to a configurable parameter max_p. When this average queue
depth is greater than max_th, packets are always dropped. Numerous
variants on RED appear in the literature, and there are other active
queue management algorithms which claim advantages over RED in
various dimensions.
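The drop-probability profile just described can be sketched as a
function of the averaged queue depth (the EWMA computation itself and
the many RED variants are omitted):

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """RED drop probability versus the EWMA queue depth: zero below
    min_th, linear up to max_p at max_th, certain drop above max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q <= max_th:
        return max_p * (avg_q - min_th) / (max_th - min_th)
    return 1.0
```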
Active queue management algorithms form a control regime where
dropped packets are treated as a feedback signal. Randomization of
dropping tends to break up the observed tendency of TCP windows
belonging to different TCP connections to become synchronized by
correlated drops, and also imposes a degree of fairness on those
connections which properly implement TCP congestion avoidance.
Another property of active queue management algorithms which is
particularly important to subnet designers is that they attempt to
keep average queue depths short, while accommodating large short term
bursts.
Since TCP neither knows nor cares whether congestive packet loss
occurs at the IP layer or in a subnet, it may be advisable for
subnets that perform queueing and discarding to consider
implementing some form of active queue management. This is
especially true if large aggregates of TCP connections are likely to
share the same queue. However, active queue management may be less
effective in the case of many queues carrying smaller aggregates of
TCP connections, as for example, in an ATM switch that implements
per-VC queueing.
Note, incidentally, that the performance of active queue management
algorithms is highly sensitive to settings of configurable
parameters, and also to factors such as RTT [MBB00][FB00].
Some subnets, most notably ATM, perform segmentation and reassembly
at the edges of the subnet, carrying each IP packet as a sequence of
smaller subnet packets. There are advantages and disadvantages to
doing this.
However, if this is done, care should be taken in designing discard
policies. IP packets with missing fragments must be destroyed by
the subnet, as they are of no use to TCP. If the subnet discards
random and uncorrelated fragments of IP packets, then the balance of
these packets constitute an unproductive load on the subnet and can
markedly degrade end-to-end performance. [RF95] Therefore, subnets
should attempt to discard entire IP packets. If a portion of an IP
packet has been forwarded and discarding of subnet packets which are
fragments of the IP packet becomes unavoidable, then either all
remaining fragments or all but the fragment marking the end of the
packet should be discarded. For ATM subnets, this specifically
means using Early Packet Discard and Partial Packet Discard [ATMFTM].
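The discard policy described above can be modeled abstractly. The
sketch below implements the Partial-Packet-Discard idea over a list
of fragments; it is illustrative only (real ATM PPD/EPD operate on
AAL5 cell streams):

```python
def partial_packet_discard(fragments, drop_index):
    """Once the fragment at drop_index must be dropped, discard all
    remaining fragments of the IP packet except the final one, which
    marks the end of the packet for the reassembler."""
    last = len(fragments) - 1
    return [f for i, f in enumerate(fragments)
            if i < drop_index or i == last]
```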
Some subnets might include flow control mechanisms that effectively
require that the rate of traffic flows be shaped as they enter the
subnet. One example of such a subnet mechanism is in the ATM
Available Bit rate (ABR) service category [ATMFTM]. Such flow
control mechanisms have the effect of making the subnet nearly
lossless by pushing congestion into the IP routers at the edges of the
subnet. In such a case, adequate buffering and discard policies are
needed in these routers to deal with a subnet which appears to have
dynamically varying bandwidth. Whether there is benefit in this
kind of flow control is controversial, and there have been numerous
simulation and analytical studies that go both ways. It appears that
some of the issues that lead to such different results include
sensitivity to ABR parameters, use of binary rather than explicit
rate feedback, use (or not) of per-VC queueing, and the specific ATM
switch algorithms selected for the study. Anecdotally, some large
networks have used IP over ABR to carry TCP traffic, and have claimed
it to be successful, but have published no results.
Another possible approach to flow control in the subnet would be to
work with TCP Explicit Congestion Notification (ECN) semantics
[RFC2481]. Routers at the edges of the subnet, rather than shaping,
would set the ECN bit in those IP packets that are received in subnet
packets that have an ECN indication. Nodes in the subnet would need
to implement an active queue management protocol which marks subnet
packets rather than dropping. However, RFC2481 is presently
experimental, and TCPs which can use ECN are not widely deployed.
Compression
User data compression is a function that can usually be omitted at
the subnetwork layer. The endpoints typically have more CPU and
memory resources to run a compression algorithm and a better
understanding of what is being compressed. End-to-end compression
benefits every network element in the path, while subnetwork-layer
compression, by definition, benefits only a single subnetwork.
Data presented to the subnetwork layer may already be in compressed
format (e.g., a JPEG file), compressed at the application layer
(e.g., the optional "gzip", "compress", and "deflate" compression in
HTTP/1.1 [RFC2616]), or compressed at the IP layer (the IP Payload
Compression Protocol [RFC2393] supports DEFLATE [RFC2394] and LZS
[RFC2395]). In any of these cases, compression in the subnetwork is
of no benefit.
The subnetwork may also process data that has been encrypted at the
application protocol layer (OpenPGP [RFC2440] or S/MIME
[RFCs-2630-2634]), the transport layer (SSL, TLS [RFC2246]), or the
IP layer (IPSEC ESP [RFC2406]). Ciphers generate random-looking bit
streams lacking any patterns that can be exploited by a compression
algorithm.
If a subnetwork decides to implement user data compression, it must
detect when the data is encrypted or already compressed and transmit
it without further compression. This is important because most
compression algorithms increase the size of encrypted data or data
that has already been compressed.
In contrast to user data compression, subnetworks that operate at low
speed or with small packet size limits are encouraged to compress IP
and transport-level headers (TCP and UDP). An uncompressed 40-byte
TCP/IP header takes about 33 milliseconds to send at 9600 bps. "VJ"
TCP/IP header compression [RFC1144] compresses most headers to 3-5
bytes, reducing transmission time to several milliseconds. This is
especially beneficial for small, latency-sensitive packets, such as
in interactive sessions.
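The serialization arithmetic behind these numbers is simple; a small
sketch (hypothetical helper, header sizes as in the text):

```python
def tx_time_ms(nbytes, link_bps):
    """Time, in milliseconds, to serialize nbytes onto a link."""
    return nbytes * 8 * 1000.0 / link_bps

uncompressed = tx_time_ms(40, 9600)  # about 33 msec for a 40-byte header
compressed = tx_time_ms(4, 9600)     # a few msec after VJ compression
```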
Designers should consider the effect of the subnetwork error rate on
performance when considering header compression. TCP ordinarily
recovers from lost packets by retransmitting only those packets that
were actually lost; packets arriving correctly after a packet loss
are kept on a resequencing queue and do not need to be retransmitted.
In VJ TCP/IP [RFC1144] header compression, however, the receiver
cannot explicitly notify a sender about data corruption and
subsequent loss of synchronization between compressor and
decompressor. It relies instead on TCP retransmission to
resynchronize the decompressor. After a packet is lost, the
decompressor must discard every subsequent packet, even if the
subnetwork makes no further errors, until the sending TCP retransmits
to resynchronize the decompressor. This effect can substantially
magnify the effect of subnetwork packet losses if the sending TCP
window is large, as it will often be on a path with a large
bandwidth*delay product.
Alternative header compression schemes such as those described in
[RFC2507] include an explicit request for retransmission of an
uncompressed packet to allow decompressor resynchronization without
waiting for a TCP retransmission. However, these schemes are not yet
in widespread use.
Packet Reordering
The Internet architecture does not guarantee that packets will arrive
in the same order in which they were originally transmitted, and
transport protocols like TCP must take this into account. However,
we recommend that subnetworks not gratuitously deliver packets out of
sequence. Since TCP returns a cumulative acknowledgment (ACK)
indicating the last in-order segment that has arrived, out-of-order
segments cause a TCP receiver to transmit a duplicate acknowledgment.
When the TCP sender notices three duplicate acknowledgments it
assumes that a segment was dropped by the network and uses the fast
retransmit algorithm [Jac90,APS99] to resend the segment. In
addition, the congestion window is reduced by half, effectively
halving TCP's sending rate. If a subnetwork badly re-orders segments
such that three duplicate ACKs are generated, the TCP sender
needlessly reduces the congestion window, and therefore performance.
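The trigger condition can be illustrated with a small sketch: a run
of three duplicate cumulative ACKs is what invokes fast retransmit at
the sender (illustrative only; a real TCP tracks far more state):

```python
def triggers_fast_retransmit(ack_stream):
    """True if the sender would see three duplicate ACKs in a row,
    invoking fast retransmit and halving the congestion window."""
    dups, last = 0, None
    for ack in ack_stream:
        if ack == last:
            dups += 1
            if dups == 3:
                return True
        else:
            dups, last = 0, ack
    return False

# In-order delivery produces advancing ACKs and no duplicates;
# reordering a segment past three of its successors repeats an ACK
# three times and needlessly halves the congestion window.
```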
Mobility
Internet users are increasingly mobile. Not only are many Internet
nodes laptop computers, but pocket organizers and mobile embedded
systems are also becoming nodes on the Internet. These nodes may
connect to many different access points on the Internet over time,
and they expect this to be largely transparent to their activities.
Except when they are not connected to the Internet at all, and for
performance differences when they are connected, they expect that
everything will "just work" regardless of their current Internet
attachment point or local subnetwork technology.
Mobility can be provided at any of several layers in the Internet
protocol stack, and there is ongoing debate as to which are the most
appropriate and efficient. Mobility is already a feature of certain
application layer protocols; the Post Office Protocol (POP) [RFC1939]
and the Internet Message Access Protocol (IMAP) [RFC2060] were
created specifically to provide mobility in the receipt of electronic
mail.
Mobility can also be provided at the IP layer [RFC2002]. This
mechanism provides greater transparency, viz., IP addresses that
remain fixed as the nodes move, but at the cost of potentially
significant network overhead and increased delay because of the non-
optimum network routing and tunneling involved.
Some subnetworks may provide internal mobility, transparent to IP, as
a feature of their own internal routing mechanisms. To the extent
that these simplify routing at the IP layer, reduce the need for
mechanisms like Mobile IP, or exploit mechanisms unique to the
subnetwork, this is generally desirable. This is especially true when
the subnetwork covers a relatively small geographic area and the
users move rapidly between the attachment points within that area.
However, if the subnetwork is physically large and connects to other
parts of the Internet at multiple geographic points, care should be
taken to optimize the wide-area routing of packets between nodes on
the external Internet and nodes on the subnet. This is generally done
with "nearest exit" routing strategies. Because a given subnetwork
may be unaware of the actual physical location of a destination on
another subnetwork, it simply routes packets bound for the other
subnetwork to the nearest gateway between the two. This implies some
awareness of IP addressing and routing within the subnetwork. The
subnetwork may wish to use IP routing internally for wide area
routing and restrict subnetwork-specific routing to constrained
geographic areas where the effects of suboptimal routing are
minimized.
Multicasting
Similar to the case of broadcast and discovery, multicast is more
efficient on shared links where it is supported natively. Native
multicast support requires a reasonable number (?? - over 10, under
1000?) of separate link-layer broadcast addresses. One such address
SHOULD be reserved for native link broadcast; other addresses SHOULD
be provided to support separate multicast groups (and there SHOULD be
least 10?? such addresses).
The other criterion for native multicast is a link-layer filter, which
can select individual or sets of broadcast addresses. Such link
filters avoid having every host parse every multicast message in the
driver; a host receives, at the network layer, only those packets
that pass its configured link filters. A shared link SHOULD support
multiple, programmable link filters, to support efficient native
multicast.
[Multicasting can be simulated over unicast subnets by sending
multiple copies of packets, but this is wasteful. If the subnet can
support native multicasting in an efficient way, it should do so]
Broadcasting and Discovery
Link layers fall into two categories: point-to-point and shared link.
A point-to-point link has exactly two endpoint components (hosts or
gateways); a shared link has more than two, either on an inherently
broadcast medium (e.g., Ethernet, radio) or on a switching layer
hidden from the network layer (switched Ethernet, Myrinet, ATM).
There are a number of Internet protocols which make use of link layer
broadcast capabilities. These include link layer address lookup
(ARP), auto-configuration (RARP, BOOTP, DHCP), and routing (RIP).
These protocols require broadcast-capable links. Shared links SHOULD
support native, link layer subnet broadcast.
The lack of broadcast can impede the performance of these protocols,
or in some cases render them inoperable. ARP-like link address lookup
can be provided by a centralized database, rather than owner response
to broadcast queries. This comes at the expense of potentially higher
response latency and the need for explicit knowledge of the ARP
server address (no automatic ARP discovery).
For other protocols, if a link does not support broadcast, the
protocol is inoperable. This is the case for DHCP, for example.
Routing
[what is proper division between routing at the Internet layer and
routing in the subnet? Is it useful or helpful to Internet routing to
have subnetworks that provide their own internal routing?]
Security
[Security mechanisms should be placed as close as possible to the
entities that they protect. E.g., mechanisms that protect host
computers or users should be implemented at the higher layers and
operate on an end-to-end basis under control of the users. This makes
subnet security mechanisms largely redundant unless they are to
protect the subnet itself, e.g., against unauthorized use.]
References
References of the form RFCnnnn are Internet Request for Comments
(RFC) documents available online at www.rfc-editor.org.
[APS99] Mark Allman, Vern Paxson, W. Richard Stevens. TCP Congestion
Control, April 1999. RFC 2581.
[BPK98] Hari Balakrishnan, Venkata Padmanabhan, Randy H. Katz. The
Effects of Asymmetry on TCP Performance. ACM Mobile Networks and
Applications (MONET), 1998.
[Jac90] Van Jacobson. Modified TCP Congestion Avoidance Algorithm.
Email to the end2end-interest mailing list, April 1990. URL:
ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt.
[SRC81] Jerome H. Saltzer, David P. Reed and David D. Clark, End-to-
End Arguments in System Design. Second International Conference on
Distributed Computing Systems (April, 1981) pages 509-512. Published
with minor changes in ACM Transactions in Computer Systems 2, 4,
November, 1984, pages 277-288. Reprinted in Craig Partridge, editor
Innovations in internetworking. Artech House, Norwood, MA, 1988,
pages 195-206. ISBN 0-89006-337-0. Also scheduled to be reprinted in
Amit Bhargava, editor. Integrated broadband networks. Artech House,
Boston, 1991. ISBN 0-89006-483-0.
http://people.qualcomm.com/karn/library.html.
[RFC791] Jon Postel. "Internet Protocol". September 1981.
[RFC1144] Jacobson, V., "Compressing TCP/IP Headers for Low-Speed
Serial Links," RFC 1144, February 1990.
[RFC1191] J. Mogul, S. Deering. "Path MTU Discovery". November 1990.
[RFC1435] S. Knowles. "IESG Advice from Experience with Path MTU
Discovery". March 1993.
[RFC1577] M. Laubach. "Classical IP and ARP over ATM". January 1994.
[RFC1661] W. Simpson. "The Point-to-Point Protocol (PPP)". July 1994.
[RFC1981] J. McCann, S. Deering, J. Mogul. "Path MTU Discovery for IP
version 6". August 1996.
[RFC2364] G. Gross et al. "PPP Over AAL5". July 1998.
[RFC2393] A. Shacham et al. "IP Payload Compression Protocol
(IPComp)". December 1998.
[RFC2394] R. Pereira. "IP Payload Compression Using DEFLATE".
December 1998.
[RFC2395] R. Friend, R. Monsour. "IP Payload Compression Using LZS".
December 1998.
[RFC2440] J. Callas et al. "OpenPGP Message Format". November 1998.
[RFC2246] T. Dierks, C. Allen. "The TLS Protocol Version 1.0".
January 1999.
[RFC2507] M. Degermark, B. Nordgren, S. Pink. "IP Header
Compression". February 1999.
[RFC2508] S. Casner, V. Jacobson. "Compressing IP/UDP/RTP Headers for
Low-Speed Serial Links". February 1999.
[RFC2581] M. Allman, V. Paxson, W. Stevens. "TCP Congestion Control".
April 1999.
[RFC2406] S. Kent, R. Atkinson. "IP Encapsulating Security Payload
(ESP)". November 1998.
[RFC2616] R. Fielding et al. "Hypertext Transfer Protocol --
HTTP/1.1". June 1999.
[RFC2684] D. Grossman, J. Heinanen. "Multiprotocol Encapsulation over
ATM Adaptation Layer 5". September 1999.
[PFTK98] Padhye, J., Firoiu, V., Towsley, D., and Kurose, J.,
Modeling TCP Throughput: a Simple Model and its Empirical Validation,
UMASS CMPSCI Tech Report TR98-008, Feb. 1998.
[MSMO97] M. Mathis, J. Semke, J. Mahdavi, T. Ott, "The Macroscopic
Behavior of the TCP Congestion Avoidance Algorithm",Computer
Communication Review, volume 27, number 3, July 1997.
[OKM96] T. Ott, J.H.B. Kemperman, M. Mathis, The Stationary Behavior
of Ideal TCP Congestion Avoidance.
ftp://ftp.bellcore.com/pub/tjo/TCPwindow.ps
[RED93] S. Floyd, V. Jacobson, "Random Early Detection gateways for
Congestion Avoidance", IEEE/ACM Transactions in Networking, V.1 N.4,
August 1993, http://www.aciri.org/floyd/papers/red/red.html
[Stevens94] R. Stevens, "TCP/IP Illustrated, Volume 1," Addison-
Wesley, 1994 (section 2.10).
[ATMFTM] The ATM Forum, "Traffic Management Specification, Version
4.0", April 1996, document af-tm-0056.000 (www.atmforum.com).
[FB00] Firoiu V., and Borden M., "A Study of Active Queue Management
for Congestion Control" to appear in Infocom 2000
[MBB00] May, M., Bonald, T., and Bolot, J-C., "Analytic Evaluation of
RED Performance" to appear INFOCOM 2000
[MBDL99] May, M., Bolot, J., Diot, C., and Lyles, B., Reasons not to
deploy RED, technical report, June 1999.
[RF95] Romanow, A., and Floyd, S., Dynamics of TCP Traffic over ATM
Networks. IEEE JSAC, V. 13 N. 4, May 1995, p. 633-641.
[RFC2481] Ramakrishnan, K. and Floyd, S., "A Proposal to add Explicit
Congestion Notification (ECN) to IP" RFC2481 January 1999
[ISO3309] ISO/IEC 3309:1991(E), "Information Technology -
Telecommunications and information exchange between systems - High-
level data link control (HDLC) procedures - Frame structure",
International Organization For Standardization, Fourth edition
1991-06-01.
[EN301] ETSI, (European Broadcasting Union), Digital Video
Broadcasting (DVB); DVB Specification for Data Broadcasting, 1997.
Draft ETSI Standard EN 301 192 v1.1.1 (August 1997).
[ISO13181] ISO/IEC, ISO/IEC 13181-1: Information Technology - Generic
coding of moving pictures and associated audio information, 1995,
International Organization for Standardization and International
Electrotechnical Commission.
Security Considerations
[comment here]
Authors' Addresses:
Phil Karn (karn@qualcomm.com)
Aaron Falk (afalk@panamsat.com)
Joe Touch (touch@isi.edu)
Marie-Jose Montpetit (marie@teledesic.com)
Jamshid Mahdavi (mahdavi@novell.com)
Gabriel Montenegro (Gabriel.Montenegro@eng.sun.com)
Dan Grossman (dan@dma.isg.mot.com)
Gorry Fairhurst (gorry@erg.abdn.ac.uk)