<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY % rfc2629 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.2629.xml'>
<!ENTITY % rfc3654 PUBLIC ''
"http://xml.resource.org/public/rfc/bibxml/reference.RFC.3654.xml">
<!ENTITY % rfc3746 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.3746.xml'>
<!ENTITY % rfc3768 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.3768.xml'>
<!ENTITY % rfc4960 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.4960.xml'>
<!ENTITY % rfc3554 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.3554.xml'>
<!ENTITY % rfc4109 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.4109.xml'>
<!ENTITY % rfc4301 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.4301.xml'>
<!ENTITY % rfc4303 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.4303.xml'>
<!ENTITY % rfc2404 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.2404.xml'>
<!ENTITY % rfc3602 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.3602.xml'>
<!ENTITY % rfc5061 PUBLIC ''
'http://xml.resource.org/public/rfc/bibxml/reference.RFC.5061.xml'>
]>
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<?rfc strict="no"?>
<rfc ipr="trust200811" docName="draft-ietf-forces-sctptml-04">
<front>
<title abbrev="ForCES SCTP TML">
SCTP-Based TML (Transport Mapping Layer) for the ForCES Protocol
</title>
<author fullname="Jamal Hadi Salim" initials="J." surname="Hadi Salim">
<organization>Mojatatu Networks</organization>
<address>
<postal>
<city>Ottawa, Ontario</city>
<country>Canada</country>
</postal>
<email>hadi@mojatatu.com</email>
</address>
</author>
<author fullname="Kentaro Ogawa" initials="K." surname="Ogawa">
<organization>NTT Corporation</organization>
<address>
<postal>
<street>3-9-11 Midori-cho</street>
<city>Musashino-shi, Tokyo</city>
<code>180-8585</code>
<country>Japan</country>
</postal>
<email>ogawa.kentaro@lab.ntt.co.jp</email>
</address>
</author>
<date day="1" month="July" year="2009" />
<area>Routing</area>
<keyword>RFC</keyword>
<keyword>Request for Comments</keyword>
<keyword>I-D</keyword>
<keyword>Internet-Draft</keyword>
<keyword>ForCES</keyword>
<keyword>TML</keyword>
<abstract>
<t>
This document defines the SCTP-based TML (Transport Mapping Layer) for
the ForCES protocol. It explains the rationale for choosing
SCTP (Stream Control Transmission Protocol) <xref target="RFC4960"/>
and describes how this TML addresses all
the requirements described in <xref target="RFC3654"/>
and the ForCES protocol draft <xref target="FE-PROTO"/>.
</t>
</abstract>
</front>
<middle>
<section title="Definitions">
<t>
The following definitions are taken from <xref target="RFC3654"/> and
<xref target="RFC3746"/>:
</t>
<t>
ForCES Protocol --
The protocol used at the Fp reference point in the ForCES
Framework <xref target="RFC3746"/>.
</t>
<t>
ForCES Protocol Layer (ForCES PL) -- A layer in the ForCES protocol
architecture that defines the ForCES protocol messages
and the state transfer mechanisms, as defined in <xref target="FE-PROTO"/>.
</t>
<t>
ForCES Protocol Transport Mapping Layer (ForCES TML) -- A layer in
the ForCES protocol architecture that specifically addresses the
protocol message transportation issues, such as how the protocol
messages are mapped to different transport media (like SCTP, IP, ATM,
Ethernet, etc.), and how to achieve and implement reliability,
security, etc.
</t>
</section>
<section title="Introduction">
<t>
The ForCES (Forwarding and Control Element Separation) working group
in the IETF defines the architecture and protocol for the separation
of Control Elements (CEs) and Forwarding Elements (FEs) in Network
Elements (NEs) such as routers. <xref target="RFC3654"/> and
<xref target="RFC3746"/> respectively define the architectural and
protocol requirements for the communication between CE and FE. The
ForCES protocol layer specification <xref target="FE-PROTO"/>
describes the protocol semantics and workings. The ForCES protocol
layer operates on top of an inter-connect-hiding layer known as the
TML. The relationship is illustrated in <xref target="pltml_fig"/>.
</t>
<t>
This document defines the SCTP-based TML for the ForCES
protocol layer. It also addresses all the requirements for the TML,
including security, reliability, etc., as defined in
<xref target="FE-PROTO"/>.
</t>
</section>
<section title="Protocol Framework Overview" anchor="overv">
<t>
The reader is referred to the Framework document <xref target="RFC3746"/>,
and in particular sections 3 and 4, for an architectural overview and
explanation of where and how the ForCES protocol fits in.
</t>
<t>
There is some content overlap between the ForCES protocol draft
<xref target="FE-PROTO"/> and this section (<xref target="overv"/>)
in order to provide basic context to the reader of this document.
</t>
<t>
The ForCES protocol layering consists of two pieces: the PL and the TML.
This is depicted in <xref target="pltml_fig"/>.
<figure anchor="pltml_fig" title="The ForCES Protocol Layering">
<preamble> </preamble>
<artwork><![CDATA[
+----------------------------------------------+
| CE PL |
+----------------------------------------------+
| CE TML |
+----------------------------------------------+
^
|
ForCES PL | messages
|
v
+-----------------------------------------------+
| FE TML |
+-----------------------------------------------+
| FE PL |
+-----------------------------------------------+
]]></artwork>
</figure>
The PL is in charge of the ForCES protocol. Its semantics and
message layout are defined in <xref target="FE-PROTO"/>.
The TML is necessary to connect two ForCES endpoints as
shown in <xref target="pltml_fig"/>.
</t>
<t>
Both the PL and TML are standardized by the IETF. While only
one PL is defined, different TMLs are expected to be
standardized. To interoperate, the TML at each of the nodes (CE and
FE) must be of the same definition.
</t>
<t>
When transmitting from a ForCES endpoint, the PL delivers
its messages to the TML.
The TML then delivers the PL messages to the destination
TML(s).
</t>
<t>
On reception of a message, the TML delivers the message to its
destination PL (as described in the ForCES header).
</t>
<section title="The PL">
<t>
The PL is common to all implementations of ForCES and is
standardized by the IETF <xref target="FE-PROTO"/>.
The PL level is responsible for associating an FE or CE to an NE.
It is also responsible for tearing down such associations.
</t>
<t>
An FE may use the PL to asynchronously send packets to the CE.
The FE may redirect via the PL (from outside the NE) various
control protocol packets (e.g., OSPF) to the CE. Additionally,
the FE delivers the various events that the CE has subscribed to via
the PL <xref target="FE-MODEL"/>.
</t>
<t>
The CE and FE may interact synchronously via the PL.
The CE issues status requests to the FE and receives responses
via the PL.
The CE also configures the associated FE's LFBs' components using
the PL <xref target="FE-MODEL"/>.
</t>
</section>
<section title="The TML">
<t>
The TML level is responsible for transport of the PL level messages.
<xref target="FE-PROTO"/> section 5 defines the requirements that
need to be met by a TML specification. The SCTP TML specified in
this document meets all the requirements specified in
<xref target="FE-PROTO"/> section 5. <xref target="TMLREQ"/>
describes how the TML requirements are met.
</t>
<section title="TML and PL Interfaces" anchor="TMLAPIs">
<t>
There are two interfaces to the PL and TML, both of which
are out of scope for ForCES. The first one is the interface
between the PL and TML and the other is the
CE Manager (CEM) / FE Manager (FEM) <xref target="RFC3746"/>
interface to both the PL and TML. Both interfaces are shown in
<xref target="pltml_api"/>.
</t>
<figure anchor = "pltml_api" title="The TML-PL interface">
<preamble> </preamble>
<artwork><![CDATA[
+----------------------------+
| +----------------------+ |
| | | |
+---------+ | | PL Layer | |
| | | +----------------------+ |
|FEM/CEM |<---->| ^ |
| | | | |
+---------+ | |TML API |
| | |
| V |
| +----------------------+ |
| | | |
| | TML Layer | |
| | | |
| +----------------------+ |
+----------------------------+
]]></artwork>
</figure>
<t>
<xref target="pltml_api"/> also shows an interface referred
to as CEM/FEM <xref target="RFC3746"/>, which is responsible for
bootstrapping and parameterization of the TML.
In its most basic form, the CEM/FEM interface takes the form of a
simple static config file that is read on startup in the
pre-association phase.
</t>
<t>
<xref target="serviceint"/> discusses the service interfaces in more
detail.
</t>
</section>
<section title ="TML Parameterization" anchor ="TMLp">
<t>
It is expected that it should be possible to use a
configuration reference point, such as the FEM
or the CEM, to configure the TML.
</t>
<t>
Some of the configured parameters may include:
<list style="symbols">
<t>PL ID</t>
<t>Connection type and associated data. For example, if a
TML uses IP/SCTP, then parameters such as SCTP
ports and IP addresses need to be configured.</t>
<t>Number of transport connections</t>
<t>Connection capability, such as bandwidth, etc.</t>
<t>Allowed/supported connection QoS policy (or
congestion control policy)</t>
</list>
</t>
</section>
</section>
</section>
<section title="SCTP TML overview">
<t>
SCTP <xref target="RFC4960"/> is an end-to-end transport
protocol that is comparable to TCP, UDP, and DCCP in many
respects.
With a few exceptions, SCTP can do most of what UDP, TCP,
or DCCP can achieve, and most of what
a combination of those transport protocols can achieve
(e.g., TCP and DCCP, or TCP and UDP).
</t>
<t>
Like TCP, it provides ordered, reliable, connection-oriented,
flow-controlled, congestion-controlled data exchange.
Unlike TCP, it does not provide byte streaming; instead it
provides message boundaries.
</t>
<t>
Like UDP, it can provide unreliable, unordered data exchange.
Unlike UDP, it does not provide multicast support.
</t>
<t>
Like DCCP, it can provide unreliable, ordered, congestion-controlled,
connection-oriented data exchange.
</t>
<t>
SCTP also provides other services that none of the
three transport protocols mentioned above provides. These
include:
<list style = "symbols">
<t>Multi-homing <vspace />
An SCTP connection can make use of multiple
destination IP addresses to communicate with
its peer.
</t>
<t>Runtime IP address binding <vspace />
With the SCTP Dynamic Address Reconfiguration
(<xref target="RFC5061"/>) feature, a new IP
address can be bound at runtime. This allows for
migration of endpoints without restarting
the association (valuable for high availability).
</t>
<t>A range of reliability shades with congestion
control <vspace />
SCTP offers a range of services from full
reliability to none, and from full ordering to none.
With SCTP, on a per message basis, the application
can specify a message's time-to-live. When
the expressed time expires, the message can
be "skipped".
</t>
<t>Built-in heartbeats <vspace />
SCTP has a built-in heartbeat mechanism that validates
the reachability of peer addresses.
</t>
<t>Multi-streaming <vspace />
A known problem with TCP is head-of-line (HOL)
blocking: even when messages are independent, TCP
enforces ordering among them, so a loss at the
head of the stream delays delivery of subsequent
messages. SCTP allows for defining
up to 64K independent streams over the same socket
connection, which are ordered independently.
</t>
<t>Message boundaries with reliability <vspace />
SCTP allows for easier message parsing (just
like UDP but with reliability built in)
because it establishes boundaries on a PL
message basis. On a TCP stream, one would
have to use techniques such as peeking
into the message to figure out the boundaries.
</t>
<t>Improved SYN DOS protection <vspace />
Unlike TCP, which does a three-way connection
setup handshake, SCTP does a four-way handshake.
This improves resistance to SYN-flood attacks because
listening sockets do not set up state until
a connection is validated.
</t>
<t>Simpler transport events <vspace />
An application (such as the TML) can subscribe
to be notified of both local and remote transport
events. Events that can be subscribed-to include
indication of association changes, addressing
changes, remote errors, expiry of timed messages,
etc. These events are off by default
and require explicit subscription.
</t>
<t>Simplified replicasting <vspace />
Although SCTP does not allow for multicasting,
it allows a single message from an application
to be sent to multiple peers. This reduces
the messaging that typically crosses different
memory domains within a host (for example, from
kernel to user space in an operating system).
</t>
</list>
</t>
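<figure>
<preamble>
As an illustration of the timed-reliability service described above,
the following sketch models in plain Python (with invented names; it
is not an SCTP implementation) how a message whose time-to-live has
expired is abandoned ("skipped") rather than retransmitted:
</preamble>
<artwork><![CDATA[
```python
class TimedMessage:
    """A message with a PR-SCTP-style lifetime, in seconds."""
    def __init__(self, payload, ttl, now):
        self.payload = payload
        self.expiry = now + ttl

def still_deliverable(queue, now):
    """Model of PR-SCTP 'skipping': expired messages are abandoned
    rather than retransmitted; only unexpired ones remain queued."""
    return [m for m in queue if m.expiry > now]

# Two messages queued at t=0 with different lifetimes.
q = [TimedMessage("counter-event", ttl=1.0, now=0.0),
     TimedMessage("config-response", ttl=30.0, now=0.0)]
# By t=5 the short-lived event has been obsoleted.
remaining = still_deliverable(q, now=5.0)
```
]]></artwork>
<postamble>
Here the short-lived event message is dropped at the sender once its
lifetime passes, while the long-lived response remains deliverable.
</postamble>
</figure>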
<section title="Rationale for using SCTP for TML">
<t>
SCTP has all the features required to provide a robust
TML. As an all-encompassing transport, it negates
the need for multiple transport protocols to
satisfy the TML requirements (<xref target="FE-PROTO"/>,
section 5).
As a result, it allows for simpler implementations
and reduces interoperability concerns.
</t>
<t>
SCTP is also very mature and widely used,
making it a good choice for ubiquitous deployment.
</t>
</section>
<section title="Meeting TML requirements">
<figure anchor = "sctptml_api" title="The TML-SCTP interface">
<preamble> </preamble>
<artwork><![CDATA[
PL
+----------------------+
| |
+-----------+----------+
| TML API
TML |
+-----------+----------+
| | |
| +------+------+ |
| | TML core | |
| +-+----+----+-+ |
| | | | |
| SCTP socket API |
| | | | |
| | | | |
| +-+----+----+-+ |
| | SCTP | |
| +------+------+ |
| | |
| | |
| +------+------+ |
| | IP | |
| +-------------+ |
+----------------------+
]]></artwork>
</figure>
<t>
<xref target="sctptml_api"/> details the interfacing between the PL
and SCTP TML and the internals of the SCTP TML. The core of the TML
interacts on its north-bound interface to the PL (utilizing the TML
API).
On the south-bound interface, the TML core interfaces to the SCTP layer
utilizing the standard socket interface <xref target="SCTP-API"/>.
There are three SCTP socket connections opened between any two PL
</t>
<section title="SCTP TML Channels" anchor = "sctptml_channs" >
<figure anchor = "sctptml_chan" title="The TML-SCTP channels">
<preamble> </preamble>
<artwork><![CDATA[
+--------------------+
| |
| TML core |
| |
+-+-------+--------+-+
| | |
| Med prio, |
| Semi-reliable |
| channel |
| | Low prio,
| | Unreliable
| | channel
| | |
^ ^ ^
| | |
Y Y Y
High prio,| | |
reliable | | |
channel | | |
Y Y Y
+-+--------+--------+-+
| |
| SCTP |
| |
+---------------------+
]]></artwork>
</figure>
<t>
<xref target="sctptml_chan"/> details further the interfacing
between the TML core and SCTP layers. There are 3 channels used
to separate and prioritize the different types of ForCES traffic.
Each channel constitutes a socket interface.
It should be noted that all SCTP channels are congestion aware; for
that reason, that detail is left out of the descriptions of the
three channels.
SCTP ports 6700, 6701, and 6702 are used for the high-, medium-, and
low-priority channels, respectively.
</t>
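<figure>
<preamble>
The channel layout above can be summarized as a small table. The
sketch below is illustrative only (the dictionary layout and names are
invented); the port numbers and reliability modes are those specified
in this section and in the per-channel sections that follow:
</preamble>
<artwork><![CDATA[
```python
# SCTP TML channels: port and transport mode per channel.
CHANNELS = {
    "HP": {"port": 6700, "mode": "reliable"},
    "MP": {"port": 6701, "mode": "partially-reliable"},
    "LP": {"port": 6702, "mode": "partially-reliable"},
}

def port_for(channel):
    """Return the SCTP port a given channel connects on."""
    return CHANNELS[channel]["port"]
```
]]></artwork>
</figure>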
<section title="Justifying Choice of 3 Sockets" anchor="3socks">
<t>
SCTP allows up to 64K streams to be sent over a single socket
interface. The authors initially envisioned using a single socket
for all three channels (mapping each channel to an SCTP stream). That
approach would have simplified programming of the TML and conserved
SCTP ports.
</t>
<t>
Further analysis revealed head-of-line blocking issues with this
initial approach. Lower-priority packets not needing reliable delivery
could block higher-priority packets (needing reliable delivery) under
congestion situations for an indeterminate period of time (depending
on how many outstanding lower-priority packets are pending).
For this reason, we elected to map
each of the three channels to a different SCTP socket (instead of
a different stream within a single socket).
</t>
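<figure>
<preamble>
The head-of-line blocking concern can be illustrated with a toy model
(plain Python, invented names): with a single ordered stream, a
high-priority message queued behind low-priority backlog must wait its
turn, whereas with one queue per channel it is picked first:
</preamble>
<artwork><![CDATA[
```python
from collections import deque

def next_single_stream(stream):
    """One ordered stream: delivery follows arrival order, so HP
    traffic waits behind any LP backlog (head-of-line blocking)."""
    return stream[0]

def next_per_channel(queues):
    """One queue per channel: strict priority picks HP work first."""
    for chan in ("HP", "MP", "LP"):
        if queues[chan]:
            return queues[chan][0]
    return None

arrivals = [("LP", "redirect-1"), ("LP", "redirect-2"),
            ("HP", "config-1")]
single = deque(msg for _, msg in arrivals)
per_chan = {"HP": deque(), "MP": deque(), "LP": deque()}
for chan, msg in arrivals:
    per_chan[chan].append(msg)
```
]]></artwork>
</figure>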
</section>
<section title="Higher Priority, Reliable channel" anchor = "HP">
<t>
The higher-priority (HP) channel uses a standard SCTP reliable socket
on port 6700. It is used for CE-solicited messages and their responses:
<list style = "numbers">
<t>
ForCES configuration messages flowing from CE to FE and responses
from the FE to CE.
</t>
<t>
ForCES query messages flowing from CE to FE and responses from
the FE to the CE.
</t>
</list>
</t>
<t>
It is recommended that PL priorities 4-7 be used for this channel and
that the following PL messages use the HP channel for transport:
<list style = "symbols">
<t>
Association Setup
</t>
<t>
Association Setup Response
</t>
<t>
Association Teardown
</t>
<t>
Config
</t>
<t>
Config Response
</t>
<t>
Query
</t>
<t>
Query Response
</t>
</list>
</t>
</section>
<section title="Medium Priority, Semi-Reliable channel" anchor = "MP">
<t>
The medium-priority (MP) channel uses SCTP-PR on port 6701.
A time limit on how long a message is valid is set on each
outgoing message. This channel is used for events from the FE
to the CE that are obsoleted over time.
Events that are cumulative in nature and are recoverable by the CE
(by issuing a query to the FE) can tolerate loss and therefore
should use this channel.
For example, an event carrying the value of a counter
that is monotonically incrementing is a good fit for this channel.
</t>
<t>
It is recommended that PL priorities 2-3 be used for this channel and that
the following PL messages use the MP channel for transport:
<list style = "symbols">
<t>
Event Notification
</t>
</list>
</t>
</section>
<section title="Lower Priority, Unreliable channel" anchor = "LP">
<t>
The lower-priority (LP) channel uses SCTP port 6702.
This channel
also uses SCTP-PR, with lower timeout values than the MP channel.
The reason an unreliable channel is used for redirect
messages is to allow the control protocols at the CE and its
peer endpoint to take charge of the end-to-end semantics of their
own operations. For example:
<list style = "numbers">
<t>
Some control protocols are reliable in nature; making
this channel reliable would introduce an extra layer of
reliability, which could be harmful. Any end-to-end
retransmissions are therefore left to the remote endpoint.
</t>
<t>
Some control protocols may prefer obsolescence of messages
over retransmission; making this channel reliable would
contradict that preference.
</t>
</list>
</t>
<t>
Given that ForCES PL-level heartbeats are traffic sensitive, sending
them over the LP channel also makes sense. If the other end is
not busy processing other channels, it will eventually get the
heartbeats; if it is busy processing other channels, the heartbeats
will be obsoleted locally over time (and it does not matter that they
did not make it).
</t>
<t>
It is recommended that PL priorities 0-1 be used for this channel and
that the following PL messages use the LP channel for transport:
<list style = "symbols">
<t>
Packet Redirect
</t>
<t>
Heartbeats
</t>
</list>
</t>
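<figure>
<preamble>
Taken together, the per-channel recommendations of the last three
sections amount to the following mapping of PL message types to
channels (a hypothetical lookup table; an implementation may organize
this differently):
</preamble>
<artwork><![CDATA[
```python
# Recommended channel per PL message type (HP: priorities 4-7,
# MP: priorities 2-3, LP: priorities 0-1).
MESSAGE_CHANNEL = {
    "AssociationSetup": "HP",
    "AssociationSetupResponse": "HP",
    "AssociationTeardown": "HP",
    "Config": "HP",
    "ConfigResponse": "HP",
    "Query": "HP",
    "QueryResponse": "HP",
    "EventNotification": "MP",
    "PacketRedirect": "LP",
    "Heartbeat": "LP",
}

def channel_for(message_type):
    """Return the recommended channel for a PL message type."""
    return MESSAGE_CHANNEL[message_type]
```
]]></artwork>
</figure>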
</section>
<section title="Scheduling of The 3 Channels" anchor = "3csched">
<t>
Strict-priority, work-conserving scheduling is used by the TML core
to process PL messages, both on sending and receiving, as
shown in <xref target="gen-sched"/>.
</t>
<t>
This means that HP messages are always processed
first, until there are none left. A lower-priority channel is
processed only if all channels of higher priority
have no messages left to process.
This means that, under congestion, a higher-priority channel
with sufficient messages to occupy the available bandwidth will
starve lower-priority channel(s).
</t>
<t>
The design intent of the SCTP TML is to tie prioritization
as described in <xref target="3socks"/> and transport congestion
control to provide implicit node congestion control. This
is further detailed in <xref target="sched-det"/>.
</t>
<t>
<figure anchor = "gen-sched" title="SCTP TML Strict Priority Scheduling">
<preamble> </preamble>
<artwork><![CDATA[
SCTP channel +----------+
Work available | DONE +---<--<--+
| +---+------+ |
Y ^
| +-->--+ +-->---+ |
+-->-->-+ | | | | |
| | | | | | ^
| ^ ^ Y ^ Y |
^ / \ | | | | |
| / \ | ^ | ^ ^
| / Is \ | / \ | / \ |
| / there \ | /Is \ | /Is \ |
^ / HP work \ ^ /there\ ^ /there\ ^
| \ ? / | /MP work\ | /LP work\ |
| \ / | \ ? / | \ ? / |
| \ / | \ / | \ / ^
| \ / ^ \ / ^ \ / |
| \ / | \ / | \ / |
^ Y-->-->-->+ Y-->-->-->+ Y->->->-+
| | NO | NO | NO
| | | |
| Y Y Y
| | YES | YES |
^ | | |
| Y Y Y
| +----+------+ +---|-------+ +----|------+
| |- process | |- process | |- process |
| | HP work | | MP work | | LP work |
| +------+----+ +-----+-----+ +-----+-----+
| | | |
^ Y Y Y
| | | |
| Y Y Y
+--<--<---+--<--<----<----+-----<---<-----+
]]></artwork>
</figure>
</t>
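<figure>
<preamble>
The flowchart above reduces to the following loop (an illustrative
sketch; a real TML would block on the three sockets rather than poll
in-memory queues, and the budget argument merely stands in for
available processing capacity):
</preamble>
<artwork><![CDATA[
```python
from collections import deque

def run_scheduler(queues, budget):
    """Strict-priority, work-conserving scheduling: drain HP work
    first, then MP, then LP, until no work or budget remains."""
    processed = []
    for _ in range(budget):
        for chan in ("HP", "MP", "LP"):
            if queues[chan]:
                processed.append((chan, queues[chan].popleft()))
                break
        else:
            break  # no channel has work left: done
    return processed

q = {"HP": deque(["cfg-1"]), "MP": deque(["event-1"]),
     "LP": deque(["hb-1"])}
order = run_scheduler(q, budget=10)
```
]]></artwork>
<postamble>
With one message queued per channel, the messages are processed in
strict HP, MP, LP order.
</postamble>
</figure>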
</section>
<section title="SCTP TML Parameterization" anchor="params">
<t>
The following is a list of parameters needed for booting the TML.
It is expected these parameters will be extracted via the FEM/CEM
interface for each PL ID.
<list style = "numbers">
<t>
The IP address or a resolvable DNS/hostname of the CE/FE.
</t>
<t>
Whether to use IPsec or not. If IPsec is used, how to
parameterize the different required ciphers, keys, etc.,
as described in <xref target="ipsec"/>.
</t>
<t>
The HP SCTP port, as discussed in <xref target="HP"/>. The
default HP port value is 6700 (<xref target="IANA"/>).
</t>
<t>
The MP SCTP port, as discussed in <xref target="MP"/>.
The default MP port value is 6701 (<xref target="IANA"/>).
</t>
<t>
The LP SCTP port, as discussed in <xref target="LP"/>.
The default LP port value is 6702 (<xref target="IANA"/>).
</t>
</list>
</t>
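<figure>
<preamble>
The boot parameters listed above might be represented as follows
(field names are invented for this sketch; the port defaults are the
values given in the IANA Considerations section):
</preamble>
<artwork><![CDATA[
```python
from dataclasses import dataclass

@dataclass
class TmlBootParams:
    """TML boot parameters, extracted via the FEM/CEM interface
    for each PL ID."""
    pl_id: int
    peer: str                # IP address or resolvable name of CE/FE
    use_ipsec: bool = False  # if True, ciphers/keys are also needed
    hp_port: int = 6700      # high-priority channel port (default)
    mp_port: int = 6701      # medium-priority channel port (default)
    lp_port: int = 6702      # low-priority channel port (default)

params = TmlBootParams(pl_id=1, peer="ce.example.net")
```
]]></artwork>
</figure>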
</section>
</section>
<section title="Satisfying TML Requirements" anchor="TMLREQ">
<t>
<xref target="FE-PROTO"/> section 5 lists requirements that
a TML needs to meet. This section describes how the SCTP
TML satisfies those requirements.
</t>
<section title="Satisfying Reliability Requirement">
<t>
As mentioned earlier, a shade of reliability
ranges is possible in SCTP. Therefore this requirement
is met.
</t>
</section>
<section title="Satisfying Congestion Control Requirement">
<t>
Congestion control is built into SCTP. Therefore,
this requirement is met.
</t>
</section>
<section title="Satisfying Timeliness and Prioritization Requirement">
<t>
By using 3 sockets in conjunction with the
partial-reliability feature, both timeliness and
prioritization can be achieved.
</t>
</section>
<section title="Satisfying Addressing Requirement">
<t>
No extra headers are required for SCTP to
fulfil this requirement.
SCTP can be told to replicast packets to multiple
destinations. The TML implementation will need to
translate PL-level addresses to a variety of unicast
IP addresses in order to emulate multicast and broadcast
PL addresses.
</t>
</section>
<section title="Satisfying HA Requirement">
<t>
Transport link resiliency is one of SCTP's strongest points.
Failure detection and recovery are built in, as mentioned
earlier.
<list style = "symbols">
<t>
The SCTP multi-homing feature is used to provide path
diversity.
Should one of the peer IP addresses become unreachable,
the other(s) are used without needing lower layer
convergence (routing, for example) or even
the TML becoming aware.
</t>
<t>
SCTP heartbeats and data transmission thresholds are used
on a per-peer-IP-address basis to detect reachability faults.
A fault could be the result of an unreachable address or
peer, which may be caused by a variety of failures, such as
interface, network, or endpoint failures. The cause
of the fault is noted.
</t>
<t>
With the ADDIP feature, one can migrate IP addresses
to other nodes at runtime. This is not unlike the use of
VRRP <xref target="RFC3768"/>. This feature
is used in addition to multi-homing in a planned migration
of activity from one FE/CE to another. In such a case, part
of the provisioning recipe at the CE for replacing an FE
involves migrating activity of one FE to another.
</t>
</list>
</t>
</section>
<section title="Satisfying Node Overload Prevention Requirement">
<t>
The architecture of this TML defines three separate
channels, one per socket, to be used within any FE-CE setup.
The scheduling design for processing the TML channels
(<xref target="3csched"/>) is strict priority. A fundamental
goal of the strict prioritization is to ensure that more
important work always gets node resources, such as CPU
and bandwidth, ahead of less important work.
</t>
<t>
When a ForCES node CPU is overwhelmed because the incoming
packet rate is higher than it can keep up with, the channel
queues grow and transport congestion subsequently follows.
By virtue of using SCTP, the congestion is propagated back
to the source of the incoming packets and eventually
alleviated.
</t>
<t>
HP channel work is prioritized at the expense of the
MP channel, which in turn is prioritized over the LP channel.
The preferential scheduling applies during node
overload regardless of whether there is transport
congestion. As a result of the preferential work treatment,
the ForCES node achieves a robust, steady processing capacity.
Refer to <xref target="sched-det"/> for details on scheduling.
</t>
<t>
As an example of how the overload prevention works, consider
a scenario where an overwhelming amount of redirect packets
(from outside the NE) coming into the NE may overload the
FE while it has outstanding config work from the CE.
In such a case, the FE, while busy processing config
requests from the CE, ignores the redirect packets
on the LP channel.
If enough redirect packets accumulate, they are dropped,
either because the LP channel threshold is exceeded or because
they are obsoleted. If, on the other hand, the FE has successfully
processed the higher-priority channels and their related work,
then it can proceed to process the LP channel.
As this case demonstrates, the TML implicitly ties transport and
node overload together.
</t>
</section>
<section title="Satisfying Encapsulation Requirement">
<t>
There is no extra encapsulation added by the SCTP TML.
</t>
<t>
In the future, should the need arise, a new SCTP
extension/chunk can be defined to meet
newer ForCES requirements <xref target="RFC4960"/>.
</t>
</section>
</section>
</section> <!--TMLREQ-->
</section>
<section anchor="work" title="SCTP TML Channel Work">
<t>
There are two levels of TML channel work within an NE when a ForCES
node (CE or FE) is connected to multiple other ForCES nodes:
<list style = "numbers">
<t>
NE-level I/O work where a ForCES node (CE or FE)
needs to choose which of the peer nodes to process.
</t>
<t>
Node-level I/O work where a ForCES node handles
the three SCTP TML channels separately for each single
ForCES endpoint.
</t>
</list>
</t>
<t>
NE-level scheduling definition is left up to the implementation and
is considered out of scope for this document.
<xref target="tml-NE"/> briefly discusses some constraints that an
implementor needs to consider.
</t>
<t>
This document provides suggestions on SCTP channel work implementation
in <xref target="tml-work"/>.
</t>
<t>
The FE SHOULD make its channel connections to the
CE in order of increasing priority, i.e., the LP socket first,
followed by the MP socket and ending with the HP socket connection.
The CE, however,
MUST NOT assume any ordering of socket connections from any FE.
</t>
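<t>
The connection ordering above can be sketched as follows. This is an
illustrative sketch only; the helper function stands in for an actual
SCTP connect, and the mapping of the three IANA-requested ports to
channel priorities is an assumption made for this example.
</t>
<figure><artwork><![CDATA[
```python
# Illustrative sketch of the FE-side connection order. The helper
# connect_channel() stands in for a real SCTP connect; the mapping
# of the IANA-requested ports to channel priorities is an assumption
# made for this example only.
CHANNEL_PORTS = [
    ("LP", 6702),  # lowest priority channel connects first
    ("MP", 6701),
    ("HP", 6700),  # highest priority channel connects last
]

def connect_in_priority_order(connect_channel):
    """Connect the three TML channels in increasing priority order."""
    connected = []
    for name, port in CHANNEL_PORTS:
        connect_channel(name, port)
        connected.append(name)
    return connected
```
]]></artwork></figure>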
</section>
<section anchor="IANA" title="IANA Considerations">
<t>
This document requests that IANA reserve SCTP ports
6700, 6701, and 6702.
</t>
</section>
<section anchor="Security" title="Security Considerations">
<!--
<t>
This section is derived from the TML security requirements in
<xref target="FE-PROTO"/>.
</t>
<t>
Because a ForCES PL is used to operate an NE, attacks designed to
confuse, disable, or take information from a ForCES-based NE may be
seen as a prime objective during a network attack.
</t>
<t>
An attacker in a position to inject false messages into a PL
stream can either affect the FE's treatment of the data path
(example by falsifying control data reported as coming from the CE),
or the CE itself (by modifying events or responses reported as
coming from the FE); for this reason, CE and FE node authentication
and TML Message authentication is important.
</t>
<t>
The PL messages may also contain information of value
to an attacker, including information about the configuration of the
network, encryption keys and other sensitive control data,
so care must be taken to confine their visibility to authorized users.
</t>
-->
<t>
The SCTP TML provides the following security services to
the PL level:
<list style = "symbols">
<t>
A mechanism to authenticate ForCES CEs and FEs at transport level
in order to prevent the participation of unauthorized CEs and
unauthorized FEs in the control and data path processing of a ForCES
NE.
</t>
<t>
A mechanism to ensure message authentication
of PL data and headers transferred from the CE to FE (and vice-versa)
in order to prevent the injection of incorrect data into PL messages.
</t>
<t>
A mechanism to ensure the confidentiality of
PL data and headers transferred from the CE to FE (and vice-versa),
in order to prevent disclosure of PL level information transported
via the TML.
</t>
</list>
</t>
<t>
Security choices provided by the TML are made by the operator
and take effect during the pre-association phase of the ForCES
protocol. An operator may choose to use all, some or none of the
security services provided by the TML in a CE-FE connection.
</t>
<t>
When operating in a secured environment, or for other
operational reasons (in some cases, performance concerns),
the operator may turn off all the security functions between the CE and FE.
</t>
<t>
IP Security Protocol (IPsec) <xref target="RFC4301"/> is used
to provide the needed security mechanisms.
</t>
<t>
IPsec is an IP level security scheme
transparent to the higher-layer applications and therefore can provide
security for any transport layer protocol. This gives IPsec the
advantage that it can be used to secure everything between the CE
and FE without expecting the TML implementation to be aware
of the details.
</t>
<t>
The IPsec architecture is designed to provide the message integrity
and message confidentiality outlined in the
TML security requirements (<xref target="FE-PROTO"/>).
Mutual authentication and key exchange are provided by
Internet Key Exchange (IKE) <xref target="RFC4109"/>.
</t>
<!--
</section>
-->
<!--
The security information is very old, find new RFCs and update
(eg IPsec uses RFC 4xxx these days).
We also need to explain what mode of IPsec is going to run;
transport, ESP?
<section title="TML Security Services using IPsec">
<t>
XXXX: Editors note: We should review what RFCs to list as references
(eg IKEv2, ESP etc).
</t>
-->
<section anchor="ipsec" title="IPsec Usage">
<t>
A ForCES FE or CE MUST support the following:
<list style = "symbols">
<t>
Internet Key Exchange (IKE) <xref target="RFC4109"/> with
certificates for endpoint authentication.
</t>
<t>
Transport Mode Encapsulating Security Payload (ESP) <xref target="RFC4303"/>.
</t>
<t>
HMAC-SHA1-96 <xref target="RFC2404"/> for message
integrity protection.
</t>
<t>
AES-CBC with 128-bit keys <xref target="RFC3602"/> for message
confidentiality.
</t>
<t>
Replay protection <xref target="RFC4301"/>.
</t>
</list>
</t>
<t>
It is expected that the CE or FE can be operationally
configured to negotiate other cipher suites and even to use manual keying.
</t>
<section anchor="SAPD" title="SAD and SPD setup">
<t>
To minimize operational configuration, it is recommended that only
the IANA-issued SCTP protocol number (132) be used as a selector
in the Security Policy Database (SPD) for ForCES. In such a case,
only a single SPD entry and a single SAD entry are needed.
</t>
<t>
It should be straightforward to extend such a policy to alternatively
use the 3 SCTP TML port numbers as SPD selectors, but
this choice will require an increased number of SPD entries.
</t>
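<t>
The trade-off between the two policies can be illustrated with a toy
model of SPD entries. This is not a real SPD representation; the
field names are invented for the example.
</t>
<figure><artwork><![CDATA[
```python
IPPROTO_SCTP = 132  # IANA-assigned protocol number for SCTP

# Single-entry policy: select all ForCES TML traffic by protocol.
spd_by_protocol = [
    {"selector": {"protocol": IPPROTO_SCTP}, "action": "protect"},
]

# Alternative policy: one entry per TML port, tripling the entries.
spd_by_port = [
    {"selector": {"protocol": IPPROTO_SCTP, "port": p}, "action": "protect"}
    for p in (6700, 6701, 6702)
]

def matches(entry, packet):
    """An SPD entry matches if every selector field matches."""
    return all(packet.get(k) == v for k, v in entry["selector"].items())
```
]]></artwork></figure>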
<t>
In scenarios where multiple IP addresses are used within a
single association, and there is desire to configure different
policies on a per IP address,
then it is recommended to follow <xref target="RFC3554"/>
</t>
<!--
-->
</section>
</section>
</section>
<!--
<section anchor="Manageability" title="Manageability Considerations">
<t>TBA</t>
Talk about TML configuration here ..
</section>
-->
<section anchor="Acknowledgements" title="Acknowledgements">
<t>
The authors would like to thank Joel Halpern, Michael Tuexen,
Randy Stewart and Evangelos Haleplidis for engaging us in
discussions that have made this draft better.
</t>
</section>
</middle>
<!--
XXX: The model, proto, tmlapi drafts have changed ownership and
release dates - please update
-->
<back>
<references title="Normative References">
<!--
-->
&rfc2404;
&rfc3554;
&rfc3602;
&rfc4301;
&rfc4303;
&rfc4109;
&rfc4960;
&rfc5061;
</references>
<references title="Informative References">
&rfc3654;
&rfc3746;
<reference anchor="FE-MODEL">
<front>
<title>ForCES Forwarding Element Model</title>
<author initials="J." surname="Halpern" fullname="J. Halpern"></author>
<author initials="J." surname="Hadi Salim" fullname="Jamal Hadi Salim"> </author>
<date month="October" year="2008" />
</front>
</reference>
<reference anchor="FE-PROTO">
<front>
<title>ForCES Protocol Specification</title>
<author initials="A." surname="Doria (Ed.)" fullname="Avri Doria"></author>
<author initials="R." surname="Haas (Ed.)" fullname="Robert Haas"></author>
<author initials="J." surname="Hadi Salim (Ed.)" fullname="Jamal Hadi Salim"> </author>
<author initials="H." surname="Khosravi (Ed.)" fullname="Hormuzd M Khosravi"> </author>
<author initials="W. " surname="M. Wang (Ed.)" fullname="Weiming Wang"> </author>
<author initials="L. " surname="Dong" fullname="Ligang Dong"> </author>
<author initials="R. " surname="Gopal" fullname="Ram Gopal"> </author>
<date month="November" year="2008" />
</front>
</reference>
<!--
<reference anchor="TML-API">
<front>
<title>ForCES Transport Mapping Layer (TML) Service Primitives</title>
<author initials="W. " surname="M. Wang" fullname="Weiming Wang"> </author>
<author initials="J." surname="Hadi Salim" fullname="Jamal Hadi Salim"> </author>
<author initials="A." surname="Audu" fullname="Alex Audu"> </author>
<date month="Feb." year="2007" />
</front>
</reference>
-->
<reference anchor="SCTP-API">
<front>
<title>
Sockets API Extensions for Stream Control Transmission Protocol
(SCTP)
</title>
<author initials="R. " surname="Stewart" fullname="Randall R. Stewart"> </author>
<author initials="K." surname="Poon" fullname="Kacheong Poon"> </author>
<author initials="M." surname="Tuexen" fullname="Michael Tuexen"> </author>
<author initials="V." surname="Yasevich" fullname="Vladislav Yasevich"> </author>
<author initials="P." surname="Lei" fullname="Peter Lei"> </author>
<date month="February" year="2009" />
</front>
</reference>
&rfc3768;
</references>
<section anchor="tml-work" title="SCTP TML Channel Work Implementation">
<t>
As mentioned in <xref target="work"/>,
there are two levels of TML channel work within an NE when a ForCES
node (CE or FE) is connected to multiple other ForCES nodes:
<list style = "numbers">
<t>
NE-level I/O work where a ForCES node (CE or FE)
needs to choose which of the peer nodes to process.
</t>
<t>
Node-level I/O work where a ForCES node handles
the three SCTP TML channels separately for each single
ForCES endpoint.
</t>
</list>
</t>
<t>
NE-level scheduling definition is left up to the implementation and
is considered out of scope for this document.
<xref target="tml-NE"/> briefly discusses some constraints that an
implementor needs to consider.
</t>
<t>
This document, in particular <xref target="tml-init"/>,
<xref target="sched-det"/> and <xref target="tml-fin"/>, discusses
details of node-level I/O work.
</t>
<section anchor="tml-init" title="SCTP TML Channel Initialization">
<t>
As discussed in <xref target="work"/>, the
FE SHOULD make its socket connections to the
CE in order of increasing priority, i.e., the LP socket first,
followed by the MP socket and ending with the HP socket connection.
The CE, however,
MUST NOT assume any ordering of socket connections from any FE.
<xref target="tmlboot"/> has more details on the expected initialization
of SCTP channel work.
</t>
</section>
<section anchor="sched-det" title="Channel work scheduling">
<t>
This section provides high level details of the scheduling view
of the SCTP TML core (<xref target="sctptml_channs"/>). A practical
scheduler implementation takes care of many small details (such
as timers, work quanta, etc.) that are not described in this
document and are left to the implementor.
</t>
<t>
The scheduling scheme described here couples the CE(s) and FE(s)
so as to tie node overload to transport congestion.
The design intent is to provide the
highest possible robust work throughput for the NE under
any network or processing congestion.
</t>
<!--
<t>
XXX (Editorial note): We need to solicit feedback whether it
would help implementors if we publish algorithm for the CE/FE
scheduling in the form of pseudo-code.
</t>
-->
<section anchor="sched-det-FE" title="FE Channel work scheduling">
<t>
The FE scheduler needs to process I/O work in the following priority order:
<list style = "numbers">
<t>
The HP channel I/O in the following priority order:
<list style = "numbers">
<t>
Transmitting back to the CE any outstanding
result of executed work via the HP channel transmit path.
</t>
<t>
Taking new incoming work from the CE which creates
ForCES work to be executed by the FE.
</t>
</list>
</t>
<t>
ForCES events which result in transmission of unsolicited
ForCES packets to the CE via the MP channel.
</t>
<t>
Incoming Redirect work in the form of control packets that come
from the CE via LP channel. After redirect processing, these packets
get sent out on external (to the NE) interface.
</t>
<t>
Incoming Redirect work in the form of control packets that come from
other NEs via external (to the NE) interfaces. After some
processing, such packets are sent to the CE.
</t>
</list>
</t>
<t>
It is worth emphasizing again that the SCTP TML
processes the channel work in strict priority order. For example,
as long as there are messages to send to the CE on the HP channel,
they are processed first, until none are left, before the
next priority work is processed (which is to read new messages
incoming from the CE on the HP channel).
</t>
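<t>
The strict priority selection described above can be sketched as
follows. This is an illustrative sketch of the selection step only;
the channel names mirror the priorities above, and a real scheduler
would also handle timers, congestion flags, and work quanta.
</t>
<figure><artwork><![CDATA[
```python
def next_work(channels, order=("HP", "MP", "LP")):
    """Return the next work item, always taken from the highest
    priority non-empty channel, or None when all are empty.
    `channels` maps a channel name to its list of pending items."""
    for name in order:
        if channels[name]:
            return (name, channels[name].pop(0))
    return None
```
]]></artwork></figure>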
<!--
<t>
The following pseudo-code provides a flow of the scheduling
work for the FE related to the three channels.
</t>
<artwork><![CDATA[
// we have an IO event
HP_process:
if HP_tx_congested is set {
if IO event is "HP channel transmit available" {
unset HP_tx_congested flag
goto send_HP_channel
} else {
DONE
}
} else { // no previous congestion ..
if IO event is "HP channel received" {
read HP channel
process ForCES work from read data
} else {
goto MP_process
}
}
send_HP_Channel:
if we have HP ForCES packets to send to CE {
write one or more packet to HP channel
if we failed to write to HP channel {
set HP_tx_congested flag
DONE
}
goto HP_process
}
//
// Process MP work
MP_process:
if MP_tx_congested is set {
if IO event is "MP channel transmit available" {
unset MP_tx_congested flag
goto send_MP_channel
} else {
DONE
}
} else { // no previous congestion ..
if there are ForCES events to tell CE about {
compose packets from ForCES events
goto send_MP_channel
} else {
goto redirect_process
}
}
send_MP_Channel:
if we have MP ForCES packets to send to CE {
write one or more packet to MP channel
if we failed to write to MP channel {
set MP_tx_congested flag
DONE
}
goto redirect_process
}
//
// Process redirect work
redirect_process:
if external_tx_congested is set {
if IO event is "external channel transmit available" {
unset flag external_tx_congested
} else {
DONE
}
}
send_external_channel:
if we have redirect packets from CE {
process (strip/add headers etc)
write one or more packet to external channel(s)
if we failed to write to external channel {
set external_tx_congested flag
DONE
}
}
//
LP_process:
if LP_tx_congested is set {
if IO event is "LP channel transmit available" {
unset LP_tx_congested flag
goto send_LP_channel
} else {
DONE
}
} else { // no previous congestion ..
if IO event is "external channel received" {
read external channel
process packets received
} else {
DONE
}
}
send_LP_Channel:
if we have redirect packets to send to CE {
process (add headers etc)
write one or more packet to LP channel
if we failed to write to LP channel {
set LP_tx_congested flag
DONE
}
} else {
goto HP_process
}
]]></artwork>
-->
</section>
<section anchor="sched-det-CE" title="CE Channel work scheduling">
<t>
The CE scheduler needs to process the following, in priority order:
<list style = "numbers">
<t>
The HP channel I/O in the following priority order:
<list style = "numbers">
<t>
Process incoming responses to requests of work it made to the FE(s).
</t>
<t>
Transmitting any outstanding HP work it needs for the FE(s)
to complete.
</t>
</list>
</t>
<t>
Incoming ForCES events from the FE(s) via the MP channel.
</t>
<t>
Outgoing Redirect work in the form of control packets that get sent from
the CE via LP channel destined to external (to the NE) interface
on FE(s).
</t>
<t>
Incoming Redirect work in the form of control packets that come from
other NEs via external (to the NE) interfaces on the FE(s).
</t>
</list>
</t>
<t>
It is worth repeating for emphasis that the SCTP TML
processes the channel work in strict priority order. For example,
if there are messages incoming from an FE on the HP channel,
they are processed first, until none are left, before the
next priority work is processed, which is to transmit any
outstanding HP channel messages going to the FE.
</t>
<!--
<t>
The following pseudo-code provides a flow of the scheduling
work for the CE related to the three channels.
</t>
<artwork>
<![CDATA[
// we have an IO event
HP_process:
if IO event is "HP channel received" {
read HP channel
process ForCES work from read data
goto HP_process
}
if HP_tx_congested is set {
if IO event is "HP channel transmit available" {
unset HP_tx_congested flag
goto send_HP_channel
} else {
DONE
}
}
send_HP_Channel:
if we have HP ForCES packets to send to FE {
write one or more packet to HP channel for FE
if we failed to write to HP channel {
set HP_tx_congested flag
DONE
}
}
//
// Process MP work
MP_process:
if IO event is "MP channel received" {
read MP channel
process ForCES work from read data
goto HP_process
}
//
LP_process:
if LP_tx_congested is set {
if IO event is "LP channel transmit available" {
unset LP_tx_congested flag
goto send_LP_channel
} else {
DONE
}
}
send_LP_Channel:
if we have redirect packets to send to FE {
process (add headers etc)
write one or more packet to LP channel
if we failed to write to LP channel {
set LP_tx_congested flag
DONE
}
} else {
if IO event is "LP channel received" {
read LP channel
process packets received
}
}
goto HP_process
]]>
</artwork>
-->
</section>
</section>
<section anchor="tml-fin" title="SCTP TML Channel Termination">
<t>
<xref target="tmlshut"/> describes a controlled disassociation
of the FE from the NE.
</t>
<t>
It is also possible for connectivity to be lost between the FE and
CE on one or more sockets. In cases where SCTP multi-homing features
are used for path availability, the disconnection of a socket
will only occur if all paths are unreachable; otherwise, SCTP will
ensure reachability. In the situation of a total connectivity loss
on even one SCTP socket, the FE and CE SHOULD
assume a state equivalent to a ForCES Association Teardown having
been issued and follow the sequence described in <xref target="tmlshut"/>.
</t>
<t>
A CE could also disconnect sockets to an FE to indicate an
"emergency teardown". The "emergency teardown" may be necessary in
cases when a CE needs to disconnect an FE but knows that the FE
has a large backlog of outstanding commands (some of which the FE
has not yet begun to process).
By virtue of the CE closing the connections, the FE is immediately
and asynchronously notified and does not have to process any
outstanding commands from the CE.
</t>
</section>
<section anchor="tml-NE" title="SCTP TML NE level channel scheduling">
<t>
In handling NE-level I/O work, an implementation needs to be both
fair and robust across peer ForCES nodes.
</t>
<t>
Fairness is desired
so that each peer node makes progress across the NE. For the sake of
illustration, consider two FEs connected to a CE; whereas one FE
has a few HP messages that need to be processed by the CE, another
may have an unbounded number of HP messages. The scheduling scheme may
use a quota system to ensure that the second FE does not
hog the CE's cycles.
</t>
<t>
Robustness is desired so that the NE does not succumb to a DoS
attack from hostile entities and always achieves a maximum stable
workload processing level. For the sake of illustration consider
again two FEs connected to a CE. Consider FE1 as having a large
number of HP and MP messages and FE2 having a large number of
MP and LP messages. The scheduling scheme needs to ensure that
while FE1 always gets its messages processed, at some point
FE2's messages must be allowed to be processed. A promotion- and
preemption-based scheduling scheme could be used by the CE to resolve
this issue.
</t>
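<t>
The quota scheduling mentioned in the fairness illustration above can
be sketched as follows. This is an illustrative sketch only; the
quota value and data shapes are invented for the example.
</t>
<figure><artwork><![CDATA[
```python
def quota_round(fes, quota):
    """One NE-level scheduling round: each peer FE may have at most
    `quota` of its pending messages processed, so one flooding FE
    cannot starve the others of CE cycles."""
    processed = []
    for name, pending in fes.items():
        take = min(quota, len(pending))
        processed.extend((name, msg) for msg in pending[:take])
        del pending[:take]
    return processed
```
]]></artwork></figure>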
</section>
</section>
<section anchor="serviceint" title="Service Interface">
<!--
<t>
XXX - Editorial Note and repeated emphasis:
There is some concern (and confusion) about defining
APIs in ForCES. So at the moment the future of <xref target="TML-API"/>
is unknown and we will remove references to it in future revisions of
this document.
</t>
-->
<t>
This section provides a high-level view of the service interfaces
between the FEM/CEM and the TML, between the PL and the TML, and
between local and remote TMLs.
The intent of this interface discussion is to provide general guidelines.
The implementer is expected to handle the details and may even
follow a different approach if needed.
</t>
<t>
The theory of operation for the PL-TML service is as follows:
<list style = "numbers">
<t>
The PL starts up and bootstraps the TML. The end result of
a successful TML bootstrap is that the CE TML and the FE TML
connect to each other at the transport level.
</t>
<t>
Sending and reception of the PL level messages commences after
a successful TML bootstrap.
The PL uses send and receive PL-TML interfaces to communicate to
its peers. The TML is agnostic to the nature of the messages being
sent or received.
The first message exchanges that happen are to establish the
ForCES association. Subsequent messages may be
unsolicited events from the FE PL, control message redirects
between the CE and FE, or configuration messages from the
CE to the FE and their responses flowing from the FE to the CE.
</t>
<t>
The PL does a shutdown of the TML after terminating ForCES
association.
</t>
</list>
</t>
<section anchor="tmlboot" title="TML Boot-strapping">
<t>
<xref target="bootstrap"/> illustrates a flow for the TML
bootstrapped by the PL.
</t>
<t>
When the PL starts up (possibly after some internal initialization),
it boots up the TML. The TML first interacts with the FEM/CEM and
acquires the necessary TML parameterization (<xref target="params"/>).
Next the TML uses the information it retrieved from the FEM/CEM
interface to initialize itself.
</t>
<t>
The TML on the FE proceeds to connect the 3 channels to the CE.
The socket interface is used for each of the channels. The TML
continues to re-try the connections to the CE until all 3 channels
are connected. It is advisable that the number of connection retry
attempts and the time between retries also be configurable via the FEM.
On failure to connect one or more channels, and after the configured
retry threshold is exceeded, the TML returns an
appropriate failure indication to the PL.
On success (as shown in <xref target="bootstrap"/>), a success indication
is presented to the PL.
</t>
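<t>
The connection retry behavior described above can be sketched as
follows. This is an illustrative sketch only; the function names are
invented, and in a real TML the retry count and interval would come
from the FEM configuration.
</t>
<figure><artwork><![CDATA[
```python
def connect_with_retries(try_connect, max_retries, on_failure):
    """Attempt a channel connection up to max_retries times and
    report failure to the PL once the threshold is exceeded."""
    for _attempt in range(max_retries):
        if try_connect():
            return True
        # a real TML would sleep here for the FEM-configured interval
    on_failure("could not connect after %d attempts" % max_retries)
    return False
```
]]></artwork></figure>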
<figure anchor = "bootstrap" title="SCTP TML Bootstrapping">
<preamble> </preamble>
<artwork><![CDATA[
FE PL FE TML FEM CEM CE TML CE PL
| | | | | |
| | | | | Bootup |
| | | | |<-------------------|
| Bootup | | | | |
|----------->| | |get CEM info| |
| |get FEM info | |<-----------| |
| |------------>| ~ ~ |
| ~ ~ |----------->| |
| |<------------| | |
| | |-initialize TML |
| | |-create the 3 chans.|
| | | to listen to FEs |
| | | |
| |-initialize TML |Bootup success |
| |-create the 3 chans. locally |------------------->|
| |-connect 3 chans. remotely | |
| |------------------------------>| |
| ~ ~ - FE TML connected ~
| ~ ~ - FE TML info init ~
| | channels connected | |
| |<------------------------------| |
| Bootup | | |
| succeeded | | |
|<-----------| | |
| | | |
]]></artwork>
</figure>
<t>
On the CE side, things are slightly different.
After initializing from the CEM, the TML on the CE
proceeds to initialize the 3 channels and listen for remote connections
from the FEs. The success or failure indication is passed on to the CE
PL level (in the same manner as on the FE).
</t>
<t>
Post boot-up, the CE TML waits for connections from the FEs.
Upon a successful connection by an FE, the CE TML
level keeps track of the transport level details of the FE.
Note that at this stage only the transport level connection has been
established; the ForCES level association follows, using the
send/receive PL-TML interfaces
</t>
</section>
<section title="TML Shutdown" anchor="tmlshut">
<t>
<xref target="FEshutdown"/> shows an example of an FE shutting
down the TML. It is assumed at this point that the ForCES
Association Teardown has been issued by the CE.
</t>
<t>
When the FE PL issues a shutdown to its TML for a specific PL ID, the
TML releases all the channel connections to the CE. This is achieved
by closing the sockets used to communicate to the CE.
</t>
<figure anchor = "FEshutdown" title="FE Shutting down">
<preamble> </preamble>
<artwork><![CDATA[
FE PL FE TML CE TML CE PL
| | | |
| Shutdown | | |
|----------->| | |
| |-disconnect 3 chans. | |
| |------------------------>| |
| | | |
| | |-FE TML info cleanup|
| | |-optionally tell PL |
| | |------------------->|
| |- clean up any state of | |
| | channels disconnected | |
| | | |
| |<------------------------| |
| Shutdown | | |
| succeeded | | |
|<-----------| | |
| | | |
]]></artwork>
</figure>
<t>
On the CE side, a TML level disconnection results in possible
cleanup of the FE state. Optionally, depending on the implementation,
there may be a need to inform the PL about the TML disconnection.
</t>
</section>
<section title="TML Sending and Receiving" anchor="sndrcv">
<t>
The TML is agnostic to the nature of the PL message it
delivers to the remote TML (which subsequently delivers the
message to its PL). <xref target="sndrcvflw"/> shows an
example of a message exchange originated at the FE and sent to
the CE (such as a ForCES association message) which illustrates
all the necessary service interfaces for sending and receiving.
</t>
<t>
When the FE PL sends a message to the TML, the TML is expected
to pick one of HP/MP/LP channels and send out the ForCES message.
</t>
<figure anchor = "sndrcvflw" title="Send and Recv Flow">
<preamble> </preamble>
<artwork><![CDATA[
FE PL FE TML CE TML CE PL
| | | |
|PL send | | |
|----------->| | |
| | | |
| |-Format msg. | |
| |-pick channel | |
| |-TML Send | |
| |------------->| |
| | |-TML Receive on chan. |
| | |-decapsulate |
| | |- mux to PL/PL recv |
| | |--------------------->|
| | | ~
| | | ~ PL Process
| | | ~
| | | PL send |
| | |<---------------------|
| | |-Format msg. for send |
| | |-pick chan to send on |
| | |-TML send |
| |<-------------| |
| |-TML Receive | |
| |-decapsulate | |
| |-mux to PL | |
| PL Recv | | |
|<---------- | | |
| | | |
]]></artwork>
</figure>
<t>
When the CE TML receives the ForCES message on the channel it was sent on,
it demultiplexes the message to the CE PL.
</t>
<t>
The CE PL, after some processing (in this example, dealing with the
FE's association), sends the response to the TML. As in the
FE PL case, the CE TML picks the channel to send on before sending.
</t>
<t>
The processing of the ForCES message upon arriving at the FE TML, and
its delivery to the FE PL, is similar to the CE side equivalent shown
above in <xref target="sndrcvflw"/>.
</t>
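<t>
The "pick channel" step in the flow above can be sketched as follows.
The mapping reflects the channel roles described earlier in this
document (commands and their responses on HP, unsolicited events on
MP, redirects on LP); the message-kind names are illustrative only.
</t>
<figure><artwork><![CDATA[
```python
# Channel roles as described in this document: commands and their
# responses on HP, unsolicited events on MP, redirects on LP.
# The message-kind names are invented for this example.
CHANNEL_FOR_KIND = {
    "association": "HP",
    "config": "HP",
    "config-response": "HP",
    "event": "MP",
    "redirect": "LP",
}

def pick_channel(kind):
    """The 'pick channel' step from the send flow above."""
    return CHANNEL_FOR_KIND[kind]
```
]]></artwork></figure>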
</section>
</section>
</back>
</rfc>