<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<?rfc autobreaks="yes"?>
<rfc category="info" docName="draft-ietf-avtext-rtp-grouping-taxonomy-02"
ipr="trust200902">
<front>
<title abbrev="RTP Grouping Taxonomy">A Taxonomy of Grouping Semantics and
Mechanisms for Real-Time Transport Protocol (RTP) Sources</title>
<author fullname="Jonathan Lennox" initials="J." surname="Lennox">
<organization abbrev="Vidyo">Vidyo, Inc.</organization>
<address>
<postal>
<street>433 Hackensack Avenue</street>
<street>Seventh Floor</street>
<city>Hackensack</city>
<region>NJ</region>
<code>07601</code>
<country>US</country>
</postal>
<email>jonathan@vidyo.com</email>
</address>
</author>
<author fullname="Kevin Gross" initials="K." surname="Gross">
<organization abbrev="AVA">AVA Networks, LLC</organization>
<address>
<postal>
<street/>
<city>Boulder</city>
<region>CO</region>
<country>US</country>
</postal>
<email>kevin.gross@avanw.com</email>
</address>
</author>
<author fullname="Suhas Nandakumar" initials="S" surname="Nandakumar">
<organization>Cisco Systems</organization>
<address>
<postal>
<street>170 West Tasman Drive</street>
<city>San Jose</city>
<region>CA</region>
<code>95134</code>
<country>US</country>
</postal>
<email>snandaku@cisco.com</email>
</address>
</author>
<author fullname="Gonzalo Salgueiro" initials="G" surname="Salgueiro">
<organization>Cisco Systems</organization>
<address>
<postal>
<street>7200-12 Kit Creek Road</street>
<city>Research Triangle Park</city>
<region>NC</region>
<code>27709</code>
<country>US</country>
</postal>
<email>gsalguei@cisco.com</email>
</address>
</author>
<author fullname="Bo Burman" initials="B." surname="Burman">
<organization>Ericsson</organization>
<address>
<postal>
<street>Kistavagen 25</street>
<city>SE-164 80 Kista</city>
<country>Sweden</country>
</postal>
<phone>+46 10 714 13 11</phone>
<email>bo.burman@ericsson.com</email>
</address>
</author>
<date day="27" month="June" year="2014"/>
<area>Real Time Applications and Infrastructure (RAI)</area>
<keyword>I-D</keyword>
<keyword>Internet-Draft</keyword>
<!-- TODO: more keywords-->
<abstract>
<t>The terminology about, and associations among, Real-Time Transport
Protocol (RTP) sources can be complex and somewhat opaque. This document
describes a number of existing and proposed relationships among RTP
sources, and attempts to define common terminology for discussing
protocol entities and their relationships.</t>
</abstract>
</front>
<middle>
<section anchor="introduction" title="Introduction">
<t>The existing taxonomy of sources in RTP is often regarded as
confusing and inconsistent. Consequently, a deep understanding of how
the different terms relate to each other becomes a real challenge.
Frequently cited examples of this confusion are (1) how different
protocols that make use of RTP use the same terms to signify different
things and (2) how the complexities addressed at one layer are often
glossed over or ignored at another.</t>
<t>This document attempts to provide some clarity by reviewing the
semantics of various aspects of sources in RTP. As an organizing
mechanism, it approaches this by describing various ways that RTP
sources can be grouped and associated together.</t>
<t>All non-specific references to ControLling mUltiple streams for
tElepresence (CLUE) in this document map to <xref
target="I-D.ietf-clue-framework"/> and all references to Web Real-Time
Communications (WebRTC) map to <xref
target="I-D.ietf-rtcweb-overview"/>.</t>
</section>
<section title="Concepts">
<t>This section defines concepts that serve to identify and name various
transformations and streams in a given RTP usage. For each concept, an
attempt is made to list any alternate definitions and usages that
co-exist today, along with various characteristics that further describe
the concept. These concepts are divided into two categories: one related
to the chain of streams and transformations that media can be subject
to, and the other for entities involved in the communication.</t>
<section title="Media Chain">
<t>In the context of this memo, Media is a sequence of synthetic stimuli or
<xref target="physical-stimulus">Physical Stimuli</xref> (sound
waves, photons, key-strokes), represented in digital form. Synthesized
Media is typically generated directly in the digital domain.</t>
<t>This section contains the concepts that can be involved in taking
Media at a sender side and transporting it to a receiver, which may
recover a sequence of physical stimuli. This chain of concepts is of
two main types: streams and transformations. Streams are time-based
sequences of samples of the physical stimulus in various
representations, while transformations change the representation of
the streams in some way.</t>
<t>The examples below are basic, and it is important to keep in
mind that this conceptual model enables more complex usages. Some will
be discussed further in later sections of this document. In general,
the following applies to this model:<list style="symbols">
<t>A transformation may have zero or more inputs and one or more
outputs.</t>
<t>A stream is of some type.</t>
<t>A stream has one source transformation and one or more sink
transformations (with the exception of <xref
target="physical-stimulus">Physical Stimulus</xref> that may lack
source or sink transformation).</t>
<t>Streams can be forwarded from a transformation output to any
number of inputs on other transformations that support that
type.</t>
<t>If the output of a transformation is sent to multiple
transformations, those streams will be identical; it takes a
transformation to make them different.</t>
<t>There are no formal limitations on how streams are connected to
transformations; this may include loops if required by a
particular transformation.</t>
</list>It is also important to remember that this is a conceptual
model. Thus, real-world implementations may look different and have a
different structure.</t>
<t>To provide a basic understanding of the relationships in the chain,
we first introduce the concepts for the <xref
target="fig-sender-chain">sender side</xref>. This covers the path from
physical stimulus until media packets are emitted onto the network.</t>
<figure align="center" anchor="fig-sender-chain"
title="Sender Side Concepts in the Media Chain">
<artwork align="center"><![CDATA[ Physical Stimulus
|
V
+--------------------+
| Media Capture |
+--------------------+
|
Raw Stream
V
+--------------------+
| Media Source |<- Synchronization Timing
+--------------------+
|
Source Stream
V
+--------------------+
| Media Encoder |
+--------------------+
|
Encoded Stream +-----------+
V | V
+--------------------+ | +--------------------+
| Media Packetizer | | | Media Redundancy |
+--------------------+ | +--------------------+
| | |
+------------+ Redundancy RTP Stream
Source RTP Stream |
V V
+--------------------+ +--------------------+
| Media Transport | | Media Transport |
+--------------------+ +--------------------+
]]></artwork>
</figure>
<t>In <xref target="fig-sender-chain"/> we have included a branched
chain to cover the concepts for using redundancy to improve the
reliability of the transport. The Media Transport concept is an
aggregate that is decomposed below in <xref
target="media-transport"/>.</t>
<t>Below we review a <xref target="fig-receiver-chain">receiver media
chain</xref> matching the sender side, to look at the inverse
transformations and their attempts to recover streams identical to
those in the sender chain. Note that the streams out of a reverse
transformation, like the Source Stream out of the Media Decoder, are in
many cases not the same as the corresponding ones on the sender side;
thus they are prefixed with "Received" to denote a potentially
modified version. The reason they differ is that some transformations
are irreversible. For example, lossy source coding in the Media Encoder
prevents the Source Stream out of the Media Decoder from being
identical to the one fed into the Media Encoder. Other reasons include
packet loss or late loss in the Media Transport transformation that
even Media Repair, if used, fails to repair. It should be noted that
some transformations are not always present, like Media Repair, which
cannot operate without Redundancy RTP Streams.</t>
<figure align="center" anchor="fig-receiver-chain"
title="Receiver Side Concepts of the Media Chain">
<artwork align="center"><![CDATA[+--------------------+ +--------------------+
| Media Transport | | Media Transport |
+--------------------+ +--------------------+
| |
Received RTP Stream Received Redundancy RTP Stream
| |
| +-------------------+
V V
+--------------------+
| Media Repair |
+--------------------+
|
Repaired RTP Stream
V
+--------------------+
| Media Depacketizer |
+--------------------+
|
Received Encoded Stream
V
+--------------------+
| Media Decoder |
+--------------------+
|
Received Source Stream
V
+--------------------+
| Media Sink |--> Synchronization Information
+--------------------+
|
Received Raw Stream
V
+--------------------+
| Media Renderer |
+--------------------+
|
V
Physical Stimulus
]]></artwork>
</figure>
<section anchor="physical-stimulus" title="Physical Stimulus">
<t>The physical stimulus is a physical event that can be measured
and converted to digital form by an appropriate sensor or
transducer. This includes sound waves making up audio, photons in a
visible light field, or other excitations or interactions with
sensors, like keystrokes on a keyboard.</t>
</section>
<section anchor="media-capture" title="Media Capture">
<t>Media Capture is the process of transforming the <xref
target="physical-stimulus">Physical Stimulus</xref> into digital
Media using an appropriate sensor or transducer. The Media Capture
performs a digital sampling of the physical stimulus, usually
periodically, and outputs this in some representation as a <xref
target="raw-stream">Raw Stream</xref>. Due to its periodic sampling,
or at least its timed asynchronous events, this data forms a stream of
media data. The Media Capture is normally instantiated in some type of
device, i.e., a media capture device. Examples of different types of
media capturing devices are digital cameras, microphones connected to
A/D converters, or keyboards.</t>
<t>Characteristics:<list style="symbols">
<t>A Media Capture is identified either by hardware/manufacturer
ID or via a session-scoped device identifier as mandated by the
application usage.</t>
<t>A Media Capture can generate an <xref
target="encoded-stream">Encoded Stream </xref> if the capture
device supports such a configuration.</t>
</list></t>
</section>
<section anchor="raw-stream" title="Raw Stream">
<t>The time progressing stream of digitally sampled information,
usually periodically sampled and provided by a <xref
target="media-capture">Media Capture</xref>. A Raw Stream can also
contain synthesized Media that may not require any explicit Media
Capture, since it is already in an appropriate digital form.</t>
</section>
<section anchor="media-source" title="Media Source">
<t>A Media Source is the logical source of a reference-clock-synchronized,
time progressing, digital media stream, called a <xref
target="source-stream">Source Stream</xref>. This transformation
takes one or more <xref target="raw-stream">Raw Streams</xref> and
provides a Source Stream as output. This output has been
synchronized with some reference clock, even if just a local system
wall clock.</t>
<t>The output can be of different types. One type is directly
associated with a particular Media Capture's Raw Stream. Others are
more conceptual sources, like an <xref
target="fig-media-source-mixer">audio mix of multiple Raw
Streams</xref>, a mix of the three loudest inputs as measured by
speech activity, or a selection of a particular video based
on the current speaker, i.e., typically based on other Media
Sources.</t>
<figure align="center" anchor="fig-media-source-mixer"
title="Conceptual Media Source in form of Audio Mixer">
<artwork align="center"><![CDATA[ Raw Raw Raw
Stream Stream Stream
| | |
V V V
+--------------------------+
| Media Source |<-- Reference Clock
| Mixer |
+--------------------------+
|
V
Source Stream
]]></artwork>
</figure>
<t>Characteristics:<list style="symbols">
<t>At any point, it can represent a physical captured source or
conceptual source.</t>
<!--MW: Put back a discussion of relation between Media Capture and Media sources?-->
</list></t>
</section>
<section anchor="source-stream" title="Source Stream">
<t>A time progressing stream of digital samples that has been
synchronized with a reference clock and comes from a particular <xref
target="media-source">Media Source</xref>.</t>
</section>
<section anchor="media-encoder" title="Media Encoder">
<t>A Media Encoder is a transform that is responsible for encoding
the media data from a <xref target="source-stream">Source
Stream</xref> into another representation, usually more compact,
that is output as an <xref target="encoded-stream">Encoded
Stream</xref>.</t>
<t>The Media Encoder step commonly includes pre-encoding
transformations, such as scaling, resampling, etc. The Media Encoder
can have a significant number of configuration options that affect
the properties of the Encoded Stream. These include properties such
as bit-rate, start points for decoding, resolution, bandwidth, and
other fidelity-affecting properties. The actual codec used, not only
its parameters, is also an important factor in many communication
systems.</t>
<t>Scalable Media Encoders deserve special mention, as they produce
multiple outputs that are potentially of different types. A scalable
Media Encoder takes one input Source Stream and encodes it into
multiple output streams of two different types: at least one Encoded
Stream that is independently decodable, and one or more <xref
target="dependent-stream">Dependent Streams</xref> that require at
least one Encoded Stream, and zero or more Dependent Streams, to be
decodable. A Dependent Stream's dependency is one of the
grouping relations this document discusses further in <xref
target="lms"/>.</t>
<figure align="center" anchor="fig-scalable-media-encoder"
title="Scalable Media Encoder Input and Outputs">
<artwork align="center"><![CDATA[ Source Stream
|
V
+--------------------------+
| Scalable Media Encoder |
+--------------------------+
| | ... |
V V V
Encoded Dependent Dependent
Stream Stream Stream
]]></artwork>
</figure>
<t>There are also other variants of encoders, like so-called
Multiple Description Coding (MDC). Such a Media Encoder produces
multiple independent, and thus individually decodable, Encoded Streams
that can be combined into a Received Source Stream that is a better
representation of the original Source Stream than any single Encoded
Stream provides.</t>
<t>Characteristics:<list style="symbols">
<t>A Media Source can be multiply encoded by different Media
Encoders to provide various encoded representations.</t>
</list></t>
</section>
<section anchor="encoded-stream" title="Encoded Stream">
<t>A stream of time synchronized encoded media that can be
independently decoded.</t>
<t>Characteristics:<list style="symbols">
<t>Due to temporal dependencies, an Encoded Stream may have
limitations in where decoding can be started. These entry
points, for example Intra frames from a video encoder, may
require identification and their generation may be event based
or configured to occur periodically.</t>
</list></t>
</section>
<section anchor="dependent-stream" title="Dependent Stream">
<t>A stream of time synchronized encoded media fragments that depend
on one or more <xref target="encoded-stream">Encoded
Streams</xref>, and zero or more Dependent Streams, to be
decodable.</t>
<t>Characteristics:<list style="symbols">
<t>Each Dependent Stream has a set of dependencies. These
dependencies must be understood by the parties in a multi-media
session that intend to use a Dependent Stream.</t>
</list></t>
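<t>As an illustration of the dependency characteristic above, the following sketch (hypothetical names and a hypothetical dependency map, not part of any specification) checks whether a Dependent Stream is decodable given the set of streams actually received:</t>
<figure align="center" title="Dependency Check Sketch (Illustrative)">
<artwork align="left"><![CDATA[
```python
# Hypothetical dependency map for a scalable encoding with one
# base layer (an Encoded Stream) and two enhancement layers
# (Dependent Streams).
DEPS = {"base": [], "enh1": ["base"], "enh2": ["base", "enh1"]}

def decodable(name, deps, received):
    """A stream is decodable only if it was received and every
    stream it depends on is itself decodable."""
    if name not in received:
        return False
    return all(decodable(d, deps, received) for d in deps[name])
```
]]></artwork>
</figure>
<t>With all three streams received, "enh2" is decodable; losing "enh1" makes "enh2" undecodable even though its own packets arrived.</t>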
</section>
<section anchor="media_packetizer" title="Media Packetizer">
<t>The transformation that takes one or more <xref
target="encoded-stream">Encoded</xref> or <xref
target="dependent-stream">Dependent Streams</xref> and puts their
content into one or more sequences of packets, normally RTP packets,
output as <xref target="packet-stream">Source RTP Streams</xref>.
This step includes generating both the RTP payloads and the RTP
packets.</t>
<t>The Media Packetizer can use multiple inputs when producing a
single RTP Stream. One such example is <xref target="sstmst">SST
packetization when using SVC</xref>.</t>
<t>The Media Packetizer can also produce multiple RTP Streams, for
example when Encoded and/or Dependent Streams are distributed over
multiple RTP Streams. One example of this is <xref
target="sstmst">MST packetization when using SVC</xref>.</t>
<t>Characteristics:<list style="symbols">
<t>The Media Packetizer selects which Synchronization
Source(s) (SSRC) <xref target="RFC3550"/> are used, and in which
RTP sessions.</t>
<t>A Media Packetizer can combine multiple Encoded or Dependent
Streams into one or more RTP Streams.</t>
</list></t>
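<t>A generic, codec-agnostic sketch of the fragmentation step performed by a Media Packetizer. This is illustrative only: actual RTP payload formats define their own, codec-specific fragmentation rules.</t>
<figure align="center" title="Generic Packetization Sketch (Illustrative)">
<artwork align="left"><![CDATA[
```python
def packetize(encoded_frame: bytes, max_payload: int):
    """Split one encoded frame into payload-sized fragments.
    Returns (payload, marker) pairs; for video, the RTP marker
    bit is commonly set on the last packet of a frame."""
    chunks = [encoded_frame[i:i + max_payload]
              for i in range(0, len(encoded_frame), max_payload)]
    return [(c, i == len(chunks) - 1) for i, c in enumerate(chunks)]
```
]]></artwork>
</figure>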
</section>
<section anchor="packet-stream" title="RTP Stream">
<t>A stream of RTP packets containing media data, either source or
redundant. The RTP Stream is identified by an SSRC belonging to a
particular RTP session. The RTP session is identified as discussed
in <xref target="rtp-session"/>.</t>
<t>A Source RTP Stream is an RTP Stream containing at least some
content from an Encoded Stream. Source material is any media
material that is produced for transport over RTP without any
additional redundancy applied to cope with network transport losses.
Compare this with the <xref
target="redundancy-packet-stream">Redundancy RTP Stream</xref>.</t>
<t>Characteristics:<list style="symbols">
<t>Each RTP Stream is identified by a unique Synchronization
source (SSRC) <xref target="RFC3550"/> that is carried in every
RTP and RTP Control Protocol (RTCP) packet header in a specific
RTP session context.</t>
<t>At any given point in time, an RTP Stream has one and
only one SSRC. SSRC collision and <xref target="RFC7160">clock
rate change</xref> are examples of valid reasons to change the SSRC
of an RTP Stream; the RTP Stream itself is not changed in
any significant way, only the identifying SSRC number.</t>
<t>Each RTP Stream defines a unique RTP sequence numbering and
timing space.</t>
<t>Several RTP Streams may map to a single Media Source via the
source transformations.</t>
<t>Several RTP Streams can be carried over a single RTP
Session.</t>
</list></t>
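<t>The identifying SSRC is carried in the fixed RTP header defined in <xref target="RFC3550"/>. A minimal parsing sketch of that header:</t>
<figure align="center" title="Fixed RTP Header Parsing Sketch (Illustrative)">
<artwork align="left"><![CDATA[
```python
import struct

def parse_rtp_header(packet: bytes):
    """Extract identifying fields from the fixed 12-byte RTP header."""
    if len(packet) < 12:
        raise ValueError("too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {"version": b0 >> 6,
            "payload_type": b1 & 0x7F,
            "sequence_number": seq,
            "timestamp": ts,
            "ssrc": ssrc}
```
]]></artwork>
</figure>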
</section>
<section anchor="media-redundancy" title="Media Redundancy">
<t>Media redundancy is a transformation that generates redundant or
repair packets sent out as a Redundancy RTP Stream to mitigate
network transport impairments, like packet loss and delay.</t>
<t>Media Redundancy exists in many flavors. It may generate
independent Repair Streams that are used in addition to
the Source Stream (<xref target="RFC4588">RTP Retransmission</xref>
and some <xref target="RFC5109">FEC</xref>), it may generate a new
Source Stream by combining redundancy information with source
information (using <xref target="RFC5109">XOR FEC</xref> as a <xref
target="RFC2198">redundancy payload</xref>), or it may completely
replace the source information with redundancy packets only.</t>
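<t>The XOR-based parity idea behind <xref target="RFC5109"/> FEC can be sketched as follows. This is illustrative only: the real FEC payload format also protects selected RTP header fields and carries its own FEC header.</t>
<figure align="center" title="XOR Parity Sketch (Illustrative)">
<artwork align="left"><![CDATA[
```python
def xor_parity(packets):
    """XOR payloads together to form one parity block."""
    length = max(len(p) for p in packets)
    parity = bytearray(length)
    for p in packets:
        for i, byte in enumerate(p.ljust(length, b"\x00")):
            parity[i] ^= byte
    return bytes(parity)

def recover_one(survivors, parity):
    """With exactly one packet missing, XOR-ing the survivors
    with the parity block reproduces the missing packet."""
    return xor_parity(list(survivors) + [parity])
```
]]></artwork>
</figure>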
</section>
<section anchor="redundancy-packet-stream"
title="Redundancy RTP Stream">
<t>An <xref target="packet-stream">RTP Stream</xref> that contains no
original source data, only redundant data that may be combined with
one or more <xref target="received-packet-stream">Received RTP
Streams</xref> to produce <xref
target="repaired-packet-stream">Repaired RTP Streams</xref>.</t>
</section>
<section anchor="media-transport" title="Media Transport">
<t>A Media Transport defines the transformation that the <xref
target="packet-stream">RTP Streams</xref> are subjected to by the
end-to-end transport from one RTP sender to one specific RTP
receiver (an RTP session may contain multiple RTP receivers per
sender). Each Media Transport is defined by a transport association
that is identified by a 5-tuple (source address, source port,
destination address, destination port, transport protocol). Each
transport association normally contains only a single RTP session,
although a proposal exists for sending <xref
target="I-D.westerlund-avtcore-transport-multiplexing">multiple RTP
sessions over one transport association</xref>.</t>
<t>Characteristics:<list style="symbols">
<t>Media Transport transmits RTP Streams of RTP Packets from a
source transport address to a destination transport address.</t>
</list></t>
<t>The Media Transport concept sometimes needs to be decomposed into
more steps, to enable discussion of what a sender emits and how it is
transformed by the network before being received by the receiver.
Thus we also provide this <xref target="fig-media-transport">Media
Transport decomposition</xref>.</t>
<figure align="center" anchor="fig-media-transport"
title="Decomposition of Media Transport">
<artwork align="center"><![CDATA[ RTP Stream
|
V
+--------------------------+
| Media Transport Sender |
+--------------------------+
|
Sent RTP Stream
V
+--------------------------+
| Network Transport |
+--------------------------+
|
Transported RTP Stream
V
+--------------------------+
| Media Transport Receiver |
+--------------------------+
|
V
Received RTP Stream
]]></artwork>
</figure>
</section>
<section anchor="media-transport-sender"
title="Media Transport Sender">
<t>The first transformation within the <xref
target="media-transport">Media Transport</xref> is the Media
Transport Sender, where the sending <xref
target="end-point">End-Point</xref> takes an RTP Stream and emits the
packets onto the network, using the transport association established
for this Media Transport, thus creating a <xref
target="sent-packet-stream">Sent RTP Stream</xref>. In this process
it transforms the RTP Stream in several ways. First, the packets gain
the necessary protocol headers for the transport association, for
example IP and UDP headers, thus forming IP/UDP/RTP packets. In
addition, the Media Transport Sender may queue, pace or otherwise
affect how the packets are emitted onto the network, thus adding the
delay, jitter, and inter-packet spacing that characterize the Sent
RTP Stream.</t>
</section>
<section anchor="sent-packet-stream" title="Sent RTP Stream">
<t>The Sent RTP Stream is the RTP Stream as it enters the first hop
of the network path to its destination. The Sent RTP Stream is
identified using network transport addresses, e.g., for IP/UDP, the
5-tuple (source IP address, source port, destination IP address,
destination port, and protocol (UDP)).</t>
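<t>The identifying 5-tuple can be modeled as a simple value type usable as a demultiplexing key. The names below are hypothetical, shown only to make the identification concrete:</t>
<figure align="center" title="5-Tuple as a Transport Association Key (Illustrative)">
<artwork align="left"><![CDATA[
```python
from typing import NamedTuple

class FiveTuple(NamedTuple):
    """Identifies one transport association, and thus one
    Media Transport."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    protocol: str

# Packets carrying the same 5-tuple belong to the same
# transport association.
a = FiveTuple("192.0.2.1", 5000, "192.0.2.2", 6000, "UDP")
b = FiveTuple("192.0.2.1", 5000, "192.0.2.2", 6000, "UDP")
```
]]></artwork>
</figure>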
</section>
<section anchor="network-transport" title="Network Transport">
<t>Network Transport is the transformation that the <xref
target="sent-packet-stream">Sent RTP Stream</xref> is subjected to
by traveling from the source to the destination through the network.
These transformations include loss of some packets, varying
per-packet delay, packet duplication, and packet header or data
corruption. They produce a <xref
target="transported-packet-stream">Transported RTP Stream</xref> at
the exit of the network path.</t>
</section>
<section anchor="transported-packet-stream"
title="Transported RTP Stream">
<t>The RTP Stream that is emitted out of the network path at the
destination, subjected to the <xref
target="network-transport">Network Transport's
transformation</xref>.</t>
</section>
<section anchor="transport-receiver" title="Media Transport Receiver">
<t>The receiving <xref target="end-point">End-Point's</xref>
transformation of the <xref
target="transported-packet-stream">Transported RTP Stream</xref> by
its reception process, resulting in the <xref
target="received-packet-stream">Received RTP Stream</xref>. This
transformation includes verifying transport checksums and discarding
any corrupted packet whose checksum does not match. Other
transformations can include delay variation between receiving a packet
on the network interface and providing it to the application.</t>
</section>
<section anchor="received-packet-stream" title="Received RTP Stream">
<t>The <xref target="packet-stream">RTP Stream</xref> resulting from
the Media Transport's transformation, i.e. subjected to packet loss,
packet corruption, packet duplication and varying transmission delay
from sender to receiver.</t>
</section>
<section anchor="received-redundancy-ps"
title="Received Redundancy RTP Stream">
<t>The <xref target="redundancy-packet-stream">Redundancy RTP
Stream</xref> resulting from the Media Transport transformation,
i.e. subjected to packet loss, packet corruption, and varying
transmission delay from sender to receiver.</t>
</section>
<section anchor="media-repair" title="Media Repair">
<t>A transformation that takes as input one or more <xref
target="packet-stream">Source RTP Streams</xref>, as well as <xref
target="redundancy-packet-stream">Redundancy RTP Streams</xref>, and
attempts to combine them to counter the transformations introduced
by the <xref target="media-transport">Media Transport</xref>,
minimizing the difference between the <xref
target="source-stream">Source Stream</xref> and the <xref
target="received-source-stream">Received Source Stream</xref> after
the <xref target="media-decoder">Media Decoder</xref>. The output is
a <xref target="repaired-packet-stream">Repaired RTP
Stream</xref>.</t>
</section>
<section anchor="repaired-packet-stream" title="Repaired RTP Stream">
<t>A <xref target="received-packet-stream">Received RTP
Stream</xref> for which <xref
target="received-redundancy-ps">Received Redundancy RTP
Stream</xref> information has been used to try to re-create the
<xref target="packet-stream">RTP Stream</xref> as it was before
<xref target="media-transport">Media Transport</xref>.</t>
</section>
<section anchor="media-depacketizer" title="Media Depacketizer">
<t>A Media Depacketizer takes one or more <xref
target="packet-stream">RTP Streams</xref>, depacketizes them, and
attempts to reconstitute the <xref target="encoded-stream">Encoded
Streams</xref> or <xref target="dependent-stream">Dependent
Streams</xref> present in those RTP Streams.</t>
<t>It should be noted that in practical implementations, the Media
Depacketizer and the Media Decoder may be tightly coupled and share
information to improve or optimize the overall decoding process in
various ways. It is however not expected that there would be any
benefit in defining a taxonomy for those detailed (and likely very
implementation-dependent) steps.</t>
</section>
<section anchor="received-encoded-stream"
title="Received Encoded Stream">
<t>The received version of an <xref target="encoded-stream">Encoded
Stream</xref>.</t>
</section>
<section anchor="media-decoder" title="Media Decoder">
<t>A Media Decoder is a transformation that is responsible for
decoding <xref target="encoded-stream">Encoded Streams</xref> and
any <xref target="dependent-stream">Dependent Streams</xref> into a
<xref target="source-stream">Source Stream</xref>.</t>
<t>It should be noted that in practical implementations, the Media
Decoder and the Media Depacketizer may be tightly coupled and share
information to improve or optimize the overall decoding process in
various ways. It is however not expected that there would be any
benefit in defining a taxonomy for those detailed (and likely very
implementation-dependent) steps.</t>
<t>Characteristics:<list style="symbols">
<t>A Media Decoder is the entity that has to deal with any
errors in the encoded streams that result from corruption or from
failure to repair packet losses, since a media decoder
generally is forced to produce some output periodically. It thus
commonly includes concealment methods.</t>
</list></t>
</section>
<section anchor="received-source-stream"
title="Received Source Stream">
<t>The received version of a <xref target="source-stream">Source
Stream</xref>.</t>
</section>
<section anchor="media-sink" title="Media Sink">
<t>The Media Sink receives a <xref target="source-stream">Source
Stream</xref> that contains, usually periodically, sampled media
data together with associated synchronization information. Depending
on the application, this Source Stream then needs to be transformed
into a <xref target="raw-stream">Raw Stream</xref> that is sent, in
synchronization with the output from other Media Sinks, to a <xref
target="media-render">Media Render</xref>. The Media Sink may also
be connected to a <xref target="media-source">Media Source</xref>
and be used as part of a conceptual Media Source.</t>
<t>Characteristics:<list style="symbols">
<t>The Media Sink can further transform the Source Stream into a
representation suitable for rendering on the Media Render, as
defined by the application or system-wide configuration. This
includes sample scaling, level adjustments, etc.</t>
</list></t>
</section>
<section anchor="received-raw-stream" title="Received Raw Stream">
<t>The received version of a <xref target="raw-stream">Raw
Stream</xref>.</t>
</section>
<section anchor="media-render" title="Media Render">
<t>A Media Render takes a <xref target="raw-stream">Raw
Stream</xref> and converts it into <xref
target="physical-stimulus">Physical Stimulus</xref> that a human
user can perceive. Examples of such devices are screens, and D/A
converters connected to amplifiers and loudspeakers.</t>
<t>Characteristics:<list style="symbols">
<t>An End Point can potentially have multiple Media Renders for
each media type.</t>
</list></t>
</section>
</section>
<section anchor="communication-entities" title="Communication Entities">
<t>This section contains the concepts for entities involved in the
communication.</t>
<figure align="center" anchor="fig-p2p"
title="Example Point to Point Communication Session with two RTP Sessions">
<artwork align="center"><![CDATA[
+----------------------------------------------------------+
| Communication Session |
| |
| +----------------+ +----------------+ |
| | Participant A | +------------+ | Participant B | |
| | | | Multimedia | | | |
| | +-------------+|<=>| Session |<=>|+-------------+ | |
| | | End Point A || | | || End Point B | | |
| | | || +------------+ || | | |
| | | +-----------++--------------------++-----------+ | | |
| | | | RTP Session| | | | | |
| | | | Audio |--Media Transport-->| | | | |
| | | | |<--Media Transport--| | | | |
| | | +-----------++--------------------++-----------+ | | |
| | | || || | | |
| | | +-----------++--------------------++-----------+ | | |
| | | | RTP Session| | | | | |
| | | | Video |--Media Transport-->| | | | |
| | | | |<--Media Transport--| | | | |
| | | +-----------++--------------------++-----------+ | | |
| | +-------------+| |+-------------+ | |
| +----------------+ +----------------+ |
+----------------------------------------------------------+
]]></artwork>
</figure>
<t>The figure above shows a high-level example of a very basic
point-to-point Communication Session between Participants A
and B. It uses two different RTP Sessions, one for audio and one for
video, between A's and B's End Points, with separate Media Transports
for those RTP Sessions. The Multimedia Session shared by the
participants can for example be established using SIP (i.e., there is
a SIP Dialog between A and B). The terms used in the figure are
further elaborated in the sub-sections below.</t>
<section anchor="end-point" title="End Point">
<t><list style="empty">
<t>Editor's note: Consider if a single word, "Endpoint", is
preferable</t>
</list>A single addressable entity sending or receiving RTP
packets. It may be decomposed into several functional blocks, but as
long as it behaves as a single RTP stack entity it is classified as
a single "End Point".</t>
<t>Characteristics:<list style="symbols">
<t>End Points can be identified in several different ways. While
RTCP Canonical Names (CNAMEs) <xref target="RFC3550"/> provide a
globally unique and stable identification mechanism for the
duration of the Communication Session (see <xref
target="comm-session"/>), their validity applies exclusively
within a <xref target="syncontext">Synchronization
Context</xref>. Thus, one End Point can handle multiple CNAMEs,
each of which can be shared among a set of End Points belonging
to the same <xref target="participant">Participant</xref>.
Therefore, mechanisms outside the scope of RTP, such as
application-defined mechanisms, must be used to identify an End
Point outside this Synchronization Context.</t>
<t>An End Point can be associated with at most one <xref
target="participant">Participant</xref> at any single point in
time.</t>
<t>In some contexts, an End Point would typically correspond to
a single "host".</t>
</list></t>
</section>
<section anchor="rtp-session" title="RTP Session">
<t><list style="empty">
<t>Editor's note: Re-consider if this is really a Communication
Entity, or if it is rather an existing concept that should be
described in <xref target="mapping"/>.</t>
</list>An RTP session is an association among a group of
participants communicating with RTP. It is a group communications
channel which can potentially carry a number of RTP Streams. Within
an RTP session, every participant can find meta-data and control
information (over RTCP) about all the RTP Streams in the RTP
session. The bandwidth of the RTCP control channel is shared between
all participants within an RTP Session.</t>
<t>Characteristics:<list style="symbols">
<t>Typically, an RTP Session can carry one or more RTP
Streams.</t>
<t>An RTP Session shares a single SSRC space as defined in
RFC3550 <xref target="RFC3550"/>. That is, the End Points
participating in an RTP Session can see an SSRC identifier
transmitted by any of the other End Points. An End Point can
receive an SSRC either as SSRC or as a Contributing source
(CSRC) in RTP and RTCP packets, as defined by the endpoints'
network interconnection topology.</t>
<t>An RTP Session uses at least two <xref
target="media-transport">Media Transports</xref>, one for
sending and one for receiving. Commonly, the Media Transport
used for receiving is the reverse direction of the one used for
sending. An RTP Session may use many Media Transports, and these
define the session's network interconnection topology. A single
Media Transport normally cannot carry more than one RTP Session,
unless a solution for multiplexing multiple RTP Sessions over a
single Media Transport is used. One example of such a scheme is
<xref
target="I-D.westerlund-avtcore-transport-multiplexing">Multiple
RTP Sessions on a Single Lower-Layer Transport</xref>.</t>
<t>Multiple RTP Sessions can be related.</t>
</list></t>
</section>
<section anchor="participant" title="Participant">
<t>A Participant is an entity reachable by a single signaling
address, and is thus related more to the signaling context than to
the media context.</t>
<t>Characteristics:<list style="symbols">
<t>A single signaling-addressable entity, using an
application-specific signaling address space, for example a SIP
URI.</t>
<t>A Participant can have several <xref
target="multimedia-session">Multimedia Sessions</xref>.</t>
<t>A Participant can have several associated <xref
target="end-point">End Points</xref>.</t>
</list></t>
</section>
<section anchor="multimedia-session" title="Multimedia Session">
<t>A multimedia session is an association among a group of
participants engaged in the communication via one or more <xref
target="rtp-session">RTP Sessions</xref>. It defines logical
relationships among <xref target="media-source">Media Sources</xref>
that appear in multiple RTP Sessions.</t>
<t>Characteristics:<list style="symbols">
<t>A Multimedia Session can be composed of several parallel RTP
Sessions with potentially multiple RTP Streams per RTP
Session.</t>
<t>Each participant in a Multimedia Session can have a multitude
of Media Captures and Media Rendering devices.</t>
<t>A single Multimedia Session can contain media from one or
more <xref target="syncontext">Synchronization Contexts</xref>.
An example of that is a Multimedia Session containing one set of
audio and video for communication purposes belonging to one
Synchronization Context, and another set of audio and video for
presentation purposes (like playing a video file) with a
separate Synchronization Context that has no strong timing
relationship and need not be strictly synchronized with the
audio and video used for communication.</t>
</list></t>
</section>
<section anchor="comm-session" title="Communication Session">
<t>A Communication Session is an association among a group of
participants communicating with each other via a set of Multimedia
Sessions.</t>
<t>Characteristics:<list style="symbols">
<t>Each participant in a Communication Session is identified via
an application-specific signaling address.</t>
<t>A Communication Session is composed of at least one
Multimedia Session per participant, involving one or more
parallel RTP Sessions with potentially multiple RTP Streams per
RTP Session.</t>
</list></t>
<t>For example, in a full mesh communication, the Communication
Session consists of a set of separate Multimedia Sessions between
each pair of Participants. Another example is a centralized
conference, where the Communication Session consists of a set of
Multimedia Sessions between each Participant and the conference
handler.</t>
</section>
</section>
</section>
<section anchor="relations" title="Relations at Different Levels">
<t>This section uses the concepts from the previous section and looks
at different types of relationships among them. These relationships
occur at different levels and for different purposes. The section is
organized according to the level at which a relation is required. The
reason for the relationship may exist at another step in the media
handling chain. For example, using Simulcast (discussed in <xref
target="simulcast"/>) requires relating streams at the RTP Stream
level, but the reason to relate the RTP Streams is that multiple Media
Encoders use the same Media Source, i.e. the need is to identify a
common Media Source.</t>
<t><xref target="media-source">Media Sources</xref> are commonly grouped
and related to an <xref target="end-point">End Point</xref> or a <xref
target="participant">Participant</xref>. This occurs for several
reasons, both due to application logic and for media handling
purposes.</t>
<t>At RTP Packetization time, there exists a possibility for a number of
different types of relationships between <xref
target="encoded-stream">Encoded Streams</xref>, <xref
target="dependent-stream">Dependent Streams</xref> and <xref
target="packet-stream">RTP Streams</xref>. These are caused by grouping
together or distributing these different types of streams into RTP
Streams.</t>
<t>The resulting RTP Streams will thus also have relations. This is a
common relation to handle in RTP, because RTP Streams are separate and
have their own SSRCs, implying independent sequence number and
timestamp spaces. The underlying reasons for the RTP Stream
relationships differ, as can be seen in the sub-sections
below.</t>
<t>RTP Streams may be protected by Redundancy RTP Streams during
transport. Several approaches, listed below, can be used to create
Redundancy RTP Streams: <list style="symbols">
<t>Duplication of the original RTP Stream,</t>
<t>Duplication of the original RTP Stream with a time offset,</t>
<t>Forward Error Correction (FEC) techniques, and</t>
<t>Retransmission of lost packets (either globally or
selectively).</t>
</list></t>
<t>The different RTP Streams can be transported within the same RTP
Session or in different RTP Sessions to accomplish different transport
goals. This explicit separation of RTP Streams is further discussed in
<xref target="packet-stream-separation"/>.</t>
<section anchor="syncontext" title="Synchronization Context">
<t>A Synchronization Context defines a requirement for a strong timing
relationship between the Media Sources, typically requiring alignment
of clock sources. Such a relationship can be identified in multiple
ways, as listed below. A single Media Source can only belong to a
single Synchronization Context, since it is assumed that a single
Media Source can only have a single media clock, and requiring
alignment to several Synchronization Contexts (and thus reference
clocks) would effectively merge those into a single Synchronization
Context.</t>
<section anchor="cname" title="RTCP CNAME">
<t>RFC3550 <xref target="RFC3550"/> describes inter-media
synchronization between RTP Sessions based on RTCP CNAME, RTP
timestamps, and Network Time Protocol (NTP) <xref target="RFC5905"/>
formatted timestamps of a reference clock. As indicated in <xref
target="I-D.ietf-avtcore-clksrc"/>, despite using NTP format
timestamps, it is not required that the clock be synchronized to an
NTP source.</t>
</section>
<section title="Clock Source Signaling">
<t><xref target="I-D.ietf-avtcore-clksrc"/> provides a mechanism to
signal the clock source in SDP both for the reference clock as well
as the media clock, thus allowing a Synchronization Context to be
defined beyond the one defined by the usage of CNAME source
descriptions.</t>
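<t>As an illustrative sketch (the attribute values below are
hypothetical examples, not normative syntax), such clock source
signaling could appear in an SDP media description as:</t>
<figure align="center" title="Sketch of Clock Source Signaling in SDP">
<artwork align="center"><![CDATA[
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/2
a=ts-refclk:ntp=203.0.113.10
a=mediaclk:direct=963214424
]]></artwork>
</figure>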
</section>
<section title="Implicitly via RtcMediaStream">
<t>The WebRTC WG defines "RtcMediaStream" with one or more
"RtcMediaStreamTracks". All tracks in an "RtcMediaStream" are
intended to be synchronizable when rendered.</t>
</section>
<section title="Explicitly via SDP Mechanisms">
<t>RFC5888 <xref target="RFC5888"/> defines an m=line grouping
mechanism called "Lip Synchronization (LS)" for establishing the
synchronization requirement across m=lines when they map to
individual sources.</t>
<t>RFC5576 <xref target="RFC5576"/> extends the above mechanism to
cases where multiple media sources are described by a single
m=line.</t>
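<t>As a sketch, an LS group relating an audio and a video m=line via
RFC5888 <xref target="RFC5888"/> could look as follows (ports,
payload types and mid values are examples only):</t>
<figure align="center" title="Sketch of SDP LS Grouping">
<artwork align="center"><![CDATA[
a=group:LS a1 v1
m=audio 30000 RTP/AVP 0
a=mid:a1
m=video 30002 RTP/AVP 31
a=mid:v1
]]></artwork>
</figure>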
</section>
</section>
<section title="End Point">
<t>Some applications require knowledge of which Media Sources
originate from a particular <xref target="end-point">End Point</xref>.
This can for example support decisions on packet routing between parts
of the topology, based on knowing the End Point origin of the RTP
Streams.</t>
<t>In RTP, this identification has been overloaded with the <xref
target="syncontext">Synchronization Context</xref> through the usage
of the RTCP source description <xref target="cname">CNAME</xref> item.
This works for some usages, but sometimes it breaks down; for example,
when an End Point has two sets of Media Sources with different
Synchronization Contexts, like the audio and video of the human
participant as well as a set of Media Sources of audio and video for a
shared movie. Thus, an End Point may have multiple CNAMEs. The CNAMEs
or the Media Sources themselves can be related to the End Point.</t>
</section>
<section title="Participant">
<t>In communication scenarios, it is commonly necessary to know which
Media Sources originate from which <xref
target="participant">Participant</xref>, for example to enable the
application to display Participant identity information correctly
associated with the Media Sources. This association is currently
handled through the signaling solution pointing at a specific
Multimedia Session, where the Media Sources may be explicitly or
implicitly tied to a particular End Point.</t>
<t>Participant information becomes more problematic for Media
Sources that are generated through mixing or other conceptual
processing of Raw Streams or Source Streams that originate from
different Participants. Such Media Sources can thus have a
dynamically varying set of origins and Participants. RTP contains the
concept of Contributing Sources (CSRC), which carries such information
about the previous-step origin of the included media content at the
RTP level.</t>
</section>
<section title="RtcMediaStream">
<t>An RtcMediaStream in WebRTC is an explicit grouping of a set of
Media Sources (RtcMediaStreamTracks) that share a common identifier
and a single <xref target="syncontext">Synchronization
Context</xref>.</t>
</section>
<section anchor="sstmst"
title="Single- and Multi-Session Transmission of SVC">
<t><xref target="RFC6190">Scalable Video Coding</xref> has a mode of
operation called Single Session Transmission (SST), where Encoded
Streams and Dependent Streams from the SVC Media Encoder are sent in a
single <xref target="rtp-session">RTP Session</xref> using the SVC RTP
Payload format. There is another mode of operation where Encoded
Streams and Dependent Streams are distributed across multiple RTP
Sessions, called Multi-Session Transmission (MST). SST denotes one or
more RTP Streams (SSRC) per Media Source in a single RTP Session. MST
denotes one or more RTP Streams (SSRC) per Media Source in each of
multiple RTP Sessions. This is not always clear from the <xref
target="RFC6190">SVC payload format text</xref>, but is what existing
deployments of that RFC have implemented.</t>
<t>To elaborate, what could be called SST-SingleStream (SST-SS) uses a
single RTP Stream in a single RTP Session to send all Encoded and
Dependent Streams from a single Media Source. Similarly,
SST-MultiStream (SST-MS) uses multiple RTP Streams per Media Source in
a single RTP Session to send the Encoded and Dependent Streams. MST-SS
uses a single RTP Stream in each of multiple RTP Sessions, where each
RTP Stream can originate from any one of possibly multiple Media
Sources. Finally, MST-MS uses multiple RTP Streams in each of the
multiple RTP Sessions, where each RTP Stream can originate from any
one of possibly multiple Media Sources. This is summarized below:</t>
<texttable align="center" anchor="tab-sst-mst"
title="SST / MST Summary">
<ttcol>RTP Streams per Media Source</ttcol>
<ttcol>Single RTP Session</ttcol>
<ttcol>Multiple RTP Sessions</ttcol>
<c>Single</c>
<c>SST-SS</c>
<c>MST-SS</c>
<c>Multiple</c>
<c>SST-MS</c>
<c>MST-MS</c>
</texttable>
</section>
<section title="Multi-Channel Audio">
<t>There exist a number of RTP payload formats that can carry
multi-channel audio, even though the codec itself is a mono encoder.
Multi-channel audio can be viewed as multiple Media Sources sharing a
common Synchronization Context. These are independently encoded by a
Media Encoder, and the different Encoded Streams are then packetized
together, in a time-synchronized way, into a single Source RTP Stream
using that codec's RTP payload format. Examples of such codecs are
<xref target="RFC3551">PCMA and PCMU</xref>, <xref
target="RFC4867">AMR</xref>, and <xref
target="RFC5404">G.719</xref>.</t>
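<t>In SDP <xref target="RFC4566"/>, the number of audio channels
carried by such a payload format is expressed by the encoding
parameters part of the rtpmap attribute, as in this stereo PCMA
sketch (the dynamic payload type 97 is an arbitrary choice):</t>
<figure align="center" title="Sketch of Multi-Channel Audio in SDP">
<artwork align="center"><![CDATA[
m=audio 49170 RTP/AVP 97
a=rtpmap:97 PCMA/8000/2
]]></artwork>
</figure>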
</section>
<section anchor="simulcast" title="Simulcast">
<t>A Media Source represented as multiple independent Encoded Streams
constitutes a simulcast of that Media Source. <xref
target="fig-simulcast"/> below shows an example of a Media Source
that is encoded into three separate and different simulcast streams,
which are in turn sent on the same Media Transport flow. When using
Simulcast, the RTP Streams may share an RTP Session and Media
Transport, or be separated onto different RTP Sessions and Media
Transports, or use any combination of these two. Other
considerations determine which usage is desirable, as discussed in
<xref target="packet-stream-separation"/>.</t>
<figure anchor="fig-simulcast"
title="Example of Media Source Simulcast">
<artwork align="center"><![CDATA[ +----------------+
| Media Source |
+----------------+
Source Stream |
+----------------------+----------------------+
| | |
V V V
+------------------+ +------------------+ +------------------+
| Media Encoder | | Media Encoder | | Media Encoder |
+------------------+ +------------------+ +------------------+
| Encoded | Encoded | Encoded
| Stream | Stream | Stream
V V V
+------------------+ +------------------+ +------------------+
| Media Packetizer | | Media Packetizer | | Media Packetizer |
+------------------+ +------------------+ +------------------+
| Source | Source | Source
| RTP | RTP | RTP
| Stream | Stream | Stream
+-----------------+ | +-----------------+
| | |
V V V
+-------------------+
| Media Transport |
+-------------------+
]]></artwork>
</figure>
<t>The simulcast relation between the RTP Streams is the common Media
Source. In addition, to be able to identify the common Media Source, a
receiver of the RTP Stream may need to know which configuration or
encoding goals lay behind the produced Encoded Stream and its
properties, to enable selection of the stream that is most useful to
the application at that moment.</t>
</section>
<section anchor="lms" title="Layered Multi-Stream">
<t>Layered Multi-Stream (LMS) is a mechanism by which different
portions of a layered encoding of a Source Stream are sent using
separate RTP Streams (sometimes in separate RTP Sessions). LMSs are
useful for receiver control of layered media.</t>
<t>A Media Source represented as an Encoded Stream and multiple
Dependent Streams is a Media Source with layered dependencies. The
figure below represents an example of a Media Source that is encoded
into three dependent layers, where two layers are sent on the same
Media Transport using different RTP Streams, i.e. SSRCs, and the
third layer is sent on a separate Media Transport, i.e. a different
RTP Session.</t>
<figure align="center" anchor="fig-ddp"
title="Example of Media Source Layered Dependency">
<artwork align="center"><![CDATA[ +----------------+
| Media Source |
+----------------+
|
|
V
+---------------------------------------------------------+
| Media Encoder |
+---------------------------------------------------------+
| | |
Encoded Stream Dependent Stream Dependent Stream
| | |
V V V
+----------------+ +----------------+ +----------------+
|Media Packetizer| |Media Packetizer| |Media Packetizer|
+----------------+ +----------------+ +----------------+
| | |
RTP Stream RTP Stream RTP Stream
| | |
+------+ +------+ |
| | |
V V V
+-----------------+ +-----------------+
| Media Transport | | Media Transport |
+-----------------+ +-----------------+
]]></artwork>
</figure>
<t>As an example, the <xref target="sstmst">SVC MST</xref> relation
needs to identify the common Media Encoder origin for the Encoded and
Dependent Streams. The SVC RTP Payload RFC is not particularly
explicit about how this relation is to be implemented. When using
different RTP Sessions, thus different Media Transports, and as long
as there is only one RTP Stream per Media Encoder and a single Media
Source in each RTP Session (<xref target="sstmst">MST-SS</xref>), a
common SSRC and CNAME can be used to identify the common Media
Source. When multiple RTP Streams are sent from one Media Encoder in
the same RTP Session (SST-MS), then CNAME is the only currently
specified RTP identifier that can be used. In cases where multiple
Media Encoders use multiple Media Sources sharing a Synchronization
Context, and thus having a common CNAME, additional heuristics need to
be applied to create the MST relationship between the RTP Streams.</t>
</section>
<section anchor="stream-dup" title="RTP Stream Duplication">
<t><xref target="RFC7198">RTP Stream Duplication</xref>, using the
same or different Media Transports, and optionally also <xref
target="RFC7197">delaying the duplicate</xref>, offers a simple way to
protect media flows from packet loss in some cases. It is a specific
type of redundancy in which all but one <xref
target="packet-stream">Source RTP Stream</xref> are effectively <xref
target="redundancy-packet-stream">Redundancy RTP Streams</xref>, but
since the Source and Redundancy RTP Streams are identical, it does not
matter which is which. This can also be seen as a specific type of
<xref target="simulcast">Simulcast</xref> that transmits the same
<xref target="encoded-stream">Encoded Stream</xref> multiple
times.</t>
<figure anchor="fig-duplication"
title="Example of RTP Stream Duplication">
<artwork align="center"><![CDATA[ +----------------+
| Media Source |
+----------------+
Source Stream |
V
+----------------+
| Media Encoder |
+----------------+
Encoded Stream |
+-----------+-----------+
| |
V V
+------------------+ +------------------+
| Media Packetizer | | Media Packetizer |
+------------------+ +------------------+
Source | RTP Stream Source | RTP Stream
| V
| +-------------+
| | Delay (opt) |
| +-------------+
| |
+-----------+-----------+
|
V
+-------------------+
| Media Transport |
+-------------------+
]]></artwork>
</figure>
</section>
<section anchor="red" title="Redundancy Format">
<t>The <xref target="RFC2198">RTP Payload for Redundant Audio
Data</xref> defines how one can transport redundant audio data
together with primary data in the same RTP payload. The redundant data
can be a time delayed version of the primary or another time delayed
Encoded Stream using a different Media Encoder to encode the same
Media Source as the primary, as depicted below in <xref
target="fig-red-rfc2198"/>.</t>
<figure align="center" anchor="fig-red-rfc2198"
title="Concept for usage of Audio Redundancy with different Media Encoders">
<artwork align="center"><![CDATA[+--------------------+
| Media Source |
+--------------------+
|
Source Stream
|
+------------------------+
| |
V V
+--------------------+ +--------------------+
| Media Encoder | | Media Encoder |
+--------------------+ +--------------------+
| |
| +------------+
Encoded Stream | Time Delay |
| +------------+
| |
| +------------------+
V V
+--------------------+
| Media Packetizer |
+--------------------+
|
V
RTP Stream ]]></artwork>
</figure>
<t>The Redundancy format thus provides the necessary meta-information
to correctly relate different parts of the same Encoded Stream or, in
the case <xref target="fig-red-rfc2198">depicted above</xref>, to
relate the received Source Stream fragments coming out of different
Media Decoders so that they can be combined into a less erroneous
Source Stream.</t>
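<t>As a sketch, use of the redundancy format with PCMU (payload type
0) as primary and DVI4 (payload type 5) as redundant encoding could
be declared in SDP as below (the dynamic payload type 121 is an
arbitrary choice):</t>
<figure align="center" title="Sketch of Redundancy Format in SDP">
<artwork align="center"><![CDATA[
m=audio 12345 RTP/AVP 121 0 5
a=rtpmap:121 red/8000/1
a=fmtp:121 0/5
]]></artwork>
</figure>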
</section>
<section anchor="rtx" title="RTP Retransmission">
<t>The <xref target="fig-rtx">figure below</xref> represents an
example where a Media Source's Source RTP Stream is protected by a
<xref target="RFC4588">retransmission (RTX) flow</xref>. In this
example the Source RTP Stream and the Redundancy RTP Stream share the
same Media Transport.</t>
<figure align="center" anchor="fig-rtx"
title="Example of Media Source Retransmission Flows">
<artwork align="center"><![CDATA[+--------------------+
| Media Source |
+--------------------+
|
V
+--------------------+
| Media Encoder |
+--------------------+
| Retransmission
Encoded Stream +--------+ +---- Request
V | V V
+--------------------+ | +--------------------+
| Media Packetizer | | | RTP Retransmission |
+--------------------+ | +--------------------+
| | |
+------------+ Redundancy RTP Stream
Source RTP Stream |
| |
+---------+ +---------+
| |
V V
+-----------------+
| Media Transport |
+-----------------+
]]></artwork>
</figure>
<t>The <xref target="fig-rtx">RTP Retransmission example</xref> helps
illustrate that this mechanism works purely on the Source RTP Stream.
The RTP Retransmission transform buffers the sent Source RTP Stream
and, upon request, emits a retransmitted packet with some extra payload
header as a Redundancy RTP Stream. The <xref target="RFC4588">RTP
Retransmission mechanism</xref> is specified so that there is a
one-to-one relation between the Source RTP Stream and the Redundancy
RTP Stream. Thus, a Redundancy RTP Stream needs to be associated with
its Source RTP Stream upon being received. This is done based on CNAME
selectors and heuristics that match requested packets for a given
Source RTP Stream with the original sequence number in the payload of
any new Redundancy RTP Stream using the RTX payload format. In cases
where the Redundancy RTP Stream is sent in a separate RTP Session from
the Source RTP Stream, these sessions are related, e.g. using the
<xref target="RFC5888">SDP Media Grouping's</xref> FID semantics.</t>
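<t>As an illustrative sketch of the separate-session case, the FID
semantics can relate the retransmission session to its source session
(ports, payload types and mid values are examples only):</t>
<figure align="center"
title="Sketch of RTP Retransmission Session Grouping in SDP">
<artwork align="center"><![CDATA[
a=group:FID 1 2
m=video 49170 RTP/AVPF 96
a=rtpmap:96 H264/90000
a=mid:1
m=video 49172 RTP/AVPF 97
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96;rtx-time=3000
a=mid:2
]]></artwork>
</figure>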
</section>
<section anchor="fec" title="Forward Error Correction">
<t>The <xref target="fig-fec">figure below</xref> represents an
example where two Media Sources' Source RTP Streams are protected by
FEC. Source RTP Stream A has a Media Redundancy transformation in FEC
Encoder 1. This produces Redundancy RTP Stream 1, which is related
only to Source RTP Stream A. FEC Encoder 2, however, takes two
Source RTP Streams (A and B) and produces a Redundancy RTP Stream 2
that protects them together, i.e. Redundancy RTP Stream 2 relates to
two Source RTP Streams (a FEC group). FEC decoding, when needed due to
packet loss or packet corruption at the receiver, requires knowledge
of which Source RTP Streams the FEC encoding was based on.</t>
<t>In <xref target="fig-fec"/> all RTP Streams are sent on the same
Media Transport. This is however not the only possible choice.
Numerous combinations exist for spreading these RTP Streams over
different Media Transports to achieve the communication application's
goal.</t>
<figure align="center" anchor="fig-fec" title="Example of FEC Flows">
<artwork align="center"><![CDATA[+--------------------+ +--------------------+
| Media Source A | | Media Source B |
+--------------------+ +--------------------+
| |
V V
+--------------------+ +--------------------+
| Media Encoder A | | Media Encoder B |
+--------------------+ +--------------------+
| |
Encoded Stream Encoded Stream
V V
+--------------------+ +--------------------+
| Media Packetizer A | | Media Packetizer B |
+--------------------+ +--------------------+
| |
Source RTP Stream A Source RTP Stream B
| |
+-----+---------+-------------+ +---+---+
| V V V |
| +---------------+ +---------------+ |
| | FEC Encoder 1 | | FEC Encoder 2 | |
| +---------------+ +---------------+ |
| Redundancy | Redundancy | |
| RTP Stream 1 | RTP Stream 2 | |
V V V V
+----------------------------------------------------------+
| Media Transport |
+----------------------------------------------------------+
]]></artwork>
</figure>
<t>As FEC encoding exists in various forms, the methods for relating
FEC Redundancy RTP Streams with their source information in Source RTP
Streams are many. The <xref target="RFC5109">XOR-based RTP FEC Payload
format</xref> is defined in such a way that a Redundancy RTP Stream
has a one-to-one relation with a Source RTP Stream. In fact, the RFC
requires the Redundancy RTP Stream to use the same SSRC as the Source
RTP Stream. This requires either using a separate RTP Session or using
the <xref target="RFC2198">Redundancy RTP Payload format</xref>.
The underlying relation requirement for this FEC format and a
particular Redundancy RTP Stream is to know the related Source RTP
Stream, including its SSRC.</t>
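<t>As a sketch, the separate-RTP-Session alternative could be
expressed in SDP with an "FEC" group (the FEC grouping semantics are
defined outside this document; ports, payload types and mid values
are examples only):</t>
<figure align="center" title="Sketch of FEC Session Grouping in SDP">
<artwork align="center"><![CDATA[
a=group:FEC 1 2
m=audio 30000 RTP/AVP 0
a=mid:1
m=audio 30002 RTP/AVP 100
a=rtpmap:100 ulpfec/8000
a=mid:2
]]></artwork>
</figure>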
<t><!--MW: Here we could add something about FECFRAME and generalized block FEC that can
protect multiple RTP Streams with one Redundancy RTP Stream. However, that do require
use of explicit Source Packet Information.--></t>
</section>
<section anchor="packet-stream-separation" title="RTP Stream Separation">
<t>RTP Streams can be separated exclusively based on their SSRCs, at
the RTP Session level, or at the Multimedia Session level.</t>
<t>When the RTP Streams that have a relationship are all sent in the
same RTP Session and are uniquely identified based on their SSRCs only,
it is termed SSRC-only based separation. Such streams can be
related via the RTCP CNAME to identify that the streams belong to the
same End Point. <xref target="RFC5576"/>-based approaches, when used,
can explicitly relate various such RTP Streams.</t>
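<t>As an illustration, the source-specific attributes of <xref
target="RFC5576"/> can both declare a CNAME per SSRC and explicitly
group SSRCs within one m=line (the SSRC values and CNAME below are
examples only):</t>
<figure align="center"
title="Sketch of SSRC-Only Based Separation in SDP">
<artwork align="center"><![CDATA[
m=video 49170 RTP/AVPF 96 97
a=rtpmap:96 H264/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
a=ssrc-group:FID 11111 22222
a=ssrc:11111 cname:user@example.com
a=ssrc:22222 cname:user@example.com
]]></artwork>
</figure>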
<t>On the other hand, when RTP Streams that are related are sent in
different RTP Sessions to achieve separation, it is known as RTP
Session-based separation. This is commonly used when the different RTP
Streams are intended for different Media Transports.</t>
<t>Several mechanisms that use RTP Session-based separation rely on it
to enable an implicit grouping mechanism expressing the relationship.
These solutions have been based on using the same SSRC value in the
different RTP Sessions to implicitly indicate their relation. That
way, no explicit RTP-level mechanism has been needed; only
signaling-level relations have been established using semantics from
the <xref target="RFC5888">Grouping of Media Lines framework</xref>.
Examples of this are <xref target="RFC4588">RTP Retransmission</xref>,
<xref target="RFC6190">SVC Multi-Session Transmission</xref> and <xref
target="RFC5109">XOR-Based FEC</xref>. RTCP CNAME explicitly relates
RTP Streams across different RTP Sessions, as explained in the
previous section. Such a relationship can be used to perform
inter-media synchronization.</t>
<t>RTP Streams that are related and need to be associated can be part
of different Multimedia Sessions, rather than just different RTP
sessions within the same Multimedia Session context. This puts further
demand on the scope of the mechanism(s) and its handling of
identifiers used for expressing the relationships.</t>
</section>
<section title="Multiple RTP Sessions over one Media Transport">
<t><xref target="I-D.westerlund-avtcore-transport-multiplexing"/>
describes a mechanism that allows several RTP Sessions to be carried
over a single underlying Media Transport. The main reasons for doing
this relate to the impact of using one or more Media Transports:
using a common network path or potentially different ones, a reduced
need for NAT/FW traversal resources, and no need for flow-based
QoS.</t>
<t>However, Multiple RTP Sessions over one Media Transport makes it
clear that a single Media Transport 5-tuple is not sufficient to
express which RTP Session context a particular RTP Stream exists in.
Complexities in the relationship between Media Transports and RTP
Sessions already exist, as one RTP Session contains multiple Media
Transports; e.g. even a peer-to-peer RTP Session with RTP/RTCP
multiplexing requires two Media Transports, one in each direction. The
relationship between Media Transports and RTP Sessions, as well as
additional levels of identifiers, needs to be considered in both
signaling design and when defining terminology.</t>
</section>
</section>
<section anchor="mapping" title="Mapping from Existing Terms">
<t>This section describes a selected set of terms from some relevant
IETF RFCs and Internet-Drafts (at the time of writing), using the
concepts from the previous sections.</t>
<section title="Audio Capture">
<t>Telepresence specifications from CLUE WG use this term to describe
an audio <xref target="media-source">Media Source</xref>.</t>
</section>
<section title="Capture Device">
<t>Telepresence specifications from CLUE WG use this term to identify
a physical entity performing a <xref target="media-capture">Media
Capture</xref> transformation.</t>
</section>
<section title="Capture Encoding">
<t>Telepresence specifications from CLUE WG use this term to describe
an <xref target="encoded-stream">Encoded Stream</xref> related to CLUE
specific semantic information.</t>
</section>
<section title="Capture Scene">
<t>Telepresence specifications from CLUE WG use this term to describe
a set of spatially related <xref target="media-source">Media
Sources</xref>.</t>
</section>
<section title="Endpoint">
<t>Telepresence specifications from CLUE WG use this term to describe
exactly one <xref target="participant">Participant</xref> and one or
more <xref target="end-point">End Points</xref>.</t>
</section>
<section title="Individual Encoding">
<t>Telepresence specifications from CLUE WG use this term to describe
the configuration information needed to perform a <xref
target="media-encoder">Media Encoder</xref> transformation.</t>
</section>
<section title="Multipoint Control Unit (MCU)">
<t>This term is commonly used to describe the central node in any type
of star <xref
target="I-D.ietf-avtcore-rtp-topologies-update">topology</xref>
conference. It describes a device that includes one <xref
target="participant">Participant</xref> (usually corresponding to a
so-called conference focus) and one or more related <xref
target="end-point">End Points</xref> (sometimes one or more per
conference participant).</t>
</section>
<section anchor="clue-media-capture" title="Media Capture">
<t>Telepresence specifications from CLUE WG use this term to describe
either a <xref target="media-capture">Media Capture</xref> or a <xref
target="media-source">Media Source</xref>, depending on in which
context the term is used.</t>
</section>
<section title="Media Consumer">
<t>Telepresence specifications from CLUE WG use this term to describe
the media receiving part of an <xref target="end-point">End
Point</xref>.</t>
</section>
<section anchor="media-description" title="Media Description">
<t>A single <xref target="RFC4566">Session Description Protocol
(SDP)</xref> media description (or media block; an m-line and all
subsequent lines until the next m-line or the end of the SDP)
describes part of the necessary configuration and identification
information needed for a Media Encoder transformation, as well as the
necessary configuration and identification information for the Media
Decoder to be able to correctly interpret a received RTP Stream.</t>
<t>A Media Description typically relates to a single Media Source.
This is, for example, an explicit restriction in WebRTC. However,
nothing prevents the same Media Description (and the same RTP
Session) from being re-used for <xref
target="I-D.ietf-avtcore-rtp-multi-stream">multiple Media
Sources</xref>. It can thus describe properties of one or more RTP
Streams, and can also describe properties valid for an entire RTP
Session (via <xref target="RFC5576"/> mechanisms, for example).</t>
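<t>As an illustration (all values below are hypothetical), a single
SDP media description for an audio Media Source could look like the
following fragment. The m-line and the attribute lines up to the next
m-line (or the end of the SDP) together form one Media Description;
the rtpmap attribute configures the Media Encoder/Decoder pair, and
the ssrc attribute <xref target="RFC5576"/> associates the Media
Description with a specific RTP Stream:</t>
<figure>
<artwork><![CDATA[
m=audio 49170 RTP/AVP 96
a=rtpmap:96 opus/48000/2
a=ssrc:314159 cname:user@example.com
]]></artwork>
</figure>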
</section>
<section title="Media Provider">
<t>Telepresence specifications from CLUE WG use this term to describe
the media sending part of an <xref target="end-point">End
Point</xref>.</t>
</section>
<section title="Media Stream">
<t><xref target="RFC3550">RTP</xref> uses the terms media stream,
audio stream, video stream, and stream of (RTP) packets
interchangeably; all of these are RTP Streams.</t>
</section>
<section title="Multimedia Session">
<t><xref target="RFC4566">SDP</xref> defines a multimedia session as a
set of multimedia senders and receivers and the data streams flowing
from senders to receivers, which would correspond to a set of End
Points and the RTP Streams that flow between them. In this memo,
Multimedia Session also assumes those End Points belong to a set of
Participants that are engaged in communication via a set of related
RTP Streams.</t>
<t><xref target="RFC3550">RTP</xref> defines a multimedia session as a
set of concurrent RTP Sessions among a common group of participants.
For example, a video conference may contain an audio RTP Session and a
video RTP Session. This would correspond to a group of Participants
(each using one or more End Points) sharing a set of concurrent RTP
Sessions. In this memo, Multimedia Session additionally assumes those
RTP Sessions are related and form part of a communication among the
Participants.</t>
</section>
<section title="Recording Device">
<t>WebRTC specifications use this term to refer to locally available
entities performing a <xref target="media-capture">Media
Capture</xref> transformation.</t>
</section>
<section title="RtcMediaStream">
<t>A WebRTC RtcMediaStream is a set of <xref
target="media-source">Media Sources</xref> sharing the same <xref
target="syncontext">Synchronization Context</xref>.</t>
</section>
<section title="RtcMediaStreamTrack">
<t>A WebRTC RtcMediaStreamTrack is a <xref target="media-source">Media
Source</xref>.</t>
</section>
<section title="RTP Sender">
<t><xref target="RFC3550">RTP</xref> uses this term, which can be seen
as the RTP protocol part of a <xref target="media_packetizer">Media
Packetizer</xref>.</t>
</section>
<section title="RTP Session">
<t>Within the context of SDP, a single m=line can map to a single RTP
Session, or multiple m=lines can map to a single RTP Session. The
latter is enabled via multiplexing schemes such as BUNDLE <xref
target="I-D.ietf-mmusic-sdp-bundle-negotiation"/>, which allows
multiple m=lines to map to a single RTP Session.<list
style="empty">
<t>Editor's note: Consider if the contents of <xref
target="rtp-session"/> should be moved here, or if this section
should be kept and refer to the above.</t>
</list></t>
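<t>As a hypothetical illustration (ports, payload types, and mid
values are invented for this sketch), BUNDLE <xref
target="I-D.ietf-mmusic-sdp-bundle-negotiation"/> groups m=lines so
that the audio and video Media Descriptions below share a single
Media Transport, and thus a single RTP Session:</t>
<figure>
<artwork><![CDATA[
a=group:BUNDLE audio video
m=audio 10000 RTP/AVP 96
a=mid:audio
m=video 10000 RTP/AVP 97
a=mid:video
]]></artwork>
</figure>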
</section>
<section title="SSRC">
<t><xref target="RFC3550">RTP</xref> defines this as "the source of a
stream of RTP packets", which indicates that an SSRC is not only a
unique identifier for the <xref target="encoded-stream">Encoded
Stream</xref> carried in those packets, but is also effectively used
as a term to denote a <xref target="media_packetizer">Media
Packetizer</xref>.</t>
</section>
<section title="Stream">
<t>Telepresence specifications from CLUE WG use this term to describe
an <xref target="packet-stream">RTP Stream</xref>.</t>
</section>
<section title="Video Capture">
<t>Telepresence specifications from CLUE WG use this term to describe
a video <xref target="media-source">Media Source</xref>.</t>
</section>
</section>
<section anchor="security" title="Security Considerations">
<t>This document simply tries to clarify the confusion prevalent in
RTP taxonomy, caused by inconsistent usage among the many technologies
and protocols making use of RTP. It does not introduce any new
security considerations beyond those already well documented in the
RTP protocol <xref target="RFC3550"/> and in each of the respective
specifications of the various protocols making use of it.</t>
<t>A well-defined common terminology and a shared understanding of
the complexities of the RTP architecture will hopefully lead to
better standards that avoid security problems.</t>
</section>
<section title="Acknowledgements">
<t>This document borrows many concepts from several documents, such
as WebRTC <xref target="I-D.ietf-rtcweb-overview"/>, CLUE <xref
target="I-D.ietf-clue-framework"/>, and the Multiplexing Architecture
<xref target="I-D.westerlund-avtcore-transport-multiplexing"/>. The
authors would like to thank the authors of each of those documents.</t>
<t>The authors would also like to acknowledge the insights, guidance and
contributions of Magnus Westerlund, Roni Even, Paul Kyzivat, Colin
Perkins, Keith Drage, Harald Alvestrand, and Alex Eleftheriadis.</t>
</section>
<section title="Contributors">
<t>Magnus Westerlund contributed the conceptual model for the media
chain using transformations and streams, including rewriting
pre-existing concepts into this model and adding missing concepts. The
first proposal for updating the relationships and the topologies based
on this concept was also made by Magnus.</t>
</section>
<section anchor="iana" title="IANA Considerations">
<t>This document makes no request of IANA.</t>
</section>
</middle>
<back>
<references title="Informative References">
<?rfc include='reference.RFC.2198'?>
<?rfc include='reference.RFC.3550'?>
<?rfc include='reference.RFC.3551'?>
<?rfc include='reference.RFC.4566'?>
<?rfc include='reference.RFC.4588'?>
<?rfc include='reference.RFC.4867'?>
<?rfc include='reference.RFC.5109'?>
<?rfc include='reference.RFC.5404'?>
<?rfc include='reference.RFC.5576'?>
<?rfc include='reference.RFC.5888'?>
<?rfc include="reference.RFC.5905"?>
<?rfc include='reference.RFC.6190'?>
<?rfc include='reference.RFC.7160'?>
<?rfc include='reference.RFC.7197'?>
<?rfc include='reference.RFC.7198'?>
<?rfc include='reference.I-D.ietf-clue-framework'?>
<?rfc include='reference.I-D.ietf-rtcweb-overview'?>
<?rfc include='reference.I-D.ietf-mmusic-sdp-bundle-negotiation'?>
<?rfc include='reference.I-D.ietf-avtcore-clksrc'?>
<?rfc include='reference.I-D.ietf-avtcore-rtp-multi-stream'?>
<?rfc include='reference.I-D.westerlund-avtcore-transport-multiplexing'?>
<?rfc include='reference.I-D.ietf-avtcore-rtp-topologies-update'?>
</references>
<section title="Changes From Earlier Versions">
<t>NOTE TO RFC EDITOR: Please remove this section prior to
publication.</t>
<section title="Modifications Between WG Version -01 and -02">
<t><list style="symbols">
<t>Major re-structure</t>
<t>Moved media chain Media Transport detailing up one section
level</t>
<t>Collapsed level 2 sub-sections of section 3 and thus moved
level 3 sub-sections up one level, gathering some introductory
text into the beginning of section 3</t>
<t>Added that not only SSRC collision, but also a clock rate
change [RFC7160] is a valid reason to change SSRC value for an RTP
stream</t>
<t>Added a sub-section on clock source signaling</t>
<t>Added a sub-section on RTP stream duplication</t>
<t>Elaborated a bit in section 2.2.1 on the relation between End
Points, Participants and CNAMEs</t>
<t>Elaborated a bit in section 2.2.4 on Multimedia Session and
synchronization contexts</t>
<t>Removed the section on CLUE scenes defining an implicit
synchronization context, since it was incorrect</t>
<t>Clarified text on SVC SST and MST according to list
discussions</t>
<t>Removed the entire topology section to avoid possible
inconsistencies or duplications with
draft-ietf-avtcore-rtp-topologies-update, but saved one example
overview figure of Communication Entities into that section</t>
<t>Added a section 4 on mapping from existing terms with one
sub-section per term, mainly by moving text from sections 2 and
3</t>
<t>Changed all occurrences of Packet Stream to RTP Stream</t>
<t>Moved all normative references to informative, since this is an
informative document</t>
<t>Added references to RFC 7160, RFC 7197 and RFC 7198, and
removed unused references</t>
</list></t>
</section>
<section title="Modifications Between WG Version -00 and -01">
<t><list style="symbols">
<t>WG version -00 text is identical to individual draft -03</t>
<t>Amended description of SVC SST and MST encodings with respect
to concepts defined in this text</t>
<t>Removed UML as normative reference, since the text no longer
uses any UML notation</t>
<t>Removed a number of level 4 sections and moved out text to the
level above</t>
</list></t>
</section>
<section title="Modifications Between Version -02 and -03">
<t><list style="symbols">
<t>Section 4 rewritten (and new communication topologies added) to
reflect the major updates to Sections 1-3</t>
<t>Section 8 removed (carryover from initial -00 draft)</t>
<t>General clean up of text, grammar and nits</t>
</list></t>
</section>
<section title="Modifications Between Version -01 and -02">
<t><list style="symbols">
<t>Section 2 rewritten to add both streams and transformations in
the media chain.</t>
<t>Section 3 rewritten to focus on exposing relationships.</t>
</list></t>
</section>
<section title="Modifications Between Version -00 and -01">
<t><list style="symbols">
<t>Too many to list</t>
<t>Added new authors</t>
<t>Updated content organization and presentation</t>
</list></t>
</section>
</section>
</back>
</rfc>