<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<?rfc autobreaks="yes"?>
<rfc category="info" docName="draft-ietf-avtext-rtp-grouping-taxonomy-07"
ipr="trust200902">
<front>
<title abbrev="RTP Taxonomy">A Taxonomy of Semantics and Mechanisms for
Real-Time Transport Protocol (RTP) Sources</title>
<author fullname="Jonathan Lennox" initials="J." surname="Lennox">
<organization abbrev="Vidyo">Vidyo, Inc.</organization>
<address>
<postal>
<street>433 Hackensack Avenue</street>
<street>Seventh Floor</street>
<city>Hackensack</city>
<region>NJ</region>
<code>07601</code>
<country>US</country>
</postal>
<email>jonathan@vidyo.com</email>
</address>
</author>
<author fullname="Kevin Gross" initials="K." surname="Gross">
<organization abbrev="AVA">AVA Networks, LLC</organization>
<address>
<postal>
<street/>
<city>Boulder</city>
<region>CO</region>
<country>US</country>
</postal>
<email>kevin.gross@avanw.com</email>
</address>
</author>
<author fullname="Suhas Nandakumar" initials="S." surname="Nandakumar">
<organization>Cisco Systems</organization>
<address>
<postal>
<street>170 West Tasman Drive</street>
<city>San Jose</city>
<region>CA</region>
<code>95134</code>
<country>US</country>
</postal>
<email>snandaku@cisco.com</email>
</address>
</author>
<author fullname="Gonzalo Salgueiro" initials="G." surname="Salgueiro">
<organization>Cisco Systems</organization>
<address>
<postal>
<street>7200-12 Kit Creek Road</street>
<city>Research Triangle Park</city>
<region>NC</region>
<code>27709</code>
<country>US</country>
</postal>
<email>gsalguei@cisco.com</email>
</address>
</author>
<author fullname="Bo Burman" initials="B." role="editor" surname="Burman">
<organization>Ericsson</organization>
<address>
<postal>
<street>Kistavagen 25</street>
<city>SE-16480 Stockholm</city>
<region/>
<code/>
<country>Sweden</country>
</postal>
<phone/>
<facsimile/>
<email>bo.burman@ericsson.com</email>
<uri/>
</address>
</author>
<date day="23" month="June" year="2015"/>
<area>Applications and Real-Time (ART)</area>
<keyword>I-D</keyword>
<keyword>Internet-Draft</keyword>
<keyword>Taxonomy</keyword>
<keyword>Terminology</keyword>
<keyword>RTP</keyword>
<keyword>Grouping</keyword>
<abstract>
<t>The terminology about, and associations among, Real-Time Transport
Protocol (RTP) sources can be complex and somewhat opaque. This document
describes a number of existing and proposed properties and relationships
among RTP sources, and defines common terminology for discussing
protocol entities and their relationships.</t>
</abstract>
</front>
<middle>
<section anchor="introduction" title="Introduction">
<t>The existing taxonomy of sources in <xref target="RFC3550">Real-Time
Transport Protocol (RTP)</xref> has often been regarded as confusing and
inconsistent, making a deep understanding of how the different terms
relate to each other a real challenge.
Frequently cited examples of this confusion are (1) how different
protocols that make use of RTP use the same terms to signify different
things and (2) how the complexities addressed at one layer are often
glossed over or ignored at another.</t>
<t>This document provides some clarity by reviewing the semantics of
various aspects of sources in RTP. As an organizing mechanism, it
approaches this by describing various ways that RTP sources are
transformed on their way between sender and receiver, and how they can
be grouped and associated together.</t>
<t>All non-specific references to ControLling mUltiple streams for
tElepresence (CLUE) in this document map to <xref
target="I-D.ietf-clue-framework"/> and all references to Web Real-Time
Communications (WebRTC) map to <xref
target="I-D.ietf-rtcweb-overview"/>.</t>
</section>
<section title="Concepts">
<t>This section defines concepts that serve to identify and name various
transformations and streams in a given RTP usage. For each concept an
attempt is made to list any alternate definitions and usages that
co-exist today, along with various characteristics that further describe
the concept. These concepts are divided into two categories: one related
to the chain of streams and transformations that media can be subject
to, the other for entities involved in the communication.</t>
<section title="Media Chain">
<t>In the context of this memo, Media is a sequence of synthetic or
<xref target="physical-stimulus">Physical Stimuli</xref> (sound waves,
photons, key-strokes), represented in digital form. Synthesized Media
is typically generated directly in the digital domain.</t>
<t>This section contains the concepts that can be involved in taking
Media at a sender side and transporting it to a receiver, which may
recover a sequence of physical stimuli. This chain of concepts is of
two main types, streams and transformations. Streams are time-based
sequences of samples of the physical stimulus in various
representations, while transformations changes the representation of
the streams in some way.</t>
<t>The examples below are basic ones, and it is important to keep in
mind that this conceptual model enables more complex usages. Some will
be further discussed in later sections of this document. In general
the following applies to this model:<list style="symbols">
<t>A transformation may have zero or more inputs and one or more
outputs.</t>
<t>A stream is of some type, such as audio, video, real-time text,
etc.</t>
<t>A stream has one source transformation and one or more sink
transformations (with the exception of <xref
target="physical-stimulus">Physical Stimulus</xref> that may lack
source or sink transformation).</t>
<t>Streams can be forwarded from a transformation output to any
number of inputs on other transformations that support that
type.</t>
<t>If the output of a transformation is sent to multiple
transformations, those streams will be identical; it takes a
transformation to make them different.</t>
<t>There are no formal limitations on how streams are connected to
transformations.</t>
</list>It is also important to remember that this is a conceptual
model. Thus real-world implementations may look different and have
different structure.</t>
<t>To provide a basic understanding of the relationships in the chain
we first introduce the concepts for the <xref
target="fig-sender-chain">sender side</xref>. This covers physical
stimuli until media packets are emitted onto the network.</t>
<figure align="center" anchor="fig-sender-chain"
title="Sender Side Concepts in the Media Chain">
<artwork align="center"><![CDATA[ Physical Stimulus
|
V
+----------------------+
| Media Capture |
+----------------------+
|
Raw Stream
V
+----------------------+
| Media Source |<- Synchronization Timing
+----------------------+
|
Source Stream
V
+----------------------+
| Media Encoder |
+----------------------+
|
Encoded Stream +------------+
V | V
+----------------------+ | +----------------------+
| Media Packetizer | | | RTP-based Redundancy |
+----------------------+ | +----------------------+
| | |
+-------------+ Redundancy RTP Stream
Source RTP Stream |
V V
+----------------------+ +----------------------+
| RTP-based Security | | RTP-based Security |
+----------------------+ +----------------------+
| |
Secured RTP Stream Secured Redundancy RTP Stream
V V
+----------------------+ +----------------------+
| Media Transport | | Media Transport |
+----------------------+ +----------------------+
]]></artwork>
</figure>
<t>In <xref target="fig-sender-chain"/> we have included a branched
chain to cover the concepts for using redundancy to improve the
reliability of the transport. The Media Transport concept is an
aggregate that is decomposed in <xref target="media-transport"/>.</t>
<t>In <xref target="fig-receiver-chain"/> we review a receiver media
chain matching the sender side, to look at the inverse transformations
and their attempts to recover identical streams as in the sender
chain, subject to what may be lossy compression and imperfect Media
Transport. Note that the streams out of a reverse transformation, like
the Source Stream out of the Media Decoder, are in many cases not the
same as the corresponding ones on the sender side; thus they are
prefixed with "Received" to denote a potentially modified version. The
reason they are not the same lies in transformations that are
irreversible. For example, lossy source coding in the Media
Encoder prevents the Source Stream out of the Media Decoder from being
the same as the one fed into the Media Encoder. Other reasons include
packet loss or late loss in the Media Transport transformation that
even RTP-based Repair, if used, fails to repair. However, some
transformations are not always present, like RTP-based Repair that
cannot operate without Redundancy RTP Streams.</t>
<figure align="center" anchor="fig-receiver-chain"
title="Receiver Side Concepts of the Media Chain">
<artwork align="center"><![CDATA[+----------------------+ +----------------------+
| Media Transport | | Media Transport |
+----------------------+ +----------------------+
Received | Received | Secured
Secured RTP Stream Redundancy RTP Stream
V V
+----------------------+ +----------------------+
| RTP-based Validation | | RTP-based Validation |
+----------------------+ +----------------------+
| |
Received RTP Stream Received Redundancy RTP Stream
| |
| +--------------------+
V V
+----------------------+
| RTP-based Repair |
+----------------------+
|
Repaired RTP Stream
V
+----------------------+
| Media Depacketizer |
+----------------------+
|
Received Encoded Stream
V
+----------------------+
| Media Decoder |
+----------------------+
|
Received Source Stream
V
+----------------------+
| Media Sink |--> Synchronization Information
+----------------------+
|
Received Raw Stream
V
+----------------------+
| Media Renderer |
+----------------------+
|
V
Physical Stimulus
]]></artwork>
</figure>
<section anchor="physical-stimulus" title="Physical Stimulus">
<t>The physical stimulus is a physical event that can be sampled and
converted to digital form by an appropriate sensor or transducer.
This includes sound waves making up audio, photons in a light field,
or other excitations or interactions with sensors, like keystrokes
on a keyboard.</t>
</section>
<section anchor="media-capture" title="Media Capture">
<t>Media Capture is the process of transforming the <xref
target="physical-stimulus">Physical Stimulus</xref> into digital
Media using an appropriate sensor or transducer. The Media Capture
performs a digital sampling of the physical stimulus, usually
periodically, and outputs this in some representation as a <xref
target="raw-stream">Raw Stream</xref>. This data is considered
"Media", because it includes data that is periodically sampled, or
made up of a set of timed asynchronous events. The Media Capture is
normally instantiated in some type of device, i.e., a media capture
device. Examples of different types of media capturing devices are
digital cameras, microphones connected to A/D converters, or
keyboards.</t>
<t>Characteristics:<list style="symbols">
<t>A Media Capture is identified either by hardware/manufacturer
ID or via a session-scoped device identifier as mandated by the
application usage.</t>
<t>A Media Capture can generate an <xref
target="encoded-stream">Encoded Stream </xref> if the capture
device supports such a configuration.</t>
<t>The nature of the Media Capture may impose constraints on the
clock handling in some of the subsequent steps. For example,
many audio or video capture devices are not completely free in
selecting the sample rate.</t>
</list></t>
</section>
<section anchor="raw-stream" title="Raw Stream">
<t>The time progressing stream of digitally sampled information,
usually periodically sampled and provided by a <xref
target="media-capture">Media Capture</xref>. A Raw Stream can also
contain synthesized Media that may not require any explicit Media
Capture, since it is already in an appropriate digital form.</t>
</section>
<section anchor="media-source" title="Media Source">
<t>A Media Source is the logical source of a time progressing
digital media stream synchronized to a reference clock. This stream
is called a <xref target="source-stream">Source Stream</xref>. This
transformation takes one or more <xref target="raw-stream">Raw
Streams</xref> and provides a Source Stream as output. The output is
<xref target="sync-context">synchronized with a reference
clock</xref>, which can be as simple as a system local wall clock or
as complex as an NTP synchronized clock.</t>
<t>The output can be of different types. One type is directly
associated with a particular Media Capture's Raw Stream. Others are
more conceptual sources, like an <xref
target="fig-media-source-mixer">audio mix of multiple Source
Streams</xref>. Mixing multiple streams typically requires that the
input streams can be related in time, meaning that they have
to be <xref target="source-stream">Source Streams</xref> rather than
Raw Streams. In <xref target="fig-media-source-mixer"/>, the
generated Source Stream is a mix of the three input Source
Streams.</t>
<figure align="center" anchor="fig-media-source-mixer"
title="Conceptual Media Source in form of Audio Mixer">
<artwork align="center"><![CDATA[ Source Source Source
Stream Stream Stream
| | |
V V V
+--------------------------+
| Media Source |<-- Reference Clock
| Mixer |
+--------------------------+
|
V
Source Stream
]]></artwork>
</figure>
<t>Another possible example of a conceptual Media Source is a video
surveillance switch, where the input is multiple Source Streams from
different cameras, and the output is one of those Source Streams
based on some selection criteria, like a round-robin or based on
some video activity measure.</t>
</section>
<section anchor="source-stream" title="Source Stream">
<t>A stream of digital samples that has been synchronized with a
reference clock and comes from a particular <xref
target="media-source">Media Source</xref>.</t>
</section>
<section anchor="media-encoder" title="Media Encoder">
<t>A Media Encoder is a transform that is responsible for encoding
the media data from a <xref target="source-stream">Source
Stream</xref> into another representation, usually more compact,
that is output as an <xref target="encoded-stream">Encoded
Stream</xref>.</t>
<t>The Media Encoder step commonly includes pre-encoding
transformations, such as scaling, resampling, etc. The Media Encoder
can have a significant number of configuration options that affect
the properties of the Encoded Stream. These include properties such
as codec, bit-rate, start points for decoding, resolution, bandwidth,
or other fidelity-affecting properties.</t>
<t>Scalable Media Encoders need special attention as they produce
multiple outputs that are potentially of different types. As shown
in <xref target="fig-scalable-media-encoder"/>, a scalable Media
Encoder takes one input Source Stream and encodes it into multiple
output streams of two different types; at least one Encoded Stream
that is independently decodable and one or more <xref
target="dependent-stream">Dependent Streams</xref>. Decoding
requires at least one Encoded Stream and zero or more Dependent
Streams. A Dependent Stream's dependency is one of the grouping
relations this document discusses further in <xref
target="layered-multi-stream"/>.</t>
<figure align="center" anchor="fig-scalable-media-encoder"
title="Scalable Media Encoder Input and Outputs">
<artwork align="center"><![CDATA[ Source Stream
|
V
+--------------------------+
| Scalable Media Encoder |
+--------------------------+
| | ... |
V V V
Encoded Dependent Dependent
Stream Stream Stream
]]></artwork>
</figure>
<t>There are also other variants of encoders, like so-called
Multiple Description Coding (MDC). Such Media Encoders produce
multiple independent and thus individually decodable Encoded
Streams. However, (logically) combining several of these Encoded
Streams into a single Received Source Stream during decoding leads
to an improvement in perceptual reproduced quality when compared to
decoding a single Encoded Stream.</t>
<t>Creating multiple Encoded Streams from the same Source Stream,
where the Encoded Streams are neither in a scalable nor in an MDC
relationship is commonly utilized in <xref
target="I-D.ietf-mmusic-sdp-simulcast">Simulcast</xref>
environments.</t>
</section>
<section anchor="encoded-stream" title="Encoded Stream">
<t>A stream of time synchronized encoded media that can be
independently decoded.</t>
<t>Due to temporal dependencies, an Encoded Stream may have
limitations in where decoding can be started. These entry points,
for example Intra frames from a video encoder, may require
identification and their generation may be event based or configured
to occur periodically.</t>
</section>
<section anchor="dependent-stream" title="Dependent Stream">
<t>A stream of time synchronized encoded media fragments that are
dependent on one or more <xref target="encoded-stream">Encoded
Streams</xref> and zero or more Dependent Streams to be possible to
decode.</t>
<t>Each Dependent Stream has a set of dependencies. These
dependencies must be understood by the parties in a Multimedia
Session that intend to use a Dependent Stream.</t>
</section>
<section anchor="media-packetizer" title="Media Packetizer">
<t>The transformation of taking one or more <xref
target="encoded-stream">Encoded</xref> or <xref
target="dependent-stream">Dependent Streams</xref> and putting their
content into one or more sequences of packets, normally RTP packets,
and output <xref target="rtp-stream">Source RTP Streams</xref>. This
step includes both generating RTP payloads as well as RTP packets.
The Media Packetizer then selects which Synchronization source(s)
(SSRC) <xref target="RFC3550"/> and RTP Sessions to use.</t>
<t>The Media Packetizer can combine multiple Encoded or Dependent
Streams into one or more RTP Streams:<list style="symbols">
<t>The Media Packetizer can use multiple inputs when producing a
single RTP Stream. One such example is <xref
target="layered-multi-stream">SRST packetization when using
Scalable Video Coding (SVC)</xref>.</t>
<t>The Media Packetizer can also produce multiple RTP Streams,
for example when Encoded and/or Dependent Streams are
distributed over multiple RTP Streams. One example of this is
<xref target="layered-multi-stream">MRMT packetization when
using SVC</xref>.</t>
</list></t>
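As a simplified, non-normative sketch (in Python) of the fragmentation aspect of this transformation: one Encoded Stream access unit is split into payload-sized fragments, each of which would then be wrapped in an RTP header. The function name and size limit here are illustrative assumptions; actual RTP payload formats define codec-specific fragmentation rules.

```python
def packetize(encoded_frame: bytes, max_payload: int = 1200) -> list:
    """Split one encoded frame into RTP-payload-sized fragments.

    Illustrative only: real payload formats (defined per codec in
    their RTP payload format RFCs) add their own fragmentation headers.
    """
    return [encoded_frame[i:i + max_payload]
            for i in range(0, len(encoded_frame), max_payload)]

# A hypothetical 3000-byte encoded video frame becomes three payloads.
payloads = packetize(bytes(3000), max_payload=1200)
```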
</section>
<section anchor="rtp-stream" title="RTP Stream">
<t>A stream of RTP packets containing media data, source or
redundant. The RTP Stream is identified by an SSRC belonging to a
particular RTP Session. The RTP Session is identified as discussed
in <xref target="rtp-session"/>.</t>
<t>A Source RTP Stream is an RTP Stream containing at least some
content from an <xref target="encoded-stream">Encoded Stream</xref>
at some point during its lifetime. Source material is any media
material that is produced for transport over RTP without any
additional RTP-based redundancy applied. Note that RTP-based
redundancy excludes the type of redundancy that a suitable <xref
target="media-encoder">Media Encoder</xref> may add to the media
format of the Encoded Stream to make it cope better with
inevitable RTP packet losses. This is further described in <xref
target="rtp-based-redundancy">RTP-based Redundancy</xref> and <xref
target="redundancy-rtp-stream">Redundancy RTP Stream</xref>.</t>
<t>Characteristics:<list style="symbols">
<t>Each RTP Stream is identified by a Synchronization source
(SSRC) <xref target="RFC3550"/> that is carried in every RTP and
RTP Control Protocol (RTCP) packet header. The SSRC is unique in
a specific RTP Session context.</t>
<t>At any given point in time, an RTP Stream can have one and
only one SSRC, but SSRCs for a given RTP Stream can change over
time. SSRC collision and <xref target="RFC7160">clock rate
change</xref> are examples of valid reasons to change SSRC for
an RTP Stream. In those cases, the RTP Stream itself is not
changed in any significant way, only the identifying SSRC
number.</t>
<t>Each SSRC defines a unique RTP sequence numbering and timing
space.</t>
<t>Several RTP Streams, each with their own SSRC, may represent
a single Media Source.</t>
<t>Several RTP Streams, each with their own SSRC, can be carried
in a single RTP Session.</t>
</list></t>
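The identifiers listed above can be illustrated with a minimal, non-normative Python sketch that parses the fixed 12-byte RTP header from RFC 3550 to recover the SSRC together with the per-SSRC sequence number and timestamp; the helper name is ours, and header extensions and CSRC entries are ignored:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550, Section 5.1)."""
    if len(packet) < 12:
        raise ValueError("packet shorter than fixed RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # must be 2 for RTP
        "payload_type": b1 & 0x7F,   # identifies the media format
        "sequence_number": seq,      # per-SSRC sequence numbering space
        "timestamp": ts,             # per-SSRC timing space
        "ssrc": ssrc,                # identifies the RTP Stream
    }

# Example: version 2, payload type 96, seq 1, ts 1000, SSRC 0x12345678
pkt = struct.pack("!BBHII", 0x80, 96, 1, 1000, 0x12345678)
hdr = parse_rtp_header(pkt)
```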
</section>
<section anchor="rtp-based-redundancy" title="RTP-based Redundancy">
<t>RTP-based Redundancy is defined here as a transformation that
generates redundant or repair packets sent out as a <xref
target="redundancy-rtp-stream">Redundancy RTP Stream</xref> to
mitigate network transport impairments, like packet loss and
delay.</t>
<t>RTP-based Redundancy comes in many flavors. It may generate
independent Repair Streams that are used in addition to the Source
Stream (like <xref target="rtx">RTP Retransmission</xref> and some
special types of Forward Error Correction, like <xref
target="stream-dup">RTP stream duplication</xref>), it may generate a
new Source Stream by combining redundancy information with source
information (using <xref target="fec">XOR FEC</xref> as a <xref
target="red">redundancy payload</xref>), or it may completely replace
the source information with only redundancy packets.</t>
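To illustrate the XOR FEC flavor in a minimal, non-normative way: a repair payload is the bitwise XOR of a group of source payloads, so any single lost payload can be reconstructed from the survivors and the repair payload. Real XOR FEC (e.g., RFC 5109) adds FEC headers, length recovery, and signaling that are omitted in this sketch.

```python
def xor_parity(packets: list) -> bytes:
    """XOR the given payloads together, padded to the longest one.

    Sketch of the parity operation behind XOR-based FEC; all framing
    and length-recovery details of real FEC schemes are omitted.
    """
    size = max(len(p) for p in packets)
    parity = bytearray(size)
    for p in packets:
        for i, byte in enumerate(p):
            parity[i] ^= byte
    return bytes(parity)

source = [b"pkt-one", b"pkt-two", b"pkt-three"]
repair = xor_parity(source)  # the Redundancy RTP Stream payload

# If exactly one source payload is lost, XOR of the survivors and
# the repair payload reconstructs it (zero-padded to parity length).
recovered = xor_parity([source[0], source[2], repair])
```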
</section>
<section anchor="redundancy-rtp-stream" title="Redundancy RTP Stream">
<t>An <xref target="rtp-stream">RTP Stream</xref> that contains no
original source data, only redundant data, which may either be used
standalone or be combined with one or more <xref
target="received-rtp-stream">Received RTP Streams</xref> to produce
<xref target="repaired-rtp-stream">Repaired RTP Streams</xref>.</t>
</section>
<section anchor="rtp-based-security" title="RTP-based Security">
<t>The optional RTP-based Security transformation applies security
services such as authentication, integrity protection and
confidentiality to an input RTP Stream, like what is specified in
<xref target="RFC3711">The Secure Real-time Transport Protocol
(SRTP)</xref>, producing a <xref target="secured-rtp-stream">Secured
RTP Stream</xref>. Either an <xref target="rtp-stream">RTP
Stream</xref> or a <xref target="redundancy-rtp-stream">Redundancy
RTP Stream</xref> can be used as input to this transformation.</t>
<t>In SRTP and the related Secure RTCP (SRTCP), all of the
above-mentioned security services are optional, except for integrity
protection of SRTCP, which is mandatory. Confidentiality
(encryption) is also effectively optional in SRTP, since it is possible
to use a NULL encryption algorithm. As described in <xref
target="RFC7201"/>, the strength of SRTP data origin authentication
depends on the cryptographic transform and key management used, for
example in group communication where it is sometimes possible to
authenticate group membership but not the actual RTP Stream
sender.</t>
<t>RTP-based Security and RTP-based Redundancy can be combined in a
few different ways. One way is depicted in <xref
target="fig-sender-chain"/>, where an RTP Stream and its
corresponding Redundancy RTP Stream are protected by separate
RTP-based Security transforms. In other cases, like when a Media
Translator is adding FEC in Section 3.2.1.3 of <xref
target="I-D.ietf-avtcore-rtp-topologies-update"/>, a middlebox can
apply RTP-based Redundancy to an already Secured RTP Stream instead
of a Source RTP Stream. One example of that is depicted in <xref
target="fig-secure-redundancy"/> below.</t>
<figure align="center" anchor="fig-secure-redundancy"
title="Adding Redundancy to a Secured RTP Stream">
<artwork align="center"><![CDATA[ Source RTP Stream +------------+
V | V
+----------------------+ | +----------------------+
| RTP-based Security | | | RTP-based Redundancy |
+----------------------+ | +----------------------+
| | |
| | Redundancy RTP Stream
+-------------+ |
| V
| +----------------------+
Secured RTP Stream | RTP-based Security |
| +----------------------+
| |
| Secured Redundancy RTP Stream
V V
+----------------------+ +----------------------+
| Media Transport | | Media Transport |
+----------------------+ +----------------------+
]]></artwork>
</figure>
<t>In this case, the Redundancy RTP Stream may already have been
secured for confidentiality (encrypted) by the first RTP-based
Security, and it may therefore not be necessary to apply additional
confidentiality protection in the second RTP-based Security. To
avoid attacks and negative impact on <xref
target="rtp-based-repair">RTP-based Repair</xref> and the resulting
<xref target="repaired-rtp-stream">Repaired RTP Stream</xref>, it is
however still necessary to have this second RTP-based Security apply
both authentication and integrity protection to the Redundancy RTP
Stream.</t>
</section>
<section anchor="secured-rtp-stream" title="Secured RTP Stream">
<t>A Secured RTP Stream is a Source or Redundancy RTP Stream that is
protected through <xref target="rtp-based-security">RTP-based
Security </xref> by one or more of the confidentiality, integrity,
or authentication security services.</t>
</section>
<section anchor="media-transport" title="Media Transport">
<t>A Media Transport defines the transformation that the <xref
target="rtp-stream">RTP Streams</xref> are subjected to by the
end-to-end transport from one RTP sender to one specific RTP
receiver (an <xref target="rtp-session">RTP Session</xref> may
contain multiple RTP receivers per sender). Each Media Transport is
defined by a transport association that is normally identified by a
5-tuple (source address, source port, destination address,
destination port, transport protocol), but a proposal exists for
sending <xref
target="I-D.westerlund-avtcore-transport-multiplexing">multiple
transport associations on a single 5-tuple</xref>.</t>
<t>Characteristics:<list style="symbols">
<t>Media Transport transmits RTP Streams of RTP Packets from a
source transport address to a destination transport address.</t>
<t>Each Media Transport contains only a single RTP Session.</t>
<t>A single RTP Session can span multiple Media Transports.</t>
</list></t>
<t>The Media Transport concept sometimes needs to be decomposed into
more steps to enable discussion of what a sender emits that gets
transformed by the network before it is received by the receiver.
Thus we also provide this <xref target="fig-media-transport">Media
Transport decomposition</xref>.</t>
<figure align="center" anchor="fig-media-transport"
title="Decomposition of Media Transport">
<artwork align="center"><![CDATA[ RTP Stream
|
V
+--------------------------+
| Media Transport Sender |
+--------------------------+
|
Sent RTP Stream
V
+--------------------------+
| Network Transport |
+--------------------------+
|
Transported RTP Stream
V
+--------------------------+
| Media Transport Receiver |
+--------------------------+
|
V
Received RTP Stream
]]></artwork>
</figure>
</section>
<section anchor="media-transport-sender"
title="Media Transport Sender">
<t>The first transformation within the <xref
target="media-transport">Media Transport</xref> is the Media
Transport Sender. The sending <xref
target="endpoint">Endpoint</xref> takes an RTP Stream and emits the
packets onto the network using the transport association established
for this Media Transport, thereby creating a <xref
target="sent-rtp-stream">Sent RTP Stream</xref>. In the process, it
transforms the RTP Stream in several ways. First, it generates the
necessary protocol headers for the transport association, for
example IP and UDP headers, thus forming IP/UDP/RTP packets. In
addition, the Media Transport Sender may queue, pace or otherwise
affect how the packets are emitted onto the network, thereby
potentially introducing the delay, jitter, and inter-packet spacing that
characterize the Sent RTP Stream.</t>
</section>
<section anchor="sent-rtp-stream" title="Sent RTP Stream">
<t>The Sent RTP Stream is the RTP Stream as it enters the first hop
of the network path to its destination. The Sent RTP Stream is
identified using network transport addresses, like for IP/UDP the
5-tuple (source IP address, source port, destination IP address,
destination port, and protocol (UDP)).</t>
</section>
<section anchor="network-transport" title="Network Transport">
<t>Network Transport is the transformation that the <xref
target="sent-rtp-stream">Sent RTP Stream</xref> is subjected to while
traveling from the source to the destination through the network. This
transformation can result in loss of some packets, varying delay on
a per packet basis, packet duplication, and packet header or data
corruption. This transformation produces a <xref
target="transported-rtp-stream">Transported RTP Stream</xref> at the
exit of the network path.</t>
</section>
<section anchor="transported-rtp-stream"
title="Transported RTP Stream">
<t>The RTP Stream that is emitted out of the network path at the
destination, subjected to the <xref
target="network-transport">Network Transport's
transformation</xref>.</t>
</section>
<section anchor="media-transport-receiver"
title="Media Transport Receiver">
<t>The receiver <xref target="endpoint">Endpoint's</xref>
transformation of the <xref
target="transported-rtp-stream">Transported RTP Stream</xref> by its
reception process, which results in the <xref
target="received-rtp-stream">Received RTP Stream</xref>. This
transformation includes verification of transport checksums. Sensible
system designs typically either discard packets with mismatching
checksums, or pass them on while marking them in the
resulting Received RTP Stream so as to alert subsequent transformations
to the possibly corrupt state. In this context it is worth noting
that there is typically some probability for corrupt packets to pass
through undetected (with a seemingly correct checksum). Other
transformations can compensate for delay variations in receiving a
packet on the network interface and providing it to the application
(de-jitter buffer).</t>
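The de-jitter buffer mentioned above can be sketched, non-normatively, as a structure that holds each packet for a fixed playout delay and releases packets in sequence-number order (ignoring sequence-number wraparound, which real implementations must handle):

```python
import heapq

class DejitterBuffer:
    """Minimal de-jitter buffer sketch: hold packets for a fixed
    playout delay, then release them in RTP sequence-number order."""

    def __init__(self, playout_delay: float):
        self.playout_delay = playout_delay
        self.heap = []  # entries: (sequence_number, arrival_time, payload)

    def insert(self, seq: int, arrival_time: float, payload: bytes):
        heapq.heappush(self.heap, (seq, arrival_time, payload))

    def pop_due(self, now: float):
        """Release packets whose playout deadline has passed, in order."""
        out = []
        while self.heap and now - self.heap[0][1] >= self.playout_delay:
            seq, _, payload = heapq.heappop(self.heap)
            out.append((seq, payload))
        return out

buf = DejitterBuffer(playout_delay=0.1)
buf.insert(2, arrival_time=0.00, payload=b"b")  # arrived out of order
buf.insert(1, arrival_time=0.02, payload=b"a")
released = buf.pop_due(now=0.15)  # both due; released in sequence order
```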
</section>
<section anchor="received-secured-rtp-stream"
title="Received Secured RTP Stream">
<t>This is the <xref target="secured-rtp-stream">Secured RTP
Stream</xref> resulting from the <xref
target="media-transport">Media Transport</xref> aggregate
transformation.</t>
</section>
<section anchor="rtp-based-validation" title="RTP-based Validation">
<t>RTP-based Validation is the reverse transformation of <xref
target="rtp-based-security">RTP-based Security</xref>. If this
transformation fails, the result is either not usable and must be
discarded, or may be usable but cannot be trusted. If the
transformation succeeds, the result can be a <xref
target="received-rtp-stream">Received RTP Stream</xref> or a <xref
target="received-redundancy-rs">Received Redundancy RTP
Stream</xref>, depending on what was input to the corresponding
RTP-based Security transformation, but can also be a <xref
target="received-secured-rtp-stream">Received Secured RTP
Stream</xref> in case several RTP-based Security transformations
were applied.</t>
</section>
<section anchor="received-rtp-stream" title="Received RTP Stream">
<t>The <xref target="rtp-stream">RTP Stream</xref> resulting from
the <xref target="media-transport">Media Transport's aggregate
transformation</xref>, i.e. subjected to packet loss, packet
corruption, packet duplication and varying transmission delay from
sender to receiver.</t>
</section>
<section anchor="received-redundancy-rs"
title="Received Redundancy RTP Stream">
<t>The <xref target="redundancy-rtp-stream">Redundancy RTP
Stream</xref> resulting from the Media Transport transformation,
i.e. subjected to packet loss, packet corruption, and varying
transmission delay from sender to receiver.</t>
</section>
<section anchor="rtp-based-repair" title="RTP-based Repair">
<t>RTP-based Repair is a Transformation that takes as input zero or
more <xref target="received-rtp-stream">Received RTP Streams</xref>
and one or more <xref target="received-redundancy-rs">Received
Redundancy RTP Streams</xref>, and produces one or more <xref
target="repaired-rtp-stream">Repaired RTP Streams</xref> that are as
close to the corresponding sent <xref target="rtp-stream">Source RTP
Streams</xref> as possible, using different RTP-based repair
methods, for example the ones referred to in <xref
target="rtp-based-redundancy">RTP-based Redundancy</xref>.</t>
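<t>As a minimal sketch of this transformation, assuming the simplest repair method, where the Redundancy RTP Stream is a duplicate of the Source RTP Stream (as in RTP Stream Duplication), repair amounts to filling sequence-number gaps in the Received RTP Stream from the Received Redundancy RTP Stream; the function below is hypothetical:</t>
<figure><artwork><![CDATA[
```python
def repair(received, redundancy):
    """Merge a Received RTP Stream with a duplicate Received Redundancy
    RTP Stream, both given as dicts keyed by RTP sequence number.
    Returns the Repaired RTP Stream as a dict: seq -> payload."""
    repaired = dict(redundancy)   # start from the redundancy copies
    repaired.update(received)     # primary packets take precedence
    return dict(sorted(repaired.items()))
```
]]></artwork></figure>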
</section>
<section anchor="repaired-rtp-stream" title="Repaired RTP Stream">
<t>A <xref target="received-rtp-stream">Received RTP Stream</xref>
for which <xref target="received-redundancy-rs">Received Redundancy
RTP Stream</xref> information has been used to try to recover the
<xref target="rtp-stream">Source RTP Stream</xref> as it was before
<xref target="media-transport">Media Transport</xref>.</t>
</section>
<section anchor="media-depacketizer" title="Media Depacketizer">
<t>A Media Depacketizer takes one or more <xref
target="rtp-stream">RTP Streams</xref>, depacketizes them, and
attempts to reconstitute the <xref target="encoded-stream">Encoded
Streams</xref> or <xref target="dependent-stream">Dependent
Streams</xref> present in those RTP Streams.</t>
<t>In practical implementations, the Media Depacketizer and the
Media Decoder may be tightly coupled and share information to
improve or optimize the overall decoding and error concealment
process. It is, however, not expected that there would be any
benefit in defining a taxonomy for those detailed (and likely very
implementation-dependent) steps.</t>
</section>
<section anchor="received-encoded-stream"
title="Received Encoded Stream">
<t>The received version of an <xref target="encoded-stream">Encoded
Stream</xref>.</t>
</section>
<section anchor="media-decoder" title="Media Decoder">
<t>A Media Decoder is a transformation that is responsible for
decoding <xref target="encoded-stream">Encoded Streams</xref> and
any <xref target="dependent-stream">Dependent Streams</xref> into a
<xref target="source-stream">Source Stream</xref>.</t>
<t>In practical implementations, the Media Decoder and the Media
Depacketizer may be tightly coupled and share information to improve
or optimize the overall decoding process in various ways. It is,
however, not expected that there would be any benefit in defining a
taxonomy for those detailed (and likely very
implementation-dependent) steps.</t>
<t>A Media Decoder has to deal with any errors in the Encoded
Streams that result from corruption or failure to repair packet
losses. It is therefore commonly robust to errors and losses, and
includes concealment methods.</t>
</section>
<section anchor="received-source-stream"
title="Received Source Stream">
<t>The received version of a <xref target="source-stream">Source
Stream</xref>.</t>
</section>
<section anchor="media-sink" title="Media Sink">
<t>The Media Sink receives a <xref target="source-stream">Source
Stream</xref> that contains, usually periodically, sampled media
data together with associated synchronization information. Depending
on application, this Source Stream then needs to be transformed into
a <xref target="raw-stream">Raw Stream</xref> that is conveyed to
the <xref target="media-render">Media Render</xref>, synchronized
with the output from other Media Sinks. The Media Sink may also be
connected with a <xref target="media-source">Media Source</xref> and
be used as part of a conceptual Media Source.</t>
<t>The Media Sink can further transform the Source Stream into a
representation that is suitable for rendering on the Media Render as
defined by the application or system-wide configuration. This
includes sample scaling, level adjustments, etc.</t>
</section>
<section anchor="received-raw-stream" title="Received Raw Stream">
<t>The received version of a <xref target="raw-stream">Raw
Stream</xref>.</t>
</section>
<section anchor="media-render" title="Media Render">
<t>A Media Render takes a <xref target="raw-stream">Raw
Stream</xref> and converts it into <xref
target="physical-stimulus">Physical Stimulus</xref> that a human
user can perceive. Examples of such devices are screens and D/A
converters connected to amplifiers and loudspeakers.</t>
<t>An Endpoint can potentially have multiple Media Renders for each
media type.</t>
</section>
</section>
<section anchor="communication-entities" title="Communication Entities">
<t>This section contains concepts for entities involved in the
communication.</t>
<figure align="center" anchor="fig-p2p"
title="Example Point to Point Communication Session with two RTP Sessions">
<artwork align="center"><![CDATA[
+------------------------------------------------------------+
| Communication Session |
| |
| +----------------+ +----------------+ |
| | Participant A | +------------+ | Participant B | |
| | | | Multimedia | | | |
| | +------------+ |<==>| Session |<==>| +------------+ | |
| | | Endpoint A | | | | | | Endpoint B | | |
| | | | | +------------+ | | | | |
| | | +----------+-+----------------------+-+----------+ | | |
| | | | RTP | | | | | | | |
| | | | Session |-+---Media Transport----+>| | | | |
| | | | Audio |<+---Media Transport----+-| | | | |
| | | | | | ^ | | | | | |
| | | +----------+-+----------|-----------+-+----------+ | | |
| | | | | v | | | | |
| | | | | +-----------------+ | | | | |
| | | | | | Synchronization | | | | | |
| | | | | | Context | | | | | |
| | | | | +-----------------+ | | | | |
| | | | | ^ | | | | |
| | | +----------+-+----------|-----------+-+----------+ | | |
| | | | RTP | | v | | | | | |
| | | | Session |<+---Media Transport----+-| | | | |
| | | | Video |-+---Media Transport----+>| | | | |
| | | | | | | | | | | |
| | | +----------+-+----------------------+-+----------+ | | |
| | +------------+ | | +------------+ | |
| +----------------+ +----------------+ |
+------------------------------------------------------------+
]]></artwork>
</figure>
<t><xref target="fig-p2p"/> shows a high-level example representation
of a very basic point-to-point Communication Session between
Participants A and B. It uses two different audio and video RTP
Sessions between A's and B's Endpoints, using separate Media
Transports for those RTP Sessions. The Multimedia Session shared by
the Participants can, for example, be established using SIP (i.e.,
there is a SIP Dialog between A and B). The terms used in <xref
target="fig-p2p"/> are further elaborated in the sub-sections
below.</t>
<section anchor="endpoint" title="Endpoint">
<t>A single addressable entity sending or receiving RTP packets. It
may be decomposed into several functional blocks, but as long as it
behaves as a single RTP stack entity it is classified as a single
"Endpoint".</t>
<t>Characteristics:<list style="symbols">
<t>Endpoints can be identified in several different ways. While
RTCP Canonical Names (CNAMEs) <xref target="RFC3550"/> provide a
globally unique and stable identification mechanism for the
duration of the Communication Session (see <xref
target="comm-session"/>), their validity applies exclusively
within a <xref target="sync-context">Synchronization
Context</xref>. Thus one Endpoint can handle multiple CNAMEs,
each of which can be shared among a set of Endpoints belonging
to the same <xref target="participant">Participant</xref>.
Therefore, mechanisms outside the scope of RTP, such as
application-defined mechanisms, must be used to provide Endpoint
identification when outside this Synchronization Context.</t>
<t>An Endpoint can be associated with at most one <xref
target="participant">Participant</xref> at any single point in
time.</t>
<t>In some contexts, an Endpoint would typically correspond to a
single "host", for example a computer using a single network
interface and being used by a single human user. In other
contexts, a single "host" can serve multiple Participants, in
which case each Participant's Endpoint may share properties, for
example the IP address part of a transport address.</t>
</list></t>
</section>
<section anchor="rtp-session" title="RTP Session">
<t>An RTP Session is an association among a group of Participants
communicating with RTP. It is a group communications channel which
can potentially carry a number of RTP Streams. Within an RTP
Session, every Participant can find meta-data and control
information (over RTCP) about all the RTP Streams in the RTP
Session. The bandwidth of the RTCP control channel is shared between
all Participants within an RTP Session.</t>
<t>Characteristics:<list style="symbols">
<t>An RTP Session can carry one or more RTP Streams.</t>
<t>An RTP Session shares a single SSRC space as defined in
RFC3550 <xref target="RFC3550"/>. That is, the Endpoints
participating in an RTP Session can see an SSRC identifier
transmitted by any of the other Endpoints. An Endpoint can
receive an SSRC either as SSRC or as a Contributing source
(CSRC) in RTP and RTCP packets, as defined by the Endpoints'
network interconnection topology.</t>
<t>An RTP Session uses at least two <xref
target="media-transport">Media Transports</xref>, one for
sending and one for receiving. Commonly, the receiving Media
Transport is the reverse direction of the Media Transport used
for sending. An RTP Session may use many Media Transports and
these define the session's network interconnection topology.</t>
<t>A single Media Transport always carries a single RTP
Session.</t>
<t>Multiple RTP Sessions can be conceptually related, for
example originating from or targeted for the same <xref
target="participant">Participant</xref> or <xref
target="endpoint">Endpoint</xref>, or by containing RTP Streams
that are somehow <xref target="relations">related</xref>.</t>
</list></t>
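<t>As an illustration of the shared SSRC space, the following hypothetical sketch extracts the SSRC and any CSRCs from the fixed RTP header defined in RFC 3550:</t>
<figure><artwork><![CDATA[
```python
import struct

def rtp_sources(packet):
    """Extract the SSRC and any CSRCs from a raw RTP packet,
    following the fixed header layout of RFC 3550: byte 0 carries
    V/P/X/CC, the SSRC sits at bytes 8-11, and CC CSRCs follow."""
    first, = struct.unpack_from("!B", packet, 0)
    csrc_count = first & 0x0F              # CC field: number of CSRCs
    ssrc, = struct.unpack_from("!I", packet, 8)
    csrcs = list(struct.unpack_from("!%dI" % csrc_count, packet, 12))
    return ssrc, csrcs
```
]]></artwork></figure>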
</section>
<section anchor="participant" title="Participant">
<t>A Participant is an entity reachable by a single signaling
address, and is thus related more to the signaling context than to
the media context.</t>
<t>Characteristics:<list style="symbols">
<t>A single signaling-addressable entity, using an
application-specific signaling address space, for example a SIP
URI.</t>
<t>A Participant can participate in several <xref
target="multimedia-session">Multimedia Sessions</xref>.</t>
<t>A Participant can be composed of several associated <xref
target="endpoint">Endpoints</xref>.</t>
</list></t>
</section>
<section anchor="multimedia-session" title="Multimedia Session">
<t>A Multimedia Session is an association among a group of <xref
target="participant">Participants</xref> engaged in the
communication via one or more <xref target="rtp-session">RTP
Sessions</xref>. It defines logical relationships among <xref
target="media-source">Media Sources</xref> that appear in multiple
RTP Sessions.</t>
<t>Characteristics:<list style="symbols">
<t>A Multimedia Session can be composed of several RTP Sessions
with potentially multiple RTP Streams per RTP Session.</t>
<t>Each Participant in a Multimedia Session can have a multitude
of Media Captures and Media Rendering devices.</t>
<t>A single Multimedia Session can contain media from one or
more <xref target="sync-context">Synchronization
Contexts</xref>. An example of that is a Multimedia Session
containing one set of audio and video for communication purposes
belonging to one Synchronization Context, and another set of
audio and video for presentation purposes (like playing a video
file) with a separate Synchronization Context that has no strong
timing relationship and need not be strictly synchronized with
the audio and video used for communication.</t>
</list></t>
</section>
<section anchor="comm-session" title="Communication Session">
<t>A Communication Session is an association among two or more <xref
target="participant">Participants</xref> communicating with each
other via one or more <xref target="multimedia-session">Multimedia
Sessions</xref>.</t>
<t>Characteristics:<list style="symbols">
<t>Each Participant in a Communication Session is identified via
an application-specific signaling address.</t>
<t>A Communication Session is composed of Participants that
share at least one Multimedia Session, involving one or more
parallel RTP Sessions with potentially multiple RTP Streams per
RTP Session.</t>
</list></t>
<t>For example, in a full mesh communication, the Communication
Session consists of a set of separate Multimedia Sessions between
each pair of Participants. Another example is a centralized
conference, where the Communication Session consists of a set of
Multimedia Sessions between each Participant and the conference
handler.</t>
</section>
</section>
</section>
<section anchor="relations" title="Concepts of Inter-Relations">
<t>This section uses the concepts from previous sections, and looks at
different types of relationships among them. These relationships occur
at different abstraction levels and for different purposes, but the
reason for the needed relationship at a certain step in the media
handling chain may exist at another step. For example, the use of <xref
target="simulcast">Simulcast</xref> implies a need to determine
relations at the RTP Stream level, but the underlying reason is that
multiple Media Encoders use the same Media Source, i.e. there is a
need to identify a common Media Source.</t>
<section anchor="sync-context" title="Synchronization Context">
<t>A Synchronization Context defines a requirement on a strong timing
relationship between the Media Sources, typically requiring alignment
of clock sources. Such a relationship can be identified in multiple
ways as listed below. A single Media Source can belong to only a
single Synchronization Context, since it is assumed that a single
Media Source can have only a single media clock, and requiring
alignment to several Synchronization Contexts (and thus reference
clocks) would effectively merge those into a single Synchronization
Context.</t>
<section anchor="cname" title="RTCP CNAME">
<t>RFC3550 <xref target="RFC3550"/> describes Inter-media
synchronization between RTP Sessions based on RTCP CNAME, RTP and
Network Time Protocol (NTP) <xref target="RFC5905"/> formatted
timestamps of a reference clock. As indicated in <xref
target="RFC7273"/>, despite using NTP format timestamps, it is not
required that the clock be synchronized to an NTP source.</t>
</section>
<section title="Clock Source Signaling">
<t><xref target="RFC7273"/> provides a mechanism to signal the clock
source in <xref target="RFC4566">Session Description Protocol
(SDP)</xref> both for the reference clock as well as the media
clock, thus allowing a Synchronization Context to be defined beyond
the one defined by the usage of CNAME source descriptions.</t>
</section>
<section title="Implicitly via RtcMediaStream">
<t>WebRTC defines "RtcMediaStream" with one or more
"RtcMediaStreamTracks". All tracks in a "RtcMediaStream" are
intended to be synchronized when rendered, implying that they must
be generated such that synchronization is possible.</t>
</section>
<section title="Explicitly via SDP Mechanisms">
<t><xref target="RFC5888">The SDP Grouping Framework</xref> defines
an <xref target="media-description">m= line</xref> grouping
mechanism called "Lip Synchronization" (with LS identification-tag)
for establishing the synchronization requirement across m= lines
when they map to individual sources.</t>
<t><xref target="RFC5576">Source-Specific Media Attributes in
SDP</xref> extends the above mechanism when multiple Media Sources
are described by a single m= line.</t>
</section>
</section>
<section title="Endpoint">
<t>Some applications require knowledge of which Media Sources
originate from a particular <xref target="endpoint">Endpoint</xref>.
This can inform decisions such as packet routing between parts of the
topology, based on the Endpoint origin of the RTP Streams.</t>
<t>In RTP, this identification has been overloaded with the <xref
target="sync-context">Synchronization Context</xref> through the usage
of the RTCP source description <xref target="cname">CNAME</xref>. This
works for some usages, but in others it breaks down. For example, if
an Endpoint has two sets of Media Sources that have different
Synchronization Contexts, like the audio and video of the human
Participant as well as a set of Media Sources of audio and video for a
shared movie, CNAME would not be an appropriate identification for
that Endpoint. Therefore, an Endpoint may have multiple CNAMEs. The
CNAMEs or the Media Sources themselves can be related to the
Endpoint.</t>
</section>
<section title="Participant">
<t>In communication scenarios, it is commonly needed to know which
Media Sources originate from which <xref
target="participant">Participant</xref>. One reason is, for example,
to enable the application to display Participant Identity information
correctly associated with the Media Sources. This association is
handled through the signaling solution to point at a specific
Multimedia Session where the Media Sources may be explicitly or
implicitly tied to a particular Endpoint.</t>
<t>Participant information becomes more problematic due to Media
Sources that are generated through mixing or other conceptual
processing of Raw Streams or Source Streams that originate from
different Participants. Such Media Sources can thus have a
dynamically varying set of origins and Participants. RTP contains the
CSRC concept, which carries information about the previous-step origin
of the included media content at the RTP level.</t>
</section>
<section title="RtcMediaStream">
<t>An RtcMediaStream in WebRTC is an explicit grouping of a set of
Media Sources (RtcMediaStreamTracks) that share a common identifier
and a single <xref target="sync-context">Synchronization
Context</xref>.</t>
</section>
<section title="Multi-Channel Audio">
<t>There exist a number of RTP payload formats that can carry
multi-channel audio, despite the codec being a single-channel (mono)
encoder. Multi-channel audio can be viewed as multiple Media Sources
sharing a common Synchronization Context. These are independently
encoded by a Media Encoder and the different Encoded Streams are
packetized together in a time-synchronized way into a single Source
RTP Stream, using that codec's RTP payload format. Examples of
codecs that support multi-channel audio are <xref
target="RFC3551">PCMA and PCMU</xref>, <xref
target="RFC4867">AMR</xref>, and <xref
target="RFC5404">G.719</xref>.</t>
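<t>The time-synchronized packetization described above can be sketched as follows; the framing is hypothetical and does not correspond to any specific RTP payload format:</t>
<figure><artwork><![CDATA[
```python
def packetize_multichannel(channel_frames):
    """Sketch of time-synchronized multi-channel packetization.
    channel_frames is a list of per-channel lists of encoded frames;
    frames with the same index share the same sampling instant and are
    interleaved into one payload per instant (hypothetical framing)."""
    payloads = []
    for frames_at_instant in zip(*channel_frames):
        payloads.append(b"".join(frames_at_instant))
    return payloads
```
]]></artwork></figure>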
</section>
<section anchor="simulcast" title="Simulcast">
<t>A Media Source represented as multiple independent Encoded Streams
constitutes a <xref
target="I-D.ietf-mmusic-sdp-simulcast">Simulcast</xref> or Multiple Description Coding (MDC) of that
Media Source. <xref target="fig-simulcast"/> shows an example of a
Media Source that is encoded into three separate Simulcast streams
that are in turn sent on the same Media Transport flow. When using
Simulcast, the RTP Streams may share an RTP Session and Media
Transport, be separated onto different RTP Sessions and Media
Transports, or use any combination of these two. One major reason to use
separate Media Transports is to make use of different Quality of
Service for the different Source RTP Streams. Some considerations on
separating related RTP Streams are discussed in <xref
target="rtp-stream-separation"/>.</t>
<figure anchor="fig-simulcast"
title="Example of Media Source Simulcast">
<artwork align="center"><![CDATA[ +----------------+
| Media Source |
+----------------+
Source Stream |
+----------------------+----------------------+
| | |
V V V
+------------------+ +------------------+ +------------------+
| Media Encoder | | Media Encoder | | Media Encoder |
+------------------+ +------------------+ +------------------+
| Encoded | Encoded | Encoded
| Stream | Stream | Stream
V V V
+------------------+ +------------------+ +------------------+
| Media Packetizer | | Media Packetizer | | Media Packetizer |
+------------------+ +------------------+ +------------------+
| Source | Source | Source
| RTP | RTP | RTP
| Stream | Stream | Stream
+-----------------+ | +-----------------+
| | |
V V V
+-------------------+
| Media Transport |
+-------------------+
]]></artwork>
</figure>
<t>The Simulcast relation between the RTP Streams is the common Media
Source. In addition, to be able to identify the common Media Source, a
receiver of the RTP Stream may need to know which configuration or
encoding goals lie behind the produced Encoded Stream and its
properties. This enables selection of the stream that is most useful
in the application at that moment.</t>
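<t>As a hypothetical sketch of such receiver-side selection, assuming each Simulcast stream is described by its configured bitrate:</t>
<figure><artwork><![CDATA[
```python
def select_simulcast_stream(streams, available_kbps):
    """Hypothetical receiver-side selection: pick the highest-bitrate
    Simulcast stream whose configured bitrate fits the available
    bandwidth; fall back to the lowest-bitrate one otherwise.  Each
    stream is a dict with assumed keys "ssrc" and "bitrate_kbps"."""
    fitting = [s for s in streams if s["bitrate_kbps"] <= available_kbps]
    pool = fitting or streams
    return (max if fitting else min)(pool, key=lambda s: s["bitrate_kbps"])
```
]]></artwork></figure>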
</section>
<section anchor="layered-multi-stream" title="Layered Multi-Stream">
<t>Layered Multi-Stream (LMS) is a mechanism by which different
portions of a layered or scalable encoding of a Source Stream are sent
using separate RTP Streams (sometimes in separate RTP Sessions). LMSs
are useful for receiver control of layered media.</t>
<t>A Media Source represented as an Encoded Stream and multiple
Dependent Streams constitutes a Media Source that has layered
dependencies. <xref target="fig-ddp"/> represents an example of a
Media Source that is encoded into three dependent layers, where two
layers are sent on the same Media Transport using different RTP
Streams, i.e. SSRCs, and the third layer is sent on a separate Media
Transport.</t>
<figure align="center" anchor="fig-ddp"
title="Example of Media Source Layered Dependency">
<artwork align="center"><![CDATA[ +----------------+
| Media Source |
+----------------+
|
|
V
+---------------------------------------------------------+
| Media Encoder |
+---------------------------------------------------------+
| | |
Encoded Stream Dependent Stream Dependent Stream
| | |
V V V
+----------------+ +----------------+ +----------------+
|Media Packetizer| |Media Packetizer| |Media Packetizer|
+----------------+ +----------------+ +----------------+
| | |
RTP Stream RTP Stream RTP Stream
| | |
+------+ +------+ |
| | |
V V V
+-----------------+ +-----------------+
| Media Transport | | Media Transport |
+-----------------+ +-----------------+
]]></artwork>
</figure>
<t>It is sometimes useful to make a distinction between using a single
Media Transport or multiple separate Media Transports when (in both
cases) using multiple RTP Streams to carry Encoded Streams and
Dependent Streams for a Media Source. Therefore, the following new
terminology is defined here:</t>
<t><list style="hanging">
<t hangText="SRST:">Single RTP Stream on a Single Media
Transport</t>
<t hangText="MRST:">Multiple RTP Streams on a Single Media
Transport</t>
<t hangText="MRMT:">Multiple RTP Streams on Multiple Media
Transports</t>
</list></t>
<t>MRST and MRMT relations need to identify the common Media Encoder
origin for the Encoded and Dependent Streams. When using different RTP
Sessions (MRMT), a single RTP Stream per Media Encoder, and a single
Media Source in each RTP Session, common SSRC and CNAMEs can be used
to identify the common Media Source. When multiple RTP Streams are
sent from one Media Encoder in the same RTP Session (MRST), then CNAME
is the only currently specified RTP identifier that can be used. In
cases where multiple Media Encoders use multiple Media Sources sharing
Synchronization Context, and thus having a common CNAME, additional
heuristics or identification need to be applied to create the MRST or
MRMT relationships between the RTP Streams.</t>
</section>
<section anchor="stream-dup" title="RTP Stream Duplication">
<t><xref target="RFC7198">RTP Stream Duplication</xref>, using the
same or different Media Transports, and optionally also <xref
target="RFC7197">delaying the duplicate</xref>, offers a simple way to
protect media flows from packet loss in some cases (see <xref
target="fig-duplication"/>). This is a specific type of redundancy.
All but one <xref target="rtp-stream">Source RTP Stream</xref> are
effectively <xref target="redundancy-rtp-stream">Redundancy RTP
Streams</xref>, but since both Source and Redundant RTP Streams are
the same, it does not matter which one is which. This can also be seen
as a specific type of <xref target="simulcast">Simulcast</xref> that
transmits the same <xref target="encoded-stream">Encoded Stream</xref>
multiple times.</t>
<figure anchor="fig-duplication"
title="Example of RTP Stream Duplication">
<artwork align="center"><![CDATA[ +----------------+
| Media Source |
+----------------+
Source Stream |
V
+----------------+
| Media Encoder |
+----------------+
Encoded Stream |
+-----------+-----------+
| |
V V
+------------------+ +------------------+
| Media Packetizer | | Media Packetizer |
+------------------+ +------------------+
Source | RTP Stream Source | RTP Stream
| V
| +-------------+
| | Delay (opt) |
| +-------------+
| |
+-----------+-----------+
|
V
+-------------------+
| Media Transport |
+-------------------+
]]></artwork>
</figure>
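<t>On the receiving side, suppressing the duplicate copies can be sketched as follows (hypothetical class, keyed on the RTP sequence number):</t>
<figure><artwork><![CDATA[
```python
class DuplicateSuppressor:
    """Receiver-side sketch for RTP Stream Duplication: forward the
    first received copy of each sequence number, drop later copies."""

    def __init__(self):
        self.seen = set()

    def receive(self, seq, payload):
        if seq in self.seen:
            return None           # duplicate copy, discard
        self.seen.add(seq)
        return payload
```
]]></artwork></figure>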
</section>
<section anchor="red" title="Redundancy Format">
<t>The <xref target="RFC2198">RTP Payload for Redundant Audio
Data</xref> defines a transport for redundant audio data together with
primary data in the same RTP payload. The redundant data can be a time
delayed version of the primary or another time delayed Encoded Stream
using a different Media Encoder to encode the same Media Source as the
primary, as depicted in <xref target="fig-red-rfc2198"/>.</t>
<figure align="center" anchor="fig-red-rfc2198"
title="Concept for usage of Audio Redundancy with different Media Encoders">
<artwork align="center"><![CDATA[+--------------------+
| Media Source |
+--------------------+
|
Source Stream
|
+------------------------+
| |
V V
+--------------------+ +--------------------+
| Media Encoder | | Media Encoder |
+--------------------+ +--------------------+
| |
| +------------+
Encoded Stream | Time Delay |
| +------------+
| |
| +------------------+
V V
+--------------------+
| Media Packetizer |
+--------------------+
|
V
RTP Stream ]]></artwork>
</figure>
<t>The Redundancy format thus provides the necessary meta-information
to correctly relate different parts of the same Encoded Stream. The
case <xref target="fig-red-rfc2198">depicted above</xref> relates the
Received Source Stream fragments coming out of different Media
Decoders, so that they can be combined into a less erroneous Source
Stream.</t>
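<t>As an illustration, the 4-byte redundant-block header defined in RFC 2198 (F bit, 7-bit block payload type, 14-bit timestamp offset, 10-bit block length) can be constructed as follows; the function names are hypothetical:</t>
<figure><artwork><![CDATA[
```python
import struct

def red_block_header(block_pt, ts_offset, block_len):
    """Build one 4-byte RFC 2198 redundant-block header:
    F=1 (1 bit), block payload type (7 bits), timestamp offset
    (14 bits), block length (10 bits)."""
    word = (1 << 31) | (block_pt << 24) | (ts_offset << 10) | block_len
    return struct.pack("!I", word)

def red_final_header(primary_pt):
    """Final (primary) block header: F=0 plus the 7-bit payload type,
    one byte in total."""
    return struct.pack("!B", primary_pt)
```
]]></artwork></figure>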
</section>
<section anchor="rtx" title="RTP Retransmission">
<t><xref target="fig-rtx"/> shows an example where a Media Source's
Source RTP Stream is protected by a <xref
target="RFC4588">retransmission (RTX) flow</xref>. In this example the
Source RTP Stream and the Redundancy RTP Stream share the same Media
Transport.</t>
<figure align="center" anchor="fig-rtx"
title="Example of Media Source Retransmission Flows">
<artwork align="center"><![CDATA[+--------------------+
| Media Source |
+--------------------+
|
V
+--------------------+
| Media Encoder |
+--------------------+
| Retransmission
Encoded Stream +--------+ +---- Request
V | V V
+--------------------+ | +--------------------+
| Media Packetizer | | | RTP Retransmission |
+--------------------+ | +--------------------+
| | |
+------------+ Redundancy RTP Stream
Source RTP Stream |
| |
+---------+ +---------+
| |
V V
+-----------------+
| Media Transport |
+-----------------+
]]></artwork>
</figure>
<t>The <xref target="fig-rtx">RTP Retransmission example</xref>
illustrates that this mechanism works purely on the Source RTP Stream.
The RTP Retransmission transform buffers the sent Source RTP Stream
and, upon request, emits a retransmitted packet with an extra payload
header as a Redundancy RTP Stream. The <xref target="RFC4588">RTP
Retransmission mechanism</xref> is specified such that there is a one
to one relation between the Source RTP Stream and the Redundancy RTP
Stream. Therefore, a Redundancy RTP Stream needs to be associated with
its Source RTP Stream. This is done based on CNAME selectors and
heuristics to match requested packets for a given Source RTP Stream
with the original sequence number in the payload of any new Redundancy
RTP Stream using the RTX payload format. In cases where the Redundancy
RTP Stream is sent in a different RTP Session than the Source RTP
Stream, the RTP Session relation is signaled by using the <xref
target="RFC5888">SDP Media Grouping's</xref> Flow Identification (FID
identification-tag) semantics.</t>
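<t>The retransmission payload format of RFC 4588, where the original sequence number (OSN) is prepended to the original payload, can be sketched as follows; the function names are hypothetical:</t>
<figure><artwork><![CDATA[
```python
import struct

def rtx_payload(original_seq, original_payload):
    """Build an RFC 4588 retransmission payload: the original sequence
    number (OSN, 2 bytes) followed by the original payload."""
    return struct.pack("!H", original_seq) + original_payload

def rtx_unpack(payload):
    """Recover (original sequence number, original payload), which
    lets the receiver match the retransmission to the lost packet
    of the associated Source RTP Stream."""
    (osn,) = struct.unpack("!H", payload[:2])
    return osn, payload[2:]
```
]]></artwork></figure>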
</section>
<section anchor="fec" title="Forward Error Correction">
<t><xref target="fig-fec"/> shows an example where two Media Sources'
Source RTP Streams are protected by Forward Error Correction (FEC).
Source RTP Stream A has an RTP-based Redundancy transformation in FEC
Encoder 1. This produces a Redundancy RTP Stream 1, that is only
related to Source RTP Stream A. The FEC Encoder 2, however, takes two
Source RTP Streams (A and B) and produces a Redundancy RTP Stream 2
that protects them jointly, i.e. Redundancy RTP Stream 2 relates to
two Source RTP Streams (a FEC group). FEC decoding, when needed due to
packet loss or packet corruption at the receiver, requires knowledge
of which Source RTP Streams the FEC encoding was based on.</t>
<t>In <xref target="fig-fec"/> all RTP Streams are sent on the same
Media Transport. This is however not the only possible choice.
Numerous combinations exist for spreading these RTP Streams over
different Media Transports to achieve the communication application's
goal.</t>
<figure align="center" anchor="fig-fec"
title="Example of FEC Redundancy RTP Streams">
<artwork align="center"><![CDATA[+--------------------+ +--------------------+
| Media Source A | | Media Source B |
+--------------------+ +--------------------+
| |
V V
+--------------------+ +--------------------+
| Media Encoder A | | Media Encoder B |
+--------------------+ +--------------------+
| |
Encoded Stream Encoded Stream
V V
+--------------------+ +--------------------+
| Media Packetizer A | | Media Packetizer B |
+--------------------+ +--------------------+
| |
Source RTP Stream A Source RTP Stream B
| |
+-----+---------+-------------+ +---+---+
| V V V |
| +---------------+ +---------------+ |
| | FEC Encoder 1 | | FEC Encoder 2 | |
| +---------------+ +---------------+ |
| Redundancy | Redundancy | |
| RTP Stream 1 | RTP Stream 2 | |
V V V V
+----------------------------------------------------------+
| Media Transport |
+----------------------------------------------------------+
]]></artwork>
</figure>
<t>As FEC Encoding exists in various forms, the methods for relating
FEC Redundancy RTP Streams with their source information in Source RTP
Streams are many. The <xref target="RFC5109">XOR-based RTP FEC Payload
format</xref> is defined in such a way that a Redundancy RTP Stream
has a one to one relation with a Source RTP Stream. In fact, the RFC
requires the Redundancy RTP Stream to use the same SSRC as the Source
RTP Stream. This requires the use of either a separate RTP Session, or
the <xref target="RFC2198">Redundancy RTP Payload format</xref>. The
underlying relation requirement for this FEC format and a particular
Redundancy RTP Stream is to know the related Source RTP Stream,
including its SSRC.</t>
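<t>The XOR parity principle underlying such FEC can be sketched as follows; this is a simplification that only protects equal-length payloads, unlike the complete RFC 5109 format, which also protects RTP header fields:</t>
<figure><artwork><![CDATA[
```python
def xor_parity(payloads):
    """Compute an XOR parity over equal-length source payloads
    (a simplification: real FEC formats also protect header fields
    and handle unequal payload lengths)."""
    parity = bytearray(len(payloads[0]))
    for p in payloads:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def xor_recover(parity, received_payloads):
    """Recover the single missing payload of a FEC group from the
    parity and the other received payloads of that group."""
    return xor_parity([parity] + received_payloads)
```
]]></artwork></figure>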
</section>
<section anchor="rtp-stream-separation" title="RTP Stream Separation">
<t>RTP Streams can be separated exclusively based on their SSRCs, at
the RTP Session level, or at the Multimedia Session level.</t>
<t>When the RTP Streams that have a relationship are all sent in the
same RTP Session and are uniquely identified based on their SSRC only,
it is termed an SSRC-Only Based Separation. Such streams can be
related via RTCP CNAME to identify that the streams belong to the same
Endpoint. <xref target="RFC5576">SSRC-based approaches</xref>, when
used, can explicitly relate various such RTP Streams.</t>
<t>On the other hand, when RTP Streams that are related are sent in
the context of different RTP Sessions to achieve separation, it is
known as RTP Session-based separation. This is commonly used when the
different RTP Streams are intended for different Media Transports.</t>
<t>Several mechanisms that use RTP Session-based separation rely on it
to enable an implicit grouping mechanism expressing the relationship.
The solutions have been based on using the same SSRC value in the
different RTP Sessions to implicitly indicate their relation. That
way, no explicit RTP-level mechanism has been needed; only
signaling-level relations have been established, using semantics from
the <xref target="RFC5888">Grouping of Media Lines framework</xref>.
Examples of this are <xref target="RFC4588">RTP
Retransmission</xref>, <xref target="RFC6190">SVC Multi-Session
Transmission</xref>, and <xref target="RFC5109">XOR-based
FEC</xref>. The RTCP CNAME explicitly relates RTP Streams across
different RTP Sessions, as explained in the previous section. Such a
relationship can be used to perform inter-media synchronization.</t>
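<t>As an illustrative sketch (ports and payload types are examples
only), RTP Session-based separation for <xref target="RFC4588">RTP
Retransmission</xref> could group two m= lines, each mapping to its
own RTP Session, with the "FID" semantics from the <xref
target="RFC5888">Grouping of Media Lines framework</xref>; the
retransmission stream in the second RTP Session then uses the same
SSRC value as the source stream:</t>
<figure>
<artwork><![CDATA[
a=group:FID 1 2
m=video 49170 RTP/AVPF 96
a=rtpmap:96 VP8/90000
a=mid:1
m=video 49172 RTP/AVPF 97
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
a=mid:2
]]></artwork>
</figure>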
<t>RTP Streams that are related and need to be associated can be part
of different Multimedia Sessions, rather than just different RTP
Sessions within the same Multimedia Session context. This puts further
demand on the scope of the mechanism(s) and its handling of
identifiers used for expressing the relationships.</t>
</section>
<section title="Multiple RTP Sessions over one Media Transport">
<t><xref target="I-D.westerlund-avtcore-transport-multiplexing"/>
describes a mechanism that allows several RTP Sessions to be carried
over a single underlying Media Transport. The main reasons for doing
this relate to the impact of using one or more Media Transports
(using a common network path or potentially different ones). The
fewer Media Transports used, the fewer the NAT/FW traversal resources
needed and the smaller the number of flow-based Quality of Service
(QoS) states required.</t>
<t>However, carrying multiple RTP Sessions over one Media Transport
implies that a single Media Transport 5-tuple is not sufficient to
express in which RTP Session context a particular RTP Stream exists.
Complexities in the relationship between Media Transports and RTP
Sessions already exist, as one RTP Session can contain multiple Media
Transports; e.g., even a Peer-to-Peer RTP Session with RTP/RTCP
Multiplexing requires two Media Transports, one in each direction.
The relationship between Media Transports and RTP Sessions, as well
as additional levels of identifiers, needs to be considered both in
signaling design and when defining terminology.</t>
</section>
</section>
<section anchor="mapping" title="Mapping from Existing Terms">
<t>This section describes a selected set of terms from some relevant
IETF RFCs and Internet-Drafts (at the time of writing), using the
concepts from previous sections.</t>
<section title="Telepresence Terms">
<t>The terms in this sub-section are used in the context of <xref
target="I-D.ietf-clue-framework">CLUE</xref>.</t>
<section title="Audio Capture">
<t>Defined in CLUE as a <xref target="clue-media-capture">Media
Capture</xref> for audio. Describes an audio <xref
target="media-source">Media Source</xref>.</t>
</section>
<section anchor="clue-capture-device" title="Capture Device">
<t>Defined in CLUE as a device that converts physical input into an
electrical signal. Identifies a physical entity performing a <xref
target="media-capture">Media Capture</xref> transformation.</t>
</section>
<section anchor="clue-capture-encoding" title="Capture Encoding">
<t>Defined in CLUE as a specific <xref
target="clue-encoding">encoding</xref> of a <xref
target="clue-media-capture">Media Capture</xref>. Describes an <xref
target="encoded-stream">Encoded Stream</xref> related to CLUE
specific semantic information.</t>
</section>
<section title="Capture Scene">
<t>Defined in CLUE as a structure representing a spatial region
captured by one or more <xref target="clue-capture-device">Capture
Devices</xref>, each capturing media representing a portion of the
region. Describes a set of spatially related <xref
target="media-source">Media Sources</xref>.</t>
</section>
<section title="Endpoint">
<t>Defined in CLUE as a CLUE-capable device which is the logical
point of final termination through receiving, decoding and rendering
and/or initiation through capturing, encoding, and sending of media
<xref target="clue-stream">streams</xref>. CLUE further defines it
to consist of one or more physical devices with source and sink
media streams, and exactly one <xref target="RFC4353"/> Participant.
Describes exactly one <xref target="participant">Participant</xref>
and one or more <xref target="endpoint">Endpoints</xref>.</t>
</section>
<section anchor="clue-encoding" title="Individual Encoding">
<t>Defined in CLUE as a set of parameters representing a way to
encode a <xref target="clue-media-capture">Media Capture</xref> to
become a <xref target="clue-capture-encoding">Capture
Encoding</xref>. Describes the configuration information needed to
perform a <xref target="media-encoder">Media Encoder</xref>
transformation.</t>
</section>
<section anchor="clue-media-capture" title="Media Capture">
<t>Defined in CLUE as a source of media, such as from one or more
<xref target="clue-capture-device">Capture Devices</xref> or
constructed from other media <xref
target="clue-stream">streams</xref>. Describes either a <xref
target="media-capture">Media Capture</xref> or a <xref
target="media-source">Media Source</xref>, depending on the context
in which the term is used.</t>
</section>
<section anchor="clue-media-consumer" title="Media Consumer">
<t>Defined in CLUE as a CLUE-capable device that intends to receive
<xref target="clue-capture-encoding">Capture Encodings</xref>.
Describes the media receiving part of an <xref
target="endpoint">Endpoint</xref>.</t>
</section>
<section anchor="clue-media-provider" title="Media Provider">
<t>Defined in CLUE as a CLUE-capable device that intends to send
<xref target="clue-capture-encoding">Capture Encodings</xref>.
Describes the media sending part of an <xref
target="endpoint">Endpoint</xref>.</t>
</section>
<section anchor="clue-stream" title="Stream">
<t>Defined in CLUE as a <xref target="clue-capture-encoding">Capture
Encoding</xref> sent from a <xref target="clue-media-provider">Media
Provider</xref> to a <xref target="clue-media-consumer">Media
Consumer</xref> via RTP. Describes an <xref target="rtp-stream">RTP
Stream</xref>.</t>
</section>
<section title="Video Capture">
<t>Defined in CLUE as a <xref target="clue-media-capture">Media
Capture</xref> for video. Describes a video <xref
target="media-source">Media Source</xref>.</t>
</section>
</section>
<section anchor="media-description" title="Media Description">
<t>A single <xref target="RFC4566">Session Description Protocol
(SDP)</xref> media description (or media block; an m-line and all
subsequent lines until the next m-line or the end of the SDP)
describes part of the necessary configuration and identification
information needed for a Media Encoder transformation, as well as the
necessary configuration and identification information for the Media
Decoder to be able to correctly interpret a received RTP Stream.</t>
<t>A Media Description typically relates to a single Media Source.
This is, for example, an explicit restriction in WebRTC. However,
nothing prevents the same Media Description (and the same RTP
Session) from being re-used for <xref
target="I-D.ietf-avtcore-rtp-multi-stream">multiple Media
Sources</xref>. It can thus describe properties of one or more RTP
Streams, and can also describe properties valid for an entire RTP
Session (via <xref target="RFC5576"/> mechanisms, for example).</t>
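<t>As an illustrative sketch (addresses, ports, and payload types are
examples only), a single Media Description in the sense used here is
one m= line and the lines that follow it:</t>
<figure>
<artwork><![CDATA[
m=audio 49170 RTP/AVP 0 8
c=IN IP4 203.0.113.1
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=sendrecv
]]></artwork>
</figure>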
</section>
<section title="Media Stream">
<t><xref target="RFC3550">RTP</xref> uses the terms media stream,
audio stream, video stream, and stream of (RTP) packets
interchangeably; all of these are RTP Streams.</t>
</section>
<section title="Multimedia Conference">
<t>A Multimedia Conference is a <xref
target="comm-session">Communication Session</xref> between two or more
<xref target="participant">Participants</xref>, along with the
software they are using to communicate.</t>
</section>
<section title="Multimedia Session">
<t><xref target="RFC4566">SDP</xref> defines a Multimedia Session as a
set of multimedia senders and receivers and the data streams flowing
from senders to receivers, which would correspond to a set of
Endpoints and the RTP Streams that flow between them. In this memo,
<xref target="multimedia-session">Multimedia Session</xref> also
assumes those Endpoints belong to a set of Participants that are
engaged in communication via a set of related RTP Streams.</t>
<t><xref target="RFC3550">RTP</xref> defines a Multimedia Session as a
set of concurrent RTP Sessions among a common group of Participants.
For example, a video conference may contain an audio RTP Session and a
video RTP Session. This would correspond to a group of Participants
(each using one or more Endpoints) sharing a set of concurrent RTP
Sessions. In this memo, Multimedia Session also defines those RTP
Sessions to have some relation and be part of a communication among
the Participants.</t>
</section>
<section title="Multipoint Control Unit (MCU)">
<t>This term is commonly used to describe the central node in any type
of star <xref
target="I-D.ietf-avtcore-rtp-topologies-update">topology</xref>
conference. It describes a device that includes one <xref
target="participant">Participant</xref> (usually corresponding to a
so-called conference focus) and one or more related <xref
target="endpoint">Endpoints</xref> (sometimes one or more per
conference Participant).</t>
</section>
<section anchor="mst" title="Multi-Session Transmission (MST)">
<t>One of two transmission modes defined in <xref
target="RFC6190">H.264 based SVC</xref>, the other mode being <xref
target="sst">SST</xref>. In Multi-Session Transmission (MST), the SVC
Media Encoder sends Encoded Streams and Dependent Streams distributed
across two or more RTP Streams in one or more RTP Sessions. The term
"MST" is ambiguous in RFC 6190, especially since the name indicates
the use of multiple "sessions", while MST type packetization is in
fact required whenever two or more RTP Streams are used for the
Encoded and Dependent Streams, regardless of whether those are sent in one or
more RTP Sessions. Corresponds either to <xref
target="layered-multi-stream">MRST or MRMT</xref> stream relations
defined in this specification. The <xref target="RFC6190">SVC RTP
Payload RFC</xref> is not particularly explicit about how the common
<xref target="media-encoder">Media Encoder</xref> relation between
<xref target="encoded-stream">Encoded Streams</xref> and <xref
target="dependent-stream">Dependent Streams</xref> is to be
implemented.</t>
</section>
<section title="Recording Device">
<t>WebRTC specifications use this term to refer to locally available
entities performing a <xref target="media-capture">Media
Capture</xref> transformation.</t>
</section>
<section title="RtcMediaStream">
<t>A WebRTC RtcMediaStream is a set of <xref
target="media-source">Media Sources</xref> sharing the same <xref
target="sync-context">Synchronization Context</xref>.</t>
</section>
<section title="RtcMediaStreamTrack">
<t>A WebRTC RtcMediaStreamTrack is a <xref target="media-source">Media
Source</xref>.</t>
</section>
<section title="RTP Sender">
<t><xref target="RFC3550">RTP</xref> uses this term, which can be seen
as the RTP protocol part of a <xref target="media-packetizer">Media
Packetizer</xref>.</t>
</section>
<section title="RTP Session">
<t>Within the context of SDP, a single m= line can map to a single
<xref target="rtp-session">RTP Session</xref>, or multiple m= lines
can map to a single RTP Session. The latter is enabled via
multiplexing schemes such as BUNDLE <xref
target="I-D.ietf-mmusic-sdp-bundle-negotiation"/>, which allows
multiple m= lines to map to a single RTP Session.</t>
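<t>As an illustrative sketch (mid values, ports, and payload types
are examples only), BUNDLE groups multiple m= lines that share a
single Media Transport, and thus a single RTP Session, as indicated
by the identical port numbers:</t>
<figure>
<artwork><![CDATA[
a=group:BUNDLE audio video
m=audio 10000 RTP/AVP 0
a=mid:audio
m=video 10000 RTP/AVP 96
a=rtpmap:96 VP8/90000
a=mid:video
]]></artwork>
</figure>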
</section>
<section anchor="sst" title="Single Session Transmission (SST)">
<t>One of two transmission modes defined in <xref
target="RFC6190">H.264 based SVC</xref>, the other mode being <xref
target="mst">MST</xref>. In Single Session Transmission (SST), the SVC
Media Encoder sends <xref target="encoded-stream">Encoded
Streams</xref> and <xref target="dependent-stream">Dependent
Streams</xref> combined into a single <xref target="rtp-stream">RTP
Stream</xref> in a single <xref target="rtp-session">RTP
Session</xref>, using the SVC RTP Payload format. The term "SST" is
ambiguous in RFC 6190, in that it sometimes refers to the use of a
single RTP Stream, as in sections relating to packetization, and
sometimes appears to refer to the use of a single RTP Session, as in
the context of discussing SDP. Closely corresponds to <xref
target="layered-multi-stream">SRST</xref> defined in this
specification.</t>
</section>
<section title="SSRC">
<t><xref target="RFC3550">RTP</xref> defines this as "the source of a
stream of RTP packets", which indicates that an SSRC is not only a
unique identifier for the <xref target="encoded-stream">Encoded
Stream</xref> carried in those packets, but is also effectively used
as a term to denote a <xref target="media-packetizer">Media
Packetizer</xref>.</t>
</section>
</section>
<section anchor="security" title="Security Considerations">
<t>This document clarifies the confusion prevalent in RTP taxonomy
that results from inconsistent usage by the multiple technologies and
protocols making use of RTP. It does not introduce any new security
considerations beyond those already well documented in the RTP
protocol specification <xref target="RFC3550"/> and in each of the
many specifications of the various protocols making use of it.</t>
<t>Hopefully, having a well-defined common terminology and an
understanding of the complexities of the RTP architecture will lead
to better standards and help avoid security problems.</t>
</section>
<section title="Acknowledgement">
<t>This document has many concepts borrowed from several documents such
as WebRTC <xref target="I-D.ietf-rtcweb-overview"/>, CLUE <xref
target="I-D.ietf-clue-framework"/>, and Multiplexing Architecture <xref
target="I-D.westerlund-avtcore-transport-multiplexing"/>. The authors
would like to thank all the authors of each of those documents.</t>
<t>The authors would also like to acknowledge the insights, guidance and
contributions of Magnus Westerlund, Roni Even, Paul Kyzivat, Colin
Perkins, Keith Drage, Harald Alvestrand, Alex Eleftheriadis, Mo Zanaty,
Stephan Wenger, and Bernard Aboba.</t>
</section>
<section title="Contributors">
<t>Magnus Westerlund has contributed the concept model for the media
chain using transformations and streams model, including rewriting
pre-existing concepts into this model and adding missing concepts. The
first proposal for updating the relationships and the topologies based
on this concept was also performed by Magnus.</t>
</section>
<section anchor="iana" title="IANA Considerations">
<t>This document makes no request of IANA.</t>
</section>
</middle>
<back>
<references title="Informative References">
<?rfc include='reference.RFC.2198'?>
<?rfc include='reference.RFC.3550'?>
<?rfc include='reference.RFC.3551'?>
<?rfc include='reference.RFC.3711'?>
<?rfc include='reference.RFC.4353'?>
<?rfc include='reference.RFC.4566'?>
<?rfc include='reference.RFC.4588'?>
<?rfc include='reference.RFC.4867'?>
<?rfc include='reference.RFC.5109'?>
<?rfc include='reference.RFC.5404'?>
<?rfc include='reference.RFC.5576'?>
<?rfc include='reference.RFC.5888'?>
<?rfc include="reference.RFC.5905"?>
<?rfc include='reference.RFC.6190'?>
<?rfc include='reference.RFC.7160'?>
<?rfc include='reference.RFC.7197'?>
<?rfc include='reference.RFC.7198'?>
<?rfc include='reference.RFC.7201'?>
<?rfc include='reference.RFC.7273'?>
<?rfc include='reference.I-D.ietf-clue-framework'?>
<?rfc include='reference.I-D.ietf-rtcweb-overview'?>
<?rfc include='reference.I-D.ietf-mmusic-sdp-bundle-negotiation'?>
<?rfc include='reference.I-D.ietf-avtcore-rtp-multi-stream'?>
<?rfc include='reference.I-D.ietf-mmusic-sdp-simulcast'?>
<?rfc include='reference.I-D.westerlund-avtcore-transport-multiplexing'?>
<?rfc include='reference.I-D.ietf-avtcore-rtp-topologies-update'?>
</references>
<section title="Changes From Earlier Versions">
<t>NOTE TO RFC EDITOR: Please remove this section prior to
publication.</t>
<section title="Modifications Between WG Version -06 and -07">
<t>Addresses comments from AD review and GenArt review.<list
style="symbols">
<t>Added RTP-based Security and RTP-based Validation transform
sections, as well as Secured RTP Stream and Received Secured RTP
Stream sections.</t>
<t>Improved wording in Abstract and Introduction sections.</t>
<t>Clarified what is considered "media" in section 2.1.2 Media
Capture.</t>
<t>Changed a number of "Characteristics" lists to more suitable
prose text.</t>
<t>Re-worded text around use of Encoded and Dependent RTP Streams
in section 2.1.9 Media Packetizer.</t>
<t>Clarified description of Source RTP Stream in section
2.1.10.</t>
<t>Clarified motivation to use separate Media Transports for
Simulcast in section 3.6.</t>
<t>Added local descriptions of terms imported from CLUE
framework.</t>
<t>Editorial improvements.</t>
</list></t>
</section>
<section title="Modifications Between WG Version -05 and -06">
<t><list style="symbols">
<t>Clarified that a Redundancy RTP Stream can be used standalone
to generate Repaired RTP Streams.</t>
<t>Clarified that (in accordance with above) RTP-based Repair
takes zero or more Received RTP Streams and one or more Received
Redundancy RTP Streams as input.</t>
<t>Changed Figure 6 to more clearly show that Media Transport is
terminated in the Endpoint, not in the Participant.</t>
<t>Added a sentence to Endpoint section that clarifies there may
be contexts where a single "host" can serve multiple Participants,
making those Endpoints share some properties.</t>
<t>Merged previous section 3.5 on SST/MST with previous section
3.8 on Layered Multi-Stream into a common section discussing the
scalable/layered stream relation, and moved improved, descriptive
text on SST and MST to new sub-sections 4.7 and 4.13, describing
them as existing terms.</t>
<t>Editorial improvements.</t>
</list></t>
</section>
<section title="Modifications Between WG Version -04 and -05">
<t><list style="symbols">
<t>Editorial improvements.</t>
</list></t>
</section>
<section title="Modifications Between WG Version -03 and -04">
<t><list style="symbols">
<t>Changed "Media Redundancy" and "Media Repair" to "RTP-based
Redundancy" and "RTP-based Repair", since those terms are more
specific and correct.</t>
<t>Changed "End Point" to "Endpoint" and removed Editor's Note on
this.</t>
<t>Clarified that a Media Capture may impose constraints on clock
handling.</t>
<t>Clarified that mixing multiple Raw Streams into a Source Stream
is not possible, since that requires mixed streams to have a
timing relation, requiring them to be Source Streams, and added an
example.</t>
<t>Clarified that RTP-based Redundancy excludes the type of
encoding redundancy found within the encoded media format in an
Encoded Stream.</t>
<t>Clarified that a Media Transport contains only a single RTP
Session, but a single RTP Session can span multiple Media
Transports.</t>
<t>Clarified that packets with seemingly correct checksum that are
received by a Media Transport Receiver may still be corrupt.</t>
<t>Clarified that a corrupt packet in a Media Transport Receiver
is typically either discarded or somehow marked and passed on in
the Received RTP Stream.</t>
<t>Added Synchronization Context to Figure 6.</t>
<t>Editorial improvements and clarifications.</t>
</list></t>
</section>
<section title="Modifications Between WG Version -02 and -03">
<t><list style="symbols">
<t>Changed section 3.5, removing SST-SS/MS and MST-SS/MS,
replacing them with SRST, MRST, and MRMT.</t>
<t>Updated section 3.8 to align with terminology changes in
section 3.5.</t>
<t>Added a new section 4.12, describing the term Multimedia
Conference.</t>
<t>Changed reference from I-D to now published RFC 7273.</t>
<t>Editorial improvements and clarifications.</t>
</list></t>
</section>
<section title="Modifications Between WG Version -01 and -02">
<t><list style="symbols">
<t>Major re-structure</t>
<t>Moved media chain Media Transport detailing up one section
level</t>
<t>Collapsed level 2 sub-sections of section 3 and thus moved
level 3 sub-sections up one level, gathering some introductory
text into the beginning of section 3</t>
<t>Added that not only SSRC collision, but also a clock rate
change [RFC7160] is a valid reason to change SSRC value for an RTP
stream</t>
<t>Added a sub-section on clock source signaling</t>
<t>Added a sub-section on RTP stream duplication</t>
<t>Elaborated a bit in section 2.2.1 on the relation between End
Points, Participants and CNAMEs</t>
<t>Elaborated a bit in section 2.2.4 on Multimedia Session and
synchronization contexts</t>
<t>Removed the section on CLUE scenes defining an implicit
synchronization context, since it was incorrect</t>
<t>Clarified text on SVC SST and MST according to list
discussions</t>
<t>Removed the entire topology section to avoid possible
inconsistencies or duplications with
draft-ietf-avtcore-rtp-topologies-update, but saved one example
overview figure of Communication Entities into that section</t>
<t>Added a section 4 on mapping from existing terms with one
sub-section per term, mainly by moving text from sections 2 and
3</t>
<t>Changed all occurrences of Packet Stream to RTP Stream</t>
<t>Moved all normative references to informative, since this is an
informative document</t>
<t>Added references to RFC 7160, RFC 7197 and RFC 7198, and
removed unused references</t>
</list></t>
</section>
<section title="Modifications Between WG Version -00 and -01">
<t><list style="symbols">
<t>WG version -00 text is identical to individual draft -03</t>
<t>Amended description of SVC SST and MST encodings with respect
to concepts defined in this text</t>
<t>Removed UML as normative reference, since the text no longer
uses any UML notation</t>
<t>Removed a number of level 4 sections and moved out text to the
level above</t>
</list></t>
</section>
<section title="Modifications Between Version -02 and -03">
<t><list style="symbols">
<t>Section 4 rewritten (and new communication topologies added) to
reflect the major updates to Sections 1-3</t>
<t>Section 8 removed (carryover from initial -00 draft)</t>
<t>General clean up of text, grammar and nits</t>
</list></t>
</section>
<section title="Modifications Between Version -01 and -02">
<t><list style="symbols">
<t>Section 2 rewritten to add both streams and transformations in
the media chain.</t>
<t>Section 3 rewritten to focus on exposing relationships.</t>
</list></t>
</section>
<section title="Modifications Between Version -00 and -01">
<t><list style="symbols">
<t>Too many to list</t>
<t>Added new authors</t>
<t>Updated content organization and presentation</t>
</list></t>
</section>
</section>
</back>
</rfc>