<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>

<!-- 
make clear most recent sender wins with  multi render 

example with changing SSRC and how to correlate 

add msid to example 
-->

<?rfc toc="yes" ?>
<?rfc symrefs="yes" ?>
<?rfc strict="yes" ?>
<?rfc compact="no" ?>
<?rfc sortrefs="yes" ?>
<?rfc colonspace="no" ?>
<?rfc rfcedstyle="no" ?>
<?rfc tocdepth="4"?>

<rfc category="info" docName="draft-jennings-rtcweb-plan-01"
     ipr="noDerivativesTrust200902">

  <front>
    <title abbrev="RTCWeb Plan">Proposed Plan for Usage of SDP and RTP</title>

    <author fullname="Cullen Jennings" initials="C." surname="Jennings">
      <organization>Cisco</organization>

      <address>
        <postal>
          <street>400 3rd Avenue SW, Suite 350</street>

          <city>Calgary</city>

          <region>AB</region>

          <code>T2P 4H2</code>

          <country>Canada</country>
        </postal>

        <email>fluffy@iii.ca</email>
      </address>
    </author>
    
    <date day="25" month="February" year="2013" />

    <area>RAI</area>

    <abstract>
      <t>This draft outlines a number of the remaining issues in RTCWeb related
      to how the W3C APIs map to various usages of RTP and the associated
      SDP. It proposes one possible solution to that problem and outlines
      several chunks of work that would need to be put into other drafts or
      result in new drafts being written. The underlying design guideline is to,
      as much as possible, re-use what is already defined in existing SDP
      [RFC4566] and RTP [RFC3550] specifications.</t>

      <t>This draft is not intended to become a specification but is meant for
      working group discussion to help build the specifications. It is being
      discussed on the rtcweb@ietf.org mailing list, though it has topics
      relating to the CLUE WG, MMUSIC WG, AVT* WG, and the WebRTC WG at the W3C. </t>
    </abstract>
  </front>

  <middle>
    <section title="Overview">
      <t>The recurring theme of this draft is that SDP <xref
      target="RFC4566"></xref> already has a way of solving many of the problems
      being discussed in the RTCWeb WG, and we should not try to invent something new
      but rather re-use the existing methods for describing RTP <xref
      target="RFC3550" /> media flows.</t>

      <t> The general theory is that, roughly speaking, the m-line corresponds
      to a flow of packets that can be handled by the application in the same
      way. This often results in more m-lines than there are media sources such
      as microphones or cameras. Forward Error Correction (FEC) is done with
      multiple m-lines as shown in <xref target="RFC4756"></xref>. Retransmission
      (RTX) is done with multiple m-lines as shown in <xref target="RFC4588"></xref>.
      Layered coding is done with multiple m-lines as shown in <xref target="RFC5583"></xref>.
      Simulcast, which is really just multiple video streams from the same camera, much
      like layered coding but with no inter-m-line dependency, is done with multiple
      m-lines modeled after the layered coding defined in <xref target="RFC5583"></xref>.
      </t>

      <t> The significant addition to SDP semantics is a multi-render media-level
      attribute that allows a device to indicate that it can simultaneously
      use and display multiple streams of video that share the same SDP
      characteristics and semantics, such that they can all be negotiated
      under a single m-line. When using features like RTX, FEC, and simulcast
      in a multi-render situation, there needs to be a way to correlate a
      given related media flow with the correct "base" media flow. This is
      accomplished by having the related flows carry, in the CSRC, the SSRC of
      their base flow. An example of such SDP is provided in <xref
      target="sec-mult-render-example" />. </t>

      <t> This draft also proposes that advanced usages, including WebRTC-to-WebRTC
      scenarios, use a Media Stream Identifier (MSID) that is signaled
      in SDP and also attempt to negotiate the usage of an RTP header extension
      to include the MSID in the RTP packets. This resolves many long-standing issues. </t>

      <t>This does result in a large number of m-lines, but all the alternative designs
      resulted in a roughly equivalent number of SSRC lines, with the possibility of
      redefining most of the media-level attributes. So it is really hard to see
      a big benefit to defining something new over what we have.

      One of the concerns about this approach is the time to collect all the ICE
      candidates needed for the initial offer. <xref target="sec-mult-render-example" /> provides mitigations
      that reduce the number of ports needed to the same as an alternative SSRC-based
      design.

      This assumes that it is perfectly feasible to transport
      SDP that is much larger than a single MTU. The SIP <xref
      target="RFC3261"></xref> usage of SDP passed this point long ago. In
      cases where the SDP is carried over web mechanisms, it is
      easy to use compression, and the size of the SDP is more of an optimization
      criterion than a limiting issue.</t>

    </section>

    <section title="Terminology">
      <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHOULD", "SHOULD NOT",
      "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
      interpreted as described in <xref target="RFC2119"></xref>.</t>

      <t>This draft uses the API and terminology described in <xref
      target="webrtc-api"></xref>.</t>

      <t>Transport-Flow: A transport 5-tuple representing the UDP source and
      destination IP addresses and ports over which RTP is flowing.</t>

      <t>5-tuple: A collection of the following values: source IP address,
      source transport port, destination IP address, destination transport port
      and transport protocol.</t>

      <t>PC-Track: A source of media (audio and/or video) that is contained in a
      PC-Stream. A PC-Track represents content comprising one or more
      PC-Channels.</t>

      <t>PC-Stream: A stream of data of audio and/or video added to a
      PeerConnection by local or remote media source(s). A PC-Stream is made up
      of zero or more PC-Tracks.</t>

      <t>m-line: An <xref target="RFC4566">SDP</xref> media description
      identifier that starts with the "m=" field and conveys the following values: media
      type, transport port, transport protocol, and media format descriptions.</t>

      <t>m-block: An <xref target="RFC4566">SDP</xref> media description
      that starts with an m-line and is terminated by either the next m-line or
      by the end of the session description.</t>

      <t>Offer: An <xref target="RFC3264"></xref> SDP message generated by the
      participant who wishes to initiate a multimedia communication session.
      An Offer describes the participant's capabilities for engaging in a multimedia
      session.</t>

      <t>Answer: An <xref target="RFC3264"></xref> SDP message generated by the
      participant in response to an Offer. An Answer describes the participant's
      capabilities for continuing the multimedia session within the
      constraints of the Offer.</t>

      <t hangText="Note">This draft avoids using terms that implementors do
      not have a clear idea of exactly what they are - for example RTP
      Session.</t>
    </section>

    <section anchor="sec-req" title="Requirements">

      <t>The requirements listed here are a collection of requirements that
      have come from WebRTC, CLUE, and the general community that uses RTP for
      interactive communications based on Offer/Answer. It does not try to
      meet the needs of streaming usages or usages involving multicast. This
      list also does not try to list every possible requirement but instead
      outlines the ones that might influence the design. <list style="symbols">

          <t>Devices with multiple audio and/or video sources</t>

          <t>Devices that display multiple streams of video and/or render
          multiple streams of audio</t>

          <t>Simulcast, wherein video from a single camera is sent in a few
          independent video streams, typically at different resolutions and
          frame rates.</t>

          <t>Layered Codec such as H.264 SVC</t>

          <t>One way media flows and bi-directional media flows</t>

          <t> Support asymmetry, i.e., sending a different number or type of
          media streams than you receive. </t>

          <t>Mapping W3C PeerConnection (PC) aspects into SDP and RTP. It is
          important that the SDP be descriptive enough that both sides can get
          the same view of various identifiers for PC-Tracks, PC-Streams and
          their relationships.</t>

          <t>Support of Interactive Connectivity Establishment (ICE) <xref
          target="RFC5245"></xref></t>

          <t> Support of multiplexing multiple media flows, possibly of
          different media types, on the same 5-tuple. </t>

          <t>Synchronization - It needs to be clear how implementations deal
          with synchronization, in particular usages of both CNAME and the LS group.
          The sender needs to be able to indicate which media flows are intended
          to be synchronized and which are not.</t>

          <t>Redundant codings - The ability to send some media, such as the
          audio from a microphone, multiple times. For example, it may be sent
          with a high-quality wideband codec and a low-bandwidth codec. If
          packets are lost from the high-bandwidth stream, the low-bandwidth
          stream can be used to fill in the missing gaps of audio. This is
          very similar to simulcast.</t>

          <t>Forward Error Correction - Support for various RTP FEC
          schemes.</t>

          <t>RSVP QoS - Ability to signal various QoS mechanisms such as the Single
          Reservation Flow (SRF) group</t>

          <t>Desegregated Media (FID group) - There is a growing desire to deal
          with endpoints that are distributed - for example, a video phone where
          the incoming video is displayed on an IP TV but the outgoing video
          comes from a tablet computer. This results in situations where the SDP
          sets up a session in which not all the media is transmitted to a single IP
          address.</t>

          <t>In-flight change of codec: Support for systems that can negotiate
          the use of more than one codec for a given media flow; the
          sender can then arbitrarily switch between them when sending, but
          only sends with one codec at a time.</t>

          <t> Distinguish simulcast (e.g., multiple encodings of the same source) from
          multiple different sources </t>

          <t>Support for Sequential and Parallel forking at the SIP level</t>

          <t>Support for Early Media</t>

          <t>Conferencing environments with a transcoding MCU that
          decodes/mixes/re-encodes the media</t>

          <t>Conferencing environments with a switching MCU, where the MCU rewrites
          the header information of the media and does not decode/recode all the
          media</t>

        </list></t>
    </section>

    <section title="Background/Solution Overview">
      <t>
	The basic unit of media description in SDP is the m-line/m-block.
	This allows any entity defined by a single m-block to be
	individually negotiated. This negotiation applies not only
	to individual sources (e.g., cameras) but also to individual
	components that come from a single source, such as layers
	in SVC.
      </t>
      <t>
	For example, consider negotiation of FEC as defined in
	<xref target="RFC4756"/>. 
      </t>
        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

       v=0
       o=adam 289083124 289083124 IN IP4 host.example.com
       s=ULP FEC Seminar
       t=0 0
       c=IN IP4 192.0.2.0
       a=group:FEC 1 2
       a=group:FEC 3 4

       m=audio 30000 RTP/AVP 0
       a=mid:1

       m=audio 30002 RTP/AVP 100
       a=rtpmap:100 ulpfec/8000
       a=mid:2

       m=video 30004 RTP/AVP 31
       a=mid:3

       m=video 30004 RTP/AVP 101
       c=IN IP4 192.0.2.1
       a=rtpmap:101 ulpfec/8000
       a=mid:4
]]></artwork>
	</figure>
	<t>
	  When FEC is expressed this way,
	  the answerer can selectively accept or reject the various
	  streams by setting the port in the m-line to zero.
	  RTX <xref target="RFC4588"></xref>, layered coding <xref target="RFC5583"></xref>,
	  and Simulcast are all handled the same way.
	  Note that while it is also possible to represent FEC and SVC using
	  source-specific attributes <xref target="RFC5576"/>,
	  that mechanism is less flexible because it does not permit
	  selective acceptance and rejection as described in
	  Section 8 of <xref target="RFC5576"/>. Most deployed systems which
	  implement FEC, layered coding, etc. do so with each
	  component on a separate m-line.
	</t>
	<t>
	  Unfortunately, this strategy runs into problems when
	  combined with two new features that are desired for
	  WebRTC:
	</t>
	<t>
	  <list style="hanging">
	    <t hangText ="m-line multiplexing (bundle):"></t>
	    <t>The ability to send media described in multiple
	    media over the same 5-tuple.</t>

	    <t></t>
	    <t hangText="multi-render:"></t>
	    <t>The ability to have large numbers of multiple similar media flows 
	    (e.g., multiple cameras).
	    The paradigmatic case here is multiple video thumbnails.</t>
	  </list>
	</t>
	<t>
	  Obviously, this strategy does not scale to large numbers of flows.
	  For instance, consider the case where
	  we want to be able to transmit 35 video thumbnails (this is large,
	  but not insane). In the model described above, each of these
	  flows would need its own m-line and its own set of codecs.
	  If each side supports three separate codecs (e.g.,
	  H.261, H.263, and VP8), then we have just consumed 105
	  payload types, which exceeds the available dynamic payload
	  space (the dynamic range 96-127 provides only 32 values).
	</t>
	<t>
	  In order to resolve this issue, it is necessary to have multiple
	  flows (e.g., multiple thumbnails) indicated by the same m-line
	  and using the same set of payload types (see Section XXX for
	  the proposed syntax). Because each source has its own
	  SSRC, it is possible to divide the RTP packets into individual
	  flows. However, this solution still leaves us with two problems:
	</t>
	<t>
	  <list style="symbols">
	    <t>How to individually address specific RTP flows in order to,
	    for instance, order them on a page or display flow-specific
	    captions.</t>
	    
	    <t>How to determine the relationship between multiple variants
	    of the same stream. For instance, if we have multiple cameras
	    each of which is present in a layered encoding, we need to
	    be able to determine which layers go together.
	    </t>
	  </list>
	</t>
	<t>
	  For reasons described in <xref target="sec-overall-design"/>,
	  the SSRC learned via SDP is not suitable for individually addressing RTP flows.
	  Instead, we introduce a new identifier, the MSID, which can
	  be carried both in the SDP and the RTP and therefore can
	  be used to correlate SDP elements to RTP elements.
	  See <xref target="sec-msid"/>.
	</t>
	<t>
	  By contrast, we can use RTP-only mechanisms to express
	  the correlation between RTP flows: while all the flows
	  associated with a given camera have distinct SSRCs,
	  we can use the CSRC to indicate which flows belong
	  together. This is described in <xref target="sec-multi-render"/>.
	</t>
    </section>
    <section title="Overall Design" anchor="sec-overall-design">
      <t>
	The basic unit of media description in SDP is the m-line/m-block, and this
	document continues with that assumption. In general, different cameras,
	microphones, etc. are carried on different m-lines. The exception to
	this rule is when using the multi-render extension, in which case:
      </t>
      <t>
	<list style="symbols">
	  <t>Multiple sources which are semantically equivalent and multiplexed
	  on a time-wise basis can share an m-line.  For instance, if an MCU mixes multiple camera
	  feeds but only some subset is displayed at a time, they can all appear
	  on the same m-line.</t>

	</list>
      </t>
      <t>
	By contrast, multiple sources which are semantically distinct cannot
	appear on the same m-line because that does not allow for clear
	negotiation of which sources are acceptable, or which sets of RTP SSRCs
	correspond to which flow.
      </t>
      <t>
	The second basic assumption is that SSRCs cannot always be safely used to
	associate RTP flows with information in the SDP.  There are two reasons
	for this. First, in an offer/answer setting, RTP can arrive at the
	offerer before the answer is received; if SSRC information from the
	answer is required, then these RTP packets cannot be interpreted.
	The second reason is that RTP permits SSRCs to be changed at
	any time.
      </t>
      <t>
	This assumption makes clear why the two exceptions to the "one flow per
	m-line" rule work. In the case of time-based multiplexing (multi-render)
	of camera sources, all the cameras are equivalent from the receiver's
	perspective; it merely needs to know which ones to display now, and it
	does that based on which ones have been most recently received. In the
	case of multiple versions of the same content,
	payload types, or payload types plus SSRC, can be used
	to distinguish the different versions.
      </t>
    </section>

    <section title="Example Mappings">
      <t>
	This section shows a number of sample mappings in
	abstract form.
      </t>
      <section title="One Audio, One Video, No bundle/multiplexing">
	<figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Microphone -->    m=audio    -->    Speaker      >  5-Tuple
			          
Camera     -->    m=video    -->    Window       >  5-Tuple
		   ]]></artwork>
        </figure>      
      </section>

      <section title="One Audio, One Video, Bundle/multiplexing">
	<figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Microphone -->    m=audio    -->    Speaker       \ 
                                                   > 5-Tuple	    
Camera     -->    m=video    -->    Window       /
		   ]]></artwork>
        </figure>      
      </section>


      <section title="One Audio, One Video, Simulcast, Bundle/multiplexing">
	<figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Microphone -->    m=audio    -->    Speaker       \ 
                                                   |
Camera     +->    m=video    -\                     > 5-Tuple
           |                   ?->  Window         |
           +->    m=video    -/                   /
		   ]]></artwork>
        </figure>      
      </section>

      <section title="One Audio, One Video, Bundle/multiplexing, Lip-Sync">
	<figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Microphone -->    m=audio    -->     Speaker     \             
                                                  > 5-Tuple, Lip-Sync
Camera     -->    m=video    -->     Window      /           group
		   ]]></artwork>
        </figure>      
      </section>

      <section title="One Audio, One Active Video, 5 Thumbnails, Bundle/multiplexing">
	<figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Microphone -->    m=audio    -->       Speaker           \
                                                         |
Camera     -->    m=video    -->       Window            |
                                                          >  5-Tuple
Camera     -->    m=video    -->       5 Small Windows   |
Camera            a=multi-render:5                       |
...                                                      /
		   ]]></artwork>
        </figure>      
	<t>
	  Note that in this case the payload types must be distinct between
	  the two video m-lines, because that is what is used to demultiplex.
	</t>
      </section>

      <section title="One Audio, One Active Video, 
                      5 Thumbnails, Main Speaker Lip-Sync, Bundle/multiplexing">
	<figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Microphone -->    m=audio    -->       Speaker         \   \  Lip-sync
                                                       |    > group
Camera     -->    m=video    -->       Window          |   /
                                                        >  5-Tuple
Camera     -->    m=video    -->       5 Small Windows |
Camera            a=multi-render:5                     |
...                                                    /
		   ]]></artwork>
        </figure>      
      </section>


      
    </section>
    
    <section anchor="sec-solutions" title="Solutions">
      <t>This section outlines a set of rules for the usage of SDP and RTP that
      seems to deal with the various problems and issues that have been
      discussed. Most of these are not new and are pretty much how many systems
      do it today. Some of them are new, but all the items requiring new
      standardization work are called out in <xref
      target="sec-tasks"/>. </t>
      <t>
        Approach:
        <list
          style="numbers">
          <t>If a system wants to offer to send two sources, such as two cameras, it MUST use a
          separate m-block for each source. This means that each PC-Track corresponds to one or more m-blocks.</t>

          <t> In cases such as FEC, simulcast, and
          SVC, each repair stream, layer, or simulcast media flow gets its own m-block. </t>

          <t>If a system wants to receive two streams of video to display in
          two different windows or screens, it MUST use separate m-blocks for
          each unless explicitly signaled otherwise (see <xref
          target="sec-multi-render"/>). </t>

          <t>Unless explicitly signaled otherwise (see <xref
          target="sec-multi-render"/>), if a given m-line receives media from
          multiple SSRCs, only media from the most recently received SSRC SHOULD
          be rendered and the other SSRCs SHOULD NOT be; if it is video, it SHOULD be
          rendered in the same window or screen.</t>

          <t>If a camera is sending simulcast video at three resolutions, each
          resolution MUST get its own m-block and all three m-blocks will be
          grouped. A new SDP group will be defined for this.  </t>

          <t>If a camera is using a layered codec with three layers, there
          MUST be an m-block for each, and they will be grouped using
          standard SDP for grouping layers.</t>

          <t>To aid in synchronized playback, there is exactly one, and only
          one, LS group for each PC-Stream. All the m-blocks for all the
          PC-Tracks in a given PC-Stream are synchronized so they are all put in
          one LS group. All the PC-Tracks in a given PC-Stream have the same
          CNAME.  If a PC-Track appears in more than one PC-Stream, then all the
          PC-Streams with that PC-Track MUST have the same CNAME. </t>

          <t>One way media MUST use the sendonly or recvonly attributes.</t>

          <t>Media lines that are not currently in use but may be used later
          (so the resources need to be kept allocated) SHOULD use the inactive
          attribute.</t>

          <t>If an m-line will not be used, or it is rejected, it MUST have its
          port set to zero.</t>

          <t>If a video switching MCU produces a virtual "active speaker" media
          flow, that media flow should have its own SSRC but include the SSRC
          of the current speaker's video in the CSRC list of the packets it produces.</t>

          <t>For each PC-Track, the W3C API MUST provide a way to set and read
          the CSRC list, set and read the RFC 4574 content "label", and read the
          SSRC of the last packet received on a PC-Track.</t>

          <t>The W3C API should have a constraint or API method to allow a
          PC-Stream to indicate the number of multi-render video streams it can
          accept. Each time a new stream is received up to the maximum, a new
          PC-Track will be created.</t>

          <t>Applications MAY signal all the SSRCs they intend to send using RFC
          5576, but receivers need to be careful in their usage of the SSRCs in
          signaling, as an SSRC can change when there is a collision and it
          takes time before that change is reflected in signaling. </t>

          <t>Applications can get out-of-band "roster information" that maps the
          names of various speakers or other information to the MSID and/or
          SSRCs that a user is using.</t>

          <t>Applications MAY use RFC 4574 content labels to indicate the
          purpose of the video. The additional content types, main-left and
          main-right, need to be added to support two- and three-screen
          systems.</t>

          <t>The CLUE WG might want to consider using SDP to signal the 3D location
          and field of view parameters for captures and renderers.</t>

          <t> The W3C API allows a "label" to be set for the PC-Track. This MUST
          be mapped to the SDP label attribute. </t>

        </list></t>

        <section anchor="sec-msid" title="Correlation and Multiplexing">

          <t> The port number that RTP is received on provides the primary
          mechanism for correlating it to the correct m-line. However, when the
          port does not uniquely map the RTP packet to the correct m-block
          (such as in multiplexing and other cases), the next thing that can be
          looked at is the PT number. Finally, there are cases where the SSRC can be
          used if it was signaled. </t>

          <t> There are some complications when using the SSRC for correlation with
          signaling. First, the offerer may end up receiving RTP packets before
          receiving the signaling with the SSRC correlation information. This is
          because the sender of the RTP chooses the SSRC; there is no way for
          the receiver to signal how some of the bits in the SSRC should be
          set. Numerous attempts to provide a way to do this have been made, but
          they have all been rejected for various reasons, so this situation is
          unlikely to change. The second issue is that the signaled SSRC can
          change, particularly in collision cases, and there is no good way to
          know when SSRCs are changing, so the currently signaled SSRC
          usage may not map to the actual RTP SSRC usage. Finally, the SSRC does not
          always provide correlation information between media flows - consider,
          for example, trying to use the SSRC to tell that an audio media flow and a
          video media flow came from the same camera. The nice thing about SSRCs is
          that they are also included in the RTP packets. </t>

          <t> The proposal here is to extend the MSID draft to meet these needs:
          each media flow would have a unique MSID, and the MSID would have some
          level of internal structure, which would allow various forms of
          correlation, including what WebRTC needs to be able to recreate the
          PC-Stream / PC-Track hierarchy so that it is the same on both sides. In
          addition, this work proposes creating an optional RTP header extension
          that could be used to carry the MSID for a media flow in the RTP
          packets. This is not absolutely needed for the WebRTC use cases, but it
          helps in the case where media arrives before signaling, and it helps
          resolve a broader category of web conferencing use cases. </t>

          <t> The MSID consists of three things and can be extended to have
          more. It has a device identifier, which corresponds to a unique
          identifier of the device that created the offer; one or more
          synchronization context identifiers, which are numbers that help
          correlate different synchronized media flows; and a media flow
          identifier. The synchronization identifier and flow identifier are
          scoped within the context of the device identifier, but the device
          identifier is globally unique. The suggested device identifier is a
          64-bit random number. The synchronization group is an integer that is
          the same for all media flows that have this device identifier and are
          meant to be synchronized. Right now there can be more than one
          synchronization identifier, but the open issues suggest that one would
          be preferable. The flow identifier is an integer that uniquely
          identifies this media flow within the context of the device
          identifier.  </t>

          <t> Open Issues: how to know if the MSID RTP Header Extension should
          be included in the RTP? </t>

          <t>
            An example MSID for a device identifier of 12345123451234512345,
            synchronization group of 1, and a media flow id of 3 would be:
            <list> 
              <t> a=msid:12345123451234512345 s:1 f:3 </t>
            </list>
          </t>
          <t> When the MSID is used in an answer, the MSID also has the remote
          device identifier included. In the case where the device ID of
          the device sending the answer is 22222333334444455555, the MSID would
          look like:  <list>
            <t> a=msid:22222333334444455555 s:1 f:3 r:12345123451234512345</t>
            </list>
          </t>
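
          <t> To make the syntax concrete, the following TypeScript sketch
          parses the MSID attribute as used in the two examples above. The
          field tags s:, f:, and r: follow those examples; the Msid shape and
          function name are illustrative only. </t>

          <figure>
            <artwork align="left" alt="" height="" name="" type="" width=""
                     xml:space="preserve"><![CDATA[
// Parses "a=msid:<device-id> s:<sync-group> f:<flow-id> [r:<remote-id>]"
// per the examples above. Illustrative only.
interface Msid {
  deviceId: string;
  syncGroup: number;
  flowId: number;
  remoteDeviceId?: string; // present only in answers
}

function parseMsid(line: string): Msid | null {
  const m = line.trim().match(/^a=msid:(\S+) s:(\d+) f:(\d+)(?: r:(\S+))?$/);
  if (!m) return null;
  return { deviceId: m[1], syncGroup: Number(m[2]),
           flowId: Number(m[3]), remoteDeviceId: m[4] };
}

// parseMsid("a=msid:22222333334444455555 s:1 f:3 r:12345123451234512345")
// => { deviceId: "22222333334444455555", syncGroup: 1, flowId: 3,
//      remoteDeviceId: "12345123451234512345" }
]]></artwork>
          </figure>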

          <t> Note: The 64-bit size for the device identifier was chosen as it
          allows less than a one-in-a-million chance of collision with greater
          than 10,000 flows (actually, it allows this probability with more like
          6 million flows). Much smaller numbers could be used, but 32 bits is
          probably too small. More discussion on the size of this, and the color
          of the bike shed, is needed. </t>
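
          <t> The arithmetic behind this note is the standard birthday
          approximation; a small TypeScript check (illustrative only): </t>

          <figure>
            <artwork align="left" alt="" height="" name="" type="" width=""
                     xml:space="preserve"><![CDATA[
// Birthday approximation: with n random 64-bit device identifiers,
// P(at least one collision) ~= n * (n - 1) / 2^65.
function collisionProbability(n: number): number {
  return (n * (n - 1)) / 2 ** 65;
}

collisionProbability(10_000);    // ~2.7e-12, far below one in a million
collisionProbability(6_000_000); // ~9.8e-7, just under one in a million
]]></artwork>
          </figure>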

          <t> When used in the WebRTC context, each PeerConnection should
          generate a unique device identifier. Each PC-Stream in the
          PeerConnection will get a unique synchronization group identifier,
          and each PC-Track in the PeerConnection will get a unique flow
          identifier. Together these will be used to form the MSID. The MSID
          MUST be included in the SDP offer or answer so that the WebRTC
          connection on the remote side can form the correct structure of remote
          PC-Streams and PC-Tracks. If a WebRTC client receives an Offer with no
          MSID information and no LS group information, it MUST put all the
          remote PC-Tracks into a single PC-Stream. If there is LS group
          information but no MSID, a PC-Stream for each LS group MUST be created
          and the PC-Tracks put in the appropriate PC-Stream.  </t>
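
          <t> A sketch of these fallback rules in TypeScript follows; the
          RemoteTrack shape is illustrative and not part of the W3C API. </t>

          <figure>
            <artwork align="left" alt="" height="" name="" type="" width=""
                     xml:space="preserve"><![CDATA[
// Group remote tracks into streams per the rules above: use the MSID
// synchronization group if present, fall back to the LS group, and
// otherwise put everything into a single default stream.
interface RemoteTrack { mid: string; msidSyncGroup?: number; lsGroup?: string; }

function groupIntoStreams(tracks: RemoteTrack[]): RemoteTrack[][] {
  const streams = new Map<string, RemoteTrack[]>();
  for (const t of tracks) {
    const key =
      t.msidSyncGroup !== undefined ? "msid:" + t.msidSyncGroup :
      t.lsGroup !== undefined       ? "ls:" + t.lsGroup :
      "default";
    if (!streams.has(key)) streams.set(key, []);
    streams.get(key)!.push(t);
  }
  return Array.from(streams.values());
}
]]></artwork>
          </figure>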

          <t> The W3C specs should be updated so that the ID attribute of the
          PC-Stream is the MSID with no flow identifier, and the ID attribute of
          the PC-Track is the MSID. </t>

          <t> In addition, the SDP will attempt to negotiate sending the MSID
          in the RTP using an RTP Header Extension. WebRTC clients SHOULD also
          include the a=ssrc attributes if they know which SSRCs they plan to
          send, but they cannot rely on these not changing, being complete, or
          existing in all offers or answers they receive - particularly when
          working with SIP endpoints. </t>

          <t> When using multiplexing, the SDP MUST be distinct enough that the
          combination of payload type number and SSRC allows for unique
          demultiplexing of all the media on the same transport flow without use
          of the MSID, though the MSID can help in several use cases. </t>

        </section>

      <section anchor="sec-multi-render" title="Multiple Render">
        <t>There are cases - such as a grid of security cameras or thumbnails in
        a video conference - where a receiver is willing to receive and display
        several media flows of video. The proposal here is to create a new media-level
        attribute called multi-render that includes an integer that
        indicates how many streams can be rendered at the same time.</t>

        <t>As an example of an m-block, a system that could display 16 thumbnails at the same
        time and was willing to receive H261 or H264 might offer:</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

 m=video 52886 RTP/AVP 98 99 
 a=multi-render:16 
 a=rtpmap:98 H261/90000
 a=rtpmap:99 H264/90000 
 a=fmtp:99 profile-level-id=4de00a;
        packetization-mode=0; mst-mode=NI-T;
        sprop-parameter-sets={sps0},{pps0};
 ]]></artwork>
        </figure>      
       
        <t>When combining this multi-render feature with multiplexing, the
        answerer might not know all the SSRCs that will be sent to this m-block,
        so it is best to use payload type (PT) numbers that are unique within the
        SDP: the demultiplexing may have to use only the PT if the SSRCs are
        unknown.</t>

        <t> The intention is that the most recently sent SSRCs are the ones that
        are rendered. Some switching MCUs will likely send only the correct
        number of SSRCs and not change the SSRC, but will instead update the CSRC
        as the switching MCU selects a different participant to include in the
        particular video stream. </t>

        <t>The receiver displays, in different windows, the video from the most
        recent 16 SSRCs to send video to the m-block.</t>

        <t>This allows a switching MCU to know how many thumbnail-type streams
        would be appropriate to send to this endpoint.</t>
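
        <t> A minimal sketch, in TypeScript, of the "most recent senders win"
        behavior for an a=multi-render:N m-block; the class and method names
        are illustrative only. </t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
// Tracks the SSRCs seen on a multi-render m-block; the N most recently
// heard-from SSRCs own the available render windows.
class MultiRenderState {
  private order: number[] = []; // SSRCs, most recent first
  constructor(private readonly maxRendered: number) {}

  onPacket(ssrc: number): void {
    const i = this.order.indexOf(ssrc);
    if (i >= 0) this.order.splice(i, 1);
    this.order.unshift(ssrc);
  }

  // SSRCs that currently get a window, in recency order.
  rendered(): number[] {
    return this.order.slice(0, this.maxRendered);
  }
}

// const state = new MultiRenderState(16); // from a=multi-render:16
]]></artwork>
        </figure>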

        <section  anchor="sec-mult-render-example" title="Complex Multi Render Example">

          <t> The following shows a single multi-render m-line that can display
          up to three video streams, send three streams, and support two layers of
          simulcast, with FEC on the high-resolution layer, using bundle. Note that
          only host candidates are provided for the FEC and lower-resolution
          simulcast, so if the device is behind a NAT, those streams will not be
          used. </t>

          <figure>
            <artwork align="left" alt="" height="" name="" type="" width=""
                     xml:space="preserve"><![CDATA[
Offer

v=0	
o=alice 20519 0 IN IP4 0.0.0.0	
s=ULP FEC
t=0 0	
a=ice-ufrag:074c6550	
a=ice-pwd:a28a397a4c3f31747d1ee3474af08a068	
a=fingerprint:sha-1 99:41:49:83:4a:97:0e:1f:ef:6d:f7:
                    c9:c7:70:9d:1f:66:79:a8:07	
c= IN IP4 24.23.204.141	
a=group:BUNDLE vid1 vid2 vid3
a=group:FEC vid1 vid2
a=group:SIMULCAST vid1 vid3

m=video 62537 RTP/SAVPF 96
a=mid:vid1
a=multi-render:3	
a=rtcp-mux
a=msid:12345123451234512345 s:1 f:1
a=rtpmap:96 VP8/90000	
a=fmtp:96 max-fr=30;max-fs=3600;
a=imageattr:96 [x=1280,y=720]
a=candidate:0 1 UDP  2113667327 192.168.1.4 62537 typ host
a=candidate:1 1 UDP 1694302207 24.23.204.141 62537 
                     typ srflx raddr 192.168.1.4 rport 62537
a=candidate:0 2 UDP 2113667326 192.168.1.4 64678 typ host
a=candidate:1 2 UDP 1694302206 24.23.204.141 64678 
                     typ srflx raddr 192.168.1.4 rport 64678

m=video 62541 RTP/SAVPF 97
a=mid:vid2	
a=multi-render:3	
a=rtcp-mux
a=msid:34567345673456734567 s:1 f:2
a=rtpmap:97 ulpfec/90000
a=candidate:0 1 UDP 2113667327 192.168.1.4 62541 typ host	

m=video 62545 RTP/SAVPF 98
a=mid:vid3	
a=multi-render:3	
a=rtcp-mux
a=msid:333444558899000991122 s:1 f:3
a=rtpmap:98 VP8/90000
a=fmtp:98 max-fr=15;max-fs=300;
a=imageattr:98 [x=320,y=240]
a=candidate:0 1 UDP 2113667327 192.168.1.4 62545 typ host	
 ]]></artwork>
        </figure>  

        <t> The following shows an answer to the above offer that accepts
        everything and plans to send video from five different cameras into
        this m-line (but only three at a time). </t>

          <figure>
            <artwork align="left" alt="" height="" name="" type="" width=""
                     xml:space="preserve"><![CDATA[
Answer 

v=0	
o=Bob 20519 0 IN IP4 0.0.0.0	
s=ULP FEC
t=0 0	
a=ice-ufrag:c300d85b	
a=ice-pwd:de4e99bd291c325921d5d47efbabd9a2	
a=fingerprint:sha-1 99:41:49:83:4a:97:0e:1f:ef:6d:f7:
                    c9:c7:70:9d:1f:66:79:a8:07
c= IN IP4 98.248.92.77	
a=group:BUNDLE vid1 vid2 vid3
a=group:FEC vid1 vid2
a=group:SIMULCAST vid1 vid3

m=video 42537 RTP/SAVPF 96
a=mid:vid1	
a=multi-render:3	
a=rtcp-mux
a=msid:54321543215432154321 s:1 f:1 r:12345123451234512345
a=rtpmap:96 VP8/90000	
a=fmtp:96 max-fr=30;max-fs=3600;
a=imageattr:96 [x=1280,y=720]
a=candidate:0 1 UDP 2113667327 192.168.1.7 42537 typ host
a=candidate:1 1 UDP 1694302207 98.248.92.77 42537
                    typ srflx raddr 192.168.1.7 rport 42537
a=candidate:0 2 UDP 2113667326 192.168.1.7 60065 typ host
a=candidate:1 2 UDP 1694302206 98.248.92.77 60065 
                    typ srflx raddr 192.168.1.7 rport 60065

m=video 42539 RTP/SAVPF 97
a=mid:vid2	
a=multi-render:3	
a=rtcp-mux
a=msid:11111122222233333444444 s:1 f:2 r:34567345673456734567
a=rtpmap:97 ulpfec/90000
a=candidate:0 1 UDP 2113667327 192.168.1.7 42539 typ host	

m=video 42537 RTP/SAVPF 98
a=mid:vid3	
a=multi-render:3	
a=rtcp-mux
a=msid:777777888888999999111111 s:1 f:3 r:333444558899000991122
a=rtpmap:98 VP8/90000
a=fmtp:98 max-fr=15;max-fs=300;
a=imageattr:98 [x=320,y=240]
a=candidate:0 1 UDP 2113667327 192.168.1.7 42537 typ host	
a=candidate:1 1 UDP 1694302207 98.248.92.77 42537 
                    typ srflx raddr 192.168.1.7 rport 42537
a=candidate:0 2 UDP 2113667326 192.168.1.7 60065 typ host
a=candidate:1 2 UDP 1694302206 98.248.92.77 60065 
                    typ srflx raddr 192.168.1.7 rport 60065

 ]]></artwork>
        </figure>  


        <t> The following shows an alternative answer to the same offer from a client
        that does not support simulcast, FEC, bundle, or msid. </t>

          <figure>
            <artwork align="left" alt="" height="" name="" type="" width=""
                     xml:space="preserve"><![CDATA[
Answer 

v=0	
o=Bob 20519 0 IN IP4 0.0.0.0	
s=ULP FEC
t=0 0	
a=ice-ufrag:c300d85b	
a=ice-pwd:de4e99bd291c325921d5d47efbabd9a2	
a=fingerprint:sha-1 99:41:49:83:4a:97:0e:1f:ef:6d:f7:
                    c9:c7:70:9d:1f:66:79:a8:07
c= IN IP4 98.248.92.77	

m=video 42537 RTP/SAVPF 96
a=mid:vid1	
a=rtcp-mux
a=recvonly
a=rtpmap:96 VP8/90000	
a=fmtp:96 max-fr=30;max-fs=3600;
a=candidate:0 1 UDP 2113667327 192.168.1.7 42537 typ host
a=candidate:1 1 UDP 1694302207 98.248.92.77 42537 
                    typ srflx raddr 192.168.1.7 rport 42537
a=candidate:0 2 UDP 2113667326 192.168.1.7 60065 typ host
a=candidate:1 2 UDP 1694302206 98.248.92.77 60065 
                    typ srflx raddr 192.168.1.7 rport 60065

m=video 0 RTP/SAVPF 97
a=mid:vid2	
a=rtcp-mux
a=rtpmap:97 ulpfec/90000

m=video 0 RTP/SAVPF 98
a=mid:vid3	
a=rtcp-mux
a=rtpmap:98 VP8/90000
a=fmtp:98 max-fr=15;max-fs=300;
 ]]></artwork>
        </figure>  



        </section>

      </section>

      <section anchor="sec-dirt" title="Dirty Little Secrets">
        <t>If SDP offers/answers are of type AVP or AVPF but contain a crypto or
        fingerprint attribute, they should be treated as if they were SAVP or
        SAVPF respectively. The Answer should have the same type as the Offer,
        but for all practical purposes the implementation should treat it as the
        secure variant.</t>

        <t>If SDP offers/answers are of type AVP or SAVP but contain an a=rtcp-fb
        attribute, they should be treated as if they were AVPF or SAVPF
        respectively. The SDP Answer should have the same type as the Offer, but
        for all practical purposes the implementation should treat it as the
        feedback variant.</t>

        <t>If an SDP Offer has both a fingerprint and a crypto attribute, it
        means the Offerer supports both DTLS-SRTP and SDES; the answerer should
        select one and return an Answer with only an attribute for the selected
        keying mechanism.</t>

        <t> These may not look appealing but the alternative is to make cap-neg
        mandatory to implement in WebRTC. </t>
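
        <t> A sketch of the resulting profile inference, in TypeScript; the
        function takes the profile token from the m-line (after "RTP/") plus
        the set of attribute names seen, and is illustrative only. </t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
// Infer the profile an implementation should behave as, per the rules
// above, from the offered profile and the attributes present.
function effectiveProfile(profile: string, attrs: Set<string>): string {
  let secure = profile.startsWith("SAVP");
  let feedback = profile.endsWith("F");
  if (attrs.has("crypto") || attrs.has("fingerprint")) secure = true;
  if (attrs.has("rtcp-fb")) feedback = true;
  return (secure ? "S" : "") + "AVP" + (feedback ? "F" : "");
}

// effectiveProfile("AVP",  new Set(["fingerprint"]))       => "SAVP"
// effectiveProfile("SAVP", new Set(["rtcp-fb"]))           => "SAVPF"
// effectiveProfile("AVP",  new Set(["crypto", "rtcp-fb"])) => "SAVPF"
]]></artwork>
        </figure>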
      </section>

      <section anchor="sec-issues" title="Open Issues">
      
        <t>What to do with unrecognized media received at the W3C PeerConnection
        level? The suggestion is that it creates a new track in whatever stream the MSID
        would indicate, if present, and in the default stream if there is no MSID header
        extension in the RTP. </t>

      </section>

      <section anchor="sec-confusions" title="Confusions">
      
        <t> You can decrypt DTLS-SRTP media before receiving an answer, but you
        cannot determine whether it is secure until you have the fingerprint and
        have verified it. </t>

        <t> You can use RTCP-FB to do things like PLI without signaling the
        SSRC. The PLI packet takes the SSRC of the media sender from the incoming
        media that it is trying to signal the PLI for. </t>
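
        <t> For illustration, a TypeScript sketch of building a PLI (RFC 4585
        payload-specific feedback, PT=206, FMT=1) where the media-source SSRC
        is simply copied from the incoming RTP stream: </t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
// Build an RTCP PLI packet. Nothing about the target SSRC needs to be
// signaled in SDP; it is learned from the received media itself.
function buildPli(ownSsrc: number, mediaSsrc: number): Uint8Array {
  const buf = new Uint8Array(12);
  const view = new DataView(buf.buffer);
  view.setUint8(0, 0x81);              // V=2, P=0, FMT=1 (PLI)
  view.setUint8(1, 206);               // PT=206 (payload-specific feedback)
  view.setUint16(2, 2);                // length in 32-bit words, minus one
  view.setUint32(4, ownSsrc >>> 0);    // packet sender's SSRC
  view.setUint32(8, mediaSsrc >>> 0);  // SSRC taken from incoming media
  return buf;
}
]]></artwork>
        </figure>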

      </section>


    </section>

    <section anchor="sec-examples" title="Examples">


<!-- 
      <t> 2 camera, 2 simulcast streams </t>

  <figure>
        <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer

v=0	
o=alice 20519 0 IN IP4 0.0.0.0	
s=	
t=0 0	
a=ice-ufrag:074c6550	
a=ice-pwd:a28a397a4c3f31747d1ee3474af08a068	
a=fingerprint:sha-1 99:41:49:83:4a:97:0e:1f:ef:6d:f7:
                    c9:c7:70:9d:1f:66:79:a8:07	
c= IN IP4 24.23.204.141	
a=group:BUNDLE vid1 vid2 

m=video 62537 RTP/SAVPF 96 98 99 100
a=mid:vid1	
a=rtpmap:96 H264/90000	
a=rtpmap:98 H261/90000
a=rtpmap:99 rtx/90000
a=fmtp:99 apt=96;rtx-time=3000
a=rtpmap:100 rtx/90000
a=fmtp:100 apt=98;rtx-time=3000
a=ssrc:12345 cname:user1@host1.example.com	
a=ssrc:12345 imageattr:* [x=1280,y=720]	
a=ssrc:12345 framerate:30	
a=ssrc:67890 cname:user1@host1.example.com	
a=ssrc:67890 imageattr:* [x=320,y=240]	
a=ssrc:67890 framerate:15	
a=sendonly	
a=candidate:0 1 UDP 2113667327 192.168.1.4 62537 typ host	

m=video 62539 RTP/SAVPF 97
a=mid:vid2 	
c= IN IP4 24.23.204.141	
a=rtpmap:97 H264/90000	
a=ssrc:54321 cname:user1@host1.example.com	
a=ssrc:54321 imageattr:* [x=1080,y=720]	
a=ssrc:54321 framerate:30	
a=ssrc:98760 cname:user1@host1.example.com	
a=ssrc:98760 imageattr:* [x=320,y=240]	
a=ssrc:98760 framerate:15	
a=candidate:0 1 UDP 2113667327 192.168.1.4 62539 typ host	

]]></artwork>
      </figure>

 <figure>
        <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Answer 

v=0	
o=bob 16833 0 IN IP4 0.0.0.0	
s=	
t=0 0	
a=ice-ufrag:c300d85b	
a=ice-pwd:de4e99bd291c325921d5d47efbabd9a2	
a=fingerprint:sha-1 99:41:49:83:4a:97:0e:1f:ef:6d:
                    f7:c9:c7:70:9d:1f:66:79:a8:07	
c= IN IP4 98.248.92.771	
a=rtcp-mux	
a=group:BUNDLE vid1 vid2 

m=video 55123 RTP/SAVPF 96
a=mid:vid1	
a=rtpmap:96 H264/90000	
a=remote-ssrc:12345 recv:on
a=remote-ssrc:67890 recv:off	
a=recvonly	
a=candidate:0 1 UDP 2113667327 192.168.1.7 55123 typ host	
	
m=video 55123 RTP/SAVPF 97
a=mid:vid2	
a=rtpmap:97 H264/90000	
a=remote-ssrc:54321 recv:off
a=remote-ssrc:67890 recv:on
a=ssrc:64123 cname:user2@host2.example.com	
a=ssrc:64123 imageattr:* [x=320,y=240]	
a=ssrc:64123 framerate:15	
a=candidate:0 1 UDP 2113667327 192.168.1.7 55123 typ host	
a=rtcp-fb:120 nack pli	
a=rtcp-fb:120 ccm fir	

]]></artwork>
      </figure>


 <figure>
        <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Answer 

v=0	
o=bob 16833 0 IN IP4 0.0.0.0	
s=	
t=0 0	
a=ice-ufrag:c300d85b	
a=ice-pwd:de4e99bd291c325921d5d47efbabd9a2	
a=fingerprint:sha-1 99:41:49:83:4a:97:0e:1f:ef:6d:
                    f7:c9:c7:70:9d:1f:66:79:a8:07	
c= IN IP4 98.248.92.771	
a=rtcp-mux	

m=video 63130 RTP/SAVPF 96	
a=rtpmap:96 H264/90000	
a=candidate:0 1 UDP 2113667327 192.168.1.7 63130 typ host	
a=recvonly	

m=video 63133 RTP/SAVPF 97	
a=rtpmap:97 H264/90000	
a=candidate:0 1 UDP 2113667326 192.168.1.7 63133 typ host	
a=rtcp-fb:120 nack pli	
a=rtcp-fb:120 ccm fir
	
]]></artwork>
      </figure>

-->

      <t>Example of a video client joining a video conference. The client can
      produce and receive two streams of video, one from the slides and the
      other of the person. The video of the person is synchronized with the
      audio. In addition, the client can display up to 10 thumbnails of video.
      The main video is simulcast at HD size and a thumbnail size.</t>

      <figure>
        <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer

     v=0
     o=alice 2890844526 2890844527 IN IP4 host.example.com
     s=
     c=IN IP4 host.atlanta.example.com
     t=0 0
     a=group:LS 1 2 3
     a=group:SIMULCAST 2 3

     m=audio 49170 RTP/AVP 96      <- This is the Audio 
     a=mid:1
     a=rtpmap:96 iLBC/8000
     a=content:main

     m=video 51372 RTP/AVP 97      <- This is the main video 
     a=mid:2
     a=rtpmap:97 VP8/90000
     a=fmtp:97 max-fr=30;max-fs=3600;
     a=imageattr:97 [x=1080,y=720]
     a=content:main

     m=video 51372 RTP/AVP 98       <- This is the slides 
     a=mid:5
     a=rtpmap:98 VP8/90000
     a=fmtp:98 max-fr=30;max-fs=3600;
     a=imageattr:98 [x=1080,y=720]
     a=content:slides

     m=video 51372 RTP/AVP 99       <- This is the simulcast of main
     a=mid:3
     a=rtpmap:99 VP8/90000
     a=fmtp:99 max-fr=15;max-fs=300;
     a=imageattr:99 [x=320,y=240]

     m=video 51372 RTP/AVP 100      <- This is the 10 thumbnails 
     a=mid:4
     a=multi-render:10
     a=recvonly
     a=rtpmap:100 VP8/90000
     a=fmtp:100 max-fr=15;max-fs=300;
     a=imageattr:100 [x=320,y=240]
]]></artwork>
      </figure>

<!--    
      <t>
        SDP Answer from the server indicating two video streams with the speaker
        and the slides. Also signaled is the lip-sync for speakers audio and
        video streams.
      </t>
      
      <figure>
        <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Answer 

TODO - fix this ans 
     v=0
     o=bob 2808844564 2808844565 IN IP4 host.biloxi.example.com
     s=
     c=IN IP4 host.biloxi.example.com
     t=0 0
     a=group:LS a,b

     m=audio 49172 RTP/AVP 99
     a=mid:a
     a=rtpmap:99 iLBC/8000

     m=video 51374 RTP/AVP 96
     a=mid:b
     a=content:speaker
     a=rtpmap:96 H264/90000

     m=video 51376 RTP/AVP 97
     a=mid:c
     a=content:slides
     a=rtpmap:97 H264/90000

]]></artwork>
      </figure>
-->      
      <t>Example of a three-screen video endpoint connecting to a two-screen
      system which ends up selecting the left and middle screens.</t>
  
      <figure>
        <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer

     v=0
     o=alice 2890844526 2890844527 IN IP4 host.atlanta.example.com
     s=
     c=IN IP4 host.atlanta.example.com
     t=0 0
     a=rtcp-fb

     m=audio 49100 RTP/SAVPF 96
     a=rtpmap:96 iLBC/8000

     m=video 49102 RTP/SAVPF 97
     a=content:main
     a=rtpmap:97 H261/90000

     m=video 49104 RTP/SAVPF 98
     a=content:left
     a=rtpmap:98 H261/90000
    
     m=video 49106 RTP/SAVPF 99
     a=content:right
     a=rtpmap:99 H261/90000
]]></artwork>
      </figure>

   <figure>
        <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[  
Answer 

     v=0
     o=bob 2808844564 2808844565 IN IP4 host.biloxi.example.com
     s=
     c=IN IP4 host.biloxi.example.com
     t= 0 0
     a=rtcp-fb

     m=audio 50100 RTP/SAVPF 96
     a=rtpmap:96 iLBC/8000

     m=video 50102 RTP/SAVPF 97
     a=content:main
     a=rtpmap:97 H261/90000

     m=video 50104 RTP/SAVPF 98
     a=content:left
     a=rtpmap:98 H261/90000
    
     m=video 0 RTP/SAVPF 99
     a=content:right
     a=rtpmap:99 H261/90000
    
]]></artwork>
      </figure>

      <t>Example of a client that supports SRTP-DTLS and SDES connecting
      to a client that supports only SDES.</t>

      <figure>
        <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

     v=0
     o=alice 2890844526 2890844527 IN IP4 host.atlanta.example.com
     s=
     c=IN IP4 host.atlanta.example.com
     t=0 0

     m=audio 49170 RTP/AVP 99
     a=fingerprint:sha-1 99:41:49:83:4a:97:0e:1f:ef:6d
                         :f7:c9:c7:70:9d:1f:66:79:a8:07
     a=crypto:1 AES_CM_128_HMAC_SHA1_80
       inline:d0RmdmcmVCspeEc3QGZiNWpVLFJhQX1cfHAwJSoj|2^20|1:32
     a=rtpmap:99 iLBC/8000

     m=video 51372 RTP/AVP 96
     a=fingerprint:sha-1 92:81:49:83:4a:23:0a:0f:1f:9d:f7:
                          c0:c7:70:9d:1f:66:79:a8:07
     a=crypto:1 AES_CM_128_HMAC_SHA1_32
       inline:NzB4d1BINUAvLEw6UzF3WSJ+PSdFcGdUJShpX1Zj|2^20|1:32
     a=rtpmap:96 H261/90000
]]></artwork>
      </figure>

   <figure>
        <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[      
Answer 

     v=0
     o=bob 2808844564 2808844565 IN IP4 host.biloxi.example.com
     s=
     c=IN IP4 host.biloxi.example.com
     t=0 0

     m=audio 49172 RTP/AVP 99
     a=crypto:1 AES_CM_128_HMAC_SHA1_80
       inline:d0RmdmcmVCspeEc3QGZiNWpVLFJhQX1cfHAwJSoj|2^20|1:32
     a=rtpmap:99 iLBC/8000

     m=video 51374 RTP/AVP 96
     a=crypto:1 AES_CM_128_HMAC_SHA1_80
       inline:d0RmdmcmVCspeEc3QGZiNWpVLFJhQX1cfHAwJSoj|2^20|1:32
     a=rtpmap:96 H261/90000

]]></artwork>
      </figure>
    </section>

    <section anchor="sec-tasks" title="Tasks">
      
      <t> This section outlines work that needs to be done in various
      specifications to make the proposal here actually happen. </t>

      <t>Tasks:
      <list style="numbers">
        
  
        <t> Extend the W3C API to be able to set and read the CSRC list for a
        PC-Track. </t>
        
        <t> Extend the W3C API to be able to read the SSRC of the last RTP packet
        received.  </t>
        
        <t> Write an RTP Header Extension draft to carry the MSID.  </t>

        <t> Fix up MSID draft to align with this proposal.  </t>
        
        <t> Write a draft to add left and right to the SDP content attribute. Add
        support to the W3C API to read and write this on a track.  </t>

        <t> Write a draft on an SDP "SIMULCAST" group to signal that multiple m-blocks
        are simulcast of the same video content.  </t>
        
        <t> Complete the bundle draft. </t>

        <t> Provide guidance on ways to use SDP to reduce glare when adding
        one-way media streams. </t>

        <t> Write a draft defining the multi-render attribute. </t>

        <t> Change W3C API to say that a PC-Track can be in only one
        PeerConnection or make an object inside the PeerConnection for each
        track in the PC that can be used to set constraints and settings and get
        information related to the RTP flow. </t>

        <t> Sort out how to tell a PC-Track, particularly one meant for
        receiving information, that it can do simulcast, layered coding, RTX, FEC,
        etc. </t>

      </list>
      </t>
      
    </section>

    <section anchor="sec-sec" title="Security Considerations">
      <t>TBD</t>
    </section>

    <section title="IANA Considerations">
      <t>This document requires no actions from IANA.</t>
    </section>

    <section title="Acknowledgments">
      <t> I would like to thank Suhas Nandakumar, Eric Rescorla, Charles Eckel,
      Mo Zanaty, and Lyndsay Campbell for help with this draft. </t>
    </section>

    <section title="Open Issues">
      <t> The overall solution is complicated considerably by the fact that
      WebRTC allows a PC-Track to be used in more than one PC-Stream but
      requires only one copy of the RTP data for the track to be sent.  I am not
      aware of any use case for this and think it should be removed. If a
      PC-Track needs to be synchronized with two different things, they should
      all go in one PC-Stream instead of two. </t>
    </section>

    <section anchor="sec-existing" title="Existing SDP">
      <t>The following shows some examples of SDP today that any new system
      needs to be able to receive and work with in a backwards compatible
      way.</t>

      <section anchor="sec-mulenc" title="Multiple Encodings">
        <t>Multiple codecs accepted on the same m-line <xref target="RFC4566"></xref>.</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

     v=0
     o=alice 2890844526 2890844527 IN IP4 host.atlanta.example.com
     s=
     c=IN IP4 host.atlanta.example.com
     t=0 0

     m=audio 49170 RTP/AVP 99
     a=rtpmap:99 iLBC/8000

     m=video 51372 RTP/AVP 31 32
     a=rtpmap:31 H261/90000
     a=rtpmap:32 MPV/90000
    
]]></artwork>
        </figure>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[

Answer 

     v=0
     o=bob 2808844564 2808844565 IN IP4 host.biloxi.example.com
     s=
     c=IN IP4 host.biloxi.example.com
     t=0 0

     m=audio 49172 RTP/AVP 99
     a=rtpmap:99 iLBC/8000

     m=video 51374 RTP/AVP 31 32
     a=rtpmap:31 H261/90000
     a=rtpmap:32 MPV/90000
    
]]></artwork>
        </figure>

        <t>This means that a sender can switch back and forth between H261 and
        MPV without any further signaling. The receiver MUST be capable of
        receiving both formats. At any point in time, only one video format is
        sent, thus implying that only one video is meant to be displayed. </t>

      </section>

      <section anchor="sec-fec" title="Forward Error Correction">
        <t>Multiple m-blocks, identified by their respective "mid" values, grouped to
        indicate FEC operation using the FEC-FR semantics defined in
        <xref target="RFC5956"></xref>.</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

    v=0
    o=ali 1122334455 1122334466 IN IP4 fec.example.com
    s=Raptor RTP FEC Example
    t=0 0
    a=group:FEC-FR S1 R1

    m=video 30000 RTP/AVP 100
    c=IN IP4 233.252.0.1/127
    a=rtpmap:100 MP2T/90000
    a=fec-source-flow: id=0
    a=mid:S1

    m=application 30000 RTP/AVP 110
    c=IN IP4 233.252.0.2/127
    a=rtpmap:110 raptorfec/90000
    a=fmtp:110 raptor-scheme-id=1; Kmax=8192; T=128;
    P=A; repair-window=200000
    a=mid:R1
    
]]></artwork>
        </figure>
      </section>

      <section anchor="sec-samecodecdiffsettings"
               title="Same Video Codec With Different Settings">
        <t>This example shows a single codec, say H.264, signaled with
        different settings <xref target="RFC4566"></xref>.</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

    v=0

    m=video 49170 RTP/AVP 100 99 98
    a=rtpmap:98 H264/90000
    a=fmtp:98 profile-level-id=42A01E; packetization-mode=0;
    sprop-parameter-sets=Z0IACpZTBYmI,aMljiA==
    a=rtpmap:99 H264/90000
    a=fmtp:99 profile-level-id=42A01E; packetization-mode=1;
    sprop-parameter-sets=Z0IACpZTBYmI,aMljiA==
    a=rtpmap:100 H264/90000
    a=fmtp:100 profile-level-id=42A01E; packetization-mode=2;
    sprop-parameter-sets=Z0IACpZTBYmI,aMljiA==;
    sprop-interleaving-depth=45; sprop-deint-buf-req=64000;
    sprop-init-buf-time=102478; deint-buf-cap=128000
]]></artwork>
        </figure>
      </section>

      <section anchor="sec-diffcodecdiffresolutions"
               title="Different Video Codecs With Different Resolutions Formats">
        <t>The SDP below shows m-blocks illustrating various ways to specify resolutions
        for the video codecs signaled <xref target="RFC4566"></xref>.</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

    m=video 49170 RTP/AVP 31
    a=rtpmap:31 H261/90000
    a=fmtp:31 CIF=2;QCIF=1;D=1
    
    m=video 49172 RTP/AVP 99
    a=rtpmap:99 jpeg2000/90000
    a=fmtp:99 sampling=YCbCr-4:2:0;width=128;height=128

    m=video 49174 RTP/AVP 96
    a=rtpmap:96 VP8/90000
    a=fmtp:96 max-fr=30;max-fs=3600
    a=imageattr:96 recv [x=1280,y=720]

]]></artwork>
        </figure>
      </section>

      <section anchor="sec-lipsync" title="Lip Sync Group">
        <t><xref target="RFC5888"></xref> grouping semantics for Lip
        Synchronization between audio and video</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

    v=0
    o=Laura 289083124 289083124 IN IP4 one.example.com
    c=IN IP4 192.0.2.1
    t=0 0
    a=group:LS 1 2

    m=audio 30000 RTP/AVP 0
    a=mid:1

    m=video 30002 RTP/AVP 31
    a=mid:2
]]></artwork>
        </figure>
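
        <t>Per <xref target="RFC5888"></xref>, an answerer that understands
        the LS semantics mirrors the group in its answer, reusing the "mid"
        values from the offer. A hypothetical accepting answer (name,
        addresses, and ports are illustrative):</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Answer (hypothetical sketch)

    v=0
    o=Bob 289083125 289083125 IN IP4 two.example.com
    c=IN IP4 192.0.2.2
    t=0 0
    a=group:LS 1 2

    m=audio 20000 RTP/AVP 0
    a=mid:1

    m=video 20002 RTP/AVP 31
    a=mid:2
]]></artwork>
        </figure>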
      </section>

      <section anchor="sec-bfcp" title="BFCP">
       <t> <xref target="RFC4583"></xref> defines SDP format for Binary
       Floor Control Protocol (BFCP) as shown below</t> 
        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

    m=application 50000 TCP/TLS/BFCP *
    a=setup:passive
    a=connection:new
    a=fingerprint:SHA-1 \
    4A:AD:B9:B1:3F:82:18:3B:54:02:12:DF:3E:5D:49:6B:19:E5:7C:AB
    a=floorctrl:s-only
    a=confid:4321
    a=userid:1234
    a=floorid:1 m-stream:10
    a=floorid:2 m-stream:11

    m=audio 50002 RTP/AVP 0
    a=label:10

    m=video 50004 RTP/AVP 31
    a=label:11
  
]]></artwork>
        </figure>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[    
Answer

    m=application 9 TCP/TLS/BFCP *
    a=setup:active
    a=connection:new
    a=fingerprint:SHA-1 \
    3D:B4:7B:E3:CC:FC:0D:1B:5C:31:B6:A3:E9:E7:8C:37:C3:0A:B6:28
    a=floorctrl:c-only

    m=audio 55000 RTP/AVP 0

    m=video 55002 RTP/AVP 31
    
]]></artwork>
        </figure>
      </section>

    <section anchor="sec-rtx2" title="Retransmission">
    <t>
      The SDP below shows the signaling for retransmission of the original
      media stream(s) as defined in <xref target="RFC4588"></xref>.
    </t>
    <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer

   v=0
   o=mascha 2980675221 2980675778 IN IP4 host.example.net
   c=IN IP4 192.0.2.0
   a=group:FID 1 2
   a=group:FID 3 4

   m=audio 49170 RTP/AVPF 96
   a=rtpmap:96 AMR/8000
   a=fmtp:96 octet-align=1
   a=rtcp-fb:96 nack
   a=mid:1

   m=audio 49172 RTP/AVPF 97
   a=rtpmap:97 rtx/8000
   a=fmtp:97 apt=96;rtx-time=3000
   a=mid:2

   m=video 49174 RTP/AVPF 98
   a=rtpmap:98 MP4V-ES/90000
   a=rtcp-fb:98 nack
   a=fmtp:98 profile-level-id=8;config=01010000012000884006682C209\
      0A21F
   a=mid:3

   m=video 49176 RTP/AVPF 99
   a=rtpmap:99 rtx/90000
   a=fmtp:99 apt=98;rtx-time=3000
   a=mid:4
  
]]></artwork>
        </figure>
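
        <t>A hypothetical answer accepting the retransmission streams would
        mirror the FID grouping with the same "mid" values. The sketch below
        shows only the audio pair, with the video m-blocks being analogous
        (name, addresses, and ports are illustrative).</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Answer (hypothetical sketch; video m-blocks analogous)

   v=0
   o=sasha 2980675222 2980675779 IN IP4 host.example.org
   c=IN IP4 192.0.2.1
   a=group:FID 1 2

   m=audio 47000 RTP/AVPF 96
   a=rtpmap:96 AMR/8000
   a=fmtp:96 octet-align=1
   a=rtcp-fb:96 nack
   a=mid:1

   m=audio 47002 RTP/AVPF 97
   a=rtpmap:97 rtx/8000
   a=fmtp:97 apt=96;rtx-time=3000
   a=mid:2
]]></artwork>
        </figure>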

        <t> Note that the RTX RFC also contains the following SSRC-multiplexing
        example, but it is intended for declarative use of SDP, as that RFC
        provides no way to accept, reject, or otherwise negotiate this in an
        offer/answer SDP usage. </t>

       <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
SDP

   v=0
   o=mascha 2980675221 2980675778 IN IP4 host.example.net
   c=IN IP4 192.0.2.0

   m=video 49170 RTP/AVPF 96 97
   a=rtpmap:96 MP4V-ES/90000
   a=rtcp-fb:96 nack
   a=fmtp:96 profile-level-id=8;config=01010000012000884006682C209\
     0A21F
   a=rtpmap:97 rtx/90000
   a=fmtp:97 apt=96;rtx-time=3000
  
]]></artwork>
        </figure>


    </section>

      <section anchor="sec-llc" title="Layered coding dependency">
        <t><xref target="RFC5583"></xref> "depend" attribute is shown here to
        indicate dependency between layers represented by the individual
        m-blocks</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

    a=group:DDP L1 L2 L3

    m=video 20000 RTP/AVP 96 97 98
    a=rtpmap:96 H264/90000
    a=fmtp:96 profile-level-id=4de00a; packetization-mode=0;
      mst-mode=NI-T; sprop-parameter-sets={sps0},{pps0};
    a=rtpmap:97 H264/90000
    a=fmtp:97 profile-level-id=4de00a; packetization-mode=1;
      mst-mode=NI-TC; sprop-parameter-sets={sps0},{pps0};
    a=rtpmap:98 H264/90000
    a=fmtp:98 profile-level-id=4de00a; packetization-mode=2;
      mst-mode=I-C; init-buf-time=156320;
      sprop-parameter-sets={sps0},{pps0};
    a=mid:L1

    m=video 20002 RTP/AVP 99 100
    a=rtpmap:99 H264-SVC/90000
    a=fmtp:99 profile-level-id=53000c; packetization-mode=1;
      mst-mode=NI-T; sprop-parameter-sets={sps1},{pps1};
    a=rtpmap:100 H264-SVC/90000
    a=fmtp:100 profile-level-id=53000c; packetization-mode=2;
      mst-mode=I-C; sprop-parameter-sets={sps1},{pps1};
    a=mid:L2
    a=depend:99 lay L1:96,97; 100 lay L1:98

    m=video 20004 RTP/AVP 101
    a=rtpmap:101 H264-SVC/90000
    a=fmtp:101 profile-level-id=53001F; packetization-mode=1;
      mst-mode=NI-T; sprop-parameter-sets={sps2},{pps2};
    a=mid:L3
    a=depend:101 lay L1:96,97 L2:99
]]></artwork>
        </figure>
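
        <t>One possible shape of an answer, sketched here as an assumption
        under the offer/answer rules of <xref target="RFC3264"></xref> and
        <xref target="RFC5583"></xref>: an answerer that cannot decode the
        third layer rejects that m-block by setting its port to zero and drops
        it from the DDP group (ports are illustrative and fmtp details are
        omitted for brevity).</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Answer (hypothetical sketch)

    a=group:DDP L1 L2

    m=video 30000 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=mid:L1

    m=video 30002 RTP/AVP 99
    a=rtpmap:99 H264-SVC/90000
    a=mid:L2
    a=depend:99 lay L1:96

    m=video 0 RTP/AVP 101
    a=mid:L3
]]></artwork>
        </figure>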
      </section>

      <section anchor="sec-ssrc" title="SSRC Signaling">
      <t>
      The <xref target="RFC5576"></xref> "ssrc" attribute is shown here to
      signal the synchronization sources within a given RTP session.
      </t>
        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

    m=video 49170 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=ssrc:12345 cname:user@example.com
    a=ssrc:67890 cname:user@example.com
]]></artwork>
        </figure>

        <t>This indicates what the sender intends to send. It is at best a
        guess: after an SSRC collision, the advertised values no longer match
        what is actually sent. It does not allow the receiver to reject an
        individual stream, and it does not mean that both streams are
        displayed at the same time.</t>
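
        <t>If the sender later changes an SSRC, for example after detecting a
        collision, the receiver can still correlate the new source with the
        old one because the cname stays stable. A hypothetical update (the new
        SSRC value is illustrative):</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Updated offer (hypothetical sketch; SSRC 12345 replaced)

    m=video 49170 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=ssrc:13579 cname:user@example.com
    a=ssrc:67890 cname:user@example.com
]]></artwork>
        </figure>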
      </section>

      <section anchor="sec-content" title="Content Signaling">
        <t><xref target="RFC4796"></xref> "content" attribute is used to
        specify the semantics of content represented by the video streams.</t>

        <figure>
          <artwork align="left" alt="" height="" name="" type="" width=""
                   xml:space="preserve"><![CDATA[
Offer 

    v=0
    o=Alice 292742730 29277831 IN IP4 131.163.72.4
    s=Second lecture from information technology
    c=IN IP4 131.164.74.2
    t=0 0

    m=video 52886 RTP/AVP 31
    a=rtpmap:31 H261/90000
    a=content:slides

    m=video 53334 RTP/AVP 31
    a=rtpmap:31 H261/90000
    a=content:speaker

    m=video 54132 RTP/AVP 31
    a=rtpmap:31 H261/90000
    a=content:sl
]]></artwork>
        </figure>
      </section>
    </section>

  </middle>

  <back>
    <references title="Normative References">
      <reference anchor="RFC3264">
        <front>
          <title>An Offer/Answer Model with Session Description Protocol
          (SDP)</title>

          <author fullname="J. Rosenberg" initials="J." surname="Rosenberg">
            <organization></organization>
          </author>

          <author fullname="H. Schulzrinne" initials="H."
                  surname="Schulzrinne">
            <organization></organization>
          </author>

          <date month="June" year="2002" />
        </front>

        <seriesInfo name="RFC" value="3264" />

        <format octets="60854"
                target="http://www.rfc-editor.org/rfc/rfc3264.txt" type="TXT" />
      </reference>

      <reference anchor="RFC2119">
        <front>
          <title abbrev="RFC Key Words">Key words for use in RFCs to Indicate
          Requirement Levels</title>

          <author fullname="Scott Bradner" initials="S." surname="Bradner">
            <organization>Harvard University</organization>

            <address>
              <postal>
                <street>1350 Mass. Ave.</street>

                <street>Cambridge</street>

                <street>MA 02138</street>
              </postal>

              <phone>+1 617 495 3864</phone>

              <email>sob@harvard.edu</email>
            </address>
          </author>

          <date month="March" year="1997" />

          <area>General</area>

          <keyword>keyword</keyword>
        </front>

        <seriesInfo name="BCP" value="14" />

        <seriesInfo name="RFC" value="2119" />

        <format octets="4723"
                target="http://www.rfc-editor.org/rfc/rfc2119.txt" type="TXT" />

        <format octets="17491"
                target="http://xml.resource.org/public/rfc/html/rfc2119.html"
                type="HTML" />

        <format octets="5777"
                target="http://xml.resource.org/public/rfc/xml/rfc2119.xml"
                type="XML" />
      </reference>

      <reference anchor="RFC4566">
        <front>
          <title>SDP: Session Description Protocol</title>

          <author fullname="M. Handley" initials="M." surname="Handley">
            <organization></organization>
          </author>

          <author fullname="V. Jacobson" initials="V." surname="Jacobson">
            <organization></organization>
          </author>

          <author fullname="C. Perkins" initials="C." surname="Perkins">
            <organization></organization>
          </author>

          <date month="July" year="2006" />
        </front>

        <seriesInfo name="RFC" value="4566" />

        <format octets="108820"
                target="http://www.rfc-editor.org/rfc/rfc4566.txt" type="TXT" />
      </reference>
    </references>

    <references title="Informative References">
      <reference anchor="webrtc-api">
        <front>
          <title>WebRTC 1.0: Real-time Communication Between Browsers</title>

          <author fullname="W3C editors"
                  surname="Bergkvist, Burnett, Jennings, Narayanan">
            <organization>W3C</organization>
          </author>

          <date day="4" month="October" year="2011" />
        </front>

        <annotation>Available at
        http://dev.w3.org/2011/webrtc/editor/webrtc.html</annotation>
      </reference>

<!--
      <reference anchor="I-D.ietf-rtcweb-use-cases-and-requirements">
        <front>
          <title>Web Real-Time Communication Use-cases and
          Requirements</title>

          <author fullname="Christer Holmberg" initials="C" surname="Holmberg">
            <organization></organization>
          </author>

          <author fullname="Stefan Hakansson" initials="S" surname="Hakansson">
            <organization></organization>
          </author>

          <author fullname="Goran Eriksson" initials="G" surname="Eriksson">
            <organization></organization>
          </author>

          <date day="4" month="October" year="2011" />

          <abstract>
            <t>This document describes web based real-time communication
            use-cases. Based on the use-cases, the document also derives
            requirements related to the browser, and the API used by web
            applications to request and control media stream services provided
            by the browser.</t>
          </abstract>
        </front>

        <seriesInfo name="Internet-Draft"
                    value="draft-ietf-rtcweb-use-cases-and-requirements-10" />

        <format target="http://www.ietf.org/internet-drafts/draft-ietf-rtcweb-use-cases-and-requirements-10.txt"
                type="TXT" />
      </reference>
-->

      <reference anchor="RFC3550">
        <front>
          <title>RTP: A Transport Protocol for Real-Time Applications</title>

          <author fullname="H. Schulzrinne" initials="H."
                  surname="Schulzrinne">
            <organization></organization>
          </author>

          <author fullname="S. Casner" initials="S." surname="Casner">
            <organization></organization>
          </author>

          <author fullname="R. Frederick" initials="R." surname="Frederick">
            <organization></organization>
          </author>

          <author fullname="V. Jacobson" initials="V." surname="Jacobson">
            <organization></organization>
          </author>

          <date month="July" year="2003" />
        </front>

        <seriesInfo name="STD" value="64" />

        <seriesInfo name="RFC" value="3550" />

        <format octets="259985"
                target="http://www.rfc-editor.org/rfc/rfc3550.txt" type="TXT" />

        <format octets="630740"
                target="http://www.rfc-editor.org/rfc/rfc3550.ps" type="PS" />

        <format octets="504117"
                target="http://www.rfc-editor.org/rfc/rfc3550.pdf" type="PDF" />
      </reference>

      <reference anchor="RFC3261">
        <front>
          <title>SIP: Session Initiation Protocol</title>

          <author fullname="J. Rosenberg" initials="J." surname="Rosenberg">
            <organization></organization>
          </author>

          <author fullname="H. Schulzrinne" initials="H."
                  surname="Schulzrinne">
            <organization></organization>
          </author>

          <author fullname="G. Camarillo" initials="G." surname="Camarillo">
            <organization></organization>
          </author>

          <author fullname="A. Johnston" initials="A." surname="Johnston">
            <organization></organization>
          </author>

          <author fullname="J. Peterson" initials="J." surname="Peterson">
            <organization></organization>
          </author>

          <author fullname="R. Sparks" initials="R." surname="Sparks">
            <organization></organization>
          </author>

          <author fullname="M. Handley" initials="M." surname="Handley">
            <organization></organization>
          </author>

          <author fullname="E. Schooler" initials="E." surname="Schooler">
            <organization></organization>
          </author>

          <date month="June" year="2002" />

          <abstract>
            <t>This document describes Session Initiation Protocol (SIP), an
            application-layer control (signaling) protocol for creating,
            modifying, and terminating sessions with one or more participants.
            These sessions include Internet telephone calls, multimedia
            distribution, and multimedia conferences. [STANDARDS-TRACK]</t>
          </abstract>
        </front>

        <seriesInfo name="RFC" value="3261" />

        <format octets="647976"
                target="http://www.rfc-editor.org/rfc/rfc3261.txt" type="TXT" />
      </reference>

      <reference anchor="RFC5245">
        <front>
          <title>Interactive Connectivity Establishment (ICE): A Protocol for
          Network Address Translator (NAT) Traversal for Offer/Answer
          Protocols</title>

          <author fullname="J. Rosenberg" initials="J." surname="Rosenberg">
            <organization></organization>
          </author>

          <date month="April" year="2010" />

          <abstract>
            <t>This document describes a protocol for Network Address
            Translator (NAT) traversal for UDP-based multimedia sessions
            established with the offer/answer model. This protocol is called
            Interactive Connectivity Establishment (ICE). ICE makes use of the
            Session Traversal Utilities for NAT (STUN) protocol and its
            extension, Traversal Using Relay NAT (TURN). ICE can be used by
            any protocol utilizing the offer/answer model, such as the Session
            Initiation Protocol (SIP). [STANDARDS-TRACK]</t>
          </abstract>
        </front>

        <seriesInfo name="RFC" value="5245" />

        <format octets="285120"
                target="http://www.rfc-editor.org/rfc/rfc5245.txt" type="TXT" />
      </reference>

      <reference anchor="RFC4588">
        <front>
          <title>RTP Retransmission Payload Format</title>

          <author fullname="J. Rey" initials="J." surname="Rey">
            <organization></organization>
          </author>

          <author fullname="D. Leon" initials="D." surname="Leon">
            <organization></organization>
          </author>

          <author fullname="A. Miyazaki" initials="A." surname="Miyazaki">
            <organization></organization>
          </author>

          <author fullname="V. Varsa" initials="V." surname="Varsa">
            <organization></organization>
          </author>

          <author fullname="R. Hakenberg" initials="R." surname="Hakenberg">
            <organization></organization>
          </author>

          <date month="July" year="2006" />

          <abstract>
            <t>RTP retransmission is an effective packet loss recovery
            technique for real-time applications with relaxed delay bounds.
            This document describes an RTP payload format for performing
            retransmissions. Retransmitted RTP packets are sent in a separate
            stream from the original RTP stream. It is assumed that feedback
            from receivers to senders is available. In particular, it is
            assumed that Real-time Transport Control Protocol (RTCP) feedback
            as defined in the extended RTP profile for RTCP-based feedback
            (denoted RTP/AVPF) is available in this memo.
            [STANDARDS-TRACK]</t>
          </abstract>
        </front>

        <seriesInfo name="RFC" value="4588" />

        <format octets="76630"
                target="http://www.rfc-editor.org/rfc/rfc4588.txt" type="TXT" />
      </reference>

      <reference anchor="RFC5888">
        <front>
          <title>The Session Description Protocol (SDP) Grouping
          Framework</title>

          <author fullname="G. Camarillo" initials="G." surname="Camarillo">
            <organization></organization>
          </author>

          <author fullname="H. Schulzrinne" initials="H."
                  surname="Schulzrinne">
            <organization></organization>
          </author>

          <date month="June" year="2010" />

          <abstract>
            <t>In this specification, we define a framework to group "m" lines
            in the Session Description Protocol (SDP) for different purposes.
            This framework uses the "group" and "mid" SDP attributes, both of
            which are defined in this specification. Additionally, we specify
            how to use the framework for two different purposes: for lip
            synchronization and for receiving a media flow consisting of
            several media streams on different transport addresses. This
            document obsoletes RFC 3388. [STANDARDS-TRACK]</t>
          </abstract>
        </front>

        <seriesInfo name="RFC" value="5888" />

        <format octets="43924"
                target="http://www.rfc-editor.org/rfc/rfc5888.txt" type="TXT" />
      </reference>

      <reference anchor="RFC5583">
        <front>
          <title>Signaling Media Decoding Dependency in the Session
          Description Protocol (SDP)</title>

          <author fullname="T. Schierl" initials="T." surname="Schierl">
            <organization></organization>
          </author>

          <author fullname="S. Wenger" initials="S." surname="Wenger">
            <organization></organization>
          </author>

          <date month="July" year="2009" />

          <abstract>
            <t>This memo defines semantics that allow for signaling the
            decoding dependency of different media descriptions with the same
            media type in the Session Description Protocol (SDP). This is
            required, for example, if media data is separated and transported
            in different network streams as a result of the use of a layered
            or multiple descriptive media coding process. A
            new grouping type "DDP" -- decoding dependency -- is defined, to
            be used in conjunction with RFC 3388 entitled "Grouping of Media
            Lines in the Session Description Protocol". In addition, an
            attribute is specified describing the relationship of the media
            streams in a "DDP" group indicated by media identification
            attribute(s) and media format description(s).
            [STANDARDS-TRACK]</t>
          </abstract>
        </front>

        <seriesInfo name="RFC" value="5583" />

        <format octets="40214"
                target="http://www.rfc-editor.org/rfc/rfc5583.txt" type="TXT" />
      </reference>

      <reference anchor="RFC4796">
        <front>
          <title>The Session Description Protocol (SDP) Content
          Attribute</title>

          <author fullname="J. Hautakorpi" initials="J." surname="Hautakorpi">
            <organization></organization>
          </author>

          <author fullname="G. Camarillo" initials="G." surname="Camarillo">
            <organization></organization>
          </author>

          <date month="February" year="2007" />

          <abstract>
            <t>This document defines a new Session Description Protocol (SDP)
            media- level attribute, 'content'. The 'content' attribute defines
            the content of the media stream to a more detailed level than the
            media description line. The sender of an SDP session description
            can attach the 'content' attribute to one or more media streams.
            The receiving application can then treat each media stream
            differently (e.g., show it on a big or small screen) based on its
            content. [STANDARDS-TRACK]</t>
          </abstract>
        </front>

        <seriesInfo name="RFC" value="4796" />

        <format octets="22886"
                target="http://www.rfc-editor.org/rfc/rfc4796.txt" type="TXT" />
      </reference>
      <reference anchor="RFC4756">
        <front>
          <title>Forward Error Correction Grouping Semantics in Session
          Description Protocol</title>

          <author fullname="A. Li" initials="A." surname="Li">
            <organization></organization>
          </author>

          <date month="November" year="2006" />

          <abstract>
            <t>This document defines the semantics that allow for grouping of
            Forward Error Correction (FEC) streams with the protected payload
            streams in Session Description Protocol (SDP). The semantics
            defined in this document are to be used with "Grouping of Media
            Lines in the Session Description Protocol" (RFC 3388) to group
            together "m" lines in the same session. [STANDARDS-TRACK]</t>
          </abstract>
        </front>

        <seriesInfo name="RFC" value="4756" />

        <format octets="12743"
                target="http://www.rfc-editor.org/rfc/rfc4756.txt" type="TXT" />
      </reference>

      <reference anchor="RFC5956">
        <front>
          <title>Forward Error Correction Grouping Semantics in the Session
          Description Protocol</title>

          <author fullname="A. Begen" initials="A." surname="Begen">
            <organization></organization>
          </author>

          <date month="September" year="2010" />

          <abstract>
            <t>This document defines the semantics for grouping the associated
            source and FEC-based (Forward Error Correction) repair flows in
            the Session Description Protocol (SDP). The semantics defined in
            this document are to be used with the SDP Grouping Framework (RFC
            5888). These semantics allow the description of grouping
            relationships between the source and repair flows when one or more
            source and/or repair flows are associated in the same group, and
            they provide support for additive repair flows. SSRC-level
            (Synchronization Source) grouping semantics are also defined in
            this document for Real-time Transport Protocol (RTP) streams using
            SSRC multiplexing. [STANDARDS-TRACK]</t>
          </abstract>
        </front>

        <seriesInfo name="RFC" value="5956" />

        <format octets="29530"
                target="http://www.rfc-editor.org/rfc/rfc5956.txt" type="TXT" />
      </reference>

      <reference anchor="RFC5576">
        <front>
          <title>Source-Specific Media Attributes in the Session Description
          Protocol (SDP)</title>

          <author fullname="J. Lennox" initials="J." surname="Lennox">
            <organization></organization>
          </author>

          <author fullname="J. Ott" initials="J." surname="Ott">
            <organization></organization>
          </author>

          <author fullname="T. Schierl" initials="T." surname="Schierl">
            <organization></organization>
          </author>

          <date month="June" year="2009" />

          <abstract>
            <t>The Session Description Protocol (SDP) provides mechanisms to
            describe attributes of multimedia sessions and of individual media
            streams (e.g., Real-time Transport Protocol (RTP) sessions) within
            a multimedia session, but does not provide any mechanism to
            describe individual media sources within a media stream. This
            document defines a mechanism to describe RTP media sources, which
            are identified by their synchronization source (SSRC) identifiers,
            in SDP, to associate attributes with these sources, and to express
            relationships among sources. It also defines several source-level
            attributes that can be used to describe properties of media
            sources. [STANDARDS-TRACK]</t>
          </abstract>
        </front>

        <seriesInfo name="RFC" value="5576" />

        <format octets="40454"
                target="http://www.rfc-editor.org/rfc/rfc5576.txt" type="TXT" />
      </reference>

      <reference anchor="RFC4583">
        <front>
          <title>Session Description Protocol (SDP) Format for Binary Floor
          Control Protocol (BFCP) Streams</title>

          <author fullname="G. Camarillo" initials="G." surname="Camarillo">
            <organization></organization>
          </author>

          <date month="November" year="2006" />

          <abstract>
            <t>This document specifies how to describe Binary Floor Control
            Protocol (BFCP) streams in Session Description Protocol (SDP)
            descriptions. User agents using the offer/answer model to
            establish BFCP streams use this format in their offers and
            answers. [STANDARDS-TRACK]</t>
          </abstract>
        </front>

        <seriesInfo name="RFC" value="4583" />

        <format octets="24150"
                target="http://www.rfc-editor.org/rfc/rfc4583.txt" type="TXT" />
      </reference>
    </references>
  </back>
</rfc>
