Networking Working Group Q.Wu
R.Huang
Internet Draft Huawei
Intended status: Informational September 27, 2010
Expires: March 2011
Problem Statement for HTTP Streaming
draft-wu-http-streaming-optimization-ps-02.txt
Abstract
HTTP Streaming breaks live or stored content into a series of
chunks/fragments and supplies them in order to the client. However,
streaming long-duration, high-quality media over the Internet
presents several challenges when the client is expected to access
the same media content with a common quality of experience on any
device, anytime, anywhere. This document explores problems inherent
in HTTP streaming. Several issues regarding network support for HTTP
Streaming are raised, including QoS guarantees for streaming video
over the Internet, efficient delivery, network-controlled
adaptation, and real-time streaming media synchronization support.
Status of this Memo
This Internet-Draft is submitted to IETF in full conformance with
the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on March 27, 2011.
Copyright Notice
Copyright (c) 2010 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
Wu Expires March 27, 2011 [Page 1]
Internet-Draft PS for HTTP Streaming September 2010
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
This document may contain material from IETF Documents or IETF
Contributions published or made publicly available before November
10, 2008. The person(s) controlling the copyright in some of this
material may not have granted the IETF Trust the right to allow
modifications of such material outside the IETF Standards Process.
Without obtaining an adequate license from the person(s) controlling
the copyright in such materials, this document may not be modified
outside the IETF Standards Process, and derivative works of it may
not be created outside the IETF Standards Process, except to format
it for publication as an RFC or to translate it into languages other
than English.
Table of Contents
1. Introduction.................................................3
1.1. Why HTTP Streaming......................................4
2. Terminology and Concept......................................5
3. Scope and Existing Work......................................5
3.1. Media Fragments URI.....................................5
3.2. Media Presentation Description..........................5
3.3. Playback Control on media fragments.....................6
3.4. Server Push.............................................6
3.5. Scope of the problem....................................6
4. Applicability Statement......................................7
5. System Overview..............................................7
5.1. Server Components.......................................7
5.1.1. Media Encoder......................................8
5.1.2. Streaming Segmenter................................8
5.2. Distribution Components.................................8
5.3. Client Components.......................................8
6. Deployment Scenarios for HTTP Streaming Optimization.........9
6.1. HTTP Streaming Push model without Distribution Server
involvement..................................................9
6.2. HTTP Streaming Pull model without Distribution Server
involvement..................................................9
6.3. HTTP Streaming Push model with Distribution Server
involvement.................................................10
6.4. HTTP Streaming Pull model with Distribution Server
involvement.................................................11
7. Aspects of Problem...........................................11
7.1. Over-Utilization of Resources..........................12
7.2. Inefficient Streaming Content Delivery.................13
7.3. Inadequate Streaming Playback Control..................13
7.4. Lacking Streaming Monitoring and Feedback Support......14
7.5. No QoS/QoE guaranteed..................................15
7.6. Lacking Streaming media Synchronization support........15
7.6.1. Push model........................................15
7.6.2. Pull model........................................15
8. Streaming Session State Control.............................16
9. Analysis of different use cases.............................17
9.1. Live Streaming Media broadcast.........................17
9.2. RTP to HTTP Gateway....................................17
9.3. "Multi-Screen" Service Delivery........................18
9.4. Heterogeneous Handover.................................18
9.5. Time Shifted Playback..................................19
9.6. Content Publishing.....................................19
10. Security Consideration.....................................19
10.1. Streaming Content Protection..........................19
11. References.................................................19
11.1. Normative References..................................19
11.2. Informative References................................20
1. Introduction
A streaming service is the transmission of data over a network as a
steady, continuous stream, allowing playback to proceed while
subsequent data is being received; it may use multiple transport
protocols for data delivery. HTTP streaming refers to a streaming
service in which the HTTP protocol is used as the basic transport
for media data. One example of HTTP streaming is progressive-download
streaming, which allows the user to access content over existing
infrastructure before the data transfer is complete.
Since HTTP streaming uses existing HTTP (i.e., HTTP 1.1) as its data
transport, and HTTP operates over TCP, it is much more likely to
suffer major packet drop-outs and greater delay: TCP keeps trying to
resend a lost packet before sending anything further. One way to
reduce such major packet drop-outs is to introduce a media
segmentation capability in the network behind the media encoder,
i.e., using a segmenter to split the input streaming media into a
series of small chunks while creating a manifest file containing a
reference to each chunk. Allowing such streaming media segmentation
can mitigate long delays and interruptions during streaming content
playout.
With media segmentation support, existing streaming technology (e.g.,
progressive-download streaming) is characterized as:
- Client-based pull schemes that rely on the client to handle
buffering and playback during download.
- No network support, i.e., no special server is required other than
a standard HTTP server.
However, streaming long-duration, high-quality media over the
Internet presents several unique challenges when no network
capabilities are available for HTTP Streaming:
- Client polling for each new chunk of data using HTTP requests is
not an efficient way to deliver high-quality video content across
the Internet.
- The segmentation capability over-utilizes CPU and bandwidth
resources, which may not be a desirable and effective way to improve
the quality of streaming media delivery.
- Lacking QoS guarantees on the packet-switched Internet, the
quality of Internet media streaming may significantly degrade as
usage rises.
- Traffic may experience burstiness or other dynamic changes due to
bandwidth fluctuations and heterogeneous handover.
- It is impossible to fast-forward through any part of the streaming
content until it is stored on the user's device.
Given these challenges, the typical user experience with existing
streaming schemes can be limited by delayed startup, poor quality,
buffering delays, and inadequate playback control, and these
limitations are only more pronounced over slower connections.
This document explores problems inherent in HTTP streaming. Several
issues regarding network support for HTTP Streaming are raised,
including QoS guarantees for streaming video over the Internet,
efficient delivery, network-controlled adaptation, and real-time
streaming media synchronization support. The following sections
define the scope of this document, describe related work, and list
the symptoms and the underlying problems.
1.1. Why HTTP Streaming
As the HTTP protocol is widely used on the Internet as a data
transport, it has been employed extensively for the delivery of
multimedia content. A significant part of the Internet traffic
formerly generated by Peer-to-Peer (P2P) applications has been
eclipsed by streaming, CDNs, and direct download. Another trend is
that the growing popularity of connected devices such as
smartphones, TVs, PCs, and tablets is raising interest in
multi-screen services that let consumers access the same media
content, with the same quality of experience (QoE), on any device,
anytime and anywhere. Since almost all connected devices have
browser support, but not all of them can afford the high CPU load
and battery drain that TVs or PCs can, HTTP streaming is clearly a
good choice for supporting multi-screen video delivery.
2. Terminology and Concept
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT","SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
this document are to be interpreted as described in [RFC2119].
Pull model: The model in which the client keeps pulling data
packets from the server.
Push model: The model in which the server keeps pushing data
packets to the client.
3. Scope and Existing Work
This section describes existing related work and defines the scope of
the problem.
3.1. Media Fragments URI
The W3C Media Fragments Working Group extends URIs as defined in
[RFC3986] and specifies new semantics for URI fragments and URI
queries [Media Fragments] that are used to identify media fragments.
The client can use such a Media Fragments URI component to retrieve
one fragment after another from the server. However, this component
is not extensible to convey further important streaming information
about bandwidth utilization, quality control, and buffer management.
It is therefore a big challenge to use the existing infrastructure
with this component to deliver streaming content with QoS/QoE
guaranteed.
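For illustration, the Media Fragments temporal dimension addresses a
sub-range of a resource directly in the URI fragment, e.g., #t=10,20
for seconds 10 through 20. A minimal client-side parsing sketch
(illustrative only; it handles just the plain npt-seconds form, and
the fragment is interpreted locally, since a URI fragment is never
sent to the server):

```python
from urllib.parse import urlsplit

def parse_temporal_fragment(uri: str):
    """Parse a Media Fragments temporal fragment such as
    http://example.com/video.mp4#t=10,20 -> (10.0, 20.0).
    Only the plain seconds form of npt time is handled here."""
    fragment = urlsplit(uri).fragment          # text after '#'
    if not fragment.startswith("t="):
        return None                            # no temporal fragment
    start, _, end = fragment[2:].partition(",")
    return (float(start) if start else 0.0,
            float(end) if end else None)

# A client issues a normal HTTP GET for the resource and then seeks
# locally to the parsed range.
```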
3.2. Media Presentation Description
[I.D-pantos-http-live-streaming] formally defines a media
presentation format by extending M3U Playlist files and defining
additional tags. 3GPP TS 26.234 also centers on a media presentation
format and specifies the semantics of the Media Presentation
Description for HTTP Adaptive Streaming [TS 26.234], which contains
the metadata required by the client (e.g., a smartphone) to
construct appropriate URIs [RFC3986] to access segments and to
provide the streaming service to the user. We refer to this media
presentation description as the playlist component. With such a
component, the client can poll for new chunks of data one by one.
However, without a client request using HTTP, the server will not
push new data to the client; it is therefore not efficient to rely
on client polling to deliver high-quality streaming content across
the Internet, especially when bitrate switching occurs frequently or
bandwidth fluctuates frequently.
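The polling pattern described above can be sketched as follows
(illustrative only: fetch_playlist and fetch_segment stand in for
HTTP GETs of the playlist and segment resources, and the playlist is
modeled as a plain list of segment names rather than a real media
presentation description):

```python
def poll_once(fetch_playlist, fetch_segment, seen):
    """One polling round: re-fetch the playlist, request any segment
    not yet downloaded, and record it.  Returns the new segments."""
    new = []
    for name in fetch_playlist():
        if name not in seen:
            new.append(fetch_segment(name))
            seen.add(name)
    return new

# Simulated server state for illustration.
playlist = ["seg1.ts", "seg2.ts"]
downloaded = set()
first = poll_once(lambda: playlist, lambda n: n, downloaded)
playlist.append("seg3.ts")           # server publishes a new chunk
second = poll_once(lambda: playlist, lambda n: n, downloaded)
# Each new chunk costs the client a full playlist round trip plus a
# segment round trip, which is the inefficiency this section notes.
```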
3.3. Playback Control on media fragments
The W3C HTML5 Working Group has incorporated video playback features
into the HTML5 specification, which we refer to as local playback
control. Such local playback capability previously depended on
third-party browser plug-ins. The HTML5 specification now lifts
video playback out of the generic <object> element and puts it into
the specialized <video> element. With such playback control support,
implementors can choose to create their own controls with plain
HTML, CSS, and JavaScript. However, this playback control cannot be
used to control streaming content that has not been downloaded to
the browser client.
Another example of playback control is the trick-mode support
specified in 3GPP: the client can pause playback by simply
withholding requests for new media segments and resume playback by
sending a new data request. However, this capability also relies on
the streaming content and playlist stored at the browser client. It
is impossible to fast-forward through any part of the streaming
content.
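The 3GPP-style trick mode described above amounts to gating the
client's request loop; a minimal sketch (fetch_next stands in for a
real segment request):

```python
class SegmentPuller:
    """Pause/resume by gating requests for new segments, as in the
    3GPP trick-mode example above."""
    def __init__(self, fetch_next):
        self.fetch_next = fetch_next
        self.paused = False

    def tick(self):
        """Issue one segment request unless playback is paused."""
        if self.paused:
            return None        # paused: simply stop asking for data
        return self.fetch_next()

requests = []
puller = SegmentPuller(
    lambda: requests.append("GET next segment") or len(requests))
puller.tick()                  # playing: a request goes out
puller.paused = True
puller.tick()                  # paused: no request is issued
puller.paused = False
puller.tick()                  # resumed: requests continue
```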
3.4. Server Push
The W3C Server-Sent Events specification defines an API for opening
an HTTP connection to receive push notifications from a server.
However, no server-push protocol has been defined in the IETF that
could work with the Server-Sent Events API developed by the W3C. The
IETF HyBi working group specifies the WebSocket protocol; as
complementary work, the W3C specifies the WebSocket API. This
WebSocket technology provides two-way communication with servers
without relying on opening multiple HTTP connections. However, it
lacks the capability to push real-time streaming data from the
server side to the client.
3.5. Scope of the problem
TBC.
4. Applicability Statement
HTTP Streaming can be used on TCP port 80 or 8080, and traffic to
those ports is usually allowed through firewalls; therefore, an HTTP
Streaming optimization mechanism can be applied if the client is
behind a firewall that only allows HTTP traffic.
HTTP Streaming may also be appropriate if the client sends feedback
to the server that may cause the multimedia data that is being
transmitted to change or cause the transmission rate to change.
Furthermore, HTTP Streaming may be appropriate if the client must
perform "trick-mode operations" on the multimedia data and prefers
the server to execute trick modes on its behalf. The term "trick-mode
operation" refers to operations like fast-forwarding and rewinding
the data, pausing the transmission, or seeking a different position
in the multimedia data stream.
5. System Overview
+--------------+
|     HTTP     |
|  Streaming   |
|    Server    |
| +----------+ |
| |  Media   | |
| | Encoder  | |
| +----------+ |
|      |       |      +--------------+       +-----------+
|      v       |----->| Distribution |------>|   HTTP    |
| +----------+ |      |    Server    |       | Streaming |
| |Streaming | |<-----|              |<------|  Client   |
| |Segmenter | |      +--------------+       +-----------+
| +----------+ |
+--------------+
Figure 1: Reference Architecture for HTTP Streaming
Figure 1 shows the reference architecture for HTTP Streaming. The
architecture comprises the following components:
5.1. Server Components
The HTTP Streaming Server is the entity that responds to the HTTP
connection. It ingests streams from the encoder, breaks the encoded
media into segments, maintains all the information for the live
stream, and handles client requests. The HTTP Streaming Server
comprises two key components, as follows.
5.1.1. Media Encoder
The encoder is the entity that prepares streaming content for
transmission. It can take in live source feeds from an audio-video
device, encode the media, and encapsulate it in specific streaming
formats for delivery.
5.1.2. Streaming Segmenter
The stream segmenter is a process that reads the streaming media
from the media encoder and divides it into a series of small media
files of equal duration. Even though each segment is in a separate
file, the files are cut from a continuous stream, which can
therefore be reconstructed seamlessly.
The segmenter also creates an index file containing references to
the individual media files. Each time the segmenter completes a new
media file, the index file is updated. The index is used to track
the availability and location of the media files. The segmenter may
also encrypt each media segment and create a key file as part of the
process.
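A minimal sketch of the segmenter's bookkeeping (illustrative only:
the input is modeled as an in-memory byte stream cut at fixed sizes,
and the index is a simple list of records; a real segmenter cuts on
media sample boundaries of equal duration and writes a
playlist-format index file):

```python
def segment_stream(data: bytes, chunk_size: int):
    """Split a byte stream into fixed-size chunks and build an index.

    Returns (segments, index), where segments maps a generated file
    name to its bytes and index holds one record per segment."""
    segments, index = {}, []
    for seq, offset in enumerate(range(0, len(data), chunk_size)):
        chunk = data[offset:offset + chunk_size]
        name = f"media-{seq:05d}.ts"          # hypothetical naming
        segments[name] = chunk
        index.append({"name": name, "offset": offset,
                      "len": len(chunk)})
    return segments, index

segments, index = segment_stream(b"x" * 2500, 1000)
# Concatenating the segments in index order reconstructs the
# continuous stream seamlessly, as the section notes.
restored = b"".join(segments[e["name"]] for e in index)
```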
5.2. Distribution Components
The distribution system is the entity located between the HTTP
Streaming Server and the Streaming Client; examples are a web server
or a web caching system. The distribution system can be used to
deliver the media files and index files to the client over HTTP. It
can also offload stream requests from the server by using caches and
facilitate forwarding the streams to the client.
5.3. Client Components
The HTTP Streaming Client is the entity that initiates the HTTP
connection. The client is responsible for fetching the index file,
the media streams in chunks, and the encryption keys.
6. Deployment Scenarios for HTTP Streaming Optimization
The deployment scenarios are outlined in the following sections.
They are discussed to aid understanding of the overall problems of
HTTP streaming content delivery. In HTTP Streaming, although the
initial request and the commands always come from the client, we
focus here on the data delivery part. Different models can be
defined depending on:
o whether the Distribution Server is involved in HTTP Streaming
o who initiates data delivery
6.1. HTTP Streaming Push model without Distribution Server involvement
In this case, data is exchanged between the HTTP Streaming Server
and the HTTP Streaming Client; the Distribution Server is not
involved in this process. Streaming content flows from the server to
the client. The server keeps pushing the latest data packets to the
client, and the client just passively receives everything. We
therefore refer to this as push-mode HTTP streaming.
+-----------+                              +-----------+
|   HTTP    |            Push              |   HTTP    |
| Streaming |----------------------------->| Streaming |
|  Server   |        HTTP Streaming        |  Client   |
+-----------+                              +-----------+
Figure 2: Push model for HTTP Streaming
6.2. HTTP Streaming Pull model without Distribution Server involvement
As before, data is exchanged between the HTTP Streaming Server and
the HTTP Streaming Client; the Distribution Server is not involved
in this process. In this scenario, however, the client pulls the
fragments one after another by issuing fragment requests, one for
each fragment. The server then needs to either reply with data
immediately or fail the request.
+-----------+             Pull             +-----------+
|   HTTP    |<-----------------------------|   HTTP    |
| Streaming |        HTTP Streaming        | Streaming |
|  Server   |----------------------------->|  Client   |
+-----------+                              +-----------+
Figure 3: Pull model for HTTP Streaming
6.3. HTTP Streaming Push model with Distribution Server involvement
In this case, data is exchanged between the HTTP Streaming Server,
the Distribution Server, and the HTTP Streaming Client. The
Distribution Server, with HTTP cache support, is located between the
HTTP Streaming Server and the HTTP Streaming Client and is involved
in this process. The HTTP Streaming Server keeps pushing the latest
data packets to the client; meanwhile, it also pushes the data
packets to the Distribution Server for caching. When a new client
requests the same data packets as those pushed to the previous
client, and the requested data packets are cached on the
Distribution Server, the Distribution Server can terminate this
request on behalf of the HTTP Streaming Server and push its cached
copy of the requested data to the new client.
+-----------+       +--------------+       +-----------+
|   HTTP    |       | Distribution |       |   HTTP    |
| Streaming |       |    Server    |       | Streaming |
|  Server   |       | (HTTP Cache) |       |  Client   |
+-----------+       +--------------+       +-----------+
      |                    |                     |
      |          Push (HTTP Streaming)          |
      |------------------------------------------>
      |  Push              |                     |
      |------------------->|                     |
      |                    |     HTTP Request    |
      |                    |<++++++++++++++++++++|
      |                    |        Push         |
      |                    |--------------------->
Figure 4: Push model for HTTP Streaming
6.4. HTTP Streaming Pull model with Distribution Server involvement
As before, data is exchanged between the HTTP Streaming Server, the
Distribution Server, and the HTTP Streaming Client. The Distribution
Server has HTTP cache support. In this scenario, however, the client
issues the fragment request to the Distribution Server or the HTTP
Streaming Server. The Distribution Server may process the fragment
request on behalf of the HTTP Streaming Server. When the fragment is
not cached on the Distribution Server, the Distribution Server may
fail this request; meanwhile, it pulls the fragment from the HTTP
Streaming Server, caches the data, and waits for subsequent requests
for this fragment from clients.
+-----------+       +--------------+       +-----------+
|   HTTP    |       | Distribution |       |   HTTP    |
| Streaming |       |    Server    |       | Streaming |
|  Server   |       | (HTTP Cache) |       |  Client   |
+-----------+       +--------------+       +-----------+
      |                    |                     |
      |   Pull             |     HTTP Request    |
      |<-------------------|<++++++++++++++++++++|
      |   HTTP Streaming   |    HTTP Streaming   |
      |------------------->|--------------------->
      |                    |                     |
Figure 5: Pull model for HTTP Streaming
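The caching behavior in this and the previous scenario can be
sketched as a simple pull-through cache (illustrative only:
origin_fetch stands in for the HTTP request to the HTTP Streaming
Server, and this sketch pulls synchronously, whereas the text above
notes that a real Distribution Server may instead fail the first
request and fetch the fragment in the background):

```python
class PullThroughCache:
    """Serve a fragment from cache if present; otherwise pull it
    from the origin, cache it, and serve later requests locally."""
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch
        self.store = {}
        self.origin_hits = 0

    def get(self, fragment_id):
        if fragment_id not in self.store:
            self.origin_hits += 1          # cache miss: go to origin
            self.store[fragment_id] = self.origin_fetch(fragment_id)
        return self.store[fragment_id]

cache = PullThroughCache(lambda fid: f"data-for-{fid}")
first = cache.get("frag-7")    # miss: pulled from the origin
second = cache.get("frag-7")   # hit: served from the cache
```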
7. Aspects of Problem
Real-time streaming services are superior at handling thousands of
concurrent streams simultaneously, e.g., with flexible responses to
network congestion, efficient bandwidth utilization, and
high-quality performance. However, existing work on HTTP-based
streaming requires nothing from the network and is not up to these
challenges.
7.1. Over-Utilization of Resources
Streaming begins with preparing the content for delivery over the
Internet. The process of encoding content for streaming over the
Internet is extremely complicated and demands extensive CPU power,
which can be very expensive in terms of equipment, resources, and
codecs. Streaming services also tend to over-utilize CPU and
bandwidth resources to provide better service to end users, which
may not be a desirable and effective way to improve the quality of
streaming media delivery; in the worst case, the media server may
not have enough bandwidth to support all of the client connections.
When CPU resources are exhausted or insufficient, the encoding
algorithm must sacrifice/downgrade quality so that the process can
keep pace with rendering the live content for viewing. When the
encoding process is not fully functional and flexible, the content
owner or encoder is forced to limit quality or the viewing
experience in order to support live streams. For non-scalable
encoding, when MBR (i.e., Multiple Bit Rate) encoding is supported,
the encoder usually generates multiple streams with different bit
rates for the same media content and encapsulates all these streams
together, which needs additional processing capability and possibly
large storage; in the worst case, it may cause the streaming session
to suffer various quality downgrades, e.g., switching from a
high-bit-rate stream to a low-bit-rate stream, or rebuffering when
the MBR functionality is poorly utilized. Scalable encoding provides
a scalable representation with layered bit streams that decode at
different bit rates, so that rate control can be performed to
mitigate network congestion. However, streaming applications that
employ layered coding are sensitive to transmission losses,
especially losses of base-layer packets, because the base layer
represents the most critical part of the scalable representation.
Apart from the consequences of CPU and bandwidth over-utilization
discussed above, there are two additional undesirable effects:
o HTTP is sent over TCP and only supports unicast, which may
increase processing overhead by 30% compared with multicast
transmission.
o HTTP relies on multiple connections for concurrency, which causes
additional round trips for connection setup.
7.2. Inefficient Streaming Content Delivery
HTTP is not a streaming protocol, but it can be used to distribute
small chunked contents in order, i.e., to transmit media content
relying on time-based operation. Since HTTP streaming operates over
TCP, it is much more likely to suffer major packet drop-outs and
greater delay: TCP keeps trying to resend a lost packet before
sending anything further. HTTP streaming protocols thus suffer from
the inefficiencies of TCP-based communication and are not well
suited for delivering nearly the same number of streams as UDP or
RTSP transmission. When network congestion happens, transport may
degrade due to poor communication between client and server or slow
server response to transmission-rate changes.
Another major issue that plagues HTTP streaming is client polling
for each new chunk of data. Such a client-polling scheme using HTTP
requests is not efficient for delivering high-quality streaming
video content across the Internet.
7.3. Inadequate Streaming Playback Control
Playback control lets the user interact with streaming content to
control presentation operations (e.g., fast forward, rewind, scrub,
time-shift, or play in slow motion). RTSP streaming provides such
capability to control and navigate the streaming session while the
client receives the streaming content. Unlike RTSP streaming,
current HTTP streaming technologies do not provide the playback
control that users are accustomed to with DVD or television viewing,
which significantly impacts the viewing experience.
This also has the following undesirable effects:
o When the user requests media fragments corresponding to a new time
index in the content, and the media fragments from that point
forward, the client has no way to change the time position for
playback and select another stream for rendering with acceptable
quality.
o The user cannot seek through media content while viewing the
content with acceptable quality.
o When the user requests to watch the relevant fragments, rather
than having to watch the full video and manually scroll to the
relevant fragments, the client has no way to jump to another point
within the media clip or between media fragments with acceptable
quality (i.e., random access).
o When the media content the user is watching is a live stream that
must be interrupted in the middle, e.g., when the user takes a phone
call, the client has no way to pause and later resume the streaming
session with acceptable quality.
o When the user begins to watch the content at a new time point, if
the media fragments retrieved when changing position require the
same quality as the media fragments currently being played, the
result is a poor user experience with longer startup latency.
o When the content is available in different formats corresponding
to terminal capabilities and user preferences, the client has no
capability to select the format in which the content will be
streamed.
o When the user does not have time to watch all the streaming
content and wants to skip the trivial parts and jump to the key
parts, the client provides no capability for selective preview or
navigation control.
o When the server wants to replace the currently transmitted video
stream with a lower-bit-rate version of the same stream, the server
has no capability to notify the client.
7.4. Lacking Streaming Monitoring and Feedback Support
The usage of streaming media on the web is rapidly increasing. To
provide a high-quality service for the user, monitoring and
analyzing the system's overall performance is extremely important,
since performance monitoring can help diagnose potential network
impairments, facilitate root-cause analysis, and verify compliance
with service level agreements (SLAs) between Internet Service
Providers (ISPs) and content providers.
Current HTTP streaming technology fails to give the server feedback
about the experience the user actually had while watching a
particular video. This is because the client controls all processes,
and it is impossible to track everything from the server side.
Consequently, the server may be paying to stream content that is
rarely or never watched. Alternatively, the server may have a video
that continually fails to start, or content that rebuffers
continually, but the content owner or encoder receives none of this
information because there is no way to track it.
It is therefore desirable to allow the server to view detailed
statistics using the system's extensive network, quality, and usage
monitoring capabilities. These detailed statistics can take the form
of real-time quality-of-service metrics.
7.5. No QoS/QoE guaranteed
Due to the lack of QoS/QoE guarantees on the packet-switched
Internet, the quality of Internet media streaming may significantly
degrade as usage rises. Internet traffic generated by HTTP streaming
may also experience burstiness or other dynamic changes due to
bandwidth fluctuations and heterogeneous handover.
7.6. Lacking Streaming media Synchronization support
7.6.1. Push model
In the push model, the client just passively accepts what the server
pushes out and always knows how the live stream is progressing.
However, if the client's clock runs slower than the encoder's clock,
buffer overflow will happen, i.e., the client does not consume
samples as fast as the encoder produces them. As samples get pushed
to the client, more and more get buffered, and the buffer size keeps
growing over time. This can cause the client to slow down packet
processing and eventually run out of memory. On the other hand, if
the client's clock runs faster than the encoder's clock, the client
has to either keep re-buffering or tune down its clock. To detect
this case, the client needs to distinguish this condition from
others that can also cause buffer underflow, e.g., network
congestion. This determination is often difficult to implement in a
valid and authoritative manner. The client would need to run
statistics over an extended period of time to detect a pattern most
likely caused by clock drift rather than something else. Even then,
false detection can still happen.
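One way to look for the drift pattern described above is to test
whether the client's buffer depth grows or shrinks roughly linearly
over an extended window, as opposed to the erratic dips caused by
congestion. A hedged sketch, assuming buffer depth is sampled at
fixed intervals:

```python
def drift_slope(samples):
    """Least-squares slope of buffer depth over equally spaced
    samples.  A steady positive slope suggests the encoder's clock
    runs faster than the client's (the buffer keeps growing); a
    steady negative slope suggests the opposite."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

growing = [1.0, 1.2, 1.4, 1.6, 1.8]   # seconds of buffered media
steady  = [1.0, 1.1, 0.9, 1.0, 1.0]
# As the text warns, this heuristic can still misfire; a real client
# would gather statistics over a much longer period.
```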
7.6.2. Pull model
In the pull model, the client is the one who initiates all the
fragment requests and it needs to know the right timing information
for each fragment in order to do the right scheduling [Smooth
Streaming]. Given that the server is stateless in the pull model
and the client could communicate with any server for the same
streaming session, it has become more challenging. The solution is
Wu Expires March 27, 2011 [Page 15]
Internet-Draft PS for HTTP Streaming September 2010
to always rely on the encoder's clock for computing timing
information for each fragment and design a timing mechanism that's
stateless and cacheable.
With the pull model for HTTP Streaming, the client drives all the
requests and will only request the fragments that it needs and can
handle. In other words, the client's buffer is always synchronized
to the client's clock and never gets out of control. Most existing
streaming schemes today are based on the pull model. However, a side
effect of clock drift in this model is that the client can slowly
fall behind, especially when transitioning from a "live" client to a
DVR client (playing something stored in the past).
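A stateless, cacheable timing mechanism of the kind described above
reduces to simple arithmetic on the encoder's clock. The fragment
duration and URL template below are hypothetical (loosely modeled on
the Smooth Streaming fragment-request style), not taken from any
specification:

```python
FRAGMENT_DURATION_MS = 2000  # fixed fragment length agreed in the manifest

def fragment_for_time(live_edge_ms, position_ms=None):
    """Map a playback position to a cacheable fragment request.

    All timing derives from the encoder's clock (live_edge_ms is the
    encoder timestamp of the live edge). The fragment index is pure
    arithmetic, so any server, or any intermediate cache, resolves
    the same URL to the same bytes; no per-client state is needed.
    The URL template is an illustrative assumption.
    """
    t = live_edge_ms if position_ms is None else position_ms
    index = t // FRAGMENT_DURATION_MS
    start = index * FRAGMENT_DURATION_MS
    url = f"/live/stream/Fragments(video={start})"
    return index, start, url
```

Because the URL is a pure function of the encoder timestamp, a "live"
client and a DVR client issue requests of exactly the same shape.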
8. Streaming Session State Control
In the push model, client state is managed by both the client and the
server [Smooth Streaming]. The server keeps a record of each client,
covering things such as playback state, streaming position, and
selected bit rate (if multiple bit rates are supported). While this
gives the streaming server more control, it also adds overhead to the
server. More importantly, each client has to maintain server
affinity throughout the streaming session, which limits scalability
and creates a single point of failure. If a client request is
rerouted by a load balancer to another server in the middle of a
streaming session, there is a high probability that the request will
fail. This limitation creates big challenges in server scalability
and management for Content Delivery Networks (CDNs) and server farms.
In the pull model, the client is solely responsible for maintaining
its own state [Smooth Streaming]. In turn, the server is stateless:
any client request (fragment or manifest) can be satisfied by any
server that is configured for the same live content, and the network
can freely reroute client requests to whichever server is best for
the client, which benefits load balancing. From the server's
perspective, all client requests are equal. It does not matter
whether they come from the same client or from multiple clients,
whether the clients are in live mode or DVR mode, which bit rate they
are trying to play, or whether they are trying to switch bit rates.
They are all just fragment requests, and the server's job is to
manage and deliver the fragments in the most efficient way. Unlike
some other implementations, the HTTP Streaming server's job is simply
to keep all the content readily available to empower the client's
decisions, and to present the client with a semantically consistent
picture. This has two benefits: (1) the feedback loop is much
smaller because the client makes all the
decisions, resulting in a much faster response (e.g. bit rate
switching), and (2) it makes the server very lean and fast.
Note that the division of responsibilities between the server and the
client changes in the pull model. The server focuses on delivering
and managing fragments with the best possible performance and
scalability, while the client focuses on ensuring a smooth
streaming/playback experience, which is a much better fit for
large-scale online video.
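From the server side, the statelessness described above means every
request carries everything needed to answer it. The sketch below
shows what such a handler reduces to, with an in-memory fragment
table standing in for real storage (all names are illustrative
assumptions, not part of any protocol):

```python
# In-memory stand-in for fragment storage: (track, start_ms) -> bytes.
FRAGMENTS = {
    ("video", 0): b"frag-0",
    ("video", 2000): b"frag-2000",
}

def handle_request(track, start_ms):
    """Serve one fragment request with no session state.

    Identical requests, whether from one client or many, in live or
    DVR mode, get identical and therefore cacheable answers; the
    server never needs to know which streaming session a request
    belongs to. Returns an (HTTP status, body) pair.
    """
    data = FRAGMENTS.get((track, start_ms))
    if data is None:
        return 404, b""      # unknown fragment: nothing to recover
    return 200, data
```

Because the handler holds no per-client state, a load balancer can
send any request to any server configured for the same content.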
9. Analysis of different use cases
9.1. Live Streaming Media Broadcast
Today, live video streaming technologies are widely used for
broadcasting news, connecting friends and relatives in online chat
rooms, conducting business online face to face, selling products and
services, teaching online courses, monitoring properties, showing
movies online, and so on. However, when live broadcast is chosen to
deliver high-quality live video streaming content over the Internet,
the lack of Quality of Service guarantees on the packet-switching
based Internet means that the quality of Internet media streaming may
degrade significantly with rising usage and bandwidth fluctuation.
Another issue is that, due to dynamics in media server load and
network bandwidth, a client may experience a prolonged startup delay.
In addition, the filling of the client play-out buffer, which is used
to smooth jitter caused by network bandwidth fluctuation, further
increases the user's waiting time. For viewers, watching a live
video online can therefore easily turn into a frustrating exercise in
figuring out what is being shown in a tiny picture box or in a large,
fuzzy picture box, or an impatient wait for intermittent signals to
resume or for video and audio to be synced.
9.2. RTP to HTTP Gateway
Multicast audio and video streams are commonplace today in certain
parts of the Internet. The vast majority of Internet users, however,
are not able to take part in multicast streams because they either
lack multicast network connectivity, are located behind firewalls, or
have insufficient network resources available. In an effort to
extend the scope of multicast applications, an RTP to HTTP gateway
component can be developed that makes it possible for an Internet
user to take part in multicast video streams. WebSmile is one
example of an RTP to HTTP gateway which can be used to connect to a
multicast-capable network. In WebSmile, the server performs three
separate functions depending on the parameters with which it is
invoked:
o Monitor a multicast session and report back information about the
video sources that are identified.
o Join a session and return an HTML-page with video displays.
o Start forwarding video over HTTP.
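The forwarding function above hinges on unpacking incoming RTP
packets before their payloads can be sent over HTTP. The sketch
below shows only that first step, stripping the fixed 12-byte RTP
header defined in RFC 3550; it illustrates the gateway idea and is
not WebSmile's actual implementation (it ignores CSRC lists, header
extensions, and padding):

```python
import struct

def rtp_payload(packet):
    """Strip the fixed 12-byte RTP header (RFC 3550) from one packet.

    Returns (sequence_number, timestamp, payload) so a gateway can
    reorder packets by sequence number before forwarding the payload
    bytes on an HTTP response. A real gateway must also handle CSRC
    lists, header extensions, and padding, which this sketch ignores.
    """
    if len(packet) < 12:
        raise ValueError("short RTP packet")
    b0, _b1, seq, ts, _ssrc = struct.unpack("!BBHII", packet[:12])
    if b0 >> 6 != 2:                       # top two bits: RTP version
        raise ValueError("not RTP version 2")
    return seq, ts, packet[12:]
```

A gateway loop would buffer a few packets, sort by sequence number,
and write the concatenated payloads to the open HTTP response.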
9.3. "Multi-Screen" Service Delivery
With existing deployments today, services like Network DVR and
TV/Video Anywhere are generally limited in the types of device they
support, and in the level of integration and interactivity between
screens. A "Multi-Screen" service provides a common user experience
across PCs, TVs, smartphones, and tablets that enables consumers to
access the same media content and quality of experience (QoE) on any
device, anytime and anywhere. Such a multi-screen experience is
lacking for end users of services like Network DVR and TV/Video
Anywhere. Since all such clients have browser support, HTTP
Streaming is an obvious choice for delivering a multi-screen service.
However, utilizing HTTP Streaming to deliver a multi-screen service
while meeting real-time streaming requirements faces several great
challenges, including:
o Clients with a wide range of variation in processing power,
display capability, and network conditions.
o Lack of capability to offer interaction and user control across
screens.
o No network intelligence to control and manage HTTP streaming.
o No QoS/QoE guarantee over the best-effort Internet.
o Playout buffer overflow or underflow due to lack of streaming
media synchronization.
9.4. Heterogeneous Handover
In some cases, the streaming client may work in heterogeneous
environments, e.g., moving from a 3G network into a 2G network;
delivery of on-demand IPTV content to a mobile device or PC; stopping
and resuming content across different devices; or downloading content
to a smart mobile device as it moves between access networks. In
such cases, HTTP streaming should allow the experience on each device
to be consistent, and give subscribers easy access to their favorite
content both online and offline.
9.5. Time Shifted Playback
Time Shifted Playback can be integrated with HTTP Streaming to
provide the same viewing experience as DVD or television viewing,
which users are already accustomed to.
9.6. Content Publishing
HTTP Streaming can be used in a CDN to optimize content delivery. A
content publisher may utilize HTTP Streaming to publish popular
content from the server to a web cache, which in turn reduces
bandwidth requirements and server load and improves client response
times for content stored in the cache. Also, when the web cache
fails to provide the content in greatest demand to the requester
(e.g., a client), the web cache can use the HTTP Streaming protocol
to retrieve the content from the server and cache it in anticipation
of the next request from the requester.
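The cache-miss path described above can be sketched as follows;
`origin_fetch` stands in for a real HTTP GET to the origin server,
and all names are illustrative assumptions rather than any product's
interface:

```python
class WebCache:
    """Minimal sketch of the cache-miss path: serve from the cache
    when possible, otherwise pull the content from the origin server
    over HTTP and retain it for the next requester."""

    def __init__(self, origin_fetch):
        self._fetch = origin_fetch   # callable: url -> bytes (stands in for HTTP GET)
        self._store = {}             # url -> cached body

    def get(self, url):
        data = self._store.get(url)
        if data is None:             # cache miss: go to the origin
            data = self._fetch(url)
            self._store[url] = data  # keep it for the next request
        return data
```

A production cache would additionally honor expiry and validation
headers; the sketch only shows the fetch-and-retain flow.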
10. Security Consideration
10.1. Streaming Content Protection
In order to protect content against theft or unauthorized use,
possible desirable features include:
o Authorizing users to view a stream once or an unlimited number of
times.
o Permitting unlimited viewings but restricting viewing to a
particular machine, a region of the world, or a limited period of
time.
o Permitting viewing but not copying, or allowing only one copy with
a timestamp that prevents viewing after a certain time.
o Charging per view, per unit of time, or per episode.
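One common way to realize the time-limited viewing feature above is
an expiring signed URL, sketched below. This is an illustrative
assumption, not a mechanism mandated by this document; the key
handling and parameter names are hypothetical:

```python
import hashlib
import hmac
import time

SECRET = b"demo-key"   # shared between publisher and edge server (assumption)

def sign_url(path, expires_at, secret=SECRET):
    """Attach an expiry time and an HMAC to a fragment URL so the
    edge server can reject late or forged requests."""
    msg = f"{path}|{expires_at}".encode()
    token = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires_at}&token={token}"

def check_url(path, expires_at, token, now=None, secret=SECRET):
    """Verify expiry first, then the HMAC, using a constant-time
    comparison to avoid leaking token bytes through timing."""
    now = time.time() if now is None else now
    if now > expires_at:
        return False
    msg = f"{path}|{expires_at}".encode()
    good = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, token)
```

Per-view or per-machine restrictions can reuse the same pattern by
folding a view counter or device identifier into the signed message.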
11. References
11.1. Normative References
[HTML5] http://www.w3.org/TR/html5/video.html#media-elements
[Server Sent Event] http://www.w3.org/TR/eventsource/
[Media Fragments] http://www.w3.org/2008/WebVideo/Fragments/WD-media-
fragments-spec/
[Smooth Streaming]
http://blogs.iis.net/samzhang/archive/2009/03/27/live
-smooth-streaming-design-thoughts.aspx
[RFC2326] Schulzrinne, H., Rao, A., and R. Lanphier, "Real Time
Streaming Protocol (RTSP)", RFC 2326, April 1998.
[RFC1945] Berners-Lee, T., Fielding, R., and H. Frystyk, "Hypertext
Transfer Protocol -- HTTP/1.0", RFC 1945, May 1996.
[RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
Resource Identifier (URI): Generic Syntax", RFC 3986, January 2005.
[I-D.pantos-http-live-streaming]
Pantos, R. and W. May, "HTTP Live Streaming", draft-pantos-http-
live-streaming-04 (work in progress), June 2010.
[TS 26.234] 3GPP TS 26.234, "Transparent end-to-end Packet-switched
Streaming Service (PSS); Protocols and codecs (Release 9)".
11.2. Informative References
[PMOLFRAME]
Clark, A., "Framework for Performance Metric Development",
ID draft-ietf-pmol-metrics-framework-02, March 2009.
[J.1080] ITU-T Recommendation G.1080, "Quality of experience
requirements for IPTV services".
Authors' Addresses
Qin Wu
Huawei Technologies Co., Ltd.
101 Software Avenue, Yuhua District
Nanjing, Jiangsu 210012
China
Email: sunseawq@huawei.com
Rachel Huang
Huawei Technologies Co., Ltd.
101 Software Avenue, Yuhua District
Nanjing, Jiangsu 210012
China
Email: Rachel@huawei.com