PPSP Y. Gu
Internet-Draft N. Zong, Ed.
Intended status: Standards Track Huawei
Expires: August 29, 2013 Y. Zhang
China Mobile
F. Piccolo
Cisco
S. Duan
CATR
February 25, 2013
Survey of P2P Streaming Applications
draft-ietf-ppsp-survey-04
Abstract
This document presents a survey of some of the most popular Peer-to-
Peer (P2P) streaming applications on the Internet.  The main
selection criteria were popularity and the availability of
information on operation details at the time of writing.  The
selected applications are not reviewed as a whole; instead, we focus
exclusively on the signaling and control protocols used to establish
and maintain overlay connections among peers and to advertise and
download streaming content.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on August 29, 2013.
Copyright Notice
Copyright (c) 2013 IETF Trust and the persons identified as the
document authors. All rights reserved.
Gu, et al. Expires August 29, 2013 [Page 1]
Internet-Draft Survey of P2P Streaming Applications February 2013
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3
2. Terminologies and concepts . . . . . . . . . . . . . . . . . . 4
3. Classification of P2P Streaming Applications Based on
Overlay Topology . . . . . . . . . . . . . . . . . . . . . . . 5
3.1. Mesh-based P2P Streaming Applications . . . . . . . . . . 5
3.1.1. Octoshape . . . . . . . . . . . . . . . . . . . . . . 6
3.1.2. PPLive . . . . . . . . . . . . . . . . . . . . . . . . 8
3.1.3. Zattoo . . . . . . . . . . . . . . . . . . . . . . . . 10
3.1.4. PPStream . . . . . . . . . . . . . . . . . . . . . . . 11
3.1.5. SopCast . . . . . . . . . . . . . . . . . . . . . . . 12
3.1.6. Tribler . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.7. QQLive . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2. Tree-based P2P streaming applications . . . . . . . . . . 16
3.2.1. End System Multicast (ESM) . . . . . . . . . . . . . . 17
3.3. Hybrid P2P streaming applications . . . . . . . . . . . . 18
3.3.1. New Coolstreaming . . . . . . . . . . . . . . . . . . 19
4. Security Considerations . . . . . . . . . . . . . . . . . . . 21
5. Author List . . . . . . . . . . . . . . . . . . . . . . . . . 21
6. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 21
7. Informative References . . . . . . . . . . . . . . . . . . . . 21
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 22
1. Introduction
An ever-increasing number of multimedia streaming systems have been
adopting the Peer-to-Peer (P2P) paradigm to stream audio and video
content from a source to a large number of end users.  This is the
reference scenario of this document, which presents a survey of some
of the most popular P2P streaming applications available on the
Internet today.  The survey does not aim to be exhaustive: the
reviewed applications were selected mainly on the basis of their
popularity and of the information publicly available on their P2P
operation details at the time of writing.
In addition, selected applications are not reviewed as a whole, but
with exclusive focus on signaling and control protocols used to
construct and maintain the overlay connections among peers and to
advertise and download multimedia content. More precisely, we assume
throughout the document the high level system model reported in
Figure 1.
+--------------------------------+
| Tracker |
| Information on multimedia |
| content and peer set |
+--------------------------------+
^ | ^ |
| | | |
Tracker | | Tracker | |
Protocol | | Protocol | |
| | | |
| | | |
| V | V
+-------------+ +------------+
| Peer1 |<-------->| Peer 2 |
+-------------+ Peer +------------+
Protocol
Figure 1, High level model of P2P streaming systems assumed
as reference throughout the document
As Figure 1 shows, it is possible to identify in every P2P streaming
system two main types of entity: peers and trackers. Peers represent
end users, which join dynamically the system to send and receive
streamed media content, whereas trackers represent well-known nodes,
which are stably connected to the system and provide peers with
metadata information about the streamed content and the set of active
peers. According to this model, it is possible to distinguish between
two different control and signaling protocols:
the protocol that regulates the interaction between trackers and
peers and will be denoted as "tracker protocol" in the document;
the protocol that regulates the interaction between peers and will
be denoted as "peer protocol" in the document.
Hence, whenever possible, we will identify the tracker and peer
protocols and provide the corresponding details.
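The tracker-protocol half of this model can be sketched as a toy
registry.  All names here are illustrative and do not come from any
real implementation:

```python
# Toy sketch of the Figure 1 model: a tracker that maintains, per
# content item, the set of active peers and returns it on registration.
class Tracker:
    def __init__(self):
        self.swarms = {}  # content_id -> set of peer addresses

    def announce(self, content_id, peer_addr):
        """A peer registers for a content item and receives the other
        active peers, with which it can then run the peer protocol."""
        swarm = self.swarms.setdefault(content_id, set())
        others = sorted(swarm - {peer_addr})
        swarm.add(peer_addr)
        return others

tracker = Tracker()
print(tracker.announce("channel-1", "10.0.0.1"))  # []
print(tracker.announce("channel-1", "10.0.0.2"))  # ['10.0.0.1']
```

Everything after this first exchange (chunk advertisement, chunk
download) belongs to the peer protocol and varies per application.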
This document is organized as follows.  Section 2 introduces
terminology and concepts used throughout the survey.  Since the
overlay topology built on connections among peers impacts some
aspects of the tracker and peer protocols, Section 3 classifies P2P
streaming applications according to the main overlay topologies:
mesh-based, tree-based and hybrid.  Section 3.1 then presents some of
the most popular mesh-based P2P streaming applications: Octoshape,
PPLive, Zattoo, PPStream, SopCast, Tribler and QQLive.  Likewise,
Section 3.2 presents End System Multicast as an example of tree-based
P2P streaming application.  Finally, Section 3.3 presents New
Coolstreaming as an example of hybrid-topology P2P streaming
application.
2. Terminologies and concepts
Chunk: A chunk is a basic unit of data organized in P2P streaming for
storage, scheduling, advertisement and exchange among peers.
Live streaming: It refers to a scenario where all the audiences
receive streaming content for the same ongoing event. It is desired
that the lags between the play points of the audiences and streaming
source be small.
Peer: A peer refers to a participant in a P2P streaming system that
not only receives streaming content, but also caches and streams that
content to other participants.
Peer protocol: Control and signaling protocol that regulates
interaction among peers.
Pull: Transmission of multimedia content only if requested by
receiving peer.
Push: Transmission of multimedia content without any request from
receiving peer.
Swarm: A swarm refers to a group of peers who exchange data to
distribute chunks of the same content at a given time.
Tracker: A tracker refers to a directory service that maintains a
list of peers participating in a specific audio/video channel or in
the distribution of a streaming file.
Tracker protocol: Control and signaling protocol that regulates
interaction among peers and trackers.
Video-on-demand (VoD): It refers to a scenario where different
audiences may watch different parts of the same recorded stream
using downloaded content.
3. Classification of P2P Streaming Applications Based on Overlay
Topology
Depending on the topology that can be associated with overlay
connections among peers, it is possible to distinguish among the
following general types of P2P streaming applications:
- tree-based: peers are organized to form a tree-shape overlay
network rooted at the streaming source, and multimedia content
delivery is push-based. Peers that forward data are called parent
nodes, and peers that receive it are called children nodes. Due
to their structured nature, tree-based P2P streaming applications
present a very low cost of topology maintenance and are able to
guarantee good performance in terms of scalability and delay. On
the other side, they are not very resilient to peer churn, that
may be very high in a P2P environment;
- mesh-based: peers are organized in a randomly connected overlay
network, and multimedia content delivery is pull-based. This is
the reason why these systems are also referred to as "data-
driven".  Due to their unstructured nature, mesh-based P2P
streaming applications are very resilient to peer churn and can
achieve higher network resource utilization than tree-based
applications.  On the other side, the cost of maintaining the
overlay topology may limit performance in terms of scalability
and delay, and pull-based data delivery calls for large buffers
in which to store chunks;
- hybrid: this category includes all the P2P applications that
cannot be classified as simply mesh-based or tree-based and that
present characteristics of both categories.
3.1. Mesh-based P2P Streaming Applications
In mesh-based P2P streaming applications peers self-organize in a
randomly connected overlay graph where each peer interacts with a
limited subset of peers (neighbors) and explicitly requests the
chunks it needs (pull-based or data-driven delivery).  This type of
content delivery may be associated with high overhead, not only
because peers
formulate requests in order to download the chunks they need, but
also because in some applications peers exchange information about
the chunks they own (in the form of so-called buffer-maps, a sort of
bit map with a bit set to "1" for each chunk stored in the local
buffer).  The main advantage of this kind of application lies in the
fact that a peer does not rely on a single peer to retrieve
multimedia content.  Hence, these applications are very resilient to
peer churn.  On the other side, overlay connections are not
persistent and are highly dynamic (being driven by content
availability), and this makes content distribution efficiency
unpredictable.  In fact, different chunks may be retrieved via
different network paths, and this may result, at the end user, in
playback quality degradation ranging from low bit rates to long
startup delays and frequent playback freezes.  Moreover, peers have
to maintain large buffers to increase the probability of satisfying
chunk requests received from neighbors.
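The pull-based exchange described above can be illustrated with a
minimal sketch (a hypothetical helper, not code from any of the
surveyed systems): given the local buffer-map and a neighbor's
buffer-map, a peer requests the chunks it lacks.

```python
def chunks_to_request(local_map, neighbor_map):
    """Pull-based delivery: return the indices of chunks the neighbor
    advertises (bit 1 in its buffer-map) that are missing locally."""
    return [i for i, (mine, theirs) in enumerate(zip(local_map, neighbor_map))
            if theirs and not mine]

local    = [1, 1, 0, 0, 1, 0]
neighbor = [1, 0, 1, 0, 1, 1]
print(chunks_to_request(local, neighbor))  # [2, 5]
```

A real scheduler would additionally weigh chunk rarity and playback
deadlines, as discussed for the individual applications below.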
3.1.1. Octoshape
Octoshape [Octoshape] is best known for the P2P plug-in that CNN
[CNN] has been using to broadcast its live streaming.  Octoshape
helps CNN serve a peak of more than a million simultaneous viewers,
and it has also introduced several innovative delivery technologies
such as loss-resilient transport, adaptive bit rate, adaptive path
optimization and adaptive proximity delivery.
Figure 2 depicts the architecture of the Octoshape system.
+------------+ +--------+
| Peer 1 |---| Peer 2 |
+------------+ +--------+
| \ / |
| \ / |
| \ |
| / \ |
| / \ |
| / \ |
+--------------+ +-------------+
| Peer 4 |----| Peer3 |
+--------------+ +-------------+
*****************************************
|
|
+---------------+
| Content Server|
+---------------+
Figure 2, Architecture of Octoshape system
As can be seen from the figure, there are no trackers and
consequently no tracker protocol is necessary.
As regards the peer protocol, as soon as a peer joins a channel, it
notifies all the other peers about its presence, in such a way that
each peer maintains a sort of address book with the information
necessary to contact other peers who are watching the same channel.
Although Octoshape inventors claim in [Octoshape] that each peer
records all peers joining a channel, we suspect that it is very
unlikely that all peers are recorded. In fact, the corresponding
overhead traffic would be large, especially when a popular program
starts in a channel and lots of peers switch to it.  Perhaps only
some geographic or topological neighbors are notified, and the
joining peer gets the address book from these nearby neighbors.
Regarding data distribution strategy, in the Octoshape solution the
original stream is split into a number K of smaller equal-sized data
streams, but a number N > K of unique data streams are actually
constructed, in such a way that a peer receiving any K of the N
available data streams is able to play the original stream. For
instance, if the original live stream is a 400 kbit/sec signal, for
K=4 and N=12, 12 unique data streams are constructed, and a peer that
downloads any 4 of the 12 data streams is able to play the live
stream.  In this way, each peer sends requests for data streams to
some selected peers, and it receives positive/negative answers
depending on the availability of upload capacity at the requested
peers.  In case of negative answers, a peer continues sending
requests until it finds K peers willing to upload the minimum number
of data streams needed to reconstruct the original live stream.
Since the number of peers served by a given peer is limited by its
upload capacity, the upload capacity at each peer should be larger
than the playback rate of the live stream; otherwise, artificial
peers may be added to offer extra bandwidth.
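The K-of-N property above can be checked with a small sketch.  The
helper name is hypothetical, and Octoshape's actual coding scheme is
not public; only the "any K distinct streams suffice" rule comes from
the text.

```python
def can_play(received_stream_ids, k):
    """A peer can reconstruct the original stream once it holds any k
    *distinct* data streams out of the n constructed ones."""
    return len(set(received_stream_ids)) >= k

# The example from the text: a 400 kbit/s stream with K=4, N=12.
K = 4
print(can_play([3, 7, 7, 11], K))   # False: only 3 distinct streams held
print(can_play([3, 7, 11, 0], K))   # True: any 4 distinct streams suffice
```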
In order to mitigate the impact of peer loss, the address book is
also used at each peer to derive the so called Standby List, which
Octoshape peers use to probe other peers and be sure that they are
ready to take over if one of the current senders leaves or gets
congested.
Finally, in order to optimize bandwidth utilization, Octoshape
leverages peers within a network to minimize external bandwidth usage
and to select the most reliable and "closest" source to each viewer.
It also chooses the best matching available codecs and players, and
it scales the bit rate up and down according to the available
Internet connection.
3.1.2. PPLive
PPLive [PPLive] is one of the most popular P2P streaming
applications in China.  The PPLive system includes six components.
(1) Video streaming server: provides the source of video content and
encodes the content to adapt it to the network transmission rate and
to client playback.
(2) Peer: also called node or client.  Peers logically compose the
self-organizing network, and each peer can join or leave at any
time.  While a client downloads content, it also serves its own
content to other clients at the same time.
(3) Directory server: server with which the PPLive client, when
launched or shut down by the user, automatically registers or
cancels user information.
(4) Tracker server: server that records the information of all users
watching the same content.  In more detail, when the PPLive client
requests some content, this server checks whether other peers own
the content and sends their information to the client.
(5) Web server: provides PPLive software updating and downloading.
(6) Channel list server: server that stores the information of all
the programs that end users can watch, including VoD programs and
live broadcast programs.
PPLive uses two major communication protocols. The first one is the
Registration and Peer Discovery protocol, the equivalent of tracker
protocol, and the second one is the P2P Chunk Distribution protocol,
the equivalent of peer protocol. Figure 3 shows the architecture of
PPLive system.
+------------+ +--------+
| Peer 2 |----| Peer 3 |
+------------+ +--------+
| |
| |
+--------------+
| Peer 1 |
+--------------+
|
|
|
+---------------+
| Tracker Server|
+---------------+
Figure 3, Architecture of PPlive system
As regards the tracker protocol, firstly a peer gets the channel list
from the Channel list server; secondly it chooses a channel and asks
the Tracker server for a peer-list associated with the selected
channel.
As regards the peer protocol, a peer contacts the peers in its peer-
list to obtain additional peer-lists, which are merged with the
original one received from the Tracker server, with the goal of
constructing and maintaining an overlay mesh for peer management and
data delivery.  According to [P2PIPTVMEA], PPLive peers maintain a
constant peer-list when the number of peers is relatively small.
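The peer-list merging step can be sketched as follows.  The cap on
list size is a hypothetical parameter; the text only notes that the
list stays constant for small swarms.

```python
def merge_peer_lists(current, received, max_size=50):
    """Merge a peer-list received from a neighbor into the one obtained
    from the Tracker server, dropping duplicates while preserving order.
    max_size is an illustrative cap, not a documented PPLive constant."""
    merged = list(dict.fromkeys(current + received))  # ordered de-dup
    return merged[:max_size]

print(merge_peer_lists(["a", "b"], ["b", "c", "a", "d"]))
# ['a', 'b', 'c', 'd']
```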
For the video-on-demand (VoD) operation, because different peers
watch different parts of the channel, a peer buffers chunks up to a
few minutes of content within a sliding window. Some of these chunks
may be chunks that have been recently played; the remaining chunks
are chunks scheduled to be played in the next few minutes. In order
to upload chunks to each other, peers exchange "buffer-map" messages.
A buffer-map message indicates which chunks a peer currently has
buffered and can share, and it includes the offset (the ID of the
first chunk), the length of the buffer map, and a string of zeroes
and ones indicating which chunks are available (starting with the
chunk designated by the offset).  PPLive transfers data over UDP.
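A buffer-map message of this shape can be decoded with a few lines.
The helper name is hypothetical; the actual wire format is not
documented.

```python
def decode_buffer_map(offset, bits):
    """Turn a PPLive-style buffer-map (offset = ID of the first chunk,
    bits = string of '0'/'1' availability flags) into the list of
    chunk IDs the sending peer holds and can share."""
    return [offset + i for i, flag in enumerate(bits) if flag == "1"]

print(decode_buffer_map(100, "110101"))  # [100, 101, 103, 105]
```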
The download policy of PPLive may be summarized with the following
three points:
top-ten peers contribute a major part of the download traffic.
Meanwhile, sessions with top-ten peers are quite short compared
with the video session duration.  This suggests that PPLive
gets video from only a few peers at any given time and switches
periodically from one peer to another;
PPLive can send multiple chunk requests for different chunks to
one peer at one time;
PPLive is observed to have the download scheduling policy of
giving higher priority to rare chunks and to chunks closer to the
playout deadline.
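The observed scheduling policy (rare chunks and near-deadline chunks
first) can be sketched as a sort key.  The weighting is purely
illustrative, since the actual PPLive algorithm is not public.

```python
def download_order(chunks):
    """chunks: list of (chunk_id, copies_in_swarm, seconds_to_playout).
    Rarer chunks (fewer copies in the swarm) and chunks closer to
    their playout deadline are requested first."""
    return [cid for cid, _, _ in sorted(chunks, key=lambda c: (c[1], c[2]))]

chunks = [(10, 5, 2.0), (11, 1, 8.0), (12, 1, 1.0)]
print(download_order(chunks))  # [12, 11, 10]
```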
3.1.3. Zattoo
Zattoo [Zattoo] is a P2P live streaming system that serves over 3
million registered users across European countries.  The system
delivers live streaming using a receiver-based, peer-division
multiplexing scheme, and it reliably streams media among peers using
a mesh structure.
Figure 4 depicts the typical procedure for a single TV channel
carried over the Zattoo network.  First, the Zattoo system broadcasts
a live TV channel, captured from satellite, onto the Internet.  Each
TV channel is delivered through a separate P2P network.
-------------------------------
| ------------------ | --------
| | Broadcast | |---------|Peer1 |-----------
| | Servers | | -------- |
| Administrative Servers | -------------
| ------------------------ | | Super Node|
| | Authentication Server | | -------------
| | Rendezvous Server | | |
| | Feedback Server | | -------- |
| | Other Servers | |---------|Peer2 |----------|
| ------------------------| | --------
------------------------------|
Figure 4, Basic architecture of Zattoo system
In order to receive a TV channel, users are required to authenticate
through the Zattoo Authentication Server.  Upon authentication, a
user obtains a ticket identifying the TV channel of interest, with a
specific lifetime.  Then the user contacts the Rendezvous Server,
which plays the role of tracker and, based on the received ticket,
sends back a list of joined peers carrying the channel.
As regards the peer protocol, a peer establishes overlay connections
with other peers randomly selected in the peer-list received by the
Rendezvous Server.
For reliable data delivery, each live stream is partitioned into
video segments. Each video segment is coded for forward error
correction with Reed-Solomon error correcting code into n sub-stream
packets such that having obtained k correct packets of a segment is
sufficient to reconstruct the remaining n-k packets of the same video
segment. To receive a video segment, each peer then specifies the
sub-stream(s) of the video segment it would like to receive from the
neighboring peers.
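The "any k of n" property can be illustrated with the simplest
possible erasure code, a single XOR parity packet (the n = k + 1
case); Zattoo itself uses Reed-Solomon codes, which generalize this
to arbitrary n - k.

```python
def xor_parity(packets):
    """XOR all packets together (each packet modeled as an int)."""
    acc = 0
    for p in packets:
        acc ^= p
    return acc

data = [0x12, 0x34, 0x56]   # k = 3 data packets of one video segment
parity = xor_parity(data)   # the single redundant packet (n = 4)

# Lose data[1]: any k = 3 of the n = 4 packets still recover the segment.
recovered = xor_parity([data[0], data[2], parity])
print(recovered == data[1])  # True
```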
A peer decides how to multiplex a stream among its neighboring peers
based on the availability of upload bandwidth.  In this respect,
Zattoo peers rely on a Bandwidth Estimation Server to initially
estimate the amount of available uplink bandwidth at a peer.  Once a
peer starts to forward a sub-stream to other peers, it receives QoS
feedback from its receivers whenever the quality of the sub-stream
drops below a threshold.
Zattoo uses an Adaptive Peer-Division Multiplexing (PDM) scheme to
handle longer-term bandwidth fluctuations.  According to this scheme,
each peer determines how many sub-streams to transmit and when to
switch partners.  Specifically, each peer continuously estimates the
amount of available uplink bandwidth, based initially on probe
packets sent to the Zattoo Bandwidth Estimation Server and
subsequently on peer QoS feedback, using different algorithms
depending on the
underlying transport protocol. A peer increases its estimated
available uplink bandwidth, if the current estimate is below some
threshold and if there has been no bad quality feedback from
neighboring peers for a period of time, according to some algorithm
similar to how TCP maintains its congestion window size. Each peer
then admits neighbors based on the currently estimated available
uplink bandwidth. In case a new estimate indicates insufficient
bandwidth to support the existing number of peer connections, one
connection at a time, preferably starting with the one requiring the
least bandwidth, is closed. On the other hand, if loss rate of
packets from a peer's neighbor reaches a certain threshold, the peer
will attempt to shift the degraded neighboring peer load to other
existing peers, while looking for a replacement peer. When one is
found, the load is shifted to it and the degraded neighbor is
dropped.  As expected, if a peer's neighbor is lost due to departure,
the peer initiates the process of replacing the lost peer.  To optimize
the PDM configuration, a peer may occasionally initiate switching
existing partnering peers to topologically closer peers.
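The TCP-like estimate maintenance can be sketched as follows.  The
step size, threshold and backoff factor are hypothetical, as the text
gives no concrete values.

```python
def update_uplink_estimate(estimate_kbps, bad_feedback,
                           threshold_kbps=2000.0,
                           additive_step_kbps=50.0,
                           backoff=0.5):
    """Additive increase while the estimate is below a threshold and
    QoS feedback is clean; multiplicative decrease on bad feedback.
    All parameters are illustrative, loosely modeled on how the text
    compares the scheme to TCP congestion-window maintenance."""
    if bad_feedback:
        return estimate_kbps * backoff
    if estimate_kbps < threshold_kbps:
        return estimate_kbps + additive_step_kbps
    return estimate_kbps

print(update_uplink_estimate(1000.0, bad_feedback=False))  # 1050.0
print(update_uplink_estimate(1000.0, bad_feedback=True))   # 500.0
```

The peer would then admit only as many neighbors as the current
estimate supports, closing the least demanding connection first when
the estimate drops.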
3.1.4. PPStream
The system architecture of PPStream [PPStream] is similar to the one
of PPLive.
To ensure data availability, PPStream uses some form of chunk
retransmission request mechanism and shares buffer map at high rate.
Each data chunk, identified by the play time offset encoded by the
program source, is divided into 128 sub-chunks of 8KB size each. The
chunk id is used to ensure sequential ordering of received data
chunk. The buffer map consists of one or more 128-bit flags denoting
the availability of sub-chunks, and it includes information on time
offset. Usually, a buffer map contains only one data chunk at a
time, and it also contains sending peer's playback status, because as
soon as a data chunk is played back, the chunk is deleted or replaced
by the next data chunk.
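The 128-bit sub-chunk flags can be handled directly as an integer
bitmask (the helper names are hypothetical):

```python
CHUNK_SUBCHUNKS = 128
SUBCHUNK_SIZE = 8 * 1024  # 8 KB sub-chunks -> a 1 MB data chunk

def missing_subchunks(flags):
    """flags: 128-bit integer, bit i set when sub-chunk i is available.
    Returns the indices of sub-chunks still to be fetched."""
    return [i for i in range(CHUNK_SUBCHUNKS) if not (flags >> i) & 1]

full = (1 << CHUNK_SUBCHUNKS) - 1
print(missing_subchunks(full))              # []
print(missing_subchunks(full & ~(1 << 5)))  # [5]
```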
In the initiating stage a peer can use up to four data chunks,
whereas in the stabilized stage a peer usually uses one data chunk.
In the transient stage, however, a peer uses a variable number of
chunks.  Sub-chunks within each data chunk are fetched nearly at
random, without using a rarest-first or greedy policy.  The same
fetching pattern for one data chunk seems to repeat itself in the
subsequent data chunks.
Moreover, higher bandwidth PPStream peers tend to receive chunks
earlier and thus to contribute more than lower bandwidth peers.
Based on the experimental results reported in [P2PIPTVMEA], download
policy of PPStream may be summarized with the following two points:
top-ten peers do not contribute a large part of the download
traffic.  This suggests that a PPStream peer gets the video
from many peers simultaneously and that sessions between peers
have a long duration;
PPStream does not send multiple chunk requests for different
chunks to one peer at one time, and it maintains a constant peer
list with a relatively large number of peers.
3.1.5. SopCast
The system architecture of SopCast [SopCast] is similar to the one of
PPLive.
SopCast allows for software updates via HTTP through a centralized
web server, and it makes list of channels available via HTTP through
another centralized server.
SopCast traffic is encoded and SopCast TV content is divided into
video chunks or blocks with equal sizes of 10KB. Sixty percent of
its traffic is signaling packets and 40% is actual video data
packets.  SopCast produces more signaling traffic than PPLive and
PPStream, with PPLive producing the least signaling traffic.  It
has been observed in [P2PIPTVMEA] that SopCast traffic has long-range
dependency, which also means that eventual QoS mitigation mechanisms
may be ineffective. Moreover, according to [P2PIPTVMEA], SopCast
communication mechanism starts with UDP for the exchange of control
messages among its peers by using a gossip-like protocol and then
moves to TCP for the transfer of video segments. It also seems that
top-ten peers contribute to about half of the total download traffic.
Finally, SopCast peer-list can be as large as PPStream peer-list, but
differently from PPStream SopCast peer-list varies over time.
3.1.6. Tribler
Tribler [tribler] is a BitTorrent client that goes well beyond the
BitTorrent model, notably thanks to its support for video streaming.
Initially developed by a team of researchers at Delft University of
Technology, Tribler has attracted attention from other universities
and media companies and has received European Union research funding
(the P2P-Next and QLectives projects).
Differently from BitTorrent, where a tracker server centrally
coordinates uploads/downloads of chunks among peers and peers
directly interact with each other only when they actually upload/
download chunks to/from each other, there is no tracker server in
Tribler and, as a consequence, there is no need of tracker protocol.
Peer protocol is instead used to organize peers in an overlay mesh.
In more detail, the Tribler bootstrap process consists of preloading
well-known super-peer addresses into the peer's local cache, so that
a joining peer can randomly select a super-peer from which to
retrieve a random list of already active peers with which to
establish overlay connections.
A gossip-like mechanism called BuddyCast allows Tribler peers to
exchange their preference lists, that is, the files they have
downloaded, and to build the so-called Preference Cache.  This cache
is used to calculate similarity levels among peers and to identify
the so-called "taste buddies", the peers with the highest similarity.
Thanks to this mechanism each peer maintains two lists of peers: i) a
list of its top-N taste buddies along with their current preference
lists, and ii) a list of random peers.  A peer alternately selects a
peer from one of the two lists and sends it its preference list,
taste-buddy list and a selection of random peers.  The goal behind
the propagation of this kind of information is to support the remote
search function, a completely decentralized search service that
consists in querying the Preference Cache of taste buddies in order
to find the torrent file associated with a file of interest.  If no
torrent is found in this way, Tribler users may alternatively resort
to the web-based torrent collector servers available to BitTorrent
clients.
As already said, Tribler supports video streaming in two different
forms: video on demand and live streaming.
As regards video on demand, a peer first of all keeps its neighbors
informed about the chunks it has.  Then, on the one side, it applies
a suitable chunk-picking policy in order to establish the order in
which to request the chunks it wants to download.  This policy aims
to ensure that chunks reach the media player in order and at the
same time that overall chunk availability is maximized.  To this
end, the chunk-picking policy differentiates among high, mid and low
priority chunks depending on their closeness to the playback
position.  High priority chunks are requested first and in strict
order.  When there are no more high priority chunks to request, mid
priority chunks are requested according to a rarest-first policy.
Finally, when there are no more mid priority chunks to request, low
priority chunks are requested according to a rarest-first policy as
well.  On the other side, Tribler peers follow the give-to-get
policy in order to establish which peer neighbors are allowed to
request chunks (in BitTorrent jargon, to be unchoked).  In more
detail, time is subdivided into periods and after each period
Tribler peers first sort their neighbors by decreasing number of
chunks forwarded to other peers, counting only the chunks originally
received from them.  In case of a tie, a Tribler peer sorts its
neighbors by decreasing total number of chunks they have forwarded
to other peers.  Since children could lie about the number of chunks
forwarded to others, Tribler peers do not directly ask their
children, but their grandchildren.  A Tribler peer then unchokes the
three highest-ranked neighbors and, in order to saturate its upload
bandwidth without decreasing the performance of individual
connections, it further unchokes a limited number of neighbors.
Moreover, in order to search for better neighbors, Tribler peers
randomly select a new peer among the remaining neighbors and
optimistically unchoke it every two periods.
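The give-to-get ranking can be sketched as a sort over per-neighbor
forwarding counts, as reported by grandchildren.  The function name
and the data shape are illustrative.

```python
def give_to_get_unchoke(neighbors, slots=3):
    """neighbors: dict mapping a neighbor to a pair
    (chunks_from_us_forwarded, total_chunks_forwarded), as learned
    from its grandchildren.  Sort by the first count, break ties on
    the second, and unchoke the top `slots` neighbors."""
    ranked = sorted(neighbors,
                    key=lambda n: (neighbors[n][0], neighbors[n][1]),
                    reverse=True)
    return ranked[:slots]

stats = {"p1": (9, 20), "p2": (9, 25), "p3": (4, 40), "p4": (1, 2)}
print(give_to_get_unchoke(stats))  # ['p2', 'p1', 'p3']
```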
As regards live streaming, differently from the video-on-demand
scenario, the number of chunks cannot be known in advance.  As a
consequence, a sliding window of fixed width is used to identify
chunks of interest: every chunk that falls outside the sliding
window is considered outdated, is locally deleted and is considered
as deleted by peer neighbors as well.  In this way, when a peer
joins the network, it learns about the chunks its neighbors possess
and identifies the most recent one.  This is taken as the beginning
of the sliding window at the joining peer, which starts downloading
and uploading chunks as described for the video-on-demand scenario.
Finally, differently from the video-on-demand scenario, where
torrent files include a hash for each chunk in order to prevent
malicious attackers from corrupting data, torrent files in the live
streaming scenario include the public key of the stream source.
Each chunk is then assigned an absolute sequence number and a
timestamp and is signed with the source's private key.  Such a
mechanism allows
Tribler peers to use the public key included in torrent file and
verity the integrity of each chunk.
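The per-chunk authentication structure can be sketched as follows.
Tribler signs with an asymmetric key pair; to keep this sketch
self-contained we substitute a shared-secret HMAC, which illustrates
the sequence-number/timestamp framing and the verification step but
not asymmetric cryptography. All names are our placeholders.

```python
# Stand-in sketch: each chunk carries an absolute sequence number and
# a timestamp and is signed by the source; receivers verify before
# accepting. HMAC replaces the real public-key signature here.
import hashlib
import hmac
import struct

def sign_chunk(secret, seq, timestamp, payload):
    """Frame the chunk as <seq, timestamp, payload> and sign it."""
    framed = struct.pack('>QQ', seq, timestamp) + payload
    sig = hmac.new(secret, framed, hashlib.sha256).digest()
    return framed, sig

def verify_chunk(secret, framed, sig):
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(secret, framed, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)
```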
3.1.7. QQLive
QQLive [QQLive] is large-scale video broadcast software covering
streaming media encoding, distribution and broadcasting. Its client
runs as a web application, a desktop program or in other
environments, and provides abundant interactive functions in order
to meet the viewing requirements of different kinds of users.
Due to the lack of technical details from the QQLive vendor, our
knowledge of QQLive comes from [QQLivePaper], whose authors performed
measurements and, based on these, identified the main components and
working flow of QQLive.
Main components of QQLive include:
login server, storing user login information and channel
information;
authentication server, processing user login authentication;
channel server, storing all information about channels, including
the nodes currently watching a channel;
program server, storing audio and video data information;
log server, recording the beginning and ending information of
channels;
peer node, watching programs and transporting streaming media.
The main working flow of QQLive includes a startup stage and a play
stage.
The startup stage involves only interactions between peers and
centralized QQLive servers, so it may be regarded as associated with
the tracker protocol. This stage begins when a peer launches the
QQLive client. The peer provides authentication information in an
authentication message, which it sends to the authentication server.
The authentication server verifies the provided credentials and, if
these are valid, the QQLive client starts communicating with the
login server through SSL. The QQLive client sends a message
including the QQLive account and nickname, and the login server
returns a message including information such as membership points,
total view time, upgrading time and so on. At this point, the QQLive
client asks the channel server for an updated channel list. The
QQLive client first loads an old channel list stored locally and then
overwrites it with the new channel list received from the channel
server. The full channel list is not obtained via a single request:
the QQLive client first requests the channel classification and then
requests the channel list within a specific channel category selected
by the user. This approach gives QQLive better real-time
performance.
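The startup sequence above can be sketched as a single function; the
server interfaces are invented placeholders standing in for the
message exchanges reported in [QQLivePaper], not a real QQLive API.

```python
# Hypothetical sketch of the QQLive startup stage: authenticate, log
# in over SSL, then refresh the channel list in two steps (categories
# first, then channels for the user-selected category).
def startup(client, auth_srv, login_srv, channel_srv, category):
    """client: dict with 'credentials', 'account', 'nickname' and an
    optional locally cached 'cached_channels' list."""
    if not auth_srv.verify(client['credentials']):
        return None                        # invalid credentials
    profile = login_srv.login(client['account'], client['nickname'])
    channels = client.get('cached_channels', [])  # old list loaded first
    if category in channel_srv.categories():
        channels = channel_srv.channels(category)  # overwrite with fresh list
    return {'profile': profile, 'channels': channels}
```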
The play stage includes interactions between peers and centralized
QQLive servers as well as among QQLive peers, so it may be regarded
as associated with both the tracker protocol and the peer protocol.
In more detail, the play stage is structured in the following phases:
Open channel. The QQLive client sends a message to the login server
with the ID of the chosen channel through UDP, and the login server
replies with a message including the channel ID, channel name and
program name. Afterwards, the QQLive client communicates with the
program server through SSL to access program information. Finally,
the QQLive client communicates with the channel server through UDP
to obtain initial peer information.
View channel. The QQLive client establishes connections with peers
and sends packets with a fixed length of 118 bytes, which contain
the channel ID. The QQLive client maintains communication with the
channel server by reporting its own information and obtaining updated
information. Peer nodes transport stream packet data through UDP
with a fixed port between 13000 and 14000.
Stop channel. The QQLive client sends five identical UDP packets to
the channel server, each with a fixed length of 93 bytes.
Close client. The QQLive client sends a UDP message to notify the
log server and an SSL message to the login server, then it sends
five identical UDP packets to the channel server, each with a fixed
length of 45 bytes.
3.2. Tree-based P2P streaming applications
In tree-based P2P streaming applications peers self-organize in a
tree-shaped overlay network, where peers do not ask for specific
content chunks but simply receive them from their so-called "parent"
node. Such a content delivery model is denoted as push-based.
Receiving peers are denoted as children, whereas sending nodes are
denoted as parents. The overhead to maintain the overlay topology is
usually lower for tree-based streaming applications than for
mesh-based ones, whereas performance in terms of scalability and
delay is usually higher. On the other side, the greatest drawback of
this type of application is that each node depends on a single node,
its parent in the overlay tree, to receive the streamed content.
Thus, tree-based streaming applications suffer from the peer churn
phenomenon more than mesh-based ones.
3.2.1. End System Multicast (ESM)
Even though the End System Multicast (ESM) project has ended and the
ESM infrastructure is not currently deployed anywhere, we decided to
include it in this survey for a twofold reason. First of all, it was
probably the first and most significant research work proposing the
possibility of implementing multicast functionality at end hosts in
a P2P way. Secondly, the ESM research group at Carnegie Mellon
University developed the world's first P2P live streaming system,
and some members later founded the Conviva [conviva] live platform.
The main property of ESM is that it constructs the multicast tree in
a two-step process. The first step aims at the construction of a
mesh among participating peers, whereas the second step aims at the
construction of data delivery trees rooted at the stream source.
Therefore a peer participates in two types of topology management
structures: a control structure that guarantees peers are always
connected in a mesh, and a data delivery structure that guarantees
data gets delivered in an overlay multicast tree.
There exist two versions of ESM.
The first version of the ESM architecture [ESM1] was conceived for
small-scale multi-source conferencing applications. Regarding the
mesh construction phase, when a new member wants to join the group,
an out-of-band bootstrap mechanism provides the new member with a
list of some group members. The new member randomly selects a few
group members as peer neighbors. The number of selected neighbors
does not exceed a given bound, which reflects the bandwidth of the
peer's connection to the Internet. Each peer periodically emits a
refresh message with a monotonically increasing sequence number,
which is propagated across the mesh in such a way that each peer can
maintain a list of all the other peers in the system. When a peer
leaves, either it notifies its neighbors and the information is
propagated across the mesh to all participating peers, or the peer's
neighbors detect the abrupt departure and propagate it through the
mesh. To improve mesh/tree quality, on the one side peers constantly
and randomly probe each other to add new links; on the other side,
peers continually monitor existing links to drop those that are not
perceived as good-quality links. This is done by evaluating a
utility function and a cost function, which are conceived to
guarantee that the shortest overlay delay between any pair of peers
is comparable to the unicast delay between them. Regarding the
multicast tree construction phase, peers run a
distance-vector protocol on top of the mesh and use latency as the
routing metric. In this way, data delivery trees can be constructed
from the reverse shortest path between source and recipients.
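The refresh-based membership maintenance described above can be
sketched as follows; the data structures are our reconstruction, not
ESM code. The key property is that only refreshes with a higher
sequence number than the last one seen are stored and re-flooded, so
every peer converges on a list of all members without loops.

```python
# Illustrative sketch of ESM's mesh-wide membership refresh: each
# peer floods <peer_id, seq> messages; receivers keep the highest
# sequence number seen per peer and propagate only fresh refreshes.
def apply_refresh(membership, peer_id, seq):
    """membership: dict peer_id -> last sequence number seen.
    Returns True if the refresh is new and should be forwarded to
    mesh neighbors, False if it is a duplicate or stale."""
    if membership.get(peer_id, -1) < seq:
        membership[peer_id] = seq
        return True
    return False
```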
The second version of the ESM architecture [ESM2] was conceived as an
operational large-scale single-source Internet broadcast system. As
regards the mesh construction phase, a node joins the system by
contacting the source and retrieving a random list of already
connected nodes. Information on active participating peers is
maintained through a gossip protocol: each peer periodically
advertises to a randomly selected neighbor a subset of the nodes it
knows and the last timestamp it has heard for each known node.
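The gossip exchange can be sketched as below; the representation (a
dictionary from node to last-heard timestamp, and a fresher-wins
merge) is our assumption about how such state is typically kept.

```python
# Hedged sketch of ESM's membership gossip: advertise a random subset
# of known nodes with their last-heard timestamps; the receiver keeps
# the fresher timestamp for each node.
import random

def gossip_payload(known, subset_size):
    """known: dict node -> last-heard timestamp. Pick a random subset
    to advertise to one randomly selected neighbor."""
    nodes = random.sample(list(known), min(subset_size, len(known)))
    return {n: known[n] for n in nodes}

def merge_gossip(known, payload):
    """Adopt any strictly fresher timestamps from the payload."""
    for node, ts in payload.items():
        if ts > known.get(node, -1):
            known[node] = ts
```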
The main difference with the first version is that the second version
constructs and maintains the data delivery tree in a completely
distributed manner according to the following criteria: i) each node
maintains a degree bound on the maximum number of children it can
accept, depending on its uplink bandwidth; ii) the tree is optimized
mainly for bandwidth and secondarily for delay. To this end, a
parent selection algorithm identifies, among the neighbors, the one
that guarantees the best performance in terms of throughput and
delay. The same algorithm is also applied either if a parent leaves
the system or if a node is experiencing poor performance (in terms of
both bandwidth and packet loss). As a loop prevention mechanism,
each node also keeps information about the hosts on the path between
the source and its parent node. It then constructs a (reverse)
shortest-path spanning tree of the mesh with the source as root.
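A parent selection step in the spirit of the criteria above can be
sketched as follows; the field names and the exact tie-breaking rule
(bandwidth first, then delay) are our illustrative assumptions.

```python
# Illustrative parent selection: honor each candidate's degree bound,
# reject candidates whose root path already contains us (loop
# prevention), then prefer higher bandwidth and, secondarily, lower
# delay, as in ESM's second version.
def select_parent(me, candidates):
    """candidates: list of dicts with 'id', 'children',
    'max_children', 'root_path' (hosts between the source and the
    candidate), 'bandwidth' and 'delay'."""
    eligible = [c for c in candidates
                if len(c['children']) < c['max_children']
                and me not in c['root_path']]
    if not eligible:
        return None
    best = min(eligible, key=lambda c: (-c['bandwidth'], c['delay']))
    return best['id']
```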
This second ESM prototype is also able to cope with receiver
heterogeneity and the presence of NATs/firewalls. In more detail,
the audio stream is kept separate from the video stream, and multiple
bit-rate video streams are encoded at the source and broadcast in
parallel through the overlay tree. Audio is always prioritized over
video streams, and lower quality video is always prioritized over
higher quality video. In this way, the system can dynamically select
the most suitable video stream according to receiver bandwidth and
network congestion level. Moreover, to take into account the
presence of hosts behind NATs/firewalls, the tree is structured in
such a way that public hosts use hosts behind NATs/firewalls as
parents.
3.3. Hybrid P2P streaming applications
This type of application aims at integrating the main advantages of
the mesh-based and tree-based approaches. To this end, the overlay
topology is a mixed mesh-tree, and the content delivery model is
push-pull.
3.3.1. New Coolstreaming
Coolstreaming, first released in summer 2004 with a mesh-based
structure, arguably represented the first successful large-scale P2P
live streaming system. Nevertheless, it suffered from poor delay
performance and from the high overhead associated with each video
block transmission. In an attempt to overcome these limitations, New
Coolstreaming [NEWCOOLStreaming] adopts a hybrid mesh-tree overlay
structure and a hybrid pull-push content delivery mechanism.
Figure 5 illustrates New Coolstreaming architecture.
------------------------------
| +---------+ |
| | Tracker | |
| +---------+ |
| | |
| | |
| +---------------------+ |
| | Content server | |
| +---------------------+ |
|------------------------------
/ \
/ \
/ \
/ \
+---------+ +---------+
| Peer1 | | Peer2 |
+---------+ +---------+
/ \ / \
/ \ / \
/ \ / \
+---------+ +---------+ +---------+ +---------+
| Peer2 | | Peer3 | | Peer1 | | Peer3 |
+---------+ +---------+ +---------+ +---------+
Figure 5: New Coolstreaming Architecture
The video stream is divided into equal-size blocks or chunks, which
are assigned a sequence number to implicitly define the playback
order in the stream. The video stream is subdivided into multiple
sub-streams without any coding, so that each node can retrieve any
sub-stream independently from different parent nodes. This
consequently reduces the impact on content delivery of a parent
departure or failure. The details of the hybrid push-pull content
delivery scheme are as follows:
a node first subscribes to a sub-stream by connecting to one of its
partners, i.e., the parent node, via a single (pull) request in the
buffer map. The node can subscribe to more sub-streams from its
partners in this way to obtain higher playback quality;
the selected parent node will then continue pushing all blocks of
the sub-stream to the requesting node.
This not only reduces the overhead associated with each video block
transfer, but more importantly it significantly reduces the delay in
retrieving video content.
Video content is processed for ease of delivery, retrieval, storage
and playout. To manage content delivery, a video stream is divided
into equal-size blocks, each of which is assigned a sequence number
to represent its playback order in the stream. Each block is further
divided into K sub-blocks, and the set of i-th sub-blocks of all
blocks constitutes the i-th sub-stream of the video stream, where
1 <= i <= K. To retrieve video content, a node receives at most K
distinct sub-streams from its parent nodes. To store retrieved
sub-streams, a node uses a double buffering scheme consisting of a
synchronization buffer and a cache buffer. The synchronization
buffer stores the received sub-blocks of each sub-stream according to
the associated block sequence number of the video stream. The cache
buffer then picks up the sub-blocks according to the associated
sub-stream index of each ordered block. To advertise the
availability of the latest block of the different sub-streams in its
buffer, a node uses a Buffer Map represented by two vectors of K
elements each. Each entry of the first vector indicates the block
sequence number of the latest received sub-block of the corresponding
sub-stream, and each bit of the second vector, if set, indicates that
the corresponding sub-stream is being requested.
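The sub-stream partition can be sketched as below; the
representation (byte strings as blocks, equal-size slicing) is our
illustration of the scheme, assuming the block length is a multiple
of K.

```python
# Sketch of New Coolstreaming's sub-stream partition: each block is
# split into K equal-size sub-blocks, and sub-stream i (1 <= i <= K)
# is the sequence of i-th sub-blocks across all blocks.
def split_block(block, k):
    """Divide one block into k equal-size sub-blocks."""
    size = len(block) // k
    return [block[j * size:(j + 1) * size] for j in range(k)]

def substreams(blocks, k):
    """Map sub-stream index i (1..k) to its ordered sub-blocks."""
    parts = [split_block(b, k) for b in blocks]
    return {i: [p[i - 1] for p in parts] for i in range(1, k + 1)}
```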
For data delivery, a node uses a hybrid push and pull scheme with
randomly selected partners. A node that has requested one or more
distinct sub-streams from a partner, as indicated in its Buffer Map,
will continue to receive the sub-streams of all subsequent blocks
from the same partner until future conditions cause the partner to do
otherwise. Moreover, users retrieve video indirectly from the source
through a number of strategically located servers.
To keep the parent-child relationship above a certain level of
quality, each node constantly monitors the status of the on-going
sub-stream reception and re-selects parents according to sub-stream
availability patterns. Specifically, if a node observes that the
block sequence number of a parent's sub-stream is smaller than that
of any of its other partners by more than a predetermined amount, the
node concludes that the parent is lagging sufficiently behind and
needs to be replaced. Furthermore, a node also evaluates the maximum
and minimum of the block sequence numbers in its synchronization
buffer to determine whether any parent is lagging behind the rest of
its parents and thus also needs to be replaced.
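The lag-based re-selection trigger can be sketched as follows; the
threshold and the per-parent bookkeeping are our placeholders for the
"predetermined amount" mentioned above.

```python
# Hedged sketch of New Coolstreaming's parent re-selection trigger:
# a parent is flagged for replacement when the latest block it has
# pushed lags the best partner by more than a threshold.
def lagging_parents(latest_per_parent, threshold):
    """latest_per_parent: dict parent -> latest block sequence number
    pushed by that parent. Returns the parents to replace."""
    best = max(latest_per_parent.values())
    return [p for p, seq in latest_per_parent.items()
            if best - seq > threshold]
```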
4. Security Considerations
This document does not raise security issues.
5. Author List
The authors of this document are listed as below.
Hui Zhang, NEC Labs America.
Jun Lei, University of Goettingen.
Gonzalo Camarillo, Ericsson.
Yong Liu, Polytechnic University.
Delfin Montuno, Huawei.
Lei Xie, Huawei.
6. Acknowledgments
We would like to acknowledge Jiang Xingfeng for providing good ideas
for this document.
7. Informative References
[Octoshape] Alstrup, Stephen, et al., "Introducing Octoshape-a new
technology for large-scale streaming over the Internet".
[CNN] CNN web site, www.cnn.com
[PPLive] PPLive web site, www.pplive.com
[P2PIPTVMEA] Silverston, Thomas, et al., "Measuring P2P IPTV
Systems", June 2007.
[Zattoo] Zattoo web site, http://zattoo.com/
[PPStream] PPStream web site, www.ppstream.com
[SopCast] SopCast web site, http://www.sopcast.com/
[tribler] Tribler Protocol Specification, January 2009, on line
available at http://svn.tribler.org/bt2-design/proto-spec-unified/
trunk/proto-spec-current.pdf
[QQLive] QQLive web site, http://v.qq.com
[QQLivePaper] Liju Feng, et al., "Research on active monitoring based
QQLive real-time information Acquisition System", 2009.
[conviva] Conviva web site, http://www.conviva.com
[ESM1] Chu, Yang-hua, et al., "A Case for End System Multicast", June
2000. (http://esm.cs.cmu.edu/technology/papers/
Sigmetrics.CaseForESM.2000.pdf)
[ESM2] Chu, Yang-hua, et al., "Early Experience with an Internet
Broadcast System Based on Overlay Multicast", June 2004. (http://
static.usenix.org/events/usenix04/tech/general/full_papers/chu/
chu.pdf)
[NEWCOOLStreaming] Li, Bo, et al., "Inside the New Coolstreaming:
Principles, Measurements and Performance Implications", April 2008.
Authors' Addresses
Gu Yingjie
Huawei
No.101 Software Avenue
Nanjing 210012
P.R.China
Phone: +86-25-56624760
Fax: +86-25-56624702
Email: guyingjie@huawei.com
Zong Ning (editor)
Huawei
No.101 Software Avenue
Nanjing 210012
P.R.China
Phone: +86-25-56624760
Fax: +86-25-56624702
Email: zongning@huawei.com
Zhang Yunfei
China Mobile
Email: zhangyunfei@chinamobile.com
Francesca Lo Piccolo
Cisco
Email: flopicco@cisco.com
Duan Shihui
CATR
No.52 HuaYuan BeiLu
Beijing 100191
P.R.China
Phone: +86-10-62300068
Email: duanshihui@catr.cn