PPSP Y. Gu, Ed.
Internet-Draft N. Zong, Ed.
Intended status: Standards Track Huawei
Expires: April 19, 2013 Yunfei. Zhang
China Mobile
October 16, 2012
Survey of P2P Streaming Applications
draft-ietf-ppsp-survey-03
Abstract
This document presents a survey of popular Peer-to-Peer streaming
applications on the Internet. We focus on the Architecture and Peer
Protocol/Tracker Signaling Protocol description in the presentation,
and study a selection of well-known P2P streaming systems, including
Joost, PPLive, and other popular existing systems. Through the
survey, we summarize a common P2P streaming process model and the
corresponding signaling process for P2P Streaming Protocol
standardization.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on April 19, 2013.
Copyright Notice
Copyright (c) 2012 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
Gu, et al. Expires April 19, 2013 [Page 1]
Internet-Draft Survey of P2P Streaming Applications October 2012
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3
2. Terminologies and concepts . . . . . . . . . . . . . . . . . . 3
3. Survey of P2P streaming system . . . . . . . . . . . . . . . . 4
3.1. Mesh-based P2P streaming systems . . . . . . . . . . . . . 4
3.1.1. Joost . . . . . . . . . . . . . . . . . . . . . . . . 5
3.1.2. Octoshape . . . . . . . . . . . . . . . . . . . . . . 8
3.1.3. PPLive . . . . . . . . . . . . . . . . . . . . . . . . 10
3.1.4. Zattoo . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1.5. PPStream . . . . . . . . . . . . . . . . . . . . . . . 14
3.1.6. SopCast . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.7. TVants . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2. Tree-based P2P streaming systems . . . . . . . . . . . . . 16
3.2.1. PeerCast . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.2. Conviva . . . . . . . . . . . . . . . . . . . . . . . 19
3.3. Hybrid P2P streaming system . . . . . . . . . . . . . . . 21
3.3.1. New Coolstreaming . . . . . . . . . . . . . . . . . . 21
4. A common P2P Streaming Process Model . . . . . . . . . . . . . 23
5. Security Considerations . . . . . . . . . . . . . . . . . . . 24
6. Author List . . . . . . . . . . . . . . . . . . . . . . . . . 24
7. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 25
8. Informative References . . . . . . . . . . . . . . . . . . . . 25
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 26
1. Introduction
Toward standardizing the signaling protocols used in today's Peer-to-
Peer (P2P) streaming applications, we surveyed several popular P2P
streaming systems regarding their architectures and signaling
protocols between peers, as well as between peers and trackers. The
studied P2P streaming systems run worldwide or domestically.
This document does not intend to cover all design options of P2P
streaming applications. Instead, we choose a representative set of
applications and focus on the respective signaling characteristics of
each kind. Through the survey, we generalize a common streaming
process model from those P2P streaming systems, and summarize the
companion signaling process as the base for P2P Streaming Protocol
(PPSP) standardization.
2. Terminologies and concepts
Chunk: A chunk is a basic unit of partitioned streaming media, which
is used by a peer for the purpose of storage, advertisement and
exchange among peers [P2PVOD].
Content Distribution Network (CDN) node: A CDN node refers to a
network entity that usually is deployed at the network edge to store
content provided by the original servers, and serves content to the
clients located nearby topologically.
Live streaming: The scenario where all clients receive streaming
content for the same ongoing event. The lags between the play points
of the clients and that of the streaming source are small.
P2P cache: A P2P cache refers to a network entity that caches P2P
traffic in the network, and either transparently or explicitly
distributes content to other peers.
P2P streaming protocols: P2P streaming protocols refer to multiple
protocols such as streaming control, resource discovery, streaming
data transport, etc. which are needed to build a P2P streaming
system.
Peer/PPSP peer: A peer/PPSP peer refers to a participant in a P2P
streaming system. The participant not only receives streaming
content, but also stores and uploads streaming content to other
participants.
PPSP protocols: PPSP protocols refer to the key signaling protocols
among various P2P streaming system components, including the tracker
and peers.
Swarm: A swarm refers to a group of clients (i.e. peers) sharing the
same content (e.g. video/audio program, digital file, etc.) at a given
time.
Tracker/PPSP tracker: A tracker/PPSP tracker refers to a directory
service which maintains the lists of peers/PPSP peers storing chunks
for a specific channel or streaming file, and answers queries from
peers/PPSP peers.
Video-on-demand (VoD): A kind of application that allows users to
select and watch video content on demand.
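The relationships among these terms can be sketched as a minimal data
model. This is a hypothetical illustration; the class and field names
are ours, not taken from any PPSP specification.

```python
from dataclasses import dataclass, field

@dataclass
class Peer:
    peer_id: str
    chunks: set = field(default_factory=set)   # indices of stored chunks

@dataclass
class Swarm:
    content_id: str                            # channel or streaming file
    peers: dict = field(default_factory=dict)  # peer_id -> Peer

class Tracker:
    """Directory service: maps content to the peers that store its chunks
    and answers queries from peers."""
    def __init__(self):
        self.swarms = {}

    def register(self, content_id, peer):
        # A peer joining a swarm registers with the tracker.
        swarm = self.swarms.setdefault(content_id, Swarm(content_id))
        swarm.peers[peer.peer_id] = peer

    def query(self, content_id):
        # A peer asks the tracker which peers share the content.
        swarm = self.swarms.get(content_id)
        return list(swarm.peers) if swarm else []
```

In this model, a channel switch is simply a `query` for a different
content ID followed by peer-to-peer chunk exchange.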
3. Survey of P2P streaming system
In this section, we summarize some existing P2P streaming systems.
The construction techniques used in these systems can be largely
classified into two categories: tree-based and mesh-based structures.
Tree-based structure: Group members self-organize into a tree
structure, over which group management and data delivery are
performed. Such a structure, with push-based content delivery, has
small maintenance cost, good scalability, and low delay in retrieving
the content (associated with startup delay), and can be easily
implemented. However, it may result in low bandwidth usage and less
reliability.
Mesh-based structure: In contrast to a tree-based structure, a mesh
uses multiple links between any two nodes. Thus, the reliability of
data transmission is relatively high, and the multiple links result
in high bandwidth usage. Nevertheless, the cost of maintaining such
a mesh is much larger than that of a tree, and pull-based content
delivery leads to high overhead associated with each video block
transmission, in particular the delay in retrieving the content.
Hybrid structure: A hybrid structure combines tree-based and mesh-
based structures, and combines pull-based and push-based content
delivery, to exploit the advantages of both. It offers reliability
as high as a mesh-based structure, lower delay than a mesh-based
structure, and lower overhead associated with each video block
transmission, but topology maintenance cost as high as a mesh-based
structure.
3.1. Mesh-based P2P streaming systems
Mesh-based systems implement a mesh distribution graph, where each
node contacts a subset of peers to obtain a number of chunks. Every
node needs to know which chunks are owned by its peers and explicitly
"pulls" the chunks it needs. This type of scheme involves overhead,
due in part to the exchange of buffer maps between nodes (i.e. nodes
advertise the set of chunks they own) and in part to the "pull"
process (i.e. each node sends a request in order to receive the
chunks). Since each node relies on multiple peers to retrieve
content, mesh based systems offer good resilience to node failures.
On the negative side, they require large buffers to support the chunk
pull (clearly, large buffers are needed to increase the chances of
finding a chunk).
In a mesh-based P2P streaming system, peers are not confined to a
static topology. Instead, the peering relationships are established/
terminated based on the content availability and bandwidth
availability on peers. A peer dynamically connects to a subset of
random peers in the system. Peers periodically exchange information
about their data availability. The content is pulled by a peer from
its neighbors who have already obtained the content. Since multiple
neighbors are maintained at any given moment, mesh-based streaming
systems are highly robust to peer churns. However, the dynamic
peering relationships make the content distribution efficiency
unpredictable. Different data packets may traverse different routes
to users. Consequently, users may suffer from content playback
quality degradation ranging from low bit rates, long startup delays,
to frequent playback freezes.
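The buffer-map exchange and pull process described above can be
sketched as a toy model. This is an illustration of the general
mechanism, not the protocol of any surveyed system; the actual
transfer is reduced to adding a chunk index.

```python
import random

class MeshPeer:
    """Illustrative mesh peer: advertises a buffer map and pulls
    missing chunks from a dynamically chosen set of neighbors."""
    def __init__(self, peer_id, chunks=None):
        self.peer_id = peer_id
        self.chunks = set(chunks or [])  # chunk indices already obtained
        self.neighbors = []              # subset of peers in the system

    def buffer_map(self):
        # Periodic advertisement of the chunks this peer owns.
        return set(self.chunks)

    def pull_missing(self, window):
        """For each missing chunk in the playback window, send a request
        to a random neighbor that advertises it (the 'pull' step)."""
        for idx in window:
            if idx in self.chunks:
                continue
            owners = [n for n in self.neighbors if idx in n.buffer_map()]
            if owners:
                random.choice(owners)    # naive neighbor selection
                self.chunks.add(idx)     # stands in for the transfer

# A three-peer mesh: p1 pulls chunks 0-4 from whichever neighbor has them.
p1, p2, p3 = MeshPeer("p1"), MeshPeer("p2", [0, 1, 2]), MeshPeer("p3", [3, 4])
p1.neighbors = [p2, p3]
p1.pull_missing(range(5))
```

Because p1 relies on two providers, the loss of either neighbor leaves
part of the window still retrievable, which is the resilience property
noted above.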
3.1.1. Joost
Joost announced that it was giving up P2P technology in its desktop
version last year, though it introduced a Flash version for browsers
and an iPhone application. The key reason why Joost shut down its
desktop version is probably the licensing issues around the provided
media content. However, as one of the most popular P2P VoD
applications in the past years, it is worthwhile to understand how
Joost works. Peer management and data transmission in Joost mainly
rely on a mesh-based structure.
The three key components of Joost are servers, super nodes and peers.
There are five types of servers: Tracker server, Version server,
Backend server, Content server and Graphics server. Super nodes
manage the P2P control of Joost nodes, and Joost nodes are all the
running clients in the Joost network. The architecture of the Joost
system is shown in Figure 1.
First, we introduce the functionalities of Joost's key components
through three basic phases. Then we will discuss the Peer protocol
and Tracker protocol of Joost.
Installation: The Backend server is involved in the installation
phase. It provides the peer with an initial channel list in a SQLite
file. No other parameters, such as local cache, node ID, or
listening port, are configured in this file.
Bootstrapping: For a newcomer, the Tracker server provides several
super node addresses and possibly some content server addresses.
Then the peer contacts the Version server for the latest software
version. Later, the peer connects to some super nodes to obtain the
list of other available peers and begins streaming video content.
Super nodes in Joost only deal with control and peer management
traffic. They do not relay/forward any media data.
When Joost is first launched, a login mechanism is initiated using
HTTPS and TLSv1. After a TCP handshake, the client authenticates
with a certificate to the login server. Once the login process is
done, the client first contacts a super node, whose address is hard-
coded in the Joost binary, to get a list of peers and a Joost Seeder
to contact. Of course, this depends on the channel chosen by the
user. Once launched, the Joost client also checks whether a more
recent version is available by sending an HTTP request.
Once authenticated to the video service, a Joost node uses the same
authentication mechanism (TCP handshake, certificate validation and
shared key verification) to log in to the backend server. This
server validates the access to all HTTPS services such as channel
chat, channel list and video content search.
Joost uses TCP port 80 for HTTP, port 443 for HTTPS transfers, and
UDP port 4166 for video packet exchange, mainly from long-tail
servers; each Joost peer chooses its own UDP port to exchange data
with other peers.
Channel switching: Super nodes are responsible for redirecting
clients to content servers or peers.
Peers communicate with servers over HTTP/HTTPS and with super nodes/
other peers over UDP.
Tracker Protocol: Because super nodes here are responsible for
providing the peer list/content servers to peers, the protocol used
between the Tracker server and peers is rather simple. Peers get the
addresses of super nodes and content servers from the Tracker server
over HTTP. After that, the Tracker server does not appear in any
stage, e.g. channel switching or VoD interaction. In fact, the
protocol spoken between peers and super nodes is more like what we
normally call a "Tracker Protocol": it enables super nodes to check
peer status and maintain peer lists for several, if not all,
channels, and it provides the peer list/content servers to peers.
Thus, in the rest of this section, when we mention the Tracker
Protocol, we mean the one used between peers and super nodes.
Joost uses super nodes only to control the traffic but never as
relays for video content. The main streams are sent from the Joost
Seeders, and all the traffic is encrypted to secure the shared video
content from piracy. Joost peers cache the received content to re-
stream it when needed by other peers and to recover from missed video
blocks.
Although Joost is a peer-to-peer video distribution technology, it
relies heavily on a few centralized servers to provide the licensed
video content and uses the peer-to-peer overlay to service content at
a faster rate. The centralized nature of Joost is the main factor
that influences its lack of locality awareness and low fairness
ratio. Since Joost directly provides at least two thirds of the
video content to its clients, only one third has to be supplied by
independent nodes. This approach does not scale well, and is
sustainable today only because of the relatively low user population.
From a network usage perspective, Joost consumes approximately 700
kbps downstream and 120 kbps upstream, regardless of the total
capacity of the network. This assumes the network upstream capacity
is larger than 1 Mbps.
There may be some type of RTT-savvy selection algorithm at work,
which gives priority to peers with RTT less than or equal to the RTT
of a Joost content providing super node.
Peers communicate with super nodes using the Tracker Protocol in the
following scenarios:
1. When a peer starts Joost software, after the installation and
bootstrapping, the peer will communicate with one or several super
nodes to get a list of available peers/content servers.
2. For on-demand video functions, peers and super nodes periodically
exchange small UDP packets for peer management purposes.
3. When switching between channels, peers contact super nodes and
the latter help the peers find available peers to fetch the requested
media data.
Peer Protocol: The following observations are mainly drawn from
[JOOSTEXP], in which data-driven reverse-engineering experiments are
performed. We omit the analysis process and directly show the
conclusions. Media data in Joost is split into chunks and then
encrypted. Each chunk is packetized with about 5-10 seconds of video
data. After receiving a peer list from super nodes, a peer
negotiates with some or, if necessary, all of the peers in the list
to find out which chunks they have. Then the peer decides from which
peers to get the chunks. No peer capability information is exchanged
in the Peer Protocol.
+---------------+ +-------------------+
| Version Server| | Tracker Server |
+---------------+ +-------------------+
\ |
\ |
\ | +---------------+
\ | |Graphics Server|
\ | +---------------+
\ | |
+--------------+ +-------------+ +--------------+
|Content Server|--------| Peer1 |--------|Backend Server|
+--------------+ +-------------+ +--------------+
|
|
|
|
+------------+ +---------+
| Super Node |-------| Peer2 |
+------------+ +---------+
Figure 1, Architecture of Joost system
Joost provides large buffering and thus causes longer start-up delay
for VoD traffic than for live media streaming traffic. It affords
more FEC for VoD traffic but gives higher priority in delivery to
live media streaming traffic.
To enhance the user viewing experience, Joost provides chat
capability between viewers and program rating mechanisms.
3.1.2. Octoshape
CNN [CNN] has been working with a P2P plug-in from the Denmark-based
company Octoshape to broadcast its live streaming. Octoshape has
helped CNN serve a peak of more than a million simultaneous viewers.
It has also provided several innovative delivery technologies such as
loss resilient transport, adaptive bit rate, adaptive path
optimization and adaptive proximity delivery. Figure 2 depicts the
architecture of the Octoshape system.
Octoshape maintains a mesh overlay topology. Its overlay topology
maintenance scheme is similar to that of P2P file-sharing
applications, such as BitTorrent. There is no Tracker server in
Octoshape, thus no Tracker Protocol is required. Peers obtain live
streaming from content servers and peers over Octoshape Protocol.
Several data streams are constructed from the live stream. No two
data streams are identical, and any number K of data streams can
reconstruct the original live stream. The number K is based on the
original media playback rate and the playback rate of each data
stream. For example, a 400 Kbit/s media stream is split into four
100 Kbit/s data streams, and then K = 4. Data streams are
constructed in peers, instead of the Broadcast server, which relieves
the server of a large burden.
The number of data streams constructed in a particular peer equals
the number of peers downloading data from the particular peer, which
is constrained by the upload capacity of the particular peer. To get
the best performance, the upload capacity of a peer should be larger
than the playback rate of the live stream. If not, an artificial
peer may be added to deliver extra bandwidth.
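The relationship between playback rate, sub-stream rate, and K in the
example above, and the upload-capacity condition for adding an
artificial peer, can be captured in a small sketch (the function
names are ours):

```python
def substream_count(playback_kbps, substream_kbps):
    """Number K of distinct data streams needed to reconstruct the
    original stream, given its playback rate and the rate of each
    sub-stream."""
    k, rem = divmod(playback_kbps, substream_kbps)
    return k + (1 if rem else 0)

def needs_artificial_peer(upload_kbps, playback_kbps):
    """Octoshape wants a peer's upload capacity to exceed the playback
    rate; otherwise an artificial peer may be added to deliver extra
    bandwidth."""
    return upload_kbps <= playback_kbps

# The example from the text: a 400 Kbit/s stream in 100 Kbit/s pieces.
k = substream_count(400, 100)   # K = 4
```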
Each peer has an address book of the other peers that are watching
the same channel. A standby list is set up based on the address
book. The peer periodically probes the peers in the standby list to
make sure that they are ready to take over if one of the current
senders stops or gets congested. [Octoshape]
Peer Protocol: The live stream is firstly sent to a few peers in the
network and then spread to the rest of the network. When a peer
joins a channel, it notifies all the other peers about its presence
using Peer Protocol, which will drive the others to add it into their
address books. Although [Octoshape] states that each peer records
all the peers joining the channel, we suspect that not all the peers
are recorded, considering that the notification traffic would be
large and peers would be busy with recording when a popular program
starts in a channel and lots of peers switch to it. Perhaps only
some geographic or topological neighbors are notified, and the peer
gets its address book from these nearby neighbors.
The peer sends requests to some selected peers for the live stream,
and the receivers answer yes or no according to their upload
capacity. The peer continues sending requests to peers until it
finds enough peers to provide the needed data streams to reconstruct
the original live stream.
+------------+ +--------+
| Peer 1 |---| Peer 2 |
+------------+ +--------+
| \ / |
| \ / |
| \ |
| / \ |
| / \ |
| / \ |
+--------------+ +-------------+
| Peer 4 |----| Peer3 |
+--------------+ +-------------+
*****************************************
|
|
+---------------+
| Content Server|
+---------------+
Figure 2, Architecture of Octoshape system
To spread the burden of data distribution across several peers and
thus limiting the impact of peer loss, Octoshape splits a live stream
into a number of smaller equal-sized sub-streams. For example, a
400kbit/s live stream is split and coded into 12 distinct 100kbit/s
sub-streams. Only a subset of these sub-streams needs to reach a
user for it to reconstruct the "original" live stream. The number of
distinct sub-streams could be as many as the number of active peers.
To optimize bandwidth utilization, Octoshape leverages computers
within a network to minimize external bandwidth usage and to select
the most reliable and "closest" source to each viewer. It also
chooses the best matching available codecs and players, and scales
bit rate up and down according to the available Internet connection.
3.1.3. PPLive
PPLive [PPLive] is one of the most popular P2P streaming applications
in China. The PPLive system includes six parts.
(1) Video streaming server: provides the source of video content and
codes the content to adapt to the network transmission rate and
client playback.
(2) Peer: also called node or client. The nodes logically compose a
self-organizing network, and each node can join or withdraw at any
time. While the client downloads content, it also provides
its own content to other clients at the same time.
(3) Directory server: when the user starts up the PPLive client, the
client automatically registers the user information with this
server; when the client exits, it cancels its registration.
(4) Tracker server: this server records the information of all the
users viewing the same content. When a client requests some content,
this server checks whether other peers own the content and, if so,
sends the information of these peers to the client; if not, it tells
the client to request the content from the video streaming server.
(5) Web server: provides PPLive software updates and downloads.
(6) Channel list server: this server stores the information of all
the programs that users can watch, including VoD programs and
broadcast programs, such as the program name, file size and
attributes.
PPLive has two major communication protocols. One is Registration
and peer discovery protocol, i.e. Tracker Protocol, and the other is
P2P chunk distribution protocol, i.e. Peer Protocol. Figure 3 shows
the architecture of PPLive.
Tracker Protocol: First, a peer gets the channel list from the
Channel server, in a way similar to that of Joost. Then the peer
chooses a channel and asks the Tracker server for the peerlist of
this channel.
Peer Protocol: The peer contacts the peers in its peerlist to get
additional peerlists, which are aggregated with its existing list.
Through this list, peers can maintain a mesh for peer management and
data delivery.
For the video-on-demand (VoD) operation, because different peers
watch different parts of the channel, a peer buffers up to a few
minutes worth of chunks within a sliding window to share with each
other. Some of these chunks may be chunks that have been recently
played; the remaining chunks are chunks scheduled to be played in the
next few minutes. Peers upload chunks to each other. To this end,
peers send to each other "buffer-map" messages; a buffer-map message
indicates which chunks a peer currently has buffered and can share.
The buffer-map message includes the offset (the ID of the first
chunk), the length of the buffer map, and a string of zeroes and ones
indicating which chunks are available (starting with the chunk
designated by the offset). PPLive transfers data over UDP.
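The buffer-map message described above (the offset, the map length,
and a 0/1 availability string) can be sketched as follows. The
dictionary encoding is our illustration, not PPLive's wire format.

```python
def encode_buffer_map(offset, available, length):
    """Build a buffer-map message: the ID of the first chunk (offset),
    the length of the map, and a string of zeroes and ones marking
    which chunks are available, starting at the offset."""
    bits = "".join("1" if offset + i in available else "0"
                   for i in range(length))
    return {"offset": offset, "length": length, "bitmap": bits}

def decode_buffer_map(msg):
    """Recover the set of shareable chunk IDs from a buffer-map
    message received from a neighbor."""
    return {msg["offset"] + i
            for i, bit in enumerate(msg["bitmap"]) if bit == "1"}

# A peer holding chunks 100, 102 and 103 advertises a 5-chunk window.
msg = encode_buffer_map(100, {100, 102, 103}, 5)
```

A receiving peer decodes the message and requests whichever advertised
chunks fall into its own missing window.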
Video Download Policy of PPLive:
1) Top ten peers contribute to a major part of the download
traffic. Meanwhile, the top peer session is quite short compared
with the video session duration. This would suggest that PPLive
gets video from only a few peers at any given time, and switches
periodically from one peer to another;
2) PPLive can send multiple chunk requests for different chunks to
one peer at one time;
3) PPLive is observed to have a download scheduling policy that
gives higher priority to rare chunks and to chunks closer to the
play-out deadline, and to use a sliding window mechanism to
regulate the buffering of chunks.
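A toy scheduler combining these observed priorities (rare chunks,
chunks near their play-out deadline, a sliding window) might look
like the following; the weighting is illustrative, not measured from
PPLive:

```python
def schedule_requests(missing, rarity, deadline, now, window):
    """Order missing chunks for request: chunks nearer their play-out
    deadline and rarer among neighbors come first; only chunks whose
    deadline falls inside the sliding window are considered."""
    candidates = [c for c in missing if now <= deadline[c] <= now + window]
    # Smaller key = more urgent; rarity is weighted double here purely
    # for illustration.
    return sorted(candidates,
                  key=lambda c: (deadline[c] - now) + 2 * rarity[c])

# Three missing chunks: chunk 7 is urgent, chunk 9 is rare.
missing  = [7, 8, 9]
deadline = {7: 12, 8: 20, 9: 18}   # play-out times
rarity   = {7: 3, 8: 3, 9: 1}      # copies among neighbors (lower = rarer)
order = schedule_requests(missing, rarity, deadline, now=10, window=15)
```

With these inputs the urgent chunk is requested first, then the rare
one, then the rest.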
PPLive maintains a constant peer list with a relatively small number
of peers. [P2PIPTVMEA]
+------------+ +--------+
| Peer 2 |----| Peer 3 |
+------------+ +--------+
| |
| |
+--------------+
| Peer 1 |
+--------------+
|
|
|
+---------------+
| Tracker Server|
+---------------+
Figure 3, Architecture of PPlive system
3.1.4. Zattoo
Zattoo is a P2P live streaming system which serves over 3 million
registered users across European countries [Zattoo]. The system
delivers live streaming using a receiver-based, peer-division
multiplexing scheme. Zattoo reliably streams media among peers using
a mesh structure.
Figure 4 depicts a typical procedure of a single TV channel carried
over the Zattoo network. First, the Zattoo system broadcasts live
TV, captured from satellites, onto the Internet. Each TV channel is
delivered through a separate P2P network.
-------------------------------
| ------------------ | --------
| | Broadcast | |---------|Peer1 |-----------
| | Servers | | -------- |
| Administrative Servers | -------------
| ------------------------ | | Super Node|
| | Authentication Server | | -------------
| | Rendezvous Server | | |
| | Feedback Server | | -------- |
| | Other Servers | |---------|Peer2 |----------|
| ------------------------| | --------
------------------------------|
Figure 4, Basic architecture of Zattoo system
Tracker (Rendezvous Server) Protocol: In order to receive the signal
of the requested channel, registered users are required to be
authenticated through the Zattoo Authentication Server. Upon
authentication, users obtain a ticket with a specific lifetime.
Then, users contact the Rendezvous Server with the ticket and the
identity of the TV channel of interest. In return, the Rendezvous
Server sends back a list of joined peers carrying the channel.
Peer Protocol: Similar to the aforementioned procedures in Joost and
PPLive, a new Zattoo peer requests to join an existing peer from the
peer list. Depending on the availability of bandwidth, the requested
peer decides how to multiplex a stream onto its set of neighboring
peers. When packets arrive at the peer, sub-streams are stored for
reassembly to reconstruct the full stream.
Note that Zattoo relies on a Bandwidth Estimation Server to initially
estimate the amount of available uplink bandwidth at a peer. Once a
peer starts to forward sub-streams to other peers, it receives QoS
feedback from the receivers if the quality of a sub-stream drops
below a threshold.
For reliable data delivery, each live stream is partitioned into
video segments. Each video segment is coded for forward error
correction with a Reed-Solomon error correcting code into n sub-
stream packets, such that obtaining any k correct packets of a
segment is sufficient to reconstruct the remaining n-k packets of the
same video segment. To receive a video segment, each peer then
specifies the sub-stream(s) of the video segment it would like to
receive from the neighboring peers.
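As a stand-in for the Reed-Solomon coding above, the k-of-n idea can
be illustrated with the simplest possible erasure code: a single XOR
parity packet (n = k + 1), which tolerates the loss of any one packet
per segment. Zattoo's actual code operates over larger symbol fields
and tolerates more losses; this sketch only shows the principle.

```python
from functools import reduce

def xor_bytes(a, b):
    # Byte-wise XOR of two equal-length packets.
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    """Append one XOR parity packet: a toy (k+1, k) erasure code in
    which any k of the n packets suffice to rebuild the rest."""
    return packets + [reduce(xor_bytes, packets)]

def recover_missing(received):
    """With exactly one packet lost, the XOR of the k packets that did
    arrive (including the parity packet) reproduces the missing one."""
    return reduce(xor_bytes, received)

segment = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]  # k = 3 source packets
coded = add_parity(segment)                        # n = k + 1 = 4 packets
received = coded[:1] + coded[2:]                   # packet 1 was lost
rebuilt = recover_missing(received)                # equals segment[1]
```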
Zattoo uses an Adaptive Peer-Division Multiplexing (PDM) scheme to
handle longer-term bandwidth fluctuations. In this scheme, each peer
determines how many sub-streams to transmit and when to switch
partners. Specifically, each peer continually estimates the amount
of available uplink bandwidth based initially on probe packets to the
Zattoo Bandwidth Estimation Server and later, based on peer QoS
feedbacks, using different algorithms depending on the underlying
transport protocol. A peer increases its estimated available uplink
bandwidth, if the current estimate is below some threshold and if
there has been no bad quality feedback from neighboring peers for a
period of time, according to some algorithm similar to how TCP
maintains its congestion window size. Each peer then admits
neighbors based on the currently estimated available uplink
bandwidth. In case a new estimate indicates insufficient bandwidth
to support the existing number of peer connections, one connection at
a time, preferably starting with the one requiring the least
bandwidth, is closed. On the other hand, if the loss rate of packets
from a peer's neighbor reaches a certain threshold, the peer will
attempt to shift the degraded neighboring peer's load to other
existing peers, while looking for a replacement peer. When one is
found, the load is shifted to it and the degraded neighbor is
dropped. As expected, if a peer's neighbor is lost due to departure,
the peer initiates the process to replace the lost peer.
configuration, a peer may occasionally initiate switching existing
partnering peers to topologically closer peers.
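The adaptive estimation and admission logic described above can be
sketched as follows. The thresholds and multipliers are illustrative
choices of ours; Zattoo's actual algorithms are not public.

```python
def update_uplink_estimate(estimate, threshold, bad_feedback,
                           quiet_period, min_quiet=5,
                           growth=1.1, backoff=0.8):
    """TCP-congestion-window-style update of a peer's estimated
    available uplink bandwidth: grow the estimate while it is below a
    threshold and neighbors have reported no bad quality for a while;
    back off when bad feedback arrives."""
    if bad_feedback:
        return estimate * backoff
    if estimate < threshold and quiet_period >= min_quiet:
        return estimate * growth
    return estimate

def admit_neighbors(estimate, demands):
    """Admit neighbor connections while the estimated uplink bandwidth
    lasts. Trying least-demanding connections first mirrors Zattoo's
    preference for closing the least-bandwidth connection when the
    estimate shrinks."""
    admitted, used = [], 0.0
    for peer_id, demand in sorted(demands.items(), key=lambda kv: kv[1]):
        if used + demand <= estimate:
            admitted.append(peer_id)
            used += demand
    return admitted

# Multiplicative increase while quality feedback stays clean.
est = update_uplink_estimate(100.0, threshold=200.0,
                             bad_feedback=False, quiet_period=10)
```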
3.1.5. PPStream
The system architecture and working flows of PPStream are similar to
those of PPLive [PPStream]. PPStream transfers data mostly using
TCP, and only occasionally UDP.
Video Download Policy of PPStream:
1) Top ten peers do not contribute to a large part of the download
traffic. This would suggest that PPStream gets the video from
many peers simultaneously, and its peers have long session
duration;
2) PPStream does not send multiple chunk requests for different
chunks to one peer at one time;
PPStream maintains a constant peer list with relatively large number
of peers. [P2PIPTVMEA]
To ensure data availability, PPStream uses some form of chunk
retransmission request mechanism and shares buffer map at high rate,
although it rarely requests concurrently for the same data chunk.
Each data chunk, identified by the play time offset encoded by the
program source, is divided into 128 sub-chunks of 8 KB each. The
chunk ID is used to ensure sequential ordering of received data
chunks.
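The chunk geometry above implies that each data chunk is 128 x 8 KB =
1 MB; mapping a stream byte offset to a (chunk ID, sub-chunk index)
pair is then simple arithmetic:

```python
SUB_CHUNK_SIZE = 8 * 1024       # 8 KB per sub-chunk
SUB_CHUNKS_PER_CHUNK = 128      # so each data chunk is 1 MB
CHUNK_SIZE = SUB_CHUNK_SIZE * SUB_CHUNKS_PER_CHUNK

def locate(byte_offset):
    """Map a byte offset in the stream to (chunk ID, sub-chunk index);
    the chunk ID preserves sequential ordering of received chunks."""
    chunk_id, within = divmod(byte_offset, CHUNK_SIZE)
    return chunk_id, within // SUB_CHUNK_SIZE

# Byte 1,048,576 is exactly the start of chunk 1, sub-chunk 0.
pos = locate(1048576)
```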
The buffer map consists of one or more 128-bit flags denoting the
availability of sub-chunks and having a corresponding time offset.
Usually a buffer map contains only one data chunk at a time and is
thus smaller than that of PPLive. It also contains the sending
peer's playback status, because as soon as a data chunk is played
back, the chunk is deleted or replaced by the next data chunk.
At the initiating stage, a peer can use up to 4 data chunks, and in a
stabilized stage, a peer usually uses one data chunk. However, in
the transient stage, a peer uses a variable number of chunks.
Although sub-chunks within each data chunk are fetched nearly at
random, without using a rarest-first or greedy policy, the same
fetching pattern for one data chunk seems to repeat in the following
data chunks. Moreover, high-bandwidth PPStream peers tend to receive
chunks earlier and thus to contribute more than lower-bandwidth
peers.
3.1.6. SopCast
The system architecture and working flows of SopCast are similar to
those of PPLive. SopCast transfers data mainly using UDP, and
occasionally TCP. The top ten peers contribute about half of the
total download traffic. SopCast's download policy is similar to
PPLive's in that it switches periodically between provider peers.
However, SopCast seems to always need more than one peer to get the
video, while in PPLive a single peer could be the only video
provider. SopCast's peer list can be as large as PPStream's peer
list, but SopCast's peer list varies over time. [P2PIPTVMEA]
SopCast allows for software update through (HTTP) a centralized web
server and makes available channel list through (HTTP) another
centralized server.
SopCast traffic is encoded and SopCast TV content is divided into
video chunks or blocks with equal sizes of 10KB. Sixty percent of
its traffic is signaling packets and 40% is actual video data
packets. SopCast produces more signaling traffic compared to PPLive,
PPStream, and TVAnts, whereas PPLive produces the least. Its traffic
is also noted to have long-range dependency, indicating that
mitigating it with QoS mechanisms may be difficult. It is reported
that SopCast communication mechanism starts with UDP for the exchange
of control messages among its peers using a gossip-like protocol and
then moves to TCP for the transfer of video segments. This use of
TCP for data transfer seems to contradict others findings.
3.1.7. TVants
The system architecture and working flows of TVAnts are similar to
PPLive's. TVAnts is more balanced between TCP and UDP in data
transmission.
TVAnts' peer list is also large and varies over time. [P2PIPTVMEA]
For data delivery, peers exhibit a mild preference to exchange data
with peers in the same Autonomous System and in the same subnet.
TVAnts peers also show some preference for downloading from closer
peers, exploit location information, and download mostly from
high-bandwidth peers. However, TVAnts does not seem to enforce any
tit-for-tat mechanism in data delivery.
TVAnts seems to be sensitive to network impairments such as changes
in network capacity, packet loss, and delay. When capacity drops, a
peer always seeks more peers to download from. In the process of
trying to avoid bad paths and selecting good peers to continue
downloading data, behavior that is aggressive and potentially harmful
to both the application and the network results when a bottleneck
affects all potential peers.
When a peer experiences limited access capacity, it reacts by
increasing redundancy (with an FEC or ARQ mechanism) as if reacting
to loss, and thus causes a higher download rate. To recover from
packet losses, it uses some kind of ARQ mechanism. Although network
conditions do impact video stream distribution (e.g., network delay
affects the start-up phase), they seem to have little impact on the
network topology discovery and maintenance process.
3.2. Tree-based P2P streaming systems
Tree-based systems implement a tree distribution graph, rooted at the
source of content. In principle, each node receives data from a
parent node, which may be the source or a peer. If peers do not
change too often, such systems require little overhead, since packets
are forwarded from node to node without the need for extra messages.
However, in high churn environments (i.e. fast turnover of peers in
the tree), the tree must be continuously destroyed and rebuilt, a
process that requires considerable control message overhead. As a
side effect, nodes must buffer data for at least the time required to
repair the tree, in order to avoid packet loss. One major drawback
of tree-based streaming systems is their vulnerability to peer churn.
A peer departure will temporarily disrupt video delivery to all peers
in the sub-tree rooted at the departed peer.
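This churn drawback can be made concrete with a small sketch: when a
peer departs, every peer in the sub-tree rooted at it temporarily
loses delivery until the tree is repaired. The tree shape and names
below are illustrative.

```python
# Sketch of the tree-based drawback above: a departure disrupts
# delivery to the whole sub-tree rooted at the departed peer.

def disrupted(tree, departed):
    """tree: {node: [children]}; return all nodes losing delivery."""
    out, stack = [], list(tree.get(departed, []))
    while stack:
        n = stack.pop()
        out.append(n)
        stack.extend(tree.get(n, []))
    return sorted(out)

# source feeds two transfer nodes, which feed three receivers
tree = {"src": ["t1", "t2"], "t1": ["r1", "r2"], "t2": ["r3"]}
print(disrupted(tree, "t1"))   # -> ['r1', 'r2']
```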
3.2.1. PeerCast
PeerCast adopts a tree structure. The architecture of PeerCast is
shown in Figure 6.
Peers in one channel construct the broadcast tree, and the broadcast
server is the root of the tree. A tracker can be implemented
independently or merged into the broadcast server. In tree-based P2P
streaming applications, the tracker selects the parent nodes for new
peers that join the tree. A transfer node in the tree receives and
forwards data simultaneously.
Peer Protocol: The peer joins a channel and gets the broadcast server
address. First, the peer sends a request to the server, and the
server answers OK or not according to its idle capacity. If the
broadcast server has enough idle capacity, it will include the peer
in its child-list. Otherwise, the broadcast server will choose at
most eight of its children and answer the peer with them. The peer
records these nodes and contacts them in turn, until it finds a node
that can serve it.
Instead of the peer requesting the channel, a transfer node pushes
the live stream to its children, each of which can be a transfer node
or a receiver. A node in the tree notifies its parent of its status
periodically, and the parent updates its child-list according to the
received notifications.
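The join walk described above can be sketched as follows; this is a
hypothetical model of the behavior, not PeerCast's actual API, and
all class and method names are illustrative.

```python
# Sketch of the PeerCast join walk: a joining peer contacts the
# broadcast server; a node with spare capacity accepts it as a child,
# otherwise it answers with at most eight of its children, and the
# peer retries them, walking down the tree until one can serve it.

MAX_REDIRECTS = 8

class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # how many children this node can serve
        self.children = []

    def handle_join(self, peer):
        """Answer OK (accept) or redirect to up to eight children."""
        if self.capacity > len(self.children):
            self.children.append(peer)
            return ("OK", self)
        return ("REDIRECT", self.children[:MAX_REDIRECTS])

def join(root, peer):
    """Walk down from the broadcast server until some node accepts."""
    candidates = [root]
    while candidates:
        status, answer = candidates.pop(0).handle_join(peer)
        if status == "OK":
            return answer            # the node that became our parent
        candidates.extend(answer)    # try the redirected children next
    return None

server = Node("broadcast-server", capacity=0)   # already full
t1 = Node("Transfer1", capacity=2)
server.children = [t1]
parent = join(server, Node("new-peer", capacity=0))
print(parent.name)   # -> Transfer1
```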
------------------------------
| +---------+ |
| | Tracker | |
| +---------+ |
| | |
| | |
| +---------------------+ |
| | Broadcast server | |
| +---------------------+ |
|------------------------------
/ \
/ \
/ \
/ \
+---------+ +---------+
|Transfer1| |Transfer2|
+---------+ +---------+
/ \ / \
/ \ / \
/ \ / \
+---------+ +---------+ +---------+ +---------+
|Receiver1| |Receiver2| |Receiver3| |Receiver4|
+---------+ +---------+ +---------+ +---------+
Figure 6, Architecture of PeerCast system
Each PeerCast node has a peering layer that sits between the
application layer and the transport layer. The peering layers of the
nodes coordinate to establish and maintain a multicast tree.
Moreover, the peering layer also supports a simple, lightweight
redirect primitive. This primitive allows a peer p to direct another
peer c, which is either opening a data-transfer session with p or
already has a session established with p, to a target peer t with
which to try to establish a data-transfer session. Peer discovery
starts at the root (source) or some selected sub-tree root and
proceeds recursively down the tree structure. When a peer leaves
normally, it informs its parent, which then releases the peer; the
leaving peer also redirects all its immediate children to find new
parents starting at some target node.
The peering layer allows for different topology maintenance policies.
In choosing a parent from among the children of a given peer, a child
can be chosen randomly, one at a time in some fixed order, or based
on least access latency with respect to the choosing peer.
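The three parent-choice policies just listed can be sketched as
small selection functions; the candidate list and latency table are
invented for illustration and are not part of PeerCast.

```python
# Sketch of the three parent-choice policies: random, fixed order
# (cycling across retries), and least measured access latency.
import random

def choose_random(children):
    return random.choice(children)

def choose_fixed_order(children, attempt):
    # one at a time, in some fixed order, cycling across retries
    return children[attempt % len(children)]

def choose_least_latency(children, latency):
    # latency: dict mapping child -> measured access latency (ms)
    return min(children, key=lambda c: latency[c])

children = ["a", "b", "c"]
latency = {"a": 40, "b": 15, "c": 90}
print(choose_fixed_order(children, attempt=4))   # -> b
print(choose_least_latency(children, latency))   # -> b
```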
3.2.2. Conviva
Conviva [Conviva] is a real-time media control platform for Internet
multimedia broadcasting. For its early prototype, End System
Multicast (ESM) [ESM] is the underlying networking technology for
organizing and maintaining an overlay broadcasting topology. Next we
present an overview of ESM. ESM adopts a tree structure. The
architecture of ESM is shown in Figure 7.
ESM has two versions of its protocol: one for smaller-scale
conferencing applications with multiple sources, and the other for
larger-scale broadcasting applications with a single source. We
focus on the latter version in this survey.
ESM maintains a single tree for its overlay topology. Its basic
functional components comprise two parts: (a) a bootstrap protocol,
a parent selection algorithm, and a lightweight probing protocol for
tree topology construction and maintenance; and (b) a separate
control structure decoupled from the tree, where a gossip-like
algorithm lets each member know a small random subset of the group
members; members also maintain paths from the source.
Upon joining, a node gets a subset of the group membership from the
source (the root node); it then finds a parent using a parent
selection algorithm. The node applies lightweight probing heuristics
to a subset of the members it knows, evaluates the remote nodes, and
chooses a candidate parent. It also uses the parent selection
algorithm to deal with performance degradation due to node and
network churn.
ESM supports NATs. It allows NATed hosts to be parents of public
hosts, and public hosts can be parents of all hosts, including NATed
hosts, as children.
------------------------------
| +---------+ |
| | Tracker | |
| +---------+ |
| | |
| | |
| +---------------------+ |
| | Broadcast server | |
| +---------------------+ |
|------------------------------
/ \
/ \
/ \
/ \
+---------+ +---------+
| Peer1 | | Peer2 |
+---------+ +---------+
/ \ / \
/ \ / \
/ \ / \
+---------+ +---------+ +---------+ +---------+
| Peer3 | | Peer4 | | Peer5 | | Peer6 |
+---------+ +---------+ +---------+ +---------+
Figure 7, Architecture of ESM system
ESM constructs the multicast tree in a two-step process. It first
constructs a mesh of the participating peers, with the following
properties:
1) The shortest path delay between any pair of peers in the mesh
is at most K times the unicast delay between them, where K is a
small constant.
2) Each peer has a limited number of neighbors in the mesh, which
does not exceed a given (per-member) bound chosen to reflect the
bandwidth of the peer's connection to the Internet.
It then constructs a (reverse) shortest-path spanning tree of the
mesh with the root being the source.
Therefore a peer participates in two types of topology management: a
control structure in which peers make sure they are always connected
in a mesh and a data delivery structure in which peers make sure data
gets delivered to them in a tree structure.
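The first mesh property above (path delay at most K times the
unicast delay) can be checked mechanically; the topology and delay
values below are invented for illustration, not measured data.

```python
# Sketch checking ESM's first mesh property: for every pair of
# peers, the shortest-path delay through the mesh is at most K times
# their unicast delay.
import heapq

def shortest_delays(mesh, src):
    """Dijkstra over the mesh; mesh[u] = {v: link_delay}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in mesh[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def satisfies_delay_bound(mesh, unicast, K):
    peers = list(mesh)
    for u in peers:
        dist = shortest_delays(mesh, u)
        for v in peers:
            if v != u and dist[v] > K * unicast[u][v]:
                return False
    return True

mesh = {"s": {"a": 10, "b": 12}, "a": {"s": 10, "b": 5},
        "b": {"s": 12, "a": 5}}
unicast = {"s": {"a": 10, "b": 11}, "a": {"s": 10, "b": 5},
           "b": {"s": 11, "a": 5}}
print(satisfies_delay_bound(mesh, unicast, K=2))   # -> True
```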
To improve mesh/tree structural and operating quality, peers randomly
probe one another to add new links that have a perceived gain in
utility, and each peer continually monitors its existing links to
drop those with a perceived drop in utility. Parent switching occurs
if a peer leaves or fails, if there is persistent congestion or a
low-bandwidth condition, or if there is a better clustering
configuration. To make more public hosts available as parents of
NATed hosts, public hosts preferentially choose NATed hosts as
parents.
The data delivery structure, obtained by running a distance vector
protocol on top of the mesh using the latency between neighbors as
the routing metric, is maintained using various mechanisms. Each
peer maintains and keeps up to date the routing cost to every other
member, together with the path that leads to that cost. To ensure
routing table stability, data continues to be forwarded along the old
routes for sufficient time until the routing tables converge; the
time is set to be larger than the cost of any path with a valid
route, but smaller than infinite cost. To make better use of path
bandwidth, streams of different bit-rates are forwarded according to
the following priority scheme: audio has higher priority than video,
and lower-quality video has higher priority than higher-quality
video. Moreover, stream bit-rates are adapted to the peer's
performance capability.
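The priority scheme just described can be sketched as a simple
ordering over queued packets; the numeric priorities and packet
tuples are assumptions for illustration, so constrained paths degrade
video quality before losing audio.

```python
# Sketch of the ESM forwarding priority: audio before video, and
# lower-quality video before higher-quality video. Smaller number
# means forwarded sooner; ordering is stable within a class.

PRIORITY = {"audio": 0, "video-low": 1, "video-high": 2}

def forward_order(packets):
    """Return (kind, seq) packets in forwarding order."""
    return [p for _, _, p in sorted(
        (PRIORITY[kind], i, (kind, seq))
        for i, (kind, seq) in enumerate(packets))]

packets = [("video-high", 1), ("audio", 1), ("video-low", 1),
           ("audio", 2)]
print(forward_order(packets))
# -> [('audio', 1), ('audio', 2), ('video-low', 1), ('video-high', 1)]
```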
3.3. Hybrid P2P streaming system
The objective of a hybrid P2P streaming system is to combine the
advantages of tree and mesh topologies, and of the pull and push
modes, in order to strike a balance among system robustness,
scalability, and real-time performance.
3.3.1. New Coolstreaming
Coolstreaming, first released in summer 2004 with a mesh-based
structure, arguably represented the first successful large-scale P2P
live streaming deployment. As the above analysis suggests, it has
poor delay performance and high overhead associated with each video
block transmission. New Coolstreaming [NEWCOOLStreaming] therefore
adopts a hybrid mesh and tree structure with a hybrid pull and push
mechanism. All peers are organized into a mesh-based topology, in a
way similar to PPLive, to ensure high reliability.
Besides, the content delivery mechanism is the most important part of
New Coolstreaming. Figure 8 shows the content delivery architecture.
The video stream is divided into blocks of equal size, each of which
is assigned a sequence number representing its playback order in the
stream. Each video stream is divided into multiple sub-streams
without any coding, so that each node can retrieve any sub-stream
independently from different parent nodes. This reduces the impact
on content delivery of a parent departure or failure. The details of
the hybrid push and pull content delivery scheme are as follows:
(1) A node first subscribes to a sub-stream by connecting to one of
its partners via a single request (pull) in the Buffer Map; the
requested partner becomes the parent node. (The node can subscribe
to more sub-streams from its partners in this way to obtain higher
playback quality.)
(2) The selected parent node then continues pushing all blocks of the
sub-stream needed by the requesting node.
This not only reduces the overhead associated with each video block
transfer but, more importantly, significantly reduces the time
involved in retrieving video content.
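The two steps above can be sketched as follows: one pull request
subscribes a node to a sub-stream, after which the chosen parent
pushes every subsequent block without further requests. The classes
and method names are illustrative, not New Coolstreaming's API.

```python
# Sketch of the hybrid pull/push scheme: subscribe once (pull), then
# the parent pushes all subsequent blocks of that sub-stream.

class Parent:
    def __init__(self):
        self.subscribers = {}   # sub_stream_id -> child node

    def subscribe(self, sub_stream_id, child):      # the one "pull"
        self.subscribers[sub_stream_id] = child

    def on_new_block(self, sub_stream_id, block):   # subsequent "push"es
        child = self.subscribers.get(sub_stream_id)
        if child is not None:
            child.receive(sub_stream_id, block)

class Child:
    def __init__(self):
        self.received = []

    def receive(self, sub_stream_id, block):
        self.received.append((sub_stream_id, block))

parent, child = Parent(), Child()
parent.subscribe(2, child)          # one pull for sub-stream 2
for seq in (10, 11, 12):            # parent pushes blocks as they arrive
    parent.on_new_block(2, seq)
    parent.on_new_block(3, seq)     # not subscribed: nothing delivered
print(child.received)   # -> [(2, 10), (2, 11), (2, 12)]
```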
------------------------------
| +---------+ |
| | Tracker | |
| +---------+ |
| | |
| | |
| +---------------------+ |
| | Content server | |
| +---------------------+ |
|------------------------------
/ \
/ \
/ \
/ \
+---------+ +---------+
| Peer1 | | Peer2 |
+---------+ +---------+
/ \ / \
/ \ / \
/ \ / \
+---------+ +---------+ +---------+ +---------+
| Peer2 | | Peer3 | | Peer1 | | Peer3 |
+---------+ +---------+ +---------+ +---------+
Figure 8, Content Delivery Architecture
Video content is processed for ease of delivery, retrieval, storage,
and playout. To manage content delivery, a video stream is divided
into blocks of equal size, each of which is assigned a sequence
number to represent its playback order in the stream. Each block is
further divided into K sub-blocks, and the set of i-th sub-blocks of
all blocks constitutes the i-th sub-stream of the video stream, where
i ranges from 1 to K. To retrieve video content, a node receives at
most K distinct sub-streams from its parent nodes. To store
retrieved sub-streams, a node uses a double buffering scheme with a
synchronization buffer and a cache buffer.
The synchronization buffer stores the received sub-blocks of each
sub-stream according to the associated block sequence number of the
video stream. The cache buffer then picks up the sub-blocks
according to the associated sub-stream index of each ordered block.
To advertise the availability of the latest blocks of the different
sub-streams in its buffer, a node uses a Buffer Map, represented by
two vectors of K elements each. Each entry of the first vector
indicates the block sequence number of the latest received block of
the corresponding sub-stream, and each bit of the second vector, if
set, indicates that the corresponding sub-stream is being requested.
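The two-vector Buffer Map just described can be sketched as below;
K, the sequence numbers, and the helper names are assumptions for
illustration, not the actual encoding.

```python
# Sketch of the New Coolstreaming Buffer Map: for K sub-streams, the
# first vector holds the latest received block sequence number per
# sub-stream, the second is a bit vector marking which sub-streams
# are being requested from this partner.

K = 4

def make_buffer_map(latest_seq, requested):
    """latest_seq: K sequence numbers; requested: set of sub-stream ids."""
    return (list(latest_seq),
            [1 if i in requested else 0 for i in range(K)])

def requested_substreams(bm):
    _, bits = bm
    return [i for i, b in enumerate(bits) if b]

bm = make_buffer_map([120, 118, 121, 119], requested={1, 3})
print(bm)                        # -> ([120, 118, 121, 119], [0, 1, 0, 1])
print(requested_substreams(bm))  # -> [1, 3]
```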
For data delivery, a node uses a hybrid push and pull scheme with
randomly selected partners. A node that has requested one or more
distinct sub-streams from a partner, as indicated in its Buffer Map,
will continue to receive the sub-streams of all subsequent blocks
from the same partner until future conditions cause the partner to do
otherwise. Moreover, users retrieve video indirectly from the source
through a number of strategically located servers.
To keep the parent-child relationship above a certain level of
quality, each node constantly monitors the status of the on-going
sub-stream reception and re-selects parents according to sub-stream
availability patterns. Specifically, if a node observes that the
block sequence number of the sub-stream of a parent is smaller than
that of any of its other partners by a predetermined amount, the node
concludes that the parent is lagging sufficiently behind and needs to
be replaced. Furthermore, a node also evaluates the maximum and
minimum of the block sequence numbers in its synchronization buffer
to determine whether any parent is lagging behind the rest of its
parents and thus also needs to be replaced.
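The re-selection rule above can be sketched as a lag test against
the freshest partner; the threshold Ts and all names are assumed
tuning parameters, not values from New Coolstreaming.

```python
# Sketch of parent re-selection: a parent whose sub-stream sequence
# number lags the best partner by more than a threshold is flagged
# for replacement.

Ts = 10   # predetermined lag threshold (illustrative)

def lagging_parents(parent_seq, partner_seq, threshold=Ts):
    """parent_seq: {parent: latest block seq of its sub-stream};
    partner_seq: latest seqs observed at other partners."""
    best = max(partner_seq)
    return [p for p, seq in parent_seq.items()
            if best - seq > threshold]

parents = {"p1": 95, "p2": 118, "p3": 120}
partners = [119, 121, 120]
print(lagging_parents(parents, partners))   # -> ['p1']
```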
4. A common P2P Streaming Process Model
As shown in the figure below, a common P2P streaming process can be
summarized based on Section 3:
1) When a peer wants to receive streaming content:
1.1) Peer acquires a list of peers/parent nodes from the
tracker.
1.2) Peer exchanges its content availability with the peers on
the obtained peer list, or requests to be adopted by the parent
nodes.
1.3) Peer identifies the peers with desired content, or the
available parent node.
1.4) Peer requests for the content from the identified peers,
or receives the content from its parent node.
2) When a peer wants to share streaming content with others:
2.1) Peer sends information to the tracker about the swarms it
belongs to, plus streaming status and/or content availability.
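The steps above can be sketched as a minimal tracker interaction:
the tracker hands out peer lists per swarm (step 1.1) and peers
register the swarms they serve (step 2.1). Message and class names
are illustrative, not part of any PPSP wire format.

```python
# Minimal sketch of the common process model: a tracker mapping
# swarms to the peers that serve them.

class Tracker:
    def __init__(self):
        self.swarms = {}   # swarm_id -> set of peer ids

    def register(self, peer_id, swarm_id):
        """Step 2.1: a peer reports a swarm it belongs to."""
        self.swarms.setdefault(swarm_id, set()).add(peer_id)

    def query(self, peer_id, swarm_id):
        """Step 1.1: a peer asks for a list of peers/parent nodes."""
        return sorted(self.swarms.get(swarm_id, set()) - {peer_id})

tracker = Tracker()
tracker.register("peer2", "channel-1")
tracker.register("peer3", "channel-1")
# peer1 joins channel-1 and gets candidate peers/parents to contact
print(tracker.query("peer1", "channel-1"))   # -> ['peer2', 'peer3']
```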
+---------------------------------------------------------+
| +--------------------------------+ |
| | Tracker | |
| +--------------------------------+ |
| ^ | ^ |
| | | | |
| query | | peer list/ |streaming Status/ |
| | | Parent nodes |Content availability/ |
| | | |node capability |
| | | | |
| | V | |
| +-------------+ +------------+ |
| | Peer1 |<------->| Peer 2 | |
| +-------------+ content/+------------+ |
| join requests |
+---------------------------------------------------------+
Figure 9, A common P2P streaming process model
The functionality of the tracker and of data transfer differs
slightly between mesh-based and tree-based applications. In
mesh-based applications, such as Joost and PPLive, the tracker
maintains lists of the peers storing chunks for a specific channel or
streaming file; it provides the peer list for peers to download from,
as well as upload to, each other. In tree-based applications, such
as PeerCast and Conviva, the tracker directs new peers to find parent
nodes, and the data flows from parent to child only.
5. Security Considerations
This document does not raise security issues.
6. Author List
The authors of this document are listed as below.
Hui Zhang, NEC Labs America.
Jun Lei, University of Goettingen.
Gonzalo Camarillo, Ericsson.
Yong Liu, Polytechnic University.
Delfin Montuno, Huawei.
Lei Xie, Huawei.
Shihui Duan, CATR.
7. Acknowledgments
We would like to acknowledge Jiang Xingfeng for providing good ideas
for this document.
8. Informative References
[PPLive] "www.pplive.com".
[PPStream]
"www.ppstream.com".
[CNN] "www.cnn.com".
[JOOSTEXP]
Lei, Jun, et al., "An Experimental Analysis of Joost Peer-
to-Peer VoD Service".
[P2PVOD] Huang, Yan, et al., "Challenges, Design and Analysis of a
Large-scale P2P-VoD System", 2008.
[Octoshape]
Alstrup, Stephen, et al., "Introducing Octoshape-a new
technology for large-scale streaming over the Internet".
[Zattoo] "http://zattoo.com/".
[Conviva] "http://www.rinera.com/".
[ESM] Zhang, Hui., "End System Multicast,
http://www.cs.cmu.edu/~hzhang/Talks/ESMPrinceton.pdf",
May .
[Survey] Liu, Yong, et al., "A survey on peer-to-peer video
streaming systems", 2008.
[P2PIPTVMEA]
Silverston, Thomas, et al., "Measuring P2P IPTV Systems".
[Challenge]
Li, Bo, et al., "Peer-to-Peer Live Video Streaming on the
Internet: Issues, Existing Approaches, and Challenges",
June 2007.
[NEWCOOLStreaming]
Li, Bo, et al., "Inside the New Coolstreaming:
Principles,Measurements and Performance Implications",
Apr. 2008.
Authors' Addresses
Gu Yingjie (editor)
Huawei
No.101 Software Avenue
Nanjing, Jiangsu Province 210012
P.R.China
Phone: +86-25-56624760
Fax: +86-25-56624702
Email: guyingjie@huawei.com
Zong Ning (editor)
Huawei
No.101 Software Avenue
Nanjing, Jiangsu Province 210012
P.R.China
Phone: +86-25-56624760
Fax: +86-25-56624702
Email: zongning@huawei.com
Zhang Yunfei
China Mobile
Email: zhangyunfei@chinamobile.com