<?xml version="1.0" encoding="UTF-8"?>
<?rfc toc="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes" ?>
<?rfc tocindent="no"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY I-D.unify-nfvrg-challenges SYSTEM "http://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.unify-nfvrg-challenges.xml">
<!ENTITY I-D.ietf-sfc-dc-use-cases SYSTEM "http://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-sfc-dc-use-cases.xml">
<!ENTITY I-D.zu-nfvrg-elasticity-vnf SYSTEM "http://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.zu-nfvrg-elasticity-vnf.xml">
]>
<rfc category="info" ipr="trust200902" docName="draft-unify-nfvrg-recursive-programming-01">
<front>
<title abbrev="Toward recursive programming">Towards recursive virtualization and programming for
network and cloud resources</title>
<author fullname="Robert Szabo" initials="R." surname="Szabo">
<organization abbrev="Ericsson">Ericsson Research, Hungary</organization>
<address>
<postal>
<street>Irinyi Jozsef u. 4-20</street>
<city>Budapest</city>
<region></region>
<code>1117</code>
<country>Hungary</country>
</postal>
<email>robert.szabo@ericsson.com</email>
<uri>http://www.ericsson.com/</uri>
</address>
</author>
<author fullname="Zu Qiang" initials="Z." surname="Qiang">
<organization abbrev="Ericsson">Ericsson</organization>
<address>
<postal>
<street>8400, boul. Decarie</street>
<city>Ville Mont-Royal</city>
<region>QC</region>
<code>8400</code>
<country>Canada</country>
</postal>
<email>zu.qiang@ericsson.com</email>
<uri>http://www.ericsson.com/</uri>
</address>
</author>
<author fullname="Mario Kind" initials="M." surname="Kind">
<organization abbrev="Deutsche Telekom AG">Deutsche Telekom AG</organization>
<address>
<postal>
<street>Winterfeldtstr. 21</street>
<city>10781 Berlin</city>
<country>Germany</country>
</postal>
<email>mario.kind@telekom.de</email>
</address>
</author>
<date year="2015" />
<area>IRTF</area>
<workgroup>NFVRG</workgroup>
<keyword>Internet-Draft</keyword>
<abstract>
<t>The introduction of Network Function Virtualization (NFV) in
carrier-grade networks promises improved operations in terms of
            flexibility, efficiency, and manageability. NFV is an approach
            that combines network and compute virtualization.
            However, network and compute resource domains expose
different virtualizations and programmable interfaces. In
<xref target="I-D.unify-nfvrg-challenges"/> we argued for a
joint compute and network virtualization by looking into
different compute abstractions.</t>
<!-- They usually expose detailed enough control only to their -->
<!-- primarily resource types. Orchestrating service graphs with -->
<!-- transparent network functions, however, requires coordination of -->
<!-- networking across compute and network domains. -->
<t>In this document we analyze different approaches to orchestrate a
service graph with transparent network functions into a commodity data
center. We show that a recursive compute and network joint
virtualization and programming has clear advantages compared to other
approaches with separated control between compute and network
resources. The discussion of the problems and the proposed solution is
generic for any data center use case; however, we use NFV as an
example.</t>
</abstract>
</front>
<middle>
<section title="Introduction" anchor="intro">
<t>To a large degree there is agreement in the research
community that rigid network control limits the flexibility of
service creation. In
<xref target="I-D.unify-nfvrg-challenges"/>
<list style="symbols">
<t> we analyzed different compute domain abstractions to
argue that joint compute and network virtualization
and programming is needed for efficient combination of
these resource domains;</t>
<t> we described challenges associated with the combined
handling of compute and network resources for a unified
production environment.</t>
</list>
</t>
<t>Our goal here is to analyze different approaches to
instantiate a service graph with transparent network functions
into a commodity Data Center (DC). More specifically, we
analyze
<list style="symbols">
          <t> two black box DC set-ups, where the intra-DC network
          control is limited to a generic, compute-only control
          programming interface; </t>
          <t> a white box DC set-up, where the intra-DC network
          control is exposed directly to a DC-external controller
          to coordinate forwarding configurations;</t>
<t> a recursive approach, which illustrates potential
benefits of a joint compute and network virtualization and
control.</t>
</list>
</t>
<t>The discussion of the problems and the proposed solution is
generic for any data center use case; however, we use NFV as an
example.</t>
</section>
<section title="Terms and Definitions" anchor="sec-terms">
<t>We use the terms compute and "compute and storage"
interchangeably throughout the document. Moreover, we use the
following definitions, as established in
<xref target="ETSI-NFV-Arch"/>:</t>
<t><list style="hanging">
<t hangText="NFV:">Network Function Virtualization - The principle
of separating network functions from the hardware they run on by
using virtual hardware abstraction.</t>
<t hangText="NFVI:">NFV Infrastructure - Any combination of
virtualized compute, storage and network resources.</t>
<t hangText="VNF:">Virtualized Network Function - a software-based
network function.</t>
<t hangText="MANO:">Management and Orchestration - In the ETSI NFV
framework <xref target="ETSI-NFV-MANO"/>, this is the global entity
responsible for management and orchestration of NFV lifecycle.</t>
</list></t>
<t>Further, we make use of the following terms:</t>
<t><list style="hanging">
<t hangText="NF:">a network function, either software-based (VNF) or
appliance-based.</t>
<t hangText="SW:">a (routing/switching) network element with a
programmable control plane interface.</t>
        <t hangText="DC:"> a data center: an interconnection of Compute Nodes
        (see below) with a data center controller, which offers a programmatic
        resource control interface to its clients.</t>
        <t hangText="CN:"> a Compute Node: a server controlled by a DC control
        plane that provides an execution environment for virtual machine (VM)
        images such as VNFs.</t>
</list></t>
</section>
<section title="Use Cases" anchor="sec-ucs">
      <t>Service Function Chaining (SFC) addresses the problem of how to
      deliver end-to-end services through a chain of network functions
      (NFs). Many such NFs are envisioned to be transparent to the client,
      i.e., they intercept the client connection to add value to the
      services without the knowledge of the client. However, deploying
      network function chains in DCs with Virtualized Network Functions
      (VNFs) is far from trivial <xref target="I-D.ietf-sfc-dc-use-cases"/>.
      For example, different exposures of the internals of the DC imply
      different dynamism in operations and different orchestration
      complexities, and may yield different business cases with regards to
      infrastructure sharing.</t>
      <t>We investigate different scenarios with a simple NF forwarding graph
      of three VNFs (o->VNF1->VNF2->VNF3->o), where all VNFs are deployed
      within the same DC. We assume that the DC has a multi-tier
      leaf-and-spine (Clos) topology and that all VNFs of the forwarding
      graph are bump-in-the-wire NFs, i.e., the client cannot explicitly
      access them.</t>
<section title="Black Box DC" anchor="sec-ucs-bb">
        <t>In Black Box DC set-ups, we assume that the compute
domain is an autonomous domain with legacy (e.g., OpenStack)
orchestration APIs. Due to the lack of direct forwarding
control within the DC, no native L2 forwarding can be used
to insert VNFs running in the DC into the forwarding
graph. Instead, explicit tunnels (e.g., VxLAN) must be
used, which need termination support within the deployed
VNFs. Therefore, VNFs must be aware of the previous and
the next hops of the forwarding graph to receive and
forward packets accordingly.</t>
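        <t>The chain awareness required from the VNFs can be sketched
        with the following Python fragment, which derives the per-VNF
        tunnel configuration for the example chain (illustrative only;
        the endpoint names and addresses are hypothetical):</t>
        <figure align="center" title="Sketch: per-VNF chain configuration">
          <artwork align="center"><![CDATA[
# Each VNF must be configured with the previous and the next
# hop of the forwarding graph, since no native L2 forwarding
# is available inside the black box DC.
CHAIN = ["sdn-gw", "vnf1", "vnf2", "vnf3", "sdn-gw"]
VTEP = {               # hypothetical VxLAN endpoint addresses
    "sdn-gw": "192.0.2.1",
    "vnf1": "10.0.0.11",
    "vnf2": "10.0.0.12",
    "vnf3": "10.0.0.13",
}

def chain_config(chain, vtep):
    """Per-VNF tunnel configuration for a linear chain."""
    cfg = {}
    for prev, vnf, nxt in zip(chain, chain[1:], chain[2:]):
        cfg[vnf] = {"accept_from": vtep[prev],
                    "forward_to": vtep[nxt]}
    return cfg

# vnf2 must know both of its neighbors in the chain:
assert chain_config(CHAIN, VTEP)["vnf2"] == {
    "accept_from": "10.0.0.11", "forward_to": "10.0.0.13"}
]]></artwork>
        </figure>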
<section title="Black Box DC with L3 tunnels"
anchor="sec-ucs-bb-l3">
<t><xref target="fig_bb_vnf"/> illustrates a set-up where
an external VxLAN termination point in the SDN domain is
used to forward packets to the first NF (VNF1) of the
chain within the DC. VNF1, in turn, is configured to
forward packets to the next SF (VNF2) in the chain and so
forth with VNF2 and VNF3.</t>
<t>In this set-up VNFs must be capable of handling L3
tunnels (e.g., VxLAN) and must act as forwarders
themselves. Additionally, an operational L3 underlay must
be present so that VNFs can address each
other.</t>
          <t>Furthermore, VNFs holding chain forwarding information
          could be untrusted user-plane functions from 3rd-party
          developers, which makes the enforcement of proper forwarding
          problematic.</t>
          <t> Additionally, compute-only orchestration might result
          in sub-optimal allocation of the VNFs with regards to the
          forwarding overlay; see, for example, the back-and-forth use
          of a core switch in <xref target="fig_bb_vnf"/>.</t>
          <t>In <xref target="I-D.unify-nfvrg-challenges"/> we also
          pointed out that within a single Compute Node (CN) a similar
          VNF placement and overlay optimization problem may
          reappear in the context of network interface cards and CPU
          cores.</t>
<figure anchor="fig_bb_vnf" align="center"
title="Black Box Data Center with VNF Overlay">
<artwork align="center"><![CDATA[
| A A
+---+ | S |
|SW1| | D |
+---+ | N | P
/ \ V | H
/ \ | Y
| | A | S
+---+ +-+-+ | | I
|SW | |SW | | | C
,+--++.._ _+-+-+ | | A
,-" _|,,`.""-..+ | C | L
_,,,--"" | `. |""-.._ | L |
+---+ +--++ `+-+-+ ""+---+ | O |
|SW | |SW | |SW | |SW | | U |
+---+ ,'+---+ ,'+---+ ,'+---+ | D |
| | ,-" | | ,-" | | ,-" | | | |
+--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ | |
|CN| |CN| |CN| |CN| |CN| |CN| |CN| |CN| | |
+--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ V V
| | |
+-+ +-+ +-+ A
|V| |V| |V| | L
|N| |N| |N| | O
|F| |F| |F| | G
|1| |3| |2| | I
+-+ +-+ +-+ | C
+---+ --1>-+ | | +--<3---------------<3---+ | | A
|SW1| +-2>-----------------------------2>---+ | L
+---+ <4--------------+ V
<<=============================================>>
IP tunnels, e.g., VxLAN
]]></artwork>
</figure>
</section> <!-- sec-ucs-bb-l3 -->
<section title="Black Box DC with external steering"
anchor="sec-ucs-bb-ext">
<t><xref target="fig_bb_ext"/> illustrates a set-up where an
external VxLAN termination point in the SDN domain is used
to forward packets among all the SFs (VNF1-VNF3) of the
chain within the DC. VNFs in the DC need to be configured to
          receive and send packets only to and from the SDN endpoint,
          and hence are not aware of the next-hop VNF address. Should any
          VNF need to be relocated, e.g., due to scale in/out as
described in <xref target="I-D.zu-nfvrg-elasticity-vnf"/>,
the forwarding overlay can be transparently re-configured at
the SDN domain.</t>
          <t>Note, however, that traffic between the DC-internal SFs
          (VNF1, VNF2, VNF3) needs to exit and re-enter the DC through
          the external SDN switch. This is clearly sub-optimal and
          results in ping-pong traffic similar to the local and remote
          DC case discussed in <xref target="I-D.zu-nfvrg-elasticity-vnf"/>.</t>
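          <t>The ping-pong effect can be quantified with a simple count
          of DC boundary crossings (an illustrative Python model, not
          part of any interface definition): with external steering
          every VNF visit enters and leaves the DC, while with
          DC-internal steering only the first and last hops cross the
          boundary.</t>
          <figure align="center" title="Sketch: DC boundary crossings">
            <artwork align="center"><![CDATA[
def boundary_crossings(n_vnfs, external_steering):
    """DC boundary crossings for a chain of n_vnfs VNFs."""
    if external_steering:
        # every VNF visit enters and leaves the DC
        return 2 * n_vnfs
    # traffic enters once before VNF1, leaves after VNFn
    return 2

assert boundary_crossings(3, external_steering=True) == 6
assert boundary_crossings(3, external_steering=False) == 2
]]></artwork>
          </figure>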
<figure anchor="fig_bb_ext" align="center"
title="Black Box Data Center with ext Overlay">
<artwork align="center"><![CDATA[
| A A
+---+ | S |
|SW1| | D |
+---+ | N | P
/ \ V | H
/ \ | Y
| | ext port A | S
+---+ +-+-+ | | I
|SW | |SW | | | C
,+--++.._ _+-+-+ | | A
,-" _|,,`.""-..+ | C | L
_,,,--"" | `. |""-.._ | L |
+---+ +--++ `+-+-+ ""+---+ | O |
|SW | |SW | |SW | |SW | | U |
+---+ ,'+---+ ,'+---+ ,'+---+ | D |
| | ,-" | | ,-" | | ,-" | | | |
+--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ | |
|CN| |CN| |CN| |CN| |CN| |CN| |CN| |CN| | |
+--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ V V
| | |
+-+ +-+ +-+ A
|V| |V| |V| | L
|N| |N| |N| | O
|F| |F| |F| | G
|1| |3| |2| | I
+-+ +-+ +-+ | C
+---+ --1>-+ | | | | | | A
|SW1| <2-----+ | | | | | L
| | --3>---------------------------------------+ | |
| | <4-------------------------------------------+ |
| | --5>------------+ | |
+---+ <6----------------+ V
<<=============================================>>
IP tunnels, e.g., VxLAN
]]></artwork>
</figure>
</section> <!-- sec-ucs-bb-ext -->
</section> <!-- sec-ucs-bb -->
<section title="White Box DC" anchor="sec-ucs-wb">
        <t><xref target="fig_wb"/> illustrates a set-up where the
        internal network of the DC is exposed in full detail through
        an SDN Controller for steering control. We assume that native
        L2 forwarding can be applied all through the DC up to the
        VNFs' ports, hence IP tunneling and tunnel termination at the
        VNFs are not needed. Therefore, VNFs need not be forwarding-graph
        aware but transparently receive and forward packets. However,
        the implication is that the network control of the DC must
        be handed over to an external forwarding controller (note
        that the SDN domain and the DC domain overlap in <xref
        target="fig_wb"/>). This most probably prohibits clear
        operational separation or separate ownership of the two
        domains.</t>
<figure anchor="fig_wb" align="center"
title="White Box Data Center with L2 Overlay">
<artwork align="center"><![CDATA[
| A A
+---+ | S |
|SW1| | D |
+---+ | N | P
/ \ | | H
/ \ | | Y
| | ext port | A | S
+---+ +-+-+ | | | I
|SW | |SW | | | | C
,+--++.._ _+-+-+ | | | A
,-" _|,,`.""-..+ | | C | L
_,,,--"" | `. |""-.._ | | L |
+---+ +--++ `+-+-+ ""+---+ | | O |
|SW | |SW | |SW | |SW | | | U |
+---+ ,'+---+ ,'+---+ ,'+---+ V | D |
| | ,-" | | ,-" | | ,-" | | | |
+--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ | |
|CN| |CN| |CN| |CN| |CN| |CN| |CN| |CN| | |
+--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ V V
| | |
+-+ +-+ +-+ A
|V| |V| |V| | L
|N| |N| |N| | O
|F| |F| |F| | G
|1| |3| |2| | I
+-+ +-+ +-+ | C
+---+ --1>-+ | | +--<3---------------<3---+ | | A
|SW1| +-2>-----------------------------2>---+ | L
+---+ <4--------------+ V
<<=============================================>>
L2 overlay
]]></artwork>
</figure>
</section> <!-- sec-ucs-wb -->
</section> <!-- sec-ucs -->
<section title="Recursive approach" anchor="sec-ucs-unify">
      <t>We argued in <xref target="I-D.unify-nfvrg-challenges"/> for
      a joint software and network programming interface. Consider
      that such a joint software and network abstraction
      (virtualization) exists around the DC with a corresponding
      resource programmatic interface. A joint software and network
      programming interface could include VNF requests and the
      definition of the corresponding network overlay. Note that
      such a programming interface is similar to the top-level
      service definition, for example, by means of a VNF
      Forwarding Graph.</t>
      <t><xref target="fig_rec_1"/> illustrates a joint domain
      virtualization and programming setup. In
      <xref target="fig_rec_1"/> "[x]" denotes ports of the
      virtualized data plane, while "x" denotes ports created
      dynamically as part of the VNF deployment request. Over the
      joint software and network virtualization, VNF placement and
      the corresponding traffic steering can be defined in an
      atomic operation, which is then orchestrated, split and
      handed over to the next levels in the hierarchy (see
      <xref target="fig_rec_2"/>) for further orchestration. Such
      a setup allows clear operational separation, arbitrary
      domain virtualization (e.g., topology details can be
      omitted) and constraint-based optimization of domain-wide
      resources.</t>
<figure anchor="fig_rec_1" align="center"
title="Recursive Domain Virtualization and Joint VNF FG
programming: Overarching View">
<artwork align="center"><![CDATA[
|
+-----------------------[x]--------------------+ A
|Domain 0 | | |O
| +--------[x]----------+ | |V
| | / \ | | |E
|Big Switch | -<--- --->-- | | |R
|with | / BiS-BiS \ | | |A
|Big Software | | +-->-+ +-->-+ | | | |R
|(BiS-BiS) | | | | | | | | | |C
| +--x-x----x-x----x-x--+ | |H
| | | | | | | | |I
| +-+ +-+ +-+ | |N
| |V| |V| |V| | |G V
| |N| |N| |N| | | N
| |F| |F| |F| | | F
| |1| |2| |3| | |
| +-+ +-+ +-+ | | F
| | | G
+----------------------------------------------+ V
]]></artwork>
</figure>
<figure anchor="fig_rec_2" align="center"
title="Recursive Domain Virtualization and Joint VNF FG
programming: Domain Views">
<artwork align="center"><![CDATA[
+-------------------------|-----------------------------+ A
| +----------------------[x]---------------------+ AV | |
| | Domain 1 / \ | |N | |
| | | A | |F | |
| | Big Switch (BS) | | | | | |O
| | V | | |F | |V
| | / \ | |G | |E
| +-----------------[x]--------[x]---------------+ V1 | |R
| | | | |A
| +------------------|----------|----------------+ A | |R
| |Domain 2 | A | | | |C
| | V | | | | |H
| | +---[x]--------[x]----+ | |V | |I
| |Big Switch | / BiS-BiS \ | | |N | |N
| |with | / \ | | |F | |G
| |Big Software | | +-->-+ +-->-+ | | | | | |
| |(BiS-BiS) | | | | | | | | | |F | |V
| | +--x-x----x-x----x-x--+ | |G | |N
| | | | | | | | | |2 | |F
| | +-+ +-+ +-+ | | | |
| | |V| |V| |V| | | | |F
| | |N| |N| |N| | | | |G
| | |F| |F| |F| | | | |
| | |1| |2| |3| | | | |
| | +-+ +-+ +-+ | | | |
| +----------------------------------------------+ V | |
+-------------------------------------------------------+ V
]]></artwork>
</figure>
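      <t>The recursive hand-over can be captured in a few lines of
      Python; the two-level hierarchy, the domain names and the
      partitioning logic below are hypothetical simplifications:</t>
      <figure align="center" title="Sketch: recursive NF-FG orchestration">
        <artwork align="center"><![CDATA[
def orchestrate(nffg, placement, children):
    """Split an NF-FG by domain; recurse into child domains."""
    per_domain = {}
    for nf in nffg:
        per_domain.setdefault(placement[nf], []).append(nf)
    deployed = {}
    for domain, sub_fg in per_domain.items():
        child = children.get(domain)
        if child:    # hierarchical domain: hand the sub-graph over
            deployed[domain] = orchestrate(sub_fg, *child)
        else:        # leaf domain: deploy directly
            deployed[domain] = sub_fg
    return deployed

# Domain 1 delegates the whole chain to Domain 2 (a leaf):
top = orchestrate(
    ["VNF1", "VNF2", "VNF3"],
    {"VNF1": "dom1", "VNF2": "dom1", "VNF3": "dom1"},
    {"dom1": ({"VNF1": "dom2", "VNF2": "dom2",
               "VNF3": "dom2"}, {})})
assert top == {"dom1": {"dom2": ["VNF1", "VNF2", "VNF3"]}}
]]></artwork>
      </figure>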
<section title="Virtualization" anchor="sec:virtualization">
<t>Let us first define the joint software and network abstraction
(virtualization) as a Big Switch with Big Software (BiS-BiS). A BiS-BiS
is a node abstraction, which incorporates both software and networking
resources with an associated joint software and network control API
(see <xref target="fig:bisbis-def"/>).</t>
<figure anchor="fig:bisbis-def" align="center"
title="Big Switch with Big Software definition">
<artwork align="center"><![CDATA[
API o __
| \
Software Ctrler \
API O-------------+ \ \
| \ \
Compute Ctrler \ |
| \ |
| +---------------------+ |
| | | | Joint Software &
| | {vCPU | | Network Ctrl API
| | memory | | o
| | storage} | | |
| | | | +---------------------+
| | | | | {{vCPU |
| |Compute Node | \ [1 memory 3]
| | | ==> | storage} |
| +----------x----------+ / [2 {port rate 4]
\ | | | switching delay}} |
+----------x----------+ | +---------------------+
| | | Big Switch &
[1 {port rate 3] | Big Software (BiS-BiS)
| switching delay} | | with joint
[2 4] / Software & Network Ctrler
| Network Element | /
+---------------------+ /
__/
]]></artwork>
</figure>
        <t>Configuration over a BiS-BiS allows the atomic definition of NF
        placements and the corresponding forwarding overlay as a Network
        Function - Forwarding Graph (NF-FG). Embedding NFs into a
        BiS-BiS allows the inclusion of NF ports in the forwarding overlay
        definition (see ports a, b, ..., f in
        <xref target="fig:bisbis-nffg"/>). Ports 1, 2, ..., 4 are seen as
        infrastructure ports, while NF ports are created and destroyed with NF
        placements.</t>
<figure anchor="fig:bisbis-nffg" align="center"
title="Big Switch with Big Software definition with a Network
Function - Forwarding Graph (NF-FG)">
<artwork align="center"><![CDATA[
Step 1: Placement of NFs
Step 2: Interconnect NFs __ Step 1: Placement of NFs
\ with the forwarding
Compute Node \ overlay definition
+---------------------+ \
| +-+ +-+ +-+ | \ +-+ +-+ +-+
| |V| |V| |V| | | |V| |V| |V|
| |N| |N| |N| | | |N| |N| |N|
| |F| |F| |F| | | |F| |F| |F|
| |1| |2| |3| | | |1| |2| |3|
| +-+ +-+ +-+ | | +-+ +-+ +-+
| | +---.| |.---+ | | \ | | | | | |
| +------\ /------+ | ==> +--a-b----c-d----e-f--+
+----------x----------+ / | | | | | | | |
| | [1->+ +-->-+ +-->-+ | 3]
+----------x----------+ | | | |
| / \ | | [2 +->4]
[1->----->- -->---+ 3] | | |
| | | | +---------------------+
[2 +->4] / Big Switch with
| Network Element | / Big Software (BiS-BiS)
+---------------------+ /
__/
]]></artwork>
</figure>
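        <t>The atomicity of an NF-FG request over a single BiS-BiS can
        be sketched as follows (a minimal Python illustration; the data
        layout is hypothetical and only loosely follows the virtualizer
        model below). Placement first creates the NF ports, then the
        forwarding overlay may wire infrastructure and NF ports
        together:</t>
        <figure align="center" title="Sketch: atomic NF-FG request over a BiS-BiS">
          <artwork align="center"><![CDATA[
def apply_nffg(bisbis, nfs, flows):
    for nf in nfs:          # Step 1: placement creates NF ports
        bisbis["nf_instances"].append(nf["id"])
        bisbis["ports"] |= nf["ports"]
    for src, dst in flows:  # Step 2: forwarding overlay
        assert src in bisbis["ports"], src
        assert dst in bisbis["ports"], dst
        bisbis["flowtable"].append((src, "output:" + dst))
    return bisbis

bisbis = {"ports": {"1", "2", "3", "4"},
          "nf_instances": [], "flowtable": []}
nfs = [{"id": "VNF1", "ports": {"a", "b"}},
       {"id": "VNF2", "ports": {"c", "d"}},
       {"id": "VNF3", "ports": {"e", "f"}}]
flows = [("1", "a"), ("b", "c"), ("d", "e"), ("f", "3")]
apply_nffg(bisbis, nfs, flows)
assert len(bisbis["flowtable"]) == 4
]]></artwork>
        </figure>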
<section title="The virtualizer's data model">
<section title="Tree view">
<figure anchor="fig:virtualizer-tree" align="center"
title="Virtualizer's YANG data model: tree view">
<artwork align="center"><![CDATA[
module: virtualizer
+--rw virtualizer
+--rw id? string
+--rw name? string
+--rw nodes
| +--rw node* [id]
| +--rw id string
| +--rw name? string
| +--rw type string
| +--rw ports
| | +--rw port* [id]
| | +--rw id string
| | +--rw name? string
| | +--rw port_type string
| | +--rw port_data? string
| +--rw links
| | +--rw link* [src dst]
| | +--rw id? string
| | +--rw name? string
| | +--rw src port-ref
| | +--rw dst port-ref
| | +--rw resources
| | +--rw delay? string
| | +--rw bandwidth? string
| +--rw resources
| | +--rw cpu string
| | +--rw mem string
| | +--rw storage string
| +--rw NF_instances
| | +--rw node* [id]
| | +--rw id string
| | +--rw name? string
| | +--rw type string
| | +--rw ports
| | | +--rw port* [id]
| | | +--rw id string
| | | +--rw name? string
| | | +--rw port_type string
| | | +--rw port_data? string
| | +--rw links
| | | +--rw link* [src dst]
| | | +--rw id? string
| | | +--rw name? string
| | | +--rw src port-ref
| | | +--rw dst port-ref
| | | +--rw resources
| | | +--rw delay? string
| | | +--rw bandwidth? string
| | +--rw resources
| | +--rw cpu string
| | +--rw mem string
| | +--rw storage string
| +--rw capabilities
| | +--rw supported_NFs
| | +--rw node* [id]
| | +--rw id string
| | +--rw name? string
| | +--rw type string
| | +--rw ports
| | | +--rw port* [id]
| | | +--rw id string
| | | +--rw name? string
| | | +--rw port_type string
| | | +--rw port_data? string
| | +--rw links
| | | +--rw link* [src dst]
| | | +--rw id? string
| | | +--rw name? string
| | | +--rw src port-ref
| | | +--rw dst port-ref
| | | +--rw resources
| | | +--rw delay? string
| | | +--rw bandwidth? string
| | +--rw resources
| | +--rw cpu string
| | +--rw mem string
| | +--rw storage string
| +--rw flowtable
| +--rw flowentry* [port match action]
| +--rw port port-ref
| +--rw match string
| +--rw action string
| +--rw resources
| +--rw delay? string
| +--rw bandwidth? string
+--rw links
+--rw link* [src dst]
+--rw id? string
+--rw name? string
+--rw src port-ref
+--rw dst port-ref
+--rw resources
+--rw delay? string
+--rw bandwidth? string
]]></artwork>
</figure>
</section> <!-- tree view -->
<section title="YANG Module">
<figure anchor="fig:virtualizer-yang" align="center"
title="Virtualizer's YANG data model">
<artwork align="center"><![CDATA[
<CODE BEGINS> file "virtualizer.yang"
module virtualizer {
namespace "http://fp7-unify.eu/framework/virtualizer";
prefix virt;
organization "EU-FP7-UNIFY";
contact "Robert Szabo <robert.szabo@ericsson.com>";
description "data model for joint software and network
virtualization and resource control";
revision 2015-06-27 {
reference "Initial version";
}
// REUSABLE GROUPS
grouping id-name {
description "used for key (id) and naming";
leaf id {
type string;
description "For unique key id";}
leaf name {
type string;
description "Descriptive name";}
}
grouping node-type {
    description "For node type definition";
leaf type{
type string;
mandatory true;
description "to identify nodes (infrastructure or NFs)";
}
}
// PORTS
typedef port-ref {
type string;
description "path to a port; can refer to ports at multiple
levels in the hierarchy";
}
grouping port {
description "Port definition: used for infrastructure and NF
ports";
uses id-name;
leaf port_type {
type string;
mandatory true;
description "Port type identification: abstract is for
technology independent ports and SAPs for technology specific
ports";}
leaf port_data{
type string;
description "Opaque data for port specific types";
}
}
grouping ports {
description "Collection of ports";
container ports {
description "see above";
list port{
key "id";
uses port;
description "see above";
}
}
}
// FORWARDING BEHAVIOR
grouping flowentry {
leaf port {
type port-ref;
mandatory true;
description "path to the port";
}
leaf match {
type string;
mandatory true;
description "matching rule";
}
leaf action {
type string;
mandatory true;
description "forwarding action";
}
container resources{
uses link-resource;
description "network resources assigned to forwarding entry";
}
description "SDN forwarding entry";
}
grouping flowtable {
container flowtable {
description "Collection of flowentries";
list flowentry {
key "port match action";
description "Index list of flowentries";
uses flowentry;
}
}
description "See container description";
}
// LINKS
grouping link-resource {
description "Core networking characteristics / resources
(bandwidth, delay)";
leaf delay {
type string;
description "Delay value with unit; e.g. 5ms";
}
leaf bandwidth {
type string;
      description "Bandwidth value with unit; e.g. 10Mbps";
}
}
grouping link {
description "Link between src and dst ports with attributes";
uses id-name;
leaf src {
type port-ref;
description "relative path to the source port";
}
leaf dst {
type port-ref;
description "relative path to the destination port";
}
container resources{
uses link-resource;
description "Link resources (attributes)";
}
}
grouping links {
description "Collection of links in a virtualizer or a node";
container links {
description "See above";
list link {
key "src dst";
description "Indexed list of links";
uses link;
}
}
}
// CAPABILITIES
grouping capabilities {
description "For capability reporting: currently supported NF
types";
container supported_NFs { // supported NFs are enumerated
      description "Collection of nodes as supported NFs";
list node{
key "id";
description "see above";
uses node;
}
}
// TODO: add other capabilities
}
// NODE
grouping software-resource {
description "Core software resources";
leaf cpu {
type string;
mandatory true;
description "In virtual CPU (vCPU) units";
}
leaf mem {
type string;
mandatory true;
description "Memory with units, e.g., 1Gbyte";
}
leaf storage {
type string;
mandatory true;
description "Storage with units, e.g., 10Gbyte";
}
}
grouping node {
description "Any node: infrastructure or NFs";
uses id-name;
uses node-type;
uses ports;
uses links;
container resources{
description "Software resources offer/request of the node";
uses software-resource;
}
}
grouping infra-node {
    description "Infrastructure nodes which can contain other nodes
as NFs";
uses node;
container NF_instances {
description "Hosted NFs";
list node{
key "id";
uses node;
description "see above";
}
}
container capabilities {
description "Supported NFs as capability reports";
uses capabilities;
}
uses flowtable;
}
//======== Virtualizer ====================
container virtualizer {
description "Definition of a virtualizer instance";
uses id-name;
container nodes{
      description "Infrastructure nodes, which embed NFs and report
        capabilities";
list node{
key "id";
uses infra-node;
description "see above";
}
}
uses links;
}
}
<CODE ENDS>
]]></artwork>
</figure>
</section>
</section> <!-- virtualizer's data model -->
</section> <!--sec-virtualizer-->
</section> <!-- sec-recursive -->
<section title="Examples">
<section title="Infrastructure reports">
      <t><xref target="xml:single-node-infra-report"/> shows a single-node
      infrastructure report. The example contains a BiS-BiS with three ports,
      of which Port 0 is also Service Access Point 0 (SAP0).</t>
<figure anchor="xml:single-node-infra-report" align="center"
title="Single node infrastructure report example">
<artwork align="center"><![CDATA[
<virtualizer xmlns="http://fp7-unify.eu/framework/virtualizer">
<id>UUID001</id>
<name>Single node simple infrastructure report</name>
<nodes>
<node>
<id>UUID11</id>
<name>single Bis-Bis node</name>
<type>BisBis</type>
<ports>
<port>
<id>0</id>
<name>SAP0 port</name>
<port_type>port-sap</port_type>
<vxlan>...</vxlan>
</port>
<port>
<id>1</id>
<name>North port</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
<port>
<id>2</id>
<name>East port</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
</ports>
<resources>
<cpu>20</cpu>
<mem>64 GB</mem>
<storage>100 TB</storage>
</resources>
</node>
</nodes>
</virtualizer>
]]></artwork>
</figure>
<t><xref target="xml:3-node-infra-report"/> shows a 3-node
infrastructure report with 3 BiS-BiS nodes. Infrastructure links are
inserted into the virtualization view between the ports of the
BiS-BiS nodes.</t>
<figure anchor="xml:3-node-infra-report" align="center"
title="3-node infrastructure report example">
<artwork align="center"><![CDATA[
<virtualizer xmlns="http://fp7-unify.eu/framework/virtualizer">
<id>UUID002</id>
<name>3-node simple infrastructure report</name>
<nodes>
<node>
<id>UUID11</id>
<name>West Bis-Bis node</name>
<type>BisBis</type>
<ports>
<port>
<id>0</id>
<name>SAP0 port</name>
<port_type>port-sap</port_type>
<vxlan>...</vxlan>
</port>
<port>
<id>1</id>
<name>North port</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
<port>
<id>2</id>
<name>East port</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
</ports>
<resources>
<cpu>20</cpu>
<mem>64 GB</mem>
<storage>100 TB</storage>
</resources>
</node>
<node>
<id>UUID12</id>
<name>East Bis-Bis node</name>
<type>BisBis</type>
<ports>
<port>
<id>1</id>
<name>SAP1 port</name>
<port_type>port-sap</port_type>
<vxlan>...</vxlan>
</port>
<port>
<id>0</id>
<name>North port</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
<port>
<id>2</id>
<name>West port</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
</ports>
<resources>
<cpu>10</cpu>
<mem>32 GB</mem>
<storage>100 TB</storage>
</resources>
</node>
<node>
<id>UUID13</id>
<name>North Bis-Bis node</name>
<type>BisBis</type>
<ports>
<port>
<id>0</id>
<name>SAP2 port</name>
<port_type>port-sap</port_type>
<vxlan>...</vxlan>
</port>
<port>
<id>1</id>
<name>East port</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
<port>
<id>2</id>
<name>West port</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
</ports>
<resources>
<cpu>20</cpu>
<mem>64 GB</mem>
<storage>1 TB</storage>
</resources>
</node>
</nodes>
<links>
<link>
<id>0</id>
<name>Horizontal link</name>
<src>../../nodes/node[id=UUID11]/ports/port[id=2]</src>
<dst>../../nodes/node[id=UUID12]/ports/port[id=2]</dst>
<resources>
<delay>2 ms</delay>
<bandwidth>10 Gb</bandwidth>
</resources>
</link>
<link>
<id>1</id>
<name>West link</name>
<src>../../nodes/node[id=UUID11]/ports/port[id=1]</src>
<dst>../../nodes/node[id=UUID13]/ports/port[id=2]</dst>
<resources>
<delay>5 ms</delay>
<bandwidth>10 Gb</bandwidth>
</resources>
</link>
<link>
<id>2</id>
<name>East link</name>
<src>../../nodes/node[id=UUID12]/ports/port[id=0]</src>
<dst>../../nodes/node[id=UUID13]/ports/port[id=1]</dst>
<resources>
<delay>2 ms</delay>
<bandwidth>5 Gb</bandwidth>
</resources>
</link>
</links>
</virtualizer>
]]></artwork>
</figure>
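      <t>The port-ref leaves in the links above are relative paths. A
      minimal resolver (an illustrative Python sketch, not normative;
      the "../.." prefix is simply dropped here, since the links
      container sits two levels below the virtualizer root) could look
      as follows:</t>
      <figure align="center" title="Sketch: resolving a port-ref path">
        <artwork align="center"><![CDATA[
import re

def resolve_port_ref(root, path):
    node = root
    for step in path.split("/"):
        if step in ("..", ""):
            continue              # relative prefix: use root
        m = re.fullmatch(r"(\w+)\[id=(\w+)\]", step)
        if m:                     # keyed list entry
            name, key = m.groups()
            node = next(x for x in node[name]
                        if x["id"] == key)
        else:                     # plain container
            node = node[step]
    return node

root = {"nodes": {"node": [
    {"id": "UUID11",
     "ports": {"port": [{"id": "2",
                         "name": "East port"}]}}]}}
ref = "../../nodes/node[id=UUID11]/ports/port[id=2]"
assert resolve_port_ref(root, ref)["name"] == "East port"
]]></artwork>
      </figure>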
</section>
<section title="Simple requests">
<t><xref target="xml:simple-request"/> shows the allocation request
for 3 NFs (Parental control B.4, Http Cache 1.2 and Stateful
      firewall C) as instantiated over a single BiS-BiS node. Note that
      the configuration request contains both the NF placement and the
      forwarding overlay definition as a joint request.</t>
<figure anchor="xml:simple-request" align="center"
title="Simple request of 3 NFs on a single BiS-BiS">
<artwork align="center"><![CDATA[
<virtualizer xmlns="http://fp7-unify.eu/framework/virtualizer">
<id>UUID001</id>
<name>Single node simple request</name>
<nodes>
<node>
<id>UUID11</id>
<NF_instances>
<node>
<id>NF1</id>
<name>first NF</name>
<type>Parental control B.4</type>
<ports>
<port>
<id>2</id>
<name>in</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
<port>
<id>3</id>
<name>out</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
</ports>
</node>
<node>
<id>NF2</id>
<name>cache</name>
<type>Http Cache 1.2</type>
<ports>
<port>
<id>4</id>
<name>in</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
<port>
<id>5</id>
<name>out</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
</ports>
</node>
<node>
<id>NF3</id>
<name>firewall</name>
<type>Stateful firewall C</type>
<ports>
<port>
<id>6</id>
<name>in</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
<port>
<id>7</id>
<name>out</name>
<port_type>port-abstract</port_type>
<capability>...</capability>
</port>
</ports>
</node>
</NF_instances>
<flowtable>
<flowentry>
<port>../../ports/port[id=0]</port>
<match>*</match>
<action>output:../../NF_instances/node[id=NF1]
/ports/port[id=2]</action>
</flowentry>
<flowentry>
<port>../../NF_instances/node[id=NF1]
/ports/port[id=3]</port>
<match>fr-a</match>
<action>output:../../NF_instances/node[id=NF2]
/ports/port[id=4]</action>
</flowentry>
<flowentry>
<port>../../NF_instances/node[id=NF1]
/ports/port[id=3]</port>
<match>fr-b</match>
<action>output:../../NF_instances/node[id=NF3]
/ports/port[id=6]</action>
</flowentry>
<flowentry>
<port>../../NF_instances/node[id=NF2]
/ports/port[id=5]</port>
<match>*</match>
<action>output:../../ports/port[id=1]</action>
</flowentry>
<flowentry>
<port>../../NF_instances/node[id=NF3]
/ports/port[id=7]</port>
<match>*</match>
<action>output:../../ports/port[id=1]</action>
</flowentry>
</flowtable>
</node>
</nodes>
</virtualizer>
]]></artwork>
</figure>
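      <t>A receiving orchestrator can sanity-check such a request:
      every flowentry must reference either an infrastructure port of
      the BiS-BiS or a port of one of the requested NF instances. The
      compact Python form below mirrors the endpoints of the request
      above (illustrative only):</t>
      <figure align="center" title="Sketch: flowtable consistency check">
        <artwork align="center"><![CDATA[
def check_request(infra_ports, nf_ports, flowentries):
    """nf_ports: {nf_id: {port_id, ...}}; an endpoint is
    ('infra', port_id) or (nf_id, port_id)."""
    def known(ep):
        owner, port = ep
        if owner == "infra":
            return port in infra_ports
        return port in nf_ports.get(owner, set())
    return all(known(src) and known(dst)
               for src, dst in flowentries)

flows = [(("infra", "0"), ("NF1", "2")),
         (("NF1", "3"), ("NF2", "4")),
         (("NF1", "3"), ("NF3", "6")),
         (("NF2", "5"), ("infra", "1")),
         (("NF3", "7"), ("infra", "1"))]
nf_ports = {"NF1": {"2", "3"}, "NF2": {"4", "5"},
            "NF3": {"6", "7"}}
assert check_request({"0", "1"}, nf_ports, flows)
assert not check_request({"0", "1"}, nf_ports,
                         [(("infra", "0"), ("NF9", "9"))])
]]></artwork>
      </figure>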
</section>
</section>
<section anchor="IANA" title="IANA Considerations">
<t>This memo includes no request to IANA.</t>
</section>
<section anchor="Security" title="Security Considerations">
<t>TBD</t>
</section>
<section title="Acknowledgement" anchor="acknowledgement">
<t> The research leading to these results has received funding
from the European Union Seventh Framework Programme
(FP7/2007-2013) under grant agreement no. 619609 - the UNIFY
project. The views expressed here are those of the authors
only. The European Commission is not liable for any use that
may be made of the information in this document.</t>
<t> We would like to thank in particular David Jocha and Janos
Elek from Ericsson for the useful discussions.</t>
</section>
</middle>
<back>
<references title="Informative References">
<reference anchor="ETSI-NFV-Arch" target="http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.01.01_60/gs_NFV002v010101p.pdf">
<front>
<title>Architectural Framework v1.1.1</title>
<author>
<organization>ETSI</organization>
</author>
<date month="Oct" year="2013" />
</front>
</reference>
<reference anchor="ETSI-NFV-MANO" target="http://docbox.etsi.org/ISG/NFV/Open/Latest_Drafts/NFV-MAN001v061-%20management%20and%20orchestration.pdf">
<front>
<title>Network Function Virtualization (NFV) Management and
Orchestration V0.6.1 (draft)</title>
<author>
<organization>ETSI</organization>
</author>
<date month="Jul." year="2014" />
</front>
</reference>
&I-D.unify-nfvrg-challenges;
&I-D.ietf-sfc-dc-use-cases;
&I-D.zu-nfvrg-elasticity-vnf;
</references>
</back>
</rfc>