<?xml version="1.0" encoding="UTF-8"?>
<?rfc toc="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes" ?>
<?rfc tocindent="no"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY I-D.unify-nfvrg-challenges SYSTEM "http://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.unify-nfvrg-challenges.xml">
<!ENTITY I-D.ietf-sfc-dc-use-cases SYSTEM "http://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-sfc-dc-use-cases.xml">
<!ENTITY I-D.zu-nfvrg-elasticity-vnf SYSTEM "http://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.zu-nfvrg-elasticity-vnf.xml">
]>

<rfc category="info" ipr="trust200902" docName="draft-unify-nfvrg-recursive-programming-00">
  <front>
    <title abbrev="Towards recursive programming">Towards recursive virtualization and programming for
    network and cloud resources</title>
    
    <author fullname="Robert Szabo" initials="R." surname="Szabo">
      <organization abbrev="Ericsson">Ericsson Research, Hungary</organization>
      <address>
	<postal>
	  <street>Irinyi Jozsef u. 4-20</street>
	  <city>Budapest</city>
	  <region></region>
	  <code>1117</code>
	  <country>Hungary</country>
	</postal>
	<email>robert.szabo@ericsson.com</email>
	<uri>http://www.ericsson.com/</uri>
      </address>
    </author>

    <author fullname="Zu Qiang" initials="Z." surname="Qiang">
      <organization abbrev="Ericsson">Ericsson</organization>
      <address>
	<postal>
	  <street>8400, boul. Decarie</street>
	  <city>Ville Mont-Royal</city>
	  <region>QC</region>
	  <code>8400</code>
	  <country>Canada</country>
	</postal>
	<email>zu.qiang@ericsson.com</email>
	<uri>http://www.ericsson.com/</uri>
      </address>
    </author>

	<author fullname="Mario Kind" initials="M." surname="Kind">
	<organization abbrev="Deutsche Telekom AG">Deutsche Telekom AG</organization>
	<address>
		<postal>
			<street>Winterfeldtstr. 21</street>
			<city>Berlin</city>
			<code>10781</code>
			<country>Germany</country>
		</postal>
	<email>mario.kind@telekom.de</email>
	</address>
	</author>

    <date year="2015" />

    <area>IRTF</area>
    <workgroup>NFVRG</workgroup>

    <keyword>Internet-Draft</keyword>

    <abstract>
      <t>The introduction of Network Function Virtualization (NFV) in
      carrier-grade networks promises improved operations in terms of
      flexibility, efficiency, and manageability. NFV is an approach
      that combines network and compute virtualization. However,
      network and compute resource domains expose different
      virtualizations and programmable interfaces. In
      <xref target="I-D.unify-nfvrg-challenges"/> we argued for a
      joint compute and network virtualization by looking into
      different compute abstractions.</t>

      <!-- They usually expose detailed enough control only to their -->
      <!-- primarily resource types. Orchestrating service graphs with -->
      <!-- transparent network functions, however, requires coordination of -->
      <!-- networking across compute and network domains. -->


      <t>In this document we analyze different approaches to
      orchestrate a service graph with transparent network functions
      into a commodity data center.  We show that a recursive joint
      compute and network virtualization and programming approach has
      clear advantages over approaches with separated control of
      compute and network resources. While the discussion of the
      problems and of the proposed solution is generic to any data
      center use case, we use NFV as an example.</t>
    </abstract>
  </front>

  <middle>

    <section title="Introduction" anchor="intro">
      <t>To a large degree there is agreement in the research
	community that rigid network control limits the flexibility of
	service creation.  In
	<xref target="I-D.unify-nfvrg-challenges"/>
	<list style="symbols">
	    <t> we analyzed different compute domain abstractions to
	      argue that a joint compute and network virtualization
	      and programming is needed for efficient combination of
	      these resource domains;</t>
	    <t> we described challenges associated with the combined
	      handling of compute and network resources for a unified
	      production environment.</t>
	</list>
      </t>


      <t>Our goal here is to analyze different approaches to
      instantiate a service graph with transparent network functions
      into a commodity Data Center (DC).  More specifically, we
      analyze
	<list style="symbols">
	  <t> two black box DC set-ups, where the intra DC network
	    control is limited to some generic compute only control
	    programming interface; </t>
	  <t> a white box DC set-up, where the intra DC network
	    control is exposed directly to a DC-external controller
	    that coordinates the forwarding configuration;</t>
	  <t> a recursive approach, which illustrates potential
	    benefits of a joint compute and network virtualization and
	    control.</t>
	</list>
      </t>

      <t>While the discussion of the problems and of the proposed
      solution is generic to any data center use case, we use NFV as
      an example.</t>
    </section>

    <section title="Terms and Definitions" anchor="sec-terms">

      <t>We use the terms "compute" and "compute and storage"
      interchangeably throughout the document. Moreover, we use the
      following definitions, as established in
      <xref target="ETSI-NFV-Arch"/>:</t>

      <t><list style="hanging">
	<t hangText="NFV:">Network Function Virtualization - The principle
	of separating network functions from the hardware they run on by
	using virtual hardware abstraction.</t>

	<t hangText="NFVI:">NFV Infrastructure - Any combination of
	virtualized compute, storage and network resources.</t>

	<t hangText="VNF:">Virtualized Network Function - a software-based
	network function.</t>

	<t hangText="MANO:">Management and Orchestration - In the ETSI NFV
	framework <xref target="ETSI-NFV-MANO"/>, this is the global entity
	responsible for management and orchestration of the NFV lifecycle.</t>
      </list></t>

      <t>Further, we make use of the following terms:</t>
      <t><list style="hanging">
	<t hangText="NF:">a network function, either software-based (VNF) or
	appliance-based.</t>

	<t hangText="SW:">a (routing/switching) network element with a
	programmable control plane interface.</t>

	<t hangText="DC:"> a data center network element which in addition
	to a programmable control plane interface offers a DC control
	interface.</t>

	<t hangText="CN:"> a compute node network element, which is
	controlled by a DC control plane and provides execution
	environment for virtual machine (VM) images such as VNFs.</t>

      </list></t>

    </section>

    
    <section title="Use Cases" anchor="sec-ucs">

      <t>The inclusion of commodity Data Centers (DCs), e.g.,
	OpenStack, into service graphs is far from trivial
	<xref target="I-D.ietf-sfc-dc-use-cases"/>: different
	exposures of the internals of the DC imply different
	operational dynamics and orchestration complexities, and may
	yield different business cases with regard to infrastructure
	sharing. </t>

      <t>We investigate different scenarios with a simple forwarding
      graph of three VNFs (o->VNF1->VNF2->VNF3->o), where all VNFs are
      deployed within the same DC. We assume that the DC is a
      multi-tier leaf-and-spine (Clos) fabric with top-of-rack
      switches connecting Compute Nodes (CNs) and that all VNFs are
      transparent (bump-in-the-wire) Service Functions.</t>

      <section title="Black Box DC" anchor="sec-ucs-bb">
	<t>In black box DC set-ups we assume that the compute
	   domain is an autonomous domain with legacy (e.g., OpenStack)
	   orchestration APIs. Due to the lack of direct forwarding
	   control within the DC, no native L2 forwarding can be used
	   to insert VNFs running in the DC into the forwarding
	   graph. Instead, explicit tunnels (e.g., VxLAN) must be
	   used, which need termination support within the deployed
	   VNFs.  Therefore, VNFs must be aware of the previous and
	   the next hops of the forwarding graph to receive and
	   forward packets accordingly.</t>

	
	<section title="Black Box DC with L3 tunnels"
		 anchor="sec-ucs-bb-l3"> 
	   <t><xref target="fig_bb_vnf"/> illustrates a set-up where
	   an external VxLAN termination point in the SDN domain is
	   used to forward packets into the first SF (VNF1) of the
	   chain within the DC. VNF1, in turn, is configured to
	   forward packets to the next SF (VNF2) in the chain and so
	   forth with VNF2 and VNF3.</t>

	   <t>In this set-up VNFs must be capable of handling L3
	   tunnels (e.g., VxLAN) and must act as forwarders
	   themselves. Additionally, an operational L3 underlay must
	   be present so that VNFs can address each
	   other.</t>

	   <t>Furthermore, VNFs holding chain forwarding information
	   could be untrusted user plane functions from third-party
	   developers; enforcement of proper forwarding is therefore
	   problematic.</t>

	   <t> Additionally, compute-only orchestration might result
	   in sub-optimal allocation of the VNFs with regard to the
	   forwarding overlay; see, for example, the back-and-forth
	   use of a core switch in <xref target="fig_bb_vnf"/>.</t>

	   <t>In <xref target="I-D.unify-nfvrg-challenges"/> we also
	    pointed out that within a single Compute Node (CN) a
	    similar VNF placement and overlay optimization problem may
	    reappear in the context of network interface cards and CPU
	    cores.</t>

	  <figure anchor="fig_bb_vnf" align="center"
		title="Black Box Data Center with VNF Overlay">
	    <artwork align="center"><![CDATA[

                              |                         A     A
                            +---+                       | S   |
                            |SW1|                       | D   |
                            +---+                       | N   | P
                           /     \                      V     | H
                          /       \                           | Y
                         |         |                    A     | S
                       +---+      +-+-+                 |     | I
                       |SW |      |SW |                 |     | C
                      ,+--++.._  _+-+-+                 |     | A
                   ,-"   _|,,`.""-..+                   | C   | L
                 _,,,--"" |    `.   |""-.._             | L   |
            +---+      +--++     `+-+-+    ""+---+      | O   |
            |SW |      |SW |      |SW |      |SW |      | U   |
            +---+    ,'+---+    ,'+---+    ,'+---+      | D   |
            |   | ,-"  |   | ,-"  |   | ,-"  |   |      |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    |     |
          |CN| |CN|  |CN| |CN|  |CN| |CN|  |CN| |CN|    |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    V     V
            |          |                          |
           +-+        +-+                        +-+          A
           |V|        |V|                        |V|          | L
           |N|        |N|                        |N|          | O
           |F|        |F|                        |F|          | G
           |1|        |3|                        |2|          | I
           +-+        +-+                        +-+          | C
+---+ --1>-+ |        | +--<3---------------<3---+ |          | A
|SW1|        +-2>-----------------------------2>---+          | L
+---+ <4--------------+                                       V

    <<=============================================>>
		   IP tunnels, e.g., VxLAN

						   ]]></artwork>
	  </figure>
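	  <t>The per-VNF tunnel awareness required in this set-up can
	  be illustrated with a short sketch. The sketch below is not
	  part of any orchestration API; the chain, names and IP
	  addresses are hypothetical and serve only to show that every
	  VNF must be configured with the addresses of both its
	  previous and its next hop, so relocating any VNF invalidates
	  the tunnel state of its neighbors.</t>

	  <figure align="center" title="Sketch: per-VNF tunnel configuration (illustrative)">
	    <artwork align="left"><![CDATA[
```python
# Illustrative sketch (hypothetical names and addresses): in the black
# box L3-tunnel set-up each VNF terminates tunnels itself, so it must
# know the previous hop (to receive) and the next hop (to forward).

CHAIN = ["SW1", "VNF1", "VNF2", "VNF3", "SW1"]  # o->VNF1->VNF2->VNF3->o
ADDR = {"SW1": "192.0.2.1", "VNF1": "198.51.100.11",
        "VNF2": "198.51.100.12", "VNF3": "198.51.100.13"}

def tunnel_config(chain, addr):
    """Per-VNF tunnel state implied by the chain definition."""
    cfg = {}
    for prev_hop, vnf, next_hop in zip(chain, chain[1:], chain[2:]):
        cfg[vnf] = {"receive_from": addr[prev_hop],
                    "forward_to": addr[next_hop]}
    return cfg

config = tunnel_config(CHAIN, ADDR)
```
]]></artwork>
	  </figure>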
	  
	</section> <!-- sec-ucs-bb-l3 -->
	
	<section title="Black Box DC with external steering"
		 anchor="sec-ucs-bb-ext">

	  <t><xref target="fig_bb_ext"/> illustrates a set-up where an
	  external VxLAN termination point in the SDN domain is used
	  to forward packets among all the SFs (VNF1-VNF3) of the
	  chain within the DC. VNFs in the DC need to be configured to
	  exchange packets only with the SDN endpoint and hence are
	  not aware of the next-hop VNF address. Should any VNF need
	  to be relocated, e.g., due to scale in/out as
	  described in <xref target="I-D.zu-nfvrg-elasticity-vnf"/>,
	  the forwarding overlay can be transparently re-configured in
	  the SDN domain.</t>

	  <t>Note, however, that traffic between the DC internal SFs
	  (VNF1, VNF2, VNF3) needs to exit and re-enter the DC through
	  the external SDN switch.  This is certainly sub-optimal and
	  results in ping-pong traffic similar to the local and remote
	  DC case discussed in <xref target="I-D.zu-nfvrg-elasticity-vnf"/>.</t>

	  <figure anchor="fig_bb_ext" align="center"
		  title="Black Box Data Center with ext Overlay">
	    <artwork align="center"><![CDATA[

                              |                         A     A
                            +---+                       | S   |
                            |SW1|                       | D   |
                            +---+                       | N   | P
                           /     \                      V     | H
                          /       \                           | Y
                         |         |   ext port         A     | S
                       +---+      +-+-+                 |     | I
                       |SW |      |SW |                 |     | C
                      ,+--++.._  _+-+-+                 |     | A
                   ,-"   _|,,`.""-..+                   | C   | L
                 _,,,--"" |    `.   |""-.._             | L   |
            +---+      +--++     `+-+-+    ""+---+      | O   |
            |SW |      |SW |      |SW |      |SW |      | U   |
            +---+    ,'+---+    ,'+---+    ,'+---+      | D   |
            |   | ,-"  |   | ,-"  |   | ,-"  |   |      |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    |     |
          |CN| |CN|  |CN| |CN|  |CN| |CN|  |CN| |CN|    |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    V     V
            |          |                          |
           +-+        +-+                        +-+          A
           |V|        |V|                        |V|          | L
           |N|        |N|                        |N|          | O
           |F|        |F|                        |F|          | G
           |1|        |3|                        |2|          | I
           +-+        +-+                        +-+          | C
+---+ --1>-+ |        | |                        | |          | A
|SW1| <2-----+        | |                        | |          | L
|   | --3>---------------------------------------+ |          |
|   | <4-------------------------------------------+          |
|   | --5>------------+ |                                     |
+---+ <6----------------+                                     V

     <<=============================================>>
                     IP tunnels, e.g., VxLAN

						    ]]></artwork>
	  </figure>
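	  <t>The steering logic of this set-up can be sketched as
	  follows; all names are hypothetical. The sketch shows that
	  the whole chaining state is held by the SDN switch while the
	  VNFs stay chain-unaware, and that every VNF in the chain
	  adds one crossing into the DC and one back out (the
	  ping-pong traffic noted above).</t>

	  <figure align="center" title="Sketch: external steering rules (illustrative)">
	    <artwork align="left"><![CDATA[
```python
# Illustrative sketch (hypothetical names): with external steering the
# chaining logic lives at the SDN switch; each VNF only exchanges
# packets with the SDN endpoint, at the cost of ping-pong traffic.

CHAIN = ["VNF1", "VNF2", "VNF3"]

def sdn_steering_rules(chain):
    """One steering rule per hop, all installed at the SDN switch."""
    hops = ["ingress"] + chain + ["egress"]
    return [{"from": a, "to": b} for a, b in zip(hops, hops[1:])]

def dc_boundary_crossings(chain):
    # each chained VNF adds one crossing into the DC and one back out
    return 2 * len(chain)

rules = sdn_steering_rules(CHAIN)
crossings = dc_boundary_crossings(CHAIN)
```
]]></artwork>
	  </figure>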

	</section>  <!-- sec-ucs-bb-ext --> 
      </section>  <!-- sec-ucs-bb -->

      <section title="White Box DC" anchor="sec-ucs-wb">

	<t><xref target="fig_wb"/> illustrates a set-up where the
	internal network of the DC is exposed in full detail through
	an SDN Controller for steering control. We assume that native
	L2 forwarding can be applied all through the DC up to the
	VNFs' ports, hence IP tunneling and tunnel termination at the
	VNFs are not needed. Therefore, VNFs need not be
	forwarding-graph aware but transparently receive and forward
	packets.  However, the implication is that the network control
	of the DC must be handed over to an external forwarding
	controller (note that the SDN domain and the DC domain overlap
	in <xref target="fig_wb"/>). This most probably prohibits
	clear operational separation or separate ownership of the two
	domains.</t>

      <figure anchor="fig_wb" align="center"
	      title="White Box Data Center with L2 Overlay">
	<artwork align="center"><![CDATA[

                              |                     A         A
                            +---+                   | S       |
                            |SW1|                   | D       |
                            +---+                   | N       | P
                           /     \                  |         | H
                          /       \                 |         | Y
                         |         |   ext port     |   A     | S
                       +---+      +-+-+             |   |     | I
                       |SW |      |SW |             |   |     | C
                      ,+--++.._  _+-+-+             |   |     | A
                   ,-"   _|,,`.""-..+               |   | C   | L
                 _,,,--"" |    `.   |""-.._         |   | L   |
            +---+      +--++     `+-+-+    ""+---+  |   | O   |
            |SW |      |SW |      |SW |      |SW |  |   | U   |
            +---+    ,'+---+    ,'+---+    ,'+---+  V   | D   |
            |   | ,-"  |   | ,-"  |   | ,-"  |   |      |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    |     |
          |CN| |CN|  |CN| |CN|  |CN| |CN|  |CN| |CN|    |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    V     V
            |          |                          |
           +-+        +-+                        +-+          A
           |V|        |V|                        |V|          | L
           |N|        |N|                        |N|          | O
           |F|        |F|                        |F|          | G
           |1|        |3|                        |2|          | I
           +-+        +-+                        +-+          | C
+---+ --1>-+ |        | +--<3---------------<3---+ |          | A
|SW1|        +-2>-----------------------------2>---+          | L
+---+ <4--------------+                                       V

    <<=============================================>>
      		      L2 overlay

		      ]]></artwork>
      </figure>
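	<t>The white box steering model can be sketched as follows;
	the switch and port identifiers are hypothetical. The point of
	the sketch is that the external controller installs a native
	L2 forwarding entry on every switch along the fabric path, so
	the VNFs need neither tunnel termination nor forwarding-graph
	awareness.</t>

	<figure align="center" title="Sketch: native L2 flow rules (illustrative)">
	  <artwork align="left"><![CDATA[
```python
# Illustrative sketch (hypothetical switch/port identifiers): in the
# white box set-up an external controller programs native L2
# forwarding along the fabric path between consecutive VNFs.

def l2_flow_rules(path, vlan):
    """One match/output flow entry per (switch, out_port) on the path."""
    return [{"switch": sw, "match": {"vlan": vlan},
             "action": {"output": port}} for sw, port in path]

# hypothetical path from VNF1's CN through the fabric to VNF2's CN
PATH_VNF1_TO_VNF2 = [("tor-1", 3), ("spine-1", 7), ("tor-4", 2)]
rules = l2_flow_rules(PATH_VNF1_TO_VNF2, vlan=100)
```
]]></artwork>
	</figure>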

	
      </section> <!-- sec-ucs-wb -->
      
    </section> <!-- sec-ucs --> 

    <section title="Recursive approach" anchor="sec-ucs-unify">

      <t>We argued in <xref target="I-D.unify-nfvrg-challenges"/> for
	a joint software and network programming interface.  Consider
	that such a joint software and network abstraction
	(virtualization) exists around the DC with a corresponding
	resource programming interface.  A software and network
	programming interface could include VNF requests and the
	definition of the corresponding network overlay.  Such a
	programming interface is similar to the top-level service
	definition, for example, by means of a VNF Forwarding
	Graph.</t>

	<t><xref target="fig_rec_1"/> illustrates a joint domain
	virtualization and programming set-up. VNF placement and the
	corresponding traffic steering could be defined in an abstract
	way, then orchestrated, split and handed over to the next
	level in the hierarchy for further orchestration. Such a set-up
	allows clear operational separation, arbitrary domain
	virtualization (e.g., topology details could be omitted) and
	constraint-based optimization of domain-wide resources.</t>


      <figure anchor="fig_rec_1" align="center"
	      title="Recursive Domain Virtualization and Joint VNF FG
		     programming">

	<artwork align="center"><![CDATA[
+-------------------------------------------------------+ A
| +----------------------------------------------+  A   | |
| | SDN Domain            |                      |  |   | |
| |                     +---+                    |  |S  | |
| |                     |SW1|                    |  |D  | |O
| |                     +---+                    |  |N  | |V
| |                    /     \                   |  |   | |E
| +-------------------+-------+------------------+  V   | |R
|                    |         |                        | |A
| +----------------------------------------------+  A   | |R
| | DC Domain                                    |  |   | |C
| | Joint         +---+      +-+-+               |  |   | |H
| | Abstraction   |SW |      |SW |               |  |D  | |I
| | Softw +      ,+--++.._  _+-+-+               |  |C  | |N
| | Network   ,-"   _|,,`.""-..+                 |  |   | |G
| |         _,,,--"" |    `.   |""-.._           |  |V  | |
| |    +---+      +--++     `+-+-+    ""+---+    |  |I  | |V
| |    |SW |      |SW |      |SW |      |SW |    |  |R  | |I
| |    +---+    ,'+---+    ,'+---+    ,'+---+    |  |T  | |R
| |    |   | ,-"  |   | ,-"  |   | ,-"  |   |    |  |   | |T
| |  +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+  |  |   | |
| |  |CN| |CN|  |CN| |CN|  |CN| |CN|  |CN| |CN|  |  |   | |
| |  +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+  |  |   | |
| |                                              |  |   | |
| +----------------------------------------------+  V   | |
+-------------------------------------------------------+ V

        +--------------------------------------+
        |                    DC Domain         |
        |              +---------------------+ |
        |              |  +-+    +-+    +-+  | |
        |              |  |V|    |V|    |V|  | |
        |              |  |N|    |N|    |N|  | |
        | SDN Domain   |  |F|    |F|    |F|  | |
        | +---------+  |  |1|    |2|    |3|  | |
        | |         |  |  +-+    +-+    +-+  | |
        | |  +---+--+--+>-+ |    | |    | |  | |
        | |  |SW1|  |  |    +-->-+ +-->-+ |  | |
        | |  +---+--+<-+------------------+  | |
        | +---------+  +---------------------+ |
        |                                      |
        |<<=========>><<=====================>>|
        |   VNF FG1            VNF FG2         |
        +--------------------------------------+

         <<==================================>>
              VNF Forwarding Graph overall

    ]]></artwork>
      </figure>
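      <t>The split step of the recursive approach can be sketched as
      follows; the placement mapping is hypothetical. A top-level
      orchestrator working on the joint virtualization maps each
      element of the VNF Forwarding Graph to a domain and splits the
      graph at domain boundaries, mirroring the VNF FG1 / VNF FG2
      split shown above; each sub-request is then handed down for
      further orchestration.</t>

      <figure align="center" title="Sketch: splitting a VNF FG at domain boundaries (illustrative)">
	<artwork align="left"><![CDATA[
```python
# Illustrative sketch (hypothetical placement): split a VNF Forwarding
# Graph into per-domain sub-requests at domain boundaries, which are
# then handed down the hierarchy for further orchestration.

PLACEMENT = {"SW1": "SDN", "VNF1": "DC", "VNF2": "DC", "VNF3": "DC"}

def split_forwarding_graph(chain, placement):
    """Group consecutive chain elements mapped to the same domain."""
    subgraphs = []
    for node in chain:
        domain = placement[node]
        if subgraphs and subgraphs[-1]["domain"] == domain:
            subgraphs[-1]["nodes"].append(node)
        else:
            subgraphs.append({"domain": domain, "nodes": [node]})
    return subgraphs

parts = split_forwarding_graph(["SW1", "VNF1", "VNF2", "VNF3"],
                               PLACEMENT)
```
]]></artwork>
      </figure>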
      </section> <!-- sec-recursive -->


    <section anchor="IANA" title="IANA Considerations">
      <t>This memo includes no request to IANA.</t>
      
    </section>

    <section anchor="Security" title="Security Considerations">
      <t>TBD</t>
    </section>
    <section title="Acknowledgement" anchor="acknowledgement">

      <!-- <t>The authors would like to thank the UNIFY team for inspiring -->
      <!-- 	discussions and in particular Fritz-Joachim Westphal for his -->
      <!-- 	comments and suggestions on how to refine this draft.</t> -->

      <t> The research leading to these results has received funding
	from the European Union Seventh Framework Programme
	(FP7/2007-2013) under grant agreement no. 619609 - the UNIFY
	project. The views expressed here are those of the authors
	only. The European Commission is not liable for any use that
	may be made of the information in this document.</t>

      <t> We would like to thank in particular David Jocha and Janos
	Elek from Ericsson for the useful discussions.</t>
    </section>



  </middle>

  <back>

    <references title="Informative References">

      <reference anchor="ETSI-NFV-Arch" target="http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.01.01_60/gs_NFV002v010101p.pdf">
	<front>
          <title>Network Functions Virtualisation (NFV); Architectural Framework v1.1.1</title>
          <author>
            <organization>ETSI</organization>
          </author>
          <date month="October" year="2013" />
	</front>
      </reference>


      <reference anchor="ETSI-NFV-MANO" target="http://docbox.etsi.org/ISG/NFV/Open/Latest_Drafts/NFV-MAN001v061-%20management%20and%20orchestration.pdf">
	<front>
          <title>Network Function Virtualization (NFV) Management and
	    Orchestration V0.6.1 (draft)</title>
          <author>
            <organization>ETSI</organization>
          </author>
          <date month="July" year="2014" />
	</front>
      </reference>


      &I-D.unify-nfvrg-challenges;
      &I-D.ietf-sfc-dc-use-cases;
      &I-D.zu-nfvrg-elasticity-vnf;
      
    </references>



  </back>
</rfc>
