

MBONED Working Group                                            B. Cain
INTERNET-DRAFT                                          Nortel Networks
Expires August 2000                                       February 2000



                      Connecting Multicast Domains 
                <draft-ietf-mboned-mcast-connect-00.txt> 


STATUS OF THIS MEMO

 This document is an Internet-Draft and is in  full  conformance  with
 all provisions of Section 10 of RFC2026.

 Internet-Drafts are working documents  of  the  Internet  Engineering
 Task  Force  (IETF),  its  areas,  and its working groups.  Note that
 other groups may  also  distribute  working  documents  as  Internet-
 Drafts.

 Internet-Drafts are draft documents valid for a maximum of six months
 and  may be updated, replaced, or obsoleted by other documents at any
 time.  It is inappropriate  to  use  Internet-Drafts  as  reference
 material or to cite them other than as work in progress.

 The  list   of   current   Internet-Drafts   can   be   accessed   at
 http://www.ietf.org/ietf/1id-abstracts.txt

 The list of Internet-Draft Shadow  Directories  can  be  accessed  at
 http://www.ietf.org/shadow.html.
 


                                Abstract

   New deployment of multicast routing in Internet Service Provider
   networks is through the use of the PIM-SM [PIMSM], MSDP [MSDP], and 
   MBGP [MBGP] protocols.  This informational document describes 
   several solutions for the connection of different types of multicast
   routing domains.  In particular, the problems and 
   solutions for the connection of a stub intra-domain multicast 
   routing domain to a transit (ISP) PIM-SM domain are addressed.












Expires August 2000                                          [Page 1]

INTERNET-DRAFT        Connecting Multicast Domains       February 2000

1. Introduction

   New deployment of multicast routing in Internet Service Provider
   networks is through the use of the PIM-SM [PIMSM], MSDP [MSDP], and 
   MBGP [MBGP] protocols.  This informational document describes 
   several solutions for the connection of different types of multicast
   routing domains.  In particular, it describes the problems (and 
   solutions) for the connection of a stub intra-domain multicast 
   routing domain to a transit (ISP) PIM-SM domain.  Because stub 
   domains may use a variety of multicast routing protocols it is 
   important to understand the connection issues between a provider
   PIM-SM domain and stub domains. 

   In [INTEROP], an interoperability mechanism is described which can 
   be implemented in multicast border routers to route multicast 
   traffic between domains.  This is accomplished through a shared 
   multicast forwarding table between two or more multicast routing 
   protocols.  [INTEROP] describes the creation of the shared 
   forwarding cache, and the details of individual protocols from
   the perspective of protocol implementors.

   In this document, multiple scenarios are presented for the actual 
   interconnection of stub and transit domains.  We assume 
   that there is a multicast border router (BR) present which is 
   either part of the transit network or part of the stub network 
   which implements the mechanisms described in [INTEROP].  We assume 
   that the BR has two components, one which is the PIM-SM protocol, 
   and one which is the stub domain's intra-domain multicast routing 
   protocol. 


1.1 Transit Domain (ISP) Configuration

   In Internet Service Provider networks, PIM-SM has become the 
   de-facto multicast routing protocol, or tree-building protocol.  In 
   order to connect PIM-SM domains, the MSDP protocol is used.  MSDP is
   a source distribution protocol, which distributes lists of sources 
   to all PIM-SM Rendezvous Points.  To provide for multicast specific
   routing policies, Multi-protocol BGP is used for multicast specific 
   routes.


1.2 Stub Domain Configuration

   Intra-domain networks may run a variety of multicast routing 
   protocols, such as PIM-DM [PIMDM], PIM-SM [PIMSM], MOSPF [MOSPF], or
   DVMRP [DVMRP].  These networks use multicast for private specialized
   applications.  In many circumstances, an intra-domain stub domain 
   may wish to receive multicast connectivity from its ISP to receive
   inter-domain multicast traffic.  Many ISPs have been offering access
   to the legacy DVMRP part of the MBone, but recently, ISPs have 
   begun to offer PIM-SM/MSDP connectivity as well.  Because stub
   domains may run a variety of protocols, confusion exists about the 
   connectivity options when connecting to a PIM-SM provider domain.


1.3 General Configuration Issues

   There are a number of general issues to consider when connecting
   multicast domains.  The following provides a quick summary of the 
   common issues which are not addressed in this document.

    	- Group Scoping: Stub domains may wish to scope certain groups 
          to stay within their domain.  This is best accomplished with
	  administratively scoped addresses [ASCOPE].  Administrative
	  scoping ranges are configured on all border routers so as to
	  not forward scoped groups out of the domain.
	- Special Addresses: Certain multicast addresses are used for
	  protocol purposes which are specific to a domain 
          (e.g. bootstrap messages).  These messages should use
          administratively scoped addresses and therefore should be
          filtered at domain boundaries.
	- Group Ownership: If a stub domain wishes to use global 
          addresses for multicast groups, it should use one of the 
          multicast address allocation mechanisms [GLOP, MALLOC] in 
          place to do so.  By ignoring the problems of address 
          allocation, a domain may select an address which collides 
          with another, which could cause excess traffic and possibly
	  denial of service to other groups.
	- Multi-homing: Multi-homing multicast is difficult (note:  
          meaning the actual multi-homing of multicast traffic, not 
          unicast multi-homing with multicast enabled).  Because 
          multicast routing protocols use RPF checks to prevent packet 
          looping, routing configurations must correctly reflect the 
          actual path of source packets.  Multi-homing becomes more
          difficult when different route distribution protocols are 
          used to distribute routes (e.g. DVMRP and MBGP).  It should 
          be noted that a multicast source cannot be "load-balanced"
	  over multiple ingress points.  Because packet looping must be
	  prevented, a set of sources must be injected at one point 
          into the network (of course this does not prevent the use
          of *backup* routes). 
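   The group-scoping item above amounts to a boundary filter on border
   routers.  As an illustrative sketch only (the function name is
   hypothetical, not from any router implementation), assuming the IPv4
   administratively scoped range 239.0.0.0/8 defined in [ASCOPE]:

```python
# Illustrative boundary check for administratively scoped groups.
# Assumes the IPv4 administratively scoped range 239.0.0.0/8 [ASCOPE];
# the function name is hypothetical, not from any real implementation.
import ipaddress

ADMIN_SCOPED = ipaddress.ip_network("239.0.0.0/8")

def may_forward_out_of_domain(group):
    """Return True if traffic for this group may cross the boundary."""
    return ipaddress.ip_address(group) not in ADMIN_SCOPED
```

   A border router would apply such a check (plus any locally
   configured scoping ranges) to every group before forwarding across
   the domain boundary.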


1.4 Document Organization

   The following sections describe the methods of connecting multicast
   domains.  Section 2 describes the connection of a stub
   flood-and-prune domain to a provider domain using PIM-SM.  Section 3
   describes the connection of a stub PIM-SM domain to a provider
   domain using PIM-SM.  Section 4 describes the connection of
   domains running MOSPF and IGMP-Proxy to a PIM-SM provider domain.
   Section 5 describes the problems of distributing multicast 
   specific routes between domains.


2. Flood and Prune Protocols

   Many stub or enterprise domains run flood and prune protocols.  
   These protocols, such as DVMRP and PIM-DM, are used because they are
   simple to deploy and have been available for a long time.  This 
   section describes the problems and solutions for connecting a flood 
   and prune stub domain to a transit ISP domain running 
   PIM-SM/MSDP/MBGP.  The problems of route distribution are 
   deferred until section 5.


2.1 The Problems
  
   Flood and prune protocols build multicast trees differently than the 
   explicit join mechanism of the PIM-SM protocol.  Flood and prune 
   protocols assume group membership and use prune messages to prune 
   unwanted traffic.  This is in contrast to explicit join protocols 
   like PIM-SM in which leaf routers explicitly set up tree branches 
   when an IGMP join is received.
  
   In order to connect a flood and prune domain to a shared tree domain,
   it is necessary to:

	1. Communicate group membership information between domains.
	2. Bring data from senders in the flood and prune domain 
           to the RP in the PIM-SM domain, and vice versa.

   In flood and prune protocols, global group membership is not 
   available to any routers in the domain.  This is because of the 
   inherent "dense-mode" philosophy of these protocols: they 
   assume group membership.  This becomes a problem because this 
   information is needed to create branches from the border router to 
   the PIM-SM RP.  This is so sources from the shared tree domain can 
   be injected into the flood and prune domain.

   The opposite problem also exists: how to inject sources from the 
   flood and prune domain into the shared tree domain.  This problem 
   can be solved because of the nature of flood and prune protocols.  
   In a flood and prune protocol, every router knows the set of all 
   active sources for every group.  Using this information, a BR can 
   act as if it is a directly attached router (to a source) for its 
   shared tree component.

   The following sections present several solutions for connecting flood
   and prune protocols in a stub domain to a shared tree protocol in a
   transit (or ISP) domain.  Section 2.3 discusses the problem of 
   injecting sources from stub flood-and-prune domains into transit 
   PIM-SM domains.  Sections 2.4.1 through 2.4.4 suggest several 
   possibilities for bringing sources from the PIM-SM domain into the 
   flood-and-prune domain.

   Note that this document only specifies the *possibilities* in 
   connecting domains.  Different vendors may implement different 
   feature sets which include all or part of these solutions.


2.3 Stub Sources into Transit Domain

   This section describes the problem of injecting sources from a
   stub flood-and-prune domain into a PIM-SM transit domain.  
   Regardless of which solution is chosen for the reverse problem (i.e.
   injecting sources from the transit to the stub), there is one 
   general solution to this problem which is specified in [PIMSM] and
   summarized here.

   BRs will have knowledge of all sources in the flood-and-prune stub 
   domain.  In order to inject sources into the transit domain, a BR 
   will act as a PIM-SM "DR edge router" and use the PIM-SM register 
   protocol.  That is, sources from the stub domain will be register 
   encapsulated to the appropriate RP in the PIM-SM domain.  Behavior 
   is similar to a PIM-SM DR router encapsulating sources on its local 
   network, with one exception.

   When a PIM-SM DR receives a register stop message, it stops 
   encapsulating the source's data but still periodically sends 
   registers to the RP so that it will know the source is still active.
   In the case of a DR on a LAN, this is straightforward because a 
   new packet from the source will trigger an update of soft state on 
   the DR.  However, in the case of a BR, it is desirable to prune the 
   source back into the stub domain.  The problem arises because the 
   BR is dependent on the re-flood timer in the flood-and-prune 
   protocol as to when its forwarding state will be updated.  There 
   are several solutions:

        1. The transit domain may locate its RP at the BR.  In this
           case, the BR will have knowledge of all groups joined in
           the transit domain.
        2. The BR may choose not to prune the source into the stub 
           domain.  This allows the BR to refresh its registers with 
           accuracy at the expense of creating a large sink in the 
           network (note: this is how MOSPF works).
        3. Domain wide reports (DWRs, see section 2.4.1) can be used 
           in the transit domain.  If DWRs are available, then the BRs 
           will only inject sources from the stub domain for groups 
           joined in the transit domain.
        4. The BR may send refreshes whenever a source is periodically
           flooded in the stub domain.  This interval may be longer 
           than the RP's register state timeout for a source, and 
           therefore a significant delay may occur before the source 
           is injected into the transit domain.
        5. MSDP may be used to report active sources into the transit
           domain.  This would involve an MSDP peering between the BR and
           another router in the transit domain.
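   As a sketch of the register behavior described above (solution 4 in
   particular), the BR's per-source choice might look like the
   following; the names and state representation are hypothetical:

```python
# Hypothetical sketch: the BR register-encapsulates each (S,G) learned
# from the flood-and-prune component; for sources the RP has sent a
# register-stop for, it can only send refreshing null registers.
def sources_to_register(fp_sources, register_stopped):
    """fp_sources: iterable of (s, g) currently active in the stub domain.
    register_stopped: set of (s, g) the RP sent a register-stop for.
    Returns (encapsulate, null_register) lists of (s, g)."""
    encapsulate, null_register = [], []
    for sg in fp_sources:
        if sg in register_stopped:
            null_register.append(sg)   # keep the RP's state alive
        else:
            encapsulate.append(sg)     # register-encapsulate data to the RP
    return encapsulate, null_register
```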




2.4 Transit Source into Stub Domain

2.4.1 Domain Wide Reports

   The domain wide report [DWR] protocol allows for complete group 
   membership information for a domain to be obtained by BRs.  The DWR 
   protocol works very much like the IGMP protocol except throughout a 
   domain, utilizing domain wide queries and domain wide reports.  
   Routers periodically send reports for all local memberships.  These 
   reports can be used by border routers to determine the total group 
   membership of a domain.

   When using DWRs to connect a flood-and-prune stub network to
   an ISP PIM-SM domain, the following applies:

	1. The stub domain must support DWR in its routing devices or 
           proxy DWR Reports from each IGMP subnet. 
	2. When a BR receives a domain wide report, it will perform a
           (*,g) PIM-SM join towards the RP.  This will enable sources
           from the transit domain (and beyond) to be injected into the
           stub domain.  When a DWR membership times out or a group is 
           explicitly left, prunes should be sent for every forwarding
           entry (i.e. non-pruned) matching the group.  

   DWRs present a "clean" solution to the problem of connecting 
   domains.  DWRs may create a small additional overhead in control 
   traffic in the flood-and-prune domain.  They also create extra 
   forwarding entries in the flood-and-prune domain because each router
   which sends a DWR report is itself a multicast source.  
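   Step 2 above can be sketched as follows; the state layout, timeout
   value, and function name are assumptions for illustration only:

```python
# Hypothetical sketch of a BR deriving (*,G) joins and prunes from
# domain wide reports: a newly reported group triggers a join toward
# the RP, and a group whose membership times out triggers prunes.
def dwr_update(state, reported, now, timeout=260.0):
    """state: {group: time_of_last_report} held by the BR.
    reported: set of groups seen in this round of domain wide reports.
    Returns (joins, prunes) as sorted lists of groups."""
    joins = sorted(g for g in reported if g not in state)
    for g in reported:
        state[g] = now
    prunes = sorted(g for g, t in state.items() if now - t > timeout)
    for g in prunes:
        del state[g]
    return joins, prunes
```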


2.4.2 Receivers are Senders Heuristic

   Another possibility for transiting traffic between a flood-and-prune
   domain and a PIM-SM domain is to use the "receivers are senders"
   heuristic.  This heuristic assumes that all receivers in the 
   flood-and-prune domain are also senders and will send traffic 
   to a group (e.g. RTCP).  This is true for many-to-many applications 
   or one-to-many applications where receivers send RTCP reports but 
   not in general.  Thus this heuristic may not deliver multicast 
   traffic from the PIM-SM domain to all receivers in the flood and 
   prune domain.

   The "receivers are senders" heuristic works in the following manner:

 	1. BRs have global knowledge of sources in the flood-and-prune
	   domain by virtue of the protocol itself; there are either
	   forwarding or prune entries for all active internal sources 
           in all groups.
  	2. For every group for which there is a forwarding entry, a 
           (*,g) join is sent in the PIM-SM domain.  This will pull 
           traffic from the PIM-SM domain into the flood-and-prune 
           domain.  
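   The heuristic's two steps amount to deriving the join set from the
   flood-and-prune forwarding state; an illustrative sketch (the state
   encoding is an assumption):

```python
# Hypothetical sketch: any group with at least one forwarding
# (non-pruned) entry in the flood-and-prune state is assumed to have
# receivers, so the BR sends a (*,G) join for it into the PIM-SM domain.
def groups_to_join(entries):
    """entries: {(source, group): "forward" or "pruned"} taken from the
    BR's flood-and-prune component.  Returns the set of groups to join."""
    return {g for (s, g), state in entries.items() if state == "forward"}
```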


   The problem with this approach is that it may deny forwarding of 
   multicast traffic to valid receivers.  This would likely occur in 
   one-to-many applications which do not multicast their RTCP or 
   RTCP-like reports.  Another problem results if the aggregate 
   reporting interval for the stub domain is greater than the source 
   timeout for the forwarding entries in the BR.  


2.4.3 (*,*,RP)

   The PIM-SM specification [PIMSM] specifies that (*,*,RP) state can 
   be used for interconnection of multicast routing domains.  These 
   (*,*,RP) tree branches are built from multicast border routers to 
   RPs in the PIM-SM domain.  (*,*,RP) branches carry traffic from all 
   sources to all groups.  (*,*,RP) solves the interconnection problem 
   by pulling all traffic from RPs to the BRs where it can then be 
   injected into an adjacent domain (in our case a flood-and-prune 
   domain).  After it is flooded into the domain, it may be pruned back
   to the BR where the BR may then initiate PIM-SM prunes back to the 
   RP.

   In summary, (*,*,RP) works in the following way:

  	1. BRs initiate (*,*,RP) branches to all RPs (all routers in 
           the path will have (*,*,RP) forwarding entries).  BRs simply
           use the RP-set from the RP-set distribution mechanism 
           [PIMSM, AUTORP].
	2. When source traffic arrives at an RP, it will be forwarded 
           down the (*,*,RP) branch (as well as other outgoing 
           interfaces).
	3. When traffic is received at the BR from the (*,*,RP) branch,
           it is injected into the flood-and-prune domain.  If there 
           are no receivers, it will be pruned back to the BR.
	4. If the BR receives prunes for the injected source, it will 
           then prune the source back into the PIM-SM domain by issuing
           (s,g) prunes towards the RP.
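   Step 1 above can be sketched by deriving one (*,*,RP) entry per RP
   in the learned RP-set; an illustrative sketch with hypothetical
   names:

```python
# Hypothetical sketch: the BR installs one (*,*,RP) entry per RP in the
# RP-set learned via the RP-set distribution mechanism [PIMSM, AUTORP].
def star_star_rp_entries(rp_set):
    """rp_set: iterable of RP addresses (duplicates allowed).
    Returns the (*,*,RP) keys to install, one per distinct RP."""
    return [("*", "*", rp) for rp in sorted(set(rp_set))]
```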

   The problems with this approach are that some providers may be 
   reluctant to have (*,*,RP) state in their networks, particularly if 
   they have a large number of customers with flood-and-prune domains.
   This would result in (*,*,RP) in many parts of the network, 
   effectively turning the PIM-SM domain into a flood-and-prune domain.


2.4.4 Running MSDP on BR

   Another possibility for connection is through the use of the MSDP 
   protocol.  In this scenario, MSDP is run on the BR with a peering 
   connection to any other MSDP speaker in the transit domain.  MSDP is
   used to learn about all sources in the PIM-SM domain (and beyond).  
   Once these sources are learned, they can be joined directly and 
   injected into the flood-and-prune domain.  This functions in a 
   similar way to (*,*,RP) except that MSDP is used to discover the 
   sources, and much more state is used.

   In summary, using MSDP to connect flood-and-prune domains works in 
   the following way:

  	1. MSDP is run on the BR.  An MSDP peering is configured with 
	   an MSDP speaker in the transit domain.  
	2. When a new source is learned through MSDP, the BR will send 
           a PIM-SM (s,g) join towards the source.
	3. When data from the new source is received, the BR will 
           inject the source into the stub domain to be flooded.
	4. If there are no receivers (or all receivers leave the 
           group), the source will be pruned back to the BR; the BR 
           will then send a (s,g) prune towards the source in the 
           PIM-SM domain.
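   Steps 2 through 4 above can be sketched as two event handlers on
   the BR; everything here (names, message shapes) is illustrative,
   not the MSDP wire protocol:

```python
# Hypothetical sketch of the BR's reactions: a source-active (SA)
# message learned through MSDP triggers an (S,G) join toward the
# source; a full prune from the stub triggers an (S,G) prune upstream.
def on_msdp_sa(active, source, group):
    """New source learned through MSDP: join toward it."""
    active.add((source, group))
    return ("join", source, group)

def on_stub_fully_pruned(active, source, group):
    """No receivers remain in the stub: prune toward the source."""
    active.discard((source, group))
    return ("prune", source, group)
```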

   Running MSDP on the BR provides a reasonable alternative without
   DWRs.  The only possible drawback is the growth of a provider's MSDP 
   mesh, as each customer will have an MSDP peering.  However, this may
   actually benefit a provider in that provisioning configurations are
   similar to inter-provider configurations.



3. Explicit Join Shared Tree Protocols

   Stub domains may run shared tree protocols like PIM-SM.  In cases 
   where a stub domain requires multicast transit service from an ISP
   (also running PIM-SM), several options exist for configuration.  
   Route distribution is deferred until section 5.
  
   This section presents the possibilities for connecting a stub 
   shared tree protocol domain (e.g. customer) to a transit PIM-SM 
   domain (e.g.  provider).  We assume that PIM-SM is the protocol 
   being run in both domains.  (NOTE: although other shared-tree 
   protocols exist, PIM-SM is the only one which has currently 
   experienced "real-world" deployment.  It is for this reason that 
   only PIM-SM to PIM-SM interconnection is addressed)

   When a stub domain wishes to receive multicast connectivity from
   a provider, a decision must be made as to which RPs the stub domain
   will use.  We present two scenarios: the first when the stub PIM
   domain uses the ISP RPs and the second when a stub domain uses
   its own RPs. 


3.1 Using ISP RPs

   A singly-homed stub domain (if allowed) may use its ISP RPs.  In 
   many cases, the stub domain will need to run the same RP-set 
   distribution mechanism [PIMSM, AUTORP] that its ISP does and must 

Expires August 2000                                          [Page 8]

INTERNET-DRAFT        Connecting Multicast Domains       February 2000

   not filter any groups used for these protocols.  It may be possible 
   for the BR to proxy messages from one RP-set distribution mechanism 
   into another if supported in a BR implementation.  In both cases, the 
   ISP's RP-set will be distributed into the stub domain.  In this 
   way, the stub domain is an extension of the ISP's PIM-SM domain.

   The disadvantage of this configuration is that all traffic must go
   to the ISP's RPs.  As an optimization, an ISP may use multiple RPs 
   with anycast [LOGRP], and may locate an RP at the POP where the 
   customer connects.  This allows sources in the stub to be relatively
   close to their RP.  This configuration is best when the stub domain 
   is primarily going to receive traffic sourced outside its domain.  
   The advantage of this scheme is that it is easy to provision and 
   configure for the ISP.  However, a potential disadvantage is that 
   routers may become state burdened if a stub domain has many 
   intra-domain groups and the link between the domains may be
   burdened with traffic.
   
   Another potential problem is in the allocation of administratively
   scoped addresses.  One possibility is for the ISP to divide its
   administratively scoped address space between its customers.  
   Another possibility is for the stub domain to have its own RP but
   only for administratively scoped groups.  In this scenario, a 
   filtering mechanism would have to be in place at the BR to block
   administratively scoped addresses across the boundary in the RP-set 
   protocol.  However, global multicast group trees would still be
   constructed across the domain boundary (i.e. using the ISP RPs for
   global groups).

   In summary, this solution works when a domain uses administratively
   scoped addresses for its intra-domain groups (and uses its own RP
   for these groups).  It does not require MSDP configuration and 
   therefore does not grow the provider's MSDP mesh.
  

3.2 Private RPs with MSDP

   When a domain has many multicast sources which will be destined
   only within its domain, it is best to configure a separate PIM-SM
   domain for the stub domain.  In this configuration, the stub domain
   runs MSDP to its provider.  The border router between the PIM-SM
   domains must:	

   	- Block RP-set information between the domains
	- Only allow (s,g) joins/prunes between domains (follows from
	  above)
	- Configure boundaries for administratively scoped addresses
	  between domains.

   Sources flow between transit and stub in the same way that ISP
   PIM-SM/MSDP peering works.  MSDP distributes source information to
   RPs who directly join towards the sources.  The sources are then
   sent down the shared tree to receivers (last hop routers may then
   make a decision about switching to an SPT tree following the 
   standard PIM protocol).  When an RP learns of a new source in its
   domain, it sends a source-active message in MSDP to all peers.

   A domain may wish to configure an RP for private addresses 
   (administratively scoped) and one for global addresses.  In this 
   case, the "global address" RP only needs MSDP (to peer with 
   transit).



4. Connections with other Protocols

4.1 MOSPF

   MOSPF [MOSPF] is a unique protocol which makes use of the OSPF link 
   state database to compute source-based multicast trees.  MOSPF has 
   several properties which make it particularly easy to connect to 
   other multicast routing domains:

  	- MOSPF floods group membership information using a Group
	  Membership LSA.  Each DR will flood group membership 
          information for its attached subnet.  Group LSAs are flooded 
          into the OSPF backbone; therefore, all OSPF backbone routers 
          have total group membership for the entire domain.
	- MOSPF ABRs and ASBRs are "wildcard" receivers.  These routers 
          will receive all traffic sourced in the domain.  These 
          routers therefore have total source knowledge within a domain.

   Both [MOSPF] and [INTEROP] describe the interoperability between
   MOSPF and other protocols.  The following section gives a quick
   overview of the issues with respect to MOSPF.


4.1.1 Traffic from PIM-SM to MOSPF

   In order to pull sources from a PIM-SM transit domain into a MOSPF
   stub domain, the PIM-SM/MOSPF BR should send (*,g) joins into the 
   PIM-SM domain for every group for which a group membership LSA 
   exists in the OSPF LSDB.  If all hosts leave the group, the group 
   membership LSAs will be flushed and the BR will send a (*,g) prune.
   BRs may also monitor source rates and join to source trees if 
   necessary.  
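   The rule above can be sketched as keeping the BR's (*,G) join set
   synchronized with the groups that have group-membership LSAs;
   names are illustrative only:

```python
# Hypothetical sketch: the PIM-SM/MOSPF BR's (*,G) joins track the set
# of groups with group-membership LSAs in the OSPF link-state database;
# a flushed LSA produces the matching (*,G) prune.
def sync_joins_to_lsdb(joined, lsdb_groups):
    """joined: groups with existing (*,G) join state.
    lsdb_groups: groups with group-membership LSAs in the LSDB.
    Returns (joins_to_send, prunes_to_send) as sets."""
    return lsdb_groups - joined, joined - lsdb_groups
```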


4.1.2 Traffic from MOSPF to PIM-SM

   In order to inject sources from a MOSPF stub domain into a 
   PIM-SM transit domain, the PIM-SM/MOSPF BR will act as a PIM-SM DR 
   edge router and encapsulate all MOSPF sources in PIM-SM registers.  
   The multicast BR should be configured as an OSPF ASBR so that
   the wildcard receiver bit is enabled in the LSAs originated from the
   router.  As mentioned above, part of the MOSPF protocol requires 
   that ASBRs act as wildcard receivers.  
  

4.2 IGMP-Proxy

  IGMP-proxy [PROXY] is a name used to describe a proxy of the 
  IGMP protocol.  A router or other device proxies IGMP reports from 
  some interfaces (downstream) to other (upstream) interfaces.  The 
  downstream interfaces are typically connected to either dial-in 
  lines or LANs.  The upstream interfaces are connected to one or
  more multicast routers.  In the following section, the term BR is 
  used to describe the upstream multicast routers of an IGMP-proxy. 

  A small domain or dial-in user may use IGMP-Proxy within a small 
  network for multicast connectivity.  The most critical part of a 
  connection with IGMP-Proxy is that the transit domain have the 
  correct routing information for RPF checks for the stub domain.

  In order to inject sources from the transit domain to the IGMP-Proxy
  domain, a BR is configured with a PIM-SM component (on the provider
  network), and a regular multicast enabled interface on the stub 
  domain side.  To the BR, the IGMP-Proxy domain will look like a 
  single host.  Devices will proxy IGMP reports towards the router 
  which will then perform the standard PIM-SM joining procedure.

  In order to inject sources from the IGMP-Proxy domain into the PIM-SM
  transit domain, the BR must be configured with the correct routing
  information for the PIM-SM RPF checks to pass.  In the simplest case,
  the router has a route (pointing towards the stub) for all 
  subnets which are multicast capable.  The proxy will relay all 
  sources towards the BR which will then be injected into the domain.

  It is expected that multi-homed domains will be running a multicast
  routing protocol as opposed to IGMP-Proxy.  In the case that a 
  multi-homed stub uses IGMP-Proxy, it must ensure that the sources are
  relayed to the correct RPF router in the multi-homed configuration 
  (see section 5).
  


5. Exchanging Multicast Specific Routes

  Some multicast routing protocols in use today perform Reverse Path 
  Forwarding (RPF) checks on packets to verify they were received on 
  the "correct" (i.e. shortest to source or RP) interface.  These RPF 
  checks are used to prevent multicast packet looping.

  When multiple multicast domains transit multicast packets, it is 
  important that routes exchanged between the domains allow for RPF 
  checks to be performed correctly.  Problems can occur when domains 
  use different protocols for route selection (e.g. with PIM-SM).  
  Problems can also occur in situations where there are load 
  balancing/backup route schemes in use for unicast routing and the 
  multicast tree building protocol is using those routes for RPF checks.
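  An RPF check of the kind described here can be sketched as a
  longest-prefix lookup on the multicast-specific routing table; the
  table layout and names below are assumptions for illustration:

```python
# Hypothetical sketch of an RPF check: a packet passes only if it
# arrived on the interface that the multicast-specific route table
# would use to reach its source (longest-prefix match wins).
import ipaddress

def rpf_check(mroutes, source, in_iface):
    """mroutes: list of (prefix, iface) multicast-specific routes."""
    src = ipaddress.ip_address(source)
    best = None
    for prefix, iface in mroutes:
        net = ipaddress.ip_network(prefix)
        if src in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, iface)
    return best is not None and best[1] == in_iface
```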

  This section presents several route distribution scenarios and
  attempts to present some of the problems specific to multicast.  Many
  scenarios are covered briefly because they are well-known 
  configurations for unicast routing.
  

5.1 MBGP Peering

  A provider may choose to MBGP peer with a stub domain in order
  to learn multicast specific routes from the stub domain.  The
  specifics of MBGP peering are similar to unicast BGP peering.

  If a stub domain is multi-homed, then MBGP is important for
  learning the correct ingress for sources.  However, unless these
  routes are injected into the IGP (for multicast), they are not
  useful.  MBGP peering is most useful for providers to learn
  the correct ingress for a source. 

  Depending on the IGP being used for multicast, multicast specific
  routes may be injected from MBGP.  In most cases, a stub domain
  will inject a default route from the BR that is connected with
  the provider network.  The following sections discuss injecting
  default routes into multicast IGPs.  In many cases the MBGP peer
  in the stub domain is the multicast BR. 

  
5.2 DVMRP Route Injection

  If a stub domain is using DVMRP as its multicast IGP, then a
  default route may be injected from the multicast BR.  This
  route may be injected dependent on either BGP or MBGP routes
  being learned.

  Some implementations of PIM support using DVMRP as a route
  distribution protocol.  PIM can be configured to use DVMRP routes
  for RPF checking.  In this case, a different multicast default
  route (i.e. from the unicast default) can be injected into a
  stub domain using DVMRP.

  (note: only some implementations of DVMRP *truly* support use
   of a default route.  Later versions of the spec explicitly state
   the prune and graft rules when a default route is used)


5.3 MOSPF Route Injection

  If a stub domain uses MOSPF as its multicast IGP then multicast
  specific routes must be injected into OSPF.  In most cases, a
  domain will want to use a default route for external multicast
  sources.  A default route tagged with the "multicast" bit in
  OSPF can be used for this.
  

5.4 Different Multicast/Unicast Defaults

  If a stub domain wishes to configure separate multicast and unicast
  default routes then it is currently limited in the type of 
  configurations that can be used (this will change as multicast
  specific metrics are added into unicast IGPs).  Three options are
  to:
	1. Use DVMRP (strictly as a route propagation protocol) to 
           propagate the multicast specific route.
	2. Use MOSPF with an OSPF multicast-tagged route.
	3. Use MBGP peering for all multicast routers.
 


6. References

  [PIMSM] Estrin, D.,et al., "Protocol Independent Multicast-Sparse Mode
         (PIM-SM): Protocol Specification," RFC 2362, June 1998.

  [DWR] Fenner, W., "Domain Wide Multicast Group Membership Reports,"
        draft-ietf-idmr-membership-reports-04.txt, August 1999.

  [AUTORP] Farinacci, D., Wei, L., "Auto-RP: Automatic discovery of
           Group-to-RP mappings for IP multicast,"
           ftp://ftpeng.cisco.com/ipmulticast/pim-autorp-spec01.txt,
           September 1998.

  [INTEROP] Thaler, D., "Interoperability Rules for Multicast
            Routing Protocols," RFC 2715, October 1999. 

  [DVMRP] Pusateri, T., "Distance Vector Multicast Routing Protocol,"
          draft-ietf-idmr-dvmrp-v3-09.txt, September 1999.

  [PROXY] Fenner, W., "IGMP-based Multicast Forwarding 
          (``IGMP Proxying'')," draft-fenner-igmp-proxy-01.txt,
          June 1999.

  [MOSPF] Moy, J., "Multicast Extensions to OSPF,"
          RFC 1584, March 1994.

  [PIMDM] Deering, S., et al., "Protocol Independent Multicast
          Version 2 Dense Mode Specification,"
          draft-ietf-pim-v2-dm-01.txt, November 1998.

  [BGP] Rekhter, Y., Li, T., "A Border Gateway Protocol 4 (BGP-4),"
        RFC 1771, March 1995.



  [MBGP] Bates, T., et al., "Multiprotocol Extensions for BGP-4,"
         RFC 2283, February 1998.

  [MSDP] Farinacci, D., "Multicast Source Discovery Protocol (MSDP),"
         draft-ietf-msdp-spec-05.txt, February 2000.

  [ASCOPE] Meyer, D., "Administratively Scoped IP Multicast,"
           RFC 2365, July 1998.

  [GLOP] Meyer, D., Lothberg, P., "GLOP Addressing in 233/8,"
         RFC 2770, February 2000.

  [MALLOC] Thaler, D., Handley, M., Estrin, D., "The Internet
           Multicast Address Allocation Architecture,"
           draft-ietf-malloc-arch-04.txt, January 2000.

  [LOGRP] Kim, D., et al., "Anycast RP mechanism using PIM and MSDP,"
          draft-ietf-mboned-anycast-rp-05.txt, January 2000.



7. Author's Address
 
  Brad Cain
  Nortel Networks
  600 Technology Park
  Billerica, MA 01821
  1-978-288-1316
  bcain@nortelnetworks.com






















