Network Working Group Dino Farinacci
INTERNET DRAFT Liming Wei
John Meylor
cisco Systems
March 3, 1998
Use of Anycast Clusters for Inter-Domain Multicast Routing
<draft-farinacci-anycast-clusters-00.txt>
Status of this Memo
This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
To learn the current status of any Internet-Draft, please check the
"1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
ftp.isi.edu (US West Coast).
Abstract
Anycast Clusters is a proposal to connect multiple ISP sparse-mode
PIM domains together. The environment we anticipate is multiple
interconnection points among a set of ISPs when they are unable to
colocate their respective RPs at the same dense-mode interconnect
point. This is an alternative to the Multi-Level RP design and
requires less new code in routers.
This proposal is being submitted as a method for the initial phase of
Inter-Domain Multicast deployment and is upward compatible with the
IDMR protocols being proposed for subsequent phases.
Farinacci, Wei, Meylor [Page 1]
RFC DRAFT                                                     March 1998
1.0 Introduction
Anycast Clusters is a proposal to connect multiple ISP sparse-mode
PIM domains together. The environment we anticipate is multiple
interconnection points among a set of ISPs when they are unable to
colocate their respective RPs at the same dense-mode interconnect
point. This is an alternative to the Multi-Level RP design and
requires less new code in routers.
This proposal uses an administrative approach to solve the problem of
connecting shared trees together across domains. To expedite
deployment with minimal risk, changes to architecture or protocols
are not considered.
There is a known 3rd party RP problem when an ISP must depend on
another ISP's RP 1) to get its own sources' traffic onto the shared
tree and 2) to receive multicast traffic from sources inside and
outside of its domain. This proposal reduces the probability that a
3rd party RP is used but does not eliminate it altogether.
1.1 Terminology
Multicast Cluster
A Multicast Cluster is an interconnect or dense-mode region where
there are two or more multicast peering agreements among ISPs. The
Multicast Cluster runs dense-mode PIM.
Anycast Cluster Address
A single IP unicast address that every router on a Multicast
Cluster interconnect has assigned to it. The address is assigned
to a loopback interface on each multicast router. The same address
is used on the loopback interface of each router.
Cluster RP Address
   The address routers use when sending Register and Join messages to
   an RP. The RP address is the assigned Anycast Cluster Address.
2.0 Overview
There will be N Multicast Cluster interconnects configured in the
Internet. The number of interconnects may change but we anticipate
it will not change often. The global multicast group address space
will be divided among the N Multicast Clusters. That is, 1/Nth of the
group address space will use the Cluster RP Address associated with
each Multicast Cluster. This allows traffic distribution for
different groups on each of their shared-trees to be split across all
Multicast Clusters.
Receivers in any domain join a single shared tree. The leaf routers
that hear IGMP reports for a specific group will do a DNS lookup to
obtain the Cluster RP Address. By converting the dotted decimal group
address, obtained from the IGMP report, into a DNS domain name
string, a name is created for the DNS lookup.
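As a sketch of the conversion just described (the `mcast.net` zone
name follows the example in section 4.0; the helper name is ours,
not part of any protocol), the lookup name can be built by reversing
the octets:

```python
def group_to_dns_name(group, domain="mcast.net"):
    """Reverse the dotted decimal group address and append the
    multicast DNS zone, e.g. 224.2.129.5 -> 5.129.2.224.mcast.net."""
    octets = group.split(".")
    return ".".join(reversed(octets)) + "." + domain

print(group_to_dns_name("224.2.129.5"))  # 5.129.2.224.mcast.net
```

The leaf router would then issue an ordinary A-record query for this
name to learn the Cluster RP Address for the group.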
The Joins propagated toward the RP will reach the first router along
the path configured with the Cluster RP Address for the group, which
is why we call it an Anycast Cluster Address. For the same group,
many border routers at the Multicast Cluster interconnect can act as
the RP. This is how a domain can pull down data for receivers in
its domain.
Many shared trees can be built if receivers in other domains use a
different router as the RP on the Multicast Cluster. Each shared tree
is joined together at the Multicast Cluster interconnect since it is
a dense-mode interconnect.
Similarly, Register messages from leaf routers use the same dotted
decimal group address to DNS name mapping, and the closest router
with the Cluster RP Address is used. This allows sources in a domain
to get their packets onto the shared trees for receivers in other
domains.
Requiring the Multicast Cluster interconnect to run dense-mode PIM
allows any RP that receives data via Register messages to forward on
the interconnect so the other RPs in the Multicast Cluster can
forward down their respective shared trees.
3.0 Group Allocation
Since a global group address subrange is assigned to a Multicast
Cluster interconnect and not to a domain, there are no address
ownership or leasing algorithm issues to deal with. We anticipate
that the number of subranges will be less than or equal to 256, so
we believe the allocation is easily manageable in DNS, given the
small number of entries and the infrequent changes. The DNS records
only need to
change when a new Multicast Cluster is configured. This occurs at the
rate at which new multicast peering interconnects are deployed by
ISPs.
4.0 Example Multicast Cluster Design
Let's say, initially, there are 12 ISPs that will multicast peer at 8
different public interconnects. Therefore, there will be 8 Multicast
Clusters. Let's say the 224.2.0.0/16 range of group addresses is
used for global multicast addressing.
Let's allocate a class C network for the Cluster RP Addresses with
the following address convention: 223.255.255.x, where x identifies
the Multicast Cluster. By convention, the host address shared by all
RPs at Multicast Cluster N is the odd value 2N-1. So:
223.255.255.1 = Cluster 1
223.255.255.3 = Cluster 2
223.255.255.5 = Cluster 3
223.255.255.7 = Cluster 4
...
223.255.255.255 = Cluster 128
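The list above follows a simple pattern: Cluster N uses the odd host
address 2N-1, so 128 clusters fit in one class C network. A small
helper (illustrative only; the function name is ours) makes the
convention explicit:

```python
def cluster_rp_address(cluster, prefix="223.255.255"):
    """Cluster N uses odd host address 2N-1, so clusters 1..128
    map onto 223.255.255.1 through 223.255.255.255."""
    if not 1 <= cluster <= 128:
        raise ValueError("cluster number must be in 1..128")
    return f"{prefix}.{2 * cluster - 1}"

print(cluster_rp_address(1))    # 223.255.255.1
print(cluster_rp_address(128))  # 223.255.255.255
```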
Each ISP router that wishes to be the RP on the Multicast Cluster 1
will do the following:
   interface Loopback230
    ip address 223.255.255.1 255.255.255.255
It is required that at least two RPs be present at any interconnect
so a single ISP doesn't have to carry the load for the entire group
subrange allocated to the Multicast Cluster.
A single ISP can provide an RP for all or some of the Multicast
Clusters it is attached to but is required to provide an RP on at
least one interconnect.
In the DNS the 224.2.0.0/16 range could be divided as follows:
Cluster Group Range DNS Mapping RP Address
------- ----------- ----------- ----------
000 224.2.x.0/24 -> x.2.224.mcast.net IN A 223.255.255.1
where 0 <= x <= 31
001 224.2.x.0/24 -> x.2.224.mcast.net IN A 223.255.255.3
where 32 <= x <= 63
010 224.2.x.0/24 -> x.2.224.mcast.net IN A 223.255.255.5
where 64 <= x <= 95
011 224.2.x.0/24 -> x.2.224.mcast.net IN A 223.255.255.7
where 96 <= x <= 127
100 224.2.x.0/24 -> x.2.224.mcast.net IN A 223.255.255.9
where 128 <= x <= 159
101 224.2.x.0/24 -> x.2.224.mcast.net IN A 223.255.255.11
where 160 <= x <= 191
110 224.2.x.0/24 -> x.2.224.mcast.net IN A 223.255.255.13
where 192 <= x <= 223
111 224.2.x.0/24 -> x.2.224.mcast.net IN A 223.255.255.15
where 224 <= x <= 255
So if a leaf router received an IGMP report (or data packet) for
group 224.2.129.5, it would do a DNS lookup for 5.129.2.224.mcast.net
(or, if the /24 convention is accepted, it could simply look up
129.2.224.mcast.net to reduce DNS entries) and get 223.255.255.9
returned. It would then Join or Register to the closest RP with
address 223.255.255.9.
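The table above amounts to a function of the third octet of the
group address: its high three bits select one of the eight clusters.
A sketch of the mapping the DNS data encodes (the function name is
ours, for illustration):

```python
def rp_for_group(group):
    """Map a 224.2.x.y group to its Cluster RP Address: the top
    three bits of x pick one of eight 32-wide ranges, and clusters
    use odd host addresses 223.255.255.1, .3, ..., .15 per the
    allocation table."""
    octets = [int(o) for o in group.split(".")]
    if octets[0] != 224 or octets[1] != 2:
        raise ValueError("example covers only the 224.2.0.0/16 range")
    cluster_index = octets[2] >> 5        # 0..7 (ranges of 32)
    return f"223.255.255.{2 * cluster_index + 1}"

print(rp_for_group("224.2.129.5"))  # 223.255.255.9
```

Here 129 falls in the 128-159 range (cluster 100 in the table), so
the lookup yields 223.255.255.9, matching the example above.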
5.0 Observation
By using Anycast Cluster RP Addresses, no single ISP will have to be
RP for the entire group subrange allocated to the Multicast Cluster.
The workload and responsibility can be shared among the RPs (and
different ISPs) on that Multicast Cluster.
If an ISP doesn't configure an RP on a Multicast Cluster, it must do
so on another Multicast Cluster so it can be a good citizen and share
responsibility. Since group addresses are mostly randomly allocated,
the RP load can be shared across Multicast Clusters and, depending on
source and receiver location, may be load shared across routers
within a Multicast Cluster.
6.0 Acknowledgements
The authors would like to thank David Meyer and Steve Deering for
their insightful comments.
7.0 Authors' Addresses:
Dino Farinacci
Cisco Systems, Inc.
170 Tasman Drive
San Jose, CA, 95134
Email: dino@cisco.com
Liming Wei
Cisco Systems, Inc.
170 Tasman Drive
San Jose, CA, 95134
Email: lwei@cisco.com
John Meylor
Cisco Systems, Inc.
170 Tasman Drive
San Jose, CA, 95134
Email: jmeylor@cisco.com
8.0 References
[1] Estrin, D., Farinacci, D., Helmy, A., Thaler, D., Deering, S.,
Handley, M., Jacobson, V., Liu, C., Sharma, P., Wei, L., "Protocol
Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification",
draft-ietf-idmr-pim-sm-specv2-00.txt, September 9, 1997.
[2] Thaler, D., Estrin, D., Meyer, D., "Border Gateway Multicast
Protocol (BGMP): Protocol Specification", draft-ietf-idmr-gum-01.txt,
October 30, 1997.
[3] Bates, T., Chandra, R., Katz, D., Rekhter, Y., "Multiprotocol
Extensions for BGP-4", draft-ietf-idr-bgp4-multiprotocol-01.txt,
September 1997.