INTERNET-DRAFT M. J. Moyer
draft-irtf-smug-key-tree-balance-00.txt Georgia Tech
Expires 25 December 1999 J.R. Rao, P. Rohatgi
IBM
25 June 1999
Maintaining Balanced Key Trees for Secure Multicast
<draft-irtf-smug-key-tree-balance-00.txt>
Status of this Memo
This document is an Internet-Draft and is in full conformance
with all provisions of Section 10 of RFC2026.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as
Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other
documents at any time. It is inappropriate to use Internet-
Drafts as reference material or to cite them other than as
"work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
Moyer, Rao, Rohatgi [Page 1]
INTERNET DRAFT draft-irtf-smug-key-tree-balance-00.txt June 1999
ABSTRACT
Several multicast key management schemes such as those proposed by
Wallner et al., Wong et al., and Balenson et al. are based on a
multilevel, logical hierarchy (or tree) of key-encrypting keys. When
used in conjunction with a reliable multicast infrastructure, this
approach results in a highly efficient key update mechanism in which
the number of multicast messages transmitted upon a membership update
is proportional to the depth of the tree, which is logarithmic in the
size of the secure multicast group, provided the tree is maintained in
a balanced manner. Therefore the issue of maintaining such trees in a
balanced manner is quite relevant in practice. This draft, which is
based on the design principles followed in an implementation of
secure multicast using the Wallner key-management scheme, describes
efficient techniques to maintain a balanced key tree in the presence
of arbitrary group membership updates. This draft also describes
other optimizations which were found to be useful in practice.
TABLE OF CONTENTS
Status of this Memo
ABSTRACT
TABLE OF CONTENTS
1. INTRODUCTION TO SECURE MULTICAST KEY MANAGEMENT
2. REVIEW OF TREE-BASED KEY MANAGEMENT
3. OPTIMIZATIONS TO THE TREE-BASED KEY MANAGEMENT SCHEME
3.1 Addition of a New Client
3.2 Deletion of an Existing Client
3.2.1 Keeping the Tree Balanced
3.2.1.1 Modified Leaf Deletion
3.2.1.2 Periodic Rebalancing
3.3 Additional Improvements to the Tree-Based Scheme
3.3.1 Auxiliary Client List
3.3.2 Time-Based Key Updates
4. REFERENCES
5. AUTHORS' ADDRESSES
1. INTRODUCTION TO SECURE MULTICAST KEY MANAGEMENT
Secure group communication often works by sharing a common encryption
key among n members of a group, so that group members can send and
receive messages to each other, but non-members cannot. Distributing
this key to all members securely is a challenge. One straightforward
method for secure key distribution is to set up a trusted group key
manager. The group key manager is then responsible for ensuring that
all members of the group have the group key at all times, and that no
non-members of the group ever have the current group key. To make
this guarantee, the group key manager must redistribute a new group
key every time the group's membership changes, i.e., every time a
current member leaves the group or a new member joins the group.
The simplest method of securely distributing a group key works as
follows. First, have each group member negotiate a shared key between
itself and the group key manager at the time the member joins the
group. When the group key manager needs to distribute a new key, it
can do so by encrypting the new key under the shared key of each group
member and sending the encrypted key via unicast to each member. This
method is simple and easy to implement, but it scales poorly as the
size of the multicast group increases. For large groups with highly
dynamic membership, key updates may occur very often. A key update
scheme that incurs messaging and key encryption costs linear in the
size of the group is unacceptable in such a scenario. Fortunately,
there are more efficient key management schemes that can update group
keys with sublinear messaging costs. We briefly describe some of
these schemes in the next section.
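The simple unicast scheme above can be sketched in a few lines. This is an illustrative sketch only: the names and the toy encrypt() stand-in are not real cryptography and are not drawn from any cited scheme.

```python
# Sketch of the naive unicast rekeying scheme described above.
# encrypt() is a toy stand-in for symmetric encryption (NOT real
# crypto); member_keys maps each member ID to the key it shares
# with the group key manager.

def encrypt(key, plaintext):
    return f"E({key},{plaintext})"

def naive_rekey(member_keys, new_group_key):
    """One unicast message per member: O(n) messages and O(n)
    encryptions for a group of n members."""
    return {member: encrypt(k, new_group_key)
            for member, k in member_keys.items()}

messages = naive_rekey({"M1": "k1", "M2": "k2", "M3": "k3"}, "GK2")
```

With n members, the controller performs n encryptions and sends n unicast messages on every membership change; this is the linear cost that the tree-based schemes below avoid.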
2. REVIEW OF TREE-BASED KEY MANAGEMENT
Both Wallner et al. [1] and Wong et al. [2] have developed ideas based
on the observation that if we construct a multilevel, logical
hierarchy of key-encrypting keys, we can use it in conjunction with a
reliable multicast infrastructure to manage a group key such that key
updates after a member joins or leaves the group incur cost that is
logarithmic in the size of the group, in terms of message overhead on
the network and computational (key encryption) overhead at the key
manager. A variant of this idea with lower network communication cost
has been developed by Balenson et al [3]. Wong et al. have performed
extensive theoretical and experimental analysis on various types of
key graphs; they have concluded that the most efficient key graph for
group key management is a k-ary tree. Wallner et al. have described
a key management structure that uses a binary tree in the same
fashion. We briefly describe the binary tree-based scheme here,
since our new results are based on it.
The binary tree-based key management scheme works as follows. A
multicast group has n members, named M1 through Mn, and a centralized
group controller whose purpose is to manage the group's shared keys.
A member joins the group by contacting the controller via a secure
unicast channel. At the time that the new member joins, he and the
controller negotiate a shared key that they will use in later
interactions. The controller stores a binary tree structure in which
each node contains a key. At the leaves of the tree are the n shared
keys that the controller has negotiated with each of the members in
the group. Each member stores a subset of the controller's keys. The
subset of keys stored by member j is the same as the set of keys in
the path from leaf j to the root of the tree, including leaf j and the
root itself. The root node stores the group's shared key; all other
keys in the tree are auxiliary keys, used only to facilitate efficient
key updates when clients join and leave the group. The total number
of keys stored by the controller is 2n-1, and each member stores
log_2 n + 1 keys, one for each node on its path to the root.
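These storage counts can be checked directly; the sketch below assumes a full, balanced binary tree and counts the member's own leaf key and the root key as part of its path.

```python
import math

# Key-storage counts for a balanced binary key tree with n leaves.

def controller_keys(n):
    # n leaf keys plus n - 1 interior keys (including the root): 2n - 1.
    return 2 * n - 1

def member_keys(n):
    # One key per node on the member's path to the root, counting the
    # member's own leaf key and the group key at the root.
    return int(math.log2(n)) + 1
```

For the four-member group of Figure 1, the controller holds 7 keys (K1..K4, KA, KB, KR) and each member holds 3 (e.g., M4 holds K4, KB, and KR).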
We now describe how keys are updated when a member joins or leaves the
group. In Figure 1, assume that member M4 leaves the group. We must
update keys in the tree according to the following guideline: when the
group's membership changes, all keys known by the member who joined or
left the group must be updated. In this example, we must update all
keys along the path from K4 to the root. Thus, keys KB and KR must be
updated. (K4 does not need to be updated.) We can use the existing key
hierarchy, along with reliable multicast, to efficiently distribute the
updated keys after the controller has generated them, as follows.
First, we must distribute a new version of KB to all remaining members
who need it. So we send it (via either unicast or multicast)
encrypted under K3. Then, we must distribute a new version of KR. We
do this by multicasting it twice, encrypted under KA and the new
version of KB that we just distributed. At this point, we have
updated all keys that M4 knew while he was a member of the group, so
he is effectively excluded from any future communication. The
semantics for updating keys on a member join are very similar. In
general, for a binary tree, we must update keys at log_2 n levels in
the tree, and at each level we must send two key update messages.
Thus, the binary tree-based key management scheme can update keys
using 2 log_2 n messages.
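The worked example can be written out as an explicit rekey plan. The names below follow Figure 1, with a prime marking a replaced key; note that after a leave the bottom level needs only one message (there is only one surviving sibling), so this small tree needs three messages rather than the 2 log_2 n worst-case bound.

```python
# Rekey plan for Figure 1 when M4 leaves.  Each entry pairs a new key
# with the keys it is encrypted under before being multicast.
plan = [
    ("KB'", ["K3"]),          # only M3 still needs KB
    ("KR'", ["KA", "KB'"]),   # reaches {M1, M2} and M3 respectively
]
num_messages = sum(len(under) for _, under in plan)
```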
Figure 1: A binary key management tree for 4 clients
____
| |
| KR |
|____|
|
_________|_________
| |
____ ____
| | | |
| KA | | KB |
|____| |____|
| |
____|____ ____|____
| | | |
____ ____ ____ ____
| | | | | | | |
| K1 | | K2 | | K3 | | K4 |
|____| |____| |____| |____|
| | | |
| | | |
____ ____ ____ ____
/ \ / \ / \ / \
| M1 | | M2 | | M3 | | M4 |
\____/ \____/ \____/ \____/
Balenson et al. [3] have proposed a variation of the tree-based group
key management scheme that trades computation time at the group member
nodes in return for fewer key update messages (log_2 n instead of
2 log_2 n) on a join or leave event. Their scheme uses a binary
one-way function tree (OFT) to store the group's key hierarchy. The
structure of the tree and key update semantics in the OFT scheme are
very similar to those in the standard binary tree structure described
above, so we do not discuss the OFT scheme further. The interested
reader is encouraged to consult [3] for more information.
It is important to note that the efficiency of each of these
tree-based key management schemes depends critically on whether the
key management tree remains balanced. (We say that a tree is balanced
if the distances from the root node to any two leaf nodes differ by
no more than 1.) In general, for a balanced binary tree with n
leaves, the distance from the root to any leaf is log_2 n. But if the
tree becomes unbalanced, the distance from the root to a leaf can
become as high as n. It is thus desirable to keep a key management
tree as balanced as possible. Any real implementation of a key
management tree should take this into consideration. In the remainder
of this document, we introduce techniques that we have developed for
efficiently manipulating a key management tree and helping to ensure
that the tree remains as balanced as possible at all times.
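The balance definition above translates into a simple predicate. This is a minimal sketch on a toy tree representation (nested tuples, leaves as client IDs), not part of any cited scheme.

```python
# A tree is balanced if the distances from the root to any two leaves
# differ by no more than 1.  Trees are nested tuples (left, right);
# a leaf is any non-tuple value.

def leaf_depths(tree, d=0):
    if not isinstance(tree, tuple):
        return [d]
    left, right = tree
    return leaf_depths(left, d + 1) + leaf_depths(right, d + 1)

def is_balanced(tree):
    depths = leaf_depths(tree)
    return max(depths) - min(depths) <= 1
```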
3. OPTIMIZATIONS TO THE TREE-BASED KEY MANAGEMENT SCHEME
[1] specifies how to remove a client from a multicast group using a
binary key management tree, but does not describe any other details of
managing the group or the tree. The goal of our work is to define a
set of operations and techniques for efficiently creating a new tree
for a group and updating the tree when members (also referred to as
clients) join and leave the group, while keeping the key tree
balanced. We have implemented our techniques in a Java prototype. In
the remainder of this document we provide detailed descriptions of the
basic tree manipulation and key-update algorithms for adding new
members to a group, deleting current members from a group, and
re-balancing the tree. We also describe other techniques that we
have invented to improve the efficiency of tree-based group key
management.
3.1 Addition of a New Client
When using a tree structure to manage group keys, we want to keep the
tree as small as possible and store as few keys as possible. Thus, our
tree-based management scheme begins with a null tree, i.e., a tree
with no nodes in it. The tree grows and shrinks as clients enter and
exit the group. We keep as an invariant that all interior nodes
have two children. As in Wallner's scheme, every leaf in the tree is
a client node, and all interior nodes represent auxiliary keys, except
the root. The root stores the group's shared key. A client node
contains the following information:
* client ID;
* shared key between that client and the controller.
An interior node contains the following information:
* key;
* boolean key update flag;
* distance and direction to a shallowest descendant leaf;
* distance and direction to a deepest descendant leaf.
Client nodes do not contain information about distance or the
direction to the shallowest or deepest descendant leaf, because every
client node is itself a leaf. The first client to join the group gets
inserted as the root of the tree. The root node typically stores only
the shared key for the group. At this point, the group contains only
one client, so the root is also a leaf. In this case, the root can
store the shared key (between the client and controller) and the group
key. In most cases, however, it is useless to distribute a group key
to a group that contains only one client, so the controller can wait
until a second client joins before distributing a group key.
Alternatively, the shared key between the controller and the single
client can act as the group key.
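In a Python-flavored sketch (the prototype itself is in Java), the two node records described above might look as follows; the field names and types are illustrative only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClientNode:
    client_id: str
    shared_key: bytes              # key shared with the controller

@dataclass
class InteriorNode:
    key: bytes
    needs_update: bool = False     # the boolean key update flag
    shallowest: tuple = (0, None)  # (distance, direction) to shallowest leaf
    deepest: tuple = (0, None)     # (distance, direction) to deepest leaf
    left: Optional[object] = None
    right: Optional[object] = None
```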
When subsequent clients join the group, the controller inserts them
into the tree based on the following rule.
RULE: Find a shallowest leaf of the tree by following the shallowest leaf
link from the root. In the case of a full binary tree, this could be
any leaf; otherwise, it will be one of the leaves at the shallowest
depth. Call this leaf LS. Then, create a new interior node NI,
insert it at the location of LS, and make LS a child of NI. Finally,
create a new client node C and insert it as the other child of NI. By
inserting the new client at the shallowest place in the tree, we help
to keep it balanced and minimize its height. This minimizes the
computational cost and bandwidth required during key updates.
Figures 2 and 3 show how this process works.
Figure 2: The shallowest leaf in a tree
____
| |
| |
|____|
|
_________|_________
| |
____ ____
| | | |
| | | P |
|____| |____|
| |
____|____ ____|____
| | | |
____ ____ ____ ____
| | | | | | | |
| | | | | LS | | |
|____| |____| |____| |____|
| | |
_|_ _|_ | _|_
| | | | | | |
. . . . Shallowest . .
. . . . Leaf . .
. . . . . .
Figure 3: Adding a new client at the shallowest position.
____
| |
| P |
|____|
|
_________|_________
| |
____ ____
New | | | |
Interior ---> | NI | | |
Node |____| |____|
| |
____|____ ____|____
| | | |
____ ____ . .
| | | | . .
| LS | | C |
|____| |____| <--New Client Node
After inserting the new client node and interior node into the tree,
the controller updates the data that has changed in the tree. To do
this, it traces a path from node C to the root. At each node in the
path, it updates the distance and direction to the shallowest and
deepest descendant leaves. Also, for each node in the path other than
the new client node, it sets the key update flag to TRUE. When it has
finished updating this data, the controller returns to the new client
node and retraces the path to the root. This time, for each node that
has its key update flag set to TRUE, the controller generates a new
key. It creates two key update messages for this new key. In the
first message, the new key is encrypted with the key in its left child
node. In the second message, it is encrypted with the key in its
right child node. The controller signs both messages with its private
key; all group members are assumed to hold the corresponding public
key. It then resets that
node's key update flag to FALSE. This algorithm updates keys in the
same order that the Wallner scheme uses.
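The two passes over the path can be sketched as follows. This is an illustrative sketch: nodes are plain dicts, real encryption and signing are stubbed out, and the depth bookkeeping of the first pass is omitted for brevity.

```python
import secrets

def make_leaf(key):
    return {"key": key, "flag": False, "left": None, "right": None,
            "parent": None}

def mark_path(node):
    # First pass: from the new interior node up to the root, set every
    # key update flag to TRUE.
    while node is not None:
        node["flag"] = True
        node = node["parent"]

def rekey_path(node):
    # Second pass: for each flagged node, generate a fresh key and emit
    # two update messages, one encrypted under each child's key
    # (left child first, then right).
    messages = []
    while node is not None:
        if node["flag"]:
            node["key"] = secrets.token_hex(8)
            for child in (node["left"], node["right"]):
                messages.append(("new-key", node["key"],
                                 "under", child["key"]))
            node["flag"] = False
        node = node["parent"]
    return messages

# Demo shaped like Figure 3: LS and the new client C under NI.
LS, C, KA = make_leaf("K_LS"), make_leaf("K_C"), make_leaf("K_A")
NI = {"key": None, "flag": False, "left": LS, "right": C, "parent": None}
root = {"key": "K_R", "flag": False, "left": KA, "right": NI,
        "parent": None}
NI["parent"] = root
mark_path(NI)
msgs = rekey_path(NI)   # two messages at each of the two touched levels
```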
Consider the running times and other costs of these operations. The
computational cost incurred by the controller while inserting the new
interior node and client node into the tree is O(log n), where n is the
size of the multicast group. It takes O(log n) time to locate the
insertion point (assuming the tree is balanced) and constant time to
create and insert the new nodes. The cost of the first trip from the
new client node to the root is O(log n). This is true because
updating the data in each node takes constant time, and assuming the
tree is balanced, there are O(log n) nodes in the path from a leaf to
the root. Finally, the computational cost of the key updates on the
second trip to the root is O(log n), and the number of multicast
messages sent is 2 log_2 n.
3.2 Deletion of an Existing Client
When a client exits the multicast group, that group's tree should
shrink accordingly. This eliminates any extra keys in the tree,
making key updates more efficient. When deleting a client from the
group, the controller follows this sequence of steps:
1. If the tree has only one leaf remaining, then the client that is
being deleted is the last remaining member of the group. Simply
delete this leaf. (This last leaf should also be the root.) There are
no keys to update, since the group has no more members.
2. If the tree has more than one leaf, first locate the client node
for the client to be deleted. Call it C. Let P be the interior node
that is the parent of C, and let S be the sibling of C, i.e., the
other child of P. Delete P and C, and move S up into the position
formerly occupied by P. (Figures 4 and 5 illustrate this operation.)
Then, starting at the new parent of S, trace a path to the root. At
each node in the path, update the distance and direction to the
shallowest and deepest descendant leaves, and set the key update flag
to TRUE, as described above. Finally, return to the new parent of S
and trace another path to the root. At each node, generate a new key
and encrypt it with the keys of the two nodes immediately beneath it.
Then create two key update messages from the encrypted keys, sign them,
multicast them to the group, and set the key update flag back to
FALSE, just as described above.
Figure 4: Tree with a client who is about to be deleted
____
| |
| |
|____|
|
_________|_________
| |
____ ____
| | | |
| | | P |
|____| |____|
| |
____|____ ____|____
| | | |
____ ____ ____ ____
| | | | | | | | Client
| | | | | S | | C | <-- To Be
|____| |____| |____| |____| Deleted
Figure 5: Same tree after the client has been deleted
____
| |
| |
|____|
|
_________|_________
| |
____ ____
| | | |
| | | S |
|____| |____|
|
____|____
| |
____ ____
| | | |
| | | |
|____| |____|
The running time and bandwidth cost analysis of these operations is
similar to that for the client addition operations described
previously. In the trivial case, the only operation required is to
delete one node from the tree. This takes constant time, and it
requires no bandwidth, since there are no multicast messages to send.
In the more complex case, there are four operations to perform:
* locate the appropriate client node,
* perform the tree manipulations,
* update the data in the tree,
* generate and send the keys.
In our prototype, we use an auxiliary data structure, a client hash
table, that stores a pointer to the client node for each client.
This allows us to locate any client node in constant time, given only
that client's ID. Performing the tree manipulations is also a constant
amount of work. Updating the data in the tree and sending the keys to
the group incur the same costs as in client addition: O(log n)
computational time and 2 log_2 n multicast messages.
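The splice in step 2 can be sketched as follows; nodes are plain dicts, and the subsequent flag-marking and rekeying passes are omitted. The sketch is illustrative, not the prototype's code.

```python
def delete_client(C):
    # Splice out client node C and its parent P, promoting the
    # sibling S into P's position (step 2 of the deletion procedure).
    P = C["parent"]
    if P is None:              # step 1: C was the last member
        return None            # the tree becomes empty
    S = P["left"] if P["right"] is C else P["right"]
    G = P["parent"]            # S's new parent (None if S becomes root)
    S["parent"] = G
    if G is not None:
        if G["left"] is P:
            G["left"] = S
        else:
            G["right"] = S
    return S                   # caller marks/rekeys from G up to the root

# Demo shaped like Figures 4 and 5: P holds S and the departing C.
def make_node(**kw):
    return {"parent": None, "left": None, "right": None, **kw}

root = make_node()
S, C = make_node(), make_node()
P = make_node(parent=root, left=S, right=C)
root["right"] = P
S["parent"] = C["parent"] = P
promoted = delete_client(C)
```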
3.2.1 Keeping the Tree Balanced
The running time and bandwidth cost analyses performed in the previous
two sections assumed that the tree always remains balanced, but this
assumption is not entirely valid. When adding a new client to the
group, we always insert it at the shallowest point in the tree; this
technique helps to keep the tree balanced. We have complete control
over how we edit the tree on client additions; however, we have no
control over the locations in the tree at which client deletions occur,
because we cannot predict at any given time which client will leave
the group next. It is not clear how often or to what extent this
problem can affect the performance of the system on key updates, but
we can construct scenarios in which the pattern of additions and
deletions in the group causes the tree to become severely unbalanced.
In such cases, the running time and bandwidth cost of the operations
described above could increase from logarithmic order to linear order
in the size of the group.
To combat this problem, we have implemented in our prototype two
simple tree-rebalancing schemes. One is a modification of the
deletion algorithm specified above that always maintains the tree
in a balanced manner. The other scheme allows the tree to become
imbalanced over a sequence of updates and periodically invokes a
rebalancing algorithm to bring the tree back to a balanced state.
We now describe these schemes in detail.
3.2.1.1 Modified Leaf Deletion
Since we maintain the invariant that interior nodes always have
two children, for all practical purposes, we can assume that
the tree remains balanced until the difference in depths between
the shallowest and deepest leaves reaches some threshold. The
modified leaf deletion algorithm works like the regular leaf
deletion algorithm except in the case when a leaf deletion will
result in the depth difference exceeding the threshold. In that
case it performs the following steps in deleting the client.
* Find the deepest leaf in the tree.
* Delete it from the tree using the deletion algorithm
described earlier (excluding the final step, in which new
keys are generated and distributed.)
* Re-insert the deleted leaf at the position in the tree where
the client to be removed from the group is located.
* Along the path from the (formerly) deepest leaf's new
position in the tree to the root, set the key update flag
at every node to TRUE.
At this point, keys must be updated along two paths: the path from the
position where the (formerly) deepest leaf was located, and the path
from the position where the client was removed from the group.
As we update each key, we must set the key update flag at that node
back to FALSE. Also, when updating keys we should take care not to
update any key twice, because it is a waste of computation and
bandwidth. We can ensure that this does not happen by checking that
every one of a node's descendant nodes has had its key updated (if
necessary) before updating the key at that node.
This modified deletion algorithm incurs a worst-case cost of 4 log_2 n
key update messages per member deletion, since log_2 n keys must be
updated along each of two paths from a leaf to the root, at two
messages per key. In practice, however, the number of update messages
is usually less than this: most of the time a client deletion does not
result in tree imbalance, and the cost is 2 log_2 n update messages.
Even if a balancing
operation has to be done, the two paths always overlap at the root of
the tree. Furthermore, the paths may overlap at other nodes as well.
In our prototype, we have implemented logic that always takes
advantage of overlap in rekeying paths when possible. For example,
in this algorithm, if there were two "deepest" leaves in the tree,
the prototype would delete the one whose path to the root shares the
most common nodes with the client leaf to be deleted.
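The threshold test that triggers the modified deletion can be sketched on the toy tuple representation (leaves are client IDs); the threshold value here is illustrative, and in the prototype the decision would use the per-node depth fields rather than a full traversal.

```python
def depth_extremes(tree, d=0):
    # Return (shallowest_depth, deepest_depth) over all leaves.
    if not isinstance(tree, tuple):
        return d, d
    ls, ld = depth_extremes(tree[0], d + 1)
    rs, rd = depth_extremes(tree[1], d + 1)
    return min(ls, rs), max(ld, rd)

def exceeds_threshold(tree, threshold=1):
    # TRUE when the tree shape calls for relocating the deepest leaf
    # (the modified deletion) rather than a plain deletion.
    shallow, deep = depth_extremes(tree)
    return deep - shallow > threshold
```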
3.2.1.2 Periodic Rebalancing
The periodic rebalancing algorithm works as follows: Repeat the steps
below until the difference between the shallowest leaf and the deepest
leaf in the tree is within some acceptable range.
* Find the deepest leaf in the tree.
* Delete it from the tree using the deletion algorithm
described in Section 3.2 (excluding the final step, in which
new keys are generated and distributed.)
* Add it to the shallowest point in the tree using the
addition algorithm described in Section 3.1 (again omitting
the key generation and distribution step.)
When this algorithm ends, many of the client nodes in the tree may be
in new locations. For every client node that has changed its
location, we need to update one or more keys. More specifically, for
every node in the tree where either an addition or a deletion
occurred, we need to update all keys on the path from that node to
the root. The rebalancing algorithm marks every node that needs a
key update by setting that node's key update flag to TRUE. All that
remains is to generate new keys, send them, and reset the key update
flags to FALSE. The best order in which to generate the new keys is
to use a depth-first traversal on the tree. This ensures that we do
not generate or distribute any key until all of its descendants have
been generated and sent to the group. It is unclear how useful this
algorithm would be in practice. For a severely unbalanced tree, the
algorithm would require updates to many of the keys in the tree, which
would generate a large amount of multicast network traffic. In
contrast, if the tree is not unbalanced, then key updates probably
already perform close to the optimal, logarithmic level.
We can justify using this algorithm only when we believe that it can
significantly decrease the amount of network traffic generated by key
update messages over the long term. Also, it is unclear how often a
tree would need to be rebalanced or how much rebalancing would be
necessary to improve the key update performance.
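The rebalancing loop above can be sketched on a functional tuple representation (nested pairs, leaves as client IDs). This is an illustrative sketch: the key generation and distribution (the depth-first pass) is omitted, and ties between equally deep leaves are broken arbitrarily.

```python
def extremes(t, d=0):
    # Return ((depth, leaf) of a shallowest leaf,
    #         (depth, leaf) of a deepest leaf).
    if not isinstance(t, tuple):
        return (d, t), (d, t)
    l_sh, l_dp = extremes(t[0], d + 1)
    r_sh, r_dp = extremes(t[1], d + 1)
    return min(l_sh, r_sh), max(l_dp, r_dp)

def delete(t, name):
    # Remove leaf `name`, promoting its sibling (as in Section 3.2).
    if not isinstance(t, tuple):
        return t
    if t[0] == name:
        return t[1]
    if t[1] == name:
        return t[0]
    return (delete(t[0], name), delete(t[1], name))

def insert(t, name):
    # Split a shallowest leaf into an interior node whose other child
    # is the new leaf (the RULE of Section 3.1).
    target = extremes(t)[0][1]
    def go(t):
        if not isinstance(t, tuple):
            return (t, name) if t == target else t
        return (go(t[0]), go(t[1]))
    return go(t)

def rebalance(t, tolerance=1):
    # Move a deepest leaf to a shallowest position until the depth
    # difference is within tolerance.
    while True:
        (shallow, _), (deep, leaf) = extremes(t)
        if deep - shallow <= tolerance:
            return t
        t = insert(delete(t, leaf), leaf)
```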
3.3 Additional Improvements to the Tree-Based Scheme
The previous sections on addition, deletion, and rebalancing focus on
conceptual issues related to manipulating a tree. This section
examines group management from a more practical standpoint and
highlights several techniques that we found useful for improving the
efficiency and usefulness of a tree-based key management system.
We describe two features that our prototype supports. The first is
an auxiliary data structure that helps our system achieve much faster
performance on client additions at the expense of somewhat slower
performance on some deletions. The second is an alternate key update
strategy: periodic, clock-based updates, rather than event-based
updates.
3.3.1 Auxiliary Client List
It is possible, using a reliable multicast infrastructure, to update
the group key with just two messages whenever a new client joins a
multicast group. The group key manager must simply generate a new
group key and multicast it to the current group encrypted under the
old group key. Then, the key manager must send the new key to the
joining member via a secure unicast connection. We support this
feature in our prototype by adding an extra data structure, an
auxiliary client list, to hold new clients before they can be inserted
into the tree. When a new client contacts the controller to join the
group, the controller negotiates a shared key with the client, as
usual, and also gives the client a new group key. At the same time,
the controller multicasts the new group key to the existing group
members under the previous group key. Then, rather than insert the
new client into the tree immediately, the controller inserts the
client ID and shared key for that client in the auxiliary client list.
Arbitrarily many new clients can reside in the list. Clients in this
list do not need to be inserted into the tree until an existing client
leaves the group. When a client leaves the group, the controller
performs the following operations.
If there are no clients in the auxiliary client list, simply delete the
exiting client as described above. If there are clients in the
auxiliary list, remove the first client and its shared key from the
list, delete the node of the exiting client, and replace the deleted
node with a new node for the new client. Update all of the ancestors
of the new node by setting their key update flags to TRUE. Then
remove the remaining clients from the auxiliary list, create new
client nodes for them, and insert them into the tree, updating their
ancestor nodes accordingly. Do not generate or distribute any new keys
until after all new clients have been added into the tree. When the
tree has been updated, traverse it in depth-first order, generating and
sending new keys as necessary. This tree traversal and key update
process works exactly as described above.
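The two-message join with an auxiliary client list can be sketched as follows; the class and the toy encrypt() stand-in are illustrative only (not real cryptography, and not the prototype's actual interface).

```python
def encrypt(key, msg):
    return f"E({key},{msg})"           # toy stand-in, NOT real crypto

class Controller:
    def __init__(self, group_key):
        self.group_key = group_key
        self.aux_list = []             # (client_id, shared_key) pairs

    def join(self, client_id, shared_key, new_group_key):
        # Two messages per join: multicast the new group key under the
        # old one, unicast it to the joiner under its shared key.  Tree
        # insertion is deferred until some client leaves.
        old = self.group_key
        self.group_key = new_group_key
        self.aux_list.append((client_id, shared_key))
        return {
            "multicast": encrypt(old, new_group_key),
            "unicast": encrypt(shared_key, new_group_key),
        }

ctl = Controller("GK1")
msgs = ctl.join("M5", "k5", "GK2")
```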
3.3.2 Time-Based Key Updates
There are some secure multicast applications, such as pay-per-view
services, that add and delete clients en masse at pre-specified
intervals. Also, many secure systems associate a time stamp and
expiration time with keys. We have implemented support for both of
these kinds of applications in our prototype by building into it two
key update options. The first option is event-based updates, which we
have been discussing throughout this paper so far. The second option
is to update keys periodically, based on a clock. We describe this
scheme here.
In a periodic key update scheme, the controller performs key updates
every t seconds. A key update consists of performing any pending
client additions and deletions and updating the shared group keys. A
client may contact the controller at any time to enter or exit the
group, but the controller does not actually add or delete that client
until the next key update. If, for example, a client contacts the
controller with a join request, the controller sends a shared key to
the client, so the client can decrypt future key update messages and
verify the signatures on them. Then the controller adds the new client
and its shared key into a list of pending client additions, called an
add-list. The controller also stores a delete-list of clients that
will be deleted at the next key update. (The delete-list contains only
client ID's.) When a key update occurs, the controller performs the
following operations:
1. Remove one client, C_add, from the add-list, and remove one
client, C_del, from the delete-list. Locate the client node of
C_del in the tree and replace its client ID and shared key with
the ID and shared key for C_add. Then update the client hash
table to reflect the addition of C_add and the deletion of
C_del. Finally, change the key update flag for each ancestor
of the edited node to TRUE. Repeat this process until either
the add-list or the delete-list is empty.
2. If the delete-list is not empty, remove all client ID's from it
and delete these clients from the tree as described above. Also
update the client hash table to reflect the deletions.
3. If the add-list is not empty, remove all client ID's and their
shared keys from it and add these clients to the tree as in
Section 3.1. Then update the client hash table to reflect
the additions.
4. Finally, update all of the keys in the tree using a depth-first
tree traversal, as described previously. Update the shared group
key at the root even if none of the keys below it have changed.
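Steps 1 through 3 amount to pairing off pending additions with pending deletions and then handling the surplus; a minimal sketch of that planning stage (names illustrative, and the tree edits and key updates of step 4 omitted):

```python
def plan_periodic_update(add_list, delete_list):
    # Step 1: pair each joining client with a leaving client so the
    # pair reuses one leaf position, until one list is exhausted.
    pairs = list(zip(add_list, delete_list))
    n = len(pairs)
    return {
        "replace": pairs,              # C_add takes C_del's node
        "delete": delete_list[n:],     # step 2: surplus departures
        "add": add_list[n:],           # step 3: surplus joiners
    }

plan = plan_periodic_update(["A1", "A2", "A3"], ["D1"])
```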
4. REFERENCES
[1] Debby M. Wallner, Eric J. Harder, and Ryan C. Agee. Key Management
for Multicast: Issues and Architectures. IETF Informational RFC,
September 1998.
[2] Chung Kei Wong, Mohamed Gouda, and Simon S. Lam. Secure Group
Communication Using Key Graphs. Proceedings of ACM SIGCOMM 1998.
[3] D. Balenson, D. McGrew, and A. Sherman. Key Management for Large
Dynamic Groups: One-Way Function Trees and Amortized Initialization.
IETF Internet Draft (work in progress), February 1999.
5. AUTHORS' ADDRESSES
Matthew J. Moyer
College of Computing
Georgia Institute of Technology
Atlanta, GA
Email: mjm@cc.gatech.edu
Josyula R. Rao and Pankaj Rohatgi
IBM Thomas J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598
Email: {jrrao,rohatgi}@watson.ibm.com
Expires 25 December 1999.