Network Working Group J. Zelenka
Internet-Draft B. Welch
Expires: April 26, 2006 B. Halevy
Panasas
October 23, 2005
Object-based pNFS Operations
draft-zelenka-pnfs-obj-02.txt
Status of this Memo
By submitting this Internet-Draft, each author represents that any
applicable patent or other IPR claims of which he or she is aware
have been or will be disclosed, and any of which he or she becomes
aware will be disclosed, in accordance with Section 6 of BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
This Internet-Draft will expire on April 26, 2006.
Copyright Notice
Copyright (C) The Internet Society (2005).
Abstract
This Internet-Draft provides a description of the object-based pNFS
extension for NFSv4. It is a companion to the main pNFS operations
draft, currently draft-ietf-nfsv4-pnfs-00.txt [3].
Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
Zelenka, et al. Expires April 26, 2006 [Page 1]
Internet-Draft pnfs ops October 2005
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [1].
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3
2. Object-Based Layout . . . . . . . . . . . . . . . . . . . . . 3
2.1 osd2_object_layouttype4 . . . . . . . . . . . . . . . . . 4
2.2 pnfs_layoutupdate4 . . . . . . . . . . . . . . . . . . . . 4
3. Generic Layout Alternative . . . . . . . . . . . . . . . . . . 5
3.1 LAYOUTGET . . . . . . . . . . . . . . . . . . . . . . . . 5
3.2 LAYOUTCOMMIT . . . . . . . . . . . . . . . . . . . . . . . 7
3.3 Mapping virtual object offsets to component object
offsets . . . . . . . . . . . . . . . . . . . . . . . . . 8
4. Usage and implementation notes . . . . . . . . . . . . . . . . 9
5. Security Considerations . . . . . . . . . . . . . . . . . . . 11
5.1 Object Layout Security . . . . . . . . . . . . . . . . . . 11
5.2 Revoking capabilities . . . . . . . . . . . . . . . . . . 12
6. Normative References . . . . . . . . . . . . . . . . . . . . . 13
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . 14
Intellectual Property and Copyright Statements . . . . . . . . 15
1. Introduction
In pNFS, the file server returns typed layout structures that
describe where file data is located. There are different layouts for
different storage systems and methods of arranging data on storage
devices. This document describes several layouts to be used with
object-based storage devices (OSD) that are accessed according to the
iSCSI/OSD storage protocol standard (SNIA T10/1355-D [2]).
An "object" is a container for data and attributes, and files are
stored in one or more objects. The OSD protocol specifies several
operations on objects, including READ, WRITE, FLUSH, GETATTR,
SETATTR, CREATE and DELETE. However, in this proposal the client
only uses the READ, WRITE, and FLUSH OSD commands. The other
commands are only used by the pNFS server.
The OSD protocol has a capability-based security scheme that allows
the pNFS server to control what operations and what objects are used
by clients. This scheme is described in more detail in the "Security
Considerations" section.
An object-based layout for pNFS includes object identifiers,
capabilities that allow clients to READ or WRITE those objects, and
various parameters that control how file data is striped across their
component objects.
2. Object-Based Layout
This section gives the data structure definitions that match those
specified in the current pNFS draft. In this case, the split between
files and objects is made at the top-level, and each subsystem has
its own parameters for striping. (The following section preserves an
earlier proposal that attempts to extract the common layout
information up to a more general top level data structure.)
union pnfs_layoutdata4 switch (pnfs_layouttype4 layout_type) {
case LAYOUT_NFSV4_FILES:
nfsv4_file_layouttype4 file_layout;
case LAYOUT_OSD2_OBJECTS:
osd2_object_layouttype4 object_layout;
default:
opaque layout_data<>;
};
Figure 1
2.1 osd2_object_layouttype4
The osd2_object_layouttype4 defines a striped layout over objects
identified with the pnfs_layout_osd2id4 type. We use the generic
pnfs_deviceid4 to specify the object storage devices. The OSD
standard defines partitions within a device, and objects are numbered
within their partition.
Associated with each object is a capability that grants access to it.
Here the type is opaque, but this document will eventually specify a
type compatible with the OSD standard. Relevant parts of the
capability include a byte offset and length that define the range of
the object to which the capability applies, an expiration time, and a
capability version number (also called a "policy tag").
The striping is determined by the stripe_unit, the size in bytes of
each stripe unit, and by the length of the dev_list array. The object
layout always uses a "dense" layout as described in the pNFS
document. The file_size is a snapshot of the file size at the time
the layout was returned to the pNFS client.
struct pnfs_layout_osd2id4 {
pnfs_deviceid4 device_id;
uint64 partition_id;
uint64 object_id;
};
struct pnfs_object_cap4 {
pnfs_layout_osd2id4 object_id;
opaque capability<>;
};
struct osd2_object_layouttype4 {
length4 stripe_unit;
length4 file_size;
pnfs_object_cap4 dev_list<>;
};
Figure 2
2.2 pnfs_layoutupdate4
When a layout is updated with LAYOUTCOMMIT, the client returns
information to the server that describes how much data was written,
and if there were any errors during I/O.
struct pnfs_layout_component_ioerr4 {
      pnfs_layout_componentid4 component;
      uint64 offset;
      uint64 length;
};

union deltaspaceused4 switch (bool valid) {
case TRUE:
      int64 delta;
case FALSE:
      void;
};

union pnfs_layoutupdate4 switch (pnfs_layout_storage_type4 storage_type) {
case PNFS_LAYOUT_STORAGE_TYPE_NFSV4_FILES:
      void;
case PNFS_LAYOUT_STORAGE_TYPE_OSD2_OBJECTS:
      deltaspaceused4 delta_space_used;
      newtime4 time_metadata;
      pnfs_layout_component_ioerr4 ioerr<>;
default:
      opaque layout_data<>;
};
Figure 3
3. Generic Layout Alternative
3.1 LAYOUTGET
There was some discussion of moving striping parameters out of the
layout-specific information and into a more general level. This
section attempts to capture the spirit of that change and expand upon
it by separating the "aggregation type" of a pNFS file from the
"storage type" of a pNFS file.
The layouts defined here provide striped, mirrored, and stripe-
mirrored data organizations. See the discussion section below for
more details on usage of these layouts.
enum pnfs_layout_storage_type4 {
      PNFS_LAYOUT_STORAGE_TYPE_NFSV4_FILES = 1,
      PNFS_LAYOUT_STORAGE_TYPE_OSD2_OBJECTS = 2,
      PNFS_LAYOUT_STORAGE_TYPE_SBC = 3
};
struct pnfs_layout_osd2id4 {
uint64 device_id;
uint64 partition_id;
uint64 object_id;
};
union pnfs_layout_objectid4 switch (pnfs_layout_storage_type4 storage_type) {
case PNFS_LAYOUT_STORAGE_TYPE_NFSV4_FILES:
      nfs_fh4 fh;
case PNFS_LAYOUT_STORAGE_TYPE_OSD2_OBJECTS:
      pnfs_layout_osd2id4 object_id;
      uint64 offset;
      uint64 length;
      opaque capability<>;
case PNFS_LAYOUT_STORAGE_TYPE_SBC:
      TBD ( {LUN,LBA range}<> ? )
};
struct pnfs_layout_componentid4 {
pnfs_deviceid4 dev_id;
pnfs_layout_objectid4 obj_id;
};
enum pnfs_aggregation_type4 {
      PNFS_AGGREGATION_TYPE_SIMPLE = 1,
      PNFS_AGGREGATION_TYPE_STRIPED_SPARSE = 2,
      PNFS_AGGREGATION_TYPE_STRIPED_DENSE = 3,
      PNFS_AGGREGATION_TYPE_MIRRORED = 4,
      PNFS_AGGREGATION_TYPE_STRIPED_MIRRORED_SPARSE = 5,
      PNFS_AGGREGATION_TYPE_STRIPED_MIRRORED_DENSE = 6
};
union pnfs_layout_aggregation_map4 switch (pnfs_aggregation_type4 aggr_type) {
case PNFS_AGGREGATION_TYPE_SIMPLE:
      pnfs_layout_componentid4 component;
case PNFS_AGGREGATION_TYPE_STRIPED_SPARSE:
case PNFS_AGGREGATION_TYPE_STRIPED_DENSE:
      length4 stripe_unit;
      pnfs_layout_componentid4 components<>;
case PNFS_AGGREGATION_TYPE_MIRRORED:
      pnfs_layout_componentid4 components<>;
case PNFS_AGGREGATION_TYPE_STRIPED_MIRRORED_SPARSE:
case PNFS_AGGREGATION_TYPE_STRIPED_MIRRORED_DENSE:
      length4 stripe_unit;
      uint16 mirror_cnt;
      pnfs_layout_componentid4 components<>;
};
union pnfs_layout_sec4 switch (pnfs_layout_storage_type4 storage_type) {
case PNFS_LAYOUT_STORAGE_TYPE_NFSV4_FILES:
      void;
case PNFS_LAYOUT_STORAGE_TYPE_OSD2_OBJECTS:
      opaque capability<>;
case PNFS_LAYOUT_STORAGE_TYPE_SBC:
      TBD
};
struct pnfs_layout4 {
pnfs_layout_aggregation_map4 map;
pnfs_layout_sec4 sec;
};
Figure 4
3.2 LAYOUTCOMMIT
struct pnfs_layout_component_ioerr4 {
      pnfs_layout_componentid4 component;
      uint64 offset;
      uint64 length;
};

union deltaspaceused4 switch (bool valid) {
case TRUE:
      int64 delta;
case FALSE:
      void;
};

union pnfs_layoutupdate4 switch (pnfs_layout_storage_type4 storage_type) {
case PNFS_LAYOUT_STORAGE_TYPE_NFSV4_FILES:
      void;
case PNFS_LAYOUT_STORAGE_TYPE_OSD2_OBJECTS:
      deltaspaceused4 delta_space_used;
      newtime4 time_metadata;
      pnfs_layout_component_ioerr4 ioerr<>;
default:
      opaque layout_data<>;
};
Figure 5
3.3 Mapping virtual object offsets to component object offsets
For AGGREGATION_TYPE_SIMPLE, there is a 1:1 mapping between the
logical file and the physical object used to store the file.

For AGGREGATION_TYPE_MIRRORED, there is a 1:N mapping between each
byte in the logical file and the corresponding bytes in the physical
objects used to store the file; every storage object is an exact copy
of the logical object. Bytes at offset N length L correspond to bytes
at offset N length L in each component mirror. Note that the layout
definition supports any number of mirrors.
For AGGREGATION_TYPE_STRIPED_DENSE and
AGGREGATION_TYPE_STRIPED_MIRRORED_DENSE, the data is densely packed
in component objects. The layout specifies a stripe_unit.
In AGGREGATION_TYPE_STRIPED_{DENSE,SPARSE}, the number of devices in
the stripe is equal to the length of the components<> array.
For AGGREGATION_TYPE_STRIPED_MIRRORED_{SPARSE,DENSE}, the number of
devices in the stripe is equal to the length of the components<> array
divided by mirror_cnt. The components<> array is indexed so that
mirrors of the same stripe device appear adjacent to one another.
Thus, a layout with six entries in the components array and a
mirror_cnt of 2 would have a components array <D0a D0b D1a D1b D2a
D2b>. D0a and D0b are mirrors; D1a and D1b are mirrors; D2a and D2b
are mirrors.
In STRIPED_DENSE and STRIPED_MIRRORED_DENSE, the stripe width (S) is
the stripe_unit times the number of devices in the stripe. To map
offset L in the virtual object, one determines the stripe number N by
computing N = L / S. The device number D = (L-(N*S)) / stripe_unit.
The offset (o) within device D's component object is
(N*stripe_unit) + (L%stripe_unit).
For example, consider an object striped over four devices, <D0 D1 D2
D3>. The stripe_unit is 4096 bytes. The stripe width S is thus 4 *
4096 = 16384.
Offset 0:
N = 0 / 16384 = 0
D = (0-(0*16384)) / 4096 = 0 (D0)
o = (0*4096)+(0%4096) = 0
Offset 4096:
N = 4096 / 16384 = 0
D = (4096-(0*16384)) / 4096 = 1 (D1)
o = (0*4096)+(4096%4096) = 0
Offset 9000:
N = 9000 / 16384 = 0
D = (9000-(0*16384)) / 4096 = 2 (D2)
o = (0*4096)+(9000%4096) = 808
Offset 132000:
N = 132000 / 16384 = 8
D = (132000-(8*16384)) / 4096 = 0 (D0)
o = (8*4096) + (132000%4096) = 33696
Figure 6
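The dense-stripe arithmetic above can be checked with a short sketch
(Python purely for illustration; the function and variable names are
ours, not part of the protocol):

```python
def map_dense(L, stripe_unit, num_devices):
    """Map a virtual-object offset L to (device index, component offset)
    for a STRIPED_DENSE layout, per the formulas in Section 3.3."""
    S = stripe_unit * num_devices            # stripe width
    N = L // S                               # stripe number
    D = (L - N * S) // stripe_unit           # device within the stripe
    o = N * stripe_unit + (L % stripe_unit)  # offset within D's component
    return D, o

def mirror_component_index(D, m, mirror_cnt):
    """For STRIPED_MIRRORED layouts, mirrors of the same stripe device
    are adjacent in components<>: stripe device D, mirror m."""
    return D * mirror_cnt + m

# Reproduce the Figure 6 examples (stripe_unit = 4096, four devices):
assert map_dense(0, 4096, 4) == (0, 0)
assert map_dense(4096, 4096, 4) == (1, 0)
assert map_dense(9000, 4096, 4) == (2, 808)
assert map_dense(132000, 4096, 4) == (0, 33696)
```

With six components and mirror_cnt = 2 as in the example above,
mirror_component_index(1, 1, 2) selects D1b at array index 3.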
4. Usage and implementation notes
When a client wishes to access storage directly, it issues a
LAYOUTGET for the object. If it receives NFS4ERR_LAYOUTUNAVAILABLE,
it remembers that layouts are not available for this object, and
subsequent accesses are performed through the server using normal
NFSv4 operations. If it receives NFS4ERR_LAYOUTTRYLATER, it
satisfies its immediate I/O needs with normal NFSv4 operations, but
after a short time may retry the LAYOUTGET.
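The client behavior described above can be summarized in a small
sketch (only the NFS4ERR_* names come from this draft; the rest is a
hypothetical client-side illustration):

```python
def handle_layoutget_error(err, state):
    """Decide how a pNFS client reacts to a failed LAYOUTGET.
    'state' is per-object client bookkeeping (invented for this sketch)."""
    if err == "NFS4ERR_LAYOUTUNAVAILABLE":
        # Layouts will never be available: fall back to normal NFSv4
        # operations for all subsequent access to this object.
        state["layouts_available"] = False
        return "use_nfsv4_io"
    if err == "NFS4ERR_LAYOUTTRYLATER":
        # Satisfy immediate I/O via normal NFSv4 operations, but retry
        # LAYOUTGET after a short delay.
        state["retry_layoutget"] = True
        return "use_nfsv4_io"
    return "fail"
```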
The access to data objects given to clients via LAYOUTGET is strictly
for the purpose of reading and writing data. Clients should always
retrieve attributes by requesting them from the metadata server, and
attribute updates should only be done through the metadata server.
When ANSI/T10 objects are used for the backing store, the only T10
commands that pNFS clients SHOULD issue to storage are READ, WRITE,
and FLUSH.
We expect clients to flush any cached writes before releasing locks
or issuing CLOSEs. When a client holds a layout delegation, this
flush should include a LAYOUTCOMMIT.
Mirrored object types require additional serialization of updates to
ensure correct operation. Otherwise, if two clients simultaneously
write to the same logical range of an object, the result could
include different data in the same ranges of mirrored tuples. It is
the responsibility of the metadata server to enforce serialization
requirements such as this. For example, the metadata server may do
so by not granting overlapping write layouts within mirrored objects.
As with non-layout-delegated NFSv4 reads and writes, applications
should not assume any particular serialization of accesses for any
particular layout type. Applications may use NFSv4 advisory or
mandatory locks to obtain the desired serialization.
When the server receives a layout request that it cannot grant due to
a sharing issue (for example, LAYOUTGET for writing on a mirrored
object, where another client holds a layout for writing), the server
may issue a CB_LAYOUTRECALL to the client (or clients) holding
conflicting layouts, and it will respond to the new request with
NFS4ERR_LAYOUTTRYLATER. When the clients return their layouts, they
may simply issue a LAYOUTRETURN and cease using the layout.
Alternatively, they may issue LAYOUTRETURN and LAYOUTGET in the same
compound operation, thus requesting a new layout. In this sharing
case, the server could reply with a new layout, or it could determine
that the access pattern results in inefficient use of direct access
to storage, and may choose to coerce all accesses to use NFSv4 reads
and writes rather than directly accessing storage.
When a client issues a LAYOUTGET and it receives a layout that
contains the beginning of the byterange it requested, it may
immediately issue another LAYOUTGET for the subsequent byterange.
When a layout is granted but the offset of the layout is past the
beginning of the range requested, the client should not immediately
re-request a layout for the non-granted range; instead, it should assume
that such a request would fail with NFS4ERR_LAYOUTTRYLATER. When a
server receives a request for a layout range which it cannot entirely
grant, it should either fail the entire LAYOUTGET with
NFS4ERR_LAYOUTTRYLATER, or it should grant the sub-range with the
lowest offset.
At any time, a client that holds a layout may issue a LAYOUTCOMMIT.
In the LAYOUTCOMMIT args, lastbytewritten represents the largest
offset at which the client wrote. Offset and length represent the
range of the file covered by the layout.
The layoutupdate field of LAYOUTCOMMIT args allows the client to
propagate new attributes to the server in addition to those normally
propagated in LAYOUTCOMMIT4args. Delta_space_used is not an absolute
value for the space_used attribute, but instead is a measure of the
change in space_used as a result of the client's access(es). For each
attribute provided as part of LAYOUTCOMMIT, the client either
provides no value at all (the FALSE case of the union discriminator),
or it provides the new value that is currently set on storage. If a
client wishes to update the attributes on storage, it may issue a
SETATTR as part of the compound request containing the LAYOUTCOMMIT.
5. Security Considerations
The pNFS extension partitions the NFSv4 file system protocol into two
parts, the control path and the data path (storage protocol). The
control path contains all the new operations described by this
extension; all existing NFSv4 security mechanisms and features apply
to the control path. The combination of components in a pNFS system
is required to preserve the security properties of NFSv4 with respect
to an entity accessing data via a client, including security
countermeasures to defend against threats that NFSv4 provides
defenses for in environments where these threats are considered
significant.
5.1 Object Layout Security
The object storage protocol relies on a cryptographically secure
capability to control accesses at the object storage devices.
Capabilities are generated by the metadata server, returned to the
client, and used by the client as described below to authenticate
their requests to the Object Storage Device (OSD). Capabilities
therefore achieve the required access and open mode checking. They
allow the file server to define and check a policy (e.g., open mode)
and the OSD to check and enforce that policy without knowing the
details (e.g., user IDs and ACLs).
Each capability is specific to a particular object, an operation on
that object, and a byte range within the object, and it has an
explicit expiration time. The capabilities are signed with a secret
key that is shared by the object storage devices (OSDs) and the
metadata managers. Clients do not have device keys, so they are unable
to forge capabilities.
The details of the security and privacy model for Object Storage are
defined in the T10 OSD standard. The following sketch of the
algorithm should help the reader understand the basic model.
LAYOUTGET returns
{CapKey = MAC<SecretKey>(CapArgs), CapArgs}
The client uses CapKey to sign all the requests it issues for that
object using the respective CapArgs. In other words, the CapArgs
appears in the request to the storage device, and that request is
signed with the CapKey as follows:
ReqMAC = MAC<CapKey>(Req, NonceIn)
The following is sent to the OSD: {CapArgs, Req, NonceIn, ReqMAC}.
The OSD uses the SecretKey it shares with the metadata server to
compare the ReqMAC the client sent with a locally computed
MAC<MAC<SecretKey>(CapArgs)>(Req, NonceIn)
and if they match the OSD assumes that the capabilities came from an
authentic metadata server and allows access to the object, as allowed
by the CapArgs. Therefore, if the server's LAYOUTGET reply, which
holds the CapKey and CapArgs, is snooped by another client, it can be
used to generate valid OSD requests (within the CapArgs access
restrictions).
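As a rough illustration of the exchange sketched above, the following
uses HMAC-SHA1 as the MAC purely for demonstration; the actual MAC
algorithm, CapArgs encoding, and nonce handling are those defined by
the T10 OSD standard, not what is shown here:

```python
import hashlib
import hmac

def mac(key, *parts):
    # Stand-in MAC; the real algorithm is defined by the T10 OSD standard.
    return hmac.new(key, b"".join(parts), hashlib.sha1).digest()

def make_capability(secret_key, cap_args):
    """Metadata server: CapKey = MAC<SecretKey>(CapArgs); LAYOUTGET
    returns {CapKey, CapArgs} to the client."""
    return mac(secret_key, cap_args), cap_args

def sign_request(cap_key, req, nonce_in):
    """Client: sign each storage request with the CapKey."""
    return mac(cap_key, req, nonce_in)

def osd_verify(secret_key, cap_args, req, nonce_in, req_mac):
    """OSD: recompute MAC<MAC<SecretKey>(CapArgs)>(Req, NonceIn) from the
    shared SecretKey and the CapArgs presented in the request."""
    expected = mac(mac(secret_key, cap_args), req, nonce_in)
    return hmac.compare_digest(expected, req_mac)
```

A request signed with a valid CapKey verifies at the OSD; tampering
with the request (or the CapArgs) makes verification fail, without the
OSD ever needing to know user IDs or ACLs.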
To meet the privacy requirements for the capabilities returned by
LAYOUTGET, the GSS-API can be used, e.g., by using a session key known
to the file server and to the client to encrypt the whole layout or
parts of it. Two general ways to provide privacy in the absence of the
GSS-API, independent of NFSv4, are an isolated network such as a VLAN
or a secure channel provided by IPsec.
5.2 Revoking capabilities
At any time, the metadata server may invalidate all outstanding
capabilities on an object by changing its capability version. This
causes the OSD to reject subsequent accesses to the object using
capabilities signed using the old capability version. When a client
attempts to use a capability and discovers a capability version
mismatch, it should issue a LAYOUTRETURN for the object. The client
may elect to issue a compound LAYOUTRETURN/LAYOUTGET (or
LAYOUTCOMMIT/LAYOUTRETURN/LAYOUTGET) to attempt to fetch a refreshed
set of capabilities.
The metadata server may elect to change the capability version on an
object at any time, for any reason (with the understanding that there
is likely an associated performance penalty, especially if there are
outstanding layouts for this object). The metadata server MUST
revoke outstanding capabilities when any one of the following occurs:
(1) the permissions on the object change, or (2) a conflicting
mandatory byte-range lock is granted.
A pNFS client will typically hold one layout for each byte range for
either READ or READ/WRITE. It is the pNFS client's responsibility to
enforce access control among multiple users accessing the same file.
It is neither required nor expected that the pNFS client will obtain
a separate layout for each user accessing a shared object. The
client SHOULD use ACCESS calls to check user permissions when
performing I/O so that the server's access control policies are
correctly enforced. The result of the ACCESS operation may be cached
indefinitely, as the server is expected to recall layouts when the
file's access permissions or ACL change.
6. Normative References
[1] Bradner, S., "Key words for use in RFCs to Indicate Requirement
Levels", March 1997.
[2] Weber, R., "SCSI Object-Based Storage Device Commands",
July 2004, <http://www.t10.org/ftp/t10/drafts/osd/osd-r10.pdf>.
[3] Goodson, G., "NFSv4 pNFS Extensions", October 2005, <ftp://
www.ietf.org/internet-drafts/draft-ietf-nfsv4-pnfs-00.txt>.
[4] Shepler, S., Callaghan, B., Robinson, D., Thurlow, R., Beame,
C., Eisler, M., and D. Noveck, "Network File System (NFS)
version 4 Protocol", RFC 3530, April 2003.
Authors' Addresses
Jim Zelenka
Panasas, Inc.
1501 Reedsdale St. Suite 400
Pittsburgh, PA 15233
USA
Phone: +1-412-323-3500
Email: jimz@panasas.com
URI: http://www.panasas.com/
Brent Welch
Panasas, Inc.
6520 Kaiser Drive
Fremont, CA 95444
USA
Phone: +1-650-608-7770
Email: welch@panasas.com
URI: http://www.panasas.com/
Benny Halevy
Panasas, Inc.
1501 Reedsdale St. Suite 400
Pittsburgh, PA 15233
USA
Phone: +1-412-323-3500
Email: bhalevy@panasas.com
URI: http://www.panasas.com/
Intellectual Property Statement
The IETF takes no position regarding the validity or scope of any
Intellectual Property Rights or other rights that might be claimed to
pertain to the implementation or use of the technology described in
this document or the extent to which any license under such rights
might or might not be available; nor does it represent that it has
made any independent effort to identify any such rights. Information
on the procedures with respect to rights in RFC documents can be
found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any
assurances of licenses to be made available, or the result of an
attempt made to obtain a general license or permission for the use of
such proprietary rights by implementers or users of this
specification can be obtained from the IETF on-line IPR repository at
http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any
copyrights, patents or patent applications, or other proprietary
rights that may cover technology that may be required to implement
this standard. Please address the information to the IETF at
ietf-ipr@ietf.org.
Disclaimer of Validity
This document and the information contained herein are provided on an
"AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET
ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE
INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Copyright Statement
Copyright (C) The Internet Society (2005). This document is subject
to the rights, licenses and restrictions contained in BCP 78, and
except as set forth therein, the authors retain all their rights.
Acknowledgment
Funding for the RFC Editor function is currently provided by the
Internet Society.