Internet-Draft | NFSv4 Internationalization | October 2020 |
Noveck | Expires 4 April 2021 | [Page] |
This document describes the handling of internationalization for all NFSv4 protocols, including NFSv4.0, NFSv4.1, NFSv4.2 and extensions thereof, and future minor versions.¶
It updates RFC7530 and RFC5661.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 4 April 2021.¶
Copyright (c) 2020 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.¶
Internationalization is a complex topic with its own set of terminology (see [22]). The topic is made more complex for the NFSv4 protocols by the tangled history described in Section 3. In large part, this document is based on the actual behavior of NFSv4 client and server implementations (for all existing minor versions) and is intended to serve as a basis for further implementations to be developed that can interact with existing implementations as well as those to be developed in the future.¶
Note that the behaviors on which this document is based are each demonstrated by a combination of an NFSv4 server implementation proper and a server-side physical file system. It is common for servers and physical file systems to be configurable as to the behavior shown. In the discussion below, each configuration that shows different behavior is considered separately.¶
As a consequence of this choice, normative terms defined in RFC2119 [1] are often derived from implementation behavior, rather than the other way around, as is more commonly the case. The specifics are discussed in Section 2.¶
With regard to the question of interoperability with existing specifications for NFSv4 minor versions, different minor versions pose different issues.¶
With regard to NFSv4.0 as defined in RFC7530 [3], no significant interoperability issues are expected to arise because the internationalization in that specification, which is the basis for this one, was also based on the behavior of existing implementations. Although, in a formal sense, the treatment of internationalization here supersedes that in RFC7530 [3], the treatments are intended to be essentially the same, in order to eliminate interoperability issues.¶
Because of a change in the handling of Internationalized domain names, there are some differences from the handling in RFC7530 [3], as discussed in Section 3. For a discussion of those differences and potential compatibility issues, see Sections 12.1 and 12.2.¶
With regard to NFSv4.1 as defined in RFC5661 [4], the situation is quite different. The approach to internationalization specified in that document, based in large part on that in RFC3530, was never implemented, and implementers were either unaware of the troublesome implications of that approach or chose to ignore the existing specification as essentially unimplementable. An internationalization approach compatible with that specified in RFC7530 [3] tended to be followed, despite the fact that, in other respects, NFSv4.1 was considered to be a separate protocol.¶
If there were NFSv4 servers that obeyed the internationalization dictates within RFC5661 [4], or clients that expected servers to do so, they would fail to interoperate with typical clients and servers when dealing with non-UTF-8 file names, which are quite common. As no such implementations have come to our attention, it has to be assumed that they do not exist and that interoperability with existing implementations as described here is an appropriate basis for this document.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [1] [2] when, and only when, they appear in all capitals, as shown here.¶
Although the key words "MUST", "SHOULD", and "MAY" retain their normal meanings, as described above, we need to explain how the statements involving these terms were arrived at:¶
Note that in introductory and explanatory sections of this document (i.e., Sections 1 through 4), these terms do not appear except to explain how they are used in this document. Also, they do not appear in Appendix A, which provides non-normative implementation guidance.¶
With regard to the parts of this document deriving from RFC7530, we explain below how the normative terms used derive from the behavior of existing implementations, in those situations in which existing implementation behavior patterns can be determined.¶
Behavior implemented by all existing clients or servers is described using "MUST", since new implementations need to follow existing ones to be assured of interoperability. While it is possible that different behavior might be workable, we have found no case where this seems reasonable.¶
The converse holds for "MUST NOT": if a type of behavior poses interoperability problems, it MUST NOT be implemented by any existing clients or servers.¶
Behavior implemented by most existing clients or servers, where that behavior is more desirable than any alternative, is described using "SHOULD", since new implementations need to follow that existing practice unless there are strong reasons to do otherwise.¶
The converse holds for "SHOULD NOT".¶
There are also cases in which certain server behaviors, while not known to exist, cannot be reliably determined not to exist. In part, this is a consequence of the long period of time that has elapsed since the publication of the defining specifications, resulting in a situation in which those involved in the implementation work may no longer be involved in or aware of working group activities.¶
In the case of possible server behavior that is neither known to exist nor known not to exist, we use "SHOULD NOT" and "MUST NOT" as follows, and similarly for "SHOULD" and "MUST".¶
In the case of a "MAY", "SHOULD", or "SHOULD NOT" that applies to servers, clients need to be aware that there are servers that may or may not take the specified action, and they need to be prepared for either eventuality.¶
The history of internationalization within NFSv4 is discussed in this section. Despite the fact that NFSv4.0 and subsequent minor versions have differed in many ways, the actual implementations of internationalization have remained the same and internationalized names have been handled without regard to the minor version being used. As a result, this document is able to treat internationalization for all NFSv4 minor versions together.¶
During the period from the publication of RFC3010 [17] until now, two different perspectives with regard to internationalization have been held and represented, to varying degrees, in specifications for NFSv4 minor versions.¶
As specifications were developed, approved, and at times rewritten, this fundamental difference of approach was never fully resolved, although, with the publication of RFC7530 [3], a satisfactory modus vivendi may have been arrived at.¶
Although many specifications were published dealing with NFSv4 internationalization, all minor versions used the same implementation approach, even when the current specification for that minor version specified an entirely different approach. As a result, we need to treat the history of NFSv4 internationalization below as an integrated whole, rather than treating individual minor versions separately.¶
The approach to internationalization specified in RFC3010 [17] sidestepped the conflict of approaches cited above by discussing the reasons that UTF-8 encoding was desirable while leaving file names as uninterpreted strings of bytes. The issue of string normalization was avoided by saying "The NFS version 4 protocol does not mandate the use of a particular normalization form at this time."¶
Despite this approach's inconsistency with general IETF expectations regarding internationalization, RFC3010 was published as a Proposed Standard. NFSv4.0 implementation related to internationalization of file names followed the same paradigm used by NFSv3, assuring interoperability with files created using that protocol, as well as with those created using local means of file creation.¶
When it became necessary, because of issues with byte-range locking, to create an rfc3010bis, no change to the previously approved approach seemed indicated and the drafts submitted up until [24] closely followed RFC3010 as regards internationalization. The IESG then decided that a different approach to internationalization was required, to be based on stringprep [18] and rfc3010bis was accordingly revised, replacing all of the Internationalization section, before being published as RFC3530 [21].¶
These changes required the rejection of file names that were not valid UTF-8, file names that included code points not, at the time of publication, assigned a Unicode character (e.g. capital eszett) or that were not allowed by stringprep (e.g. Zero-width joiner and non-joiner characters). Because these restrictions would have caused the set of valid file names to be different on NFS-mounted and local file systems, there was no chance of them ever being implemented.¶
Because these specification changes were made without working group involvement, most implementers were unaware of them while those who were aware of the changes ignored them and continued to develop implementations based on the internationalization approach specified in RFC3010.¶
When NFSv4.1 was being developed, it seemed that no changes in internationalization would be required. Many people were unaware of the stringprep-based requirements which made the NFSv4.0 internationalization specified in RFC3530 unimplementable. As a result, the internationalization specified in RFC5661 [4] was based on that in RFC3530 [21], although the addition of the attribute fs_charset_cap, discussed below, provided additional flexibility.¶
The attribute fs_charset_cap, discussed below in Section 7 provides flags allowing the server to indicate that it accepts and processes non-UTF-8 file names. Rejecting them was a "MUST" in RFC3530 and became a "SHOULD" in RFC5661, although there is no evidence that any of these designations ever affected server behavior.¶
As a result of this treatment of internationalization, even though NFSv4.1 was a separate protocol and could have had a different approach to internationalization, for a considerable time, the internationalization specification for both protocols was based on stringprep (in RFC3530 and RFC5661) while the actual implementations of the two minor versions both followed the approach specified in RFC3010, despite its obsoleted status.¶
When work started on rfc3530bis it was clear that issues related to internationalization had to be addressed. When the implications of the stringprep references in RFC3530 were discussed with implementers it became clear that mandating that NFSv4.0 file names conform to stringprep was not appropriate. While some working group members articulated the view that, because of the need to maintain compatibility with the POSIX interface and existing file systems, internationalization for NFSv4 could not be successfully addressed by the IETF, the rfc3530bis draft submitted to the IESG did not explicitly embrace the implementers' perspective set forth above.¶
The draft submitted to the IESG and RFC7530 [3] as published provided an explanation (see Section 5) as to why restrictions on character encodings were not viable. It allowed non-UTF-8 encodings to be used for internationalized file names while defining UTF-8 as the preferred encoding and allowing servers to reject non-UTF-8 strings as invalid. Other stringprep-based string restrictions were eliminated. With regard to normalization, it continued to defer the matter, leaving open the possibility that one might be chosen later.¶
This approach is compatible, in implementation terms, with that specified in RFC3010 [17], allowing it to be used compatibly with existing implementations for all existing minor versions. This is despite the fact that RFC5661 [4] specifies an entirely different approach.¶
As a result of discussions leading up to the publishing of RFC7530, it was discovered that some local file systems used with NFSv4 were configured to be both normalization-aware and normalization-preserving, mapping all canonically equivalent file names to the same file while preserving the form actually used to create the file, of whatever form, normalized or not. This behavior, which is legal according to RFC3010 (which says little about name mapping), is probably illegal according to stringprep. Nevertheless, it was expressly pointed out in RFC7530 as a valid choice to deal with normalization issues, since it allows normalization-aware processing without the difficulties that arise in imposing a particular normalization form, as described in Section 9.¶
In its discussion of internationalized domain names, RFC7530 [3] adopted an approach compatible with IDNA2003, rather than attempting to derive the specification from the behavior of existing implementations.¶
NFSv4.2 made no changes to internationalization. As a result, RFC7862 [5], which made no mention of internationalization, implicitly aligned internationalization in NFSv4.2 with that in NFSv4.1, as specified by RFC5661 [4].¶
As a result of this implicit alignment, there is no need for this document to specifically address NFSv4.2 or be marked as updating RFC7862. It is sufficient that it updates RFC5661, which specifies the internationalization for NFSv4.1, inherited by NFSv4.2.¶
The above history can, for the purposes of the rest of this document, be summarized in the following statements:¶
In order to deal with all NFSv4 minor versions, this document follows the internationalization approach defined in RFC7530, with some changes discussed in Section 4 and applies that approach to all NFSv4 minor versions.¶
This document follows the internationalization approach defined in RFC7530, with a number of significant necessary changes.¶
There are a number of noteworthy circumstances that limit the degree to which internationalization-related encoding and normalization-related restrictions can be made universal with regard to NFSv4 clients and servers:¶
Servers MAY reject component name strings that are not valid UTF-8. This leads to a number of types of valid server behavior, as outlined below. When these are combined with the valid normalization-related behaviors as described in Section 8, this leads to the combined behaviors outlined below.¶
This attribute, nominally "RECOMMENDED", appears to have been added to NFSv4.1 to allow servers, while staying within the constraints of the stringprep-based specification of internationalization, to allow uses of UTF-8-unaware naming by clients. As a result, those NFSv4 servers implementing internationalization as NFSv3 had done could be considered spec-compliant, as long as a later "SHOULD" was ignored. However, because use of UTF-8 was tied to existing stringprep restrictions, implementations of internationalization that were aware of Unicode canonical equivalence issues were not provided for. Although this attribute may have been implemented despite the problems noted in Section 7.1, the overall scheme was never implemented and NFSv4.1 implementations dealt with internationalization as NFSv4.0 implementations had.¶
It is generally accepted that attributes designated "RECOMMENDED" are essentially OPTIONAL, with the client having the responsibility to deal with server non-support of them. While RFC7530 has gone so far as to explicitly exclude this use from the general statement that these terms are to be used as defined by RFC2119, no NFSv4.1 specification has done so, at least through RFC8881 [10]. In this particular case, there are a number of circumstances that make this OPTIONAL status noteworthy:¶
The attribute contains two flag bits. As discussed below in Section 7.1, it is hard to see why two bits are required, while the implications of this issue for future NFSv4.1 specifications will be discussed in Section 7.2.¶
We reproduce Section 14.4 of [10] below, with comments interspersed trying to make sense of what is there, in order to arrive at an appropriate replacement, to be presented in Section 7.2. In that connection, we need to understand better a few issues:¶
Issues related to the confusion caused by mention of "UTF-8 characters" and the lack of mention of Unicode will be addressed in the revision in Section 7.2 but will not be further discussed here.¶
   const FSCHARSET_CAP4_CONTAINS_NON_UTF8 = 0x1;
   const FSCHARSET_CAP4_ALLOWS_ONLY_UTF8  = 0x2;

   typedef uint32_t fs_charset_cap4;¶
While it is made clear that two separate bits are to be provided, their names seem to indicate that they should be complements of one another. As a way of understanding why two bits were specified, it is helpful to consider a possible boolean attribute as a potential replacement. That attribute would clearly govern whether names that do not conform to the rules of UTF-8 are to be rejected, which was a "MUST" in RFC3530 [21]. Although conveying this information is clearly part of the motivation, stating so clearly might have been judged by the authors as too provocative, given the role of IESG in arriving at the internationalization approach specified in RFC3530.¶
It is clear that the ability of operating environments to enforce use of UTF-8 encoding is not an issue, since RFC3530 made this the responsibility of the server implementation. That mandate was never followed because implementers chose not to follow it, and not because they were unable to do so. The apparently confused statement above is best understood if one notes that its essential job is to state that the "MUST" in RFC3530 referred to above is not reasonable. However, the authors might well feel unable to say so clearly, in light of the potential IESG reaction.¶
The problem with the mention of (plural) capabilities is that the only capability mentioned which servers could implement is to accept strings which are not valid UTF-8. There are other potential capabilities having to do with the implementation of canonical equivalence, but since they were not mentioned, they will not be discussed further here.¶
As stated, this would mean that a server would have to keep track of a count of non-UTF-8-encoded names within the file system and change the attribute value as that count varied between zero and non-zero. Since it is most unlikely that any server would keep track of that or that any client would find it useful, we will assume that the capability to store such names is what is most likely intended.¶
There is no way for the server to convert non-UTF-8 names to UTF-8 or anything else, since it has no knowledge of the name encoding to begin with. The alternative to treating names as UTF-8-encoded Unicode strings is to treat them as POSIX does, as uninterpreted strings of bytes. That makes it impossible to interpret strings that do not follow the rules of UTF-8 at all, making it impossible to convert the string to UTF-8.¶
As stated above, there is no way a server could ever do that.¶
That is clear and so it poses no problem for a revised treatment, unlike the other flag.¶
There is no problem with this statement. However, it does, by implication, raise the issue of what values of FSCHARSET_CAP4_CONTAINS_NON_UTF8 may be set in the case in which FSCHARSET_CAP4_ALLOWS_ONLY_UTF8 is set to zero.¶
According to RFC2119 [1], "SHOULD" means that "there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course". In this context, it is unclear what these "full implications" might be, given the introduction above. The clause, "because some operating environments and file systems do not enforce character set encodings", gives one no basis for treating this as other than an unproblematic behavior variant, calling into question the use of "SHOULD".¶
Also, RFC2119 states that these terms (i.e. those like "SHOULD") are to "only be used where it is actually required for interoperation or to limit behavior which has the potential for causing harm", casting further doubt on the use of "SHOULD" here, since no such interoperability requirement or potential for harm has been identified.¶
Despite the statement in RFC2119, that "they [i.e. terms such as 'SHOULD'] must not be used to impose a particular method on implementors", it is hard to avoid the conclusion that this is in fact the motivation for the "SHOULD", although the authors might not have had any such intention but felt that the IESG might well have such an intention.¶
We provide a revised version of Section 14.4 of [10] below, taking into account the issues noted in Section 7.1. Given there was a working group consensus to adopt the confusing language discussed there, we must now adopt, by consensus, a clearer replacement that reflects the working group's intentions. Given the passage of time and the changed context, it might not be possible to determine those intentions. In any case, we will have to be aware of how this attribute was implemented and used, particularly with regard to the first flag, whose meaning remains obscure.¶
The following treatment is proposed as a basis for discussion, with the understanding that it needs to be changed, if it raises interoperability issues.¶
   const FSCHARSET_CAP4_CONTAINS_NON_UTF8 = 0x1;
   const FSCHARSET_CAP4_ALLOWS_ONLY_UTF8  = 0x2;

   typedef uint32_t fs_charset_cap4;¶
Strings that potentially contain characters outside the ASCII range [11] are generally represented in NFSv4 using the UTF-8 encoding [8] of Unicode [12]. See [8] for precise encoding and decoding rules.¶
Some details of the protocol treatment depend on the type of string:¶
For strings that are component names, the preferred encoding for any non-ASCII characters is the UTF-8 representation of Unicode.¶
In many cases, clients have no knowledge of the encoding being used, with the encoding done at the user level under the control of a per-process locale specification. As a result, it may be impossible for the NFSv4 client to enforce the use of UTF-8. The use of non-UTF-8 encodings can be problematic, since it may interfere with access to files stored using other forms of name encoding. Also, normalization-related processing (see Section 9) of a string not encoded in UTF-8 could result in inappropriate name modification or aliasing. In cases in which one has a non-UTF-8 encoded name that accidentally conforms to UTF-8 rules, substitution of canonically equivalent strings can change the non-UTF-8 encoded name drastically.¶
For similar reasons, where non-UTF-8 encoded names are accepted, case-related mappings cannot be relied upon. For this reason, the attribute case_insensitive MUST NOT be returned as TRUE for file systems which accept non-UTF-8 encoded file names.¶
The kinds of modification and aliasing mentioned here can lead to both false negatives and false positives, depending on the strings in question, which can result in security issues such as elevation of privilege and denial of service (see [23] for further discussion).¶
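As an illustration of the kind of drastic modification referred to above, the following sketch (in Python, using only the standard library) shows how a name created under a non-UTF-8 locale, whose bytes happen to form valid UTF-8, would be altered by substitution of a canonically equivalent form; the particular byte values and the choice of Latin-1 are illustrative assumptions, not part of any specification.¶

   import unicodedata

   # Bytes produced under a Latin-1 locale ("cafÃ©") that happen to be
   # valid UTF-8 ("café", with precomposed U+00E9).
   raw = b"caf\xc3\xa9"
   as_utf8 = raw.decode("utf-8")                  # "café"

   # A server substituting the canonically equivalent NFD form would
   # store a different byte sequence, so a byte-for-byte lookup of the
   # original name by the non-UTF-8 client would no longer match.
   nfd = unicodedata.normalize("NFD", as_utf8)
   assert nfd.encode("utf-8") != raw              # b"cafe\xcc\x81" vs. b"caf\xc3\xa9"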
The client and server operating environments can potentially differ in their policies and operational methods with respect to character normalization (see [12] for a discussion of normalization forms). This difference may also exist between applications on the same client. This adds to the difficulty of providing a single normalization policy for the protocol that allows for maximal interoperability. This issue is similar to the issues of character case where the server may or may not support case-insensitive file name matching and may or may not preserve the character case when storing file names. The protocol does not mandate a particular behavior but allows for a range of useful behaviors.¶
The NFSv4 protocol does not mandate the use of a particular normalization form. A subsequent minor version of the NFSv4 protocol might specify a particular normalization form, although there would be difficulties in doing so (see Section 15 for details). In any case, the server and client can expect that they might receive unnormalized characters within protocol requests and responses. If the operating environment requires normalization, then the implementation will need to normalize the various UTF-8 encoded strings within the protocol before presenting the information to an application (at the client) or local file system (at the server).¶
Server implementations MAY normalize file names to conform to a particular normalization form before using the resulting string when looking up or creating a file. Servers MAY also perform normalization-insensitive string comparisons without modifying the names to match a particular normalization form. Except in cases in which component names are excluded from normalization-related handling because they are not valid UTF-8 strings, a server MUST make the same choice (as to whether to normalize or not, the target form of normalization, and whether to do normalization-insensitive string comparisons) in the same way for all accesses to a particular file system. Servers SHOULD NOT reject a file name because it does not conform to a particular normalization form, as this would deny access to clients that use a different normalization form or clients acting on behalf of application that use a different normalization form.¶
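To make the range of server choices just described concrete, the following minimal sketch (in Python) shows the two basic approaches: normalizing names to a single target form before lookup or creation, and leaving stored names untouched while comparing them in a normalization-insensitive way. The choice of NFC and NFD here is illustrative only; whatever choice is made must, as stated above, be applied uniformly to the file system, and names that are not valid UTF-8 are excluded from this handling.¶

   import unicodedata

   # Option 1: map each incoming name to a chosen normalization form
   # (NFC here, as an example) before using it for lookup or creation.
   def normalized_store_key(name: str) -> str:
       return unicodedata.normalize("NFC", name)

   # Option 2: store names exactly as received and perform
   # normalization-insensitive comparison at lookup time.
   def names_equivalent(stored: str, requested: str) -> bool:
       return (unicodedata.normalize("NFD", stored)
               == unicodedata.normalize("NFD", requested))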
When the server is to process file names in a case-insensitive way in a given file system, it may choose to do so in a number of ways.¶
When a server implements case-insensitive file name handling, it is necessary that clients do so as well. For example, if a client possessing the cached contents of a directory notes that the file "a" does not exist, it cannot immediately act on that presumed non-existence without checking for the potential existence of "A" as well. As a result, clients need to be able to provide case-insensitive name comparisons, irrespective of whether the server handling is case-preserving or not.¶
Because case-insensitive name comparisons are not always as straightforward as the above example suggests, the client, if it is to emulate the server's name handling, would need information about how certain cases are to be dealt with. In cases in which that information is unavailable, the client needs to avoid making assumptions about the server's handling, since it will be unaware of the Unicode version implemented by the server, or many of the details of specific issues that might need to be addressed differently by different server file systems in implementing case-insensitive name handling.¶
Many of the problematic issues with regard to the case-insensitive handling of names are discussed in Section 5.18 of the Unicode Standard [13], which deals with case mapping. While we need to address all of these issues as well, our approach will not be exactly the same.¶
Another source of information about case-folding, and indirectly about case-insensitive comparisons, is the case-folding text file which is part of the Unicode Standard [14]. This file contains, for each Unicode character that can be uppercased or lowercased, a single character or, in some cases, a string of characters of the other case. For characters in capital case, the lowercase counterpart is given. Each of the mappings is characterized as one of four types:¶
While the case mapping section does discuss case-insensitive string comparisons, and describes a procedure for constructing equivalence classes of Unicode characters, the description does not deal clearly with the effect of F-type mappings. There are a number of problems with dealing with F-type mappings for case folding and basing case-insensitive string comparisons on those mappings, particularly in situations, such as file systems, in which extensive processing of strings is unlikely to be possible.¶
Despite these potential difficulties, case mappings involving multi-character sequences can be reversed when used as a basis for case-insensitive string comparisons and incorporated into a set of equivalence classes on name strings.¶
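As a sketch of how such equivalence classes might be assembled, the following Python fragment reads the Unicode case-folding data file mentioned above (whose records have the form "code; status; mapping; # name") and groups each character, together with any multi-character F-type folding, under its folded form. The file name, and the decision to use only the C-type and F-type records, are assumptions of this sketch rather than requirements.¶

   from collections import defaultdict

   def load_case_equivalence_classes(path="CaseFolding.txt"):
       """Group characters (and F-type multi-character strings) into
       case-insensitive equivalence classes keyed by their folded form."""
       classes = defaultdict(set)
       with open(path, encoding="utf-8") as f:
           for line in f:
               line = line.split("#", 1)[0].strip()   # drop comments
               if not line:
                   continue
               code, status, mapping, *_ = [x.strip() for x in line.split(";")]
               if status not in ("C", "F"):           # ignore S- and T-type records
                   continue
               source = chr(int(code, 16))
               folded = "".join(chr(int(cp, 16)) for cp in mapping.split())
               classes[folded].update({source, folded})
       return classes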
Case-insensitive servers MAY do either case-mapping to a chosen case or case-insensitive string comparisons when providing a case-preserving implementation. In either case, they MAY include F-type mappings, which map a single character to a multi-character string. However, only in the case in which the server is doing case-insensitive string comparisons will it use the inverse of F-type mappings, in which a multi-character string is mapped to a single character of a different case.¶
In these cases, the server can choose to use either a C-type mapping or an F-type mapping, or both, when both exist. Similarly, the server may choose to implement the C-type mappings of LATIN CAPITAL LETTER I to LATIN SMALL LETTER I and vice versa, the corresponding T-type mappings, or both, although using only the second of these is NOT ALLOWED unless there is a means of informing the client that it has been chosen.¶
Implementing case-insensitive string comparisons based on equivalence classes including multi-character strings can be performed as described below. This algorithm requires that if there is more than one multi-character string within a given equivalence class, they must all be equivalent, with any equivalences derivable from case-insensitive string equivalence using single-character equivalence classes.¶
Although other sources are possible (see items EX2 and EX3 in Section 10.2), multi-character sequences often appear in case-insensitive equivalence classes as the result of the canonical decomposition of one or more precomposed characters as elements of a case-insensitive equivalence class.¶
While the algorithm described in this section can deal with certain case-based equivalences deriving from canonical decomposition, it is not capable of providing general handling of the combination of canonical equivalence and case-based equivalence. While this can be addressed by normalizing strings before doing case-insensitive comparison, it is more efficient to do a general form-insensitive and case-insensitive string comparison in a single step, as described in Appendix A.¶
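As an informal illustration of such a single-step comparison, the sketch below (in Python) applies the canonical caseless matching recipe of the Unicode Standard, comparing NFD(casefold(NFD(x))) keys; it assumes names already decoded from UTF-8 and makes no attempt at the table-driven efficiency of the algorithm that follows or of Appendix A.¶

   import unicodedata

   def caseless_canonical_equal(a: str, b: str) -> bool:
       """Form-insensitive and case-insensitive comparison sketch."""
       def key(s: str) -> str:
           # The outer NFD is needed because full case folding can
           # reintroduce sequences requiring canonical reordering.
           return unicodedata.normalize(
               "NFD", unicodedata.normalize("NFD", s).casefold())
       return key(a) == key(b)

   # ANGSTROM SIGN, precomposed A WITH RING ABOVE, and "a" followed by
   # COMBINING RING ABOVE all compare as equal.
   assert caseless_canonical_equal("\u212B", "a\u030A")
   assert caseless_canonical_equal("\u00C5", "\u00E5")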
The following tables would be used by the comparison algorithm presented below.¶
Case-insensitive comparison proceeds as follows:¶
In this section, we discuss many of the interesting and/or troublesome issues that the need for case-insensitive handling gives rise to in a fully internationalized environment. Many of these are also discussed in [13]. However, our treatment of these issues, while not inconsistent with that in [13], differs significantly for a number of reasons:¶
The examples below present common situations that go beyond the simple invertible case mappings of Latin characters and the straightforward adaptation of that model to Greek and Cyrillic. In EX4 and EX5 we have case-based equivalence classes including multi-character strings not derived from canonical equivalences while for EX7 and EX8 all multi-character strings are derived from canonical equivalences. In addition, EX1, EX2, EX3 and EX6 discuss other situations in which an equivalence class has more than two elements.¶
Certain digraph characters such as LATIN SMALL LETTER DZ (U+01F3) have additional case variants to consider, such as the titlecase character LATIN CAPITAL LETTER D WITH SMALL LETTER Z (U+01F2), in addition to the uppercase LATIN CAPITAL LETTER DZ (U+01F1). While the titlecased variant would not appear in names in case-insensitive, non-case-preserving file systems, case-insensitive string comparison has no problem in treating these three characters as within the same equivalence class.¶
This equivalence class can be derived from only C-type mappings. The possibility of mapping these characters to the two-character sequences they represent is not a troublesome issue, since that would be derived from a compatibility equivalence, rather than a canonical equivalence, and there is no F-type mapping making it an option.¶
To deal with the case of the OHM SIGN (U+2126) which is essentially identical to the GREEK CAPITAL LETTER OMEGA (U+03A9), one can construct an equivalence class consisting of OHM SIGN (U+2126), GREEK CAPITAL LETTER OMEGA (U+03A9), and GREEK SMALL LETTER OMEGA (U+03C9).¶
This equivalence class can be derived only from C-type mappings. Both OHM SIGN (U+2126) and GREEK CAPITAL LETTER OMEGA (U+03A9) lowercase to GREEK SMALL LETTER OMEGA (U+03C9), while that character only uppercases to GREEK CAPITAL LETTER OMEGA (U+03A9).¶
To deal with the case of the ANGSTROM SIGN (U+212B) which is essentially identical to LATIN CAPITAL LETTER A WITH RING ABOVE (U+00C5), one can construct an equivalence class consisting of ANGSTROM SIGN (U+212B), LATIN CAPITAL LETTER A WITH RING ABOVE (U+00C5), LATIN SMALL LETTER A WITH RING ABOVE (U+00E5), together with the two-character sequences involving LATIN CAPITAL LETTER A (U+0041) or LATIN SMALL LETTER A (U+0061) followed by COMBINING RING ABOVE (U+030A).¶
This equivalence class can be derived from C-type mappings together with the ability to map characters to canonically equivalent strings. Both ANGSTROM SIGN (U+212B) and LATIN CAPITAL LETTER A WITH RING ABOVE (U+00C5) lowercase to LATIN SMALL LETTER A WITH RING ABOVE (U+00E5), while that character only uppercases to LATIN CAPITAL LETTER A WITH RING ABOVE (U+00C5).¶
In some cases, case mapping of a single character will result in a multi-character string. For example, the German character LATIN SMALL LETTER SHARP S (U+00DF) would be uppercased to "SS", i.e. two copies of LATIN CAPITAL LETTER S (U+0053). On the other hand, in some situations, it would be uppercased to the character LATIN CAPITAL LETTER SHARP S (U+1E9E), using an S-type mapping, referred to as an instance of "Tailored Casing". Unfortunately, in the context of a file system, there is unlikely to be available information that provides guidance about which of these case mappings should be chosen. However, the use of case-insensitive mappings with larger equivalence classes often provides handling that is acceptable to a wider variety of users. In this case, German-speakers get the mapping they expect while those unfamiliar with these characters only see them when they access a file whose name contains them.¶
It appears that if the construction of case-based equivalence classes were generalized to include multi-character sequences, then all of LATIN SMALL LETTER SHARP S (U+00DF), LATIN CAPITAL LETTER SHARP S (U+1E9E), "ss", "sS", "Ss", and "SS" would belong to the same equivalence class and could be handled by the general algorithm described in Section 10.1, as well by code specifically written to deal with this particular issue.¶
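As an informal check of this expectation, the full case folding built into Python already maps every member of the proposed equivalence class to the same folded string:¶

   # Each member of the sharp-s equivalence class folds to "ss".
   variants = ["\u00DF", "\u1E9E", "ss", "sS", "Ss", "SS"]
   assert {v.casefold() for v in variants} == {"ss"}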
In some cases context-dependent case mapping is required. For example, GREEK CAPITAL LETTER SIGMA (U+03A3) lowercases to GREEK SMALL LETTER SIGMA (U+03C3) if it is followed by another letter and to GREEK SMALL LETTER FINAL SIGMA (U+03C2) if it is not.¶
Despite this, case-insensitive comparisons can be implemented, by considering all of these characters as part of the same equivalence class, without any context-dependence, and this equivalence class can be derived using only C-type mappings.¶
In most languages written using Latin characters, the uppercase and lowercase varieties of the letter "I" differ in that only the lowercase character has a dot above. In a number of Turkic languages, there are two distinct characters derived from "I" which differ only with regard to the presence or absence of a dot, so that there are both capital and small i's, with each having dotted and dotless variants. Within such languages, the dotted and dotless I's represent different vowel sounds and are treated as separate characters with respect to case mapping. The uppercase of LATIN SMALL LETTER I (U+0069) is LATIN CAPITAL LETTER I WITH DOT ABOVE (U+0130), rather than LATIN CAPITAL LETTER I (U+0049). Similarly, the lowercase of LATIN CAPITAL LETTER I (U+0049) is LATIN SMALL LETTER DOTLESS I (U+0131) rather than LATIN SMALL LETTER I (U+0069).¶
When doing case mapping, the server must choose to uppercase LATIN SMALL LETTER I (U+0069) either to LATIN CAPITAL LETTER I (U+0049), based on a C-type mapping, or to LATIN CAPITAL LETTER I WITH DOT ABOVE (U+0130), based on a T-type mapping. The former is acceptable to most people but confusing to speakers of the Turkic languages in question, since the case mapping changes the character to represent a different vowel sound. On the other hand, the latter mapping seemingly inexplicably results in a character many users have never seen before. Normally such choices are dealt with based on a locale but, in a file system environment, no locale information may be available.¶
In the context of case-insensitive string comparison, it is possible to create a larger equivalence class, including all of the letters LATIN SMALL LETTER I (U+0069), LATIN CAPITAL LETTER I (U+0049), LATIN CAPITAL LETTER I WITH DOT ABOVE (U+0130), LATIN SMALL LETTER DOTLESS I (U+0131) together with the two-character string consisting of LATIN CAPITAL LETTER I (U+0049) followed by COMBINING DOT ABOVE (U+0307).¶
Given the way that internationalization is addressed within the NFSv4 protocols, clients, and applications accessing NFS files can generally remain unaware of the specific type of internationalization-related processing implemented by the server. For example, although a server MAY store all file names according to the rules appropriate to a particular normalization form, it MUST NOT reject names solely because they are not encoded using this normalization form, allowing the clients and applications to avoid knowledge of normalization choices.¶
However, as has been pointed out in [25], there are situations in which clients implementing local optimizations use the saved contents of directories fetched from the server, making it necessary that the client's and the server's handling of internationalization-related name mapping issues be in concord. There are two basic ways this issue can be addressed:¶
There are a number of restrictions, not previously specified in RFC7530 [3], on server implementation of internationalized file name handling. These restrictions apply to both case-sensitive and case-insensitive file systems and are designed to limit the options that servers have in choosing server-side internationalized file name handling so as to enable the clients to either duplicate that handling or limit it to avoid relying on cases in which the proper handling cannot be determined or duplicated by the client.¶
In cases in which the server provides no way of determining the details of the case-equivalence relationship implemented by the server for a particular file system, that mapping must include all C-type case mappings included by the particular Unicode version whose canonical equivalence relation is implemented by the server, i.e. it MUST map between LATIN SMALL LETTER I (U+0069) and LATIN CAPITAL LETTER I (U+0049).¶
The existing minor versions, NFSv4.0 [3], NFSv4.1 [4], and NFSv4.2 [5], have very limited facilities allowing a client to get information about the server's internationalization-related file name handling. Because these protocols were all defined when it was assumed that the server's internationalized file name handling could be specified in great detail, there was no provision for attributes defining the server's choices. As a result, the information available to the client is quite limited:¶
When a file system is internationalization-unaware, the client can use both positive and negative name caching, without any issues arising from the potential for conflict between distinct file names that would be considered equivalent by the server. In other cases, the handling is more restricted in the use of negative name caching. The issues with regard to case-sensitive and case-insensitive file systems are discussed separately below. In each case, the client has a range of choices trading off forgone optimization opportunities against the difficulty of implementation while avoiding negative consequences arising from the fact that certain details of the server's name handling are not known to it.¶
In the case of case-sensitive file systems, the uncertainty to be dealt with concerns the version of Unicode implemented by the server, given that different versions may have different canonical equivalence relationships. However, whether the server implements a particular normalization form or implements form-insensitive file name matching has no effect on client behavior. In light of the uncertainty created by the lack of knowledge of the precise Unicode version used by the server to implement its canonical equivalence relation, the following possibilities, arranged in order of increasing value (and difficulty of implementation), should be considered.¶
The client can simply decline to implement optimizations based on negative name caching on internationalization-aware file systems.¶
While this might have a negative effect on performance, it might be the best option for clients not heavily used to access internationalization-aware filesystems, or where, due to a lack of directory delegation support, the client has no assurance that it will be notified of the invalidation of a previous assumption that a particular file does not exist.¶
Relatively simple name filtering can exclude the names for which negative name caching might cause difficulties. For example, the client could scan file names for characters whose presence might pose difficulties and allow negative name caching only for strings known not to contain such characters. Because the Unicode version used by the server file system is not known, this treatment would be limited to strings only containing characters defined in the earliest version of Unicode which could be supported, that is, Unicode 4.0.¶
One simple way for a client to provide such filtering would be to establish an upper limit (e.g. U+00ff) and disallow negative name caching for strings containing characters above that value or characters below that value that might cause there to be canonically equivalent strings on the server. A simple mask could be used to allow each character to be examined, allowing composed and combining characters to be identified, together with code points unassigned in Unicode 4.0.¶
This approach would allow negative name caching to be disallowed for strings containing those characters while allowing it for other strings that do not. A larger limit (and a corresponding mask) would make sense for clients used to access many file names containing characters from non-Latin alphabets.¶
A client might implement its own internationalized file name handling paralleling that of the server. Because the Unicode version used by the server filesystem is unknown, strings for which it is possible that the canonically equivalent string might be different depending on the version of Unicode implemented by the server will have to be identified and excluded from using negative name caching. This would require that strings containing code points unassigned in Unicode version 4.0, and those denoting combining characters that could be parts of precomposed characters added to later versions of Unicode, be excluded from negative name caching. The necessary filtering could apply to all potential code points, although clients might choose to simplify implementation by excluding strings containing code points beyond a certain point (e.g. U+FFFF).¶
When a client implements internationalized name handling, it needs to be able to detect when the apparent absence of a file within a directory is contradicted by the occurrence of a file with a distinct, but canonically equivalent, name. In order to efficiently find such names, when they exist, a client typically needs to implement a form of name hashing which always produces the same result for two canonically equivalent names. This can be done by making the contribution of any character to the name hash equal to the contribution of the corresponding canonical decomposition string, as sketched below.¶
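A minimal sketch of such a hash follows; the use of SHA-1 and the decision to hash the NFD form (which replaces each character by its full canonical decomposition, in canonical order) are illustrative choices rather than requirements, and names that are not valid UTF-8 would be excluded from this handling.¶

   import hashlib
   import unicodedata

   def canonical_name_hash(name: str) -> bytes:
       """Directory-entry hash that is identical for canonically
       equivalent names."""
       decomposed = unicodedata.normalize("NFD", name)
       return hashlib.sha1(decomposed.encode("utf-8")).digest()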
In the case of case-insensitive file systems, the uncertainty to be dealt with includes the version of Unicode implemented by the server as well as the details of the possible case-handling implemented by the server. In addition to the fact that different Unicode versions may have different canonical equivalence relationships, the server may implement different approaches to the handling of issues related to the handling of dotted and dotless i, in Turkish and Azeri. However, the question of whether the server's handling is case-preserving has no effect on client behavior, nor does the question of whether the server implements a particular normalization form or implements form-insensitive file name matching. In light of the uncertainty created by the lack of knowledge of the details of the case-related equivalence relation, together with the precise Unicode version used by the server to implement its canonical equivalence relation, the following possibilities, arranged in order of increasing value (and difficulty of implementation), should be considered.¶
The client can simply decline to implement optimizations based on negative name caching on case-insensitive file systems.¶
While this might have a negative effect on performance where significant benefits from negative name caching might be expected, it might be the best option for clients not heavily used to access case-insensitive filesystems.¶
Filtering similar to that discussed in item A2 could be implemented, although a higher limit is likely to be chosen (e.g. U+07ff) if significant use of non-Latin scripts is expected. Because of the uncertainty regarding the handling of case relationship among characters used for the variant of I used by Turkic languages, this filtering would have to exclude names containing LATIN CAPITAL LETTER I WITH DOT ABOVE and LATIN SMALL LETTER DOTLESS I together with precomposed characters derived from them.¶
In cases in which such filtering did not exclude the item from consideration, the client would need to search for files with possibly equivalent names, including those equivalent by canonical equivalence, case-insensitive equivalence, or a combination of the two. This will typically require a form of name hashing which always produces the same hash for equivalent names, similar to that discussed in item A3 but including case-insensitive equivalence as well.¶
A client might implement its own internationalized, case-insensitive file name handling paralleling that of the server. Because the case mappings are uncertain and the Unicode version used by the server filesystem is unknown, strings for which it is possible that the equivalent string might be different depending on the version of Unicode implemented by the server or the choice of case mappings would have to be identified and excluded from negative name caching. This would require that strings containing code points unassigned in Unicode version 4.0, and those denoting combining characters that could be parts of precomposed characters added to later versions of Unicode, be excluded from negative name caching. The necessary filtering could apply to all potential code points, although clients might choose to simplify implementation by excluding strings containing code points beyond a certain point (e.g. U+FFFF).¶
When a client implements internationalized, case-insensitive name handling, it needs to be able to detect when the apparent absence of a file within a directory is contradicted by the occurrence of a file with a distinct, but equivalent, name, whether by canonical equivalence, case-insensitive equivalence, or a combination of the two. In order to efficiently find such names, when they exist, a client typically needs to implement a form of name hashing which always produces the same result for two such equivalent names. This can be done by making the contribution of any character to the name hash equal to the contribution of its case-folded canonical decomposition, as sketched below.¶
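A corresponding sketch for the case-insensitive situation follows; it extends the hash above by case folding the decomposed form, again as an illustration rather than a requirement. Because it uses the default (non-Turkic) case folding, names containing the dotted and dotless I variants discussed above would still need to be excluded from negative name caching.¶

   import hashlib
   import unicodedata

   def caseless_canonical_name_hash(name: str) -> bytes:
       """Directory-entry hash that is identical for names related by
       canonical equivalence, case-insensitive equivalence, or both."""
       key = unicodedata.normalize(
           "NFD", unicodedata.normalize("NFD", name).casefold())
       return hashlib.sha1(key.encode("utf-8")).digest()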
NFSv4 has an extension framework allowing the addition of new attributes in later minor versions or in extensions to extensible minor versions. Such new attributes are likely to be optional. They could include a number of useful per-fs attributes to deal with the information gaps discussed in Section 11.2:¶
There is little prospect of such additional attributes being REQUIRED. Although the term "RECOMMENDED" has been used to describe NFSv4 attributes that are not REQUIRED, any such attributes are best considered OPTIONAL for the server to support with the client required to deal with the case in which the attribute is not supported.¶
When such attributes are defined and implemented, it would be possible for the client and server to implement compatible internationalization-related file name handling. However, as a practical matter, such compatibility would be considerably eased if there existed unencumbered open-source implementations of the algorithm and tables described in Appendix A. This would allow clients, servers, and server-based file systems, to easily adopt compatible approaches to these issues, each calling a common set of primitives, even though each might have a different execution environment and might be processing file names for different purposes.¶
In the case of case-sensitive file systems, the case-mapping attribute is not relevant. In dealing with the non-support of the Unicode version attribute, the client is in the same position as that of clients described in Section 11.2. In the case in which the Unicode version attribute is supported, the client would be able to implement the same version of the canonical equivalence relation implemented by the server, thus avoiding the need for the sort of overbroad filtering mentioned in items A2 and A3 within Section 11.2.¶
The case of case-insensitive file systems is more complicated, since there are two OPTIONAL attributes to deal with:¶
There are two types of strings that NFSv4 deals with that are based on domain names. Processing of such strings is defined by other Internet standards, and hence the processing behavior for such strings should be consistent across all server operating systems and server file systems.¶
This section differs from other sections of this document in two respects:¶
Because of this shift, there could be compatibility issues to be expected between implementations obeying Section 12.6 of [3] and those following this document. Whether such compatibility issues actually exist depends on the behavior of NFSv4 implementations and how domain names are actually used in existing implementations. These matters will be discussed in Section 12.2.¶
The types of strings referred to above are as follows:¶
The general rules for handling all of these domain-related strings are similar and independent of the role of the sender or receiver as client or server, although the consequences of failure to obey these rules may be different for client or server. The server can report errors when it is sent invalid strings, whereas the client will simply ignore an invalid string or use a default value in its place.¶
The string sent SHOULD be in the form of one or more unvalidated U-labels as defined by [6]. In cases where this cannot be done, the string will instead be in the form of one or more LDH labels [6]. The receiver needs to be able to accept domain and server names in any of the formats allowed. The server MUST reject, using the error NFS4ERR_INVAL, any of the following:¶
When a domain string is part of id@domain or group@domain, there are two possible approaches:¶
A server SHOULD use the first method.¶
For VERIFY and NVERIFY, additional string processing requirements apply to verification of the owner and owner_group attributes; see the section entitled "Interpreting owner and owner_group" in the document specifying the minor version in question (RFC7530 [3], RFC5661 [4]).¶
Overall, the effect of the shift to IDNA2008 is to limit the degree of understanding of the IDNA-based restrictions on domain names that was expected of NFSv4 implementations by RFC7530 [3]. Despite that specification, the degree to which implementations actually implemented such restrictions is open to question and will be discussed in detail in Section 12.2.¶
In analyzing how various cases are to be dealt with according to RFC7530, there are a number of troubling uncertainties that arise in trying to interpret the existing specification:¶
The following cases are those where RFC7530 requires use of IDNA handling and this requirement could, if implementations follow it, create potential compatibility issues, which need to be understood.¶
There are a number of factors relating to the handling of domain names within NFSv4 implementations that are important in understanding why any compatibility issues might be less troubling than a comparison of the two IDNA approaches might suggest:¶
The range of potential values for user and group attributes sent by clients is often quite small, with implementations commonly restricting all such values to a single domain string. This is so even though RFCs 7530 [3] and 5661 [4] are written without mention of such restrictions.¶
Specification of users and groups in the "id@domain" format within NFSv4 was adopted to enable expansion of the spaces of users and groups beyond the 32-bit id spaces mandated in NFSv3 [16] and NFSv2 [15]. While one obstacle to expansion was eliminated, most implementations were unable to actually effect that expansion, principally because the physical file systems used assume that user and group identifiers fit in 32 bits each and the vnode interfaces used by server implementations make similar assumptions.¶
Given these restrictions, the typical implementation pattern is for servers to accept only a single domain, specified as part of the server configuration, together with information necessary to effect the appropriate name-to-id mappings.¶
Keeping the above in mind, we can see that interoperability issues, while they might exist, are unlikely to raise major challenges, as the following specific cases show:¶
When an internationalized domain name is used as part of a user or group, it would need to be configured as such, with the domain string known to both client and server.¶
While it is theoretically possible that a client might work with an invalid domain string and rely on the server to correct it to an IDNA-acceptable one, such a scenario has to be considered extremely unlikely, since it would depend on multiple servers implementing the same correction, especially since there is no evidence of such corrections ever having been implemented by NFSv4 servers.¶
When an internationalized domain in a location string is meant to specify a registered domain, similar considerations apply.¶
While it is theoretically possible that a client might work with an invalid domain string and rely on the server to correct it to the appropriate registered one, such a scenario has to be considered extremely unlikely, since it would depend on multiple servers implementing the same correction, especially since there is no evidence of such corrections ever having been implemented by NFSv4 servers.¶
When an internationalized domain in a location string is meant to specify a non-registered domain, any such server-applied corrections would be useless.¶
In this situation, any potential interoperability issue would arise from rejecting the name, which has to be considered as what should have been done in the first place.¶
Where the client sends an invalid UTF-8 string, the server MAY return an NFS4ERR_INVAL error. This includes cases in which inappropriate prefixes are detected and where the count includes trailing bytes that do not constitute a full Multiple-Octet Coded Universal Character Set (UCS) character.¶
Requirements for server handling of component names that are not valid UTF-8, when a server does not return NFS4ERR_INVAL in response to receiving them, are described in Section 14.¶
Where the string supplied by the client is not rejected with NFS4ERR_INVAL but contains characters that are not supported by the server as a value for that string (e.g., names containing slashes, or characters that do not fit into 16 bits when converted from UTF-8 to a Unicode codepoint), the server should return an NFS4ERR_BADCHAR error.¶
Where a UTF-8 string is used as a file name, and the file system, while supporting all of the characters within the name, does not allow that particular name to be used, the server should return the error NFS4ERR_BADNAME. This includes such situations as file system prohibitions of "." and ".." as file names for certain operations, and similar constraints.¶
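The following non-normative Python sketch illustrates one way a server might select among these errors for a received component name. The specific restrictions checked (rejection of slashes, of code points that do not fit in 16 bits, and of "." and "..") are illustrative server policies drawn from the examples above, not protocol mandates, and the helper name is an invention of the sketch.¶

   # Non-normative sketch: selecting among NFS4ERR_INVAL,
   # NFS4ERR_BADCHAR, and NFS4ERR_BADNAME for a received component name.
   def check_component(name_bytes: bytes) -> str:
       # NFS4ERR_INVAL: the octet sequence is not valid UTF-8
       # (inappropriate prefixes, truncated multi-octet sequences, etc.).
       try:
           name = name_bytes.decode("utf-8", errors="strict")
       except UnicodeDecodeError:
           return "NFS4ERR_INVAL"

       # NFS4ERR_BADCHAR: valid UTF-8, but contains characters this
       # server or file system does not support in names.
       if any(ch == "/" or ord(ch) > 0xFFFF for ch in name):
           return "NFS4ERR_BADCHAR"

       # NFS4ERR_BADNAME: all characters supported, but this particular
       # name is prohibited (e.g., "." or ".." for certain operations).
       if name in (".", ".."):
           return "NFS4ERR_BADNAME"

       return "NFS4_OK"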
As stated previously, servers MAY accept, on all or on some subset of the physical file systems exported, component names that are not valid UTF-8 strings. A typical pattern is for a server to use UTF‑8-unaware physical file systems that treat component names as uninterpreted strings of bytes, rather than having any awareness of the character set being used.¶
Such servers SHOULD NOT change the stored representation of component names from those received on the wire and SHOULD use an octet-by-octet comparison of component name strings to determine equivalence (as opposed to any broader notion of string comparison). This is because the server has no knowledge of the character encoding being used.¶
Nonetheless, when such a server uses a broader notion of string equivalence than what is recommended in the preceding paragraph, the following considerations apply:¶
When the above recommendations are not followed, the resulting string modification and aliasing can lead to both false negatives and false positives, depending on the strings in question, which can result in security issues such as elevation of privilege and denial of service (see [23] for further discussion).¶
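The following non-normative example illustrates why the octet-by-octet recommendation matters for UTF-8-unaware servers: the NFC and NFD encodings of the same visible name differ as octet strings, so an octet-by-octet comparison treats them as distinct names, while any broader notion of equivalence silently introduces the aliasing discussed above.¶

   import unicodedata

   # The same visible name in two canonically equivalent forms.
   nfc = unicodedata.normalize("NFC", "cafe\u0301")   # "caf" + e-acute
   nfd = unicodedata.normalize("NFD", "caf\u00e9")    # "cafe" + combining acute

   # An octet-by-octet comparison (the recommended behavior for
   # UTF-8-unaware servers) treats the two as different names ...
   assert nfc.encode("utf-8") != nfd.encode("utf-8")

   # ... whereas a normalization-aware comparison would treat them as
   # the same name, which is where aliasing concerns arise.
   assert unicodedata.normalize("NFC", nfd) == nfc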
As stated above, all current NFSv4 minor versions allow use of non-UTF-8 encodings, allow servers a choice of whether to be aware of normalization issues or not, and allow servers a number of choices about how to address normalization issues. This range of choices reflects the need to accommodate existing file systems and user expectations about character handling, which in turn reflect the assumptions of the POSIX model of handling file names.¶
While it is theoretically possible for a subsequent minor version to change these aspects of the protocol (see [9]), this section will explain why any such change is highly unlikely, making it expected that these aspects of NFSv4 internationalization handling will be retained indefinitely. As a result, any new minor version specification document that made such a change would have to be marked as updating or obsoleting this document.¶
No such change could be done as an extension to an existing minor version or in a new minor version consisting only of OPTIONAL features. Such a change could only be done in a new minor version which, like minor version one, was prepared to be incompatible to some degree with the previous minor versions. While it appears unlikely that such minor versions will be adopted, the possibility cannot be excluded, so we need to explore the difficulties of changing the aspects of internationalization handling mentioned above.¶
None of the above appears likely, since there do not seem to be any corresponding benefits to justify the difficulties that such changes would create.¶
There would also be difficulties in otherwise reducing the set of three acceptable normalization handling options, without reducing it to a single option by imposing a specific normalization form.¶
Eliminating one of the possible normalization forms would pose similar difficulties to imposing the other one, even if representation-independent comparisons were also allowed.¶
In either case, a specific normalization form would be disfavored, with no corresponding benefit.¶
Allowing only representation-independent lookups would not impose difficulties for clients, but there are reasons to doubt it could be universally implemented, since such name comparisons would have to be done within the file system itself.¶
Such a change could only be made once file system support for representation-independent file lookups becomes commonly available. As long as the POSIX file naming model continues its sway, that is unlikely to happen.¶
One possible internationalization-related extension that the working group could adopt would be the definition of an OPTIONAL per-fs attribute describing the internationalization-related handling for that file system. That would allow clients to be aware of server choices in this area and could be adopted without disrupting existing clients and servers.¶
The current document does not require any actions by IANA.¶
Unicode in the form of UTF-8 is generally used for file component names (i.e., both directory and file components). However, other character sets may also be allowed for these names. The owner and owner_group attributes and other sorts of strings whose form is affected by standards outside NFSv4 (see Section 12) are always encoded as UTF-8. String processing (e.g., Unicode normalization) raises security concerns for string comparison. See Sections 12 and 9 as well as the respective Sections 5.9 of RFC7530 [3] and RFC5661 [4] for further discussion. See [23] for related identifier comparison security considerations. File component names are identifiers with respect to the identifier comparison discussion in [23] because they are used to identify the objects to which ACLs are applied (see the respective Sections 6 of RFC7530 [3] and RFC5661 [4]).¶
This section deals with two varieties of form-insensitive string comparison:¶
The non-normative guidance provided in this Appendix is intended to be helpful to two distinct implementation areas:¶
There are three basic reasons that two strings being compared might be canonically equivalent even though not identical. For each such reason, the implementation will be similar in the cases in which form-insensitive comparison (only) is being done and in which the comparison is both case-insensitive and form-insensitive.¶
Two strings may differ only because each has a different one of two code points that are essentially the same. Three code points assigned to represent units are essentially equivalent to the characters denoting those units. For example, the OHM SIGN (U+2126) is essentially identical to the GREEK CAPITAL LETTER OMEGA (U+03A9), as MICRO SIGN (U+00B5) is to GREEK SMALL LETTER MU (U+03BC) and ANGSTROM SIGN (U+212B) is to LATIN CAPITAL LETTER A WITH RING ABOVE (U+00C5).¶
As discussed in items EX2 and EX3 in Section 10.2, it is possible to adjust for this situation using tables designed to resolve case-insensitive equivalence, treating the unit symbols as an additional case variant and ignoring the fact that the graphic representation is the same. As a result, those doing string comparisons that are both form-insensitive and case-insensitive do not need to address this issue as part of form-insensitivity, since it would be dealt with by existing case-insensitive comparison logic.¶
Where there is no case-insensitive comparison logic, this function needs to be performed using similar tables whose primary function is to provide the decomposition of precomposed characters, as described in Appendix A.2.¶
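The unit-symbol equivalences discussed above can be observed directly with Python's unicodedata module, as the non-normative example below shows. Note that the OHM SIGN and ANGSTROM SIGN equivalences are canonical ones, reached by plain NFC/NFD, whereas the MICRO SIGN equivalence is a compatibility one, reached only via NFKC/NFKD.¶

   import unicodedata

   # Singleton canonical decompositions: NFC maps the unit symbol to
   # the letter it is essentially identical to.
   assert unicodedata.normalize("NFC", "\u2126") == "\u03a9"  # OHM SIGN -> OMEGA
   assert unicodedata.normalize("NFC", "\u212b") == "\u00c5"  # ANGSTROM -> A WITH RING

   # MICRO SIGN's relationship to GREEK SMALL LETTER MU is a
   # compatibility equivalence, so plain NFC leaves it unchanged.
   assert unicodedata.normalize("NFC", "\u00b5") == "\u00b5"
   assert unicodedata.normalize("NFKC", "\u00b5") == "\u03bc"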
Two strings may differ in that one has the decomposed form consisting of a base character and an associated combining character while the other has a precomposed character equivalent.¶
Although, as discussed in item EX3 in Section 10.2, it is possible to use tables designed to resolve case-insensitive equivalence by providing, as a possible case-insensitively equivalent string, the multi-character string giving the decomposition of a precomposed character, special logic to do so is only necessary when the decomposition is not a canonical one, i.e., when it is a compatibility equivalence.¶
In general, the table used to do comparisons, whether case-sensitive or not, needs to provide information about the canonical decomposition of precomposed characters. See Appendix A.2 for details.¶
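Such a table of canonical decompositions can be derived mechanically from the Unicode Character Database. The non-normative sketch below does so with Python's unicodedata module, in which compatibility decompositions are distinguished by an angle-bracketed tag; the code-point range limit and function name are choices of the sketch.¶

   import unicodedata

   def canonical_decompositions(max_cp=0xFFFF):
       # Map each precomposed character to its full canonical
       # decomposition, ignoring compatibility decompositions (which
       # are tagged with "<...>" in the UCD data).
       table = {}
       for cp in range(max_cp + 1):
           ch = chr(cp)
           decomp = unicodedata.decomposition(ch)
           if decomp and not decomp.startswith("<"):
               table[ch] = unicodedata.normalize("NFD", ch)
       return table

   table = canonical_decompositions()
   assert table["\u00e9"] == "e\u0301"   # e-acute -> e + COMBINING ACUTE
   assert "\u00b5" not in table          # MICRO SIGN: compatibility only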
Two strings may differ in that they consist of combining characters that have the same effect but differ as to the order in which the characters appear.¶
There is no way this function could be performed within code primarily devoted to case-insensitive equivalence. However, this function could be added to implementations providing both sorts of equivalence once it is determined that the base characters are case-equivalent while there is a difference in combining characters to be resolved. (See Appendix A.5 for a discussion of how sets of combining characters can be compared.)¶
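The non-normative example below shows the reordering issue concretely: two sequences attaching the same two combining marks, of different combining classes, to the same base character in different orders are canonically equivalent, and Unicode's canonical ordering, applied as part of NFD, puts them into a common order.¶

   import unicodedata

   s1 = "a\u0301\u0323"   # a + COMBINING ACUTE (ccc 230) + COMBINING DOT BELOW (ccc 220)
   s2 = "a\u0323\u0301"   # the same marks in the opposite order

   # The raw strings differ, but canonical ordering arranges combining
   # marks of different classes into a single order, so the two
   # strings are canonically equivalent.
   assert s1 != s2
   assert unicodedata.normalize("NFD", s1) == unicodedata.normalize("NFD", s2)

   # The combining classes that drive the reordering rules of A.5.
   assert unicodedata.combining("\u0323") == 220
   assert unicodedata.combining("\u0301") == 230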
Section 10.1 discussed the construction of a case-insensitive file name hash. Such a hash could also be made form-insensitive if the hash contribution of every precomposed character matched the combined contribution of the characters that it decomposes into.¶
However, there is no obvious way that sort of hash could respect the canonical equivalence of multiple combining characters modifying the same base character, when those combining characters appear in different orders. Addressing that issue would require a significantly different sort of hash, in which combining characters are treated differently from others, so that the re-ordering of a string of combining characters applying to the same base character will not affect the hash.¶
In the hash discussed in Section 10.1, there is no guarantee that the hash for multiple combining characters presented in different orders will be the same. This is because typically such hashes implement some transformation on the existing hash, together with adding the new character to the hash being accumulated. Such methods of hash construction will arrive at different values if the ordering of combining characters changes.¶
In order to create a hash with the necessary characteristics, one can construct a separate sub-hash for each composite character, consisting of one non-combining character (possibly precomposed) together with the set (possibly empty) of combining characters immediately following it. Each such composite character, whether precomposed or not, will have its own sub-hash, which will be the same regardless of the order of the combining characters.¶
If the hash is to include case-insensitivity, special handling is needed to deal with issues arising from the handling of COMBINING GREEK YPOGEGRAMMENI (U+0345). That combining character, as discussed in item EX6 of Section 10.2, is uppercased to the non-combining character GREEK CAPITAL LETTER IOTA (U+0399), which is in turn lowercased to the non-combining character GREEK SMALL LETTER IOTA (U+03B9). As a result, when computing a case-insensitive hash, when a base character is IOTA (of either case) and the previous base character is ALPHA, ETA, or OMEGA (of the same case as the IOTA), that IOTA is treated, for the purpose of defining the composite characters for which to generate sub-hashes, as if it were a combining character. In this case, a string containing two composite characters will be treated as if it were a single composite character, since the IOTA is treated as if it were a combining character. This string will have its own sub-hash, which will be the same regardless of the order of the combining characters.¶
The same outline will be followed for generating hashes which are to be form-insensitive (only) and for those which are to be both form-insensitive and case-insensitive. The initial value, representing the base character, will differ based on the type of hash.¶
Regardless of the type of hash to be produced, values based on the combining characters that follow the base character need to be reflected in the sub-hash. In order to make the sub-hash invariant to changes in the order of combining characters, the value based on each particular combining character is combined with the hash being computed using a commutative and associative operation, such as addition.¶
To reduce false positives, it is desirable to make the hash relatively wide (i.e., 32-64 bits), with the value based on the base character placed in the upper portion of the word and the values for the combining characters appearing in a wide range of bit positions in the rest of the word, so as to limit the degree to which multiple distinct sets of combining characters produce the same value. Although the details will be affected by processor cache structure and by the distribution of names processed, a table of values will typically be used; typical implementations will differ in the two cases we are dealing with, as described in Appendix A.2.¶
As each sub-hash is computed, it is combined into a name-wide hash. There is no need for this computation to be order-independent, and it will probably include a circular shift of the hash computed so far, which is then combined with the contribution of the sub-hash for the new base or composite character.¶
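The non-normative sketch below follows the outline above for a form-insensitive (only) hash: each composite character is put into NFD so that a precomposed character contributes exactly what its decomposition would, the combining characters are folded in with addition so that their order cannot matter, and the sub-hashes are then combined with a circular shift so that the order of composite characters does matter. The 64-bit width, the multiplier constants, and the shift amount are arbitrary choices of the sketch, and the case-insensitive variant, including the YPOGEGRAMMENI handling described above, is omitted.¶

   import unicodedata

   MASK64 = (1 << 64) - 1

   def composites(name):
       # Split a name into composite characters: each non-combining
       # character together with the combining characters following it.
       start = 0
       for i in range(1, len(name) + 1):
           if i == len(name) or unicodedata.combining(name[i]) == 0:
               yield name[start:i]
               start = i

   def sub_hash(composite):
       # NFD first, so precomposed and decomposed forms contribute
       # identically; then fold in the combining characters with
       # addition, a commutative operation, so their order is irrelevant.
       decomposed = unicodedata.normalize("NFD", composite)
       base, marks = decomposed[0], decomposed[1:]
       h = (ord(base) * 0x9E3779B97F4A7C15) & MASK64      # base in upper bits
       for mark in marks:
           h = (h + ord(mark) * 0x100000001B3) & MASK64   # order-independent
       return h

   def name_hash(name):
       # Combining the sub-hashes: here order does matter, so a 64-bit
       # circular shift is applied between contributions.
       h = 0
       for c in composites(name):
           h = ((h << 7) | (h >> 57)) & MASK64
           h ^= sub_hash(c)
       return h

   # Canonically equivalent spellings hash identically.
   assert name_hash("caf\u00e9") == name_hash("cafe\u0301")
   assert name_hash("a\u0301\u0323") == name_hash("a\u0323\u0301")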
As described in Appendix A.3, the appropriate full name hash will have the major role in excluding potential matches efficiently. However, in some small number of cases, there will be a hash match in which the names to be compared are not equivalent, requiring more involved processing. It is assumed below that a given name will be used to search for potential cached matches within the directory, so that, for that name, one will be able to retain information used to construct the full name hash (e.g., individual sub-hashes plus the bounds of each composite character). These will be compared against cached entries for which only the full (e.g., 64-bit) name hash and the name itself will be available for comparison.¶
The per-character tables used in these algorithms have a number of types of entries for different types of characters. In some cases, information for a given character type will be essentially the same whether the comparison is to be form-insensitive or case-insensitive. In others, there will be differences. Also, there may be entry types that only exist for particular types of comparisons. In any case, some bits within the table entry will be devoted to representing the type of character and entry:¶
In the common case in which a two-stage mapping is used, there will be groups of characters for which no table entry is required, allowing a default entry type to be used for some character groups, with entry contents easily calculable from the code point.¶
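The non-normative sketch below shows one form such a two-stage mapping might take: the high-order bits of the code point select a second-level page, pages containing only default entries are shared, and the per-character entry here simply distinguishes combining characters and characters with canonical decompositions. The page size and entry encoding are inventions of the sketch.¶

   import unicodedata

   PAGE_SHIFT = 8                      # 256 code points per page
   DEFAULT = 0                         # "no special handling" entry

   def classify(cp):
       # Illustrative entry types: 1 for combining characters, 2 for
       # characters with a canonical decomposition, 0 (default) otherwise.
       ch = chr(cp)
       if unicodedata.combining(ch):
           return 1
       decomp = unicodedata.decomposition(ch)
       if decomp and not decomp.startswith("<"):
           return 2
       return DEFAULT

   def build_two_stage(max_cp=0xFFFF):
       empty_page = (DEFAULT,) * (1 << PAGE_SHIFT)
       first_level = []
       for start in range(0, max_cp + 1, 1 << PAGE_SHIFT):
           page = tuple(classify(cp) for cp in range(start, start + (1 << PAGE_SHIFT)))
           # Pages holding nothing but default entries share one object.
           first_level.append(empty_page if page == empty_page else page)
       return first_level

   TABLE = build_two_stage()

   def lookup(ch):
       cp = ord(ch)
       return TABLE[cp >> PAGE_SHIFT][cp & ((1 << PAGE_SHIFT) - 1)]

   assert lookup("\u0301") == 1        # COMBINING ACUTE ACCENT
   assert lookup("\u00e9") == 2        # has a canonical decomposition
   assert lookup("a") == DEFAULT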
We are assuming that comparisons will be based on the hash values computed as described in Appendix A.1, whether the comparison is to be form-insensitive or both case-insensitive and form-insensitive.¶
To facilitate this comparison, the name hash will be stored with the names to be compared. As a result, when there is a need to determine whether a new name matches any existing names cached for that directory, a hash can be computed for the new name and compared to the hashes of all the existing names. The detailed comparisons described in Appendices A.4 and A.5 then have to be done relatively rarely, since non-matching names with matching hashes are likely to be atypical.¶
Given the above, it is a reasonable assumption, which we will take note of in the sections below, that for one of the names to be compared we will have access to data generated in the process of computing the name hash, while for the other names such data would have to be generated anew, when necessary. When that data includes, as we expect it will, the offset and length of the string regions covered by each sub-hash, direct byte-by-byte comparisons between corresponding regions of the two strings can establish the equivalence of identical regions without invoking any of the detailed logic needed to deal with canonical equivalence or case-based equivalence when the corresponding name segments are not identical.¶
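The non-normative sketch below illustrates this use of retained per-composite-character bounds: corresponding regions of the two names are compared directly first, and only regions that are not identical are handed to the detailed logic of Appendices A.4 and A.5, represented here by a simple NFD-based stand-in.¶

   import unicodedata

   def composite_bounds(name):
       # Offsets delimiting each composite character; in practice these
       # would be retained from the computation of the name hash.
       bounds = [0]
       for i in range(1, len(name)):
           if unicodedata.combining(name[i]) == 0:
               bounds.append(i)
       bounds.append(len(name))
       return bounds

   def nfd_equal(a, b):
       # Stand-in for the detailed segment comparison of A.4/A.5.
       return unicodedata.normalize("NFD", a) == unicodedata.normalize("NFD", b)

   def names_equivalent(new_name, cached_name, segment_check=nfd_equal):
       nb = composite_bounds(new_name)
       cb = composite_bounds(cached_name)
       if len(nb) != len(cb):
           # Differing segment counts; fall back to full processing.
           return segment_check(new_name, cached_name)
       for (ns, ne), (cs, ce) in zip(zip(nb, nb[1:]), zip(cb, cb[1:])):
           seg_new, seg_cached = new_name[ns:ne], cached_name[cs:ce]
           if seg_new == seg_cached:
               continue                  # identical region: no detailed logic
           if not segment_check(seg_new, seg_cached):
               return False
       return True

   assert names_equivalent("caf\u00e9", "cafe\u0301")
   assert not names_equivalent("caf\u00e9", "cafe")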
In the case in which the byte-by-byte comparisons fail, further analysis is necessary:¶
In general, the task of comparing base characters is simple, using a table lookup based on the numeric value of the initial character of the substring. When doing form-insensitive comparison, this is the base character associated with the initial (possibly precomposed) character, while for case-insensitive comparison it is the case-based equivalence class associated with that character.¶
When doing case-insensitive comparison, issues may arise when there is a multi-character string that is the case-insensitive equivalent of a single base character, as discussed in items EX4 and EX5 within Section 10.2. These are best dealt with using the approach outlined in Section 10.1. When it is noted that the current base character (for either comparand) is a character whose associated equivalence class contains one or more multi-character strings, these comparisons, which normally require that each base character be mapped to the same case-based equivalence class, must be modified to allow the equivalences introduced by these multi-character sequences.¶
In such cases, there may need to be comparisons involving the multi-character string, in addition to the normal comparisons using the base characters' equivalence classes. As an illustration, we will consider possible comparison results that involve character strings within the equivalence class mentioned in item EX4 within Section 10.2.¶
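The specific equivalence classes referred to in items EX4 and EX5 are defined in Section 10.2 and are not restated here. Purely as an illustration of the general phenomenon, the non-normative example below uses LATIN SMALL LETTER SHARP S (U+00DF), a single character whose full case folding is the two-character string "ss", to show why per-character equivalence classes alone are not sufficient for such comparisons.¶

   # A single character whose case-insensitive equivalent is a
   # multi-character string: LATIN SMALL LETTER SHARP S folds to "ss".
   assert "\u00df".casefold() == "ss"

   # A comparison keyed only on single-character equivalence classes
   # would miss this; the logic has to be prepared to match two
   # characters of one comparand against one character of the other.
   assert "Stra\u00dfe".casefold() == "STRASSE".casefold()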
In order to effect the necessary comparison, one needs to assemble, for each comparand, the set of combining characters within the current substring. The means used might be different for different comparands since there might be useful information retained from the generation of the associated string hash for one of the comparands. In any case, there are two potential sources for these characters:¶
Although the two sets of characters can be checked to see whether they are identical, identity is a sufficient but not a necessary condition for equivalence, since some permutations of a set of combining characters are considered canonically equivalent. To summarize the appropriate equivalence rules:¶
The rules above do not directly apply to the case, discussed above, in which some non-combining characters are the case-based equivalents of combining characters such as COMBINING GREEK YPOGEGRAMMENI (U+0345). Nevertheless, because of this equivalence, those implementing case-insensitive comparisons do have to deal with this potential equivalence when considering whether two strings containing combining characters or their case-based equivalents match. As a result, when comparing strings of combining characters, we need to implement the following modified rules.¶
Although it is possible to divide combining characters based on their combining classes, sort each of the lists, and compare them, that approach will not be discussed here. Even though the use of sorts might allow use of an overall N log N algorithm, the number of combining characters is likely to be too low for this to be a practical benefit. Instead, we present below an order N-squared algorithm based on searches.¶
In this algorithm, one string, chosen arbitrarily, is designated the "source string", and successive characters from it are searched for in the other, designated the "target string". Associated with the target string is a mask that allows characters that have been searched for and found to be marked so that they will not be found a second time. In the treatment below, when a character is "searched for", only characters not yet in the mask are examined, and the character sought has its associated mask bit set when it is found.¶
Each character in the source string is processed in turn, with the actual processing depending on the particular character being processed and with the following three possibilities to be dealt with.¶
For the typical case (i.e., a combining character with no case-insensitive equivalents), the character is searched for in the target string, with the comparison failing if it is not found.¶
If it is found, then the region of the target string between the point corresponding to the current position in the source string and the character found is examined to check for characters of the same combining class. If any are found, the overall comparison fails.¶
Once all characters in the source string have been processed, the mask associated with the target string is examined to see whether there are combining characters that were not found in the matching process described above. Normally, if there are such characters, the overall comparison fails. However, if the last character of the target was not matched and it is a non-combining character that is case-insensitively equivalent to a combining character, then the comparison succeeds and the remaining character needs to be matched with the next substring in the source.¶
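The non-normative sketch below implements a simplified variant of this search-and-mask approach for the common case: marks of different combining classes may be freely reordered, while marks of the same class must keep their relative order and both runs must contain the same marks. The case-insensitive special handling of a trailing YPOGEGRAMMENI equivalent described above is omitted.¶

   import unicodedata

   def combining_runs_equivalent(source, target):
       matched = [False] * len(target)            # the mask of A.5
       for ch in source:
           ccc = unicodedata.combining(ch)
           for i, tch in enumerate(target):
               if matched[i]:
                   continue                       # already found earlier
               if unicodedata.combining(tch) != ccc:
                   continue                       # different class: reordering allowed
               if tch != ch:
                   return False                   # same class out of order, or missing
               matched[i] = True
               break
           else:
               return False                       # character not found at all
       return all(matched)                        # leftover target characters fail

   # Marks of different classes may be reordered ...
   assert combining_runs_equivalent("\u0301\u0323", "\u0323\u0301")
   # ... but marks of the same class (both ccc 230 here) may not be.
   assert not combining_runs_equivalent("\u0301\u0308", "\u0308\u0301")
   # Both runs must contain the same marks.
   assert not combining_runs_equivalent("\u0301", "\u0301\u0323")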
This document is based, in large part, on Section 12 of [3], and all the people who contributed to that work have helped make this document possible, including David Black, Peter Staubach, Nico Williams, Mike Eisler, Trond Myklebust, James Lentini, Mike Kupfer, and Peter Saint-Andre.¶
The author wishes to thank Tom Haynes for his timely suggestion to pursue the task of dealing with internationalization on an NFSv4-wide basis.¶
The author wishes to thank Nico Williams for his insights regarding the need for clients implementing file access protocols to be aware of the details of the server's internationalization-related name processing, particularly when case-insensitive file systems are being accessed.¶