Network Working Group                                           J. Abley
Internet-Draft                                                     ICANN
Intended status: Informational                            March 27, 2013
Expires: September 28, 2013
A Summary of Various Mechanisms Deployed at L-Root for the Identification of Anycast Nodes
draft-jabley-dnsop-anycast-mapping-01
Anycast is a deployment technique commonly employed for authoritative-only servers in the Domain Name System (DNS). L-Root, one of the thirteen root servers, is deployed in this fashion.
Various techniques have been used to map deployed anycast infrastructure externally, i.e. without reference to inside knowledge about where and how such infrastructure has been deployed. Motivations for performing such measurement exercises include operational troubleshooting and infrastructure risk assessment. In the specific case of L-Root, the ability to measure and map anycast infrastructure using the techniques mentioned in this document is provided for reasons of operational transparency.
This document describes all facilities deployed at L-Root to facilitate mapping of its infrastructure and serves as documentation for L-Root as a measurable service.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on September 28, 2013.
Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
The Domain Name System (DNS) is described in [RFC1034] and [RFC1035]. L-Root, one of the thirteen root servers, is deployed using anycast [RFC4786]; its service addresses, published in the A and AAAA Resource Record (RR) Sets for "L.ROOT-SERVERS.NET", are made available from a substantial number of semi-autonomous servers deployed throughout the Internet. A list of locations served by L-Root can be found at http://www.root-servers.org.
[Fan2013] describes a technique using open DNS resolvers to distribute mapping queries to the service addresses of authoritative-only servers. This technique relies upon the ability to acquire meaningful information about individual anycast nodes by means of an IN-class query. At the time the experiments described in that paper were conducted, such ability existed with AS112 servers [RFC6304] but not with any root server. Modifications were subsequently made to the infrastructure of the L-Root service to facilitate this technique.
This document describes all facilities currently provided at L-Root to aid node identification.
Individual L-Root nodes have structured hostnames that are constructed as follows:

   <airport>NN.l.root-servers.org

where

   <airport> is the three-letter code of an airport close to the physical location of the node, and

   NN is a two-digit decimal number used to distinguish between multiple nodes associated with the same airport.
Where multiple airports exist in the vicinity of a single L-Root node, one is arbitrarily chosen.
More granular location data published for L-Root nodes (e.g. see Section 3.4) is derived from the location of the airport, not the actual location of the node.
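As an illustration of this naming convention, the following short Python sketch splits a node hostname into its airport code and sequence number. It is illustrative only and is not part of any L-Root facility.

import re

def parse_node_hostname(hostname: str):
    """Return (airport_code, sequence) for names like 'ytz01.l.root-servers.org'."""
    match = re.match(r"^([a-z]{3})(\d{2})\.l\.root-servers\.org\.?$", hostname)
    if match is None:
        raise ValueError(f"not an L-Root node hostname: {hostname}")
    return match.group(1), match.group(2)

print(parse_node_hostname("ytz01.l.root-servers.org"))  # ('ytz', '01')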
L-Root service is provided using a single IPv4 address (199.7.83.42) and a single IPv6 address (2001:500:3::42). It should be noted that it is preferable to refer to the service using its DNS name (L.ROOT-SERVERS.NET) rather than literal addresses, since addresses can change from time to time.
At the time of writing there are 273 separate name server elements ("nodes") deployed in 143 locations which together provide L-Root service. A DNS query sent to an L-Root service address will be routed towards exactly one of those nodes for processing, and the corresponding DNS response will be originated from the same node. Queries from different clients may be routed to different nodes.
The following sections provide a summary of all mechanisms provided by L-Root to allow a client to identify which L-Root node is being used.
L-Root supports the use of the Name Server Identifier (NSID) Option [RFC5001] to return the identity of an L-Root node along with the response to a DNS query. The NSID payload of such responses is the fully-qualified hostname of the responding L-Root node.
The NSID option allows the client to identify the node that sent a specific, requested response. This is of particular use if (for example) there is a desire to identify unequivocally which node is responding with a particularly troublesome response; the output of the diagnostic tool dig with NSID requested includes both the problem response and the node identification, and in that case could form the basis of a useful trouble report.
NSID is specified as an EDNS0 option [RFC2671]. Clients that do not support EDNS0 signalling (or depend on other systems that do not support EDNS0) may find this mechanism unavailable.
The NSID option can be specified using the widely-used diagnostic tool "dig" using the "+nsid" option, as shown below. Note that long lines have been truncated for the purposes of this document ("\" at the end of a line indicates continuation).
[monster:~]% dig -4 @L.ROOT-SERVERS.NET . SOA +nsid \
    +norec +noall +comments

; <<>> DiG 9.6-ESV-R3 <<>> -4 @L.ROOT-SERVERS.NET . SOA +nsid \
    +norec +noall +comments
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14913
;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 13, ADDITIONAL: 23

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; NSID: 79 74 7a 30 31 2e 6c 2e 72 6f 6f 74 2d 73 65 72 76 65 72 73 \
  2e 6f 72 67 (y) (t) (z) (0) (1) (.) (l) (.) (r) (o) (o) (t) (-) \
  (s) (e) (r) (v) (e) (r) (s) (.) (o) (r) (g)
[monster:~]%
[monster:~]% dig -6 @L.ROOT-SERVERS.NET . SOA +nsid \
    +norec +noall +comments

; <<>> DiG 9.6-ESV-R3 <<>> -6 @L.ROOT-SERVERS.NET . SOA +nsid \
    +norec +noall +comments
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33374
;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 13, ADDITIONAL: 23

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; NSID: 79 74 7a 30 31 2e 6c 2e 72 6f 6f 74 2d 73 65 72 76 65 72 73 \
  2e 6f 72 67 (y) (t) (z) (0) (1) (.) (l) (.) (r) (o) (o) (t) (-) \
  (s) (e) (r) (v) (e) (r) (s) (.) (o) (r) (g)
[monster:~]%
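For scripted measurement, the NSID option can also be requested programmatically. The following is a minimal sketch assuming the third-party dnspython library (version 2.x); it is illustrative only, queries the L-Root IPv4 service address, and prints the NSID payload from the response.

import dns.edns
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

NSID = 3  # EDNS option code for NSID, per RFC 5001

# Build a non-recursive SOA query for the root zone carrying an empty NSID option.
query = dns.message.make_query(".", dns.rdatatype.SOA, use_edns=0,
                               options=[dns.edns.GenericOption(NSID, b"")])
query.flags &= ~dns.flags.RD  # clear the Recursion Desired bit (as with dig +norec)

response = dns.query.udp(query, "199.7.83.42", timeout=5)

# The responding node's hostname is carried in the NSID option of the reply.
# dnspython typically surfaces this as a GenericOption whose data attribute
# holds the raw payload; attribute names may differ in newer releases.
for option in response.options:
    if option.otype == NSID:
        print(option.data.decode("ascii"))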
L-Root supports the use of HOSTNAME.BIND/CH/TXT queries to return the identity of an L-Root node. The TXT RDATA returned is the fully-qualified hostname of the responding L-Root node.
The HOSTNAME.BIND/CH/TXT convention is described in [RFC4892].
Using HOSTNAME.BIND/CH/TXT to identify a node for the purposes of reporting a problem is frequently reasonable, but it should be acknowledged that there is potential for re-routing between successive queries: an observed problem might relate to one node, whilst a subsequent HOSTNAME.BIND/CH/TXT query could be answered by a different node. Use of the NSID option can obviate this possibility (see Section 3.1).
[monster:~]% dig -4 @L.ROOT-SERVERS.NET HOSTNAME.BIND CH TXT +short
"ytz01.l.root-servers.org"
[monster:~]%
[monster:~]% dig -6 @L.ROOT-SERVERS.NET HOSTNAME.BIND CH TXT +short
"ytz01.l.root-servers.org"
[monster:~]%
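The same query can be made programmatically. The following minimal sketch, again assuming the third-party dnspython library (2.x), sends a HOSTNAME.BIND/CH/TXT query to the L-Root IPv4 service address and prints the TXT strings returned.

import dns.message
import dns.query
import dns.rdataclass
import dns.rdatatype

# CHAOS-class TXT query, equivalent to "dig @199.7.83.42 HOSTNAME.BIND CH TXT".
query = dns.message.make_query("HOSTNAME.BIND", dns.rdatatype.TXT,
                               rdclass=dns.rdataclass.CH)
response = dns.query.udp(query, "199.7.83.42", timeout=5)

for rrset in response.answer:
    for rdata in rrset:
        # Each TXT rdata is a tuple of byte strings; here the single string
        # is the fully-qualified hostname of the responding node.
        print(b" ".join(rdata.strings).decode("ascii"))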
L-Root supports the use of ID.SERVER/CH/TXT queries to return the identity of an L-Root node. The TXT RDATA returned is the fully-qualified hostname of the responding L-Root node.
ID.SERVER/CH/TXT functions identically (apart from the QNAME) to HOSTNAME.BIND/CH/TXT, as discussed in Section 3.2. The discussion there relating to the possibility of re-routing between successive queries also follows for ID.SERVER/CH/TXT.
The ID.SERVER/CH/TXT convention is described in [RFC4892].
[monster:~]% dig -4 @L.ROOT-SERVERS.NET ID.SERVER CH TXT +short
"ytz01.l.root-servers.org"
[monster:~]%
[monster:~]% dig -6 @L.ROOT-SERVERS.NET ID.SERVER CH TXT +short
"ytz01.l.root-servers.org"
[monster:~]%
The operator of L-Root has deployed a separate DNS service in parallel with L-Root, operating on precisely the same set of nodes. Measurements of this separate service should give results which are representative of L-Root. Further discussion of this service can be found in Section 4.
The fully-qualified DNS name IDENTITY.L.ROOT-SERVERS.ORG (note the use of ORG, not NET) has associated TXT and A RR Sets which are unique to the responding node. Clients are hence able to issue queries for IDENTITY.L.ROOT-SERVERS.ORG/IN/A and IDENTITY.L.ROOT-SERVERS.ORG/IN/TXT and use the results both to identify individual nodes and to distinguish between responses generated by different nodes.
The TXT record returned in the response to such queries is structured as a sequence of five character-strings: the fully-qualified hostname of the responding node, followed by the city, region, country and continent in which the node is located (the region string may be empty for some nodes).
The A record returned in the response to such queries is guaranteed to be unique to the responding node.
Since in this case identity data is published using IN-class resource records, it is not necessary to send queries directly towards L-Root in order to obtain results. Responses can be obtained through recursive servers, the responses in those cases being the identity of L-Root as observed through the recursive server used rather than the "closest" L-Root node to the client. This facilitates some degree of remote troubleshooting, since a query for IDENTITY.L.ROOT-SERVERS.ORG/IN/TXT or .../IN/A directed at a remote recursive resolver can help illustrate which L-Root node is being used by that server (or was used when its cache was populated).
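As an illustration of this remote-troubleshooting technique, the following sketch (assuming the third-party dnspython library, 2.x) directs the IDENTITY.L.ROOT-SERVERS.ORG/IN/TXT query at a chosen recursive resolver; the resolver address shown is a documentation placeholder, not a recommendation.

import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.0.2.53"]  # the remote recursive resolver to test

answer = resolver.resolve("IDENTITY.L.ROOT-SERVERS.ORG", "TXT")
for rdata in answer:
    # The first string identifies the L-Root node seen by that resolver
    # (or the node seen when its cache was populated).
    print(b" ".join(rdata.strings).decode("ascii"))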
A related caching effect is that responses to IDENTITY.L.ROOT-SERVERS.ORG/IN/A and IDENTITY.L.ROOT-SERVERS.ORG/IN/TXT may be cached at different times, and may hence persist in a cache for overlapping periods of time. One possible visible effect is that the responses to .../IN/A and .../IN/TXT as presented from a cache may appear to be incoherent (i.e. refer to different nodes) despite queries against the cache happening (near-)simultaneously. Caches may also discard the published TTLs in responses from the authoritative server and replace them with longer TTLs as a matter of local policy. Interpretation of responses to these queries from caches should therefore be carried out with these possible effects in mind.
It has been observed that IDENTITY.L.ROOT-SERVERS.ORG/IN/A queries offer a useful mechanism for troubleshooting DNS problems with non-technical users, since such users can often be walked through the process of looking up an A record (e.g. as a side effect of utilities such as ping) far more easily than they can be instructed in the use of DNS-specific tools such as dig.
[monster:~]% dig IDENTITY.L.ROOT-SERVERS.ORG TXT +short
"ytz01.l.root-servers.org" "Toronto" "Ontario" "Canada" "NorthAmerica"
[monster:~]%
[monster:~]% dig IDENTITY.L.ROOT-SERVERS.ORG A +short
67.215.199.91
[monster:~]%
The fully-qualified DNS name NODES.L.ROOT-SERVERS.ORG (note again the use of ORG, not NET) provides multiple TXT RRs, one per node, and represents the effective concatenation of all possible responses to the query IDENTITY.L.ROOT-SERVERS.ORG/IN/TXT.
Note that in the example below we have forced dig to send the query over TCP, since we expect the response to be too large for UDP transport to accommodate. Note also that the list shown is truncated for clarity, and can be expected to change from time to time as new L-Root nodes are provisioned and old ones decommissioned.
[monster:~]% dig NODES.L.ROOT-SERVERS.ORG TXT +short +tcp | head -10
"abj01.l.root-servers.org" "Abidjan" "" "Cote d'Ivoire" "Africa"
"abj02.l.root-servers.org" "Abidjan" "" "Cote d'Ivoire" "Africa"
"akl01.l.root-servers.org" "Mangere" "" "New Zealand" "AsiaPacific"
"akl41.l.root-servers.org" "Mangere" "" "New Zealand" "AsiaPacific"
"akl42.l.root-servers.org" "Mangere" "" "New Zealand" "AsiaPacific"
"akl43.l.root-servers.org" "Mangere" "" "New Zealand" "AsiaPacific"
"akl44.l.root-servers.org" "Mangere" "" "New Zealand" "AsiaPacific"
"ams01.l.root-servers.org" "Haarlemmermeer" "" "Netherlands" "Europe"
"anc01.l.root-servers.org" "Anchorage" "Alaska" "United States" \
  "NorthAmerica"
[monster:~]%
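The same retrieval can be scripted. The following sketch, assuming the third-party dnspython library (2.x) and the local system resolver configuration, fetches the node list over TCP and unpacks each TXT record into the fields shown above.

import dns.resolver

# Force TCP, since the full response is expected to be too large for UDP.
answer = dns.resolver.resolve("NODES.L.ROOT-SERVERS.ORG", "TXT", tcp=True)

for rdata in answer:
    # Each TXT record is expected to carry five strings, as shown above:
    # hostname, city, region (possibly empty), country, continent.
    hostname, city, region, country, continent = (
        s.decode("ascii") for s in rdata.strings)
    print(hostname, city, region or "-", country, continent)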
Individual L-Root nodes run a dedicated, separate authority-only DNS server process which serves the IDENTITY.L.ROOT-SERVERS.ORG zone. The contents of that zone are unique to every node, and hence each responding node will generate a node-specific response.
The contents of the IDENTITY.L.ROOT-SERVERS.ORG zone are hence deliberately incoherent, the apparent zone contents depending on the node responding to the corresponding query.
The IDENTITY.L.ROOT-SERVERS.ORG zone is delegated to the single name server BEACON.L.ROOT-SERVERS.ORG, numbered on IPv4 and IPv6 addresses that are covered by the same routing advertisements that cover the L-Root service addresses. Reachability of BEACON.L.ROOT-SERVERS.ORG is hence well-aligned with the reachability of L.ROOT-SERVERS.NET, and hence measurement of the IDENTITY service ought to give similar results to measurement of the L-Root service.
It is considered best practice always to delegate a DNS zone to more than one name server; however, as described, the IDENTITY.L.ROOT-SERVERS.ORG zone is delegated to just one server. Ordinarily this would present a risk of failure if that single server is not available; however, given the purpose of the delegation in this case and that the expected mitigation of a failure in a single node is the routing of a query to a different node, delegation to a single server in this particular use-case is effective.
The L.ROOT-SERVERS.ORG zone is signed using DNSSEC, and hence secure responses for BEACON.L.ROOT-SERVERS.ORG and NODES.L.ROOT-SERVERS.ORG are available. IDENTITY.L.ROOT-SERVERS.ORG is an insecure delegation from the L.ROOT-SERVERS.ORG zone, however, following the operational preference to serve static data from each node for that zone, and the disinclination to distribute key materials and zone signing machinery to every node.
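The delegation arrangements described above can be checked from any client. The following sketch, assuming the third-party dnspython library (2.x) and the local system resolver configuration, looks up the NS RRSet for IDENTITY.L.ROOT-SERVERS.ORG and confirms that no DS record is published for it (i.e. that the delegation is insecure).

import dns.resolver

# The zone is delegated to the single name server BEACON.L.ROOT-SERVERS.ORG.
ns_answer = dns.resolver.resolve("IDENTITY.L.ROOT-SERVERS.ORG", "NS")
print([rdata.target.to_text() for rdata in ns_answer])

# An insecure delegation has no DS record in the (signed) parent zone, so the
# DS query is expected to return an empty answer.
try:
    dns.resolver.resolve("IDENTITY.L.ROOT-SERVERS.ORG", "DS")
    print("DS record present: secure delegation")
except dns.resolver.NoAnswer:
    print("no DS record: insecure delegation, as described above")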
Some operators of anycast services choose not to disclose locations (or even numbers) of nodes, citing security concerns. The operator of L-Root considers that none of the published information described in this document is truly secret, since any service element which provides service to the Internet can never truly be obscured from view. Given that location information can be found regardless of any conscious, deliberate disclosure, and since easy access to this information has diagnostic value, the operator of L-Root has adopted a policy of operational transparency.
The information presented in this document presents no new threat to the Internet.
This document makes no request of the IANA.
The aspects of the L-Root service that were deployed to facilitate IN-class mapping were discussed and implemented as part of an informal collaboration with Xun Fan, John Heidemann and Ramesh Govindan, whose contributions are acknowledged.
Helpful reviews and comments from Gaurab Upadhaya, Hugo Salgado, Brian Dixon and Bob Harold on earlier versions of this document were very much appreciated.
[RFC1034]  Mockapetris, P., "Domain names - concepts and facilities", STD 13, RFC 1034, November 1987.

[RFC1035]  Mockapetris, P., "Domain names - implementation and specification", STD 13, RFC 1035, November 1987.

[RFC2671]  Vixie, P., "Extension Mechanisms for DNS (EDNS0)", RFC 2671, August 1999.

[RFC4786]  Abley, J. and K. Lindqvist, "Operation of Anycast Services", BCP 126, RFC 4786, December 2006.

[RFC4892]  Woolf, S. and D. Conrad, "Requirements for a Mechanism Identifying a Name Server Instance", RFC 4892, June 2007.

[RFC5001]  Austein, R., "DNS Name Server Identifier (NSID) Option", RFC 5001, August 2007.

[RFC6304]  Abley, J. and W. Maton, "AS112 Nameserver Operations", RFC 6304, July 2011.
[Fan2013]  Fan, X., Heidemann, J., and R. Govindan, "Evaluating Anycast in the Domain Name System", Proceedings of IEEE INFOCOM, Turin, Italy, April 2013.
This section (and sub-sections) to be removed prior to publication.