Network Working Group                                           A. Clemm
Internet-Draft                                                 Futurewei
Intended status: Experimental                                    E. Voit
Expires: 25 April 2024                                     Cisco Systems
                                                                  A. Guo
                                                               Futurewei
                                                            I. Dominguez
                                                          Telefonica I+D
                                                         23 October 2023


        Mounting YANG-Defined Information from Remote Datastores
                    draft-clemm-netmod-peermount-02

Abstract

   This document defines a mechanism, Peer-Mount, that allows YANG
   datastores to reference and incorporate information from remote
   datastores.  This is accomplished by extending YANG with the ability
   to define mount points that reference data nodes in other YANG
   subtrees and subsequently allowing those data nodes to be accessed by
   client applications as if they were part of the same local data
   hierarchy.  In addition, means to manage and administer those mount
   points are provided.  This facilitates the development of
   applications that need to access network-wide data that transcends
   individual network devices while ensuring network-wide data
   consistency.  One example concerns applications that require a
   network inventory and/or network topology with access to select
   management data within the nodes that comprise it.

   The concept of Peer-Mount was first introduced in an earlier
   Internet-Draft that was not pursued further due to lack of interest
   at the time.
   It is being revived now in light of renewed IETF interest in network
   inventory, network topology, and related use cases, for which Peer-
   Mount is of specific interest.  Other concepts defined in the earlier
   draft, notably Alias-Mount, are not considered here since they
   provide other capabilities that are less applicable to those topics.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 25 April 2024.

Copyright Notice

   Copyright (c) 2023 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents (https://trustee.ietf.org/
   license-info) in effect on the date of publication of this document.
   Please review these documents carefully, as they describe your rights
   and restrictions with respect to this document.  Code Components
   extracted from this document must include Revised BSD License text as
   described in Section 4.e of the Trust Legal Provisions and are
   provided without warranty as described in the Revised BSD License.

Table of Contents

   1.  Introduction
     1.1.  Overview
     1.2.  Background and history
     1.3.  Restrictions and possible future extensions
     1.4.  Differentiation from other work
     1.5.  Example uses
   2.  Key Words
   3.  Definitions and Acronyms
   4.  Example scenarios
     4.1.  Network controller view, network topology, network
           inventory
     4.2.  Consistent network configuration
   5.  Operating on mounted data
     5.1.  General principles
     5.2.  Data retrieval
     5.3.  Operations beyond data retrieval
     5.4.  Other operational considerations
   6.  Data model structure
     6.1.  YANG mountpoint extensions
     6.2.  YANG structure diagrams
     6.3.  Mountpoint management
   7.  Datastore mountpoint YANG module
   8.  Other considerations
     8.1.  Authorization
     8.2.  Datastore qualification
     8.3.  Mount cascades
     8.4.  Mountpoint status
     8.5.  Caching
     8.6.  Filtering
     8.7.  Implementation considerations beyond caching
     8.8.  Modeling best practices
   9.  IANA Considerations
   10. Security Considerations
   11. Acknowledgements
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Appendix A.  Open issues
   Authors' Addresses

1.  Introduction

1.1.  Overview

   This document introduces a new capability that allows YANG datastores
   [RFC7950] to incorporate and reference information from other YANG
   subtrees that reside on separate servers.  The capability allows a
   client application to retrieve and have visibility of both local and
   remote YANG data as part of the same YANG tree accessed through a
   single server.  This is provided by introducing a mountpoint concept.
   This concept allows a YANG data node in a primary datastore to be
   declared a "mount point" under which a subtree with YANG data from
   another server can be mounted.  To the client, this provides
   visibility to data from other subtrees, rendered in a way that makes
   it appear as if all of that data were an integral part of the same
   datastore.  This enables users to retrieve local data as well as
   mounted data from remote systems in an integrated fashion, using
   e.g.  NETCONF [RFC6241] or RESTCONF [RFC8040] [RFC8527] data
   retrieval primitives.  Peer-Mount allows a server to effectively
   provide a federated datastore that includes YANG data from across
   the network.  The concept is reminiscent of a Network File System,
   which allows remote folders to be mounted and made to appear as if
   they were contained in the local file system of the user's machine.

   Peer-Mount also takes inspiration from a new technique in data
   management known as data virtualization
   (https://en.wikipedia.org/wiki/Data_virtualization).  Traditionally,
   data platforms like data lakes or data warehouses have relied on
   Extract-Transform-Load (ETL) pipelines in which data is ingested
   from sources and eventually stored in the data platform for
   consumption.  Data virtualization defines a new data access approach
   wherein data remains at its source and is collected and served on
   demand by the data platform only when a consumer requests such data.
   As a result, data is not duplicated in the data platform, but served
   directly to the consumer via the data platform.  To this end, the
   data platform maintains virtual pointers to the source where the
   data can be retrieved.

1.2.  Background and history

   This draft borrows heavily from an earlier draft
   [I-D.clemm-netmod-mount] in which the concept of Peer-Mount was first
   introduced.  That draft had been accompanied by a second draft
   articulating the requirements to be addressed
   [I-D.voit-netmod-yang-mount-requirements].  Both drafts were
   eventually abandoned due to limited interest at the time.

   Since then, things have changed in that use cases have emerged that
   would greatly benefit from Peer-Mount as a solution.  This includes
   in particular interest in developing models for network inventory as
   well as for network topologies.  Both cases involve the need to
   provide a consolidated view of a network through a YANG data model
   that could be provided, for example, by a network controller.  This
   interest has manifested itself in the creation of a new working
   group, Network Inventory YANG (ivy).

   Peer-Mount can facilitate the development of network inventory as
   well as network topology models that incorporate "live" management
   data from the network devices that make up the inventory.
   This can be achieved by mounting that data, for example aspects of
   their configuration or even current device state, below the network
   inventory and/or network topology entities.  Benefits of this include
   the avoidance of data model redundancy (defining overlapping YANG
   data models both at the device and at the network inventory level),
   simplification of dealing with replicated data (single authoritative
   data ownership), and ultimately faster time to market and lower
   development cost.  Other use cases to benefit from Peer-Mount include
   Digital Twin Networking and Digital Network Topology Maps, both of
   which require a holistic view of network inventory that includes live
   management data.

   The earlier draft also included another variation of mount, Alias-
   Mount.  Alias-Mount allowed for the definition of mountpoints that
   reference a local YANG subtree residing on the same server.  That
   offered, in effect, an aliasing capability that allowed an
   alternative hierarchy and path to access the same YANG data.  Alias-
   Mount could be thought of as a simpler version of Peer-Mount that
   does not specify a remote server.  However, in the interest of
   simplicity, Alias-Mount is not included here as it does not
   contribute to the ability to provide a federated datastore providing
   a holistic network-wide view, which is the property that is of
   interest here.

1.3.  Restrictions and possible future extensions

   Data that is mounted is still authoritatively owned by the server on
   which it originates and resides.  That data is a part of that
   server's own datastore, regardless of whether or not it also happens
   to be mounted by a remote client somewhere else.  This
   implies that from the view of the mounting system, there are a number
   of differences that apply to data that is mounted.  Specifically, it
   means that the validation of integrity constraints is the
   responsibility of the authoritative owner, not of the server that is
   mounting that data as a client.  Mounting does not impose additional
   constraints on the remote data; it merely provides a different view
   of the same data from remote.

   The mountpoint concept applies in principle to operations beyond data
   retrieval, i.e. to configuration, RPCs, notification subscriptions
   [RFC8639], and YANG-Push subscriptions [RFC8641].  However, support
   for such operations involves additional considerations.  Most
   significantly, in the case of configuration operations, additional
   considerations regarding transactions and locking would apply (which
   might now have to be supported across the network).

   For this reason, in its initial version, only data retrieval
   operations (e.g.  GET) will be supported for data that is mounted.
   Other operations that are directed at subtrees that include mounted
   information will simply be capped at the mountpoints, i.e. not be
   applied to mounted data.

   It is conceivable that additional capabilities for operations on
   mounted information will be introduced at some point.  However, to
   keep things simple, the specification of such capabilities is beyond
   the scope of this specification; they can be introduced incrementally
   over time and advertised by YANG servers through additional features
   at that time.

1.4.  Differentiation from other work

   YANG provides means by which modules that have been separately
   defined can reference and augment one another.  YANG also provides
   means to specify data nodes that reference other data nodes.
   However, all the data is assumed to be instantiated as part of the
   same datastore, for example a datastore provided through a NETCONF
   server.  Existing YANG mechanisms do not account for the possibility
   that information that needs to be referred to does not merely reside
   in a different subtree of the same datastore, or in a separate
   module that is also instantiated in the same datastore, but is
   genuinely part of a different datastore that is provided by a
   different server.

   The ability to mount information from local and remote datastores is
   new and not covered by existing YANG mechanisms.  Until now,
   management information provided in a datastore has been intrinsically
   tied to the same server and to a single data hierarchy.  In contrast,
   the capability introduced in this specification allows the server to
   present data that is instantiated on remote systems as if it were its
   own and contained in its own local data hierarchy.

   The capability of mounting information from other subtrees is
   accomplished by a set of YANG extensions that allow such mount
   points to be defined.  For this purpose, a new YANG module is
   introduced.  The module defines the YANG extensions, as well as a
   data model that can be used to manage the mountpoints and mounting
   process itself.  Only the mounting module and its server (i.e.  the
   "receivers" or "consumers" of the mounted information) need to be
   aware of the concepts introduced here.  Mounting is transparent to
   the "providers" of the mounted information and models that are being
   mounted; any data nodes or subtrees within any YANG model can be
   mounted.

   It should be mentioned that Peer-Mount is not to be confused with
   the ability to mount a schema, known as Schema Mount [RFC8528].
   Schema Mount allows an existing model definition to be instantiated
   underneath a mount point, with the data then locally instantiated at
   that point.  It does not allow referencing a set of YANG data that
   has already been instantiated somewhere else.  In that sense, Schema
   Mount more closely resembles a "grouping" concept that allows an
   existing definition to be reused in a new context, as opposed to
   referencing and incorporating existing instance information into a
   new context.

1.5.  Example uses

   The ability to mount data from remote datastores is useful to address
   various problems that several categories of applications are faced
   with.

   One category of applications that can leverage this capability is
   network controller applications that need to present a consolidated
   view of management information in datastores across a network.
   Generally speaking, applications may need to provide a network
   inventory [RFC8345] [I-D.wzwb-opsawg-network-inventory-management]
   which provides not only a list of inventory items in the network, but
   that also includes additional information about each of those items,
   such as their status or certain aspects of their configuration.
   Likewise, applications that provide a view of a network topology may
   want to include certain aspects about the status and other properties
   of nodes, termination points, and links that make up the topology.
   One example of such applications includes Network Digital Twins
   [I-D.irtf-nmrg-network-digital-twin-arch].

   These applications are faced with the problem that in order to expose
   information, that information needs to be part of their own
   datastore.  Today, this requires support of a corresponding YANG data
   module.  In order to expose information that concerns other network
   elements, that information has to be replicated into the controller's
   own datastore in the form of data nodes that may mirror but are
   clearly distinct from corresponding data nodes in the network
   element's datastore.  In addition, in many cases, a controller needs
   to impose its own hierarchy on the data that is different from the
   one that was defined as part of the original module.  An example of
   this concerns interface data, both operational data (e.g. various
   types of interface statistics) and configuration data, such as
   defined in [RFC7223].  This data will be contained in a top-level
   container ("interfaces", in this particular case) in a network
   element datastore.  The controller may need to provide its clients
   with a view of interface data from multiple devices under its scope
   of control.  One way to do so would involve organizing the data in a
   list with separate list elements for each device.  However, this in
   turn would require introduction of redundant YANG modules that
   effectively replicate the same interface data save for differences in
   hierarchy.

   By directly mounting information from network element datastores, the
   controller does not need to replicate the same information from
   multiple datastores, nor does it need to re-define any network
   element and system-level abstractions to be able to put them in the
   context of network abstractions.  Instead, the subtree of the remote
   system is attached to the local mount point.  Operations that need to
   access data below the mount point are in effect transparently
   redirected to the remote system, which is the authoritative owner of
   the data.  The mounting system does not even necessarily need to be
   aware of the specific data in the remote subtree.  Optionally,
   caching strategies can be employed in which the mounting system
   prefetches data.

   A second category of applications concerns decentralized networking
   applications that require globally consistent configuration of
   parameters.  When each network element maintains its own datastore
   with the same configurable settings, a single global change requires
   modifying the same information in many network elements across a
   network.  In case of inconsistent configurations, network failures
   can result that are difficult to troubleshoot.  In many cases, what
   is more desirable is the ability to configure such settings in a
   single place, then make them available to every network element.
   Today, this generally requires the introduction of specialized
   servers and configuration options outside the scope of NETCONF, such
   as RADIUS [RFC2866] or DHCP [RFC2131].  In order to address this
   within the scope of NETCONF and YANG, the same information would have
   to be redundantly modeled and maintained, representing operational
   data (mirroring some remote server) on some network elements and
   configuration data on a designated master.  Either way, additional
   complexity ensues.

   Instead of replicating the same global parameters across different
   datastores, the solution presented in this document allows a single
   copy to be maintained in a subtree of a single datastore that is
   then mounted by every network element that requires awareness of
   these
   parameters.  The global parameters can be hosted in a controller or a
   designated network element.  This considerably simplifies the
   management of such parameters that need to be known across elements
   in a network and require global consistency.

   It should be noted that for these and many other applications,
   merely having a view of the remote information is sufficient.  It
   makes it possible to define consolidated views of information
   without the need to replicate data and models that have already been
   defined, to audit information, and to validate consistency of
   configurations across a network.  Only retrieval operations are
   required; no operations that involve configuring remote data are
   involved.

2.  Key Words

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in BCP
   14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

3.  Definitions and Acronyms

   Data node: An instance of management information in a YANG datastore.

   DHCP: Dynamic Host Configuration Protocol.

   Datastore: A conceptual store of instantiated management information,
   with individual data items represented by data nodes which are
   arranged in a hierarchical manner.

   Data subtree: An instantiated data node and the data nodes that are
   hierarchically contained within it.

   Mount client: The system at which the mount point resides, into which
   the remote subtree is mounted.

   Mount point: A data node that receives the root node of the remote
   datastore being mounted.

   Mount server: The server with which the mount client communicates and
   which provides the mount client with access to the mounted
   information.  Can be used synonymously with mount target.

   Mount target: A remote server whose datastore is being mounted.

   NACM: NETCONF Access Control Model

   NETCONF: Network Configuration Protocol

   RADIUS: Remote Authentication Dial In User Service.

   RPC: Remote Procedure Call

   Remote datastore: A datastore residing at a remote node.

   URI: Uniform Resource Identifier

   YANG: A data definition language for NETCONF

   YANG-Push: A mechanism that allows a client to subscribe to updates
   from a datastore, which are then automatically pushed by the server
   to the client.

4.  Example scenarios

   The following example scenarios outline some of the ways in which the
   ability to mount YANG datastores can be applied.  Other mount
   topologies can be conceived in addition to the ones presented here.

4.1.  Network controller view, network topology, network inventory

   The need to maintain a network inventory is a requirement for many
   applications, for example applications that expect to operate on a
   network topology [RFC8345].  Network controllers can use the mounting
   capability as part of maintaining a network inventory and, more
   generally, presenting a consolidated view of management information
   across the network.  This allows network controllers to expose
   network-wide abstractions, such as topologies or paths, multi-device
   abstractions, such as VRRP [RFC5798], and network-element specific
   abstractions, such as information about a network element's
   interfaces.

   Without a mounting capability, a network controller would need to at
   least conceptually replicate data from network elements to provide
   such a view, incorporating network element information into its own
   controller model that is separate from the network element's,
   indicating that the information in the controller model is to be
   populated from network elements.  This can introduce issues such as
   data inconsistency and staleness, in addition to operational overhead
   that is required to populate and sync that data.  Equally important,
   it would lead to the need to define redundant data models: one model
   that is implemented by the network element itself, and another model
   to be implemented by the network controller.  This leads to poor
   maintainability, as analogous information has to be redundantly
   defined and implemented across different data models.  In general,
   controllers cannot simply support the same modules as their network
   elements for the same information because that information needs to
   be put into a different context.  This leads to "node" information
   that needs to be instantiated and indexed differently, because there
   are multiple instances across different datastores.

   For example, a controller might want to maintain a network inventory
   consisting of a list of network elements.  Underneath each network
   element, the network inventory should also contain the respective
   system-level information.  Without Peer-Mount, this would require
   the definition of a YANG data model that defines the required
   system-level information as part of the network inventory, although
   the same information is also modeled as part of YANG data models
   that are instantiated at the respective network elements.  The
   controller-level network inventory would require a separate data
   model (or set of data models) that repeats the same system-level
   information of the network element and which needs to be redundantly
   defined, implemented, and maintained.  Any augmentations that add
   additional system-level information to the original module will
   likewise need to be redundantly defined, once for the YANG data
   model at the "system" level and a second time at the network
   inventory level.

   By allowing a network controller (or other system maintaining a
   network inventory) to use Peer-Mount and directly mount information
   from network element datastores, the controller does not need to
   replicate the same information from multiple datastores.  Perhaps
   even more importantly, the need to re-define any network element and
   system-level abstractions just to be able to put them in the context
   of network abstractions is avoided.  In this solution, a network
   controller's datastore mounts information from many network element
   datastores.  For example, the network controller datastore (the
   "primary" datastore) could implement a list in which each list
   element contains a mountpoint.  Each mountpoint mounts a subtree from
   a different network element's datastore.  The data from the mounted
   subtrees is then accessible to clients of the primary datastore using
   the usual data retrieval operations.

   This scenario is depicted in Figure 1.  In the figure, a Network
   Controller Datastore contains a network inventory rooted in Ninv.
   Ninv contains a list of nodes in the inventory, N11 and N12.  M1 is
   the mountpoint for a subtree in the datastore in Network Element 1
   and M2 is the mountpoint for a subtree in the datastore in Network
   Element 2.  MDN1 is the mounted data node that is mounted from
   Network Element 1 below N11, and MDN2 is the data node mounted from
   Network Element 2 below N12.

   +-------------+
   |   Network   |
   |  Controller |
   |  Datastore  |
   |             |
   | +--Ninv     |
   |    +--N11   |
   |    |  +--M1*******************************
   |    +--N12   |                            *
   |       +--M2******                        *
   |             |   *                        *
   +-------------+   *                        *
                     *   +---------------+    *    +---------------+
                     *   | +--N1         |    *    | +--N5         |
                     *   |     +--N2     |    *    |     +--N6     |
                     ********> +--MDN2   |    *********> +--MDN1   |
                         |         +--N3 |         |         +--N7 |
                         |         +--N4 |         |         +--N8 |
                         |               |         |               |
                         |    Network    |         |    Network    |
                         |    Element    |         |    Element    |
                         |  Datastore 1  |         |  Datastore 2  |
                         +---------------+         +---------------+

                Figure 1: Network controller mount topology

4.2.  Consistent network configuration

   While network inventory serves as a primary motivator for the
   introduction of Peer-Mount, it can also be used for other
   applications.  A second category of such applications concerns
   decentralized networking applications that require globally
   consistent configuration of parameters that need to be known across
   elements in a network.  Today, the configuration of such parameters
   is generally performed on a per network element basis, which is not
   only redundant but, more importantly, error-prone.  Inconsistent
   configurations lead to erroneous network behavior that can be
   challenging to troubleshoot.

   Using the ability to mount information from remote datastores opens
   up a new possibility for managing such settings.  Instead of
   replicating the same global parameters across different datastores,
   a single copy is maintained in a subtree of a single datastore.
   This datastore can be hosted in a controller or a designated network
   element.
   The subtree is subsequently mounted by every network element that
   requires access to these parameters.

   In many ways, this category of applications is an inverse of the
   previous category: whereas in the network controller case data from
   many different datastores is mounted into the same datastore through
   multiple mountpoints, in this case many network elements, each with
   its own datastore, mount the same subtree from a single remote
   datastore.

   The scenario is depicted in Figure 2.  In the figure, M1 is the
   mountpoint for the Network Controller datastore in Network Element 1
   and M2 is the mountpoint for the Network Controller datastore in
   Network Element 2.  MDN is the mounted data node (i.e. the root of
   the mounted subtree) in the Network Controller datastore that
   contains the data nodes that represent the shared configuration
   settings.  (Note that there is no reason why the Network Controller
   Datastore in this figure could not simply reside on a network
   element itself; the division of responsibilities is a logical one.)

   +---------------+         +---------------+
   |    Network    |         |    Network    |
   |    Element    |         |    Element    |
   |  Datastore 1  |         |  Datastore 2  |
   |               |         |               |
   | +--N1         |         | +--N5         |
   | |   +--N2     |         | |   +--N6     |
   | |       +--N3 |         | |       +--N7 |
   | |       +--N4 |         | |       +--N8 |
   | |             |         | |             |
   | +--M1         |         | +--M2         |
   +-----*---------+         +-----*---------+
         *                         *               +---------------+
         *                         *               |               |
         *                         *               | +--N10        |
         *                         *               |    +--N11     |
         *********************************************> +--MDN     |
                                                   |        +--N20 |
                                                   |        +--N21 |
                                                   |         ...   |
                                                   |        +--N22 |
                                                   |               |
                                                   |    Network    |
                                                   |   Controller  |
                                                   |   Datastore   |
                                                   +---------------+

               Figure 2: Distributed config settings topology
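
   As an illustration, the following module sketches how a network
   element could declare the mountpoint labeled M1/M2 in Figure 2,
   using the YANG extensions defined in Section 6.1.  The module name,
   the prefixes, the "/global-settings" subtree path, and the use of a
   sibling "mount-target" container as the target argument are
   hypothetical and chosen purely for illustration:

   module example-shared-settings {
     namespace "urn:example:shared-settings";
     prefix exss;

     import ietf-peer-mount {
       prefix pmt;
     }

     container device {
       // Addressing information for the server hosting the shared
       // settings (the Network Controller Datastore in Figure 2),
       // using the mount-target grouping from ietf-peer-mount.
       uses pmt:mount-target;

       // Mountpoint corresponding to M1/M2 in Figure 2.  The shared
       // settings subtree (rooted at MDN) is rendered below this
       // node.
       container shared-settings {
         pmt:mountpoint "shared-settings" {
           pmt:subtree "/global-settings";
           pmt:target "../mount-target";
         }
       }
     }
   }

   Each network element that needs the shared settings would implement
   such a module, with the mount-target data pointing at the server
   that authoritatively owns those settings.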

5.  Operating on mounted data

   This section provides a rough illustration of the operations flow
   involving mounted datastores.

5.1.  General principles

   The first thing to note about these operation flows is that a mount
   client essentially constitutes a special management application that
   interacts with a subtree to render the data of that subtree as an
   alternative tree hierarchy.  In
   Peer-Mount, the mount client constitutes in effect another
   application, with the remote system remaining the authoritative owner
   of the data.  While it is conceivable that the remote system (or an
   application that proxies for the remote system) provides certain
   functionality to facilitate the specific needs of the mount client to
   make it more efficient, the fact that another system decides to
   expose a certain "view" of that data is fundamentally not the remote
   system's concern.

   When a client application makes a request to a server that involves
   data that is mounted from a remote system, the server will
   effectively act as a proxy to the remote system on the client
   application's behalf.  It will extract from the client application
   request the portion that involves the mounted subtree from the remote
   system.  It will strip that portion of the local context, i.e. remove
   any local data paths and insert the data path of the mounted remote
   subtree, as appropriate.  The server will then forward the transposed
   request to the remote system that is the authoritative owner of the
   mounted data, acting itself as a client to the remote server.  Upon
   receiving the reply, the server will transpose the results into the
   local context as needed, for example mapping the data paths into the
   local data tree structure, and combine those results with the
   results of the remaining portion of the original request.

5.2.  Data retrieval

   Data retrieval operations are the only category of operations that
   is supported for peer-mounted data.  In that case, a NETCONF "get"
   or "get-config" operation might be applied to a subtree whose scope
   includes a mount point.  When resolving the mount point, the server
   issues its own "get" or "get-config" request against the remote
   system's subtree that is attached to the mount point.  Any filters
   in the request are transposed to include only those portions that
   would be applicable to the remote subtree (in the process removing
   portions that would be applied locally above the mountpoint).  The
   data that is returned is then inserted into the data structure that
   is in turn returned to the client that originally
   invoked the request.

5.3.  Operations beyond data retrieval

   The fact that data retrieval operations are the only category of
   operations that is supported for peer-mounted data does not preclude
   other operations from being applied to datastore subtrees that
   contain mountpoints and peer-mounted data.  Peer-mounted data will
   simply be transparent to those operations.  When an operation is
   applied to a subtree which includes mountpoints, mounted data is
   ignored for purposes of the operation.  For example, for a NETCONF
   "edit-config" operation that includes a subtree with a mountpoint, a
   server will ignore the data under the mountpoint and apply the
   operation only to the local configuration.  Mounted data is treated
   as "read-only" data.  The server does not even need to return an
   error message indicating that the operation could not be applied to
   mounted data; the mountpoint is simply ignored.

   In principle, it is conceivable that operations other than data-
   retrieval are applied to mounted data as well.  For example, an
   operation to edit configuration information might expect edits to be
   applied to remote systems as part of the operation, where the edited
   subtree involves mounted information.  However, editing of
   information and "writing through" to remote systems potentially
   involves significant complexity, particularly if transactions and
   locking across multiple configuration items across multiple remote
   systems are involved.  Support for such operations will require
   additional capabilities, specification of which is beyond the scope
   of this specification.

   Likewise, Peer-Mount does not extend towards RPCs that are defined
   as part of YANG modules whose contents are being mounted.  Support
   for
   RPCs that involve mounted portions of the datastore, while
   conceivable, would require introduction of an additional capability,
   whose definition is outside the scope of this specification.

   Finally, Peer-Mount does not extend towards notifications [RFC8639]
   or YANG-Push [RFC8641].  However, it is conceivable and fairly
   straightforward to offer support for those operations in the future
   using a separate capability, definition of which is once again
   outside the scope of this specification.

5.4.  Other operational considerations

   Since mounting of data typically involves communication with a remote
   system, there is a possibility that the remote system will not
   respond within a certain amount of time, that connectivity is lost,
   or that other errors occur.  Accordingly, the ability to mount
   datastores also involves mountpoint management, which includes the
   ability to configure timeouts, retries, and management of mountpoint
   state (including dynamic addition and removal of mountpoints).
   Mountpoint management is discussed in Section 6.3.

   It is expected that some implementations will introduce caching
   schemes.  Caching can increase performance and efficiency in certain
   scenarios (for example, in the case of data that is frequently read
   but that rarely changes), but increases implementation complexity.
   Caching is not required for Peer-Mount to work; without caching,
   access to mounted data is "on-demand", with the authoritative data
   node always being accessed.  Whether to perform caching is a local
   implementation decision.

   When caching is supported by an implementation, it can benefit from
   the ability to subscribe to updates on remote data from remote
   servers.  Some optimizations to facilitate caching support are
   discussed in Section 8.5.

6.  Data model structure

6.1.  YANG mountpoint extensions

   At the center of the module is a set of YANG extensions that allow a
   mountpoint to be defined in a YANG data model.

   *  The first extension, "mountpoint", is used to declare a
      mountpoint.  The extension takes the name of the mountpoint as an
      argument.

   *  The second extension, "subtree", serves as a substatement
      underneath a mountpoint statement.  It takes an argument that
      defines the root node of the datastore subtree that is to be
      mounted, specified as a string that contains a path expression.
      This extension is used to define mountpoints for Peer-Mount.

   *  The third extension, "target", also serves as a substatement
      underneath a mountpoint statement.  It takes an argument that
      identifies the target system from which a subtree is mounted.  The
      argument is a reference to a data node that contains the
      information that is needed to identify and address a remote
      server, such as an IP address, a host name, or a URI [RFC3986].

      It is conceivable that a mount point is contained in a container
      that is part of a list, with each list element containing a mount
      point instance that references a different target system, with
      target system information itself part of a separate list.  The
      argument in this case will include the required index information
      to identify the list element which identifies the target system.

   A mountpoint MUST be contained underneath a container, a list, or a
   case.

   Only a single data node or subtree can be mounted per mountpoint at
   one time.  While the mount target could refer to any data node, it
   is recommended as a best practice that the mount target SHOULD refer
   to a container.  It is possible to maintain, for example, a list of
   mount points, each of which has as its mount target an element of a
   remote list.  However, to avoid unnecessary proliferation of the
   number of mount points and associated management overhead, when data
   from lists or leaf-lists is to be mounted, a container containing
   the list or leaf-list SHOULD be mounted instead of individual list
   elements.

   It is possible for a mounted datastore to contain another mountpoint,
   thus leading to several levels of mount indirection.  However,
   mountpoints MUST NOT introduce circular dependencies.  In particular,
   a mounted datastore MUST NOT contain a mountpoint which specifies the
   mounting datastore as a target and a subtree which contains as root
   node a data node that in turn contains the original mountpoint.
   Whenever a mount operation is performed, this condition MUST be
   validated by the mount client.
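
   As an example of the circular-dependency rule, consider the sketch
   below (all names are hypothetical).  If system B's "/b-data" subtree
   in turn contained a mountpoint that targets system A with subtree
   "/a-data", the mounted data would contain the original mountpoint
   "from-b", and the mount client would have to reject the mount:

   module example-circular-a {
     namespace "urn:example:circular-a";
     prefix exca;

     import ietf-peer-mount {
       prefix pmt;
     }

     container a-data {
       // Addressing information for system B.
       uses pmt:mount-target;

       container from-b {
         pmt:mountpoint "from-b" {
           // Mounts /b-data from system B.  This is only valid as
           // long as /b-data does not (directly or indirectly)
           // mount /a-data back from this system.
           pmt:subtree "/b-data";
           pmt:target "../mount-target";
         }
       }
     }
   }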

6.2.  YANG structure diagrams

   YANG tree diagrams [RFC8340] have proven very useful to convey the
   "Big Picture".  It would be useful to indicate in YANG tree diagrams
   where a given node serves as a mountpoint.  For this purpose, we
   propose a corresponding extension to the structure representation
   convention.  Specifically, we propose to prefix the name of the
   mounting data node with an upper-case 'M'.  The subtree being
   mounted is depicted with a "-->" and the path to the subtree root.
   The identification of the target system is not depicted.  The
   following diagram depicts a mountpoint "node-system-info" contained
   under data node "node", which also contains another data node
   "node-ID".

   rw network
   +-- rw nodes
       +-- rw node [node-ID]
           +-- rw node-ID
           +-- M node-system-info --> /system

               Figure 3: Mountpoint structure diagram example
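
   The following module sketches YANG that could yield the structure
   shown in Figure 3.  The module name, the "inv" prefix, the use of
   the mount-target grouping as the target argument, and the assumption
   that "/system" identifies a container on the target system are
   illustrative only:

   module example-network-inventory {
     namespace "urn:example:network-inventory";
     prefix inv;

     import ietf-peer-mount {
       prefix pmt;
     }

     container network {
       container nodes {
         list node {
           key "node-ID";
           leaf node-ID {
             type string;
           }

           // Per-node addressing information for the remote server,
           // using the mount-target grouping from ietf-peer-mount.
           uses pmt:mount-target;

           // Mountpoint "node-system-info" from Figure 3; the
           // target system's /system container is rendered below
           // this node.
           container node-system-info {
             pmt:mountpoint "node-system-info" {
               pmt:subtree "/system";
               pmt:target "../mount-target";
             }
           }
         }
       }
     }
   }

   Note that the subtree argument names a container ("/system"), in
   line with the best practice described in Section 6.1.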

6.3.  Mountpoint management

   In addition to allowing mountpoints to be defined, the YANG module
   also contains facilities to manage the mountpoints themselves.

   For this purpose, a list of the mountpoints is introduced.  Each
   list element represents a single mountpoint.  It includes an
   identification of the mount point, i.e. its location in the local
   data tree, and a mount target, i.e. the remote system hosting the
   remote datastore, along with a definition of the subtree of the
   remote data node being mounted.  It also includes monitoring
   information about the current status (indicating whether the mount
   has been successful and is operational, or whether an error
   condition applies, such as the target being unreachable or referring
   to an invalid subtree).

   In addition to the list of mountpoints, a set of global mount policy
   settings allows parameters such as mount retries and timeouts to be
   set.

   Each mountpoint list element also contains a set of the same
   configuration knobs, allowing administrators to override global mount
   policies and configure mount policies on a per-mountpoint basis if
   needed.

   There are two ways in which mounting can occur: automatically
   (dynamically performed as part of system operation) or manually
   (administered by a user or client application; in this case,
   mounting occurs on request, not in an arbitrary location but in a
   place that is permissible per the model).  A separate
   mountpoint-origin object is used to distinguish between manually
   configured and automatically populated mountpoints.

   Whether mounting occurs automatically or is subject to management by
   a user or an application can depend on the mountpoint being defined,
   i.e. the semantics of the model.

   When configured automatically, mountpoint information is
   automatically populated by the datastore that implements the
   mountpoint.  The precise mechanisms for discovering mount targets
   and bootstrapping mount points are provided by the mount client
   infrastructure and are outside the scope of this specification.
   Likewise, when a mountpoint should be deleted and when it should
   merely have its mount-status indicate that the target is unreachable
   is a system-specific implementation decision.

   Manual mounting consists of two steps.  In a first step, a mountpoint
   is manually configured by a user or client application through
   administrative action.  Once a mountpoint has been configured, actual
   mounting occurs through an RPC that is defined specifically for that
   purpose.  To unmount, a separate RPC is invoked; mountpoint
   configuration information needs to be explicitly deleted.  Manual
   mounting can also be used to override automatic mounting, for example
   to allow an administrator to set up or remove a mountpoint.

   It should be noted that mountpoint management does not allow users
   to manually "extend" the model, i.e. to simply add a subtree
   underneath some arbitrary data node of a datastore without a
   corresponding mountpoint being defined in the model.  A mountpoint
   definition is a formal part of the model with well-defined
   semantics.  Accordingly, mountpoint management does not allow users
   to dynamically "extend" the data model itself.  It allows users to
   populate the datastore and the mount structure within the confines
   of a model that has been previously defined.

   The structure of the mountpoint management data model is depicted in
   the following figure as a YANG Tree Diagram [RFC8340].

   module: ietf-peer-mount
      +--rw mount-server-mgmt {mount-server-mgmt}?
         +--rw mountpoints
         |  +--rw mountpoint* [mountpoint-id]
         |     +--rw mountpoint-id        string
         |     +--ro mountpoint-origin?   enumeration
         |     +--rw subtree-ref          subtree-ref
         |     +--rw mount-target
         |     |  +--rw (target-address-type)
         |     |     +--:(IP)
         |     |     |  +--rw target-ip?          inet:ip-address
         |     |     +--:(URI)
         |     |     |  +--rw uri?                inet:uri
         |     |     +--:(host-name)
         |     |     |  +--rw hostname?           inet:host
         |     |     +--:(node-ID)
         |     |     |  +--rw node-info-ref?      subtree-ref
         |     |     +--:(other)
         |     |        +--rw opaque-target-ID?   string
         |     +--ro mount-status?        mount-status
         |     +--rw manual-mount?        empty
         |     +--rw retry-timer?         uint16
         |     +--rw number-of-retries?   uint8
         +--rw global-mount-policies
            +--rw manual-mount?        empty
            +--rw retry-timer?         uint16
            +--rw number-of-retries?   uint8

              Figure 4: YANG tree diagram of Peer-Mount module

7.  Datastore mountpoint YANG module

    <CODE BEGINS> file "ietf-peer-mount@2023-10-23.yang"
   module ietf-peer-mount {
     namespace "urn:ietf:params:xml:ns:yang:ietf-peer-mount";
     prefix pmt;

     import ietf-inet-types {
       prefix inet;
     }

     organization
       "IETF NETMOD (NETCONF Data Modeling Language) Working Group";
     contact
       "WG Web:   <http://tools.ietf.org/wg/netmod/>
        WG List:  <mailto:netmod@ietf.org>

        WG Chair: Kent Watsen
                  <mailto:kwatsen@juniper.net>

        WG Chair: Lou Berger
                  <mailto:lberger@labn.net>

        Editor: Alexander Clemm
        <mailto:ludwig@clemm.org>

        Editor: Eric Voit
        <mailto:evoit@cisco.com>

        Editor: Aihua Guo
        <mailto:aihuaguo.ietf@gmail.com>

        Editor: Ignacio Dominguez Martinez-Casanueva
        <mailto:ignacio.dominguezmartinez@telefonica.com>";
     description
       "This module provides a set of YANG extensions and definitions
        that can be used to mount information from remote datastores.";

     revision 2023-10-23 {
       description
         "Initial revision.";
       reference
         "draft-clemm-netmod-peermount-02.txt";
     }

     extension mountpoint {
       argument name;
       description
         "This YANG extension is used to mount data from another
          subtree in place of the node under which this YANG extension
          statement is used.

          This extension takes one argument which specifies the name
          of the mountpoint.

          This extension can occur as a substatement underneath a
          container statement, a list statement, or a case statement.
           As a best practice, it SHOULD occur as a substatement only
           underneath a container statement, but it MAY also occur
           underneath a list or a case statement.

           The extension can take two parameters, target and subtree,
           each defined as its own YANG extension.
           For Peer-Mount, a mountpoint statement MUST contain both a
           target and a subtree substatement for the mountpoint
           definition to be valid.

          The subtree SHOULD be specified in terms of a data node of
          type 'pmt:subtree-ref'. The targeted data node MUST
          represent a container.

          The target system MAY be specified in terms of a data node
          that uses the grouping 'pmt:mount-target'.  However, it
          can be specified also in terms of any other data node that
          contains sufficient information to address the mount target,
          such as an IP address, a host name, or a URI.

          It is possible for the mounted subtree to in turn contain a
          mountpoint.  However, circular mount relationships MUST NOT
          be introduced. For this reason, a mounted subtree MUST NOT
          contain a mountpoint that refers back to the mounting system
          with a mount target that directly or indirectly contains the
          originating mountpoint.";
     }

     extension target {
       argument target-name;
       description
         "This YANG extension is used to perform a Peer-Mount.
          It is used to specify a remote target system from which to
          mount a datastore subtree.  This YANG
          extension takes one argument which specifies the remote
          system. In general, this argument will contain the name of
          a data node that contains the remote system information. It
           is recommended that the referenced data node use the
          mount-target grouping that is defined further below in this
          module.

          This YANG extension can occur only as a substatement below
          a mountpoint statement. It MUST NOT occur as a substatement
          below any other YANG statement.";
     }

     extension subtree {
       argument subtree-path;
       description
         "This YANG extension is used to specify a subtree in a
          datastore that is to be mounted.  This YANG extension takes
          one argument which specifies the path to the root of the
          subtree. The root of the subtree SHOULD represent an
          instance of a YANG container.  However, it MAY represent
          also another data node.

          This YANG extension can occur only as a substatement below
          a mountpoint statement. It MUST NOT occur as a substatement
          below any other YANG statement.";
     }

     feature mount-server-mgmt {
       description
         "Provide additional capabilities to manage remote mount
          points";
     }

     typedef mount-status {
       type enumeration {
         enum "ok" {
           description
             "Mounted";
         }
         enum "no-target" {
           description
             "The argument of the mountpoint does not define a
              target system";
         }
         enum "no-subtree" {
           description
             "The argument of the mountpoint does not define a
               root of a subtree";
         }
         enum "target-unreachable" {
           description
             "The specified target system is currently
              unreachable";
         }
         enum "mount-failure" {
           description
             "Any other mount failure";
         }
         enum "unmounted" {
           description
             "The specified mountpoint has been unmounted as the
              result of a management operation";
         }
       }
       description
         "This type is used to represent the status of a
          mountpoint.";
     }

     typedef subtree-ref {
       type string;
       description
         "This string specifies a path to a datanode. It corresponds
          to the path substatement of a leafref type statement.  Its
          syntax needs to conform to the corresponding subset of the
          XPath abbreviated syntax. Contrary to a leafref type,
          subtree-ref allows to refer to a node in a remote datastore.
          Also, a subtree-ref refers only to a single node, not a list
          of nodes.";
     }

     grouping mount-monitor {
       description
         "This grouping contains data nodes that indicate the
          current status of a mount point.";
       leaf mount-status {
         type mount-status;
         config false;
         description
           "Indicates whether a mountpoint has been successfully
            mounted or whether some kind of fault condition is
            present.";
       }
     }

     grouping mount-target {
       description
         "This grouping contains data nodes that can be used to
          identify a remote system from which to mount a datastore
          subtree.";
       container mount-target {
         description
           "A container is used to keep mount target information
            together.";
         choice target-address-type {
           mandatory true;
           description
             "Allows to identify mount target in different ways,
              i.e. using different types of addresses.";
           case IP {
             leaf target-ip {
               type inet:ip-address;
               description
                 "IP address identifying the mount target.";
             }
           }
           case URI {
             leaf uri {
               type inet:uri;
                description
                  "URI identifying the mount target";
             }
           }
           case host-name {
             leaf hostname {
               type inet:host;
               description
                 "Host name of mount target.";
             }
           }
           case node-ID {
             leaf node-info-ref {
               type subtree-ref;
               description
                 "Node identified by named subtree.";
             }
           }
           case other {
             leaf opaque-target-ID {
               type string;
               description
                 "Catch-all; could be used also for mounting
                  of data nodes that are local.";
             }
           }
         }
       }
     }

     grouping mount-policies {
       description
         "This grouping contains data nodes that allow to configure
          policies associated with mountpoints.";
       leaf manual-mount {
         type empty;
         description
           "When present, a specified mountpoint is not
            automatically mounted when the mount data node is
            created, but needs to mounted via specific RPC
            invocation.";
       }
       leaf retry-timer {
         type uint16;
         units "seconds";
         description
           "When specified, provides the period after which
            mounting will be automatically reattempted in case of a
            mount status of an unreachable target";



Clemm, et al.             Expires 25 April 2024                [Page 25]

Internet-Draft                 Peer-Mount                   October 2023


       }
       leaf number-of-retries {
         type uint8;
         description
           "When specified, provides a limit for the number of
            times for which retries will be automatically
            attempted";
       }
     }

     rpc mount {
       description
         "This RPC allows an application or administrative user to
          perform a mount operation.  If successful, it will result in
          the creation of a new mountpoint.";
       input {
         leaf mountpoint-id {
           type string {
             length "1..32";
           }
           description
             "Identifier for the mountpoint to be created.
              The mountpoint-id needs to be unique;
              if the mountpoint-id of an existing mountpoint is
              chosen, an error is returned.";
         }
         uses mount-target;
         leaf mountpoint {
           type instance-identifier;
           description
             "Identifies the data node under which to mount the
              target subtree.";
         }
       }
       output {
         leaf mount-status {
           type mount-status;
           description
             "Indicates if the mount operation was successful.";
         }
       }
     }
     rpc unmount {
       description
         "This RPC allows an application or administrative user to
          unmount information from a remote datastore.  If successful,
          the corresponding mountpoint will be removed from the
          datastore.";
       input {
         leaf mountpoint-id {
           type string {
             length "1..32";
           }
           description
             "Identifies the mountpoint to be unmounted.";
         }
       }
       output {
         leaf mount-status {
           type mount-status;
           description
             "Indicates if the unmount operation was successful.";
         }
       }
     }
     container mount-server-mgmt {
       if-feature mount-server-mgmt;
       description
         "Contains information associated with managing the
          mountpoints of a datastore.";
       container mountpoints {
         description
           "Keep the mountpoint information consolidated
            in one place.";
         list mountpoint {
           key "mountpoint-id";
           description
             "There can be multiple mountpoints.
              Each mountpoint is represented by its own
              list element.";
           leaf mountpoint-id {
             type string {
               length "1..32";
             }
             description
               "An identifier of the mountpoint.
                RPC operations refer to the mountpoint
                using this identifier.";
           }
           leaf mountpoint-origin {
             type enumeration {
               enum "client" {
                 description
                   "Mountpoint has been supplied and is
                    manually administered by a client";
               }
               enum "auto" {
                  description
                    "Mountpoint is automatically
                     administered by the server";
               }
             }
             config false;
             description
               "This describes how the mountpoint came
                into being.";
           }
           leaf subtree-ref {
             type subtree-ref;
             mandatory true;
             description
               "Identifies the root of the subtree in the
                target system that is to be mounted.";
           }
           uses mount-target;
           uses mount-monitor;
           uses mount-policies;
         }
       }
       container global-mount-policies {
         description
           "Provides mount policies applicable for all mountpoints,
            unless overridden for a specific mountpoint.";
         uses mount-policies;
       }
     }
   }
   <CODE ENDS>
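
   The following non-normative sketch illustrates how the extensions
   defined in this module might be applied by another module.  All
   module, prefix, and data node names are invented for the example,
   including the assumed import of this document's module under the
   prefix "pmt"; the sketch also assumes that the mountpoint
   extension takes a name as its argument.

     module example-node-inventory {
       yang-version 1.1;
       namespace "urn:example:node-inventory";
       prefix exinv;

       // Hypothetical import of the module defined above.
       import example-peermount {
         prefix pmt;
       }

       list node {
         key "node-id";
         description
           "One list entry per network node in the inventory.";
         leaf node-id {
           type string;
         }
         // Identifies the remote system from which to mount,
         // using the mount-target grouping defined above.
         uses pmt:mount-target;
         // Mountpoint under which the remote subtree appears as
         // if it were part of the local data hierarchy.
         pmt:mountpoint "node-system" {
           pmt:target "../mount-target";
           pmt:subtree "/system-state";
         }
       }
     }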

8.  Other considerations

8.1.  Authorization

   Access to mounted information is subject to authorization rules.  To
   the mounted system, a mounting client will in general appear like any
   other client.  Authorization privileges for remote mounting clients
   need to be specified through NACM (NETCONF Access Control Model)
   [RFC8341].
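
   As a non-normative illustration, a mount server could use NACM
   rules along the following lines to restrict a group of mounting
   clients to read-only access to the module whose data is exported
   for mounting.  The group, user, and module names are invented for
   the example:

     <nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
       <enable-nacm>true</enable-nacm>
       <groups>
         <group>
           <name>peer-mount-clients</name>
           <user-name>mount-client-1</user-name>
         </group>
       </groups>
       <rule-list>
         <name>peer-mount</name>
         <group>peer-mount-clients</group>
         <rule>
           <name>read-interfaces</name>
           <module-name>ietf-interfaces</module-name>
           <access-operations>read</access-operations>
           <action>permit</action>
         </rule>
       </rule-list>
     </nacm>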

8.2.  Datastore qualification

   It is conceivable to differentiate between different datastores on
   the remote server, that is, to designate the name of the actual
   datastore to mount, e.g. "running" or "startup".  However, for the
   purposes of this specification, we assume that the datastore to be
   mounted is generally implied.  Mounted information is treated as
   analogous to operational data; in general, this means the running
   or "effective" datastore is the target.  That said, the information
   about which targets to mount does constitute configuration and can
   hence be part of a startup or candidate datastore.
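
   For instance, a mount client that uses the NMDA extensions of
   [RFC8526] might retrieve mounted information from the remote
   system's operational datastore along the following lines
   (non-normative sketch):

     <rpc message-id="100"
          xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
       <get-data
           xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-nmda"
           xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">
         <datastore>ds:operational</datastore>
       </get-data>
     </rpc>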

8.3.  Mount cascades

   It is possible for the mounted subtree to in turn contain a
   mountpoint.  However, circular mount relationships MUST NOT be
   introduced.  For this reason, a mounted subtree MUST NOT contain a
   mountpoint that refers back to the mounting system with a mount
   target that directly or indirectly contains the originating
   mountpoint.  As part of a mount operation, the mount points of the
   mounted system need to be checked accordingly.

8.4.  Mountpoint status

   It is possible that a mountpoint is broken.  For example, a remote
   system could be unreachable for many reasons, such as
   misconfiguration of the target system, a communications failure,
   or administrative shutdown.  When a mount client experiences such
   an issue, a retrieval operation will simply return the empty
   mountpoint (i.e., the data node representing the mountpoint
   without the mounted subtree underneath).  The mount status can be
   retrieved separately if needed.

8.5.  Caching

   Under certain circumstances, it can be useful to maintain a cache
   of remote information.  Instead of accessing the remote system,
   requests are served from a copy that is maintained locally.  This
   is particularly advantageous in cases where data is slow-changing,
   i.e. when there are many more "read" operations than changes to
   the underlying data nodes, and in cases where a significant delay
   would be incurred when accessing the remote system, which might be
   prohibitive for certain applications.  Examples are applications
   that involve real-time control loops requiring response times that
   are measured in milliseconds.  However, as data nodes that are
   mounted from an authoritative datastore represent the "golden
   copy", it is important that any modifications are reflected as
   soon as they are made.

   It is a local implementation decision of mount clients whether to
   cache information once it has been fetched.  However, in order to
   support more powerful caching schemes, it becomes necessary for
   the mount server to "push" information proactively.  For this
   purpose, it is useful for the mount client to subscribe to updates
   of the mounted information at the mount server.  YANG-Push
   [RFC8641] can be used for this purpose, creating for each
   mountpoint a subscription at the remote system for the mounted
   data and updating the local cache as updates are received.
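
   As a non-normative illustration, a mount client could establish an
   on-change YANG-Push subscription for a mounted subtree along the
   following lines; the XPath filter and its namespace are invented
   for the example:

     <rpc message-id="101"
          xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
       <establish-subscription
           xmlns=
             "urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications"
           xmlns:yp="urn:ietf:params:xml:ns:yang:ietf-yang-push">
         <yp:datastore
             xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">
           ds:operational
         </yp:datastore>
         <yp:datastore-xpath-filter xmlns:ex="urn:example:system">
           /ex:system-state
         </yp:datastore-xpath-filter>
         <yp:on-change>
           <yp:dampening-period>100</yp:dampening-period>
         </yp:on-change>
       </establish-subscription>
     </rpc>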

8.6.  Filtering

   It is conceivable to add a mechanism that limits the data in a
   mounted subtree that would be returned as part of retrieval
   requests.  This could be accomplished by specifying a filter
   expression as part of the mountpoint definition (for example, via
   an additional substatement) or as part of the mountpoint
   instantiation (for example, for manual mount operations, via a
   separate RPC parameter).  However, doing so would add significant
   complexity, requiring those filters to be specified as well as
   applied as part of proxy operations on top of any other filters.
   Users always have the option to specify their own subtree filter
   when requesting data retrieval; hence, the only potential benefit
   of such a mechanism would lie in the simplification of caching
   implementations, limiting the amount of data to include in the
   cache.  In the interest of keeping Peer-Mount simple, an
   additional filtering mechanism beyond what is already supported by
   standard NETCONF and RESTCONF operations is therefore not
   included.

8.7.  Implementation considerations beyond caching

   Implementation specifics are outside the scope of this specification.
   That said, the following considerations apply:

   Systems that wish to mount information from remote datastores need to
   implement a mount client.  The mount client communicates with a
   remote system to access the remote datastore.  To do so, there are
   several options:

   *  The mount client acts as a NETCONF client to a remote system.  To
      the remote system, the mount client constitutes essentially a
      client application like any other.

   *  The mount client communicates with a remote mount server through a
      separate protocol or API.

   It is the responsibility of the mount client to manage the
   association with the target system, e.g. to validate that it is
   still reachable by maintaining a permanent association or by
   performing reachability checks in the case of a connectionless
   transport.

   It is the responsibility of the mount client to manage the
   mountpoints.  This means that the mount client needs to populate
   the mountpoint monitoring information (e.g. keep mount-status up
   to date) and, in the case of automatic mounting, determine when to
   add and remove mountpoint configuration.  In the case of automatic
   mounting, the mount client also interacts with the mountpoint
   discovery and bootstrap process.

   The mount client also needs to participate in servicing datastore
   operations involving mounted information.  An operation request
   involving a mountpoint is relayed by the mounting system's
   infrastructure to the mount client.  For example, a request to
   retrieve information from a datastore leads to an invocation of an
   internal mount client API when a mount point is reached.  The
   mount client then relays a corresponding operation to the remote
   datastore.  It subsequently relays the result along with any
   responses back to the invoking infrastructure, which then merges
   the result as needed (e.g. merging a retrieved subtree with the
   rest of the retrieved information).  Relaying the result may
   involve the need to transpose error response codes in certain
   corner cases, e.g. when mounted information could not be reached
   due to loss of connectivity with the remote server, or when a
   configuration request failed due to a validation error.
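
   As a non-normative illustration, reusing the invented names from
   the sketch following the module definition, a retrieval against
   the mounting system might return the mounted subtree inline, as if
   it were local:

     <rpc-reply message-id="102"
          xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
       <data>
         <node xmlns="urn:example:node-inventory">
           <node-id>node-a</node-id>
           <!-- subtree mounted from the remote system -->
           <system-state xmlns="urn:example:system">
             <platform>example-os 1.0</platform>
           </system-state>
         </node>
       </data>
     </rpc-reply>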

   It is possible for a mount client to contain several mountpoints that
   each mount a different subtree from the same remote system.
   Implementations should consider maintaining a single management
   association (e.g., a single NETCONF session) per target system, as
   opposed to maintaining a separate association for each mountpoint.

8.8.  Modeling best practices

   There is a certain amount of overhead associated with each mount
   point.  The mount point needs to be managed and state maintained.
   Data subscriptions need to be maintained.  Requests including mounted
   subtrees need to be decomposed and responses from multiple systems
   combined.

   For those reasons, as a general best practice, models that make use
   of mount points SHOULD be defined in a way that minimizes the number
   of mountpoints required.  Finely granular mounts, in which multiple
   mountpoints are maintained with the same remote system, each
   containing only very small data subtrees, SHOULD be avoided.  For
   example, lists SHOULD only contain mountpoints when individual list
   elements are associated with different remote systems.  To mount data
   from lists in remote datastores, a container node that contains all
   list elements SHOULD be mounted instead of mounting each list element
   individually.  Likewise, instead of having mount points refer to
   nodes contained underneath choices, a mountpoint SHOULD refer to a
   container that contains the choice.
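
   As a non-normative sketch, again assuming this document's module
   is imported under the prefix "pmt" and that the remote system
   implements the interface model of [RFC7223] (imported under the
   prefix "if"), a single mountpoint for the interfaces container is
   preferable to one mountpoint per interface list entry:

     uses pmt:mount-target;
     container remote-interfaces {
       description
         "Makes the remote system's entire interfaces container
          available locally through a single mountpoint.";
       pmt:mountpoint "interfaces" {
         pmt:target "../mount-target";
         pmt:subtree "/if:interfaces";
       }
     }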

9.  IANA Considerations

   TBD

10.  Security Considerations

   TBD

11.  Acknowledgements

   TBD

12.  References

12.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC3986]  Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
              Resource Identifier (URI): Generic Syntax", STD 66,
              RFC 3986, DOI 10.17487/RFC3986, January 2005,
              <https://www.rfc-editor.org/info/rfc3986>.

   [RFC6241]  Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed.,
              and A. Bierman, Ed., "Network Configuration Protocol
              (NETCONF)", RFC 6241, DOI 10.17487/RFC6241, June 2011,
              <https://www.rfc-editor.org/info/rfc6241>.

   [RFC7950]  Bjorklund, M., Ed., "The YANG 1.1 Data Modeling Language",
              RFC 7950, DOI 10.17487/RFC7950, August 2016,
              <https://www.rfc-editor.org/info/rfc7950>.

   [RFC8040]  Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF
              Protocol", RFC 8040, DOI 10.17487/RFC8040, January 2017,
              <https://www.rfc-editor.org/info/rfc8040>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

   [RFC8341]  Bierman, A. and M. Bjorklund, "Network Configuration
              Access Control Model", STD 91, RFC 8341,
              DOI 10.17487/RFC8341, March 2018,
              <https://www.rfc-editor.org/info/rfc8341>.

   [RFC8342]  Bjorklund, M., Schoenwaelder, J., Shafer, P., Watsen, K.,
              and R. Wilton, "Network Management Datastore Architecture
              (NMDA)", RFC 8342, DOI 10.17487/RFC8342, March 2018,
              <https://www.rfc-editor.org/info/rfc8342>.

   [RFC8526]  Bjorklund, M., Schoenwaelder, J., Shafer, P., Watsen, K.,
              and R. Wilton, "NETCONF Extensions to Support the Network
              Management Datastore Architecture", RFC 8526,
              DOI 10.17487/RFC8526, March 2019,
              <https://www.rfc-editor.org/info/rfc8526>.

   [RFC8527]  Bjorklund, M., Schoenwaelder, J., Shafer, P., Watsen, K.,
              and R. Wilton, "RESTCONF Extensions to Support the Network
              Management Datastore Architecture", RFC 8527,
              DOI 10.17487/RFC8527, March 2019,
              <https://www.rfc-editor.org/info/rfc8527>.

12.2.  Informative References

   [I-D.clemm-netmod-mount]
              Clemm, A., Voit, E., and J. Medved, "Mounting YANG-Defined
              Information from Remote Datastores", Work in Progress,
              Internet-Draft, draft-clemm-netmod-mount-06, 29 March
              2017, <https://datatracker.ietf.org/doc/html/draft-clemm-
              netmod-mount-06>.

   [I-D.ietf-opsawg-collected-data-manifest]
              Claise, B., Quilbeuf, J., Lopez, D., Dominguez, I., and T.
              Graf, "A Data Manifest for Contextualized Telemetry Data",
              Work in Progress, Internet-Draft, draft-ietf-opsawg-
              collected-data-manifest-01, 27 April 2023,
               <https://datatracker.ietf.org/doc/html/draft-ietf-
               opsawg-collected-data-manifest-01>.

   [I-D.irtf-nmrg-network-digital-twin-arch]
              Zhou, C., Yang, H., Duan, X., Lopez, D., Pastor, A., Wu,
              Q., Boucadair, M., and C. Jacquenet, "Digital Twin
              Network: Concepts and Reference Architecture", Work in
              Progress, Internet-Draft, draft-irtf-nmrg-network-digital-
              twin-arch-03, 27 April 2023,
              <https://datatracker.ietf.org/doc/html/draft-irtf-nmrg-
              network-digital-twin-arch-03>.

   [I-D.voit-netmod-yang-mount-requirements]
              Voit, E., Clemm, A., and S. Mertens, "Requirements for
              mounting of local and remote YANG subtrees", Work in
              Progress, Internet-Draft, draft-voit-netmod-yang-mount-
              requirements-00, 18 March 2016,
              <https://datatracker.ietf.org/doc/html/draft-voit-netmod-
              yang-mount-requirements-00>.

   [I-D.wzwb-opsawg-network-inventory-management]
              Wu, B., Zhou, C., Wu, Q., and M. Boucadair, "An Inventory
              Management Model for Enterprise Networks", Work in
              Progress, Internet-Draft, draft-wzwb-opsawg-network-
              inventory-management-04, 19 October 2023,
              <https://datatracker.ietf.org/doc/html/draft-wzwb-opsawg-
              network-inventory-management-04>.

   [RFC2131]  Droms, R., "Dynamic Host Configuration Protocol",
              RFC 2131, DOI 10.17487/RFC2131, March 1997,
              <https://www.rfc-editor.org/info/rfc2131>.

   [RFC2866]  Rigney, C., "RADIUS Accounting", RFC 2866,
              DOI 10.17487/RFC2866, June 2000,
              <https://www.rfc-editor.org/info/rfc2866>.

   [RFC5798]  Nadas, S., Ed., "Virtual Router Redundancy Protocol (VRRP)
              Version 3 for IPv4 and IPv6", RFC 5798,
              DOI 10.17487/RFC5798, March 2010,
              <https://www.rfc-editor.org/info/rfc5798>.

   [RFC7223]  Bjorklund, M., "A YANG Data Model for Interface
              Management", RFC 7223, DOI 10.17487/RFC7223, May 2014,
              <https://www.rfc-editor.org/info/rfc7223>.

   [RFC8340]  Bjorklund, M. and L. Berger, Ed., "YANG Tree Diagrams",
              BCP 215, RFC 8340, DOI 10.17487/RFC8340, March 2018,
              <https://www.rfc-editor.org/info/rfc8340>.

   [RFC8345]  Clemm, A., Medved, J., Varga, R., Bahadur, N.,
              Ananthakrishnan, H., and X. Liu, "A YANG Data Model for
              Network Topologies", RFC 8345, DOI 10.17487/RFC8345, March
              2018, <https://www.rfc-editor.org/info/rfc8345>.


   [RFC8528]  Bjorklund, M. and L. Lhotka, "YANG Schema Mount",
              RFC 8528, DOI 10.17487/RFC8528, March 2019,
              <https://www.rfc-editor.org/info/rfc8528>.

   [RFC8639]  Voit, E., Clemm, A., Gonzalez Prieto, A., Nilsen-Nygaard,
              E., and A. Tripathy, "Subscription to YANG Notifications",
              RFC 8639, DOI 10.17487/RFC8639, September 2019,
              <https://www.rfc-editor.org/info/rfc8639>.

   [RFC8641]  Clemm, A. and E. Voit, "Subscription to YANG Notifications
              for Datastore Updates", RFC 8641, DOI 10.17487/RFC8641,
              September 2019, <https://www.rfc-editor.org/info/rfc8641>.

Appendix A.  Open issues

   The following is a list of technical items for further discussion.

   *  Get vs get-config.  Should both get and get-config be supported
      as data retrieval operations, or get only?  The reason for the
      distinction between get and get-config is generally ease of
      implementation and efficiency of the operation: simply
      returning contents from a configuration file has higher
      performance than building that content from in-memory
      structures.  Perhaps only "get" should be supported, with
      "get-config" ignoring any mounted information.

   *  Target and Subtree YANG extension.  Should target and/or
      subtree be mandatory statements?  When RPCs to support manual
      mounting are supported, it is conceivable to allow for the
      manual mounting of any subtree from any remote system (as
      provided per a parameter of the RPC request), not just a
      specific subtree.  In that case, any information mounted would
      effectively be treated analogously to anyxml - it would
      constitute just a blob - and both "subtree" and "target"
      statements could be optional.  On the other hand, including
      subtree as a statement will facilitate validation and let
      client applications know what information to expect.  Including
      target as a statement will facilitate system operation without
      needing to rely on manual mounting/unmounting.  Removing target
      will facilitate implementation, as the system does not need to
      worry about automated mounting, with the system administrator
      in that case taking on the responsibility of applying the mount
      operations against the proper target.

   *  RPC definitions and manual mount operations.  Are manual mount
      operations really required, or should they be removed?  Also,
      check the need for a mountpoint ID and its use in mount/unmount
      operations.

   *  Manual mount operations.  If supported, a decision is needed on
      whether to make manual mounting a one-step or two-step
      procedure.  If done in two steps, a mountpoint would first be
      created (step 1), then the mount operation applied (step 2).
      An unpopulated mountpoint would in effect be a special type of
      container whose contents are a remote subtree (populated by the
      second operation).  This means an empty mountpoint would be
      considered part of the configuration.  Alternatively, this
      could be done in one step: a mount operation instantiates the
      mountpoint and links it to the remote system at the same time.

   *  System-provided mountpoints.  Clarify the behavior and
      semantics of mountpoints that are automatically maintained by
      the system vs mountpoints managed via mount operations that are
      performed on request.

   *  Mount point location.  Would it simplify matters to allow
      mountpoints only below containers?  This might facilitate the
      way in which remote systems are referenced, e.g. by not needing
      to contain an index/key to a list.  However, it would make
      paths longer than necessary and require the introduction of
      more container objects (e.g. below list elements) than would
      otherwise be required.  Containers can of course themselves
      always also be transitively contained underneath other data
      nodes such as lists.

   *  Mount status.  Define mount status and associated Finite State
      Machine.

   *  NMDA.  Doublecheck for NMDA compliance [RFC8342][RFC8526].

   *  Example.  Provide examples for:

      -  Definition of a data model with mountpoints

      -  Mountpoint instance creation

      -  Data retrieval involving mounted data

   *  Mountpoint status.  Decide whether to also define a piece of
      metadata to indicate mountpoint status that can be returned with
      the data node representing the mountpoint.  It may make sense to
      do this to be able to distinguish the case when there is no data
      in the remote subtree from the case when there is an issue with
      the mountpoint status.

   *  Filters on mounted data.  Determine whether it would make sense to
      add a filter capability to reduce the amount of data retrieved as
      part of subtrees.

   *  Local mountpoints.  The Peer-Mount mechanism could also become
      the means for combining data from multiple YANG modules
      implemented in the same datastore.  This mechanism enables
      extending YANG modules with subtrees from other YANG modules
      without defining a new augmenting YANG module, in cases where
      importing groupings or YANG Schema Mount [RFC8528] does not
      fit.  For example, the Data Manifest
      [I-D.ietf-opsawg-collected-data-manifest] needs to reuse
      fragments of the YANG Library module, but since the available
      groupings do not fit the needs of this use case, the Data
      Manifest copied fragments from YANG Library.

Authors' Addresses

   Alexander Clemm
   Futurewei
   Email: ludwig@clemm.org


   Eric Voit
   Cisco Systems
   Email: evoit@cisco.com


   Aihua Guo
   Futurewei
   Email: aihuaguo.ietf@gmail.com


   Ignacio Dominguez
   Telefonica I+D
   Ronda de la Comunicacion, S/N
   Madrid 28050
   Spain
   Email: ignacio.dominguezmartinez@telefonica.com
