Security Automation and Continuous Monitoring WG           D. Waltermire
Internet-Draft                                                      NIST
Intended status: Informational                             D. Harrington
Expires: September 25, 2015                           Effective Software
                                                          March 24, 2015
Endpoint Security Posture Assessment - Enterprise Use Cases
draft-ietf-sacm-use-cases-09
This memo documents a sampling of use cases for securely aggregating configuration and operational data and evaluating that data to determine an organization's security posture. From these operational use cases, we can derive common functional capabilities and requirements to guide development of vendor-neutral, interoperable standards for aggregating and evaluating data relevant to security posture.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on September 25, 2015.
Copyright (c) 2015 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
This document describes the core set of use cases for endpoint posture assessment for enterprises. It provides a discussion of these use cases and associated building block capabilities. The described use cases support:
Additionally, this document describes a set of usage scenarios that provide examples for using the use cases and associated building blocks to address a variety of operational functions.
These operational use cases and related usage scenarios cross many IT security domains. The use cases enable the derivation of common:
Together these ideas will be used to guide development of vendor-neutral, interoperable standards for collecting, aggregating, and evaluating data relevant to security posture.
Using this standardized data, tools can analyze the state of endpoints, user activities and behavior, and evaluate the security posture of an organization. Common expression of information should enable interoperability between tools (whether customized, commercial, or freely available), and the ability to automate portions of security processes to gain efficiency, react to new threats in a timely manner, and free up security personnel to work on more advanced problems.
The goal is to enable organizations to make informed decisions that support organizational objectives, to enforce policies for hardening systems, to prevent network misuse, to quantify business risk, and to collaborate with partners to identify and mitigate threats.
It is expected that use cases for enterprises and for service providers will largely overlap. When considering this overlap, there are additional complications for service providers, especially in handling information that crosses administrative domains.
The output of endpoint posture assessment is expected to feed into additional processes, such as policy-based enforcement of acceptable state, verification and monitoring of security controls, and compliance with regulatory requirements.
Endpoint posture assessment involves orchestrating and performing data collection and evaluating the posture of a given endpoint. Typically, endpoint posture information is gathered and then published to appropriate data repositories to make collected information available for further analysis supporting organizational security processes.
Endpoint posture assessment typically includes:
As part of these activities, it is often necessary to identify and acquire any supporting security automation data that is needed to drive and feed data collection and evaluation processes.
The following is a typical workflow scenario for assessing endpoint posture:
The following subsections detail specific use cases for assessment planning, data collection, analysis, and related operations pertaining to the publication and use of supporting data. Each use case is defined by a short summary containing a simple problem statement, followed by a discussion of related concepts, and a listing of associated building blocks which represent the capabilities needed to support the use case. These use cases and building blocks identify separate units of functionality that may be supported by different components of an architectural model.
This use case describes the need for security automation data to be defined and published to one or more data stores, as well as queried and retrieved from these data stores for the explicit use of posture collection and evaluation.
Security automation data is a general concept that refers to any data expression that may be generated and/or used as part of the process of collecting and evaluating endpoint posture. Different types of security automation data will generally fall into one of three categories:
The information model for security automation data must support a variety of different data types as described above, along with the associated metadata needed to support publication, query, and retrieval operations. It is expected that multiple data models will be used to express specific data types, requiring specialized or extensible security automation data repositories. The different temporal characteristics, access patterns, and access control dimensions of each data type may also require support for different protocols and data models, further motivating the need for specialized data repositories. See [RFC3444] for a description and discussion of the distinctions between an information model and a data model. It is likely that additional kinds of data will be identified through the process of defining requirements and an architectural model. Implementations supporting this building block will need to be extensible to accommodate the addition of new types of data, whether proprietary or (preferably) expressed in a standard format.
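As a non-normative illustration, the following Python sketch shows one way a security automation data store might expose the publication, query, and retrieval operations discussed above. All class, field, and parameter names here are hypothetical; SACM does not define this interface.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class GuidanceRecord:
        # Hypothetical record pairing guidance with the metadata
        # needed to support publication, query, and retrieval.
        record_id: str
        data_type: str    # e.g., "checklist" or "vulnerability-definition"
        publisher: str
        created: datetime
        modified: datetime
        payload: bytes    # content expressed in a specific data model

    class SecurityAutomationDataStore:
        # Hypothetical repository interface; a real repository would
        # also enforce the requester's authorized level of access.
        def __init__(self):
            self._records = {}

        def publish(self, record):
            self._records[record.record_id] = record

        def query(self, data_type=None, publisher=None,
                  modified_since=None):
            # Return identifiers of records matching the criteria.
            return [r.record_id for r in self._records.values()
                    if (data_type is None or r.data_type == data_type)
                    and (publisher is None or r.publisher == publisher)
                    and (modified_since is None
                         or r.modified >= modified_since)]

        def retrieve(self, record_id):
            return self._records[record_id]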
The building blocks of this use case are:
These building blocks are used to enable acquisition of various instances of security automation data based on specific data models that are used to drive assessment planning (see section 2.1.2), posture attribute value collection (see section 2.1.3), and posture evaluation (see section 2.1.4).
This use case describes the process of discovering endpoints, understanding their composition, identifying the desired state to assess against, and calculating what posture attributes to collect to enable evaluation. This process may be a set of manual, automated, or hybrid steps that are performed for each assessment.
The building blocks of this use case are:
At this point the set of posture attribute values to use for evaluation is known, and they can be collected if necessary (see section 2.1.3).
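A non-normative Python sketch of this planning step follows; it assumes a hypothetical checklist structure and posture cache, and simply computes which posture attributes still need to be collected.

    class PostureCache:
        # Hypothetical cache of previously collected posture values.
        def __init__(self, current=frozenset()):
            self._current = set(current)  # (endpoint, attribute) pairs

        def is_current(self, endpoint, attribute):
            return (endpoint, attribute) in self._current

    def plan_collection(endpoint, applicable_checklists, posture_cache):
        # Union of the attributes referenced by each applicable
        # checklist, minus values already cached and still current.
        needed = set()
        for checklist in applicable_checklists:
            for rule in checklist["rules"]:
                needed.update(rule["attributes"])
        return {a for a in needed
                if not posture_cache.is_current(endpoint, a)}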
This use case describes the process of collecting a set of posture attribute values related to one or more endpoints. This use case can be initiated by a variety of triggers including:
The building blocks of this use case are:
Once the posture attribute values are collected, they may be persisted for later use or they may be immediately used for posture evaluation.
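The following non-normative Python sketch illustrates this collection pattern; the read_attribute and data_store parameters are hypothetical placeholders for whatever management protocol and repository an implementation uses.

    import time

    def collect(endpoint, attributes, read_attribute, data_store=None):
        # Hypothetical collector: read_attribute is whatever mechanism
        # gathers a single posture attribute value from (or about) the
        # endpoint, e.g., via a network or systems management protocol.
        values = {name: read_attribute(endpoint, name)
                  for name in attributes}
        snapshot = {"endpoint": endpoint,
                    "collected_at": time.time(),
                    "values": values}
        if data_store is not None:
            data_store.append(snapshot)  # persist for later use
        return snapshot                  # or evaluate immediately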
This use case represents the action of analyzing collected posture attribute values as part of an assessment. The primary focus of this use case is to support evaluation of actual endpoint state against the expected state selected for the assessment.
This use case can be initiated by a variety of triggers including:
The building blocks of this use case are:
While the primary focus of this use case is enabling the comparison of expected state with actual state, the same building blocks can support other analysis techniques applied to collected posture attribute data (e.g., trending, historical analysis).
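As a non-normative illustration, the Python sketch below compares collected (actual) posture attribute values against the expected state; the flat attribute/value structure is an assumption made for brevity.

    def evaluate(actual, expected):
        # Hypothetical evaluation step: compare collected (actual)
        # posture attribute values against the expected state from
        # the selected guidance; returns per-attribute results.
        results = {}
        for attribute, expected_value in expected.items():
            observed = actual.get(attribute)
            results[attribute] = {
                "expected": expected_value,
                "observed": observed,
                "compliant": observed == expected_value,
            }
        return results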
Completion of this process represents a complete assessment cycle as defined in Section 2.
In this section, we describe a number of usage scenarios that utilize aspects of endpoint posture assessment. These are examples of common problems that can be solved with the building blocks defined above.
A vendor manufactures a number of specialized endpoint devices. They also develop and maintain an operating system for these devices that enables end-user organizations to configure a number of security and operational settings. As part of their customer support activities, they publish a number of secure configuration guides that provide minimum security guidelines for configuring their devices.
Each guide they produce applies to a specific model of device and version of the operating system and provides a number of specialized configurations depending on the device's intended function and what add-on hardware modules and software licenses are installed on the device. To enable their customers to evaluate the security posture of their devices and ensure that all appropriate minimal security settings are enabled, they publish automatable configuration checklists using a popular data format that defines which settings to collect using a network management protocol and the appropriate values for each setting. They publish these checklists to a public security automation data store that customers can query to retrieve the applicable checklist(s) for their deployed specialized endpoint devices.
Automatable configuration checklists could also come from sources other than a device vendor, such as industry groups or regulatory authorities, or enterprises could develop their own checklists.
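For illustration only, the following Python sketch models such a checklist as a simple data structure; the field names are invented here and do not correspond to any specific checklist data format.

    # Hypothetical checklist entry; all fields are illustrative only.
    checklist = {
        "applies_to": {"model": "XR-200", "os_version": "5.1"},
        "rules": [
            {
                "id": "rule-001",
                "title": "Disable the telnet service",
                "attributes": ["service.telnet.enabled"],
                "expected": {"service.telnet.enabled": False},
            },
            {
                "id": "rule-002",
                "title": "Require SSH protocol version 2",
                "attributes": ["service.ssh.protocol_version"],
                "expected": {"service.ssh.protocol_version": 2},
            },
        ],
    }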
This usage scenario employs the following building blocks defined in Section 2.1.1 above:
While each building block can be used in a manual fashion by a human operator, it is also likely that these capabilities will be implemented together in some form of a guidance editor or generator application.
A financial services company operates a heterogeneous IT environment. In support of their risk management program, they utilize vendor-provided automatable security configuration checklists for each operating system and application used within their IT environment. Multiple checklists are used from different vendors to ensure adequate coverage of all IT assets.
To identify what checklists are needed, they use automation to gather an inventory of the software versions utilized by all IT assets in the enterprise. This data gathering will involve querying existing data stores of previously collected endpoint software inventory posture data and actively collecting data from reachable endpoints as needed utilizing network and systems management protocols. Previously collected data may be provided by periodic data collection, network connection-driven data collection, or ongoing event-driven monitoring of endpoint posture changes.
Appropriate checklists are queried, located, and downloaded from the relevant guidance data stores. The specific data stores queried and the specifics of each query may be driven by data including:
Checklists may be sourced from guidance data stores maintained by an application or OS vendor, an industry group, a regulatory authority, or directly by the enterprise.
The retrieved guidance is cached locally to reduce the need to retrieve the data multiple times.
Driven by the setting data provided in the checklist, a combination of existing configuration data stores and data collection methods are used to gather the appropriate posture attributes from (or pertaining to) each endpoint. Specific posture attribute values are gathered based on the defined enterprise function and software inventory of each endpoint. The collection mechanisms used to collect software inventory posture will be used again for this purpose. Once the data is gathered, the actual state is evaluated against the expected state criteria defined in each applicable checklist.
A checklist can be assessed as a whole, or a specific subset of the checklist can be assessed, resulting in partial data collection and evaluation.
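The following non-normative Python sketch illustrates whole-versus-subset assessment, reusing the hypothetical checklist structure from the sketch in the previous usage scenario; collect_fn and evaluate_fn stand in for the collection and evaluation capabilities.

    def assess(checklist, collect_fn, evaluate_fn, rule_ids=None):
        # Hypothetical driver: assess the whole checklist, or only
        # the subset identified by rule_ids, resulting in partial
        # data collection and evaluation.
        rules = [r for r in checklist["rules"]
                 if rule_ids is None or r["id"] in rule_ids]
        attributes = set().union(*(r["attributes"] for r in rules))
        actual = collect_fn(attributes)  # {attribute: observed value}
        return {r["id"]: evaluate_fn(actual, r["expected"])
                for r in rules}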
The results of checklist evaluation are provided to appropriate operators and applications to drive additional business logic. Specific applications for checklist evaluation results are out of scope for current SACM efforts. Irrespective of specific applications, the availability, timeliness, and liveness of results are often of general concern. Network latency and available bandwidth often create operational constraints that require trade-offs among these concerns; such constraints need to be considered.
Uses of checklists and associated evaluation results may include, but are not limited to:
This usage scenario employs the following building blocks defined in Section 2.1.1 above:
Example Corporation has established secure configuration baselines for each different type of endpoint within their enterprise, including network infrastructure, mobile, client, and server computing platforms. These baselines define an approved list of hardware, software (i.e., operating system, applications, and patches), and associated required configurations. When an endpoint connects to the network, the appropriate baseline configuration is communicated to the endpoint based on its location in the network, the expected function of the device, and other asset management data. The endpoint is checked for compliance with the baseline, and any deviations are indicated to the device's operators. Once the baseline has been established, the endpoint is monitored for any change events pertaining to the baseline on an ongoing basis. When a change occurs to posture defined in the baseline, updated posture information is exchanged, allowing operators to be notified and/or automated action to be taken.
Like the Automated Checklist Verification usage scenario (see section 2.2.2), this usage scenario supports assessment based on automatable checklists. It differs from that scenario by monitoring for specific endpoint posture changes on an ongoing basis. When the endpoint detects a posture change, an alert is generated identifying the specific changes in posture, allowing assessment of only the delta rather than the full assessment of the previous scenario. This usage scenario employs the same building blocks as Automated Checklist Verification (see section 2.2.2). It differs slightly in how it uses the following building blocks:
This usage scenario highlights the need to query a data store to prepare a compliance report for a specific endpoint and also the need for a change in endpoint state to trigger Collection and Evaluation.
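A non-normative Python sketch of such change-triggered delta assessment follows; the event structure, baseline representation, and callback names are hypothetical.

    def on_posture_change(event, baseline, evaluate_fn, notify_fn):
        # Hypothetical handler for an endpoint posture-change event:
        # evaluate only the changed attributes (the delta) against
        # the baseline rather than re-assessing the full checklist.
        changed = event["changed_values"]   # {attribute: new value}
        expected = {a: baseline[a] for a in changed if a in baseline}
        results = evaluate_fn(changed, expected)
        deviations = {a: r for a, r in results.items()
                      if not r["compliant"]}
        if deviations:
            notify_fn(event["endpoint"], deviations)
        return deviations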
Freed from the drudgery of manual endpoint compliance monitoring, one of the security administrators at Example Corporation notices (not using SACM standards) that five endpoints have been uploading lots of data to a suspicious server on the Internet. The administrator queries data stores for specific endpoint posture to see what software is installed on those endpoints and finds that they all have a particular program installed. She then queries the appropriate data stores to see which other endpoints have that program installed. All these endpoints are monitored carefully (not using SACM standards), which allows the administrator to detect that the other endpoints are also infected.
This is just one example of the useful analysis that a skilled analyst can do using data stores of endpoint posture.
This usage scenario employs the following building blocks defined in Section 2.1.1 above:
This usage scenario highlights the need to query a repository for attributes to see which attributes certain endpoints have in common.
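For illustration, the following non-normative Python sketch shows such a query over stored posture records; the record layout and attribute name are hypothetical.

    def endpoints_with_software(posture_records, program):
        # Hypothetical query: given stored endpoint posture records,
        # find every endpoint reporting a particular installed program.
        return {rec["endpoint"] for rec in posture_records
                if program in rec["values"].get("software.installed",
                                                [])}

    # Example: find every endpoint sharing the suspicious program.
    records = [
        {"endpoint": "host-1",
         "values": {"software.installed": ["prog-x"]}},
        {"endpoint": "host-2",
         "values": {"software.installed": ["prog-y"]}},
    ]
    suspects = endpoints_with_software(records, "prog-x")  # {"host-1"}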
A university team receives a grant to do research at a government facility in the Arctic. The only network communications will be via an intermittent, low-speed, high-latency, high-cost satellite link. During their extended expedition, they will need to show continued compliance with the security policies of the university, the government, and the provider of the satellite network, as well as keep current on vulnerability testing. Interactive assessments are therefore not reliable, and since the researchers have very limited funding, they need to minimize how much money they spend on network data.
Prior to departure they register all equipment with an asset management system owned by the university, which will also initiate and track assessments.
On a periodic basis -- either after a maximum time delta or when the security automation data store has received a threshold level of new vulnerability definitions -- the university uses the information in the asset management system to put together a collection request for all of the deployed assets that encompasses the minimal set of artifacts necessary to evaluate all three security policies as well as vulnerability testing.
In the case of new critical vulnerabilities, this collection request consists only of the artifacts necessary for those vulnerabilities and collection is only initiated for those assets that could potentially have a new vulnerability.
(Optional) Asset artifacts are cached in a local CMDB. When new vulnerabilities are reported to the security automation data store, a request to the live asset is made only if the artifacts in the CMDB are incomplete and/or not sufficiently current.
The collection request is queued for the next window of connectivity. The deployed assets eventually receive the request, fulfill it, and queue the results for the next return opportunity.
The collected artifacts eventually make it back to the university, where the level of compliance and vulnerability exposure is calculated and asset characteristics are compared to what is in the asset management system for accuracy and completeness.
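The following non-normative Python sketch illustrates the store-and-forward pattern implied by this scenario; the class and method names are hypothetical.

    from collections import deque

    class DelayTolerantChannel:
        # Hypothetical store-and-forward queue for an intermittent,
        # high-latency satellite link: requests and results are
        # queued and exchanged only during windows of connectivity.
        def __init__(self):
            self.outbound = deque()  # collection requests to the site
            self.inbound = deque()   # collected artifacts coming back

        def queue_request(self, request):
            self.outbound.append(request)

        def connectivity_window(self, fulfill):
            # While the link is up, deliver queued requests and queue
            # their results for the return trip; fulfill is whatever
            # performs collection at the remote site.
            while self.outbound:
                self.inbound.append(fulfill(self.outbound.popleft()))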
Like the Automated Checklist Verification usage scenario (see section 2.2.2), this usage scenario supports assessment based on checklists. It differs from that scenario in how guidance, collected posture attribute values, and evaluation results are exchanged due to bandwidth limitations and availability. This usage scenario employs the same building blocks as Automated Checklist Verification (see section 2.2.2). It differs slightly in how it uses the following building blocks:
This usage scenario highlights the need to support low-bandwidth, intermittent, or high-latency links.
In preparation for performing an assessment, an operator or application will need to identify one or more security automation data stores that contain the guidance entries necessary to perform data collection and evaluation tasks. The location of a given guidance entry will either be known a priori or known security automation data stores will need to be queried to retrieve applicable guidance.
To query guidance, it is necessary to define a set of search criteria. These criteria will often utilize a logical combination of publication metadata (e.g., publishing identity, create time, modification time) and guidance data-specific criteria elements. Once the criteria are defined, one or more security automation data stores will need to be queried, generating a result set. Depending on how the results are used, it may be desirable to return the matching guidance directly, a snippet of the guidance matching the query, or a resolvable location from which to retrieve the data at a later time. The guidance matching the query will be restricted based on the requester's authorized level of access.
If the location of guidance is identified in the query result set, the guidance will be retrieved when needed using one or more data retrieval requests. A variation on this approach would be to maintain a local cache of previously retrieved data. In this case, only guidance that is determined to be stale by some measure will be retrieved from the remote data store.
Alternately, guidance can be discovered by iterating over data published with a given context within a security automation data store. Specific guidance can be selected and retrieved as needed.
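A non-normative Python sketch of cache-mediated guidance retrieval follows; the freshness measure (a simple maximum age) and the fetch_fn callback are assumptions made for brevity.

    import time

    class GuidanceCache:
        # Hypothetical local cache: guidance is fetched from the
        # remote data store only when absent or stale by some measure.
        def __init__(self, fetch_fn, max_age_seconds=86400):
            self._fetch = fetch_fn
            self._max_age = max_age_seconds
            self._entries = {}  # location -> (fetched_at, guidance)

        def get(self, location):
            entry = self._entries.get(location)
            if entry is None or time.time() - entry[0] > self._max_age:
                entry = (time.time(), self._fetch(location))
                self._entries[location] = entry
            return entry[1]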
This usage scenario employs the following building blocks defined in Section 2.1.1 above:
An operator or application may need to identify new, updated, or deleted guidance in a security automation data store that they are authorized to access. This may be achieved by querying or iterating over guidance in a security automation data store, or through a notification mechanism that alerts subscribers to changes made to a security automation data store.
Once guidance changes have been determined, data collection and evaluation activities may be triggered.
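For illustration, the following non-normative Python sketch shows a simple publish/subscribe notification mechanism of the kind described above; all names are hypothetical.

    class DataStoreWatcher:
        # Hypothetical notification mechanism: subscribers are
        # alerted to new, updated, or deleted guidance, which may in
        # turn trigger data collection and evaluation activities.
        def __init__(self):
            self._subscribers = []

        def subscribe(self, callback):
            self._subscribers.append(callback)

        def publish_change(self, change):
            # change: {"kind": "new" | "updated" | "deleted", "id": ...}
            for callback in self._subscribers:
                callback(change)

    # Example: trigger reassessment when vulnerability guidance changes.
    watcher = DataStoreWatcher()
    watcher.subscribe(
        lambda c: print("reassess endpoints affected by:", c["id"]))
    watcher.publish_change({"kind": "new", "id": "vuln-2015-0001"})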
This usage scenario employs the following building blocks defined in Section 2.1.1 above:
This memo includes no request to IANA.
This memo documents, for informational purposes, use cases for security automation. Specific security considerations will be provided in related documents (e.g., requirements, architecture, information model, data model, protocol) as appropriate to the function described in each related document.
One consideration for security automation is that a malicious actor could use the security automation infrastructure and related collected data to determine endpoint weaknesses to exploit. It is important that the security considerations in the related documents identify methods to both identify and prevent such activity, specifically, means for protecting both the communications and the systems that store the information. For communications between the various SACM components, protection of confidentiality, data integrity, and peer entity authentication should be considered. Also, for any systems that store information that could be used for malicious purposes, methods to identify and protect against unauthorized usage, inappropriate usage, and denial of service need to be considered.
Adam Montville edited early versions of this draft.
Kathleen Moriarty and Stephen Hanna contributed text describing the scope of the document.
Gunnar Engelbach, Steve Hanna, Chris Inacio, Kent Landfield, Lisa Lorenzin, Adam Montville, Kathleen Moriarty, Nancy Cam-Winget, and Aron Woland provided use cases text for various revisions of this draft.
Fixed a number of grammatical nits throughout the draft identified by the SECDIR review.
Added additional text to the security considerations about malicious actors.
Reworked long sentences throughout the document by shortening or using bulleted lists.
Re-ordered and condensed text in the "Automated Checklist Verification" sub-section to improve the conceptual presentation and to clarify longer sentences.
Clarified that the "Posture Attribute Value Query" building block represents a standardized interface in the context of SACM.
Removed the "others" sub-section within the "usage scenarios" section.
Updated the "Security Considerations" section to identify that actual SACM security considerations will be discussed in the appropriate related documents.
A number of edits were made to section 2 to resolve open questions in the draft based on meeting and mailing list discussions.
Section 2.1.5 was merged into section 2.1.4.
Updated the "Introduction" section to better reflect the use case, building block, and usage scenario structure changes from previous revisions.
Updated most uses of the terms "content" and "content repository" to use "guidance" and "security automation data store" respectively.
In section 2.1.1, added a discussion of different data types and renamed "content" to "data" in the building block names.
In section 2.1.2, separated out the building block concepts of "Endpoint Discovery" and "Endpoint Characterization" based on mailing list discussions.
Addressed some open questions throughout the draft based on consensus from mailing list discussions and the two virtual interim meetings.
Changed many section/sub-section names to better reflect their content.
Changes in this revision are focused on section 2 and the subsequent subsections:
Updated acknowledgements to recognize those that helped with editing the use case text.
Added four new use cases regarding content repository.
Expanded the workflow description based on ML input.
Changed the ambiguous "assess" to better separate data collection from evaluation.
Added use case for Search for Signs of Infection.
Added use case for Remediation and Mitigation.
Added use case for Endpoint Information Analysis and Reporting.
Added use case for Asynchronous Compliance/Vulnerability Assessment at Ice Station Zebra.
Added use case for Traditional endpoint assessment with stored results.
Added use case for NAC/NAP connection with no stored results using an endpoint evaluator.
Added use case for NAC/NAP connection with no stored results using a third-party evaluator.
Added use case for Compromised Endpoint Identification.
Added use case for Suspicious Endpoint Behavior.
Added use case for Vulnerable Endpoint Identification.
Updated Acknowledgements
Changed title
Removed section 4, expecting it will be moved into the requirements document.
Removed the list of proposed capabilities from section 3.1.
Added empty sections for Search for Signs of Infection, Remediation and Mitigation, and Endpoint Information Analysis and Reporting.
Removed Requirements Language section and rfc2119 reference.
Removed unused references (which ended up being all references).
[RFC3444]  Pras, A. and J. Schoenwaelder, "On the Difference between Information Models and Data Models", RFC 3444, January 2003.