Internet Engineering Task Force (IETF) Phillip Hallam-Baker
Internet-Draft Comodo Group Inc.
Intended Status: Standards Track October 27, 2014
Expires: April 30, 2015
PRISM Proof Trust Model
draft-hallambaker-prismproof-trust-01
Abstract
This paper extends Shannon's concept of a 'work factor' to provide an
objective measure of the practical security offered by a protocol or
infrastructure design. Considering the hypothetical work factor based
on an informed estimate of the probable capabilities of an attacker
with unknown resources provides a better indication of the relative
strength of protocol designs than the computational work factor of
the best known attack.
The social work factor is a measure of the trustworthiness of a
credential issued in a PKI based on the cost of having obtained the
credential through fraud at a certain point in time. Use of the
social work factor allows Certificate Authority based trust models and peer to peer (Web of Trust) models to be evaluated in the same framework. The analysis shows that each model has clear
benefits over the other for some classes of user but most classes of
user are served better by a combination of both.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
Copyright Notice
Copyright (c) 2014 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Work Factor . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1. Computational Work Factor . . . . . . . . . . . . . . . . 4
1.2. Hypothetical Work Factor . . . . . . . . . . . . . . . . 5
1.2.1. Known Unknowns . . . . . . . . . . . . . . . . . . . 6
1.2.2. Defense in Depth . . . . . . . . . . . . . . . . . . 7
1.2.3. Mutual Reinforcement . . . . . . . . . . . . . . . . 8
1.2.4. Safety in Numbers . . . . . . . . . . . . . . . . . 8
1.3. Cost Factor . . . . . . . . . . . . . . . . . . . . . . . 9
1.4. Social Work Factor . . . . . . . . . . . . . . . . . . . 11
2. The Problem of Evaluating Trust . . . . . . . . . . . . . . . 13
2.1. Probability and Risk . . . . . . . . . . . . . . . . . . 13
2.2. Reputation . . . . . . . . . . . . . . . . . . . . . . . 14
2.3. Curated Spaces . . . . . . . . . . . . . . . . . . . . . 15
2.4. Trustworthy Time . . . . . . . . . . . . . . . . . . . . 15
3. Maximizing Social Work Factor to Maximize Trust . . . . . . . 15
3.1. Trust Specifiers . . . . . . . . . . . . . . . . . . . . 16
3.1.1. Key Identifiers . . . . . . . . . . . . . . . . . . 16
3.1.2. Self Signed Certificates . . . . . . . . . . . . . . 17
3.2. Trust Assertions . . . . . . . . . . . . . . . . . . . . 17
3.2.1. Certificate Authority Issued Certificates . . . . . 17
3.2.2. Key Signing . . . . . . . . . . . . . . . . . . . 19
3.2.3. Adding Key Endorsement to PKIX . . . . . . . . . . . 19
3.3. Trust Meta Assertions . . . . . . . . . . . . . . . . . . 22
3.3.1. Revocation and Status Checking . . . . . . . . . . 22
3.3.2. Notarization . . . . . . . . . . . . . . . . . . . . 22
3.3.3. Transparency . . . . . . . . . . . . . . . . . . . . 22
3.4. Other Approaches . . . . . . . . . . . . . . . . . . . . 22
3.4.1. DNSSEC . . . . . . . . . . . . . . . . . . . . . . . 23
3.4.2. SPKI / SDSI . . . . . . . . . . . . . . . . . . . . 23
3.4.3. Identity Based Cryptography . . . . . . . . . . . . 23
4. Maximizing Social Work Factor in a Notary Infrastructure . . . 24
5. Conclusions and Related Work . . . . . . . . . . . . . . . . . 25
Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 25
1. Work Factor
Recent events have highlighted both the need for open standards based
security protocols and the possibility that the design of such
protocols may have been sabotaged. The community thus faces two
important and difficult challenges, first to design an Internet
security infrastructure that offers practical security against the
class of attacks revealed, and secondly, to convince potential users
that the proposed new infrastructure has not been similarly
sabotaged.
The security of a system should be measured by the difficulty of attacking it. The security of a safe is measured by the length of time
it is expected to resist attack using a specified set of techniques.
The security of a cryptographic algorithm against a known attack is
measured by the computational cost of the attack.
This paper extends Shannon's concept of a 'work factor' to provide an
objective measure of the security a protocol or infrastructure offers
against other forms of attack.
1.1. Computational Work Factor
The term 'Computational Work Factor' is used to refer to Shannon's
original concept.
One of Shannon's key insights was that the work factor of a
cryptographic algorithm could be exponential. Adding a single bit to
the key size of an ideal symmetric algorithm presents only a modest
increase in computational effort for the defender but doubles the
work factor for the attacker.
More precisely, the difficulty of breaking a cryptographic algorithm
is generally measured by the work-factor ratio. If the cost of
encrypting a block with 56 bit DES is x, the worst case cost of
recovering the key through a brute force attack is x * 2^56. The
security of DES has changed over time because the cost x has fallen
exponentially.
While the work factor is traditionally measured in terms of the
number of operations, many cryptanalytic techniques permit memory
used to be traded for computational complexity. An attack requiring
2^64 bytes of memory that reduces the number of operations required
to break a 128 bit cipher to 2^64 is a rather lower concern than one
which reduces the number of operations to 2^80. The term 'cost' is
used to gloss over such distinctions.
[Note that in the following analysis, the constraints of the IETF
document format make use of the established notation impractical and
a confusing mess, hence the departure from Shannon's notation.]
The Computational Work Factor ratio WF_C (A) of a cryptographic
algorithm A is the cost of the best known attack divided by the cost
of the algorithm itself.
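As a non-normative illustration of this definition, the following Python sketch computes the work factor ratio for the DES example above; the unit cost x is a placeholder and the figures are illustrative only.
   # Sketch: computational work factor ratio WF_C(A) = cost of the best
   # known attack divided by the cost of running the algorithm once.
   def computational_work_factor(attack_cost, algorithm_cost):
       return attack_cost / algorithm_cost

   # For an ideal cipher with an n-bit key the worst case brute force
   # attack costs 2**n encryptions, so the ratio is 2**n however cheap a
   # single encryption becomes over time.
   x = 1.0   # arbitrary unit cost of one DES block encryption
   print(computational_work_factor(x * 2**56, x))   # approximately 7.2e16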
1.2. Hypothetical Work Factor
Modern cryptographic algorithms use keys of 128 bits or more and
present a work factor ratio of 2^128 against brute force attack. This
work factor is at least 2^72 times higher than DES and comfortably
higher than the work factor of 2^80 operations that is generally
believed to be about the limit to current attacks.
While an exceptionally well resourced attacker may gain performance
advances through use of massive parallelism, faster clock rates made
possible by operating at super-low temperatures and custom designed
circuits, the return on such approaches is incremental rather than
exponential.
Performance improvements may allow an attacker to break systems with
a work factor several orders of magnitude greater than the public
state of the art. But an advance in cryptanalysis might permit a
potentially more significant reduction in the work factor.
The primary consideration in the choice of a cryptographic algorithm is therefore not the known computational work factor as measured by the best publicly known attack, but an informed estimate of the computational work factor of the best attack that might be known to the attacker.
While the exact capabilities of the adversary are unknown, a group of
informed experts may arrive at a conservative estimate of their
likely capabilities. The probability that a government attacker has
discovered an attack against AES-128 with a work factor ratio of
2^120 might be considered relatively high while the probability that
an attack with a work factor ratio of less than 2^64 is very low.
We define the hypothetical work factor function WF_H (A, p) as
follows: If WF is a work factor ratio and p is an informed estimate
of the probability that an adversary has developed an attack with a
work factor ratio against algorithm A of WF or less then WF_H (A, p)
= WF.
Since the best known public attack is known to the attacker, WF_H(A,
1) <= WF_C (A).
The inverse function WF_H' (A, WF) returns the estimated probability that an attacker has an attack against algorithm A with a work factor ratio of WF or less.
The hypothetical work factor and its inverse may be used to compare
the relative strengths of protocol designs. Given designs A and B, we
can state that B is an improvement on A if WF_H(B,p) > WF_H(A,p) for
all p.
When considering a protocol or infrastructure design we can thus
improve a protocol by either:
* Increasing WF_H(A,p) for some p, or
* Decreasing WF_H'(A,WF)
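The following sketch is an illustration only: it models the hypothetical work factor as a small table of informed estimates (the probabilities and work factors are invented for the example, and design B is assumed to use a 256 bit key) and applies the comparison criterion given above.
   # Sketch: WF_H(A, p) modelled as a table of informed estimates.  Each
   # entry (p, wf) reads "with probability p the adversary has an attack
   # with a work factor ratio of wf or less".  Values are invented.
   design_A = [(0.01, 2**64), (0.10, 2**120), (1.00, 2**128)]
   design_B = [(0.01, 2**80), (0.10, 2**126), (1.00, 2**256)]

   def wf_h(design, p):
       # Smallest tabulated work factor whose probability estimate is at
       # least p.
       for prob, wf in design:
           if prob >= p:
               return wf
       return design[-1][1]

   # B is an improvement on A if WF_H(B, p) > WF_H(A, p) for all p.
   probs = [0.01, 0.10, 1.00]
   print(all(wf_h(design_B, p) > wf_h(design_A, p) for p in probs))  # True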
1.2.1. Known Unknowns
Unlike the computational work factor, the hypothetical work factor
does not provide an objective measure of the security offered by a
design. The purpose of the hypothetical work factor is to allow the
protocol designer to compare the security offered by different design
choices.
The task that the security engineer faces is to secure the system
from all attacks whether the attacks themselves are known or unknown.
In the current case it is known that an attacker is capable of
breaking at least some of the cryptographic algorithms in use but not
which algorithms are affected or the nature of the attack(s).
Unlike the computational work factor, the hypothetical work factor
does not deliver an academically rigorous, publication and citation
worthy measure of the strength of a design. That is not its purpose.
The purpose of the hypothetical work factor is to assist the protocol
designer in designing protocols.
Design of security protocols has always required the designer to
consider attackers whose capabilities are not currently known and
thus involved a considerable degree of informed opinion and
guesswork. Whether correctly or not, the decision to reject changes
to the DNSSEC protocol to enable deployment in 2002 rested in part on
a statement by a Security Area Director that a proposed change gave
him a bad feeling in his gut. The hypothetical work factor permits
the security designer to quantify such intestinally based assumptions and model their effect on the security of the resulting design.
Security is a property of systems rather than individual components.
While it is quite possible that there are no royal roads to
cryptanalysis and cryptanalysis of algorithms such as AES 128 is
infeasible even for the PRISM-class adversaries, such adversaries are
not limited to use of cryptanalytic attacks.
Despite the rise of organized cyber-crime, many financial systems
still employ weak cryptographic systems that are known to be
vulnerable to cryptanalytic attacks that are well within the
capabilities of the attackers. But fraud based on such techniques
remains vanishingly rare as it is much easier for the attackers to
persuade bank customers to simply give their access credentials to
the attacker.
Even if a PRISM-class attacker has a factoring attack which renders
an attack on RSA-2048 feasible, it is almost certainly easier for a
PRISM-class attacker to compromise a system using RSA-2048 in other ways, for example by persuading the target of the surveillance to use cryptographic devices with a random number generator that leaks a crib to the attacker. Analyzing the second form of attack requires a
different type of analysis which is addressed in the following
section on social work factor.
1.2.2. Defense in Depth
The motivation behind introducing the concept of the hypothetical
work factor is a long experience of seeing attempts to make security
protocols more robust being deflected by recourse to specious
arguments based on the computational work factor.
For example, consider the case in which a choice between a single
security control and a defense in depth strategy is being considered:
* Option A: Uses algorithm X for protection.
* Option B: Uses a combination of algorithm X and algorithm Y for
protection such that the attacker must defeat both to break the
system and algorithms based on different cryptographic
principles are chosen so as to minimize the risk of a common
failure mode.
If the computational work factor for both algorithms X and Y is
2^128, both options present the same work factor ratio. Although Option B layers two defenses, it also requires roughly twice the work from the defender. The argument that normally wins is that since both options present the same computational work factor ratio of 2^128 and Option A is simpler, Option A should be chosen. This is despite the obvious fact that only Option B offers defense in depth.
If we consider the adversary as being capable of performing a work factor ratio of 2^80 and the probability that the attacker has discovered an attack capable of breaking algorithms X and Y to be 10% in each case, the probability that the attacker can break Option A is 10% while the probability of a successful attack on Option B is only 1%, a significant improvement.
While Option B clearly offers a significant potential improvement in
security, this improvement is only fully realized if the
probabilities of a feasible attack are independent.
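A minimal numeric sketch of the argument, using the 10% estimates from the text and assuming the two break probabilities are independent:
   # Sketch: probability that each option is broken, assuming the
   # attacks on algorithms X and Y are independent events.
   p_break_X = 0.10
   p_break_Y = 0.10

   p_option_A = p_break_X                  # single control
   p_option_B = p_break_X * p_break_Y      # attacker must break both

   print(p_option_A, p_option_B)           # 0.1 and roughly 0.01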
1.2.3. Mutual Reinforcement
The defense in depth approach affords a significant improvement in
security but an improvement that is incremental rather than
exponential in character. With mutual reinforcement we design the
mechanism such that in addition to requiring the attacker to break
each of the component algorithms, the difficulty of the attacks is
increased.
For example, consider the use of a Deterministic Random Number
Generator R(s) which returns a sequence of values R(s)_1, R(s)_2, ... from
an initial seed s.
Two major concerns in the design of such generators are the
possibility of bias and the possibility that the seed value is somehow leaked through a side channel.
Both concerns are mitigated if instead of using the output of one
generator directly, the value R1(s1) XOR R2(s2) is used where R1 and
R2 are independent random number generators and s1, s2 are distinct
seeds.
The XOR function has the property of preserving randomness so that
the output is guaranteed to be at least as random as either of the
generators from which it is built (provided that there is not a
common failure mode). Further, recovery of either random seed is at
least as hard as using the corresponding generator on its own. Thus
the Hypothetical work factor for the combined system is improved to
at least the same extent as in the defense in depth case.
But any attempt to break either generator must now face the
additional complexity introduced by the output being masked with the
unknown output of the other. An attacker cannot cryptanalyze the two
generator functions independently. If the two generators and the
seeds are genuinely independent, the combined hypothetical work
factor is the product of the hypothetical work factors from which it
is built.
While implementing two independent generators and seeds represents a
significant increase in cost for the implementer, a similar
exponential leverage might be realized with negligible additional
complexity through use of a cryptographic digest of the generator
output to produce the masking value.
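A sketch of the XOR combination follows. It is not a construction mandated by any standard; the two toy generators are counter mode constructions over SHA-256 and SHA3-256, chosen only to illustrate combining generators of independent design.
   import hashlib

   def gen_a(seed, i):
       # Toy deterministic generator built on SHA-256 in counter mode.
       return hashlib.sha256(seed + i.to_bytes(8, 'big')).digest()

   def gen_b(seed, i):
       # Second toy generator built on SHA3-256, a different design.
       return hashlib.sha3_256(seed + i.to_bytes(8, 'big')).digest()

   def combined(seed_a, seed_b, i):
       # XOR preserves randomness: the output is at least as random as
       # the stronger generator, and neither raw output is exposed to an
       # attacker on its own.
       return bytes(x ^ y for x, y in zip(gen_a(seed_a, i),
                                          gen_b(seed_b, i)))

   print(combined(b'seed one', b'seed two', 0).hex())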
1.2.4. Safety in Numbers
In a traditional security analysis the question of concern is whether
a cryptanalytic attack is feasible or not. When considering an
indiscriminate intercept capability as in a PRISM-class attack, the
concern is not just whether an individual communication might be
compromised but the number of communications that may be compromised
for a given amount of effort.
'Perfect' Forward Secrecy is an optional feature supported in IPSec
and TLS. Current implementations of TLS offer a choice between:
* Direct key exchange with a work factor dependent on the
difficulty of breaking RSA 2048
* Direct key exchange followed by a perfect forward secrecy
exchange with a work factor dependent on the difficulty of
breaking RSA 2048 and DH 1024.
Using the computational work factor alone suggests that the second
scheme has little advantage over the first since the computational
work factor of Diffie Hellman using the best known techniques is 2^80
while the computational work factor for RSA 2048 is 2^112. Use of the
perfect forward secrecy exchange has a significant impact on server
performance but does not increase the difficulty of cryptanalysis.
Use of perfect forward secrecy with a combination of RSA and Diffie
Hellman does not provide a significant improvement in the
hypothetical work factor either if individual messages are
considered. The RSA and Diffie Hellman systems are closely related
and so an attacker that can break RSA 2048 can almost certainly break DH 1024. Moreover, the computational work factor for DH 1024 is only 2^80 and thus potentially within the reach of a well funded and determined
attacker.
Use of perfect forward secrecy does provide an important security
benefit when multiple messages are considered. While a sufficiently
funded and determined attacker could conceivably break tens, hundreds
or even thousands of DH 1024 keys a year, it is rather less likely
that an attacker could break millions a year. The Comodo OCSP server
receives over 2 billion hits a day and this represents only a
fraction of the number of uses of SSL on the Internet. Use of perfect
forward secrecy does not prevent an attacker from decrypting any
particular message but raises the cost of indiscriminate intercept
and decryption.
There is security in numbers: If every communication is protected by
perfect forward secrecy the hypothetical work factor for decrypting
every communication is the hypothetical work factor of decrypting one
communication times the number of communications.
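A back of the envelope sketch of that multiplication, using an assumed cost for one DH 1024 break and a session count of the order of the OCSP figure cited above (both values are illustrative):
   # Sketch: with forward secrecy the cost of indiscriminate decryption
   # scales with the number of sessions rather than with one key.
   cost_one_session = 2**80       # assumed cost of breaking one DH 1024 exchange
   sessions_per_day = 2 * 10**9   # order of the OCSP hit rate cited above

   aggregate = cost_one_session * sessions_per_day
   print(aggregate.bit_length() - 1)   # 110, i.e. roughly 2^110 per day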
1.3. Cost Factor
As previously discussed, cryptanalysis is not the only tool available
to an attacker. Faced with a robust cryptographic defense, Internet
criminals have employed 'social engineering' instead. A PRISM-class
attacker may use any and every tool at their disposal including tools
that are unique to government backed adversaries such as the threat
of legal sanctions against trusted intermediaries.
Although attackers can and will use every tool at their disposal,
each tool carries a cost and some tools require considerable advance
planning to use. It is conceivable that the AES standard published by
NIST contains a backdoor that somehow escaped the extensive peer
review. But any such effort would have had to have begun well in
advance of 1998 when the Rijndael cipher was first published.
Subversion of cryptographic apparatus such as Hardware Security
Modules (HSMs) and SSL accelerators faces similar constraints. An HSM may be compromised by an adversary, but the compromise must have taken place at the time the device was manufactured or serviced.
Just as computational attacks are limited by the cryptanalytic
techniques known to and the computational resources available to the
attacker, social attacks are limited by the cost of the attack and
the capacity of the attacker.
The Cost Factor C(t) is an estimate of the cost of performing an
attack on or before a particular date in time (t).
For the sake of simplicity, currency units are used under the
assumption that all the resources required are fungible and that all
attackers face the same costs. But such assumptions may need to be
reconsidered when there is a range of attackers with very different
costs and capabilities. A hacktivist group could not conceivably
amass the computational and covert technical resources available to
the NSA but such a group could in certain circumstances conceivably
organize a protest with a million or more participants while the
number of NSA employees is believed to still be somewhat fewer.
The computational and hypothetical work factors are compared against
estimates of the computational resources of the attacker. An attack
is considered to be infeasible if the available computational
resources do not allow the attack to be performed within a useful
period of time.
The cost factor is likewise compared against an incentive estimate,
I(t) which is also time based.
An attack is considered to be productive for an attacker if there was
a time t for which I(t) > C(t).
An attack is considered to be unproductive if there is no time at
which it was productive for that attacker.
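The productivity test can be written out mechanically, as in the sketch below; the cost and incentive curves are invented purely for illustration.
   # Sketch: an attack is productive if I(t) > C(t) at some time t.
   def productive(cost, incentive, times):
       return any(incentive(t) > cost(t) for t in times)

   years = range(2014, 2025)
   C = lambda t: 1_000_000 * 0.95 ** (t - 2014)   # cost falls slowly
   I = lambda t: 2_000_000 * 0.50 ** (t - 2014)   # incentive decays quickly

   print(productive(C, I, years))   # True: I(2014) exceeds C(2014)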
Unlike the Cost Factor, for which a lower bound based on the lowest cost and highest capacity may usefully be applied to all attackers,
differences in the incentive estimate between attackers are likely to
be very significant. Almost every government has the means to perform
financial fraud on a vast scale but only rarely does a government
have the incentive, and when governments do engage in activities such as counterfeiting banknotes it has been for motives beyond mere peculation.
While government actors do not respond to the same incentives as
Internet criminals, governments fund espionage activities in the
expectation of a return on their investment. A government agency
director who does not produce the desired returns is likely to be
replaced.
For example, when the viability of SSL and the Web PKI for protecting
Internet payments was considered in the mid 1990s, the key question
was whether the full cost of obtaining a fraudulently issued
certificate would exceed the expected financial return, where the full
cost is understood to include the cost of registering a bogus
corporation, submitting the documents and all the other activities
that would be required if a sustainable model for payments fraud was
to be established.
For an attack to be attractive to an attacker it is not enough for it to be productive; the time between the initial investment and the reward and the likelihood of success are also
important factors. An attack that requires several years of advance
planning is much less attractive than an attack which returns an
immediate profit.
An attack may be made less attractive by:
* Increasing the cost
* Reducing the incentive
* Reducing the expected gain
* Reducing the probability that the incentive will be realized
* Increasing the time between the initial investment and the
return.
1.4. Social Work Factor
In the cost factor analysis it is assumed that all costs are fungible
and the attack capacity of the attacker is only limited by their
financial resources. Some costs are not fungible, however; in particular, inducing a large number of people to accept a forgery
without the effort being noticed requires much more than a limitless
supply of funds.
In a computational attack an operation will at worst fail to deliver
success. There is no penalty for failure beyond having failed to
succeed. When attempting to perpetrate a fraud on the general public,
every attempt carries a risk of exposure of the entire scheme. When
attempting to perform any covert activity, every additional person
who is indoctrinated into the conspiracy increases the chance of
exposure.
The totalitarian state envisioned by George Orwell in 1984 is only
possible because each and every citizen is in effect a party to the
conspiracy. The erasure and replacement of the past is possible
because the risk of exposure is nil.
In 2011 I expressed concern to a retired senior member of the NSA
staff that the number of contractors being hired to perform cyber-
sabotage operations represented a security risk and might be creating
a powerful constituency with an interest in the aggressive
militarization of cyberspace rather than preparing for its defense.
Subsequent disclosures by Edward Snowden have validated the
disclosure risk aspect of these concerns.
Empirically, the NSA, an organization charged with protecting the
secrecy of government documents, was unable to maintain the secrecy
of its most important secrets when the size of the conspiracy reached a few tens of thousands of people.
The community of commercial practitioners of cryptographic information security is small in size but encompasses many nationalities. Many
members of the community are bound by ideological commitments to
protecting personal privacy as an unqualified moral objective.
Introducing a backdoor into an HSM, application or operating system
platform requires that every person with access to the platform
source or who might be called in to audit the code be a party to the
conspiracy. Tapping the fiber optic cables that support the Internet
backbone requires only a small work crew and digging equipment.
Maintaining a covert backdoor in a major operating system platform
would require hundreds if not thousands of engineers to participate
in the conspiracy.
The Social Work Factor WF_S(t) is a measure of the cost of
establishing a fraud in a conspiracy starting at date t. The cost is
measured in the number of actions that the party perpetrating the
fraud must perform that carry a risk of exposure.
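One hedged way to make the risk of exposure concrete: if each risky action is assumed to be detected independently with some small probability, the chance that the whole scheme stays covert falls geometrically with the number of actions. The detection probability below is invented for illustration.
   # Sketch: probability that n independent risky actions all go
   # unnoticed when each is detected with probability q.
   def p_undetected(n, q):
       return (1 - q) ** n

   for n in (10, 1_000, 30_000):
       print(n, p_undetected(n, 0.0001))
   # about 0.999 for 10 actions, 0.905 for 1,000 and 0.050 for 30,000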
In general, the Social Work Factor will increase over time.
Perpetrating a fraud claiming that the Roman emperor Nero never
existed today would require that millions of printed histories be
erased and rewritten, and every person who has ever taught or taken a
lesson in Roman history would have to participate in the fraud. The
Social Work Factor would be clearly prohibitive.
The Social Work Factor of perpetrating such a fraud today is prohibitive, but the cost in the immediate aftermath of Nero's death in 68 AD would have been considerably lower. While the emperor Nero was obviously not erased from history, there is a strong
consensus among Egyptian archeologists that something of the sort
happened to Tutankhamun before the discovery of his tomb by Howard
Carter.
2. The Problem of Evaluating Trust
The Prism-Proof Email testbed attempts to facilitate the development
and deployment of a new email privacy protection infrastructure by
dividing the problem into the parts for which there are known, well
established (if not necessarily perfect) solutions and the parts for
which there are not, with clearly defined interfaces between the two parts.
The Trust Publication Web Service is a JSON/REST Web service that
supports the publication of all existing forms of trust assertion
(PKIX, OpenPGP, SAML). For the sake of future simplicity, a new ASN.1
message format for OpenPGP-style key endorsement is proposed so that
all the forms of trust assertion that might be used in a PKI may be
expressed without recourse to multiple data encoding formats. The
Trust Publication Web Service need not be a trusted service since its
role is essentially that of a proxy, routing messages such as
certificate requests to the appropriate destination(s).
The Omnibroker Web Service is a trusted service that a mail user
agent or proxy can query to determine which security enhancements
(encryption, signature, etc.) should be added to an outbound message
(among other functions).
Between the Trust Publication Web Service and the Omnibroker service
sits the hard research problem of how to make sense of and what value
to place on the CA issued certificates, peer to peer key
endorsements, revocation information and other signed assertions that
might exist.
Robust implementation of public key cryptography allows the signature
on a signed assertion to be verified as belonging to a holder of the
corresponding signature key with near certainty. But such an
assertion can only be considered trustworthy if the purported signer
is trustworthy and is the actual holder of the corresponding signing
key, claims that are in turn established by more signed assertions.
Expanding the scope of our search increases the number of documents
on which we are relying for trust rather than answering the question
we wish to answer.
2.1. Probability and Risk
Attempting to analyze the trustworthiness of a signed assertion in a
heterarchical topology such as Phil Zimmermann's Web of Trust leads to
an infinite regression. Alice may see that Bob's key has been signed
by Carol, Doug and Edward but this should only give Alice more
confidence in the validity of Bob's key if she knows that Carol, Doug
and Edward are distinct individuals.
Probability is a model of events that are random. An attack is a
conscious act on the part of an attacker and is only random insofar
as the attacker's motive may not require a particular choice of
victim or method. In such cases the particular attack is 'random'
from the point of view of the victim, but the fact that an attack takes place at all is due to the attacker having motive, means and opportunity.
The motive for an attacker depends on the perceived rather than the
actual difficulty of breaking a system. It might be some time before
a competent attacker attempts to break an insecure system that is for
some reason generally believed to be secure. But the rate of attack
is likely to increase rapidly once the vulnerability is widely known.
The system has not become less trustworthy over time, rather the
system was always untrustworthy and it is only the consequences of
that fact that have changed.
Analyzing the trustworthiness of a Web of Trust using an estimate of
the probability that an assertion might be fraudulent is
unsatisfactory because it requires us to provide as an input to our
calculations the very quantity we are trying to arrive at as an
output.
Attempting to estimate the probability of default for each assertion
in a Web of Trust leads us to an infinite recursion as Alice trusts Bob, who trusts Carol, who trusts Alice. We can define the inductive step but
have no base case to ground it with.
2.2. Reputation
Another frequently proposed metric for analyzing the trustworthiness
of assertions is 'reputation'. Reputation is a measure of risk based
on reports of past behavior.
Reputation has proved somewhat effective in online restaurant
reviews. A restaurant that receives a high number of good reviews is
likely to be worth visiting but ratings based on a small number of
reviews can be wildly inaccurate. A glowing review may have been
written by a satisfied customer or by an unscrupulous proprietor. A
series of negative reviews may be written by unsatisfied customers or
a jealous competitor.
Restaurant reviews work because there is a widely shared
understanding of what makes a good or a bad restaurant. There is no
similar shared understanding of the quality of a public key
validation process except among specialists in the field.
2.3. Curated Spaces
'Reputation based' systems have proved highly effective at
controlling email abuse but these systems use a large quantity of
empirical data, including data from honeypot email servers, expert analysis
of abuse traffic and content analysis to arrive at reputation scores.
It is the action of the curator that turns the raw data into a useful
measure of risk rather than the mechanical application of a clever
algorithm.
There is a good argument to be made for introducing a curator into a
PKI trust model but that is an argument about who should perform the
analysis rather than how the analysis is to be performed.
2.4. Trustworthy Time
The problem of grounding the Web of Trust is solved if there is
available a notary authority whose trustworthiness is beyond
reasonable dispute. Once a trust assertion has been notarized by such
a notary authority, the cost of forgery becomes the cost of suborning
the notary authority.
As is demonstrated later, use of linked timestamps and cross-
notarization amongst notaries makes it possible to establish a
timestamp notary infrastructure such that perpetrating a forgery requires that each and every notary in the infrastructure be compromised.
The analysis of the set of trust assertions begins by assigning a Social Work Factor to the earliest assertion prior to the time at
which it was notarized. This provides the base case of the induction
from which the rest of the analysis proceeds.
3. Maximizing Social Work Factor to Maximize Trust
As previously described, the purpose of the Social Work Factor is to
support the design process by allowing the consequences of different
design approaches to be considered.
When designing a trust infrastructure, there are two different
attacks that need to be considered. First there is the attack where a
credential is issued for an entirely fictitious persona, secondly
there is the attack where a credential is issued to an impostor
impersonating a real persona.
The consequences of the two attacks are very different, particularly
where a confidentiality infrastructure is concerned. Indeed it might
be considered desirable to encourage participants to create and use
fictitious personas to provide anonymity for their actions in certain
circumstances. An impostor who gains a credential for a real person
can use it to persuade relying parties that their communications are
confidential when they are in fact compromised and can steal the use
of the target's reputation.
The context of an attack is also important. The confidentiality of
the private communications of an individual is an issue for that
individual and their correspondents alone. The confidentiality of the
communications of an individual acting for their employer is much
more complex. In addition to the employer having an interest in
protecting the confidentiality of the communication, there may be a
legitimate employer interest in being able to view the contents. For
example, it is now generally accepted in many countries that most
government employees do not have a right of privacy from the people
who they ultimately work for unless their job function falls into a
narrowly scoped exception.
3.1. Trust Specifiers
A trust specifier is a mechanism that identifies a public key either
directly (e.g. a self-signed certificate) or indirectly (e.g. a Key
identifier).
Trust specifiers are not trust assertions but may be used to create
trust assertions. For example, an OpenPGP fingerprint does not make
any statement about the owner of a public key but an OpenPGP
fingerprint printed on a business card is an explicit claim that the
specified public key may be used to send encrypted email to the
individual named on the card.
3.1.1. Key Identifiers
PGP introduced the use of key fingerprints as the basis for key
exchange. A cryptographic digest value is computed from the user's
public key and used as the basis for key endorsement (called key
signing in the PGP terminology).
The term 'fingerprint' has no formal definition in the PKIX
specifications but the term is widely used to refer to a message
digest of the entire contents of a certificate. Since this use is
incompatible with the PGP usage, the term Key Identifier is preferred
as this is unambiguous in both contexts.
In PKIX, the key identifier value is a value chosen by the issuer to
uniquely identify a public key. The Key Identifier value of the
issuing public key is specified using the authorityKeyIdentifier extension
and the Key Identifier value of the subject is specified in the
subjectKeyIdentifier extension.
While a PKIX Key Identifier is not required to have the strong
binding to the corresponding public key that an OpenPGP identifier
does, a profile could require that certificates specify 'strong' Key
Identifiers formed using a cryptographic message digest of the public
key parameters.
Strong Key Identifiers are not trust assertions but they may be used
to facilitate the creation of trust assertions through key signing, a
form of the endorsement mechanism discussed below.
Strong Key Identifiers may also be used to publish informal key
assertions by adding them to a business card or a Web Page. Such uses
might be facilitated through definition of appropriate URI and QR
code formats.
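A minimal sketch of what a 'strong' Key Identifier could look like under such a profile: a digest over the encoded public key, computed here with hashlib. The choice of SHA-256, the SubjectPublicKeyInfo input and the truncation length are assumptions of the example, not requirements of PKIX.
   import hashlib

   def strong_key_identifier(spki_der: bytes) -> bytes:
       # Derive the identifier from the public key material itself (for
       # example a DER-encoded SubjectPublicKeyInfo), so the identifier
       # is strongly bound to the key rather than chosen by the issuer.
       return hashlib.sha256(spki_der).digest()

   def display_form(spki_der: bytes) -> str:
       # Truncated, hex-encoded form suitable for a business card or QR
       # code; the truncation length is illustrative only.
       return strong_key_identifier(spki_der)[:20].hex()

   print(display_form(b'placeholder public key bytes'))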
3.1.2. Self Signed Certificates
In the context of PKI, the term 'certificate' is generally understood
to refer to an X.509 public key certificate that binds a name and/or
an Internet address to a public key.
A certificate may be either a self signed certificate or a CA issued
certificate. Since the work factor of creating a self-signed
certificate is negligible, such certificates demonstrate little in
themselves but present the subject's public key data in a format that
is compatible with many existing applications.
As with Key Identifiers, a self signed certificate is not a useful
trust assertion in its own right but may be used to facilitate the
creation of trust assertions through notarization or endorsement. A
public key certificate may also be used as the basis for making a
Certificate Signing Request to a Certificate Authority.
3.2. Trust Assertions
In a PKI, a trust assertion makes a statement about the holder of a
public key. In the PKIX model as currently deployed and used, the
only forms of trust assertion are Key Signing Certificates and End
Entity Certificates. In the OpenPGP model every user is also a trust
provider and trust assertions are created in peer-to-peer fashion to
create a Web of Trust.
3.2.1. Certificate Authority Issued Certificates
A Certificate Authority (CA) is a trusted third party that issues
digital certificates. A private CA issues certificates for a closed
community of relying parties, a public CA issues certificates without
restriction on the relying parties.
In the Web PKI, providers of Web browsers and platform providers
embed the trust anchors of selected public Certificate Authorities
into the application as default trust providers.
Operation of a Certificate Authority involves two types of
certificate. A certificate is either a certificate signing
certificate or an end entity certificate. While it is possible for a
Certificate Authority to lose control of a signing key used to issue
certificates, the Social Work Factor for such attacks can be made
prohibitively high.
A CA Certificate Policy defines (among other things) the validation
criteria that the CA applies before deciding to issue a certificate.
A certificate policy is designed to provide a balance between the
social work factor presented to an attacker and the cost to the CA
and the subject. The CA-Browser forum Extended Validation practices
are designed to present a very high social work factor to attackers
while Domain Validation presents a significantly lower cost to both
subjects and attackers.
The EV guidelines in particular are designed to present a social work
factor that increases each time an attacker attempts an attack.
Registering one corporation is relatively straightforward.
Registering a corporation in a way that prevents ownership being
traced back to the owners is rather more difficult. Registering
hundreds of false front corporations is considerably harder as any
common link between the corporations means that if one corporation is
discovered to be a front for fraudulent purposes, the rest will come
under close scrutiny.
A major advantage of the CA certificate issue approach is that it allows a high degree of trust to be established very quickly while it
takes a considerable time to establish trust in a pure Web of Trust
approach.
The main drawback to the CA issue approach is in providing trust to
individuals for personal use rather than corporate or government
entities or to employees working for such entities. Corporations and
Government entities invest in obtaining EV validated certificates as
a cost of doing business that has an established return. There is no
obvious return on obtaining a similar high assurance credential for
personal use but validating an individual's credentials is just as
complicated as validating the credentials of a corporation. The
history of the UK National Identity card suggests that there is no
reason to expect that the cost of provisioning credentials would be
significantly reduced at scale.
The CA issue model has been successfully applied to issue of
credentials to employees for use within an organization and such
credentials are occasionally used for external purposes such as
S/MIME email. But such certificates are not intended for personal use
and are typically revoked when employment ends.
3.2.2. Key Signing
In the OpenPGP model every key holder is a trust provider and every
key may be used to provide trust through 'key signing'. The trust
value of key signing depends on how close the relying party is to the
signer. A key that I have signed myself is far more trustworthy than
a key signed by a friend of a friend of a friend.
The Social Work Factor of forging an individual key signing is low
but could be fixed in time through use of a notary. A recent key
signing purporting to be for the public key of Barack Obama would
have negligible evidentiary value unless produced by someone very
close to me or endorsed through other means. A key signing that had
been notarized during his time as a Harvard student would be
considerably more trustworthy.
As the separation between the relying party and the signer becomes
larger it becomes increasingly desirable to require multiple
independent key signing paths. The Social Work Factor combined with
an absolutely reliable notary service provide a firm basis for
evaluating the trust value of such Webs of trust: The trustworthiness
of a key is dependent on whether the incentive to create a forgery
ever exceeded the Social Work Factor of performing a forgery.
Use of peer-to-peer key signing does provide a viable model for
establishing high assurance credentials for personal use but
establishing a high degree of assurance is like making the best
quality Scotch whisky: the product takes an inordinately long time to
reach maturity. It is hard to see how key-signing could be made a
viable model for commercial use. It is an especially poor fit for
issue of credentials to employees as the employer has no control over
the issue or use of the credential and no ability to revoke the
credential when an employee is terminated.
One advantage frequently cited for OpenPGP over PKIX is that there is
no CA role and the infrastructure is therefore 'free'. Neither the
premise nor the conclusion holds. Many profitable businesses have been built on the basis of open source software, and a Web of Trust that reached critical mass would offer abundant opportunities to
companies with PKI expertise.
Nor is the lack of business stakeholders in an infrastructure
necessarily an advantage. One of the reasons that the Web PKI has
been successful is that there are many businesses that promote the
use of SSL certificates.
3.2.3. Adding Key Endorsement to PKIX
Considering the Social Work Factor presented by trust assertions in
the PKIX and OpenPGP model establishes clear benefits to both
approaches. Each model best suits a different community.
One approach to a next generation PKI therefore would be to simply
enable the use of either scheme in any application. This is not the
best approach however as it leaves the email security market in a
Betamax/VHS standards war that has been stalemated for almost two
decades. Further, attempting to maintain the use of two different
PKIs leads to an unnecessary increase in complexity.
A PKI that combines the PKIX and OpenPGP approaches offers clear
advantages over both. For many practical reasons that will only be
summarised here, it is better to extend the existing PKIX
infrastructure to support Key Signing rather than start from the
OpenPGP message formats or attempt to design a system based on a new
encoding format. These include the deployed base of S/MIME email
clients, the use of PKIX to support SSL and the widespread support
for PKIX in common code platforms.
Rather than attempting to force the peer-to-peer Key Signing model into
the existing PKIX model, a new structure, the Key Endorsement is
proposed instead. While it is technically possible to use PKIX cross
certificates as a substitute for PGP Key Signing, the legacy PKIX
infrastructure is designed on the assumption that a cross certificate
is either completely trustworthy or completely untrustworthy. It is
better to introduce a new data format to represent new semantics than
to attempt to retrofit new semantics into a legacy format with a
deployed infrastructure.
I propose the introduction of a Key Endorsement, similar to a PKIX
certificate except that:
* A Key Endorsement is signed by a PKIX end entity certificate,
not a Key Signing Certificate.
* Instead of a validity interval, there is an issue time.
* The signer and subject key identifiers are required elements of
the structure rather than optional extensions.
* Key endorsements are not intended for direct use by relying
applications.
Adding Key Endorsements to the PKIX model allows the use of both
trust models in combination.
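To make the shape of the proposal concrete, a non-normative sketch of the structure as a Python dataclass follows; the field names are illustrative and do not define a wire format.
   from dataclasses import dataclass

   @dataclass
   class KeyEndorsement:
       # The signer and subject key identifiers are required fields, not
       # optional extensions as in a PKIX certificate.
       signer_key_identifier: bytes
       subject_key_identifier: bytes
       # An issue time replaces the PKIX validity interval.
       issue_time: int               # e.g. seconds since the epoch
       # Signed by a PKIX end entity certificate, not a CA certificate.
       signature_algorithm: str
       signature: bytes
       # Endorsements are inputs to trust evaluation and are not
       # presented directly to relying applications.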
3.2.3.1. Peer Endorsement
Key endorsement may also be used to endorse keys of peers. While the
ability of the 16 year old Alice to accurately validate the public
keys of her peers is questionable, the value of a notarized
endorsement increases over time.
Relying on peer endorsement alone requires that the relying party be
'close' to the signer. Using a combination of CA issued certificates
and Key Endorsements allows a high social work factor to be
established even for a remote population. It is quite feasible for an
attacker to generate a network of one, a hundred or even a million
key signing events but generating a fraudulent network that contains
a mixture of CA validated certificates and key endorsements has a
much higher social work factor.
3.2.3.2. Self Endorsement
Self endorsement is a special form of endorsement as a person is
least likely to make false statements that might harm their own security, although this remains possible in cases where coercion or undue psychological pressure is applied.
A user may be expected to have multiple public keys issued over the
course of their life, for use at school, university, at different
employers and for personal use. In the existing PKIX model each of
these keys is independent. Key endorsement allows a user to
countersign new keys with older keys, establishing a personal web of
trust that develops over time so that the social work factor is
preserved and increased over time rather than starting from scratch
each time a certificate is issued.
For example, Alice is issued a certificate at school which she uses
to sign a key endorsement for a personal key she creates. When Alice
goes to University, she endorses her university key with her personal
key and vice versa. On graduating, Alice becomes a journalist.
Sources can send her encrypted messages containing tips confident in
the knowledge that Alice is the same Alice who attended the high
school and university listed in her official biography.
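A toy sketch of the chain Alice accumulates over time; the key names and the endorse helper are invented for illustration, and a real deployment would use signed structures such as the Key Endorsement sketched earlier.
   # Sketch: each new key is endorsed by an earlier key, so the social
   # work factor accumulated by the old keys carries over to the new one.
   endorsements = []       # (signer key, subject key, year) tuples

   def endorse(signer, subject, year):
       endorsements.append((signer, subject, year))

   endorse('alice-school-key',     'alice-personal-key',   2006)
   endorse('alice-personal-key',   'alice-university-key', 2010)
   endorse('alice-university-key', 'alice-personal-key',   2010)
   endorse('alice-personal-key',   'alice-press-key',      2014)

   # A relying party can walk the chain from the press key back to the
   # earliest notarized endorsement.
   print(endorsements)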
3.2.3.3. Key Endorsement Parties
A PGP Key Signing party is an event held to facilitate exchange of
PGP fingerprints. Like the use of hashtags and many other important
constructs in social media, key signing parties are a practice that has arisen out of use rather than being part of the original model.
Recognizing a Key Endorsement Party as being a special form of peer
endorsement enables special consideration when making a trust
evaluation. It is not practical for a thousand attendees at an
international conference to perform mutual key endorsements with
every other participant (a half million pairs!) but it is entirely
practical to establish a scheme in which anyone who gets their key
endorsed by some number (e.g. five) of qualified Key Endorsers will
have their key endorsed by the Key Endorsement Party key at the end.
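The arithmetic behind the half million pairs remark, together with the threshold alternative (five is the example value used above), in a short sketch:
   attendees = 1000
   pairwise  = attendees * (attendees - 1) // 2   # 499,500 mutual signings

   threshold = 5                                  # endorsements per attendee
   with_party_key = attendees * threshold         # 5,000 peer endorsements,
                                                  # plus one party key
                                                  # endorsement per attendee
   print(pairwise, with_party_key)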
3.3. Trust Meta Assertions
Trust Meta Assertions are trust assertions that make statements about
other trust assertions.
3.3.1. Revocation and Status Checking
Real users and real administrators make mistakes from time to time.
Private keys are lost or stolen or misused. Any system that does not
provide a mechanism for forgiving mistakes is unlikely to be
practical in the real world.
Revocation checking limits the incentive for attack by allowing the
time window of vulnerability to be limited in the case that a certificate is fraudulently issued or that a private key is discovered to
be lost or stolen. This does not increase the social work factor but
decreases the value of an attack.
3.3.2. Notarization
Notarization of a Trust Assertion or Key Identifier by a trustworthy notary allows the social work factor of forging the trust assertion
or key identifier to be raised to an infeasible level for dates after
the notarization took place.
While a traditional notary might be suborned relatively easily, a
digital notary can be constructed in such a fashion that post dating
a notary assertion by more than a few hours or days is infeasible.
3.3.3. Transparency
Certificate Transparency (Laurie et al.) is a proposed infrastructure in which every issued certificate is published in a publicly auditable, append-only log.
Publishing issued certificates increases the probability that an
attempted fraud will be discovered and thus increases the Social Work
Factor and reduces the incentive.
3.4. Other Approaches
The X.509/PKIX and OpenPGP infrastructures are not the only
infrastructures that have been proposed but they are the only large scale infrastructures on which large numbers of users rely today. The
Social Work Factor over time may be used to evaluate alternative PKI
options that have not (yet) achieved widespread use.
3.4.1. DNSSEC
Like PKIX, DNSSEC is based on a hierarchical trust model. Unlike
PKIX, DNSSEC is only capable of making statements about DNS names.
The Social Work Factor is not a useful tool to analyze DNSSEC
assertions because there is no historical dimension to the DNSSEC
infrastructure. The trustworthiness of a DNSSEC assertion depends on
the trustworthiness of the root operator, the TLD registry and the
party that signed the corresponding DNS zone. If these are
trustworthy then so is the assertion; if they are not, there is no alternative.
While the root and registry operators have strong commercial
incentives not to default, a default may be coerced through
government action and relying parties have no independent means to
determine if a default has taken place.
3.4.2. SPKI / SDSI
The chief distinguishing characteristic of SPKI is that SPKI names
are not universal. While this has the interesting effect of
simplifying the evaluation of trust within the SPKI naming
infrastructure, this effect is lost when attempting to send an email
because the Internet email system is based on the assumption that the
namespace is universal. It does not make sense to talk about 'Alice's
bob@example.com' (although UUCP email did use a scheme of that type).
3.4.3. Identity Based Cryptography
Identity based cryptography is a frequently touted 'alternative' to
conventional public key cryptography in which the public key of each
subject is a deterministic function of their name and a master public
key which is known to everyone. Each user obtains their private key
from the party that created the master public key pair using the
master private key.
While many benefits have been claimed for the Identity based
approach, applying to the holder of the master private key for a
private key offers no benefits to key holders over applying for a certificate from a CA.
Identity based cryptography does offer relying parties the ability to
obtain the public key for a counterparty without the need to
communicate with another party, but this is hardly much of an
advantage unless there is no means for certificates to be passed in-band, and there is no advantage at all if an external source has to be
queried to obtain the status of a public key to determine that the
private key has not been reported lost or compromised.
The simplicity of certificate chain validation and status checking in
Identity Based Cryptography is the result of the technology being
unable to support these features rather than the features being
unnecessary.
4. Maximizing Social Work Factor in a Notary Infrastructure
The ability to determine that a trust event occurred before a certain
point in time increases the social work factor for forging the event
after that point in time. If suborning the notary is infeasible, the
Social Work Factor is raised to an infeasible level.
Haber and Stornetta proposed a notary that produced a 'catenate
certificate' in which each notary output is fed as an input into the
next. It is thus impossible to insert notary events between two prior
notary events without breaking the cryptographic algorithm used for
authentication.
A chain of notary events may be fixed in time by notarizing events
that are unpredictable in advance but known with a high degree of
certainty afterward. For example the weekly lottery numbers or the
closing prices on various equity markets.
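A toy sketch of a catenate log in this style: each entry's digest covers the previous digest, so entries cannot be inserted between existing ones, and an unpredictable public value can be logged to pin the chain in time. The use of SHA-256 and the record layout are assumptions of the example.
   import hashlib, time

   class CatenateLog:
       # Each output digest covers the previous digest, so no entry can
       # be inserted between two existing entries without breaking the
       # hash function used for authentication.
       def __init__(self):
           self.head = b'\x00' * 32
           self.entries = []

       def notarize(self, payload: bytes) -> bytes:
           record = self.head + int(time.time()).to_bytes(8, 'big') + payload
           self.head = hashlib.sha256(record).digest()
           self.entries.append((record, self.head))
           return self.head

   log = CatenateLog()
   log.notarize(b'trust assertion #1')
   # Pin the chain in time with a value unpredictable in advance but
   # publicly verifiable afterwards (placeholder shown here).
   log.notarize(b'anchor: weekly lottery draw, placeholder value')
   print(log.head.hex())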
The principal drawback to grounding the notary chains using external
events is that the ability of a party to verify the trustworthiness
of the notary is bounded by the trustworthiness of the reports of the
external events used for verification.
A better approach is to establish a large number of independently operated notaries and a procedure for cross notarization that ensures that no notary can default unless every
other notary defaults. The Social Work Factor of suborning any notary
then becomes the Social Work Factor of suborning every notary and
erasing all histories for their notary events.
For ease of explanation, a two tier notary infrastructure is
envisaged in which a notary is either a local notary or a meta
notary.
Local notaries are notaries that produce a catenate log of notarization requests submitted by their users. The current value of the notary chain is presented to one or more meta notaries at regular time intervals (e.g. an hour) to prevent backdating of notary claims, and each local notary notarizes the output of one or more meta notaries at regular intervals to prevent predating.
Meta notaries are notaries that only notarize the requests submitted
by local notaries and by other meta notaries. Before accepting a
notarization request, a meta notary audits the actions of the notary
making the request to ensure that the time-stamp values etc. are
consistent and correct.
The schedule of peer to peer notarization among meta notaries is set
such that a notary request made to any of the local notaries served
will affect the output of every meta notary within a predetermined
period of time.
To make use of such a notary infrastructure, a relying party chooses
at least one meta notary and obtains and maintains a record of the
catenate certificate chain over time. The trust chain of the chosen
meta notary may then be used to verify any notary assertion presented
by any other meta notary which may in turn be used to verify any
notary assertion from a local notary that participates in the scheme.
A significant benefit of this approach is that the ability to verify
notary assertions is assured even if the local notary that originally produced them ceases functioning. All that is necessary for the
continued operation of the system to be assured is for the pool of
meta notaries to be sufficiently large to render suborning all of
them infeasible.
5. Conclusions and Related Work
It has not escaped the notice of the author that the social work
factor might be applied as a general metric for assessing the
viability of a political conspiracy hypothesis.
Author's Address
Phillip Hallam-Baker
Comodo Group Inc.
philliph@comodo.com