May 26, 2005

Seven Laws for seven brothers

Notwithstanding my initial reaction to and further comments about the Seven Laws of Identity, found on Kim Cameron's blog, I finally took a good read through The Laws of Identity (5/19/2005 ver.). I think it is both a very well thought-out attempt to codify the constraints and conditions for an identity metasystem and a damn fine piece of writing. Literate beyond the norm. All of the primary and secondary authors and contributors are to be commended for their work.

Having said that, I stand by my earlier statement (from here):

Assuming away critical system conditions to develop the mechanics may be a necessary evil at this stage, but can't continue much longer. I specifically refer to the stance taken by technology-centric solution developers that their concern is not the integrity of the identity and initial credentialing. Rather, the solution assumes proofed inputs suitable for "trust" to develop. Thus, STS or what have you can exchange credentials and tokens satisfying the mechanical aspects of questioning, presenting, and authenticating, etc. "assertions." [sic - I recognize now that the chosen description is "claims."]
Let me expand. It is beyond doubt that The Laws were developed from, and prepared for, the technical-mechanical needs of digital identity. The authors signal this framework and perspective by referring throughout the document to an "identity fabric": a loosely coupled, complex ecosystem that (begins to) satisfy the needs for identity in digital contexts. This is supported with statements such as: "Why is it so hard to create an identity layer for the Internet? Mainly because there is little agreement on what it should be and how it should be run." (p. 2) It's a double whammy: the "identity layer" is a proxy for the digital identity system at large, and the "main" reason for its absence is the lack of agreement about mechanics. The perspective and scope of the paper are made explicit with:
we specifically did not want to denote legal or moral precepts, nor embark on a discussion of the "philosophy of identity." (p. 4)
and
Matters of trust, attribution and usefulness can then be factored out and addressed at a higher layer in the system than the mechanism for expressing digital identity itself. (p. 6)
That is a respectable limitation of scope and I have no quarrel with it. Others (who: me? the government? you?) will deal with such esoterica. But it does bring me back to the quibble I noted above: there is an implicit assumption that identity validation will be done by some means beyond scope so that the technical mechanics will work. Not to belabour the point, but again the authors propose that:
. . . Our claims-based approach succeeds in this [very limited claim] regard. It permits one digital subject . . . to assert things about another digital subject without using any unique identifier.

This definition of digital identity calls upon us to separate cleanly the presentation of claims from the provability of the link to a real world object.

Our definition leaves the evaluation of the usefulness (or the truthfulness or the trustworthiness) of the claim to the relying party. (p. 6)

Don't get me wrong. Each of the Laws is individually a strong statement, and collectively they represent a nice set of rules within which developers et al. can be assured of weaving a good contribution to the identity fabric. I also think that the mere publication of a language, with definitions, is in itself a giant step forward. But, so long as "a set of claims by one digital subject about another" is the heart of digital identity, authentication and liability (extended to the natural logical fullness of a 1:1 mapping of physical individual to a digital "identity" -- avatar, proxy, or what-have-you -- that uniquely and digitally represents a single, real individual from which identity "facets" can be derived) will continue to cycle back and impose themselves. To be sure, I am referring only to the subset of identity subjects that are Homo sapiens. Regardless, it is a different challenge; one that, I would argue, needs to be addressed concurrently with this development. If some group is focused on this matter, please let me know, as I would gladly join their conversation.
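
To ground what I mean when I say claims are the heart of it, here is a minimal sketch -- in Python, with names entirely of my own invention, matching neither the paper nor any real token format -- of a digital identity as "just a set of claims," with evaluation left, as the authors intend, to the relying party:

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Claim:
        """A statement made by one digital subject about another."""
        issuer: str     # the digital subject making the claim
        subject: str    # the digital subject the claim is about
        attribute: str  # e.g., "age_over_18", "employee_of"
        value: str

    @dataclass
    class DigitalIdentity:
        """Per the paper's definition: a set of claims, nothing more."""
        subject: str
        claims: list[Claim] = field(default_factory=list)

    def evaluate(identity: DigitalIdentity, trusted_issuers: set[str]) -> list[Claim]:
        """The relying party -- not the system -- decides which claims are
        useful, truthful, or trustworthy. Here that judgment is reduced to
        a crude 'do I trust the issuer?' filter."""
        return [c for c in identity.claims if c.issuer in trusted_issuers]

Note what is deliberately absent: nothing above establishes that "subject" maps to one and only one real Homo sapiens. That is exactly the out-of-scope link I keep circling back to.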

I'm not sure exactly how to assess or respond to a statement such as this:

The truth and possible linkage is not in the claim but results from the evaluation. If the evaluating party decides it should accept the claim being made, then this decision just represents a further claim about the subject, this time made by the evaluating party (it may or may not be conveyed further). (p. 6)
It's unreasonable to disagree with the first part: the truth is not in the claim but in the evaluation. Put another way, trust exists not because I say "trust me" but because you choose to trust me (and hope for a good result). Trustworthiness results from iterating this step and accumulating these claims by others, creating a reputation for trustworthiness and making the initial claim -- by me, to trust me -- more effective and valuable. Yet the logic of the argument may fall down in practice.

We take a risk any time we make a decision: the risk is ours and ours alone. In this case it is the risk of accepting another's claim about his/her identity and the integrity of that claim. To mitigate the risks associated with accepting a self-asserted claim, we tend to seek corroborating or refuting evidence from other sources and by other means. We seek references and third-party reports. We look to history. If we are with the self-asserting claimant (identity subject), we have a sniff of their body aroma, look deeply into their eyes for signs of deception, ask for other proof to satisfy our skepticism. Only in the absence of any other such evidence (and in anticipation of a more valuable reward) do we take a self-made claim at face value and accept the risk. So, my problem with the statement is that it is logically valid but, in practice, misses a crucial part of the claim-substantiate-evaluate-accept dynamic of seeking "truth" about the identity subject.
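
If I sketch that dynamic in the same toy terms as above (the corroboration rule and the threshold are mine, purely illustrative, and the Claim type is the one from the earlier sketch), it might look like this:

    from typing import Optional

    def accept_claim(claim: Claim,
                     corroborating_claims: list[Claim],
                     required_corroboration: int = 2) -> Optional[Claim]:
        """Claim-substantiate-evaluate-accept, reduced to a toy rule.

        A self-asserted claim (issuer == subject) is taken at face value
        only when enough independent issuers say the same thing. The
        acceptance itself is returned as a *new* claim, this time made by
        the evaluating party -- which is the paper's point on p. 6."""
        independent_issuers = {c.issuer for c in corroborating_claims
                               if c.issuer != claim.subject
                               and c.attribute == claim.attribute
                               and c.value == claim.value}
        self_asserted = claim.issuer == claim.subject
        if self_asserted and len(independent_issuers) < required_corroboration:
            return None  # decline the risk; keep looking for evidence
        return Claim(issuer="relying-party",
                     subject=claim.subject,
                     attribute=claim.attribute,
                     value=claim.value)

The authors' logic survives intact -- acceptance is itself a further claim -- but the substantiation step, the hunt for corroborating or refuting evidence, is where the practical work of seeking "truth" actually lives.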

I believe in user control and consent, and I really love the user-control statement of dogma, "It is essential to retain the paradigm of consent even when refusal might break a company's conditions of employment." (p. 6) This may very well be a fundamental requirement for maintaining the strength of user control and consent, and it is an admirable ambition. I, for one, question just how many people will choose to stand on this principle in the face of termination for insubordination. The implication, then, is that the bad forces of government and regulation would have to rear their ugly heads and create law that protects workers in this instance.

I think that minimal disclosure for a constrained use is essential for privacy and user control, which, presumably, is what drives Law no. 2. The statement, "There is no longer the possibility of collecting and keeping information 'just in case' . . ." [emphasis mine], however desirable and logical an outcome of a need-to-know, minimal distribution of information, is not a matter of technical mechanics. It is, as everyone doubtless knows, a matter of policy and practice. Somewhere I read, not all that long ago, that two of the non-obvious forces driving the creation of massive directories and databases -- about people -- are that (a) thanks to computing capability it's easy to accumulate rich records over time and (b) thanks to cheap storage there's no disincentive to keep accumulating information. These, together with the underlying belief that "information is power" and all the other marketing- and security-driven forces for the creation of directories, may be a little bit more than the principle of minimal disclosure can overcome, methinks.
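
For what it's worth, the mechanics of minimal disclosure are themselves easy enough to sketch (again in the toy terms above, with a made-up issuer and the Claim type from the first sketch); it is the retention policy around them that the Law cannot enforce:

    from datetime import date

    def minimal_age_claim(subject: str, birthdate: date, threshold: int = 18) -> Claim:
        """Minimal disclosure for a constrained use: the relying party asked
        only whether the subject is over the threshold, so that is all it
        gets. The birthdate itself -- which a site might be tempted to keep
        'just in case' -- never leaves the issuer."""
        today = date.today()
        age = today.year - birthdate.year - (
            (today.month, today.day) < (birthdate.month, birthdate.day))
        return Claim(issuer="some-identity-provider",
                     subject=subject,
                     attribute=f"age_over_{threshold}",
                     value=str(age >= threshold))

Nothing in that sketch stops the relying party from logging whatever it receives, forever. That, again, is policy and practice, not mechanics.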

Under Law no. 3 the authors write, "the system must be predictable and 'translucent' in order to earn trust." (p. 7) Wonderfully said. It also corresponds, nicely if not directly, with a statement made by lawyer Geoffrey Rosen:

Privacy is not primarily about secrecy. It's about opacity. It's about the difference between information about a person and knowledge of that person. Privacy is the ability to protect parts of ourselves in different contexts.

Time, space, and direction are, as I recall, three crucial quantities in physics. Law no. 4, raising the matter of informational direction, is an excellent observation. The framework, vis-à-vis identity, may very well benefit from the obvious addition of time (to some extent arguably addressed by Law no. 2) and space (touched on at least a little in Law no. 3). This may have been the intent of these three Laws -- or not. No matter; it works very nicely.

Human integration (Law no. 6) will prove to be, I suspect, the most challenging of the Laws to satisfy. Not only does it require general adherence by, and acceptance of, millions upon millions of unique individuals, but also a change in human behaviour. [note the proper Canadian spelling] Which is not to say it can't or won't be done, only that it will take the most time and effort to achieve. Within the section on Law no. 6 is a statement in reference to air traffic communication: "The limited semiotics of the channel mean there is a very high reliability in communications." (p. 10) Is this really a matter of semiotics? Regardless, the strength of the causal relationship asserted in the statement is questionable. It may very well be that the long-established standards for the specific-use language (and the rigorous education of all new participants in the structure and meaning of that language), together with each actor's awareness of his or her role and dialogue, have as much or more to do with the reliability of these communications than does the limited opportunity for interpretation in that language. It might also be that the intense -- arguably exclusive -- focus on the task at hand (i.e., no bleeding in of other discussion carried in a specific language with overlapping and inconsistent meanings) carries more of the causal weight. It may also be that this is exactly what the authors meant.

Further down the page on the same subject the authors make a critical observation:

But we definitely don't want unintended consequences when figuring out who we are talking to or what personal information to reveal. (p. 10)
True enough, and I would extend that caution to include being wary of what unintended consequences might arise in the system (meta or otherwise) as a whole. Of course, a rigorous process designed now to achieve high reliability in communications may challenge the earlier caveat that the metasystem need not require "the whole world to agree a priori." (p. 3)

It's worth noting that frameworks, words, and system designs all include inherent biases which, like initial conditions, can have a significant effect on outcomes (outcomes being dependent on those initial conditions). Here again, at the end of the last block quotation, we see a bias toward information ("claim") distribution. There is, or should be, an offsetting concern about claim acceptance, as I've flogged above.

Law no. 7 addresses the future with an axiomatic statement that in a unifying system comprising multiple contexts the experience ought to be consistent. Sometimes it's essential to hit the bull between the eyes with a 2x4, and in this rapidly evolving area that may be what's required. I have no vendor agenda, so in my view there can be no qualm or disagreement here. The observations and questions I have are more about how this part of the argument could be clarified.

First, several examples of contextual identity choices are provided (p. 11). Although the full range of contexts and identity choices may approach infinity, I wonder whether this set isn't a good start toward a classification of contexts. For the sake of standardization and order, would it not make sense to classify identity strength, breadth, requirement, or what-have-you in some way, whether they are called tiers (as Andre Durand proposed), layers, or classes . . . ?
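
Something as simple as the following -- the labels are mine and purely illustrative, not Durand's tiers or anything proposed in the paper -- might be a starting point for such a classification:

    from enum import Enum, auto

    class IdentityTier(Enum):
        """One possible ordering of contexts by the strength of proofing
        a relying party can reasonably demand."""
        SELF_ASSERTED = auto()        # browsing, newsletters, casual forums
        COMMUNITY_VOUCHED = auto()    # reputation built within one community
        ORGANIZATION_ISSUED = auto()  # employer, merchant, or financial credentials
        GOVERNMENT_PROOFED = auto()   # credentials tied to a proofed legal identity

Whatever the labels, having an agreed set of them would serve the standardization and order I am asking for.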

Second, the example of the "personal" identity context is explained as "a self-asserted identity for sites . . ." The operative adjective here is "self-asserted." I suspect there is only a finite number -- and the limit is not especially far away -- of places and activities where self-assertion without some form of support (see above) will be satisfactory. Yet in the exemplar set there is no hint -- not even in the "community" identity -- that such a personal, social (i.e., non-enterprise) identity might be contextually required, let alone relevant. If so, then we have to circle back around to initial authentication (not to bang a drum too hard).

Third, the final paragraph of the section says:

As users, we need to see our various identities as part of an integrated world which none the less respects our need for independent contexts. (p. 11)
Bravo! Under the definition of the term "identity" used here, and our understanding of what independent contexts might constitute, this statement is perfectly straightforward and nicely summarizes the sentiment of the Law. It is also generally right, as I acknowledged at the outset of these observations. My only caution would be that in the naturally disconnected contour of digital identity that will evolve broadly among businesses, governments, etc., the risk of identity multiplicity -- contradictory duplicate identities hidden in corners and distant reaches of the environment -- is real. The risk of systemic contamination as a result is also real.


So as I come to realize that my response may, in fact, be longer than the original document, I want to reiterate that the work is an impressive start. That there is something to disagree with -- or fawn over -- is an important step. As should be obvious by now, my perspective leads me to argue that there has to be corresponding work in the policy and philosophical realms to co-evolve with these Laws.

Posted by Grayson at May 26, 2005 09:07 AM