Since I’m editing the User-Managed Access (UMA) Core spec, I seem to get a steady stream of questions about UMA. I’ve also noticed that some folks in the IETF OAuth2.0 WG have not really understood the UMA flows (not surprising, since the UMA Core Rev 5C is now over 40 pages long). So I thought a very high-level tutorial would benefit lots of people.
So here goes Part 1 of the tutorial.
When a Requester (acting on behalf of a Requesting Party) seeks access to resources sitting at the Host, the Requester must first obtain an OAuth2.0 token from the Authorization Manager (AM). In this exchange the Requester behaves like an OAuth2.0 Client, while the AM behaves like an OAuth2.0 Authorization Server (AS). The resulting token is referred to in the UMA specs as the Authorization API Token (AAT): it signals that the AM has authorized the Requester to use the AM’s other APIs at a later stage.
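To make that first step concrete, here is a minimal sketch in Python (using the requests library) of a Requester fetching its AAT. Everything in it (the AM’s token endpoint URL, the client credentials, the grant type, and the scope name) is a hypothetical placeholder; the UMA Core spec defines the actual profile.

```python
# Hypothetical sketch: Requester obtains an AAT from the AM's OAuth2.0
# token endpoint. URL, credentials, grant type, and scope name are all
# illustrative assumptions, not values taken from the UMA spec.
import requests

AM_TOKEN_ENDPOINT = "https://am.example.com/oauth2/token"  # hypothetical

resp = requests.post(
    AM_TOKEN_ENDPOINT,
    data={
        "grant_type": "client_credentials",  # one plausible OAuth2.0 grant
        "scope": "uma_authorization",        # illustrative scope name
    },
    auth=("requester-client-id", "requester-client-secret"),  # hypothetical
)
resp.raise_for_status()
aat = resp.json()["access_token"]  # the Authorization API Token (AAT)
```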
After the Requester gets its own AAT, it must obtain another token specific to the Requesting Party (call him Bob). This token is referred to as the Requester Permission Token (RPT). Later, when Bob (the Requesting Party) seeks to access resources at the Host (e.g. MyCalendarsz.com) by employing the Requester (e.g. TripItz.com), the Requester must forward both tokens (the AAT and the RPT) to the Host.
The reason two tokens are needed is that there might be multiple Requesting Parties (e.g. Bob, Billy, Becky, etc.) at the same Requester (TripItz.com) seeking to access the same resources at the same Host. The Requester, however, needs to be authorized by the AM only once to access the AM’s APIs; each Requesting Party then gets his or her own RPT.
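Putting the two tokens together, here is a hedged sketch of the later steps: the Requester uses its AAT to obtain an RPT for Bob from the AM, then presents both tokens at the Host. The endpoint URLs, the response field, and the header carrying the AAT are all inventions for this sketch; the spec defines the actual framing.

```python
# Hypothetical sketch: obtain an RPT for Bob, then call the Host with
# both tokens. Endpoints, response fields, and the AAT header are
# illustrative assumptions, not taken from the UMA spec.
import requests

AM_RPT_ENDPOINT = "https://am.example.com/uma/rpt"        # hypothetical
HOST_RESOURCE = "https://mycalendarsz.com/calendars/bob"  # illustrative

aat = "example-aat"  # obtained earlier, as in the previous sketch

# Ask the AM for a Requester Permission Token on behalf of Bob; the AAT
# proves the Requester is allowed to use the AM's authorization API.
rpt_resp = requests.post(
    AM_RPT_ENDPOINT,
    headers={"Authorization": f"Bearer {aat}"},
)
rpt_resp.raise_for_status()
rpt = rpt_resp.json()["rpt"]  # illustrative response field

# Call the protected resource at the Host, presenting both tokens.
resp = requests.get(
    HOST_RESOURCE,
    headers={
        "Authorization": f"Bearer {rpt}",  # RPT, specific to Bob
        "X-UMA-AAT": aat,                  # hypothetical header for the AAT
    },
)
print(resp.status_code)
```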
[In the next post, we'll see how the Host has to go through the same basic registration with the AM.]
Aaron Titus writes an interesting piece based on his analysis of the recent proposal from Trent Adams (PayPal) to modify the NSTIC governance rules. The abolition of the NSTIC Privacy Standing Committee may have an unforeseen impact on the acceptance of the whole NSTIC Identity Ecosystem idea, notably on the privacy front.
During the last decade, starting from the Liberty vs. Passport kerfuffle, we have seen a number of proposals for components of an “identity infrastructure” for the Internet. All in all, there has been little adoption (by consumers) of these technologies for high-value transactions, due IMHO to the lack of privacy-preserving features.
So far I have yet to see a sustainable business model for identities that is focused on the “individual” (i.e. individual-centric) and that preserves his/her personal data. All the agreements and EULAs that we click “yes” to seem to be tilted in favor of the provider. If a provider “loses” my personal information (including credit-card information), there is really little incentive (positive or negative) for them to recover my data. The individual suffers all the losses. Little wonder there is no buy-in from the consumer.
Today NSTIC started its two-day Ecosystem Steering Group meeting in Chicago. I never thought that dialing in all day would be so tiring. Glad that the group (of about 300 people, half in person and half virtual) got over the initial confusion about voting for the candidates and dealing with proposed changes to the Charter and Bylaws.
It’s kinda late, but here is a link to the recent interview with Dana Gardner from ZDNet. On the panel were Jim Hietala (VP of Security, Open Group), Dazza Greenwood (Civics.com & MIT), and myself. This was in preparation for the Open Group conference in DC.
More info on the Open Group conference can be found here.
So the news this week was that Eran has decided that OAuth2.0 is a bad specification and wants nothing to do with it. It’s a bit too late to complain about OAuth2.0. It’s out there, and it’s being used as the basis for many other protocols, such as OpenID-Connect and UMA. It’s going to stay around for a while, and perhaps even evolve further. It’s a workable solution for the current generation of web application APIs.
John Bradley got it right: the OAuth2.0 sky is not falling.
PS. I don’t know why people are so upset about the IETF process (see comments by Eran and responses by other folks). How many people in the OAuth WG were around for the creation of the IPsec protocol? What about the IKE protocol (starting from Hilarie Orman’s OAKLEY draft)? All in all it took at least five years. Not to mention the PKIX WG (still around after 15 years). This *is* the IETF process. Love it or leave it.
For those who knew RL “Bob” Morgan, below are links (sent by Mike Gettes at CMU) where you are invited to share thoughts and memories of Bob:
- Professional relationships with Bob: https://spaces.internet2.edu/display/rlbob/Home
- Personal and Family relations: https://spaces.internet2.edu/display/rlbob/Personal+Family+Life
RL Bob made significant contributions to several key specs, including the SAML2.0 specs. He was also a key contributor to one of the important whitepapers produced by the MIT Kerberos Consortium.
Earlier this year I invited RL Bob to our 2012 Kerberos conference at MIT, but he kindly declined, for obvious reasons.
RL Bob will be sorely missed.
For SAML2.0 developers, users, and vendors, it is perhaps worth noting that the OASIS Security Services TC (SSTC) has started the process of revising the SAML2.0 specs.
Here is what the SSTC group has agreed to so far:
- All approved errata, along with any errata presented to the TC subsequent to the last approval, are to be applied to the specifications, or the specifications may be reworded to include the spirit of the errata identified.
- All original SAML 2.0 message formats are intended to remain unchanged in the new version except in cases where outright errors existed and were corrected through errata or subsequent specifications. This includes preservation of core XML namespaces.
- To the greatest extent possible, existing implementations of SAML 2.0 features should be compatible with the new standard, and any areas of divergence should be minimized and clearly identified.
- Some extensions and profiles published after SAML 2.0 ought to be incorporated in some fashion into the base standard to promote adoption and reduce the number of documents needed to address critical features.
- Significant changes to the Conformance statements for the standard are to be expected. We do not expect that every new feature or existing extension would be made mandatory to implement.
- Material related to a variety of threats implementers ought to be aware of should be drafted and incorporated.
Please visit the wiki containing the SAML2Revision plans. The SSTC is seeking input from the broader SAML2.0 community.
So the topic of “trust” always generates a million emails on various lists. Rather than rolling my own definition, I thought I’d borrow a good one from the Trusted Computing Group community (courtesy of Graeme Proudler of HP Labs, UK).
It is safe to trust something when:
- It can be unambiguously identified.
- It operates unhindered.
- The user has first-hand experience of consistent, good behavior.
This is a definition of “technical trust”, namely “trust” in the mechanics of some computation (e.g. a cryptographic computation); in this case it refers to the TPM hardware. Note that “unhindered operation” is paramount for technical trust. This is still somewhat of a challenge for software (e.g. think multi-tenant clouds and VMs).
Eve Maler has devised a very useful diagram (for our Google TechTalk presentation) comparing the features and intended purposes of OAuth2.0, OpenID-Connect, and UMA. Interestingly, the diagram also shows what can be achieved using the Venn combinations of two out of the three technologies.
At lunch today Sal summarized in one sentence what I have been trying to express for the last couple of years:
There is a market out there for leakage in derived identities (in the Internet)
What we had been talking about was the (inevitable) need for something similar to what the Jericho Forum folks call a Core Identity. In simple terms, this is the notion that every entity/person should have a “main” secret identity (like a confidential SSN), from which other usable identities (personas) are derived via a one-way function. (See the Jericho Forum “Identity Commandments”.)
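As a toy illustration of the one-way derivation (not a construction the Jericho Forum prescribes), here is a short Python sketch that uses HMAC-SHA256 as the one-way function; the secret, the persona labels, and the construction itself are all assumptions for the example.

```python
# Minimal sketch of the "core identity -> derived persona" idea, assuming
# HMAC-SHA256 as the one-way function. Secret and labels are illustrative.
import hmac
import hashlib

def derive_persona(core_identity_secret: bytes, persona_label: str) -> str:
    """Derive a usable persona identifier from the secret core identity.

    The core identity secret never leaves its holder; only the derived
    value is shared. Because HMAC is one-way, a relying party that sees
    a persona cannot recover the core identity, and two personas (e.g.
    "home" and "work") cannot be linked to each other.
    """
    mac = hmac.new(core_identity_secret, persona_label.encode(), hashlib.sha256)
    return mac.hexdigest()

core_secret = b"confidential-core-identity-secret"  # analogous to a secret SSN
home_persona = derive_persona(core_secret, "home")
work_persona = derive_persona(core_secret, "work")
```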
The question was who was going to manage the issuance and maintenance of the core identities of US citizens, and how. Since the US federal government is the issuing authority for social security numbers, one possibility would be for the federal government to also be the issuing and maintenance authority for core identities (independent of whether the day-to-day management was actually outsourced to private-sector organizations).
The “leakage” here refers to the obtaining (e.g. scraping off the Internet) of pieces of information about one of the personas stemming from my core identity. For example, in my “home” persona the location of my house may be of interest to advertisers doing targeted marketing in my neighborhood. We can call this “leakage” because I never authorized the release (and usage) of my home persona to the relying party (i.e. the marketing company).