@FindThomas

Digital Identity, Trust and Privacy on the open Internet

Archive for the ‘Uncategorized’ Category

New Scientist article about MIT OpenPDS


(NB. I love it when people get it.  Hal Hodson definitely gets it.  Many folks at the MIT-KIT conference this week got it.)

 

Private data gatekeeper stands between you and the NSA

03 October 2013 by Hal Hodson

Magazine issue 2937.

Software like openPDS acts as a bodyguard for your personal data when apps – or even governments – come snooping

Editorial: “Time for us all to take charge of our personal data”

BIG BROTHER is watching you. But that doesn’t mean you can’t do something about it – by wresting back control of your data.

Everything we do online generates information about us. The tacit deal is that we swap this data for free access to services like Gmail. But many people are becoming uncomfortable about companies like Facebook and Google hoarding vast amounts of our personal information – particularly in the wake of revelations about the intrusion of the US National Security Agency (NSA) into what we do online. So computer scientists at the Massachusetts Institute of Technology have created software that lets users take control.

OpenPDS was designed in MIT’s Media Lab by Sandy Pentland and Yves-Alexandre de Montjoye. They say it disrupts what NSA whistleblower Edward Snowden called the “architecture of oppression”, by letting users see and control any third-party requests for their information – whether that’s from the NSA or Google.

If you want to install an app on your smartphone, you usually have to agree to give the program access to various functions and to data on the phone, such as your contacts. Instead of letting the apps have direct access to the data, openPDS sits in between them, controlling the flow of information. Hosted either on a smartphone or on an internet-connected hard drive in your house, it siphons off data from your phone or computer as you generate it.

It can store your current and historical location, browsing history, content and information related to sent and received emails, and any other personal data required. When external applications or services need to know things about you to provide a service, they ask openPDS the question, and it tells them the answer – if you allow it to. People hosting openPDS at home would always know when entities like the NSA request their data, because the law requires a warrant to access data stored in a private home.

Pentland says openPDS provides a technical solution to an issue the European Commission raised in 2012, when it declared that people have the right to easier access to and control of their own data. “I realised something needed to be done about data control,” he says. “With openPDS, you control your own data and share it with third parties on an opt-in basis.”

Storing this information on your smartphone or on a hard drive in your house is not the only option. ID3, an MIT spin-off, is building a cloud version of openPDS. A personal data store hosted on US cloud servers would still be secretly searchable by the NSA, but it would allow users to have more control over their data, and keep an eye on who is using it.

“OpenPDS is a building block for the emerging personal data ecosystem,” says Thomas Hardjono, the technical lead of the MIT Consortium for Kerberos and Internet Trust, a collection of the world’s largest technology companies who are working together to make data access fairer. “We want people to have equitable access to their data. Today, AT&T and Verizon have access to my GPS data, but I don’t.”

Other groups also think such personal data stores are a good idea. A project funded by the European Union, called digital.me, focuses on giving people more control over their social networks, and the non-profit Personal Data Ecosystem Consortium advocates for individuals’ right to control their own data.

OpenPDS is already being put to use. Massachusetts General Hospital wants to use the software to protect patient privacy for a program called CATCH. It involves continuously monitoring variables including glucose levels, temperature, heart rate and brain activity, as well as smartphone-based analytics that can give insight into mood, activity and social connections. “We want to begin interrogating the medical data of real people in real time in real life, in a way that does not invade privacy,” says Dennis Ausiello, head of the hospital’s department of medicine.

OpenPDS will help people keep a handle on their own data, but getting back information already in private hands is a different matter. “As soon as you give access to that raw data, there’s no way back,” says de Montjoye.
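To make the article's "ask a question, get an answer" model a little more concrete, here is a minimal sketch of what a personal data store's query interface could look like. It is purely illustrative: the class, the method names and the sample question are my own invention, not the actual openPDS API.

```python
# A minimal, hypothetical sketch of the "answers, not raw data" idea --
# NOT the actual openPDS API; names and the sample question are invented.

class PersonalDataStore:
    def __init__(self):
        self._location_history = []       # raw data never leaves the store
        self._approved_questions = set()  # questions the owner has opted in to

    def record_location(self, lat, lon, timestamp):
        """Called by the owner's own devices as data is generated."""
        self._location_history.append((lat, lon, timestamp))

    def approve(self, question):
        """The owner opts in to answering a specific question."""
        self._approved_questions.add(question)

    def ask(self, requester, question):
        """Third parties receive computed answers, never the raw records."""
        print(f"request from {requester}: {question}")  # every request is visible to the owner
        if question not in self._approved_questions:
            return None                                 # not opted in: no answer
        if question == "was_owner_in_boston_today":
            return any(42.2 < lat < 42.4 and -71.2 < lon < -70.9
                       for lat, lon, _ in self._location_history)
        return None

# Usage: the owner approves one question; an app gets a yes/no answer only.
pds = PersonalDataStore()
pds.record_location(42.36, -71.09, "2013-10-03T09:00:00Z")
pds.approve("was_owner_in_boston_today")
print(pds.ask("weather-app.example", "was_owner_in_boston_today"))  # True
```

The point is the same as in the article: raw records stay inside the store, third parties only ever see the answers they have been allowed to ask for, and the owner can see every request.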

 

Written by thomas

October 9th, 2013 at 2:18 pm

Posted in Uncategorized

Intel’s foray into Personal Data


So this is getting very interesting: The world’s largest chip maker wants to see a new kind of economy bloom around personal data (article here).

It looks like Intel is entering the personal data & big data narrative. Given that Intel owns a considerable chunk of the motherboard & SoC real estate (think processors, discrete TPMs, AMT, etc.), they do indeed have access to the plumbing of my machine.

One question is whether hardware and chipset providers will begin to require end-users to agree to Terms of Service (allowing them to access data bits moving around the board). Such a move would complicate the user’s life. A typical person would then be forced to accept TOS and EULAs at four layers (at least):

  • The hardware layer.
  • The OS layer (think EULAs).
  • The application layer (think EULAs when installing office productivity tools).
  • The Service Provider (SP) and IdP layer (think click-through agreements when signing up for accounts).

 

 

Written by admin

May 23rd, 2013 at 4:33 pm

Posted in Uncategorized

NSA introduces two new lightweight ciphers (SIMON and SPECK)


MIT Media Lab – 2013 Legal Hack-a-thon on Identity

Today we had the privilege of hearing a presentation by Louis Wingers and Stefan Treatman-Clark on a couple of lightweight ciphers from the NSA. These are called SIMON and SPECK. The algorithms are not yet published, but they have a paper (pdf copy here) that reports performance numbers for the proposed ciphers.

The SIMON and SPECK algorithms come in a family of variants with block sizes ranging from 48 bits to 128 bits. Since the target deployment area is low-power, low-memory devices (e.g. RFID devices), the requirement is that these algorithms use no more than 2000 gates. The paper has a table showing the gate equivalents (GE) and throughput.
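Since the algorithms themselves have not been published, the snippet below is emphatically not SIMON or SPECK. It is just a generic add-rotate-xor (ARX) round on 32-bit words, the style of construction lightweight block ciphers tend to favour, and it illustrates why such designs map to so few gates: only modular addition, fixed-distance rotations and XOR are needed.

```python
# A generic, hypothetical ARX (add-rotate-xor) round on 32-bit words.
# This is NOT the SIMON or SPECK round function (unpublished at the time
# of this post); it only illustrates the class of operations involved.

MASK32 = 0xFFFFFFFF

def rotr32(x, r):
    return ((x >> r) | (x << (32 - r))) & MASK32

def rotl32(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK32

def arx_round(x, y, round_key, alpha=8, beta=3):
    """One made-up round: rotate, add modulo 2**32, XOR in a round key."""
    x = (rotr32(x, alpha) + y) & MASK32
    x ^= round_key
    y = rotl32(y, beta) ^ x
    return x, y

# Repeating such a round many times with different round keys is the basic
# recipe; only adders, fixed wiring for rotations, and XOR gates are needed.
x, y = arx_round(0x12345678, 0x9ABCDEF0, 0x0F0F0F0F)
print(hex(x), hex(y))
```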

Louis and Stefan presented SIMON and SPECK at the 2013 Legal Hack-a-thon at the MIT Media Lab.

 

 

Written by thomas

January 30th, 2013 at 9:27 pm

Posted in Uncategorized

IDESG Membership, ROA and IPR


After more than six weeks of drafting by the IDESG Governance subgroup, the IDESG Membership and Rules of Association (ROA) related documents are finally complete:

(1) Proposed Membership Agreement

(2) Proposed Intellectual Property Rights Policy

(3) IDESG Rules of Association

 

Key Dates and Times

  • Ballot on the Membership Agreement & IPR Policy opened at 12:00 noon ET on December 3, 2012
  • Ballot on the Membership Agreement & IPR Policy will close at 12:00 noon ET on December 17, 2012

Written by thomas

December 6th, 2012 at 5:52 pm

Posted in Uncategorized

UMA Tutorial – High Level (Part 1)


Since I’m editing the User-Managed Access (UMA) Core spec, I always seem to be getting questions about UMA. I’ve also noticed that some folks in the IETF OAuth2.0 WG have not really understood the UMA flows (not surprising, since the UMA Core Rev 5C is now over 40 pages long). So I thought a very high-level tutorial would benefit lots of people.

So here goes Part 1 of the tutorial.


When a Requester (acting on behalf of a Requesting Party) seeks access to resources sitting at the Host, the Requester must first obtain an OAuth2.0 token from the Authorization Manager (AM). In this case the Requester is behaving like an OAuth2.0 Client, while the AM is behaving like an OAuth2.0 Authorization Server (AS). The resulting token is referred to in the UMA specs as the Authorization API Token (AAT). This token signals authorization by the AM for the Requester to use the AM’s other APIs at a later stage.

After the Requester gets its own AAT, it must obtain another token specific to the Requesting Party (call him Bob). This token is referred to as the Requester Permission Token (RPT). Later, when Bob (the Requesting Party) seeks to access resources at the Host (e.g. MyCalendarsz.com) by employing the Requester (e.g. TripItz.com), the Requester must forward both tokens (the AAT and the RPT) to the Host.

The reason two tokens are needed is that there might be multiple Requesting Parties (e.g. Bob, Billy, Becky) at the same Requester (TripItz.com) seeking to access the same resources at the same Host. However, the Requester needs to be authorized only once by the AM to access the AM’s APIs.
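To make the flow above a little more concrete, here is a rough sketch of the two token acquisitions using Python's requests library. The endpoint paths, parameter names, scope strings and header names are all made up for illustration; the UMA Core spec remains the normative reference.

```python
# Illustrative sketch of the two UMA token acquisitions described above.
# Endpoints, parameters, scopes and headers are hypothetical placeholders.

import requests

AM = "https://am.example.com"          # Authorization Manager (OAuth2.0 AS)
HOST = "https://mycalendarsz.example"  # Host protecting the user's resources

# Step 1: the Requester (acting as an OAuth2.0 client) obtains its AAT from
# the AM; this token lets it call the AM's other APIs later on.
aat = requests.post(f"{AM}/token", data={
    "grant_type": "client_credentials",
    "client_id": "tripitz.example",
    "client_secret": "not-a-real-secret",
    "scope": "uma_authorization_api",          # hypothetical scope name
}).json()["access_token"]

# Step 2: using the AAT, the Requester asks the AM for an RPT that is
# specific to one Requesting Party (Bob).
rpt = requests.post(f"{AM}/rpt", headers={
    "Authorization": f"Bearer {aat}",
}, data={"requesting_party": "bob@example.com"}).json()["rpt"]

# Step 3: when Bob tries to reach a resource at the Host via the Requester,
# both tokens travel with the request (header names invented for clarity).
resp = requests.get(f"{HOST}/alice/calendar", headers={
    "Authorization": f"Bearer {rpt}",
    "X-UMA-AAT": aat,
})
print(resp.status_code)
```

Note how the AAT is obtained once per Requester, while a fresh RPT would be obtained for each Requesting Party (Billy, Becky, and so on).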

[In the next post, we'll see how the Host has to do the same basic registration with the AM.]

 

 

Written by thomas

December 5th, 2012 at 11:01 pm

Posted in Uncategorized

SAML2.X (Revising the SAML2.0 Specs)


For SAML2.0 developers, users and vendors, it is perhaps worthwhile noting that the OASIS Security Services TC (SSTC) has started the process of revising the SAML2.0 specs.

Here is what the SSTC group has agreed to so far:

  • All approved errata, along with any errata presented to the TC subsequent to the last approval, are to be applied to the specifications, or the specifications may be reworded to include the spirit of the errata identified.
  • All original SAML 2.0 message formats are intended to remain unchanged in the new version except in cases where outright errors existed and were corrected through errata or subsequent specifications. This includes preservation of core XML namespaces.
  • To the greatest extent possible, existing implementations of SAML 2.0 features should be compatible with the new standard, and any areas of divergence should be minimized and clearly identified.
  • Some extensions and profiles published after SAML 2.0 ought to be incorporated in some fashion into the base standard to promote adoption and reduce the number of documents needed to address critical features.
  • Significant changes to the Conformance statements for the standard are to be expected. We do not expect that every new feature or existing extension would be made mandatory to implement.
  • Material related to a variety of threats implementers ought to be aware of should be drafted and incorporated.

Please visit the wiki containing the SAML2Revision plans. The SSTC is seeking input from the broader SAML2.0 community.

 

Written by thomas

May 31st, 2012 at 8:12 pm

Posted in Uncategorized