@FindThomas

Digital Identity, Trust and Privacy on the open Internet

Archive for the ‘Privacy’ Category

Query Smart Contracts: Bringing the Algorithm to the Data

without comments

One paradigm shift being championed by the MIT OPAL/Enigma community is that of using (sharing) algorithms that have been analyzed by experts and have been vetted to be “safe” from the perspective of privacy-preservation. The term “Open Algorithm” (OPAL) here implies that the vetted queries (“algorithms”) are made open by publishing them, allowing other experts to review them and allowing other researchers to make use of them in their own context of study.

One possible realization of the Open Algorithms paradigm is the use of smart contracts to capture these safe algorithms in the form of executable queries residing in a legally binding digital contract.

What I’m proposing is the following: instead of a centralized data-processing architecture, the nodes of a P2P network (e.g. a blockchain) offer the opportunity for data (user data and organizational data) to be stored at these nodes and processed in a privacy-preserving manner, accessible via well-known APIs and authorization tokens, with smart contracts used to let the “query meet the data”.

In this new paradigm of privacy-preserving data sharing, we “move the algorithm to the data”: queries and subqueries are computed by the data repositories (nodes on the P2P network). This means that repositories never release raw data and that they perform the algorithm/query computation locally, producing aggregate answers only. Moving the algorithm to the data gives data-owners and other joint rights-holders the opportunity to exercise control over data release, and thus offers a way to provide the highest degree of privacy-preservation while still allowing data to be shared effectively.

This paradigm requires that queries be decomposed into one or more subqueries, where each subquery is sent to the appropriate data repository (nodes on the P2P network) and be executed at that repository. This allows each data repository to evaluate received subqueries in terms of “safety” from a privacy and data leakage perspective.
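To make the idea concrete, here is a minimal sketch (hypothetical, not OPAL code) of decomposing a network-wide average into per-repository subqueries. Each repository evaluates its subquery locally and releases only aggregates (sum, count), never raw records; the function and field names are illustrative assumptions.

```python
def subquery_local_aggregate(records, predicate):
    """Runs at the data repository: evaluate the subquery locally and
    release only an aggregate answer, never raw data."""
    matching = [r["value"] for r in records if predicate(r)]
    return {"sum": sum(matching), "count": len(matching)}

def combine(partial_results):
    """Runs at the querier (or a delegate node): merge the aggregate
    answers from all repositories into the final result."""
    total = sum(p["sum"] for p in partial_results)
    n = sum(p["count"] for p in partial_results)
    return total / n if n else None

# Two repositories, each holding its own records locally.
repo_a = [{"group": "x", "value": 10}, {"group": "y", "value": 30}]
repo_b = [{"group": "x", "value": 20}]

partials = [
    subquery_local_aggregate(repo_a, lambda r: r["group"] == "x"),
    subquery_local_aggregate(repo_b, lambda r: r["group"] == "x"),
]
print(combine(partials))  # average of the matching values: 15.0
```

The querier sees only the merged aggregate; each repository remains free to reject a subquery it judges unsafe before executing it.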

Furthermore, safe queries and subqueries can be expressed in the form of a Query Smart Contract (QSC) that legally binds the querier (person or organization), the data repository and other related entities.

A query smart contract that has been vetted to be safe can be stored on nodes of the P2P network (e.g. blockchain). This allows Queriers not only to search for useful data (as advertised by the metadata in the repositories) but also to search for prefabricated safe QSCs available throughout the P2P network that match the intended application. Such a query smart contract will require that identity and authorization requirements be encoded within the contract.

A node on the P2P network may act as a Delegate Node in the completion of a subquery smart contract.  A delegate node works on a subquery by locating the relevant data repositories, sending the appropriate subquery to each data repository, and receiving individual answers and collating the results received from these data repositories for reporting to the (paying) Querier.

A Delegate Node that seeks to fulfill a query smart contract should only do so when all the conditions of the contract have been fulfilled (e.g. the QSC has a valid signature; the identity of the Querier is established; authorization to access APIs at data repositories has been obtained; payment terms have been agreed; etc.). A hierarchy of delegate nodes may be involved in the completion of a given query originating from the Querier entity. The remuneration scheme for all Delegate Nodes and the data repositories involved in a query is outside the scope of the current use-case.
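The precondition check a Delegate Node performs before fulfilling a QSC can be sketched as follows. This is a hypothetical illustration only; the condition names are my own shorthand for the examples listed above, not fields from any OPAL/Enigma specification.

```python
# Conditions a delegate node must see satisfied before fulfilling a QSC.
REQUIRED_CONDITIONS = (
    "signature_valid",     # the QSC carries a valid signature
    "querier_identified",  # the identity of the Querier is established
    "api_authorized",      # access to repository APIs has been granted
    "payment_agreed",      # payment terms have been agreed
)

def may_fulfill(qsc: dict) -> bool:
    """A delegate node proceeds only when ALL contract conditions hold."""
    return all(qsc.get(cond) is True for cond in REQUIRED_CONDITIONS)

qsc = {"signature_valid": True, "querier_identified": True,
       "api_authorized": True, "payment_agreed": False}
print(may_fulfill(qsc))  # False: payment terms not yet agreed
```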

Written by thomas

August 6th, 2016 at 1:21 pm

What and why: MIT Enigma

without comments

I often get asked to provide a brief explanation about MIT Enigma — notably what it is, and why it is important particularly in the current age of P2P networking and blockchain technology.  So here’s a brief summary.

The MIT Enigma system is part of a broader initiative at MIT Connections Science called the Open Algorithms for Equity, Accountability, Security, and Transparency (OPAL-EAST).

The MIT Enigma system employs two core cryptographic constructs simultaneously atop a Peer-to-Peer (P2P) network of nodes. These are secret sharing (a la Shamir’s Linear Secret Sharing Scheme (LSSS)) and multiparty computation (MPC). Although secret sharing and MPC have been topics of research for the past two decades, the innovation that MIT Enigma brings is the notion of employing these constructions on a P2P network of nodes (such as a blockchain) while providing a “Proof-of-MPC” (akin to proof-of-work) that a node has correctly performed some computation.

In secret-sharing schemes, a given data item is “split” into a number of ciphertext pieces (called “shares”) that are then stored separately. When the data item needs to be reconstituted or reconstructed, a minimum or “threshold” number of shares must be obtained and merged together again in a reverse cryptographic computation. In naval parlance, this is akin to needing 2 out of 3 keys in order to perform some crucial task (e.g. activate the missile). Some secret-sharing schemes possess the feature that certain primitive arithmetic operations can be performed on shares (shares “added” to shares), yielding a result without the need to fully reconstitute the data items first. In effect, this feature allows operations to be performed on encrypted data (similar to homomorphic encryption schemes).
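The 2-out-of-3 threshold and the share-addition feature can be demonstrated with a toy Shamir-style scheme over a prime field. This is a teaching sketch of the general technique, not MIT Enigma’s actual code, and it is not production-grade cryptography.

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is mod P

def split(secret, threshold=2, n=3):
    """Split `secret` into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    # Share i is the random polynomial evaluated at x = i (i = 1..n).
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

a_shares = split(120)  # two private values, e.g. sensor readings
b_shares = split(130)

# Any 2 of the 3 shares reconstruct the secret.
assert reconstruct(a_shares[:2]) == 120

# Adding shares point-wise gives shares of the SUM, so the total is
# recoverable without ever reassembling either original value.
summed = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
assert reconstruct(summed[:2]) == 250
```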

The MIT Enigma system proposes to use a P2P network of nodes to randomly store the shares belonging to data items. In effect, the data owner no longer needs to keep a centralized database of data items (e.g. health data); instead, the owner would transform each data item into shares and disperse these across the P2P network of nodes. Only the data owner knows the locations of the shares and can fetch them from the nodes as needed. Since each of these shares appears as garbled ciphertext to the nodes, the nodes are oblivious to its meaning or significance. A node in the P2P network would be remunerated for storage costs and for the store/fetch operations.

The second cryptographic construct employed in MIT Enigma is multiparty computation (MPC). The study of MPC schemes seeks to address the problem of a group of entities needing to share some common output (e.g. the result of a computation) whilst keeping their individual data items secret. For example, a group of patients may wish to collaboratively compute their average blood pressure without any patient revealing his or her actual readings.
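The blood-pressure example can be sketched with additive secret sharing, one simple MPC building block (illustrative only; Enigma’s actual protocol is more involved). Each patient splits their reading into random shares distributed among the parties; each party sums the shares it holds, and combining those partial sums reveals only the total, hence the average.

```python
import random

P = 2**31 - 1  # arithmetic modulo a prime

def additive_shares(value, n):
    """Split `value` into n random shares that sum to value mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

readings = [118, 135, 122]  # each patient's private blood pressure
n = len(readings)
all_shares = [additive_shares(r, n) for r in readings]  # one row per patient

# Party j sums the shares it received -- a local computation that
# reveals nothing about any individual reading.
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]

# Combining the partial sums reveals only the total (and thus the average).
total = sum(partial_sums) % P
print(total / n)  # 125.0
```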

The MIT Enigma system combines MPC schemes with secret-sharing schemes, effectively allowing some computations to be performed directly on the shares distributed across the P2P network. The combination of these three paradigms (secret sharing, MPC and P2P nodes) opens new possibilities for addressing the current urgent issues around data privacy and the growing liabilities of organizations that store or work on large amounts of data.

Written by thomas

August 6th, 2016 at 1:10 pm

New Principles for Privacy-Preserving Queries for Distributed Data

without comments

Here are the three (3) principles for privacy-preserving computation based on the Enigma P2P distributed multi-party computation model:

(a) Bring the Query to the Data: The current model is for the querier to fetch copies of all the data-sets from the distributed nodes, import them into a big-data processing infrastructure, and then run queries. Instead, break up the query into components (sub-queries) and send the query pieces to the corresponding nodes on the P2P network.

(b) Keep Data Local: Never let raw data leave the node. Raw data must never leave its physical location or the control of its owner. Instead, nodes that carry the relevant data-sets execute sub-queries and report the results.

(c) Never Decrypt Data: Homomorphic encryption remains an open field of study. However, certain types of queries can be decomposed into rudimentary operations (such as additions and multiplications) on encrypted data that yield answers equivalent to running the query on plaintext data.


Written by thomas

August 30th, 2015 at 5:51 pm

Atmel to support EPID from Intel

without comments

One important news item this week from the IoT space is the support by Atmel of Intel’s EPID technology.

Enhanced Privacy ID (EPID) grew from the work of Ernie Brickell and Jiangtao Li based on previous work on Direct Anonymous Attestation (DAA). DAA is very relevant because it is built into the TPM 1.2 chip (of which there are several hundred million in PCs).

Here is a quick summary of EPID:

  • EPID is a special digital signature scheme.
  • One public key corresponds to multiple private keys.
  • A private key generates an EPID signature.
  • An EPID signature can be verified using the public key.

Interesting Security Properties:

  • Anonymous/Unlinkable: Given two EPID signatures, one cannot determine whether they were generated by one private key or two.
  • Unforgeable: Without a private key, one cannot create a valid signature.


Written by thomas

August 24th, 2015 at 10:33 pm

Transparency of usage of personal data: the need for a HIPAA-like regime

without comments

Ray Campbell hits the ball out of the park again with the awesome suggestion in his blog: we need a HIPAA-like regime for the privacy of personal data. As a mental exercise, Ray has gone through the HIPAA document and substituted “individually identifiable health information” with “individually identifiable personal information”. The red-lined doc can also be found on his site.

At the heart of his proposal is the notion of shifting the thought paradigm from the person as the absolute owner of his/her personal data to one where the person has the right to know who has his/her personal data, how they obtained it, what they are doing with it, and to whom they have sold it (the 4 questions).

Following on from Ray’s post and from Professor Sandy Pentland’s view on the New Deal on Data, I believe there should be a new market in the digital economy where individuals can meet directly with buyers of their personal data, and where individuals can opt-in to make more data about themselves available to these buyers.  Cut out the middleman — the big data corporations that are not contributing to the efficiency of free markets.


Written by thomas

March 6th, 2013 at 9:02 pm

The 4 questions on transparency in personal data (disclosure management)

without comments

MIT Media Lab – 2013 Legal Hack-a-thon on Identity

Ray Campbell argues quite elegantly and convincingly that the “data ownership” paradigm is not the correct paradigm for achieving privacy and control over personal data. The notion that “I own my data” can be impractical especially in the light of 2-party transactions, where the other party may also “own” portions of the transaction data and where they might be legally bound to keep copies of “my data”.

Instead, the better approach is to look at “transparency” and visibility into where our data reside and who is using it. Here are the four questions that Ray poses:

  • Who has my data?
  • What data do they have about me?
  • How did they acquire my data?
  • How are they using my data?

Transparency becomes an important tool for disclosure management of personal data. These questions could form the basis for the development of a trust framework on data transparency, one which can be used to frame Terms of Service that both I and the Relying Party must accept.

Written by thomas

January 30th, 2013 at 9:16 pm

On the survival of the NSTIC Privacy Standing Committee

without comments

Aaron Titus writes an interesting piece based on his analysis of the recent proposal from Trent Adams (PayPal) to modify the NSTIC governance rules. The abolition of the NSTIC Privacy Standing Committee may have unforeseen impact on the acceptance of the whole NSTIC Identity Ecosystem idea, notably from the privacy front.

During the last decade — starting from the Liberty vs Passport kerfuffle — we have seen a number of proposals for components of an “identity infrastructure” for the Internet. All in all, there has been little adoption (by consumers) of these technologies for high-value transactions, due IMHO to the lack of privacy-preserving features.

So far I have yet to see a sustainable business model for identities which is focused on the “individual” (i.e. individual-centric) and which preserves his/her personal data. All the agreements and EULAs that we click “yes” to seem to be tilted in favor of the provider. If a provider “loses” my personal information (including credit-card information), there is really little incentive (positive or negative) for them to recover my data. The individual suffers all the losses. Little wonder there is no buy-in from the consumer :-)


Written by thomas

August 29th, 2012 at 3:45 pm

Posted in NSTIC,Privacy

A market for leakage in derived identities

without comments

At lunch today Sal summarized in one sentence what I have been trying to express for the last couple of years:

There is a market out there for leakage in derived identities (in the Internet)

What we had been talking about was the (inevitable) need for something similar to what the Jericho Forum folks call a Core Identity. In simple words, this is the notion that every entity/person should have a “main” secret identity (like a confidential SSN), from which other usable identities (personas) are derived via a one-way function. (See the Jericho Forum “Identity Commandments”.)
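One simple way to picture the one-way derivation is an HMAC over the core identity material, keyed per persona. This is my own hypothetical sketch, not a scheme from the Jericho Forum documents; the labels and key choice are illustrative assumptions.

```python
import hashlib
import hmac

def derive_persona(core_identity: bytes, persona_label: str) -> str:
    """One-way derivation: the persona identifier reveals nothing about
    the core identity, and distinct labels yield unlinkable identifiers."""
    return hmac.new(core_identity, persona_label.encode(),
                    hashlib.sha256).hexdigest()

core = b"confidential-core-identity-material"  # never disclosed
home = derive_persona(core, "home-persona")
work = derive_persona(core, "work-persona")
assert home != work  # distinct personas, not linkable back to `core`
```

Because the derivation is keyed on the secret core identity, anyone seeing a persona identifier in the wild can neither recover the core identity nor link two personas together.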

The question was how, and by whom, the issuance and maintenance of the core identities of US citizens would be managed. Since the US federal government is the issuing authority for social security numbers, one possibility would be for it to also be the issuing and maintenance authority for core identities (independent of whether the day-to-day management was actually outsourced to private-sector organizations).

The “leakage” here refers to the obtaining (e.g. scraping off the Internet) of pieces of information about one of the personas stemming from my core identity. For example, in my “home-persona” the location of my house may be of interest to advertisers doing target-marketing in my neighborhood. We can call this “leakage” because I never authorized the release (and use) of my home-persona data to the relying party (i.e. the marketing company).


Written by thomas

March 9th, 2012 at 11:36 pm

Posted in Privacy