Policy@Intel
A place to exchange ideas and perspectives, promoting a thriving innovation economy through public policy

How Obscurity Could Help the Right To Fail

David_Hoffman
Employee
In the past, I have discussed the European Commission’s “Right to be Forgotten” proposal and the problems with trying to provide a comprehensive right to wipe a record clean. I have argued that individuals need a sphere of privacy where they know they can make mistakes without those errors following them for the rest of their lives. Individuals will shy away from risky or provocative ideas and efforts if they fear organizations will use those activities to discriminate against them forever. Provocative ideas challenge the status quo and are often exactly what is needed to break away from conformity and innovate. Technology companies are familiar with this need for space to allow employees to innovate, and many structure their performance review systems to give individuals room to take risks. I call this space for innovation “the Right to Fail.”

Google’s current case before the European Court of Justice centers on how to give an individual the ability to have something forgotten. In that case, the Spanish data protection authority sought a mechanism that would allow individuals to have Google stop providing links to certain information. The case is about more than requiring organizations to take down inaccurate information; the regulator required Google to also stop pointing to information that may well be true. The case raises two questions: 1. When is it reasonable to require an internet company to obscure something? and 2. How should an individual make such a request? What follows are some thoughts I have on those questions. These ideas are rough thinking, and a broad spectrum of stakeholders will need to come together to develop the concepts further. I hope this blog entry furthers the discussion.


I. When is it reasonable to ask for Obscurity?

As I noted in my blog entry about Gavin Newsom’s book “Citizenville”, many commentators have argued that the best way for individuals to protect themselves is “not to do bad things”. However, that philosophy ignores many situations where it is more than reasonable for an individual to want to obscure true information about him or herself. I have taken a stab at a partial taxonomy of the categories where it would be reasonable for an individual to request obscurity. The list below also includes illustrative examples:


1. Remorse – I did a bad thing but truly feel sorry, paid my debt and learned my lesson. (a drunk driving conviction followed by 20 years of sobriety and exemplary driving)


2. Unexpected Consequences – I did a bad thing, but could not have foreseen the consequences. (an inconsiderate breakup with a girlfriend, who then commits suicide)


3. Impossible Situation – A classic dilemma, where there are no good choices. (choosing between hitting the car in front of me and swerving into the pedestrians on the sidewalk)

4. Took a Risk – I tried to do something good, but it did not end well. (start-up company fails; applied for a job with another company, but did not receive an offer).

5. Others Perceive as Bad – I think it is right, but others will discriminate against me for it. (protesting at a rally for a controversial political cause; or a teacher expressing a political opinion on a blog that is then read by students).

6. Sensitive Data – I just don’t want people knowing. (bank account information, Social Security number, health data, or the fact that I am planning a surprise party)

7. Bad things (that aren't so bad) - (jaywalking, driving 5 miles over the speed limit, college sophomores drinking alcohol)

8. Taken out of context – Information that looks bad out of its original context. (Placing some punk rock lyrics in a blog about the use of profanity in music, and later the lyrics are quoted as the thinking of the individual).

9. Victims – Information about how an individual was a victim of a crime (domestic violence victims).

These categories mostly contain truthful information about the individual (except when something is taken so far out of context that it can no longer be thought of as true). Also, the uses of this information that concern individuals are much broader than employment or credit decisions. For these two reasons, the Fair Credit Reporting Act does not solve the need for obscurity in the U.S.

II. How can individuals request obscurity?

Completely “forgetting” these categories of information, or opting out of all tracking of them, may be impossible. However, each of the categories above argues, to varying degrees, for a limited form of practical obscurity. To preserve the Right to Fail, we do not need absolute deletion of information; we just need to make it more difficult to find. I am a fan of the work Profs. Evan Selinger and Woodrow Hartzog have done on the privacy benefits of obscurity.

Given that third parties might place harmful information on the internet without an individual’s consent, individuals should have some type of legal and practical remedy to have that information obscured. There may be some possibility of doing so under U.S. tort law, such as a claim for public disclosure of private facts. However, filing a lawsuit gives most individuals little practical ability to have the information obscured, at least in part because of the cost of hiring a lawyer. Similarly, there are technical tools and commercial services that promote obscurity (e.g., Reputation.com), but those are available only to people who can afford them. For individuals who lack the financial and technical resources to create obscurity on their own, how could we build a legal mechanism that provides some ability for obscurity, and thereby preserves the Right to Fail?

What if Congress introduced legislation to create a public-private partnership to further obscurity? Such a law could take a co-regulatory approach, combining the best of industry-led efforts with strong government oversight and enforcement.

The law could require any company operating internet content search services (search engines, but likely also social networks) and data brokers (a term that would need to be defined) to create an industry self-regulatory body (the “Center”) to make decisions on what should be obscured. The FTC would be given oversight responsibility over the self-regulatory body and enforcement power against those who do not participate (or who do not follow the remedy instructions).

The Center could be an independent non-profit, funded and staffed by the regulated companies. It could serve as a central point of contact for individuals who believe they have a legitimate case for obscuring a piece of information (see the examples above). The Center could develop guidelines for each of the categories above to help decide whether “practical obscurity” is warranted (e.g., the information would not show up in internet searches and would be deleted from data broker profiles).

The guidelines could set out criteria for consideration: the cost to the company of obscuring the information, the potential for harm to the individual, the relevance of the information, whether the individual originally consented to the use of the information, whether the data is being used outside the original context in which it was provided, and the impact on third parties. The FTC could approve these guidelines and have some oversight of how they are applied. Each participating entity would submit an annual review of its follow-through on obscurity requests from the Center, and the Center would submit an annual review to the FTC showing the categories of obscurity requests granted and denied. This approach would still allow the initial collection and posting of the information (thus, hopefully, decreasing First Amendment concerns), but would give the individual an opportunity for limited and legitimate obscurity over time.

There may not be enough political will to put such a Center in place. However, what would it take to bring the necessary stakeholders together to discuss this and other ideas for providing reasonable and practical obscurity for individuals? If we can do so, we will go a long way toward preserving the Right to Fail.