How Obscurity Could Help the Right To Fail

In the past, I have discussed the European Commission’s “Right to be Forgotten” proposal and the issues with trying to provide a comprehensive right to wipe a record clean. I have argued that individuals need a sphere of privacy where they know they can make mistakes without those errors following them for the rest of their lives. Individuals will shy away from risky or provocative ideas and efforts if they fear organizations will use those activities to discriminate against them forever. Provocative ideas challenge the status quo and are often exactly what is needed to break away from conformity and innovate. Technology companies are familiar with this need for space to innovate, and many structure their performance review systems to give individuals room to take risks. I call the need for this space for innovation “The Right to Fail”.

Google’s current case before the European Court of Justice centers on how to give an individual the ability to have something forgotten. In this case, the Spanish data protection authority sought a mechanism that would allow individuals to have Google stop providing links to certain information. The case is about more than requiring organizations to take down inaccurate information; the regulator required Google to also stop pointing to information that may well have been true. The case raises two questions: (1) when is it reasonable to require an internet company to obscure something, and (2) how should an individual make such a request? What follows are some thoughts I have on those questions. These ideas are rough thinking and will need a broad spectrum of stakeholders to come together to develop them further. I hope this blog entry furthers the discussion.

I. When is it reasonable to ask for Obscurity?

As I noted in my blog entry about Gavin Newsom’s book “Citizenville”, many commentators have argued that the best way for individuals to protect themselves is “not to do bad things”. However, that philosophy ignores many situations where it is more than reasonable for an individual to want to obscure true information about himself or herself. I have taken a stab at a partial taxonomy of the categories where it would be reasonable for an individual to request obscurity, with illustrative examples:

1. Remorse – I did a bad thing but truly feel sorry, paid my debt and learned my lesson. (a drunk driving conviction followed by 20 years of sobriety and exemplary driving)

2. Unexpected Consequences – I did a bad thing, but could not have expected it would cause the results. (an inconsiderate breakup with a girlfriend, who then commits suicide)

3. Impossible Situation – A classic dilemma, where there are no good choices (choosing between hitting the car in front of me or swerving into the pedestrians on the sidewalk).

4. Took a Risk – I tried to do something good, but it did not end well. (start-up company fails; applied for a job with another company, but did not receive an offer).

5. Others Perceive as Bad – I think it is right, but others will discriminate against me for it. (protesting at a rally for a controversial political cause; or a teacher expressing a political opinion on a blog that is then read by students).

6. Sensitive Data – I just don’t want people knowing. (bank account information, ss#, health data, or the fact I am planning a surprise party)

7. Bad Things (That Aren’t So Bad) – Minor transgressions that nearly everyone commits. (jaywalking, driving 5 miles over the speed limit, college sophomores drinking alcohol)

8. Taken Out of Context – Information that looks bad outside its original context. (quoting punk rock lyrics in a blog post about the use of profanity in music, only to have the lyrics later attributed to the individual as his or her own views)

9. Victims – Information about how an individual was a victim of a crime (domestic violence victims).

These categories contain mostly truthful information about the individual (except when something is taken so far out of context that it can no longer be thought of as true). Also, the concerning uses of this information extend well beyond employment and credit decisions. For these two reasons, the Fair Credit Reporting Act does not solve the need for obscurity in the U.S.

II. How can individuals request obscurity?

Completely “forgetting” these categories of information, or opting out of all tracking of them, may be impossible. However, each of the categories above argues, to varying degrees, for a limited ability to achieve practical obscurity. To preserve the Right to Fail, we do not need absolute deletion of information; we just need to make it more difficult to find. I am a fan of the work Profs. Evan Selinger and Woodrow Hartzog have written on the privacy benefits of obscurity.

Given that third parties might place harmful information on the internet without consent, individuals should have some type of legal and practical remedy to have this information obscured. There may be some possibility of doing so under US tort law, such as a claim for public disclosure of private facts. However, filing a lawsuit gives most individuals little practical ability to have the information obscured, at least in part because of the cost of hiring a lawyer. Similarly, there are technical tools and commercial services that promote obscurity (e.g. Reputation.com), but those are only available to those who can afford them. For individuals without the financial and technical resources to create obscurity themselves, how could we create a legal mechanism allowing some ability for obscurity, and thereby preserve the Right to Fail?

What if Congress introduced legislation to create a public-private partnership to further obscurity? Such a law could take a co-regulatory approach, combining the best of industry-led efforts with strong government oversight and enforcement.

The law could require any company operating internet content search services (search engines, but likely also social networks) and data brokers (a term that would need to be defined) to create an industry self-regulatory body (the “Center”) to make decisions on what should be obscured. The FTC would be given oversight responsibility over the self-regulatory body, along with enforcement power against those who do not participate (or who do not follow the remedy instructions).

The Center could be an independent non-profit, funded and staffed by the regulated companies. It could serve as a central point of contact for individuals who believe they have a legitimate case for the obscurity of a piece of information (see the examples above). The Center could develop guidelines for each of the above categories to help decide whether “practical obscurity” is warranted (e.g. the information would not show up in internet searches and would be deleted from data broker profiles).

The guidelines could capture several criteria for consideration: the cost to the company of obscuring the information, the potential harm to the individual, the relevance of the information, whether the individual originally consented to the use of the information, whether the data is being used outside the original context in which it was provided, and the impact on third parties. The FTC could approve these guidelines and have some oversight over how they are applied. Each entity participating in the effort would submit an annual review of its follow-through on obscurity requests from the Center, and the Center would submit an annual review to the FTC showing the categories of obscurity requests granted and denied. This approach would still allow the initial collection and posting of the information (thus hopefully decreasing First Amendment concerns), but would provide the individual an opportunity for limited and legitimate obscurity over time.
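To make those criteria concrete, here is a minimal sketch, in Python, of how the Center’s guidelines might be encoded as a structured, auditable checklist. Everything in it is hypothetical: the field names, the weights, and the threshold are illustrations only, and the real guidelines would come out of the stakeholder process described above.

```python
from dataclasses import dataclass

@dataclass
class ObscurityRequest:
    """One individual's request to obscure a piece of information.
    The fields mirror the criteria suggested above; the names and
    scales are hypothetical."""
    category: str            # e.g. "Remorse", "Took a Risk", "Sensitive Data"
    harm_to_individual: int  # 0 (none) to 5 (severe)
    relevance: int           # 0 (stale or irrelevant) to 5 (highly relevant today)
    originally_consented: bool
    out_of_original_context: bool
    cost_to_company: int     # 0 (trivial) to 5 (prohibitive)
    third_party_impact: int  # 0 (none) to 5 (severe)

def recommend_obscurity(req: ObscurityRequest) -> bool:
    """Toy scoring rule: weigh harm and loss of context in favor of
    obscurity, against relevance, cost, and third-party impact.
    The weights and the zero threshold are arbitrary placeholders."""
    score = 2 * req.harm_to_individual
    score += 2 if req.out_of_original_context else 0
    score += 0 if req.originally_consented else 1
    score -= req.relevance + req.cost_to_company + req.third_party_impact
    return score > 0

# Example: the drunk-driving conviction followed by 20 years of sobriety.
request = ObscurityRequest(
    category="Remorse",
    harm_to_individual=4,
    relevance=1,
    originally_consented=False,
    out_of_original_context=False,
    cost_to_company=1,
    third_party_impact=0,
)
print(recommend_obscurity(request))  # True under this toy weighting
```

The point of the sketch is not the particular weights but that the criteria can be made explicit and auditable, which is what the annual reviews submitted to the FTC would depend on.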

There may not be enough political will to put such a Center in place. However, what would it take to bring the necessary stakeholders together to discuss this and other ideas for providing reasonable and practical obscurity for individuals? If we can do so, we will go a long way toward preserving the Right to Fail.

4 Responses to How Obscurity Could Help the Right To Fail

  1. Jim Lewis says:

You might want to add a discussion of the public domain. People may be unaware that information on the internet is in the public domain, and one way to rephrase the question (and the question for the EU) is whether people should have the right to remove information from the public domain. In some contexts, this would be known as censorship, and a good test would be to ask if a court would agree to suppress the information as defamatory. If the answer is no, why would we use extra-legal procedures to suppress it on the web?
Some categories of information, such as “Sensitive Data (bank account information, ss#, health data)” and perhaps “Information about how an individual was a victim of a crime (domestic violence victims)”, could be considered not public, but other categories are open to question.

We could argue that in the past, it was more difficult to assemble information about an individual, and this gave them a degree of protection that the internet has removed. Folklore tells of Americans whose businesses failed in one town moving to another state to start again. This encourages both innovation and fraud. We might ask whether people have the right to falsify the public record by removing or editing information of which they disapprove. An example of this would be asking a newspaper not to publish certain information. There are examples where copyright and trademark laws have been abused by businesses seeking to suppress legitimate criticism. Could a restaurant ask for a negative review to be removed? In this area, the culture has not caught up to the technology, where the anonymity of the internet encourages extreme views and the voice of a single person can now be heard by thousands.

If the EU rule is not carefully designed, it will be abused and harmful. All information about an individual cannot be subject to their control. This leads us back to the question of the public domain and the reasonable expectation of privacy. Is the internet a public space (advocates of the global commons, this is your big chance to weigh in), is it a “pseudo-public area” like the food court in a shopping mall, or is it a private space, and what can we “reasonably” expect to be considered private?

  2. Jim Lewis says:

    Sent this in an email to David and then decided to add it:

    Here’s a toy: suppose you took a picture of yourself and posted it on the telephone pole outside your office, on the street. Later you came along and took the picture down, but in the interim, someone came and made 100 copies of it. What are your rights? Could you demand all the copies?

A bit similar to the debate over DNA: if you throw away a Coke bottle and the police get it from the trash and do a swab, is it legal? In that case, I’d argue that you did have a reasonable expectation of privacy, but for the photo, I’d say once you put something in the public domain you surrender your rights.

  3. Cliff Elam says:

    We have always been at war with Eastasia!

    I love the idea of being able to have the internet “forget” something or correct a factually incorrect item, but I fear the first will quickly be corrupted and the second will degenerate into something as fun as the Wikipedia page on AGW.

I think the real balm will come as more embarrassing facts are available, not fewer. Really, don’t you believe that the Clintons (to take an obvious example) have proven that no matter what stupid stuff you do that becomes public, well, you can become an elder statesman, run for president, etc.?

    Once everyone’s indiscretions are public, well, we’ll know that no pigs really are more equal than others.

    -XC

    PS – Two, count ‘em, two Orwell references in one short post!

  4. Ruby Zefo says:

This is a very thoughtful commentary, David. Your blog made me wonder how many in society will react to attempts to create criteria for obscurity when much of the content may involve “moral judgments”. What if the actor shows no remorse, the debt to society was not paid, and the act is largely viewed as heinous and potentially recurring? Are these activities to be judged via local cultural norms, creating a patchwork of global opinions on right vs. wrong and private vs. public, so that something available for viewing in one country is obscured in another? What about free speech and the right to comment on a person’s actions that the actor would otherwise like to obscure? Who is qualified to make these calls? I commend you on provoking quite a lot of thought and questions on this topic.