7 Conundrums of the Right to be Forgotten


The recent ruling by the European Court of Justice (ECJ) has re-ignited debate about the “right to be forgotten”, or perhaps more accurately the right to have certain information purged from the Internet. While this right provides some real privacy benefits, it runs up against free speech and jurisdictional problems.

Here are seven conundrums around the right to be forgotten and the recent ECJ ruling:

  1. The ECJ ruling provides for removing search results, but not for removing the underlying web page. In the case in question, a newspaper article is allowed to stay on-line, but a search on the plaintiff's name must not return a link to that page.
  2. The search result is only removed when the search query is the person’s name; other searches for the same information would still return that link.
  3. The ECJ does not give you a right to remove anything harmful or embarrassing to you, only information “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes of the processing”.
  4. You don’t have a right to have certain information forgotten if it is newsworthy and noteworthy. In other words, if the information is likely to be searched for by a lot of people, then you can’t remove it.
  5. The ECJ ruling only applies to EU residents. If you are outside the EU, or using a search engine outside the EU, then you don’t have this right.
  6. The ECJ ruling only applies to search engines operating in the EU. If the search engine is operating exclusively outside the EU, or is being accessed from outside the EU, then the search results would still be visible. This means that you would still see the removed results if you were using Anonymizer Universal from within the EU, because your traffic would appear to come from outside the EU.
  7. The tools and laws used to enforce the right to be forgotten are very similar to the techniques used for censorship by repressive regimes. Once in place, the urge to use the power more broadly has been irresistible to governments that obtain it.


Lance Cottrell is the Founder and Chief Scientist of Anonymizer. Follow me on Facebook, Twitter, and Google+.

Cash for Anonymity shows serious intent

Paying for anonymity is a tricky thing, mostly because on-line payments are strikingly non-anonymous. The default payment mechanism on the Internet is the credit card, which generally requires hard identification. There are anonymous pre-paid cards, but they are getting harder to find, and most pre-paid cards now require registration with a real name and (in the US) a Social Security number.

We are working on supporting Bitcoin, which provides some anonymity, but not as much as you might think. New tools for Bitcoin anonymity are being developed, so this situation may improve, and other cryptocurrencies are gaining traction as well.

When it comes to anonymity, cash is still king. Random small US bills are truly anonymous, and widely available (a 1996 study showed that over half of all physical US currency circulates outside the country). While non-anonymous payments only allow Anonymizer to know who its customers are, not what they are doing, even that information might be sensitive and important to protect for some people.

That is why Anonymizer accepts cash payments for its services. Obviously it is slower and more cumbersome, but for those who need it, we feel it is important to provide the ultimate anonymous payment option. If you are looking at a privacy provider, even if you don’t plan to pay with cash, take a look at whether it is an option. It could tell you something about how seriously they take protecting your privacy overall.

Would you take $8 / month to expose yourself online?

Startup Datacoup Will Pay You $8 a Month If You Feed It Data from Facebook, Twitter, and Your Credit Card | MIT Technology Review

We have seen interesting experiments and studies where researchers have looked at what people are willing to pay to protect their privacy.

This then would be the opposite experiment. A company called Datacoup is offering people $8 per month to give them access to all of their social media accounts, and information on their credit and debit card transactions.

You certainly can’t accuse them of being covert about their intentions. They are saying very directly what they want and offering a clear quid pro quo.

I don’t think I will be a customer, but it will be very interesting to see if they can find a meaningful number of people willing to make this deal.

Lance Cottrell is the Founder and Chief Scientist of Anonymizer. Follow me on Facebook and Google+.

Facial recognition apps: I both desire and fear them.

Facial recognition app matches strangers to online profiles | Crave - CNET

Google has adopted a privacy-protecting policy of banning facial recognition apps from the Google Glass app store. I appreciate the effort to protect my privacy, but facial recognition is probably the ONLY reason I would wear Google Glass.

I am hopeless at parties or networking events. I have no ability at all to remember names, and I know I am far from alone in this. The ability to simply look at someone and be reminded of their name, our past interactions, and any public information about their recent activities, would be absolute gold.

Obviously I am less enthusiastic about having third party ratings of my intelligence, integrity, hotness, or whatever, popping up to the people looking at me. As usual, humans are in favor of privacy for themselves but not for others.

A new app called NameTag is coming out soon, and it is planned to do exactly this. On iOS, Android, and jailbroken Glass, you will be able to photograph anyone and, using facial recognition, pull up all available social media information about them.

To opt out you will need to set up an account with NameTag, and I presume you will also need to upload some high quality pictures of yourself so they can recognize you to block the information. Hurm…..

Whatever we all think about this, the capability is clearly coming. The cameras are getting too small to easily detect, high quality tagged photos are everywhere, and the computing power is available.

While citizens have some ability to impact government surveillance cameras and facial recognition, it will be much harder to change course on the use of these technologies with private fixed cameras, phones, and smart glasses. Even if we convince device makers to block these applications, the really creepy people will jailbreak them and install them anyway.

For years I have said that the Internet is the least anonymous environment we inhabit. With this kind of technology, it may soon be much easier to hide yourself online than off. Police really don’t like you wearing masks.

Lance Cottrell is the Founder and Chief Scientist of Anonymizer. Follow me on Facebook and Google+.

Can the market drive privacy protections?

Study: Consumers Will Pay $5 for an App That Respects Their Privacy - Rebecca J. Rosen - The Atlantic

This is refreshing. Some evidence that most people ARE actually willing to pay for privacy. If the market shows that this is a winner, we might start to see more privacy protecting applications and services.

The real question is whether invading your privacy generates more revenue than we are willing to pay to be protected.

Lance Cottrell is the Founder and Chief Scientist of Anonymizer. Follow me on Facebook and Google+.

Opt out of Google ads using your name

Google is changing its terms of service to allow it to use your name and photo in advertisements shown to your friends. Most people seem to have been opted in to this by default, although some (including me) have found themselves defaulted out of the program.

If you are uncomfortable with your name, picture, and opinions appearing in ads from Google, just go to Google's Shared Endorsements Settings page. The page describes the program. At the bottom you will find a checkbox. Uncheck it, and click "Save".

Apparently Open WiFi is actually private

An important decision just came down from the Federal 9th Circuit Court of Appeals about whether Google can be sued for intercepting personal data from open WiFi networks. The intercepts happened as part of the Street View program. In addition to capturing pictures of their surroundings, the Street View vehicles also collected GPS information (to correctly place the pictures), MAC addresses (unique hardware identifiers), and SSIDs (user-assigned network names), and until 2010 they captured some actual data from those networks.

The purpose of the WiFi collection is to provide enhanced location services. GPS drains phone batteries quickly, and the weak signals may be unavailable indoors, or even under any significant cover. Nearly ubiquitous WiFi base stations provide another way of finding your location. The Street View cars capture their GPS coordinates along with all of the WiFi networks they can see. Your phone can then simply look at the WiFi networks around it, and ask the database what location corresponds to what it is seeing. WiFi is often available indoors, has short range, requires much less power, and is generally turned on in any case. Google claims that capturing the actual data was an accident and a mistake.
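Setting the accidental payload capture aside, the location service itself boils down to a simple reverse lookup. Here is a minimal sketch of the idea in Python (not Google’s actual system; the access point identifiers and coordinates below are invented for illustration):

```python
# Hypothetical sketch of WiFi-based positioning: a survey vehicle records
# (BSSID -> GPS coordinate) pairs; a phone later reports the BSSIDs it can
# see and gets back an estimated position. All data here is invented.

# Survey database built from drive-by scans: BSSID -> (latitude, longitude)
survey_db = {
    "00:11:22:33:44:55": (32.7157, -117.1611),
    "66:77:88:99:aa:bb": (32.7160, -117.1605),
    "cc:dd:ee:ff:00:11": (32.7151, -117.1620),
}

def estimate_location(visible_bssids):
    """Average the surveyed coordinates of the access points the device can see."""
    known = [survey_db[b] for b in visible_bssids if b in survey_db]
    if not known:
        return None  # no surveyed access points in view
    lat = sum(p[0] for p in known) / len(known)
    lon = sum(p[1] for p in known) / len(known)
    return (lat, lon)

# A phone indoors sees two of the surveyed access points:
print(estimate_location(["00:11:22:33:44:55", "66:77:88:99:aa:bb"]))
```

Real services weight access points by signal strength and cope with stale survey data, but the core idea is just this reverse lookup.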

Unfortunately that data contained usernames, passwords and other sensitive information in many cases. A lawsuit was filed accusing Google of violating the Wiretap Act when it captured the data. There is no suggestion that the data has been leaked, misused, or otherwise caused direct harm to the victims.

The ruling was on a motion to dismiss the lawsuit on the grounds that Google’s intercepts were protected under an exemption in the Wiretap Act which states that it is OK to intercept radio communications that are “readily accessible” to the general public. The Act specifically states that encrypted or scrambled communications are NOT readily accessible, but the decision hangs on exactly what IS readily accessible. The court ruled that WiFi did not count as “radio” under the Act because several types of radio communications were enumerated, and this was not one of them. They then considered this case under the umbrella of “electronic communications”, which also has an exemption for readily accessible communications. On that, they decided that open WiFi is not readily accessible.

From a privacy perspective, this is good news. It says that people who intercept your information from your open WiFi can be punished (if you ever find out about it). This would clearly prevent someone from setting up a business to automatically capture personal and marketing data from coffee shop WiFi networks around the world. It is less likely to have any impact on criminals. I am concerned that it will also lead to a sense of false confidence, and perhaps cause people to leave their WiFi open, rather than taking even minimal steps to protect themselves.

The hacker / tinkerer / libertarian in me has a real problem with this ruling. It is really trivial to intercept open WiFi. Anyone can join any open WiFi network. Once joined, all the data on that network is available to every connected device. Easy, free, point-and-click software allows you to capture all of the data from connected (or even un-connected) open WiFi networks. If you are debugging your home WiFi network, you could easily find yourself capturing packets from other networks by accident. They are in the clear. There is no hacking involved. It is like saying that you cannot tune your radio to a specific station, even though it is right there on the dial.
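To give a sense of how little effort is involved, here is a minimal packet-capture sketch using the Python scapy library, intended for debugging your own network. The interface name is an assumption, and seeing other stations’ traffic generally also requires putting the card into monitor mode:

```python
# Minimal capture sketch using scapy (pip install scapy); run with root privileges.
# The interface name "wlan0" is an assumption -- substitute your own wireless card.
# On an open (unencrypted) network, everything the card receives arrives in the clear.
from scapy.all import sniff

def show(pkt):
    # Print a one-line summary of each captured frame.
    print(pkt.summary())

# Capture 20 packets from the wireless interface and summarize them.
sniff(iface="wlan0", prn=show, count=20)
```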

I think peeping in windows is a reasonable analogy. If I am standing on the sidewalk, look at your house, and see something through your windows that you did not want me to see, that is really your problem. If I walk across your lawn and put my face against the glass, then you have a cause to complain.

Open WiFi is like a window without curtains, or a postcard. You are putting the data out there where anyone can trivially see it. Thinking otherwise is willful ignorance. All WiFi base stations have the ability to be secured, and it is generally as simple as picking a password and checking a box. You don’t even need to pick a good password (although you really should). Any scrambling or encryption clearly moves the contents from being readily accessible, to being intentionally protected. If you want to sunbathe nude in your back yard, put up a fence. If you want to have privacy in your data, turn on security on your WiFi router.

I think that radio communications are clearly different than wired. With radio, you are putting your data on my property, or out into public spaces. There is no trespass of any kind involved to obtain it, and we have no relationship under which you would expect me to protect the information that you have inadvertently beamed to me. It would be like saying that I can’t look at your Facebook information that you made public because you accidentally forgot to restrict it. 

Similar to provisions of the DMCA, which outlaw much research on copy protection schemes, this ruling is likely to turn researchers, and the generally technical and curious, into accidental outlaws.


Teens are not the no-privacy generation after all

Report: Teens Actually Do Care About Online Privacy -- Dark Reading

I keep hearing people say that young people today don't care about privacy, and that we are living in a post privacy world. This is clearly not the case.

Teens share a lot, maybe much more than I would be comfortable with, but that does not mean that they share everything, or don't care about where that information goes.

A new report from the Pew Research Center says that over half of teens have avoided or uninstalled a mobile app because of privacy concerns. This is a sign that they are privacy aware and willing to do something about it.

Teens almost always have something that they want to hide, if only from their parents.

MaskMe is a good complement to Anonymizer

MaskMe (introduced in this blog post) is an interesting new entrant in the privacy services space.

They provide “masked” email addresses (like our old Nyms product), phone numbers, and credit cards.

Combined with Anonymizer Universal, you will be able to do a fairly comprehensive job of shielding your true identity from websites and services you use.

This is a brand new service, so it is hard to know how it will fare, but it is certainly worth watching.

Law Enforcement Back Doors

Bruce Schneier has a great post on issues with CALEA-II.

He talks about two main issues, with historical context.

First, about the vulnerabilities that automated eavesdropping backdoors always create in communications, and how that disadvantages US companies.

Second, about the fact that law enforcement claims of communications "Going Dark" are absurd given the treasure trove of new surveillance information available through social media, and cloud services (like gmail).

I know I have talked about this issue a lot over the years, but I am shocked that I can't find any posts like it on this blog.

Bruce does it really well in any case.

The Privacy Blog Podcast - Ep.8: Phishing Attacks, Chinese Hackers, and Google Glass

Welcome to The Privacy Blog Podcast for May 2013. In this month’s episode, I’ll discuss how shared hosting is increasingly becoming a target and platform for mass phishing attacks. Also, I’ll speak about the growing threat of Chinese hackers and some of the reasons behind the increase in online criminal activity.

Towards the end of the episode, we’ll address the hot topic of Google Glass and why there’s so much chatter regarding the privacy and security implications of this technology. In related Google news, I’ll provide my take on the recent announcement that Google is upgrading the security of their public keys and certificates.

Leave any comments or questions below. Thanks for listening!

Why California’s Suggested 100 Word Privacy Policy is the Best Worst Idea

A guest post by Janelle Pierce, who enjoys writing about various business issues and spends her time answering questions like “what is point of sale?”

Just last month California’s Assemblymember Ed Chau (D-Alhambra) introduced a bill that would require the website privacy policy of any company located in California to be no more than 100 words long, and written at the reading level of an 8th grade student.

While Chau’s practice-what-you-preach, 64-word bill has garnered a lot of negative press lately, one thing is certain: it has gotten people talking about something most people don’t talk about, the privacy policy. For those who don’t know, a privacy policy is simply the legal document that every website must have. According to Wikipedia.org, a privacy policy is:

“A statement or a legal document (privacy law) that discloses some or all of the ways a party gathers, uses, discloses and manages a customer or client's data. Personal information can be anything that can be used to identify an individual, not limited to but including; name, address, date of birth, marital status, contact information, ID issue and expiry date, financial records, credit information, medical history, where you travel, and intentions to acquire goods and services.”

Whenever you register a username on a website, whether for free e-mail, picture sharing, or social networking, you must agree to the site’s established privacy policy. Generally speaking, most users simply click “accept” without ever reading, much less understanding, what is written in the privacy policy. This is because site privacy policies are long, written in confusing legalese, and often overshadowed by the false assumption that a site with a privacy policy will keep your data private. While I do agree that ultimately the responsibility for reading and understanding the privacy policy lies with the users of a site, the same can be said about those who write and present the policy.

Which brings me to the point I’d like to make: I think Chau’s idea to force privacy policies to a maximum of 100 words, written at an eighth-grade reading level, is a good one. However, it has a few drawbacks that nearly undermine its credibility. First, requiring that a legal document be 100 words or less is a little shortsighted. Don’t get me wrong: making this otherwise lengthy, unreadable, and downright obnoxious (yet important) document accessible to everyone is a great goal, but 100 words doesn’t give a company the chance to disclose everything it needs to disclose. I think a maximum word count should be required, but there is no reason it needs to be so low.

Second, I think requiring an 8th-grade reading level is an excellent idea. Too often these policies are chock-full of legal words and phrases that even college-educated users cannot make sense of. That being said, I think Chau’s attempt at “rewriting” the privacy policy is a good one, albeit a little shortsighted. Like many things in life that we’ve put up with for too long, the privacy policy is definitely in need of an overhaul. However, trying to fix all of its shortcomings at once, and in such an aggressive manner, may not be the right approach. There’s no doubt that something needs to be done about the state of the average privacy policy, but rushing headlong into it so aggressively tends to alienate people who would otherwise be supporters of Chau’s intention.
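Whatever the politics, both requirements would be easy to check mechanically. As a rough illustration only (this sketch uses the Flesch-Kincaid grade formula with a crude syllable estimate, not an official readability tool, and the thresholds reflect my reading of the bill), something like this could flag a policy that runs over 100 words or reads above an eighth-grade level:

```python
# Rough compliance check for a hypothetical 100-word / 8th-grade-level rule.
# Uses the Flesch-Kincaid grade formula with a crude syllable estimate,
# so treat the result as an approximation, not a legal determination.
import re

def count_syllables(word):
    # Count groups of vowels as syllables; crude but adequate for a sketch.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def check_policy(text, max_words=100, max_grade=8.0):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(count_syllables(w) for w in words)
    grade = 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
    return {
        "word_count": len(words),
        "within_word_limit": len(words) <= max_words,
        "grade_level": round(grade, 1),
        "within_grade_level": grade <= max_grade,
    }

print(check_policy("We collect your email address to create your account. "
                   "We never sell it. We delete it when you close your account."))
```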

For help creating a privacy policy you can contact a business lawyer or simply use an online privacy policy generator.

Do you read privacy policies or simply click “accept”? Share your thoughts below.

Postmortem Social Media (a.k.a. virtual zombies)

For millennia people have asked the question “what happens to us when we die?”

While the larger spiritual question will continue to be debated, the question about what happens to our on-line data and presence is more recent, and also more tractable.

Until very recently little thought has been given to this issue. Accounts would continue until subscriptions lapsed, the website shut down, or the account was closed for inactivity.

This has led to some rather creepy results. I have lost some friends over the last few years, but I continue to be haunted by their unquiet spirits, which remind me of their birthdays, ask me to suggest other friends for them, and generally keep bobbing in my virtual peripheral vision.

Many social media sites do have a process for dealing with accounts after the death of their owners, but the processes are cumbersome and I have never actually seen them used. Generally, they are only engaged postmortem, by the family of the deceased. Assuming the family doesn’t have the passwords to the account, they need to contact the provider in writing with proof that they are relatives and proof of the death of the account’s owner.

Google has an interesting idea that I would like to see other sites adopt. They have set up the “Google Inactive Account Manager”, which allows the user to specify in advance what will happen. The user specifies what length of inactivity should be taken as a sign of death. Once that is triggered, Google contacts the user using secondary email accounts and phone numbers, if available, to make sure this was not just a long vacation or a loss of interest. If there is no response to that, then the Inactive Account Manager kicks in.

It notifies a list of people you specify that this has happened. You have the option of having your data packaged up and sent to some or all of those people. Finally, you may have it delete your account, or leave it available but closed as a memorial.
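The flow Google describes reduces to a fairly simple decision process. Here is a hypothetical sketch of that logic in Python; the class, field names, and thresholds are mine, not Google’s:

```python
# Hypothetical sketch of an "inactive account" workflow like the one described
# above: a long silence triggers a check, then notification, then the user's
# chosen disposition. Names and thresholds here are invented.
from datetime import datetime, timedelta

class InactiveAccountPlan:
    def __init__(self, inactivity_months, contacts, send_data_to, disposition):
        self.inactivity = timedelta(days=30 * inactivity_months)
        self.contacts = contacts          # people to notify
        self.send_data_to = send_data_to  # subset of contacts who get an archive
        self.disposition = disposition    # "delete" or "memorialize"

def handle_inactivity(plan, last_activity, owner_confirmed_alive):
    if datetime.now() - last_activity < plan.inactivity:
        return "account active; do nothing"
    if owner_confirmed_alive:
        # The secondary-email / SMS check reached the owner: reset the clock.
        return "owner responded; timer reset"
    for person in plan.contacts:
        print(f"notify {person}")
    for person in plan.send_data_to:
        print(f"send archived data to {person}")
    return "delete account" if plan.disposition == "delete" else "leave as memorial"

plan = InactiveAccountPlan(9, ["alice@example.com"], ["alice@example.com"], "delete")
print(handle_inactivity(plan, datetime.now() - timedelta(days=400), False))
```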

This may not be the perfect implementation of this concept, but it is an important step.

So please, set up your digital will, and let’s put a stop to the digital zombie apocalypse.

Do you have a right to be forgotten?

The right to be forgotten is a topic discussed more in Europe than in the US. The core question is whether you have a right to control information about yourself that is held and published on the Internet by third parties.

This includes social media, news sites, discussion forums, search engine results, and web archives.

The information in question may be true or false, and anything from embarrassing to libelous.


Often, discussions about removing old information center on calls for Google to remove it from their search results. I think Google is chosen because it is the dominant search engine, and people feel that if the information is not shown in Google, then it is effectively gone. Of course, search engines are really just pointing to the actual data, which generally lives on some other website.

Being removed from Google does nothing to the existence of the information, nor would it impact indexing of that information by other search engines.


Even if you get the hosting website to remove the information, there are many organizations, like archive.org, that may have copied and archived the information, thus keeping it alive and available.

Here are some examples of information that you might want removed.

  • Racist rantings on an old social media site to which access has been lost.
  • Drunk party pictures on a friend’s social media account.
  • Newspaper articles about dubious business activities.
  • Court records of a conviction after the sentence has been completed.
  • Negative reviews on a review website.
  • Unflattering feedback on a dating website.


In many of these cases, your “right to be forgotten” runs directly into another person’s “right to free speech”.


My thinking on this is still evolving, and I would welcome your thoughts and feedback. Right now I think that the free speech right trumps the right to be forgotten except in specific situations which need to be legally carved out individually; things like limitations on how long credit information should be allowed to follow you. Of course, the problem will be that every country will draw these lines differently, making enforcement and compliance very difficult, and leading to opportunities for regulatory arbitrage.


We are already seeing this in the EU. While most of the EU is moving towards codifying a right to be forgotten, the UK is planning to opt out of that.

DEA can't break Apple iMessage encryption?

CNET reports that an internal DEA document reveals that the agency is unable to intercept text messages sent over Apple's iMessage protocol.

The protocol provides end-to-end encryption for messages between iOS and Mac OS X devices.

This is not to suggest that the encryption in iMessage is particularly good, but to contrast it with standard text messages and voice calls, which are completely unprotected within the phone company's networks.

It appears that an active man-in-the-middle attack would be able to thwart the encryption, though it would take significantly more effort. The lack of any kind of out-of-band channel authentication suggests that such an attack should not be too difficult.
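Out-of-band authentication here just means letting the two users compare key fingerprints over a second channel the attacker does not control, in person or by voice. A minimal sketch of the idea, using invented key bytes rather than anything from Apple's actual protocol:

```python
# Sketch of out-of-band key verification: each side computes a short
# fingerprint of the public key it received, and the two users compare the
# values over a separate channel (in person, by phone). A man in the middle
# who substituted his own key would produce a mismatched fingerprint.
# The "public key" bytes below are invented for illustration.
import hashlib

def fingerprint(public_key_bytes):
    digest = hashlib.sha256(public_key_bytes).hexdigest()[:20]
    # Group the hex digits so the fingerprint is easy to read aloud.
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

alices_view_of_bobs_key = b"-----BEGIN PUBLIC KEY----- example bytes"
bobs_actual_key = b"-----BEGIN PUBLIC KEY----- example bytes"

print("Alice reads out:", fingerprint(alices_view_of_bobs_key))
print("Bob reads out:  ", fingerprint(bobs_actual_key))
print("Match?", fingerprint(alices_view_of_bobs_key) == fingerprint(bobs_actual_key))
```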

If you really need to protect your chat messages, I suggest using a tool like Silent Text. They take some steps that make man-in-the-middle attacks almost impossible.

Will a warrant be required to access your email?

Email Privacy Hearing Set To Go Before The House On Tuesday | WebProNews

The House Judiciary Committee is going to be discussing the Electronic Communications Privacy Act. There is a chance that they will strengthen it.

This act was written decades ago, before there were any real cloud solutions. Email was downloaded by your email client, and immediately deleted from the server. The law assumed that any email left on a server more than 180 days had been abandoned, and so no warrant was required for law enforcement to obtain it.

These days, with services like gmail, we tend to keep our email on the servers for years, with no thought that it has been abandoned. Law enforcement is opposing reforms of this law because it would make their work more difficult. Doubtless it would, as does almost any civil liberty.

Earlier this month Zoe Lofgren introduced the Online Communications and Geolocation Protection Act, which would amend ECPA and require a warrant to obtain cell phone location information. There is clearly some momentum for reform.

Security by obscurity and personality shards

Adam Rifkin on TechCrunch has an interesting article about Tumblr and how it is actually used. The thesis of the article is that Tumblr is used more openly, and for more sensitive things, than Facebook because the privacy model is so much easier to understand and implement. If you have five interests and corresponding social circles, just set up five pseudonymous Tumblrs. Each then becomes its own independent social space with minimal risk of cross contamination. While all of those Tumblrs are public and discoverable, in practice they are not easy to find and unlikely to be stumbled upon by undesired individuals. This is classic security by obscurity.

By contrast, Facebook wants you to put everything in one place, then use various settings to try to ensure that only the desired subset of friends, friends of friends, or the general public has access to it.

This ties to the case I have been making for a while that people want to be able to separate their various personality shards among their various social circles. Even with access controls, using the same account for all of them may be too much connection, and the odds of accidentally releasing information to the wrong people are too high. I would like to see something like Tumblr provide stronger abilities to restrict discoverability, but it represents an interesting and growing alternative model to Facebook.