Lavabit and Silent Mail shutdowns

There has been a lot of chatter about the implications of first Lavabit and then Silent Circle's Silent Mail being shut down by their operators.

In both cases, it appears that there was information visible to the services which could be compelled by search warrants, court orders, or national security letters.

I want to assure Anonymizer users that we have no such information about Anonymizer Universal users that could be compelled. While we know who our customers are, for billing purposes, we have no information at all about what they do.

This has been tested many times, under many different kinds of court orders, and no user activity information has ever been provided, or could be provided.

The Privacy Blog Podcast – Ep.10: Storage Capacity of the NSA Data Center, Royal Baby Phishing Attacks, and how your SIM Card is Putting you at Risk

Welcome to Episode 10 of The Privacy Blog Podcast, brought to you by Anonymizer. In July’s episode, I’ll be talking about the storage capacity of the NSA’s data center in Utah and whether the US really is the most surveilled country in the world. Next, I’ll explain why the new royal baby is trying to hack you and how your own phone’s SIM card could be putting your privacy at risk.

Lastly, I’ll discuss the current legal status of law enforcement geolocation, Yahoo!’s decision to reuse account names, and some exciting Anonymizer Universal news.

As always, feel free to leave any questions in the comments section. Thanks for listening!

No warrant needed for cell location information in the Fifth US Circuit

ArsTechnica has a nice article on a recent ruling by the US Fifth Circuit court of appeals.

In this 2-1 decision, the court ruled that cellular location information is not covered by the Fourth Amendment, and does not require a warrant. The logic behind this ruling is that the information is part of business records created and stored by the mobile phone carriers in the ordinary course of their business.

Therefore, the data actually belongs to the phone company, and not to you. The Stored Communications Act says that law enforcement must get a warrant to obtain the contents of communications (the body of an email or the audio of a phone call), but not for metadata like sender, recipient, or location.

The court suggests that if the public wants location information to be private, they should demand (I suppose through market forces) that providers delete or anonymize it, and that legislation be enacted to require warrants for access. Until then, they say, we have no expectation of privacy in that information.

The Fifth Circuit covers Louisiana, Mississippi, and Texas.

This ruling conflicts with a recent New Jersey Supreme Court decision, in which that court unanimously ruled that law enforcement does not have that right; that ruling applies only in New Jersey.

Montana has a law requiring a warrant to obtain location information, while in California a similar bill was vetoed.

It seems very likely that one or more of these cases will go to the Supreme Court.

MaskMe is a good complement to Anonymizer

MaskMe (introduced in this blog post) is an interesting new entrant in the privacy services space.

They provide "masked" email addresses (like our old Nyms product), phone numbers, and credit cards.

Combined with Anonymizer Universal, you will be able to do a fairly comprehensive job of shielding your true identity from websites and services you use.

This is a brand new service, so it is hard to know how it will fare, but it is certainly worth watching.

The Privacy Blog Podcast – Ep.9: Government Surveillance Programs, Facebook Shadow Profiles, and Apple’s Weak Hotspot Security

Welcome to the June edition of the Privacy Blog Podcast, brought to you by Anonymizer. In June’s episode, I’ll discuss the true nature of the recently leaked surveillance programs that have dominated the news this month. We’ll go through a quick tutorial about decoding government “speak” regarding these programs and how you can protect yourself online.

Later in the episode, I’ll talk about Facebook’s accidental creation and compromise of shadow profiles along with Apple’s terrible personal hotspot security and what you can do to improve it.

Thanks for listening!

Can you be forced to decrypt your files?

Declan McCullagh at CNET writes about the most recent skirmish over whether a person can be forced to decrypt their encrypted files.

In this case, Jeffery Feldman is suspected of having almost 20 terabytes of encrypted child pornography. Evidence of his use of eMule, a peer-to-peer file sharing tool, showed filenames suggestive of such content. Child porn makes for some of the worst case law because it is such an emotionally charged issue.

A judge had ordered Mr. Feldman to decrypt the hard drive, or furnish the pass phrase, by today. After an emergency motion, he has been given more time while the challenge to the order is processed.

The challenge is over whether being compelled to decrypt data is equivalent to forced testimony against oneself, which is forbidden by the Fifth Amendment. The prosecution's position is that an encryption key is similar to a key to a safe, which may be compelled. Some prior cases have come down on the side of forcing the decryption, but not all.

If it was plausible that the suspect might not know how to decrypt the file, that would make things even more interesting. For now, the moral of the story is that you can't rely on the Fifth Amendment to protect you from contempt of court charges in the United States if you try to protect your encrypted data. Outside the US, your mileage may vary.

Law Enforcement Back Doors

Bruce Schneier has a great post on issues with CALEA-II.

He talks about two main issues, with historical context.

First, the vulnerabilities that automated eavesdropping backdoors always create in communications systems, and how they disadvantage US companies.

Second, the fact that law enforcement claims of communications "Going Dark" are absurd, given the treasure trove of new surveillance information available through social media and cloud services (like Gmail).

I know I have talked about this issue a lot over the years, but I am shocked that I can't find any posts like it on this blog.

Bruce does it really well in any case.

The Privacy Blog Podcast - Ep.8: Phishing Attacks, Chinese Hackers, and Google Glass

Welcome to The Privacy Blog Podcast for May 2013. In this month’s episode, I’ll discuss how shared hosting is increasingly becoming a target and platform for mass phishing attacks. Also, I’ll speak about the growing threat of Chinese hackers and some of the reasons behind the increase in online criminal activity.

Towards the end of the episode, we’ll address the hot topic of Google Glass and why there’s so much chatter regarding the privacy and security implications of this technology. In related Google news, I’ll provide my take on the recent announcement that Google is upgrading the security of their public keys and certificates.

Leave any comments or questions below. Thanks for listening!

Google upgrades SSL Certs to 2048 bit

Yesterday Google announced that it was updating its certificates to use 2048-bit public key encryption, replacing the previous 1024-bit RSA keys.

I have always found the short keys used by websites somewhat shocking. I recall discussions back in the early 1990s about whether 1024 bits was good enough for PGP keys. Personally, I liked to go to 4096 bits, although that was not really officially supported.

The fact that, 20 years later, only a fraction of websites have moved up to 2048 bits is incredible to me.

Just as a note, you often see key strengths described in bit length with RSA being 1024 or 2048 bits, and AES being 128 or 256 bits.

This might lead one to assume that RSA is much stronger than AES, but the opposite is true (at these key lengths). The problem is that the two systems are attacked in very different ways. AES is attacked by a brute force search through all possible keys until the right one is found. If the key is 256 bits long, then you need to try, on average, half of the 2^256 keys. That is about 10^77 keys (a whole lot). This attack is basically impossible for any computer that we can imagine being built, in any amount of time relevant to the human species (let alone any individual human).

By comparison, RSA is broken by factoring the 1024- or 2048-bit modulus in the key into its two prime factors. While very hard, it is not like brute force. It is generally thought that 1024-bit RSA is about as hard to crack as 80-bit symmetric encryption. Not all that hard.
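The arithmetic behind these comparisons is easy to check. Here is a minimal Python sketch; the RSA-to-symmetric equivalences in the table are the commonly cited approximations (based on the cost of the general number field sieve), not exact figures from any one source.

```python
# Compare the work factor of brute-forcing AES with the commonly cited
# symmetric-equivalent strength of RSA at various modulus sizes.

def aes_average_tries(key_bits: int) -> int:
    """A brute-force search tries, on average, half of the 2**key_bits keys."""
    return 2 ** (key_bits - 1)

# Approximate symmetric-equivalent strengths for RSA moduli (commonly
# cited estimates, not exact values).
RSA_SYMMETRIC_EQUIVALENT = {1024: 80, 2048: 112, 3072: 128}

tries = aes_average_tries(256)
print(f"AES-256 average brute-force tries: 2^255 = {tries:.2e}")
print(f"1024-bit RSA is roughly as strong as {RSA_SYMMETRIC_EQUIVALENT[1024]}-bit symmetric")
print(f"2048-bit RSA is roughly as strong as {RSA_SYMMETRIC_EQUIVALENT[2048]}-bit symmetric")
```

So even an 80-bit-equivalent search is about 10^24 tries, but that is some 53 orders of magnitude easier than attacking AES-256, which is why the short RSA keys still in use are the weak link.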

Government-enabled Chinese criminal hacking.

Thanks to the Financial Times for their article on this.

When we hear that a company has been hacked by China, what is usually meant is that the company has been hacked from a computer with a Chinese IP address. The immediate implication is that it is Chinese government sponsored.

Of course, there are many ways in which the attacks might not be from anyone in China at all. Using proxies or compromised computers as relays would allow the attacker to be anywhere in the world while appearing to be in China. The fact that there is so much hype about Chinese government hacking right now makes China the perfect false flag for any attacker. It sends investigators down the wrong path immediately. However, there is growing evidence that many of the attacks are actually being perpetrated by independent Chinese civilian criminal hackers out to make a buck. They are intent on stealing and selling intellectual property. The huge supply, and underemployment, of computer-trained people in China may be to blame. They have the skills, the time, and a need for money.

The Chinese government has also been very lax about trying to track down these individuals and generally suppress this kind of activity. The hacking activity is certainly beneficial to the Chinese economy, as the IP is generally stolen from outside China and sold to give advantage to Chinese companies. That gives a kind of covert and subtle support to the hacking activity without any actual material help or direction.

So, it is not quite government sponsored, and it IS actually Chinese. The bottom line is that it is a real problem, and a threat that is actually harder to track down and prevent because it is so amorphous.

Hacking for counter surveillance

Another from the "if the data exists, it will get compromised" file.

This article from the Washington Post talks about an interesting case of counter surveillance hacking.

In 2010, Google disclosed that Chinese hackers had breached its servers. What only recently came to light was that one of the things compromised was a database containing information about government requests for email records.

Former government officials speculate that they may have been looking for indications of which of their agents had been discovered. If there were records of US government requests for information on any of their agents, it would be evidence that those agents had been exposed. This would allow the Chinese to shut down operations to prevent further exposure and to get those agents out of the country before they could be picked up.

I had not thought about subpoenas and national security letters being a counter intelligence treasure trove, but it makes perfect sense.

Because Google / Gmail are so widely used, they present a huge and valuable target for attackers. Good information on almost any target is likely to live within their databases.

Attackers are going after water plants and other infrastructure

It is often debated if, and how often, hackers are going after critical infrastructure like water plants, generators, and such.

MIT Technology Review reports on security researcher Kyle Wilhoit's exploration of this question. He set up two fake control systems and one real one (just not connected to an actual plant), which he then connected to the Internet.

Over the course of the one-month experiment, he detected 39 sophisticated attacks against his "honeypot" systems. The attackers did not just penetrate the systems, but also manipulated their settings, which would have had real-world impacts had these been real systems.

One must assume that the same is happening to any real Internet accessible industrial control systems.

Signed Mac Malware discovered on activist's laptop

ArsTechnica reports on the discovery of signed malware designed for surveillance on the Mac laptop of an Angolan activist.

The malware was a trojan that the activist obtained through a spear phishing email attack. The news here is that the malware was signed with a valid Apple Developer ID. 

The idea is that having all code signed should substantially reduce the amount of malware on the platform. This works because creating a valid Apple Developer ID requires significant effort, and may expose the identity of the hacker unless they take steps to hide their identity. This is not trivial as the Developer ID requires contact information and payment of fees.

The second advantage of signed code is that the Developer's certificate can be quickly revoked, so the software will be detected as invalid and automatically blocked on every Mac world wide. This limits the amount of damage a given Malware can do, and forces the attacker to create a new Apple Developer ID every time they are detected.

This has been seen to work fairly well in practice, but it is not perfect. If a target is valuable enough, a Developer ID can be set up just to go after that one person or small group. The malware is targeted to just them, so the likelihood of detection is low. In this case, it would continue to be recognized as a legitimately signed, valid application for a very long time.

In the case of the Angolan activist, it was discovered at a human rights conference where the attendees were learning how to secure their devices against government monitoring.

Cloud and telecom need the same legal protection as snail mail.

The ACLU just posted an article about a recent federal magistrate judge's ruling. It is a somewhat bizarre case. The DEA had an arrest warrant for a doctor suspected of selling prescription pain killer drugs for cash. They then requested a court order to obtain his real time location information from his cell provider.

The judge went along, but then published a 30-page opinion stating that no order or warrant should have been required for the location information because the suspect had no expectation of location privacy. If he wanted privacy, all he had to do was turn off his phone (which would have prevented the collection of the information entirely, not just established his expectation).

So, if this line of reasoning is picked up and becomes precedent, it is clear that anyone on the run needs to keep their phone off and/or use burner phones paid for with cash.

My concern is this: if there is no expectation of privacy, is there anything preventing government entities from requesting location information on whole populations without any probable cause or court order?

While I think that the use of location information in this case was completely appropriate, I would sleep better if there were the check and balance of requiring a court order before getting it.

This is another situation where technology has run ahead of the law. The Fourth Amendment was written in a time when information was in tangible form, and the only time it was generally in the hands of third parties was when it was in the mail. Therefore, search of mail in transit was specially protected.

Today, cloud and telecommunication providers serve much the same purpose as the US Postal Service, and are used in similar ways. It is high time that the same protection extended to snail mail be applied to the new high tech communications infrastructures we use today.

Is anyone here actually a bad guy?

Wendy Nather at Dark Reading has a post on the explosion of white hat "offensive defense".

She speaks to an issue I have been thinking about for some time. More and more security firms and internal security groups are going "offensive". They are setting up more and more honey pots, creating fake malware, posting about false vulnerabilities, and actively participating in hacker forums. Even the hackers are getting in on the action by dropping false information and leads.

At what point does the false information start to swamp the real and cause the value of the collected intelligence to degrade? Undercover law enforcement calls this problem "blue on blue," where one group (typically overt) is actively investigating an undercover group.

I was told a story like this by a friend in law enforcement. He told of a drug case: a deal was going down in a warehouse between some drug distributors and drug importers. In the middle of the transaction, the warehouse was raided by the local police. It turned out that everyone there was in law enforcement.

Even if that story is apocryphal, it illustrates what we are likely to see on-line. Undercover work is in many ways easier, and certainly less dangerous, on-line, and we are likely to see many private investigations in addition to official law enforcement activities.

This is likely to get interesting. The Internet may start to feel like Cold War Vienna, where you never know where anyone really stands.

Google Glass and Surveillance

There is a lot of buzz right now about how Google Glass will lead to some kind of universal, Orwellian surveillance state.

I think this misses the point. We are going there without Google Glass. Private surveillance is becoming ubiquitous. Any place of business is almost certain to have cameras. After the Boston bombings, we are likely to see the same proliferation of street cameras that has already happened in London and many other places.

The meteor over Russia earlier this year made me aware of just how common personal dashboard cameras are there. It seems likely that they will be common everywhere in not too many years.

Smart phone cameras are already doing an amazing job of capturing almost any event that takes place anywhere in the world.

So, you are probably being filmed by at least one camera almost any time you are away from your house.

David Brin and others have been arguing for "sousveillance". If surveillance is those with power looking down from above, sousveillance is those without power looking back. It tends to have a leveling effect. Law enforcement officers are less likely to abuse their power if they are being recorded by private cameras. Similarly and simultaneously they are protected against false claims of abuse from citizens.

I would rather see ubiquitous private cameras than ubiquitous government cameras. If there is a major incident, the public will send in requested footage, but this arrangement would make broad dragnet fishing and facial recognition based tracking more difficult.

An interesting counter trend may be in the creation of camera free private spaces. Private clubs, restaurants, gyms, etc. may all differentiate themselves in part based on their surveillance / sousveillance policies.

Why California’s Suggested 100 Word Privacy Policy is the Best Worst Idea

A guest post by Janelle Pierce, who enjoys writing about various business issues and spends her time answering questions like, "what is point of sale"?

Just last month, California Assemblymember Ed Chau (D-Alhambra) introduced a bill that would require the website privacy policy of any company located in California to be no more than 100 words long and written at the reading level of an 8th grade student.

While Chau’s practice-what-you-preach, 64-word bill has garnered a lot of negative press lately, one thing is certain: it has gotten people talking about something most people rarely discuss, the privacy policy. For those who don’t know what a privacy policy is, it’s simply the legal document that every website must have. According to Wikipedia, a privacy policy is:

“A statement or a legal document (privacy law) that discloses some or all of the ways a party gathers, uses, discloses and manages a customer or client's data. Personal information can be anything that can be used to identify an individual, not limited to but including; name, address, date of birth, marital status, contact information, ID issue and expiry date, financial records, credit information, medical history, where you travel, and intentions to acquire goods and services.”

Whenever you register a username on a website, whether for free e-mail, picture sharing, or social networking, you must agree to the site’s established privacy policy. Generally speaking, most users simply click “accept” without ever reading, much less understanding, what is written in the privacy policy. This is often because site privacy policies are long, written in confusing legalese, and overshadowed by the false assumption that a site with a privacy policy will keep your data private. While I do agree that ultimately the responsibility for reading and understanding the privacy policy lies with the users of a site, the same can be said about those who write and present the policy.

Which brings me to the point I’d like to make: I think Chau’s idea to force privacy policies to a maximum of 100 words, and to require that they be written at an eighth grade reading level, is a good one. However, I do feel it has a few drawbacks that almost undermine its credibility. First, requiring that a legal document be 100 words or less is a little shortsighted. Don’t get me wrong, I think the thought behind making this otherwise lengthy, unreadable, and downright obnoxious (yet important) document accessible to everyone is a great goal, but 100 words or less doesn’t give a company the chance to disclose everything it needs to disclose. I think a maximum word count should be required, but there is no reason it needs to be so low.

Second, I think requiring an 8th grade reading level is an excellent idea. Too often these policies are chock-full of legal words and phrases that even college-educated users cannot make sense of. That being said, I think Chau’s attempt at “rewriting” the privacy policy is a good one, albeit a little shortsighted. Like many things in life that we’ve put up with for too long, the privacy policy is definitely in need of an overhaul. However, trying to shore up its shortcomings all at once, and in such an aggressive manner, may not be the right approach. There’s no doubt that something needs to be done about the state of the average privacy policy, but rushing headlong into it so aggressively tends to alienate people who would otherwise be supporters of Chau’s intention.

For help creating a privacy policy you can contact a business lawyer or simply use an online privacy policy generator.

Do you read privacy policies or simply click “accept”? Share your thoughts below.

Japanese ask sites to block "abusive" TOR users.

Wired reports on a move by the Japanese government to ask websites to block users who "abuse" TOR. 

I assume that TOR is being used as an example, and it would apply to any secure privacy tool.

The interesting question is whether this is simply a foot in the door on the way to banning anonymity, or at least making its use evidence of evil intent.

Currently, public privacy services make little effort to hide themselves. Traffic from them is easily detected as being from an anonymity system. If blocking becomes common, many systems may start implementing more effective stealth systems, which would make filtering anonymity for security reasons even harder.
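To see why such blocking is easy today, consider that Tor's exit-node addresses are published. A hypothetical sketch (the function, variable names, and addresses below are illustrative; a real deployment would periodically fetch one of the published exit-node lists) shows that site-side blocking is little more than a set lookup:

```python
# Hypothetical sketch of site-side blocking of a public anonymity network.
# Because Tor's exit addresses are published, identifying such traffic is
# just a membership test. The addresses below are documentation-range
# placeholders (RFC 5737), not real exit nodes.

def is_known_exit(client_ip: str, exit_nodes: set) -> bool:
    """Return True if the client's address appears in a fetched exit-node list."""
    return client_ip in exit_nodes

# In practice this set would be refreshed periodically from a published list.
example_exit_list = {"192.0.2.10", "198.51.100.7"}

print(is_known_exit("192.0.2.10", example_exit_list))   # a listed exit node
print(is_known_exit("203.0.113.5", example_exit_list))  # an ordinary client
```

A stealthier anonymity system defeats exactly this kind of check by making its relay addresses unpublished and its traffic indistinguishable from ordinary connections.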

Postmortem Social Media (a.k.a. virtual zombies)

For millennia people have asked the question “what happens to us when we die?”

While the larger spiritual question will continue to be debated, the question about what happens to our on-line data and presence is more recent, and also more tractable.

Until very recently little thought has been given to this issue. Accounts would continue until subscriptions lapsed, the website shut down, or the account was closed for inactivity.

This has led to some rather creepy results. I have lost some friends over the last few years, but I continue to be haunted by their unquiet spirits, which remind me of their birthdays, ask me to suggest other friends for them, and generally keep bobbing in my virtual peripheral vision.

Many social media sites do have a process for dealing with accounts after the death of their owners, but the processes are cumbersome and I have never actually seen them used. Generally, they are only engaged postmortem by the family of the deceased. Assuming the family doesn’t have the passwords to the account, they need to contact the provider in writing and provide proof both that they are relatives and that the account’s owner has died.

Google has an interesting idea that I would like to see other sites adopt. They have set up the “Google Inactive Account Manager,” which allows the user to specify in advance what will happen. The user specifies what length of inactivity should be taken as a sign of death. Once that is triggered, Google contacts the user using secondary email accounts and phone numbers, if available, to make sure this was not just a long vacation or a loss of interest. If there is no response, then the Inactive Account Manager kicks in.

It notifies a list of people that you specify that this has happened. You have the option of having your data packaged up and sent to some or all of those people. Finally, you may have it delete your account, or leave it available but closed as a memorial.

This may not be the perfect implementation of this concept, but it is an important step.

So please, set up your digital will, and let's put a stop to the digital zombie apocalypse.

Do you have a right to be forgotten?

The right to be forgotten is a topic discussed more in Europe than in the US. The core question is whether you have a right to control information about yourself that is held and published on the Internet by third parties.

This includes social media, news sites, discussion forums, search engine results, and web archives.

The information in question may be true or false, and anything from embarrassing to libelous.


Often, discussions about removing old information center on calls for Google to remove it from their search results. I think Google is chosen because it is the dominant search engine, and people feel that if the information is not shown in Google, then it is effectively gone. Of course, search engines are really just pointing to the actual data, which generally lives on some other website.

Being removed from Google does nothing to the existence of the information, nor would it impact indexing of that information by other search engines.


Even if you get the hosting website to remove the information, there are many organizations, like archive.org, that may have copied and archived the information, thus keeping it alive and available.

Here are some examples of information that you might want removed.

  • Racist rantings on an old social media site to which access has been lost.
  • Drunk party pictures on a friend’s social media account.
  • Newspaper articles about dubious business activities.
  • Court records of a conviction after the sentence has been completed.
  • Negative reviews on a review website.
  • Unflattering feedback on a dating website.


In many of these cases, your “right to be forgotten” runs directly into another person’s “right to free speech”.


My thinking on this is still evolving, and I would welcome your thoughts and feedback. Right now I think that the free speech right trumps the right to be forgotten, except in specific situations which need to be legally carved out individually: things like limitations on how long credit information should be allowed to follow you. Of course, the problem will be that every country will draw these lines differently, making enforcement and compliance very difficult and leading to opportunities for regulatory arbitrage.


We are already seeing this in the EU. While most of the EU is moving towards codifying a right to be forgotten, the UK is planning to opt out of that.