The Privacy Blog Podcast - Ep. 14: Mobile device privacy and the anti-surveillance tent.

This is episode 14 of the Privacy Blog Podcast for November 2013. In this episode I talk about: how your phone might be tracked even if it is off, the hidden second operating system in your phone, advertising privacy settings in Android KitKat, how Google is using your profile in caller ID, and the lengths to which Obama has to go to avoid surveillance when traveling.

The paradox of irresponsible responsibility

This article got me thinking: People's ignorance of online privacy puts employers at risk - Network World

There is an interesting paradox for security folks. On the one hand, almost two thirds of people feel that security is a matter of personal responsibility. On the other hand, few are actually doing very much to protect themselves.

In the workplace we see this manifest in the BYOD (bring your own device) trend. Workers want to use their own phones, tablets, and often laptops. Because it is their personal device, they don’t think the company has any business telling them how to secure it, or what they can or can’t do with it. Yet they want to be able to work with the company’s documents and intellectual property, and access company sensitive networks from that device.

When that trend intersects with the poor real-world security practiced by most people, the security perimeter of the business becomes both larger and weaker.

Realistically, it is too much to expect that users will be able to fully secure their devices, or that security professionals will be able to do it for them. The productivity impact of locking users out of the devices they use (whether BYOD or company provided) is often too high, especially in the case of technical workers. Spear phishing attacks eventually penetrate a very high fraction of targets, even against very sophisticated users. How then can we expect average, or below average, users to catch them, and catch them all?

Increasing use of sandboxing and virtualization is allowing a change in the security model. Rather than assuming the user will detect attacks, the attack is encapsulated in a very small environment where it can do little or no damage, and from which it is quickly eliminated and prevented from spreading. The trick will be to get people to actually use these tools on their own devices.

Lance Cottrell is the Founder and Chief Scientist of Anonymizer. Follow me on Facebook and Google+.

The second operating system hiding in every mobile phone

OS News has an interesting article: The second operating system hiding in every mobile phone. It discusses the security implications of the fact that all cell phones run two operating systems. One is the OS that you see and interact with: Android, iOS, Windows Phone, BlackBerry, etc. The other is the OS running on the baseband processor. It is responsible for everything to do with the radios in the phone, and is designed to handle all the real time processing requirements.

The baseband processor OS is generally proprietary, provided by the maker of the baseband chip, and generally not exposed to any scrutiny or review. It also contains a huge amount of historical cruft. For example, it responds to the old Hayes AT command set, which was used with old modems to control dialing, answering the phone, and setting the speed and other parameters required to get the devices to handshake.

It turns out that if you can feed these commands to many baseband processors, you can tell them to automatically and silently answer the phone, allowing an attacker to listen in on you.
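To give a flavor of what these commands look like, here is a minimal sketch of Hayes-style command framing. The specific commands shown are the classic dial-up modem versions, not a documented baseband interface; which of them a given baseband chip actually accepts is vendor-specific.

```python
# Minimal sketch of Hayes-style AT command framing. These are the classic
# modem-era commands; real baseband command sets are proprietary supersets.

def at_command(body: str) -> bytes:
    """Frame a Hayes command: 'AT' attention prefix, command body, CR terminator."""
    return f"AT{body}\r".encode("ascii")

# Classic commands relevant to the silent-answer scenario:
answer_now = at_command("A")      # ATA: go off-hook and answer an incoming call
auto_answer = at_command("S0=1")  # ATS0=1: auto-answer after one ring
speaker_off = at_command("M0")    # ATM0: speaker off, so the answer is inaudible
```

Chained together, commands like these are all it would take to turn a phone into a silent listening device, which is why an unreviewed processor that accepts them is worrying.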

Unfortunately the security model of these things is ancient and badly broken. Cell towers are assumed to be secure, and any commands from them are trusted and executed. As we saw at Def Con in 2010, it is possible for attackers to spoof those towers.

The baseband processor and its OS generally have higher privilege than the visible OS on the phone; in many designs the baseband is actually the master processor. That means the visible OS can't do much to secure the phone against these vulnerabilities.

There is not much you can do about this as an end user, but I thought you should know. :)

Tech companies respond to reports of NSA tracking switched-off mobile phones | Privacy International

Based on a single line in a Washington Post article, Privacy International has been investigating whether it is possible to track cell phones when they have been turned off. Three of the eight companies they contacted have responded.

In general they said that when the phone is powered down there is no radio activity, BUT that might not be the case if the phone had been infected with malware.

It is important to remember that the power button is not really a power switch at all. It is a logical button that tells the phone software that you want to turn the phone off. The phone can then clean up a few loose ends and power down… or not. It could also just behave as though it were shutting down while actually staying on.

They don’t cite any examples of this either in the lab or in the wild, but it certainly seems plausible.

If you really need privacy, you have two options (after turning the phone “off”):

1) If you can remove the phone’s battery, then doing so should ensure that the phone is not communicating.

2) If you can’t remove the battery (hello iPhone), then you need to put the phone in a Faraday cage. You can use a few tightly wrapped layers of aluminum foil, or buy a pouch like this one.

The Privacy Blog Podcast - Ep. 13: Adobe, Russia, the EU, Experian, Google, Silk Road, and Browser Fingerprinting

Welcome to episode 13 of our podcast for September 2013. In this episode I will talk about: a major security breach at Adobe, how airplane mode can make your iPhone vulnerable to theft, Russian plans to spy on visitors and athletes at the Winter Olympics, whether you should move your cloud storage to the EU to avoid surveillance, identity thieves buying your personal information from information brokers and credit bureaus, how to stop Google using your picture in its ads, why carelessness led to the capture of the operator of the Silk Road, and how browser fingerprinting allows websites to track you without cookies.

Please let me know what you think, and leave suggestions for future content, in the comments.

Opt out of Google ads using your name

Google is changing its terms of service to allow them to use your name and photo in advertisements to your friends. Most people seem to have been opted in to this by default, although some (including me) have found themselves defaulted out of the program.

If you are uncomfortable with your name, picture, and opinions appearing in ads from Google, just go to Google's Shared Endorsements Settings page. The page describes the program. At the bottom you will find a checkbox. Uncheck it, and click "Save".

The Privacy Blog Podcast – Ep.12: The Court Ruling Against Google’s Wi-Fi Snooping, Vulnerabilities in the iPhone Fingerprint Scanner, and Security Tips for iOS 7

Welcome to the 12th episode of The Privacy Blog Podcast brought to you by Anonymizer. In September’s episode, I will talk about a court ruling against Google’s Wi-Fi snooping and the vulnerabilities in the new iPhone 5s fingerprint scanner. Then, I’ll provide some tips for securing the new iPhone/iOS 7 and discuss the results of a recent Pew privacy study.

Hope you enjoy – feel free to add questions and feedback in the comments section.

ID Theft service had access to 3 giant data brokers

Krebs on Security discovered that a major identity theft service populated its databases by raiding the vaults of three of the biggest personal information brokers: LexisNexis, Dun & Bradstreet, and Kroll Background, which does employment background, drug, and health screening.

This is very bad news. The stolen data includes SSNs, birth dates, and the answers to almost any security question your bank or other sensitive website might ask.

This is further evidence of my thesis: if the data exists, it will eventually get out.

Shanghai getting a more relaxed Great Firewall

The South China Morning Post reports that the ban on Facebook, Twitter, the New York Times, and many other sites, will be lifted, but only in the Shanghai free-trade zone.

The information came from anonymous government sources within China. The purpose is to make the zone more attractive to foreign companies and workers who expect open Internet access. The sources say that the more open access may be expanded into the surrounding territory if the experiment is successful.

It will be interesting to see if this actually comes to pass.

Two questions occur to me. First, will the free-trade zone be considered to be outside the firewall, and hard to access from within the rest of China? Second, is this as much about surveillance of activity on those websites as it is about providing free access?

iPhone 5S fingerprint scanner tricked by Chaos Computer Club

The Chaos Computer Club (CCC) in Germany recently announced its successful bypassing of the new iPhone 5S fingerprint scanner.

Despite many media claims that the new scanner read deep layers of the skin and was not vulnerable to simple fingerprint duplication, that is exactly the attack that succeeded.

The CCC used a high resolution photo of a fingerprint on glass to create a latex duplicate, which unlocked the phone. It strikes me as particularly problematic that the glass surface of an iPhone is the perfect place to find really clear fingerprints of the owner.

PRISM fears being used by scammers

In a new attack, some websites have been set up to show visitors a splash page saying that the victim's computer has been blocked because it has been used to access illegal pornographic content. The user is then presented with a link to pay an instant "fine" of $300 to the scammers.

This is a new variant of "ransomware", the most common form of which is "fake AV": a fake anti-virus website or program claims to scan your computer for free, then charges you to remove malware that it has "detected".

Details and screenshots here.

Apparently Open WiFi is actually private

An important decision just came down from the Federal 9th Circuit Court of Appeals about whether Google can be sued for intercepting personal data from open WiFi networks. The intercepts happened as part of the Street View program. In addition to capturing pictures of their surroundings, the Street View vehicles also collected GPS information (to correctly place the pictures), MAC addresses (unique hardware identifiers), and SSIDs (user-assigned network names), and until 2010 they captured some actual data from those networks.

The purpose of the WiFi collection is to provide enhanced location services. GPS drains phone batteries quickly, and the weak signals may be unavailable indoors, or even under any significant cover. Nearly ubiquitous WiFi base stations provide another way of finding your location. The Street View cars capture their GPS coordinates along with all of the WiFi networks they can see. Your phone can then simply look at the WiFi networks around it, and ask the database what location corresponds to what it is seeing. WiFi is often available indoors, has short range, requires much less power, and is generally turned on in any case.

Google claims that capturing the actual data was an accident and a mistake.
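The lookup the phone performs can be sketched with a toy survey database. The access-point MACs (BSSIDs) and coordinates below are invented for illustration; real databases are vastly larger, and production services weight the estimate by signal strength rather than taking a plain average.

```python
# Toy sketch of WiFi-based positioning. BSSIDs and coordinates are made up;
# real services use huge survey databases and signal-strength weighting.

# Hypothetical survey data: access-point MAC -> (latitude, longitude)
AP_DB = {
    "aa:bb:cc:00:00:01": (32.7157, -117.1611),
    "aa:bb:cc:00:00:02": (32.7161, -117.1615),
}

def estimate_location(visible_bssids):
    """Average the surveyed coordinates of whichever known APs are in view."""
    hits = [AP_DB[b] for b in visible_bssids if b in AP_DB]
    if not hits:
        return None  # no surveyed network in sight; fall back to GPS
    lat = sum(p[0] for p in hits) / len(hits)
    lon = sum(p[1] for p in hits) / len(hits)
    return (lat, lon)
```

A phone only has to report the BSSIDs it can currently see; no connection to any of the networks is needed, which is why a passing car can build the database in the first place.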

Unfortunately that data contained usernames, passwords and other sensitive information in many cases. A lawsuit was filed accusing Google of violating the Wiretap Act when it captured the data. There is no suggestion that the data has been leaked, misused, or otherwise caused direct harm to the victims.

The ruling was on a motion to dismiss the lawsuit on the grounds that Google’s intercepts were protected under an exemption in the Wiretap Act which states that it is OK to intercept radio communications that are “readily accessible” to the general public. The Act specifically states that encrypted or scrambled communications are NOT readily accessible, but the decision hangs on exactly what IS readily accessible. The court ruled that WiFi did not count as “radio” under the Act because several types of radio communications were enumerated, and this was not one of them. They then considered this case under the umbrella of “electronic communications”, which also has an exemption for readily accessible communications. On that, they decided that open WiFi is not readily accessible.

From a privacy perspective, this is good news. It says that people who intercept your information from your open WiFi can be punished (if you ever find out about it). This would clearly prevent someone from setting up a business to automatically capture personal and marketing data from coffee shop WiFis around the world. It is less likely to have any impact on criminals. I am concerned that it will also lead to a sense of false confidence, and perhaps cause people to leave their WiFi open, rather than taking even minimal steps to protect themselves.

The hacker / tinkerer / libertarian in me has a real problem with this ruling. It is really trivial to intercept open WiFi. Anyone can join any open WiFi network. Once joined, all the data on that network is available to every connected device. Easy, free, point-and-click software allows you to capture all of the data from connected (or even un-connected) open WiFi networks. If you are debugging your home WiFi network, you could easily find yourself capturing packets from other networks by accident. They are in the clear. There is no hacking involved. It is like saying that you cannot tune your radio to a specific station, even though it is right there on the dial.

I think peeping in windows is a reasonable analogy. If I am standing on the sidewalk, look at your house, and see something through your windows that you did not want me to see, that is really your problem. If I walk across your lawn and put my face against the glass, then you have a cause to complain.

Open WiFi is like a window without curtains, or a postcard. You are putting the data out there where anyone can trivially see it. Thinking otherwise is willful ignorance. All WiFi base stations have the ability to be secured, and it is generally as simple as picking a password and checking a box. You don’t even need to pick a good password (although you really should). Any scrambling or encryption clearly moves the contents from being readily accessible, to being intentionally protected. If you want to sunbathe nude in your back yard, put up a fence. If you want to have privacy in your data, turn on security on your WiFi router.

I think that radio communications are clearly different than wired. With radio, you are putting your data on my property, or out into public spaces. There is no trespass of any kind involved to obtain it, and we have no relationship under which you would expect me to protect the information that you have inadvertently beamed to me. It would be like saying that I can’t look at your Facebook information that you made public because you accidentally forgot to restrict it. 

Similar to provisions of the DMCA, which outlaw much research on copy protection schemes, this is likely to create accidental outlaws of researchers, and the generally technical and curious.


The Privacy Blog Podcast – Ep.11: Lavabit & Silent Circle Shutdown, Hoarding Bitcoins, and “Spy” Trash Cans in London

Welcome to Episode 11 of The Privacy Blog Podcast, brought to you by Anonymizer. In this episode, I’ll discuss the shutdown of secure email services by Lavabit and Silent Circle. In addition, we’ll dive into the problem with hoarding Bitcoins and how you can protect yourself while using the increasingly popular online currency. Lastly, I’ll chat about whether teens actually care about online privacy and an ad agency’s shocking decision to use high-tech trash cans to measure Wi-Fi signals in London.

Please leave any questions or feedback in the comments section. Thanks for listening.

Easy bypass to Android App signing discovered

Infosec Institute published an article showing in detail how application signing on Android devices can be defeated.

This trick allows the attacker to modify a signed application without causing the application to fail its signature check.

The attack works by exploiting a flaw in the way signed files in the .apk zip file are installed and verified. Most zip tools don't allow duplicate file names, but the zip standard does support them. The problem is that, when confronted with such a situation, the signature verification system and the installer do different things.

The signature verifier checks the first copy of a duplicated file, but the installer actually installs the last one.

So, if the first version of a file in the archive is the real one, the package will check as valid, but your evil second version actually gets installed and run.
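You can see the underlying zip quirk with Python's standard zipfile module. The entry name and contents below are stand-ins for a real APK's signed and injected files; the demo shows the format-level ambiguity, not the exact Android code paths.

```python
# Demonstrate that a zip archive can hold two entries with the same name,
# and that "first entry" and "lookup by name" readers see different data.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("classes.dex", b"original, signed code")  # what the verifier checks
    zf.writestr("classes.dex", b"injected code")          # duplicate name (warns, but legal)

with zipfile.ZipFile(buf) as zf:
    entries = zf.infolist()                     # both entries are present
    seen_by_verifier = zf.read(entries[0])      # a first-entry reader gets the original
    seen_by_installer = zf.read("classes.dex")  # a by-name reader gets the last entry
```

Two readers, both "correct" by their own logic, end up looking at different bytes, which is exactly the gap the attack slips through.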

This is another example of vulnerabilities hiding in places you least expect.

Teens are not the no-privacy generation after all

Report: Teens Actually Do Care About Online Privacy -- Dark Reading

I keep hearing people say that young people today don't care about privacy, and that we are living in a post privacy world. This is clearly not the case.

Teens share a lot, maybe much more than I would be comfortable with, but that does not mean that they share everything, or don't care about where that information goes.

A new report from the Pew Research Center says that over half of teens have avoided or uninstalled a mobile app because of privacy concerns. This is a sign that they are privacy aware and willing to do something about it.

Teens almost always have something that they want to hide, if only from their parents.