The Hola peer-to-peer VPN service suffered a number of very damaging security revelations today, including exploitable vulnerabilities, exposed administrative tools, and a broken architecture affecting the service's 45 million active users.
In two separate recent cases, Uber has abused, or talked about abusing, the information it holds about its customers' movements.
First, BuzzFeed reporter Johana Bhuiyan learned that she had been tracked on her way to a meeting by Josh Mohrer, general manager of Uber New York.
Second, Emil Michael, Uber's SVP of business, talked at a private dinner about the possibility of using the information Uber holds about hostile reporters to dig up dirt on them.
Apparently Uber has an internal tool called "God View" which is fairly widely available to employees and allows tracking of any car or customer. Obviously such information must exist within Uber's systems for the company to operate its business, but access to it for personal or inappropriate purposes is very worrying, and may put the safety of customers at risk.
While Uber is the company that got caught, the potential for this kind of abuse exists in a tremendous number of businesses. We give sensitive personal information to these companies in order to allow them to provide the services that we want, but we are also trusting them to treat the data appropriately.
Last year there was a scandal within the NSA about a practice called “LOVEINT”. The name is an inside joke. Signals intelligence is called “SIGINT”, human intelligence is called “HUMINT”, so intelligence about friends and lovers was called “LOVEINT”. In practice, people within the NSA were accessing the big national databases to look up information on current or former partners, celebrities, etc.
The exact same risk exists within all of these businesses, but generally with far weaker internal controls than in the government.
I think that the solution is not to insist on controls that would be difficult to enforce, or to ban the keeping of information these companies genuinely need, but rather to give users visibility into when their information is viewed, why, and by whom. Abuse could then be quickly detected and exposed, while the business continues to operate as it needs to.
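As a sketch of what that kind of transparency might look like (all the names here are hypothetical, not any real company's system), a service could record every employee access to a customer record and expose the log to that customer:

```python
import datetime

class AuditedStore:
    """Toy customer-record store that logs who viewed each record and why."""

    def __init__(self):
        self._records = {}
        self._access_log = {}  # customer_id -> list of access entries

    def put(self, customer_id, data):
        self._records[customer_id] = data

    def get(self, customer_id, employee, reason):
        # Every read is logged; the customer can later review this log.
        entry = {
            "when": datetime.datetime.utcnow().isoformat(),
            "who": employee,
            "why": reason,
        }
        self._access_log.setdefault(customer_id, []).append(entry)
        return self._records[customer_id]

    def access_history(self, customer_id):
        """What the customer would see: every view of their data."""
        return self._access_log.get(customer_id, [])

store = AuditedStore()
store.put("cust-42", {"trips": ["JFK -> Midtown"]})
store.get("cust-42", employee="ops-7", reason="support ticket")
print(store.access_history("cust-42"))
```

The point is not the mechanism, which is trivial, but the incentive: an employee who knows the customer will see "who" and "why" is far less likely to snoop.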
Apple is getting taken to task for a couple of security issues.
First, their recently announced "random MAC address" feature does not appear to be as effective as hoped. The idea is that an iOS 8 device will use randomly generated MAC addresses to probe WiFi base stations when it is not actively connected to a WiFi network. This allows your phone to identify known networks and to use WiFi for enhanced location information without revealing your identity or allowing you to be tracked. Unfortunately, the MAC address only changes when the phone is asleep, which is rare given the constant stream of push notifications. The effect is that the "random" MAC addresses change relatively infrequently. The feature is still good, but needs some work to be actually very useful.
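Generating a fresh randomized MAC address is itself cheap, which is what makes the infrequent rotation disappointing. A minimal sketch of how one is constructed (this is illustrative, not Apple's actual implementation):

```python
import random

def random_mac():
    """Generate a random locally administered, unicast MAC address.

    The two low bits of the first octet matter:
      - bit 0 (multicast) must be 0 for a unicast address
      - bit 1 (locally administered) is set to 1 so the address
        never collides with vendor-assigned (OUI) space
    """
    first = random.randrange(256)
    first = (first | 0x02) & 0xFE  # set local bit, clear multicast bit
    octets = [first] + [random.randrange(256) for _ in range(5)]
    return ":".join(f"{o:02x}" for o in octets)

print(random_mac())
```

If the address were regenerated before every probe, a WiFi tracker would see each probe as a different device; rotating only during sleep collapses that protection.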
Second, people are noticing their passwords showing up in Apple's iOS 8 predictive keyboard. The keyboard is designed to recognize phrases you type frequently so it can propose them as you type, thus speeding message entry. The problem is that passwords often follow user names, and may be typed frequently. Research suggests the problem comes from websites that fail to mark their password fields. Apple is smart enough to ignore text in known password fields, but if a field is not marked as a password field, the learning happens. It is not clear that this is Apple's fault, but it is still a problem for users. Auto-fill using the latest version of 1Password should protect against this.
Since the iPhone was introduced, Apple has had the ability to decrypt the contents of iPhones and other iOS devices when asked to do so (with a warrant).
Apple recently announced that with iOS 8 Apple will no longer be able to do so. Predictably, there has been a roar of outrage from many in law enforcement. [[Insert my usual rant about how recent trends in technology have been massively in favor of law enforcement here]].
This is really about much more than keeping out law enforcement, and I applaud Apple for (finally) taking this step. They have realized what was for Anonymizer a foundational truth. If data is stored and available, it will get out. If Apple has the ability to decrypt phones, then the keys are available within Apple. They could be taken, compromised, compelled, or simply brute forced by opponents unknown. This is why Anonymizer has never kept data on user activity.
Only by ensuring that it cannot decrypt the devices can Apple provide actual security to its customers against the full range of threats, of which US law enforcement is arguably the least worrying.
On Sunday I appeared on The Social Network Show talking about general privacy and security issues. Follow the link below for the show’s post and audio. The Social Network Show on KDWN Presents Lance Cottrell — The Social Network Station
On July 2, Google engineers discovered unauthorized certificates for Google domains in circulation. They had been issued by the National Informatics Centre (NIC) in India, a trusted sub-authority under the Indian Controller of Certifying Authorities (CCA). The CCA is in turn part of the Microsoft Root Store of certificates, so just about any program running on Windows, including Internet Explorer and Chrome, will trust the unauthorized certificates.
The power of this attack is that the holder of the private key to the certificate can impersonate secure Google servers. Your browser would not report any security alerts because the certificate is “properly” signed and trusted within the built in trust hierarchy.
Firefox does not have the CCA in its root certificate list and so is not affected. Likewise Mac OS, iOS, Android, and Chrome OS are safe from this particular incident as well.
It is not known exactly why these certificates were issued, but the obvious use would be national surveillance.
While this attack seems to be targeted at India and only impacts the Microsoft ecosystem, the larger problem is much more general. There is a long list of trusted certificate authorities, which in turn delegate trust to a vast number of sub-authorities, any of whom can trivially create certificates for any domain, which your computer will then trust.
In this case the attack was detected quickly, but if it had been very narrowly targeted detection would have been very unlikely and monitoring could have continued over very long periods.
As an end user, you can install Certificate Patrol in Firefox to automatically detect when a website’s certificate is changed. This would detect this kind of attack.
On Chrome you should enable “Check for server certificate revocation” in advanced settings. That will at least allow quick protection once a certificate is compromised.
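The core idea behind Certificate Patrol is certificate pinning: record the fingerprint of a site's certificate the first time you see it, and raise an alarm whenever it changes. Under that assumption (the pinned fingerprint comes from an earlier, trusted connection), the check itself is only a few lines:

```python
import hashlib
import ssl

def fingerprint(der_bytes):
    """SHA-256 fingerprint of a certificate's DER encoding."""
    return hashlib.sha256(der_bytes).hexdigest()

def live_fingerprint(host, port=443):
    """Fetch a server's current certificate and fingerprint it."""
    pem = ssl.get_server_certificate((host, port))
    return fingerprint(ssl.PEM_cert_to_DER_cert(pem))

def check_pin(host, pinned):
    """True if the live certificate matches the previously pinned one.

    A mismatch is not proof of attack (certificates legitimately
    rotate), but it is exactly the signal Certificate Patrol raises:
    the chain may still validate, yet the certificate is not the one
    you saw before.
    """
    return live_fingerprint(host) == pinned
```

Note that this check sidesteps the trust hierarchy entirely, which is why it catches a "properly" signed rogue certificate that the browser itself would accept.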
Update: Microsoft has issued an emergency patch removing trust from the compromised authority.
A vulnerability in LIFX WiFi enabled light bulbs allowed researchers at Context Information Security to control the lights and access information about the local network setup.
The whole “Internet of Things” trend is introducing all kinds of new vulnerabilities. Because these devices tend to be cheap, don’t feel like tech, and don’t expose much user interface, users are unlikely to secure, patch, or otherwise maintain them.
As these devices proliferate in our networks, we will be introducing ever more largely invisible vulnerabilities, usually without any thought to the consequences.
For years, TrueCrypt has been the gold standard open source whole disk encryption solution. Now there is a disturbing announcement on the TrueCrypt website. Right at the top it says: "WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues."
The rest of the page has been changed to a notice that development on TrueCrypt stopped this May, and directions for migrating from TrueCrypt to BitLocker, the disk encryption tool built into Windows. Of course, this is of little help to anyone using TrueCrypt on Mac or Linux. It is still possible to download TrueCrypt from the site, but the code will no longer create new vaults, and warns users to migrate to a new platform.
There are certainly alternatives, but this is a real shock. On Mac, one could always use the built in FileVault tool. Linux users may have a harder time finding a good replacement.
The big question is: what the heck is actually going on here? This is all far too cryptic, with nowhere near enough actual information to draw intelligent conclusions.
A recent independent audit of TrueCrypt discovered “no evidence of backdoors or otherwise intentionally malicious code in the assessed areas.”
There are a number of theories about what is going on, ranging from the mundane to the paranoid.
- Like Lavabit, they received a National Security Letter requiring compromise of the code. This is their way of resisting without violating the gag order.
- They have been taken over by the government, and they are trying to force everyone to move to a less secure / more compromised solution.
- There really is a gigantic hole in the code. Releasing a fix would tell attackers the exact nature of the vulnerability, which most people would take a very long time to address. Having everyone migrate is the safest solution.
- Some personal conflict within the TrueCrypt developers is leading to a “take my ball and go home” action.
- The developers only cared about protecting Windows users on XP or earlier, which did not have built-in disk encryption. Now that XP support has ended, they don't feel the project is valuable any more. This is suggested by the full wording of the announcement.
- The website or one of the developer’s computers was compromised, and this is a hack / hoax.
The whole thing is really odd, and it is not yet obvious what the best course of action might be.
The safest option appears to be to remove TrueCrypt, and replace it with some other solution, either one that is built in to the OS, or from a third party.
The recent eBay password compromise is just the latest in a string of similar attacks. Each time we hear a call for people to change their passwords. Sometimes the attacked company will require password changes, but more often it is just a suggestion; a suggestion that a majority choose to ignore.
Further exacerbating the problem is the tendency of people to use the same username and password across many different websites. Even if a compromised website does require a password change on that site, it has no way of forcing users to change their passwords on any other sites where the same password was used. This matters because a smart attacker will try any username / password pairs he discovers against a range of interesting websites of value, like banks. Even though the compromise may have been on an unimportant website, it could give access to your most valuable accounts if you re-used the password.
The burden on the user can also be significant. If a password is used on 20 websites, then after a compromise it should be changed on all 20 (ideally to 20 different passwords this time). People who maintain good password discipline only need to change the one password on the single compromised website.
Trying to remember a large number of strong passwords is impossible for most of us. Some common results are that the passwords are too simple, the passwords all follow a simple and predictable pattern, passwords are re-used, or some or all of these at once.
Many companies and standards organizations are working hard to replace the password with a stronger alternative. Apple is using fingerprint scanners in its latest phones, and tools like OAuth keep the actual password (or password hash) off the website entirely. Two factor authentication adds a hardware device to the mix, making compromise of a password less damaging. So far many of these approaches have shown promise, but all have some disadvantages or vulnerabilities, and none appear to be a silver bullet.
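The most widely deployed second factor is the six-digit one-time code from an app or key fob. The underlying standards (HOTP, RFC 4226, and its time-based variant TOTP, RFC 6238) are small enough to sketch in full:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret, step=30, digits=6):
    """RFC 6238 time-based OTP: the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vector: first code for the ASCII key "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

Because the code is derived from a shared secret that never crosses the wire, a stolen password alone is no longer enough to log in; though, as noted above, this too is no silver bullet (phishing pages can relay codes in real time).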
For now, best practice is to use a password vault. I use 1Password, but LastPass, Dashlane, and others are also well regarded. Create unique, long, random passwords for every website (since you no longer need to actually remember any of them). Don't wait. If you are not using one of these tools, get it and start using it now.
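To give a concrete sense of what "long random password" means, here is a sketch of a generator using a cryptographically secure random source; any good password manager does the equivalent for you:

```python
import secrets
import string

def generate_password(length=24):
    """Generate a strong random password using a CSPRNG.

    secrets.choice draws from the OS's cryptographic random source,
    unlike random.choice, which is predictable and unsafe for secrets.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A 24-character password over this roughly 94-symbol alphabet carries well over 150 bits of entropy, far beyond any practical brute-force attack, and since the vault remembers it, its unmemorability costs you nothing.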
Sochi visitors entering hacking 'minefield' by firing up electronics | Security & Privacy - CNET News
UPDATE: According to Errata Security, the NBC story about the hacking in Sochi is total BS. Evidently:
- They were in Moscow, not Sochi.
- The hack came from sites they visited, not from their location.
- They intentionally downloaded malware to their Android phone.
So, as a traveler you are still at risk, and my advice still stands, but evidently the environment is not nearly as hostile as reported.
According to an NBC report, the hacking environment at Sochi is really fierce. After firing up a couple of computers at a cafe, they were both attacked within a minute, and within a day, both had been thoroughly compromised.
While you are vulnerable anywhere you use the Internet, it appears that attackers are out in force looking for unwary tourists enjoying the Olympics.
Make sure you take precautions when you travel, especially to major events like the Sochi Olympics.
- Enable whole disk encryption on your laptop (FileVault for Mac and TrueCrypt for Windows), and always power off your computer when you are done, rather than just putting it to sleep.
- Turn off all running applications before you connect to any network, particularly email. That will minimize the number of connections your computer tries to make as soon as it gets connectivity.
- Enable a VPN like Anonymizer Universal the moment you have Internet connectivity, and use it 100% of the time.
- If you can, use a clean computer with a freshly installed operating system.
- Set up a new Email account which you will only use during the trip. Do not access your real email accounts.
- Any technology you can do without should be left at home.
This article got me thinking: People's ignorance of online privacy puts employers at risk - Network World
There is an interesting paradox for security folks. On the one hand, almost two thirds of people feel that security is a matter of personal responsibility. On the other hand, few are actually doing very much to protect themselves.
In the workplace we see this manifest in the BYOD (bring your own device) trend. Workers want to use their own phones, tablets, and often laptops. Because it is their personal device, they don't think the company has any business telling them how to secure it, or what they can or can't do with it. Yet they want to be able to work with the company's documents and intellectual property, and to access sensitive company networks from that device.
When that trend intersects with the poor real-world security practiced by most people, the security perimeter of businesses becomes both larger and weaker.
Realistically, it is too much to expect that users will be able to fully secure their devices, or that security professionals will be able to do it for them. The productivity impact of locking users out of the devices they use (whether BYOD or company provided) is often too high, especially in the case of technical workers. Spear phishing attacks eventually penetrate a very high fraction of targets, even against very sophisticated users. How then can we expect average, or below average, users to catch them, and catch them all?
Increasing use of sandboxing and virtualization is allowing a change in the security model. Rather than assuming the user will detect attacks, the attack is encapsulated in a very small environment where it can do little or no damage, and from which it is quickly eliminated and prevented from spreading. The trick will be to get people to actually use these tools on their own devices.
OS News has an interesting article: The second operating system hiding in every mobile phone. It discusses the security implications of the fact that all cell phones run two operating systems. One is the OS that you see and interact with: Android, iOS, Windows Phone, BlackBerry, etc. The other is the OS running on the baseband processor. It is responsible for everything to do with the radios in the phone, and is designed to handle all the real-time processing requirements.
The baseband processor OS is generally proprietary, provided by the maker of the baseband chip, and generally not exposed to any scrutiny or review. It also contains a huge amount of historical cruft. For example, it responds to the old Hayes AT command set. That was used with old modems to control dialing, answering the phone, and setting the speed and other parameters required to get the devices to handshake.
It turns out that if you can feed these commands to many baseband processors, you can tell them to automatically and silently answer the phone, allowing an attacker to listen in on you.
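For a flavor of just how ancient this interface is, Hayes commands are plain ASCII strings prefixed with "AT". A hypothetical sketch of the kind of command involved (ATS0=1 is the classic "auto-answer after one ring" setting; real baseband attacks are of course more involved than framing a string):

```python
def at_command(command):
    """Frame a Hayes-style AT command as the bytes a modem expects.

    Commands are plain ASCII, prefixed with "AT" and terminated
    with a carriage return.
    """
    return f"AT{command}\r".encode("ascii")

# S-register 0 holds the number of rings before auto-answer;
# setting it to 1 tells the modem to pick up by itself.
print(at_command("S0=1"))  # -> b'ATS0=1\r'
```

That a 1980s modem control language, with no authentication of any kind, still reaches code running with radio-level privileges on a modern phone is the heart of the problem.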
Unfortunately the security model of these things is ancient and badly broken. Cell towers are assumed to be secure, and any commands from them are trusted and executed. As we saw at Def Con in 2010, it is possible for attackers to spoof those towers.
The baseband processor, and its OS, generally sits at a higher privilege level than the visible OS on the phone. That means the visible OS can't do much to secure the phone against these vulnerabilities.
There is not much you can do about this as an end user, but I thought you should know. :)
Welcome to the 12th episode of The Privacy Blog Podcast brought to you by Anonymizer. In September’s episode, I will talk about a court ruling against Google’s Wi-Fi snooping and the vulnerabilities in the new iPhone 5s fingerprint scanner. Then, I’ll provide some tips for securing the new iPhone/iOS 7 and discuss the results of a recent Pew privacy study.
Hope you enjoy – feel free to add questions and feedback in the comments section.
The Chaos Computer Club (CCC) in Germany recently announced its successful bypassing of the new iPhone 5S fingerprint scanner.
Despite many media claims that the new scanner reads deep layers of the skin and is not vulnerable to simple fingerprint duplication, that is exactly the attack that succeeded.
The CCC used a high resolution photo of a fingerprint on glass to create a latex duplicate, which unlocked the phone. It strikes me as particularly problematic that the glass surface of an iPhone is the perfect place to find really clear fingerprints of the owner.