The FBI's hack of Syed Farook’s iPhone appears to have taken a lot of work. This makes the security implications much less disturbing.
Since the iPhone was introduced, Apple has had the ability to decrypt the contents of iPhones and other iOS devices when asked to do so (with a warrant).
Apple recently announced that with iOS 8 it will no longer be able to do so. Predictably, there has been a roar of outrage from many in law enforcement. [[Insert my usual rant about how recent trends in technology have been massively in favor of law enforcement here]].
This is really about much more than keeping out law enforcement, and I applaud Apple for (finally) taking this step. They have realized what was for Anonymizer a foundational truth. If data is stored and available, it will get out. If Apple has the ability to decrypt phones, then the keys are available within Apple. They could be taken, compromised, compelled, or simply brute forced by opponents unknown. This is why Anonymizer has never kept data on user activity.
Only by ensuring that they cannot do so can Apple provide actual security to its customers against the full range of threats, the least of which is potentially US law enforcement.
In many cases, a false sense of security causes people to put themselves at much greater risk.
The following article describes a “burner” phone service that re-uses its temporary phone numbers. It appears that the number a security researcher received was previously used by a sex worker, whose customers continued to send pictures and messages to the number after it had been re-assigned.
The Internet is on fire with discussions of the recent release of stolen nude photos of over 100 female celebrities. This is a massive invasion of their privacy, and it says something sad about our society that there is an active market for such pictures. While this particular attack was against the famous, most of us have information in the cloud that we would like to stay secret.
While there is not yet a definitive explanation of the breach, the current consensus is that it was probably caused by a vulnerability in Apple’s “Find My iPhone” feature. Apparently the API for this service did not lock accounts after multiple password failures, a standard security practice. This allowed attackers to test an effectively unlimited number of passwords for each of the accounts they wanted to access.
Because most people use relatively weak passwords, this attack is quite effective. Once they gained access to the accounts, they could sync down photos or any other information stored in iCloud.
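The standard defense against this kind of attack is exactly the check that was apparently missing: count consecutive failures per account and refuse further attempts once a threshold is hit. A minimal sketch of such a lockout, in Python (the class name, limits, and account handling are my own illustration, not Apple's actual code):

```python
import time

MAX_FAILURES = 5           # lock the account after this many bad attempts
LOCKOUT_SECONDS = 15 * 60  # how long the account stays locked

class LoginThrottle:
    """Track failed logins per account and lock out brute-force attempts."""

    def __init__(self):
        # account name -> (consecutive failure count, time of last failure)
        self._failures = {}

    def is_locked(self, account, now=None):
        now = time.time() if now is None else now
        count, last = self._failures.get(account, (0, 0.0))
        return count >= MAX_FAILURES and (now - last) < LOCKOUT_SECONDS

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        count, _ = self._failures.get(account, (0, 0.0))
        self._failures[account] = (count + 1, now)

    def record_success(self, account):
        # a successful login clears the failure counter
        self._failures.pop(account, None)
```

With a check like this in front of the API, an attacker gets at most a handful of guesses per lockout window instead of an unlimited stream, which makes dictionary attacks against weak passwords impractical.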
Of course, the first rule of secrecy is: If it does not exist, it can’t be discovered.
If you do want to create something that you would be pained to see released publicly, then make sure you keep close control of it. Store it locally, and encrypted.
Wherever you keep it, make sure it has a strong password. Advice on strong passwords has changed over time because of the increasing speed of computers. It used to be that fancy mnemonics would do the trick, but now the fundamental truth is: if you can remember it, it is too weak.
This is particularly true because you need to use a completely different password for every website. Changing a good password in a simple, predictable way for each website is not enough. It might prevent brute force attacks, but if some other attack exposes one of your passwords, the attacker will easily guess your passwords on all the other websites.
You need to be using a password manager like 1Password (Mac), LastPass, Dashlane, etc. Let the password manager generate your passwords for you. This is what a good password should look like: wL?7mpEyfpqs#kt9ZKVvR
Obviously I am never going to remember that, but I don’t try. I have one good password that I have taken the time to memorize, and it unlocks the password manager which has everything else.
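Under the hood, a password manager generates passwords like the one above by drawing characters uniformly at random from a cryptographically secure source. Here is a sketch of the idea in Python using the standard library's `secrets` module (the length and symbol set are my own choices for illustration):

```python
import secrets
import string

def generate_password(length=21):
    """Generate a random password from letters, digits, and symbols,
    using a cryptographically secure random source (not random.random())."""
    alphabet = string.ascii_letters + string.digits + "!#$%&?@"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # something like wL?7mpEyfpqs#kt9ZKVvR
```

The key point is that every character is independent and unpredictable, so the strength comes from sheer search space rather than from any pattern you could memorize.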
UPDATE: There appears to be some question about whether this vulnerability is actually to blame.
This article describes a clever attack against Secret, the “anonymous” secret sharing app.
Their technique allows the attacker to isolate just a single target, so any posts seen are known to be from them. The company is working on detecting and preventing this attack, but it is a hard problem.
In general, any anonymity system needs to blend the activity of a number of users so that any observed activity could have originated from any of them. For effective anonymity the number needs to be large. Just pulling from the friends in my address book who also use Secret is way too small a group.
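The arithmetic behind this is simple: if any one of k users could plausibly have produced a post, an observer's best guess at the author is right with probability 1/k. A toy illustration:

```python
def identification_probability(anonymity_set_size):
    """Chance an observer correctly guesses the author when any of
    `anonymity_set_size` users could plausibly have posted."""
    return 1.0 / anonymity_set_size

# Blending with 10 address-book friends: a 1-in-10 guess.
print(identification_probability(10))
# Blending with 100,000 users: effectively anonymous.
print(identification_probability(100000))
```

This is why a pool drawn only from my address book is far too small: the attack described above shrinks the set all the way down to one, at which point there is no anonymity at all.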
The Importance of Privacy & The Power of Anonymizers: A Talk With Lance Cottrell From Ntrepid — The Social Network Station. A recent interview I did, talking about data anonymization and mobile device privacy. Lance Cottrell is the Founder and Chief Scientist of Anonymizer.
The latest leaked messages to blow up in someone’s face are some emails from Evan Spiegel, the CEO of Snapchat. These were incredibly sexist emails sent while he was in college at Stanford organizing fraternity parties.
These emails are like racist rants, homophobic tweets, and pictures of your “junk”. They are all trouble waiting to happen, and there is always a risk that they will crop up and bite you when you least expect it. If you have ever shared any potentially damaging messages, documents, photos, or whatever, then you are at risk if anyone in possession of them is angry, bored, or in search of attention.
Even if it only ever lives on your computer, you are vulnerable to hackers breaking in and stealing it, or to someone getting your old poorly erased second hand computer.
This falls into the “if it exists it will leak” rant that I seem to be repeating a lot lately. The first rule of privacy is: think before you write (or talk, or take a picture, or do something stupid). Always assume that anything will leak, will be kept, will be recorded, will be shared. Even when you are “young and stupid,” try to keep a thought for how that thing would be seen in ten years when you are in a very different position. Of course, ideally you are not sexist, racist, homophobic, or stupid in the first place.
We have seen interesting experiments and studies where researchers have looked at what people are willing to pay to protect their privacy.
This then would be the opposite experiment. A company called Datacoup is offering people $8 per month to give them access to all of their social media accounts, and information on their credit and debit card transactions.
You certainly can’t fault them for being covert about their intentions. They are saying very directly what they want and offering a clear quid pro quo.
I don’t think I will be a customer, but it will be very interesting to see if they can find a meaningful number of people willing to make this deal.
Sochi visitors entering hacking 'minefield' by firing up electronics | Security & Privacy - CNET News
UPDATE: According to Errata Security, the NBC story about the hacking in Sochi is total BS. Evidently:
- They were in Moscow, not Sochi.
- The hack came from sites they visited, not from their location.
- They intentionally downloaded malware to their Android phone.
So, as a traveler you are still at risk, and my advice still stands, but evidently the environment is not nearly as hostile as reported.
According to an NBC report, the hacking environment at Sochi is really fierce. After the reporters fired up a couple of computers at a cafe, both were attacked within a minute, and within a day both had been thoroughly compromised.
While you are vulnerable anywhere you use the Internet, it appears that attackers are out in force looking for unwary tourists enjoying the Olympics.
Make sure you take precautions when you travel, especially to major events like the Sochi Olympics.
- Enable whole disk encryption on your laptop (FileVault for Mac and TrueCrypt for Windows), and always power off your computer when you are done, rather than just putting it to sleep.
- Turn off all running applications before you connect to any network, particularly email. That will minimize the number of connections your computer tries to make as soon as it gets connectivity.
- Enable a VPN like Anonymizer Universal the moment you have Internet connectivity, and use it 100% of the time.
- If you can, use a clean computer with a freshly installed operating system.
- Set up a new Email account which you will only use during the trip. Do not access your real email accounts.
- Any technology you can leave behind should be left back at home.
This is refreshing. Some evidence that most people ARE actually willing to pay for privacy. If the market shows that this is a winner, we might start to see more privacy protecting applications and services.
The real question is whether invading our privacy generates more revenue than we are willing to pay to be protected.
Welcome to the 12th episode of The Privacy Blog Podcast brought to you by Anonymizer. In September’s episode, I will talk about a court ruling against Google’s Wi-Fi snooping and the vulnerabilities in the new iPhone 5s fingerprint scanner. Then, I’ll provide some tips for securing the new iPhone/iOS 7 and discuss the results of a recent Pew privacy study.
Hope you enjoy – feel free to add questions and feedback in the comments section.
I keep hearing people say that young people today don't care about privacy, and that we are living in a post privacy world. This is clearly not the case.
Teens share a lot, maybe much more than I would be comfortable with, but that does not mean that they share everything, or don't care about where that information goes.
A new report from the Pew Research Center says that over half of teens have avoided or uninstalled a mobile app because of privacy concerns. This is a sign that they are privacy aware and willing to do something about it.
Teens almost always have something that they want to hide, if only from their parents.
Welcome to the June edition of the Privacy Blog Podcast, brought to you by Anonymizer. In June’s episode, I’ll discuss the true nature of the recently leaked surveillance programs that have dominated the news this month. We’ll go through a quick tutorial about decoding government “speak” regarding these programs and how you can protect yourself online.
Later in the episode, I’ll talk about Facebook’s accidental creation and compromise of shadow profiles along with Apple’s terrible personal hotspot security and what you can do to improve it.
Thanks for listening!
Welcome to episode 7 of The Privacy Blog Podcast. In April’s episode, we’ll be looking at the blacklisting of SSL certificate authorities by Mozilla Firefox - Specifically, what this complex issue means and why Mozilla chose to start doing this.
In more breaking online privacy news, I will be discussing the security implications of relying on social media following the hacking of the Associated Press Twitter account earlier this week.
Next, I’ll chat about the “right to be forgotten” on the Internet, which hinges on the struggle between online privacy and free speech rights. In a closely related topic and following Google’s release of the new “Inactive Account Manager,” I will discuss what happens to our social media presence and cloud data when we die. It’s a topic none of us likes to dwell on, but it’s worth taking the time to think about our digital afterlife.
Adam Rifkin on TechCrunch has an interesting article about Tumblr and how it is actually used. The thesis of the article is that Tumblr is used more openly and for more sensitive things than Facebook because the privacy model is so much easier to understand and implement. If you have five interests and corresponding social circles, just set up five pseudonymous Tumblrs. Each then becomes its own independent social space with minimal risk of cross contamination. While all of those Tumblrs are public and discoverable, in practice they are not easy to find and unlikely to be stumbled upon by undesired individuals. This is classic security by obscurity.
By contrast, Facebook wants you to put everything in one place, then use various settings to try to ensure that only the desired subset of friends, friends of friends, or the general public has access to it.
This ties to the case I have been making for a while that people want to be able to separate their various personality shards among their various social circles. Even with access controls, using the same account for all of them may create too much connection, and the odds of accidentally releasing information to the wrong people are too high. I would like to see something like Tumblr provide stronger abilities to restrict discoverability, but it represents an interesting and growing alternative model to Facebook.
A Guest Post by Robin Wilton of the Internet Society
We are the raw material of the new economy. Data about all of us is being prospected for, mined, refined, and traded...
. . . and most of us don’t even know about it.
Every time we go online, we add to a personal digital footprint that’s interconnected across multiple service providers, and enrich massive caches of personal data that identify us, whether we have explicitly authenticated or not.
That may make you feel somewhat uneasy. It's pretty hard to manage your digital footprint if you can't even see it.
Although none of us can control everything that’s known about us online, there are steps we can take to understand and regain some level of control over our online identities, and the Internet Society has developed three interactive tutorials to help educate and inform users who would like to find out more.
We set out to answer some basic questions about personal data and privacy:
- Who’s interested in our online identity? From advertisers to corporations, our online footprint is what many sales-driven companies say helps them make more informed decisions, not only about the products and services they provide, but also about whom to target, when, and why.
- What's the real bargain we enter into when we sign up? The websites we visit may seem free - but there are always costs. More often than not, we pay by giving up information about ourselves – information that we have been encouraged to think has no value.
- What risk does this bargain involve? Often, the information in our digital footprint directly changes our online experience. This can range from the advertising we see right down to paying higher prices or being denied services altogether based on some piece of data about us that we may never even have seen. We need to improve our awareness of the risks associated with our digital footprint.
- The best thing we can do to protect our identity online is to learn more about it.
The aim of the three tutorials is to help everyone learn more about how data about us is collected and used. They also suggest things you need to look out for in order to make informed choices about what you share and when.
Each lasts about 5 minutes and will help empower all of us to make informed decisions, not only about what we want to keep private, but also about what we want to share.
After all, if we are the raw material others are mining to make money in the information economy, don't we deserve a say in how it happens?
Find out more about the Internet Society’s work on Privacy and Identity by visiting its website.
* Robin Wilton oversees technical outreach for Identity and Privacy at the Internet Society.
Nokia's Asha and Lumia phones come with something they call the "Xpress Browser". To improve the browsing experience, web traffic is proxied and cached. That is a fairly common and accepted practice.
Where Nokia has stepped into questionable territory is in doing this for secure web traffic (URLs starting with HTTPS://). Ordinarily it is impossible to cache secure web pages because the encryption key is unique, used only for a single session, and negotiated directly between the browser and the target website. Even if the traffic were cached, no one would be able to read it.
Nokia is performing a "man-in-the-middle attack" on the user's secure browser traffic. It does this by having all web traffic sent to its proxy servers. The proxy then impersonates the intended website to the phone, and sets up a new secure connection between the proxy and the real website.
Ordinarily this would generate security alerts, because the proxy does not have the real website's cryptographic certificate. Nokia gets around this by creating new certificates signed by a certificate authority it controls, which is pre-installed and automatically trusted by the phone.
So, you try to go to Gmail. The proxy intercepts that connection, and gives you a fake Gmail certificate signed by the Nokia certificate authority. Your phone trusts that so everything goes smoothly. The proxy then securely connects to Gmail using the real certificate. Nokia can cache the data, and the user gets a faster experience.
All good right?
The fly in the ointment is that Nokia now has access to all of your secure browser traffic in the clear, including email, banking, etc.
They claim that they don't look at this information, and I think that is probably true. The problem is that you can't really rely on that. What if Nokia gets a subpoena? What about hackers? What about accidental storage or logging?
This is a significant breaking of the HTTPS security model without any warning to end users.
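One defense against this kind of interception is certificate pinning: instead of trusting whatever certificate chains back to a pre-installed authority, the client compares the certificate the server presents against a known-good fingerprint recorded ahead of time. A proxy like Nokia's would present its own substitute certificate, so the fingerprints would not match even though the chain "validates". A minimal sketch in Python (the pinning logic and function names are my own illustration; the pinned value would be recorded from a trusted connection, not the placeholder shown here):

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_cert_bytes):
    # SHA-256 fingerprint of a certificate in DER (binary) form
    return hashlib.sha256(der_cert_bytes).hexdigest()

def connection_matches_pin(hostname, pinned_fingerprint, port=443):
    """Connect over TLS, grab the server's certificate, and compare its
    fingerprint to the pinned value. A MITM proxy's substitute certificate
    would produce a different fingerprint and fail this check."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            der = tls.getpeercert(binary_form=True)
    return cert_fingerprint(der) == pinned_fingerprint
```

Pinning has its own operational costs (the pin must be updated whenever the site rotates certificates), which is part of why browsers and phones mostly still rely on the certificate authority model that Nokia is exploiting here.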
Welcome to Anonymizer’s inaugural episode of The Privacy Podcast. Each month, we'll be posting a new episode focusing on security, privacy, and tips to protect you online. Today, I talk about non-technical ways your online accounts can be compromised, focusing on email address and password reuse, security questions, and using credit card numbers as security tokens. In part two, I give power user tips for getting the most out of your Anonymizer Nyms account.
Hope you enjoy the first episode in our monthly series of podcasts. Please leave feedback and questions in the comments section of this post.
Download the transcript here
NBC News is reporting that the iOS UDIDs leaked last week were actually stolen from the Blue Toad publishing company. Comparing the leaked data with Blue Toad's data showed a 98% correlation, which makes them almost certainly the source.
They checked the leaked data against their own after receiving a tip from an outside researcher who had analyzed the leaked data.
It is certainly possible that this data had been stolen earlier and that, in tracking that crime, the FBI had obtained the stolen information. This strongly suggests that this is not a case of the FBI conducting some kind of massive surveillance activity.
The other possibility is that Anonymous and Antisec are simply lying about the origin of the information as part of an anti-government propaganda campaign.
Either way, it is a big knock on their credibility, unless you think this whole thing is just a conspiracy to protect the FBI.