This article describes a clever attack against Secret, the “anonymous” secret sharing app.
The technique allows an attacker to isolate a single target, so that any posts the attacker sees are known to come from that person. The company is working on detecting and preventing this attack, but it is a hard problem.
In general, any anonymity system needs to blend the activity of a number of users so that any observed activity could have originated from any of them. For effective anonymity that number needs to be large. Just pulling from the friends in my address book who also use Secret is far too small a group.
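As a toy illustration (this is not Secret's actual design or API, just a hypothetical "show posts from people in your contacts" model), consider how an attacker-controlled address book containing only the target destroys the anonymity set:

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str   # hidden from the viewer in the real app
        text: str

    def visible_posts(my_contacts, all_posts):
        """Posts a user sees: anything written by someone in their contact list."""
        return [p.text for p in all_posts if p.author in my_contacts]

    posts = [Post("alice@example.com", "I'm quitting my job"),
             Post("bob@example.com", "I love pineapple pizza")]

    # A normal user with many contacts cannot tell which friend wrote which post.
    print(visible_posts({"alice@example.com", "bob@example.com", "carol@example.com"}, posts))

    # The attack: load an address book containing ONLY the target.
    # Any post that shows up must have been written by the target.
    print(visible_posts({"alice@example.com"}, posts))   # -> ["I'm quitting my job"]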
Russia seems to have a conflicted relationship with Twitter and Internet censorship in general.
While trying to portray themselves as open and democratic, they clearly have a real problem with the radical openness of social media like Twitter.
Maxim Ksenzov, deputy head of Roskomnadzor (Russia’s censorship agency), said Twitter is a “global instrument for promoting political information” and that Russia could block Twitter or Facebook in minutes.
Prime Minister Dmitry Medvedev responded on his Facebook account, saying that state officials “sometimes need to turn on their brains” rather than “announcing in interviews the shutdown of social networks.” Which is not quite the same as saying that they would not do so.
The primary desire in Russia is for Twitter and all other social networks to open offices in Russia. That would smooth communications, but also provide leverage to push for censorship or access to data as needed.
For millennia people have asked the question “what happens to us when we die?”
While the larger spiritual question will continue to be debated, the question about what happens to our on-line data and presence is more recent, and also more tractable.
Until very recently, little thought had been given to this issue. Accounts would continue until subscriptions lapsed, the website shut down, or the account was closed for inactivity.
This has led to some rather creepy results. I have lost some friends over the last few years, but I continue to be haunted by their unquiet spirits, which remind me of their birthdays, ask me to suggest other friends for them, and generally keep bobbing in my virtual peripheral vision.
Many social media sites do have a process for dealing with accounts after the death of their owners, but these processes are cumbersome and I have never actually seen them used. Generally, they are only invoked postmortem, by the family of the deceased. Assuming the family does not have the passwords to the account, they need to contact the provider in writing and provide proof both that they are relatives and that the account’s owner has died.
Google has an interesting idea that I would like to see other sites adopt. They have set up the “Google Inactive Account Manager,” which allows users to specify in advance what should happen. The user specifies what length of inactivity should be taken as a sign of death. Once that is triggered, Google contacts the user via secondary email accounts and phone numbers, if available, to make sure this was not just a long vacation or a loss of interest. If there is no response, the Inactive Account Manager kicks in.
It notifies a list of people that you specify that this has happened. You have the option of having your data packaged up and sent to some or all of those people. Finally, you may have it delete your account, or leave it available but closed as a memorial.
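The workflow amounts to a dead-man’s switch. Here is a rough sketch of that logic; the thresholds and function names are my own assumptions, not Google’s implementation:

    from datetime import datetime, timedelta

    INACTIVITY_THRESHOLD = timedelta(days=270)   # user-chosen trigger period
    CONFIRMATION_GRACE = timedelta(days=30)      # time allowed to answer the warnings

    def next_action(last_activity, warned_at, owner_responded, now=None):
        """Decide what an inactive-account manager should do next."""
        now = now or datetime.utcnow()
        if now - last_activity < INACTIVITY_THRESHOLD:
            return "nothing"   # account is still active
        if warned_at is None:
            return "warn"      # contact backup email/phone to rule out a long vacation
        if owner_responded:
            return "nothing"   # false alarm, the owner checked in
        if now - warned_at < CONFIRMATION_GRACE:
            return "wait"      # warning sent, still waiting for a reply
        return "trigger"       # notify trusted contacts, share data, delete or memorialize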
This may not be the perfect implementation of this concept, but it is an important step.
So please, set up your digital will, and let’s put a stop to the digital zombie apocalypse.
Welcome to episode 7 of The Privacy Blog Podcast.
In April’s episode, we’ll be looking at the blacklisting of SSL certificate authorities by Mozilla Firefox: specifically, what this complex issue means and why Mozilla chose to start doing this.
In more breaking online privacy news, I will be discussing the security implications of relying on social media following the hacking of the Associated Press Twitter account earlier this week.
Next, I’ll chat about the “right to be forgotten” on the Internet, which hinges on the struggle between online privacy and free speech rights. In a closely related topic and following Google’s release of the new “Inactive Account Manager,” I will discuss what happens to our social media presence and cloud data when we die. It’s a topic none of us likes to dwell on, but it’s worth taking the time to think about our digital afterlife.
Last week the Twitter account of the Associated Press was hacked, and a message was posted saying that bombs had gone off in the White House and that the President had been injured.
Obviously this was false. The Syrian Electronic Army, a pro-regime hacker group, has claimed responsibility, though that does not prove they actually did it.
There is talk of Twitter moving to two-factor authentication to reduce similar hacking in the future. While this is all well and good, it will not eliminate the problem.
The bigger issue is that these poorly secured social media sites are used by people around the world as reliable sources of news.
The fake tweet triggered a brief stock market crash: the Dow dropped 140 points in five minutes. Apparently much of that drop came from automated trading systems parsing the tweet and generating immediate trades without any human intervention at all.
The creators of these trading algorithms apparently feel that news from Twitter is reliable enough to be the basis of equity trades without any confirmation or time for reflection.
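To make the failure mode concrete, here is a deliberately simplified sketch of keyword-triggered trading; the broker object and its method are hypothetical stand-ins, not any real trading or Twitter API:

    PANIC_WORDS = {"explosion", "explosions", "bomb", "white house"}

    def react_to_tweet(tweet_text, broker):
        """Sell the moment a trusted account tweets scary words -- no human in the loop."""
        text = tweet_text.lower()
        if any(word in text for word in PANIC_WORDS):
            broker.sell_index_futures(quantity=1000)   # hypothetical broker call

    # A single forged tweet from a trusted account can fire rules like this
    # across many firms at once, which is roughly what happened here.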
Certainly very large amounts of money were made and lost in that short period.
Why make the effort to hack into what we hope is a well-defended nuclear power plant or other critical infrastructure, when you can do similar amounts of financial damage by subverting a nearly undefended Twitter account?
Because individual Twitter accounts are not considered critical infrastructure, they are hardly protected at all, and they are not designed to be easy to protect.
Nevertheless, we give Twitter and other social media substantial power to influence us and our decisions, financial and otherwise.
Take, for example, the crowd-sourced search for the Boston bombers on Reddit. Despite the best of intentions, many false accusations were made that had a major impact on the accused, and one can imagine scenarios that could have turned out much worse. What if the accused had committed suicide, been injured in a confrontation with authorities, or been the victim of vigilante action? Now imagine malicious players in that crowd intentionally subverting the process: planting false information, introducing chaos, and causing even more damage.
This is an interesting problem. There are no technical or legislative solutions. It is a social problem with only social solutions. Those are often the hardest to address.
Adam Rifkin on TechCrunch has an interesting article about Tumblr and how it is actually used.
The thesis of the article is that Tumblr is used more openly and for more sensitive things than Facebook because the privacy model is so much easier to understand and implement.
If you have five interests and corresponding social circles, just set up five pseudonymous Tumblrs. Each then becomes its own independent social space with minimal risk of cross contamination.
While all of those Tumblrs are public and discoverable, in practice they are not easy to find and unlikely to be stumbled upon by undesired individuals. This is classic security by obscurity.
By contrast, Facebook wants you to put everything in one place, then use various settings to try to ensure that only the desired subset of friends, friends of friends, or the general public have access to it.
This ties into the case I have been making for a while: people want to be able to separate their various personality shards among their various social circles. Even with access controls, using the same account for all of them creates too much connection, and the odds of accidentally releasing information to the wrong people are too high.
I would like to see something like Tumblr provide stronger abilities to restrict discoverability, but it represents an interesting and growing alternative model to Facebook.
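One way to see why the Tumblr model is easier to get right: with separate pseudonymous blogs, a mistake stays within one audience, while with a single account and per-post audience settings, one wrong setting leaks a post across circles. A schematic sketch (my own simplification, not either site’s actual data model):

    # Model A: one pseudonymous blog per interest. Everything posted to a blog
    # is seen only by that blog's audience, so a slip-up stays inside one circle.
    separate_blogs = {
        "knitting-blog": ["pattern photos"],
        "politics-blog": ["rant about the election"],
    }

    # Model B: one account with per-post audience settings. Every post carries its
    # own access-control choice, and a single wrong choice exposes it to the wrong circle.
    single_account_posts = [
        {"text": "pattern photos", "audience": "knitting friends"},
        {"text": "rant about the election", "audience": "public"},  # oops: meant "close friends"
    ]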
Courthouse News Service reports that a Virginia judge has ruled that Facebook “Likes” are not protected speech.
The case involved employees of the Hampton, VA, sheriff’s office who “Liked” the current sheriff’s opponent in the last election. After he was re-elected, the sheriff fired many of the people who had supported his opponent.
The judge ruled that posts on Facebook would have been protected, but not simple Likes.
This article from Threatpost discusses a CMU study of Chinese censorship of the country’s home-grown social networking websites.
Now that China is blocking most Western social media sites entirely, the focus of censorship is internal. Obviously blocking the internal sites as well would defeat the purpose, so the censors selectively delete posts instead. The study looks at the rate at which posts containing sensitive keywords are removed from the services.
It clearly shows how censorship can be taken to the next level when the censor controls the websites as well as the network.
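For context, the basic measurement approach in studies like this is to re-fetch posts over time and record when they vanish. A rough sketch (the fetch_post helper is hypothetical; this is not the study’s actual code or methodology):

    import time

    def measure_removal(post_ids, fetch_post, interval_s=3600, rounds=24):
        """Re-check a set of posts hourly for a day and record when each disappears."""
        removed_at = {}
        for r in range(rounds):
            for pid in post_ids:
                if pid not in removed_at and fetch_post(pid) is None:
                    removed_at[pid] = r * interval_s        # seconds until deletion observed
            time.sleep(interval_s)
        deletion_rate = len(removed_at) / len(post_ids)     # fraction of tracked posts censored
        return removed_at, deletion_rate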
NYTimes.com reports that Kapil Sibal, India’s acting telecommunications minister, is pushing Google, Microsoft, Yahoo, and Facebook to more actively and effectively screen their content for disparaging, inflammatory, and defamatory material.
Specifically, Mr. Sibal is telling these companies that automated screening is insufficient and that they should have humans read and approve all messages before they are posted.
This demand is both absurd and offensive.
- It is obviously impossible for these companies to have humans review the volume of messages they receive; the numbers are staggering (see the rough estimate after this list).
- The demand for human review is either evidence that Mr. Sibal is completely ignorant of the technical realities involved, or an attempt to kill social media and their associated freewheeling exchanges of information and opinion.
- There is no clear objective standard for “disparaging, inflammatory, and defamatory” content, so the companies are assured of getting it wrong in many cases, putting them at risk.
- The example of unacceptable content cited by Mr. Sibal is a Facebook page that maligned Congress Party president Sonia Gandhi, suggesting that this is more about preventing criticism than about protecting maligned citizens.
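As a back-of-envelope check on the first point: the volumes below are my own rough assumptions rather than figures from the article, but even conservative numbers make the staffing requirement absurd.

    posts_per_day = 500_000_000        # assumed combined volume across these services
    seconds_per_review = 10            # assumed time for a human to read and judge one post
    work_seconds_per_day = 8 * 3600    # one reviewer's full working day

    reviewers_needed = posts_per_day * seconds_per_review / work_seconds_per_day
    print(f"{reviewers_needed:,.0f} full-time reviewers needed every single day")
    # -> roughly 174,000 reviewers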
Thanks to a PrivacyBlog reader for pointing me to this article: Blackhat SEO – Esrun » Youtube privacy failure
It looks like it is easy to find thumbnail images from YouTube videos that have been marked private.
If you have any such videos, go back and check that you are comfortable with the information in the thumbnails being public, or delete the video completely.
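If you want to check a specific video, a quick test is to try fetching its thumbnail without being logged in. The URL pattern below is YouTube’s widely known public thumbnail path; whether it still exposes thumbnails for private videos may well have changed since this was written.

    import urllib.error
    import urllib.request

    def thumbnail_is_public(video_id):
        """Try to fetch a video's thumbnail anonymously (no cookies, no login)."""
        url = f"https://img.youtube.com/vi/{video_id}/hqdefault.jpg"
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            return False

    print(thumbnail_is_public("dQw4w9WgXcQ"))   # a well-known public video -> True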