Facebook, the Government and revenge porn

Facebook and the Australian government are piloting a scheme to tackle revenge porn, but does it ask too much of potential victims by requiring them to give up their privacy in order to gain privacy?

Last week, Facebook announced it is partnering with the government to pilot a program in Australia aimed at tackling the problem of unauthorised sharing of intimate or sexual pictures, with plans to launch in the US and UK in the future.

While the intent to prevent ‘revenge porn’ should be applauded, Facebook’s approach raises a number of concerns.

The pilot requires a concerned person to share a copy of the picture with Facebook. Picture: Getty Images

What is the Facebook proposal?

An individual who is concerned that someone with intimate pictures of them might post those photos online is asked to share copies of those pictures in a private chat session with Facebook.

A Facebook employee then reviews the submitted images, before creating what’s called an ‘image hash’ of them. Hashes are like digital fingerprints that allow for a fast search. By adding these hashes to Facebook’s image matching database, images posted in future on Facebook-owned properties, including Instagram and Messenger, can be compared against the flagged images, detected as inappropriate, and prevented from being shared.

This digital fingerprint allows the social network to track the media using techniques similar to the artificial intelligence-based photo and face matching algorithms it already employs, and then prevent it from being uploaded and shared in the future.
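Facebook has not said which hashing algorithm it uses, but a simple ‘average hash’, sketched below in Python, illustrates the general idea: a compact perceptual fingerprint that changes little when an image is lightly edited.

```python
# A minimal 'average hash' sketch - illustrative only; Facebook's
# actual algorithm has not been made public.
from PIL import Image

def average_hash(path, hash_size=8):
    """Return a 64-bit perceptual fingerprint of the image at `path`."""
    # Shrink to 8x8 greyscale so small edits barely change the result.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)

def hamming_distance(h1, h2):
    """Count differing bits; near-duplicate images give small distances."""
    return bin(h1 ^ h2).count("1")
```

A newly uploaded picture whose hash sits within a few bits of an entry in the database can then be blocked without the original image ever being consulted.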

But exactly how the system works is still an industry secret.

Facebook’s claim is that by storing only the image hashes, and deleting the original images, users risk only an insignificant loss of privacy by participating in a program that could protect them from sexual harassment and material harm.

As part of the pilot, Facebook is partnering with the Australian Government’s Office of the eSafety Commissioner which says one in five Australians has faced image-based abuse, where an intimate photo has been posted to social media without their consent.

What are the concerns?

A number of obvious concerns have been raised. One of the most important questions is: why is the image hashing being performed on a Facebook server and not directly on the client device, like the user’s smartphone?

That specific concern was addressed by Facebook’s Chief Security Officer, Alex Stamos, who tweeted:

Facebook’s Chief Security Officer, Alex Stamos, tweeted a reply about ‘local hashing’. Picture: Twitter

Mr Stamos’s tweet raises more questions than it answers.

Firstly, it is concerning that the Chief Security Officer of Facebook is ignoring a mainstay of cryptography research, Kerckhoffs’s principle, which states that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge. This means that you should not require the algorithm itself to be secret.

There is good reason for this: information about the algorithm is often revealed simply by its use and output, a very realistic possibility in this situation. If someone wished to develop circumvention techniques, they could do so by submitting a photo to the safety scheme before creating multiple fake Facebook accounts and attempting to share modifications of that photo.

The accounts that do not get flagged will reveal the techniques that circumvent the detection. Repeating this attack with minor changes to the image until one goes through is an instance of adversarial machine learning, an area of active research in security and artificial intelligence.
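In code, that probing loop might look something like the sketch below. Both helper functions are invented placeholders, not real APIs: ‘variants’ stands in for image edits such as cropping or recompression, and ‘upload_from_fake_account’ stands in for an upload attempt from a throwaway account.

```python
# Hypothetical sketch of the probing attack described above.
# Both helpers are invented placeholders, not real Facebook APIs.

def variants(image_bytes):
    """Yield small modifications of the image: recompression, cropping,
    brightness shifts, added noise. (Stubbed for illustration.)"""
    yield image_bytes  # a real attacker would apply actual edits here

def upload_from_fake_account(image_bytes):
    """Attempt to post the image from a throwaway account; return True
    if the upload was NOT blocked. (Stubbed for illustration.)"""
    return True

def find_evading_variant(image_bytes):
    """Probe the detector until some modification slips through; the
    surviving variant reveals a working circumvention technique."""
    for candidate in variants(image_bytes):
        if upload_from_fake_account(candidate):
            return candidate
    return None
```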

While there is insufficient security justification for not performing the image hashing on the client, there is a clear commercial motivation for not revealing the algorithm. High-quality image hashing is a valuable asset for Facebook and its commercial partners, potentially allowing it to recognise and classify similar images.

The second part of Mr Stamos’s response is simply incorrect.

Facebook will have access to any image that is subsequently flagged by the automated system, and as such, can review any adversarial reporting, either through appeal or direct review.

The important distinction is that, under a client-side scheme, an individual concerned that intimate pictures of them may be shared would not have to hand those very images to Facebook. Under the current scheme, they gain privacy by paying with privacy.

Who has access to the shared images?

Facebook has stated that only specially-trained staff will have access to submitted images. But can Facebook really guarantee that not a single one of its system administrators will access those photos?

There is often a big difference between those who are authorised to access content and those who have the ability to access it, and there are historical examples of such access being abused – including a well-publicised case at Google.

Although Facebook has promised it’s not storing the images, that’s a difficult promise to keep.

The system will only be able to detect and block images shared via posts, not via encrypted messaging channels. Picture: Shutterstock

Of equal concern is whether the images could be sent to US data centres, and US-based employees, for processing. Internet traffic originating from outside the US is routinely intercepted by America’s intelligence agency, the National Security Agency.

It is doubtful that Facebook could guarantee that messages containing the submitted photos will not be subject to such collection.

How effective will it be?

The effectiveness of the approach is also doubtful.

It will only be able to detect and block images shared via posts, not via encrypted messaging channels. Messages on Facebook’s WhatsApp platform, including images, are end-to-end encrypted, meaning not even Facebook has access to them. As such, it will have no way of detecting the sharing of any such images on WhatsApp. It is also worth noting that both Facebook and Instagram prohibit the posting of intimate pictures on their platforms already, even when consent has been given.

The practicality of the approach should also be examined more closely.

In cases where a photo was not taken with the victim’s device, they may not be in possession of a copy of the photo to submit to Facebook.

In situations where a photo has been shared by a user, at what point are they expected to submit it to Facebook? At the time it was taken or when they fear it might be shared further without consent? Is Facebook suggesting users keep an archive of potentially compromising pictures in case they have to upload one of them to Facebook in the future?

That would seem to create far greater risks should someone steal or gain access to the user’s device or account.

What can be done?

The approach from Facebook creates huge privacy concerns without fully addressing the fundamental problem on Facebook’s own services, or providing any ability to detect sharing on other major distribution channels.

More concerning, it places the burden on the potential victim to suffer an invasion of privacy, even if it is limited to a few Facebook employees, in order to avoid suffering a potentially greater privacy invasion later. That cost is being borne by the user, seemingly to protect the secrecy of Facebook’s image hashing algorithm.

The pilot places the burden on the potential victim to suffer an invasion of privacy. Picture: Shutterstock

Calculating the hash on the client device would at least avoid the user having to share intimate pictures with Facebook, although evidence would need to be provided to the public that the hashing algorithm cannot be reversed.
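What ‘local hashing’ could look like is sketched below. It assumes the average_hash function from the earlier sketch and a hypothetical submission endpoint, since Facebook has published neither.

```python
# Sketch of client-side ('local') hashing: only the fingerprint leaves
# the device. The endpoint URL is hypothetical.
import json
import urllib.request

def submit_hash_only(image_path, endpoint="https://example.com/report-hash"):
    """Compute the hash on the user's own device and upload just the
    hash - the intimate picture itself is never transmitted."""
    fingerprint = average_hash(image_path)  # from the earlier sketch
    payload = json.dumps({"hash": format(fingerprint, "016x")}).encode()
    request = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)  # server stores only the fingerprint
```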

Technical solutions can play a part in trying to address this problem, but they should not come at the cost of privacy for the user. Fundamentally, we need tougher action to be taken against perpetrators.

There is plenty of evidence that this is a growing problem, but convictions remain vanishingly rare.

Banner image: Shutterstock