An investigation by a British newspaper into child sexual abuse content and terrorist propaganda being shared on Facebook has once again drawn critical attention to how the company handles complaints about offensive and extremist content on its platform.
And, indeed, to how Facebook's algorithmically driven, user-generated content-sharing platform apparently encourages the spread of material that can also be illegal.
In a report published today, The Times newspaper accuses Facebook of publishing child pornography after one of its reporters created a fake profile and was quickly able to find offensive and potentially illegal content on the site, including pedophilic cartoons; a video that apparently shows a child being violently abused; and various types of terrorist propaganda, including a beheading video made by an ISIS supporter, and comments celebrating a recent attack against Christians in Egypt.
The Times says it reported the content to Facebook but in most instances was apparently told the imagery and videos did not violate the site's community standards. (Although, when it subsequently contacted the platform identifying itself as The Times newspaper, it says some of the pedophilic cartoons that had been kept up by moderators were then removed.)
Facebook says it has since removed all the content reported by the newspaper.
A draft law in Germany proposes to tackle exactly this issue, using the threat of large fines for social media platforms that fail to quickly take down illegal content after a complaint. Ministers in the German cabinet backed the proposed law earlier this month, and it could be adopted in the current legislative period.
And where one European government is heading, others in the region might well be moved to follow. The UK government, for example, has once again been talking tougher on social platforms and terrorism following a terror attack in London last month, with the Home Secretary putting pressure on companies including Facebook to build tools to automate the flagging and removal of terrorist propaganda.
The Times says its reporter created a Facebook profile posing as an IT professional in his thirties, befriending more than 100 supporters of ISIS and joining groups promoting lewd or pornographic images of children. "It did not take long to come across dozens of objectionable images posted by a mix of jihadists and those with a sexual interest in children," it writes.
The Times showed the material it found to a UK QC, Julian Knowles, who told it that in his view many of the images and videos are likely to be illegal, potentially breaching UK indecency laws and the Terrorism Act 2006, which outlaws speech and publications that directly or indirectly encourage terrorism.
"If someone reports an illegal image to Facebook and a senior moderator signs off on keeping it up, Facebook is at risk of committing a criminal offense because the company might be regarded as assisting or encouraging its publication and distribution," Knowles told the newspaper.
Last month Facebook faced similar accusations over its content moderation system, after a BBC investigation looked at how the site responded to reports of child exploitation imagery, and also found the site failed to remove the vast majority of reported imagery. Last year the news organization also found that closed Facebook groups were being used by pedophiles to share images of child exploitation.
Facebook declined to provide a spokesperson to be interviewed about The Times report, but in an emailed statement Justin Osofsky, VP of global operations, told us: "We are grateful to The Times for bringing this content to our attention. We have removed all of these images, which violate our policies and have no place on Facebook. We are sorry that this occurred. It is clear that we can do better, and we'll continue to work hard to live up to the high standards people rightly expect of Facebook."
Facebook says it employs thousands of human moderators, distributed in offices around the world (such as Dublin for European content) to ensure 24/7 availability. However, given the platform has close to 2 billion monthly active users (1.86 billion MAUs at the end of 2016, to be exact), this workforce is very obviously just the tiniest drop in the ocean relative to the content being uploaded to the site every second of every day.
Human moderation clearly cannot scale to review so much content without Facebook employing far more human moderators, a move it clearly wants to resist given the costs involved (Facebook's entire company headcount totals just over 17,000 staff).
Facebook has implemented Microsoft's PhotoDNA technology, which scans all uploads for known images of child abuse. However, tackling every type of potentially problematic content is a very hard problem to fix with engineering; one that is not easily automated, given it requires individual judgement calls based on context as well as the specific content, while also potentially factoring in differences between legal regimes in different regions, and differing cultural attitudes.
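To give a rough sense of how this kind of hash-list screening works (PhotoDNA's actual perceptual hashing algorithm is proprietary; the sketch below is hypothetical and uses a cryptographic hash purely to stay self-contained):

```python
import hashlib

# Hypothetical blocklist of fingerprints of known abuse imagery; real
# lists are maintained by organizations such as NCMEC. The value here
# is a placeholder only.
KNOWN_BAD_FINGERPRINTS = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def image_fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash. PhotoDNA's hash is robust to
    resizing and re-encoding; SHA-256 is used here only to keep the
    example runnable without external dependencies."""
    return hashlib.sha256(image_bytes).hexdigest()

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known-bad fingerprint,
    meaning it should be blocked and escalated."""
    return image_fingerprint(image_bytes) in KNOWN_BAD_FINGERPRINTS
```

The limitation The Times' findings underline is that matching of this kind can only catch imagery that has already been identified and catalogued; novel content still falls to human reviewers.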
CEO Mark Zuckerberg recently discussed the issue publicly, writing that "one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community."
But he also conceded that Facebook needs to do more, and cautioned that an AI fix for content moderation is years out.
"Right now, we're starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization. This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide," he wrote in February, before going on to emphasize that protecting individual security and liberty is also a core plank of Facebook's community philosophy, which underscores the tricky free speech vs. offensive speech balancing act the social media giant continues to try to pull off.
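Zuckerberg gives no technical detail, but the task he describes is essentially text classification. As a purely illustrative sketch (not Facebook's system; the training examples are neutral placeholders), a minimal bag-of-words approach might look like this:

```python
from collections import Counter
import math

# Purely hypothetical training data; a real system would train on large
# labeled corpora and far richer features than raw word counts.
TRAIN = [
    ("reporting on yesterday's attack and the police response", "news"),
    ("analysis of the government's counter-terrorism strategy", "news"),
    ("join us brothers and take up arms for the cause", "propaganda"),
    ("glory to the fighters, spread this call to recruit others", "propaganda"),
]

def train(examples):
    """Collect per-label word counts and document counts for Naive Bayes."""
    counts = {"news": Counter(), "propaganda": Counter()}
    docs = Counter()
    for text, label in examples:
        docs[label] += 1
        counts[label].update(text.split())
    return counts, docs

def classify(text, counts, docs):
    """Score each label with add-one-smoothed Naive Bayes; return the best."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values())
        score = math.log(docs[label] / sum(docs.values()))
        for w in text.split():
            score += math.log((c[w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

counts, docs = train(TRAIN)
print(classify("take up arms and recruit", counts, docs))  # propaganda
```

Even this toy example hints at why Zuckerberg calls the problem hard: much of the vocabulary appears in both reporting and propaganda, and the distinction lives in context rather than keywords.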
In the end, illegal speech may be the driving force that catalyzes a substantial change to Facebook's moderating processes, by providing harder red lines where it feels forced to act (even if defining what constitutes illegal speech in a particular region vs. what is merely abusive and/or offensive entails another judgement challenge).
One factor is inescapable: Facebook has ultimately agreed that all of the problem content identified via various high-profile media investigations does indeed violate its community standards and does not belong on its platform. Which rather raises the question: why was it not taken down when it was first reported? Either that's a systemic failure of its moderation system or rank hypocrisy at the corporate level.
The Times says it has reported its findings to the UK's Metropolitan Police and the National Crime Agency. It's unclear whether Facebook will face criminal prosecution in the UK for refusing to remove potentially illegal terrorist and child exploitation content.
The newspaper also calls out Facebook for algorithmically promoting some of the offensive material, by suggesting that users join particular groups or befriend profiles that had published it.
On that front, features on Facebook such as People You May Know automatically suggest connections a user might be interested in, based on factors such as mutual friends, work and education information, networks you're part of and contacts that have been imported, but also many other undisclosed factors and signals.
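As a rough illustration of how such a suggestion heuristic might rank candidates (Facebook's actual ranking signals are undisclosed; the graph and scoring below are entirely hypothetical), mutual-friend counting alone looks something like this:

```python
from collections import Counter

# Hypothetical friendship graph: user -> set of friends.
FRIENDS = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave", "erin"},
    "dave": {"bob", "carol"},
    "erin": {"carol"},
}

def suggest_friends(user: str, top_n: int = 3) -> list[str]:
    """Rank non-friends by number of mutual friends, one of the few
    signals Facebook publicly discloses for such suggestions."""
    mutuals = Counter()
    for friend in FRIENDS[user]:
        for candidate in FRIENDS[friend]:
            if candidate != user and candidate not in FRIENDS[user]:
                mutuals[candidate] += 1
    return [name for name, _ in mutuals.most_common(top_n)]

print(suggest_friends("alice"))  # ['dave', 'erin']
```

The newspaper's point is that exactly this kind of affinity-based linking, applied without content-aware safeguards, can end up steering users toward groups and profiles sharing illegal material.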
And just as Facebook's News Feed machine learning algorithms have been accused of favoring and promoting fake news clickbait, the underlying workings of its algorithmic processes for linking people and interests look to be increasingly pulled into the firing line over how they might be accidentally aiding and abetting criminal acts.