SHAFAQNA – Twitter and Facebook are refusing to take down hundreds of inflammatory Islamophobic postings from across their sites despite being alerted to the content by anti-racism groups, an investigation by The Independent has established.
The number of postings, some of which accuse Muslims of being rapists, paedophiles and comparable to cancer, has increased significantly in recent months in the aftermath of the Rotherham sex-abuse scandal and the murder of British hostages held by Isis.
The most extreme postings call for the execution of British Muslims – but in most cases those behind the abuse have not had their accounts suspended or the posts removed.
Facebook said it had to “strike the right balance” between freedom of expression and maintaining “a safe and trusted environment” but would remove any content reported to it that “directly attacks others based on their race”. Twitter said it reviews all content that is reported for breaking its rules which prohibit specific threats of violence.
Over the past four months Muslim groups have been attempting to compile details of online abuse and report it to Twitter and Facebook. They have brought dozens of accounts and hundreds of messages to the attention of the social-media companies.
But despite this most of the accounts reported are still easily accessible. On New Year’s Eve the author of one of the accounts reported wrote: “If whites had groomed only paki girls 1 It would be a race hate crime. 2 There would be riots from all Muslim dogs.”
Other examples of extremist postings on Twitter include:
* A user posted an image of a girl with a noose around her neck with the caption: “6 per cent of white British girls will become sex slaves to the Islamic slave trade in Britain”.
* A tweet which reads: “Should have lost World War Two. Your daughters would be getting impregnated by handsome blond Germans instead of Pakistani goat herders. Good job Britain.”
On Facebook, a posting in response to the beheading of Westerners in Syria is also still easily accessible despite being reported to the company weeks ago. It reads: “For every person beheaded by these sick savages we should drag 10 off the streets and behead them, film it and put it online. For every child they cut in half … we cut one of their children in half. An eye for an eye.”
When the comments were reported, Facebook said that they did not breach the organisation’s guidelines.
Fiyaz Mughal, director of Faith Matters, an interfaith organisation which runs Tell MAMA, a helpline for victims of anti-Muslim violence, said he was disappointed by the attitude of both firms. “It is morally unacceptable that social media platforms like Facebook and Twitter, which are vast profit-making companies, socially engineer what is right and wrong to say in our society when they leave up inflammatory, highly socially divisive and openly bigoted views,” he said.
“These platforms have inserted themselves into our social fabric to make profit and cannot sit idly by and shape our futures based on ‘terms and conditions’ that are not fit for purpose.”
Mr Mughal said that Tell MAMA regularly received reports of anti-Muslim rhetoric and hate from concerned Facebook and Twitter users.
He added that the far-right group Britain First relied on Facebook to organise, campaign and misinform followers about Islam and Muslims.
The rise in online abuse would appear to mirror a rise in hate attacks during the past year. In October the Metropolitan Police released figures to show hate crime against Muslims in London had risen by 65 per cent over the previous 12 months. Latest figures also suggest that, nationally, anti-Muslim hate crime has risen sharply following the murder of Lee Rigby in 2013.
One man, Eric King, was recently given a suspended sentence for sending a local mosque a picture, smeared with dog excrement, depicting Mohamed having sex with a pig. However, his Facebook account, which he used to send abusive messages to the same mosque, is still active and promoting anti-Muslim hatred.
Mr Mughal added that social-media platforms needed to make their content-moderation procedures stricter.
“If users were to express such unacceptable opinions about ‘shooting’ Black British citizens or discussed Jews as a ‘cancer’, their speech would not be legal. The same protections should be forwarded to references to the Muslim community,” he said.
In a statement Facebook said it had a clear policy for deciding what was and what was not acceptable freedom of speech. “We take hate speech seriously and remove any content reported to us that directly attacks others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition,” said a spokeswoman. “With a diverse global community of more than a billion people, we occasionally see people post content which, whilst not against our rules, some people may find offensive. By working with community groups like Faith Matters, we aim to show people the power of counter speech and, in doing so, strike the right balance between giving people the freedom to express themselves and maintaining a safe and trusted environment.”
A Twitter spokesman said: “We review all reported content against our rules, which prohibit targeted abuse and direct, specific threats of violence against others.”