
How Facebook finds "fakes"

16 May 2018

The report said Facebook has removed or put a warning screen for graphic violence in front of 3.4 million pieces of content in the first quarter, almost triple the 1.2 million a quarter earlier.

"We have a lot of work still to do to prevent abuse", Facebook Product Management vice president Guy Rosen said.

Responding to calls for transparency after the Cambridge Analytica data privacy scandal, Facebook yesterday said those closures came on top of blocking millions of attempts to create fake accounts every day.

In four of the six categories, it increased deletions over the previous quarter: spam (up 15% from the previous quarter), violent content (up 65%), hate speech (up 56%), and terrorist content (up 73%), while deletions of fake accounts were down 16%, and deletions of nudity and sexual activity saw no change.


The posts that keep Facebook's reviewers busiest are those showing adult nudity or sexual activity, quite apart from child pornography, which is not covered by the report.

Facebook today revealed for the first time how much sex, violence and terrorist propaganda has infiltrated the platform, and whether the company has successfully taken the content down.

The report did not include quantitative figures characterizing the social network's fight against fake news. In April, Facebook published its internal guidelines on how it decides to remove posts that include hate speech, violence, nudity, terrorism and more. The company says it has 10,000 human moderators helping to remove objectionable content and plans to double that number by the end of the year.

Automated systems, however, have a much harder time distinguishing messages that incite hatred: a computer does not understand cultural and contextual nuance, so in this category Facebook depends largely on its staff of human moderators. Separately, the company estimates that between 3% and 4% of the active accounts on its service are fake.


Hate speech is checked by review teams rather than technology. And more generally, as I explained last week, technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported.

"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too", wrote Rosen.

The report that went live yesterday compares much of Facebook's action against spam from Q4 2017 to Q1 2018.

The committee has also urged Facebook boss Mark Zuckerberg to appear before them, adding that it would be open to taking evidence from the billionaire company founder via video link if he would not attend in person.
