Facebook Is Coming Clean About Bullying On Its Platform

Facebook is finally coming clean about bullying on its platform.

By Kristi Eckert | Published



Amid intensifying legal troubles, Facebook has been taking steps to repair its fractured image. Not only has it changed its name to Meta, a move some have speculated was calculated to divert attention from the company’s woes, but it has also announced plans to end the use of facial recognition software on its platform. As part of its effort to rebrand itself and be more transparent with its user base, NPR reported that Facebook has now disclosed how many people on its platform encounter posts related to bullying or harassment. 

At a conference on Tuesday, Facebook (now Meta) detailed that, in content it analyzed between July and September, roughly 14 or 15 of every 10,000 posts a user viewed on Facebook contained material that could be classified as bullying or harassment. On Instagram, which Facebook also owns, users saw bullying or harassing posts 5 or 6 times for every 10,000 post views. 

Based on those findings, Facebook said it removed 9.2 million pieces of content from its platform and 7.8 million from Instagram. Guy Rosen, Meta’s vice president of integrity, said that “The vast, vast, vast, vast majority of content on Facebook doesn’t violate our policies and is perfectly good content.” However, while it is encouraging that Facebook is doing its due diligence and taking the content on its platform seriously by removing millions of posts in one fell swoop, that sweep still cannot catch every harmful post that ends up in the space. 


Rosen also admitted that the software used to identify this content is inherently flawed, because it has no way to accurately tell “…what is a bullying post or comment, and what is perhaps a lighthearted joke, without knowing the people involved or the nuance of the situation.” The challenges of identifying bullying or harassing content are much the same as those of identifying content that promotes disordered eating. Both are slippery areas to navigate; still, the prevalence of both issues calls for continued discussion about how to better identify potentially harmful posts. Perhaps there is a need to bring human reviewers into the mix to make the call on posts that fall into the grey area. 

Additionally, while Facebook’s efforts seem valiant on the surface, many have speculated that the company is only taking measurable action now because of mounting pressure from lawmakers and regulators. When whistleblower Frances Haugen testified that, during her time at the company, there was an undercurrent in the culture that undervalued the negative effects some content on the platform could have on its users, it put a microscope on the activities Facebook carried out away from the public eye. In fact, an investigation by The Wall Street Journal revealed evidence suggesting that Facebook removes less than 5% of offensive content. 

Regardless of the real reasons behind Facebook’s intensifying efforts to remove harmful content from its platform, reasons the public will most likely never be privy to, it is still encouraging that changes are beginning to happen. And even if the software Facebook currently uses to pinpoint harmful posts is innately flawed, the fact that some content is being removed is a benefit to its entire user base.