
Facebook Says Content Was Mistakenly Pulled Down Because of a Bug


What’s happening

Facebook parent company Meta revealed in a quarterly report that its media-matching technology had a bug that was later fixed.

Why it matters

The social network said the bug led to content that didn’t violate its rules mistakenly being pulled down.

Facebook parent company Meta said Tuesday that a bug resulted in content getting mistakenly pulled down in the first three months of this year. The social media giant said it fixed the problem and restored posts that were incorrectly flagged for violating its rules, including those against terrorism and organized hate.

Facebook took action against 2.5 million pieces of content that were flagged for organized hate in the first quarter, up from 1.6 million in the fourth quarter of 2021. The social network also took action against 16 million pieces of terrorism content in the first quarter, more than double the 7.7 million in the fourth quarter. Meta attributed the spike to a bug in its media-matching technology. A graph in the company’s quarterly community standards enforcement report showed that the social network restored more than 400,000 pieces of content mistakenly flagged for terrorism.

Meta’s photo-and-video service Instagram also took action against more terrorism and organized hate content because of this bug. The bug also affected other types of content. Because of this issue, Facebook restored 345,600 pieces of content flagged for suicide and self-injury in the first quarter, up from 95,300 in the fourth quarter, the report said. The social network also restored more than 687,800 pieces of content mistakenly flagged for sexual exploitation in the first quarter, up from 180,600 in the previous quarter.

The errors raise questions about how well Meta’s automated technology works and whether there are other bugs or mistakes that haven’t been caught. The company said it’s been taking more steps to prevent content moderation errors from happening. Meta is testing new AI technology that learns from appeals and content that’s restored. It’s also experimenting with giving people more advance warning before the social network penalizes them for rule violations, Meta Vice President of Integrity Guy Rosen said in a press call Tuesday.

Rosen said that when a false positive gets fed into its media-matching technology, it will “fan out” and pull down a large amount of content that doesn’t violate the platform’s rules.

“We have to be very diligent about the so-called seeds that go into the system before that fan-out occurs. What we had in this case is a rollout of some new technology, which introduced some false positives into the system,” Rosen said, adding that the content was later restored.

At the same time, Facebook is also facing scrutiny for not removing terrorism content before it goes viral. Over the weekend, a livestreamed video that officials said was posted on Twitch by the white man accused of fatally shooting 10 Black people at a Buffalo grocery store also spread on social networks such as Facebook and Twitter. The Washington Post reported that a link to a copy of the video surfaced on Facebook and was shared more than 46,000 times, receiving more than 500 comments. Facebook didn’t remove the link for more than 10 hours.

Rosen said that once the company became aware of the shooting, employees quickly designated the event as a terrorist attack and removed any copies of the video, as well as what officials have said is the shooter’s 180-page hate-filled rant.

One of the challenges, Rosen said, is that people create new versions of the video or links to try to evade enforcement by social media platforms. As with any incident, the company is going to refine its systems to more quickly detect violating content, he said. Rosen added that he didn’t have any more details to share about what specific steps Facebook is considering.
