Fakebook: Will pop-ups and warnings keep 'fake news' off Facebook?
Four months after the US elections, Facebook is doing all it can to shed its reputation for spreading fake news. Facebook CEO Mark Zuckerberg, on a nationwide tour, has even announced that the company is firmly against fake news.
"There have been some accusations that say that we actually want this kind of content on our service because it's more content and people click on it, but that's crap," said Zuckerberg. "No-one in our community wants fake information." To that end, Facebook is rolling out new tools in the war against fake news.
In December 2016, Facebook announced a collaboration with "third-party fact-checking organizations" to identify stories that don't live up to journalistic standards and to warn users who try to share them.
The new tools are now almost ready to roll out worldwide, and should be available to most, if not all, of the social network's 1.9 billion users in the coming days. A pop-up on mobile phones and a flag below the preview box on desktops will now be regular features of users' timelines. Users can also manually flag dubious news reports as fake, and the company announced that a 'disputed' label will be attached to these reports.
Facebook's war on fake news starts with a pop up window for re-posters of bad info. pic.twitter.com/qpPXZjLL3u — John Ourand (@Ourand_SBJ) March 19, 2017
When these stories appear in a user's timeline, they will be accompanied by a warning banner below. As Engadget reports, "First, the fake post either has to be flagged by a certain number of users or the company's automated software. The post is then sent to a fact-checking website like Snopes or Politifact where it is reviewed. If two or more fact-checkers flag it again, Facebook will apply the banner". At the moment, users are reporting extremely slow vetting times for these stories. According to reports, a story suggesting Donald Trump's Android phone was responsible for the recent leaks went unlabelled for five whole days.
How Facebook plans to tackle 'fake news'
- Facebook will use its software (a built-in algorithm) to look for telltale signs of fake news among stories gaining traction. It will also ask its 1.9 billion users to report what they think is 'fake news' by clicking a button that will appear at the top of every story being shared.
- If both users and the software find a story to be fake, a consortium of journalists is asked to fact-check the story.
- If those journalists find the story to be fake, Facebook will not say so directly. It will instead label the story "disputed by third-party fact-checkers".
- Facebook then attaches the banner to the story whenever it appears on timelines, and the algorithm is tweaked to make sure these stories don't appear frequently.
- Users who want to share such stories will get a prompt, on both desktop and mobile, asking them if they are sure they want to share that story.
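The steps above amount to a simple decision pipeline. Here is a minimal sketch of that logic in Python; the thresholds, function names and "under review" state are illustrative assumptions, not Facebook's actual implementation (the only detail confirmed by reports is that at least two fact-checkers must dispute a story before the label is applied):

```python
# Hypothetical sketch of Facebook's dispute-labelling workflow.
# Thresholds and names are assumptions for illustration only.

USER_FLAG_THRESHOLD = 100   # assumed number of user reports needed to trigger review
FACT_CHECKER_THRESHOLD = 2  # per reports, two or more fact-checkers must agree

def should_review(user_flags: int, software_flagged: bool) -> bool:
    """A story enters fact-checking if enough users report it
    or the automated software flags it."""
    return software_flagged or user_flags >= USER_FLAG_THRESHOLD

def label_story(user_flags: int, software_flagged: bool,
                fact_checker_disputes: int) -> str:
    """Return the label attached to a story in the timeline."""
    if not should_review(user_flags, software_flagged):
        return "none"
    if fact_checker_disputes >= FACT_CHECKER_THRESHOLD:
        # Facebook never calls the story "fake" outright;
        # it is merely "disputed".
        return "disputed by third-party fact-checkers"
    return "under review"

# A widely reported story disputed by two fact-checkers gets the banner:
print(label_story(user_flags=250, software_flagged=False,
                  fact_checker_disputes=2))
```

Note how the design mirrors the article's point: the function never returns "fake", only "disputed", keeping the judgment at arm's length from Facebook itself.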
Facebook trying to distance itself?
“It’s not always clear what is fake and what isn’t,” Zuckerberg said, adding, “A lot of what people are calling fake news are just opinions that people disagree with.
"We need to make sure that we don’t get to a place where we’re not showing content or banning things from the service just because it hurts someone’s feelings or because someone doesn’t agree with it – I think that would actually hurt a lot of progress."
It's strange that Facebook has chosen to outsource the problem rather than hire more people to vet stories internally. The plan, as it stands, puts the onus on five news organisations and fact-checking groups - ABC News, Politifact, FactCheck, Snopes and the Associated Press - that have signed on to vet the stories Facebook sends their way. Only if two of these groups flag a story as "disputed" will Facebook say so. It's like Facebook saying, "the story might be false, but it wasn't us who said so. It's these other guys".
Will this new labeling scheme make any difference to Facebook's 1.9 billion users? Will people actually stop sharing these kinds of stories? The sharing might decrease, but fake news definitely isn't going away.