Facebook's problem is more complicated than fake news

R. Kelly Garrett | First published: 21 November 2016, 22:13 IST

In the wake of Donald Trump's unexpected victory, many questions have been raised about Facebook's role in the promotion of inaccurate and highly partisan information during the presidential race and whether this fake news influenced the election's outcome.

A few have downplayed Facebook's impact, including CEO Mark Zuckerberg, who said that it is "extremely unlikely" that fake news could have swayed the election. But questions about the social network's political significance merit more than passing attention.

Do Facebook's filtering algorithms explain why so many liberals had misplaced confidence in a Clinton victory (echoing the error made by Romney supporters in 2012)? And is the fake news being circulated on Facebook the reason that so many Trump supporters have endorsed demonstrably false statements made by their candidate?

The popular claim that "filter bubbles" are why fake news thrives on Facebook is almost certainly wrong. If the network is encouraging people to believe untruths - and that's a big if - the problem more likely lies in how the platform interacts with basic human social tendencies. That's far more difficult to change.

A misinformed public

Facebook's role in the dissemination of political news is undeniable: in May 2016, 44 percent of Americans said they got news from the social media site. And the prevalence of misinformation circulating on the platform is equally clear.

It's plausible, then, that the volume of fake news on a platform where so many people get their news helps explain why so many Americans are misinformed about politics.

But it's hard to say how likely this is. I began studying the internet's role in promoting false beliefs during the 2008 election, turning my attention to social media in 2012. In ongoing research, I've found little consistent evidence that social media use promoted acceptance of false claims about the candidates, despite the prevalence of such untruths. Instead, it appears that in 2012, as in 2008, email continued to be a uniquely powerful conduit for lies and conspiracy theories. Social media had no reliably detectable effect on people's beliefs.

For a moment, however, let's suppose that 2016 was different from 2012 and 2008. (The election was certainly unique in many other regards.)

If Facebook is providing a platform on which citizens are less able to discern truth from fiction, that would constitute a serious threat to American democracy. But naming the problem isn't enough. To stem the flow of misinformation through social media, it's important to understand why it happens.

Don't blame filter bubbles

Facebook wants its users to be engaged, not overwhelmed, so it employs proprietary software that filters users' news feeds and chooses the content that will appear. The risk lies in how this tailoring is done.

There's ample evidence that people are drawn to news that affirms their political viewpoint. Facebook's software learns from users' past actions; it tries to guess which stories they are likely to click or share in the future. Taken to its extreme, this produces a filter bubble, in which users are exposed only to content that reaffirms their biases. The risk, then, is that filter bubbles promote misperceptions by hiding the truth.

The appeal of this explanation is obvious. It's easy to understand, so maybe it'll be easy to fix. Get rid of personalized news feeds, and filter bubbles are no more.

 