Eli Pariser Predicted the Future. Now He Can’t Escape It.

Six years after the Upworthy cofounder coined the term “filter bubble,” things are much worse.
Eli Pariser
Jon Shapley / Getty Images

In the aftermath of the US election, as pundits blamed highly biased media outlets and fake news stories for Trump’s win, Eli Pariser appeared to be some sort of augur. In 2011, he’d written the book warning that Facebook and Google’s personalization tools would drive us to become ever more partisan by showing us only the news and information with which we already agreed. He called it The Filter Bubble. Pariser’s warning has become our new reality. We have embraced the phrase he coined to describe the most pernicious effects of social media — the way its algorithms feed each of us information that supports views we already have, and creates the conditions for us to be more susceptible to falsehoods.

If Pariser was one step ahead of us, it was because he’d come up on the digital front lines. As executive director of MoveOn.org from 2004 to 2008, just as social networking began to go mainstream, he’d harnessed the momentum to do grassroots political organizing. In 2012, he teamed up with longtime collaborator Peter Koechley to start Upworthy, a media company focused on turning worthy stories into entertaining, viral clips. In other words, its goal was to repackage good-for-you ideas so that they might pierce your filter bubble.

The company grabbed immediate attention for its clickbait headlines that dominated Facebook News Feeds, rocketing Upworthy to more than 80 million views a month, a figure that then dropped substantially as Facebook changed its algorithm. A few years ago, the company doubled down on video. Today, Upworthy draws about 15–20 million viewers to its site each month, but far more people — more than 200 million, according to Pariser — watch videos like this one about the unexpected response a guy named Garry received when he admitted racial bias, or this one about a church that accepts LGBTQ members as they are, on Facebook and YouTube.

Pariser’s work has led him to believe that blaming fake news for fractured discourse is a red herring. Yes, no doubt, social media is pushing stories that are just plain false. But what most people encounter online isn’t news at all. “The thing that wins now mostly has always won, and is not even news at all,” he says. “The thing that wins now is some guy surfing off his roof into a garbage container.”

The problem with online distribution, Pariser believes, is that specific, true information can’t compete with that guy surfing off his roof. “Is the truth loud enough?” he asks. “If the problem is that the truth isn’t loud enough, it points in very different directions than if the problem is that fake news is misleading people.” I caught up with Pariser last week to discuss how his notion of the filter bubble has evolved.

Jessi Hempel: The Filter Bubble was published a year before Facebook reached a billion users. By the end of this year, two billion people are expected to log on every month. Has anything changed about your idea?

Eli Pariser: Because it’s relevant to so many people, this conversation about how the News Feed shapes what we get to know, and how unintended biases in those algorithms can have enormous effects, is happening more broadly. You don’t have to be an engineer to understand how powerful that is.

The US presidential election highlighted the impact of filter bubbles on decision making. Do you think they played a large role in electing our current president, Donald Trump?

After the election, I felt gratified that the idea that I had put out in the world was useful to people, but also worried that people were taking it a little too far. The filter bubble explains a lot about how liberals didn’t see Trump coming, but not very much about how he won the election. I think even if you’re talking about the conservative media ecosystem, my guess is that talk radio, local news, and Fox are a much more important piece of that story than random conservative fake news.

You mean, in particular, the fake news stories that were circulating on social media.

Yes, the specific flavor of “The Pope Endorsed Trump”-type fake news that everybody got in a tizzy about after the election. The number one source of news in America is still local TV news. We don’t even know what it’s going to be like when social media drives most of it, as it presumably will someday soon.

Isn’t that the reason to pay attention to this phenomenon? Over time, won’t people get less and less of their information from the local news?

I think that’s exactly the concern. I’m just saying that these kinds of effects will be most prevalent for heavy users of social media, for whom that’s their primary vector. Interestingly and complicatedly, that group includes almost all journalists [right now]. That’s a problem.

What hadn’t you considered about filter bubbles when you wrote the book?

It hadn’t really fully occurred to me, when I first had this image of a bunch of media sources and then a membrane or filter that surrounds a person that those sources get through, that the whole system would become self-aware in a certain sense — that the media organizations would grow phototropically toward those bubbles. I think certainly that has happened. You can target very particular niches or communities and reach a lot of those people, and do it by understanding how that algorithm works and what it lets in.

So you understood that our information would be filtered by the people we knew, but you hadn’t thought through how that would influence the purveyors of that information?

Yeah. I mean, there’s irony in that. I started Upworthy to try to get ideas and perspectives in front of lots of people that weren’t necessarily going to get through the algorithmic gauntlet. In some ways, I didn’t realize that the whole industry was going to be doing that, too. It’s the self-reflexivity of it — the feedback loop of it — that I had missed on the first pass through.

There is a crisis of trust in the media. Why do you think people have less confidence in the news than they ever have?

I’ve been thinking a lot about this question of, “Why is media losing trust?” I think trust is about a sense that you’re on my side, and you have my best interests at heart. If we’re honest with ourselves, most media are not on the side of most Trump voters, or rural Americans, or even most Americans. They’re on the side of a fairly narrow sliver of people, a lot of whom live on the coast, and their advertisers.

There was a study of the coverage of the Portland Press Herald in Maine, where I grew up. If you map which towns get covered [journalistically] and which don’t, you get a map that looks a lot like the Clinton areas of Maine [are covered], and not the Trump areas of Maine.

If those stories aren’t being told, no wonder there’s not a lot of trust or respect. I think that’s compounded by the fact that where distribution—the ability to reach a large audience—used to generally go along with some sense of journalistic process, now those two things have disaggregated. [i.e.: It’s just as easy for an individual to publish online as it is for the New York Times.] You’re relying on your friends for guidance about what to look at. That’s a much stronger signal than the brand of a news organization.

So how is trust accrued in a networked age? If trust isn’t given to you because you are an automatic authority by virtue of your connection to an institution like, say, The New York Times, then how is it earned now?

I feel like the focus on fake news is almost a red herring. I say that as someone who jumped into the fray with lots of ideas about how to fix it, and I think a lot of those ideas are good ideas. But one reason we’re talking about it is because it feels like a fixable problem for platforms. It also gives journalists a way to conceptualize and be superior to the problem…I think if you were to eliminate entirely the version of fake news that is the “Trump is endorsed by the Pope” version, you could rerun these whole last two years and everything would happen more or less the same. You wouldn’t see some huge difference in the way that things played out.

I really feel like it’s less about fake news and more about, “Is the truth loud enough?” And is it loud enough, in particular, for a broad population and not just news junkies? Whenever you look at who actually consumes news, it’s such a shockingly tiny chunk of the public as a whole. If the problem is that the truth isn’t loud enough, it points in very different directions than if the problem is that fake news is misleading people.

That’s a great distinction, Eli. It’s hard to imagine the truth could be loud enough, because the thing that wins right now is outrage and disgust, and truth can often be more subtle.

But I would also say, the thing that wins now mostly has always won, and is not even news at all. The thing that wins now is some guy surfing off his roof into a garbage container. I mean, you know, because entertainment is entertaining, and news is often boring. How do you rebuild trust and interest so that the truth gets an audience?

Do you have thoughtful ideas for how to do that?

I mean, I do think part of it has to do with really putting down some of our notions about how people come to and understand ideas, especially the ones that rely on a strictly rational approach. I say that as a wistful rationalist, who would really rather argue things out in a logic-driven format. I’ve been thinking a lot about a couple different pieces of that. One is a study that was done recently where people were presented with information about a political candidate, and one group just got some positive information about him. The other group was told he had stolen from his own campaign, and then were immediately presented with this abject apology: “We’re sorry. There was another person with the same name. We mismatched the records. This guy had nothing to do with that.” The researchers asked both groups, “Is he corrupt?” Actually, both groups agreed at similar levels that he’s not corrupt.

Then, they asked about favorability toward him, and the group that was exposed to the error and the correction was much less favorable toward him than the group that never heard that at all.

So it’s not enough to tell the truth — it’s how you tell the truth that matters?

If what you want is for people to have true ideas in their heads, I think thinking about not just what are the facts, but also what are the models that people are building on, and are those underlying models right or wrong, is really important. For example, the more you say Saddam Hussein had nothing to do with 9/11, the more people believe he had something to do with 9/11. Cognitively, the “not” signal is weaker than the “these two things have something to do with each other” signal. It turns out the answer to that is, you say, “Well, 15 of the hijackers were from Saudi Arabia, two were from the UAE, and one was from Egypt.” That replaces the need for that idea that Saddam Hussein had something to do with 9/11 in the first place. This is really far from how news and journalism tend to go about communicating and establishing the truth, right?

Also, there is a lot of evidence that when people feel like their identity is threatened, they hold onto their controversial beliefs more strongly, and they’re less accepting of people unlike them. Conversely, if your identity is affirmed or supported, then you’re actually much more willing to tolerate and be interested in new ideas. I think one of the questions is, in our standard political discussions, the identities at play are partisan identities. Are there ways to prime other identities that allow for better and more understanding conversations? Some of the best cross-partisan conversation online happens on sports forums and sports bulletin boards, because, [the assumption is] “Hey, we’re all Patriots fans first, and Democrats and Republicans second.”

The third piece is that people do think in stories and they think emotionally as well as factually. I think that’s a lot of what we’re trying to do at Upworthy. We tell stories that help illuminate a topic or an idea in a way that you’ll remember because it’s emotional and vivid, rather than being super dry.

So you are trying to harness everything you have concluded about how people find and remember important information in order to share important things?

Yes. As someone who cares about climate change, for example, I often feel psychologically punished for reading about it. I come away depressed and beaten down by the experience of learning about it. That’s not a great way to set up a good feedback loop of information consumption. How do you encourage folks to engage with this stuff in a way that doesn’t feel like, “We’re going to put a big hole in their day”? It doesn’t have to be like, “everything’s great,” but I’ll take non-despair.

A few years ago, Upworthy distinguished itself as being masterful at clickbait headlines, and you quickly drew a massive audience by mastering the Facebook News Feed. The News Feed algorithm has changed many times. What have you learned?

First off, for what it’s worth, we have more people viewing Upworthy content each month today, because we’ve shifted to video. The on-site audience is between 15 and 20 million. Then the video audience, which is mostly on Facebook, is around 200 million.

Our premise from the beginning was that we’re going to swim with the current and probably get a little bit ahead. Facebook is an enormous part of the media landscape, and we decided to launch there. We pivoted to video a few years ago, and that’s turned out to be a very good thing. There’s this ongoing conversation in media about, “Do you work with the platforms or fight them?” I have no regrets about taking a position on being really, really good at platforms. I feel like there’s some risk there, but there’s way more risk in pretending that they’re not going to be an incredibly important part of the ecosystem going forward.

What are you figuring out about distribution at Upworthy?

We have a social scientist on staff who helps us really understand how our stories are affecting people — what are the emotional valences behind that, what are the persuasion valences behind that, and what are the outcomes in terms of people’s willingness to do something? We’ve been doing a lot of that work with the Gates Foundation. It’s totally fascinating. One piece we looked at is correlations between emotions and sharing.

As it turns out, there’s this strong correlation between stories that gave people a sense of empowerment or, “I have some control, I have some agency,” and engagement. There’s way more sharing and engagement, even on really difficult issues like African public health. Wouldn’t it be great if people walked away from media more often feeling some sense of agency and control? That’s certainly not the predominant feeling that I get reading my News Feed these days.