Debunking Election and Social Media Myths
Released on 10/23/2020
Russia is by far the most advanced foreign adversary
sending manipulative messages over social media.
They're even more nuanced today than they were in 2016,
and we are no more prepared today than we were in 2016.
[dramatic music]
Hey, Wired, I'm Sinan Aral.
I'm the David Austin Professor of Management at MIT,
Director of MIT's Initiative on the Digital Economy,
and author of The Hype Machine,
about how social media disrupts our world.
I'm here to debunk some myths about the role
of social media in elections,
and specifically in the upcoming election of 2020.
[dramatic music]
Social media sways elections.
Did Russia sway the election of 2016?
We know they sent manipulative messages
to 126 million people on Facebook,
to 20 million on Instagram,
and posted some 10 million Tweets.
Do these things sway the election?
There are really three things to know.
Does it change vote choice?
Does it change voter turnout?
And is the reach, scope, and targeting of misinformation
or campaign information enough to sway an election?
Vote choice is simply who you choose to vote for,
given that you're voting.
Do you vote for Republicans or Democrats?
That's a vote choice.
The evidence on vote choice is relatively clear.
Social media messaging and digital advertising
in general has a very small to negligible
to zero effect on vote choice.
Voter turnout, on the other hand,
is whether you choose to vote at all,
and the number of people who vote in an election.
And there, the evidence is a little bit scarier,
in the sense that large-scale experiments have shown
that social media messages
and digital advertising can have
statistically significant effects on voter turnout.
Facebook ran an experiment
with 61 million people in it in 2010,
which showed that with a simple message,
they could create votes in congressional elections
that wouldn't have happened without their message.
They replicated that experiment in 2012
and demonstrated again the ability
for social media to create voter turnout.
Many studies indicate how digital messaging
can get out the vote and that's an important part
of changing or swaying elections.
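To make "statistically significant effects on voter turnout" concrete, here is a minimal sketch, using entirely hypothetical counts rather than the actual data from the Facebook experiments, of how the turnout lift between a randomly messaged group and a control group might be estimated:

```python
from math import sqrt

# Hypothetical counts, NOT the real Facebook experiment data:
# turnout in two randomly assigned groups of one million people each.
treated_n, treated_voted = 1_000_000, 372_000   # shown a get-out-the-vote message
control_n, control_voted = 1_000_000, 368_000   # shown nothing

p_t = treated_voted / treated_n
p_c = control_voted / control_n
lift = p_t - p_c                                 # absolute lift in turnout

# Normal-approximation 95% confidence interval for the difference.
se = sqrt(p_t * (1 - p_t) / treated_n + p_c * (1 - p_c) / control_n)
low, high = lift - 1.96 * se, lift + 1.96 * se

print(f"turnout lift: {lift:.2%}, 95% CI: ({low:.2%}, {high:.2%})")
```

Even a per-person effect of a fraction of a percentage point, once it is precisely estimated across tens of millions of users, can add up to a large absolute number of additional votes.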
Targeting is who you direct
digital social media messages to:
which populations, in which regions,
in which districts of the voting electorate.
The evidence in 2016 indicated
that Russian interference was targeted at swing states
and that the reach and scope of it
was large enough to affect voters in a way
that could change the election through voter turnout.
In addition,
we know that a lot of the manipulative messages sent
by Russia in 2016 were about voter suppression.
Voter suppression memes tend to be targeted
at specific communities.
So for instance,
we know that in 2016 on Instagram in particular,
African American voters were targeted
with voter suppression memes.
These memes suggested, for example,
that Hillary Clinton is not a fan of the Black voter
and therefore we should stay home,
or that there's really no one to vote for in this election,
no reason to vote at all.
Those types of memes were targeted through @-mentions
at communities that were African American,
or, in this year's election,
that follow the Black Lives Matter movement, and so on,
trying to suppress specific communities
of voters in key swing states.
The number one culprit in spreading misinformation
and voter suppression memes in 2016
and likely in 2020 is Russia.
While Russian misinformation was scary in the 2016 election,
the Russians are much more sophisticated today
than they were four years ago.
In addition,
this is happening during a global pandemic
with civil unrest in the streets arising
from the justifiable social movements
against police brutality in the United States.
With all of this uncertainty,
we are at a dramatic risk for foreign interference
in our election in 2020.
And of course,
social media is nowhere near
the only factor affecting elections.
Certainly the candidates, their charisma, their policies,
their advertising, their ability to connect with voters,
and the news of the day all matter.
What is hitting the pocketbooks, homes, and families
of everyday voters obviously has the largest effect.
Social media spreads fake news faster than the truth.
That's true.
We did a 10-year longitudinal study
of all of the verified true and false news
that spread on Twitter between 2006 and 2017.
In fact we found that false news
was 70% more likely to be retweeted
and false news traveled about six times faster
than true news online.
Fake news is not a new term.
It was not invented by Donald Trump.
In fact,
it first appeared, I believe,
in a Harper's Magazine news story,
and we've had the concept of falsity in journalism
for decades prior to today.
The thing that makes today different however,
is the speed and breadth and depth
with which social media can spread fake news
so much faster than the truth online
and how that can be targeted at specific individuals
and communities creating separate realities
for people who are seeing one type of news in one community
and a different type of news in a different community.
So when we found these results in our Twitter data,
the natural next question for us was why.
Why does fake news spread so much farther,
faster, deeper, and more broadly than the truth?
What we came up with was
what we call the Novelty Hypothesis.
So if you read the cognitive science literature,
you know that human attention is drawn to novelty.
New things in the environment.
If you read the sociology literature,
you know that we gain in status
when we share novel information
because it makes us look like we're in the know
or that we have inside information
that other people don't have.
So these two factors make it more likely
that we share novelty.
So when we checked the novelty of true
and false news compared to everything
that a given individual on Twitter had seen
in the two months prior,
we found that indeed false news
was way more novel than the truth,
and when we checked the replies to true and false Tweets
to see how people were expressing sentiment
about what they were reading,
we found that indeed, in reply to false news,
people expressed surprise, anger, and disgust,
while in reply to true news
they expressed anticipation, joy, and trust.
So the surprise confirmed our novelty hypothesis that yes,
false news is more novel.
People spread more novel information more often
than less novel information,
and people were genuinely surprised by fake news.
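As a rough illustration of the novelty idea (a toy sketch, not the pipeline used in the actual Twitter study), one could score how different a new tweet is from everything a given user saw in the prior two months using TF-IDF cosine distance:

```python
# Toy sketch of the novelty measurement idea: score a new tweet by how
# dissimilar it is from tweets the user has already seen.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def novelty_score(new_tweet: str, previously_seen: list[str]) -> float:
    """1.0 = unlike anything seen before, 0.0 = identical to something seen."""
    vectorizer = TfidfVectorizer().fit(previously_seen + [new_tweet])
    seen_vecs = vectorizer.transform(previously_seen)
    new_vec = vectorizer.transform([new_tweet])
    max_similarity = cosine_similarity(new_vec, seen_vecs).max()
    return 1.0 - float(max_similarity)

seen = ["senate passes budget bill", "local team wins championship"]
print(novelty_score("aliens endorse candidate, officials say", seen))  # high novelty
print(novelty_score("senate passes the budget bill today", seen))      # low novelty
```

A story whose content looks nothing like the user's recent feed scores close to 1.0, which is the sense in which false news turned out to be more novel than the truth.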
Voting booths are not hackable.
False.
Voting booths can be hacked,
have been hacked,
and have produced abnormal ballot results in a number
of examples over the last decade or so in the United States.
Many people believe that because the United States
has a federalist system, in which each state tallies
its votes in its own way,
uses different computer systems,
and has no centralized tallying of ballots,
the American voting system is somehow protected
from hacking at the voting booth itself.
But that's not true.
It just means that there are 50 different types
of systems that a hacker can attack.
So for instance, we know that in 2016,
a hacker named CyberZeist hacked the Alaska voting systems
and claimed that he could change the voting tallies
in any direction that he wanted in Alaska.
There are also a number of myths surrounding voting
that are spreading on social media.
The major one is that there is widespread voter fraud.
There is no real evidence of systematic voter fraud
at the level of ballots, or of other types of voter fraud:
people voting twice, dead people voting, and so on.
Although there have been a very, very small handful
of incidents that may have happened,
where there was an error on a ballot,
there has been no evidence of systematic voter fraud
for as long as we can remember in elections in the United States.
Which means that despite all
of the myths floating around social media,
we as citizens can be confident
in the integrity of our elections.
So my advice to all of us is that we vote
and vote as quickly as possible ahead of November 3rd.
Social media algorithms are dividing our society.
There is evidence that the recommendation algorithms
that social media uses do tend to give us more
of what we want, and therefore lock us into narrower
and narrower sets of information.
Filter bubbles refer to the fact
that in an algorithmic world,
we are each living in our own information bubble.
Meaning that what I see on social media is not what you see
and not what your friends see,
because everything that you see is tailored to you.
And it's tailored to you by algorithms
that are designed to give you more
of what you want to keep you engaged.
That creates these filter bubbles of information
that are unique to every individual.
Echo chambers are groups or communities of people
that are sharing the same information over
and over again with each other;
that information stays locked in that community
and doesn't cross over, for instance,
to the other side of the aisle,
where different information is being constantly shared
amongst a different set of people.
So there are certain algorithms,
for instance the YouTube algorithm,
that tend to recommend more and more
of the type of content that you seem engaged with
and interested in.
Studies have shown that these types
of algorithms tend to lead
to more extreme content being shown to the viewer.
These algorithms are designed to be bottomless or endless,
meaning they keep you engaged
in a constantly updating reel of new videos.
While the jury is out on whether this can radicalize someone
or the degree to which there
are systematic extremism outcomes
that are created by these algorithms,
the fact that they are sending you down rabbit holes
of more and more content similar to what you like
and engage with is troubling,
given the notion of the filter bubble.
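That feedback loop, recommending more of whatever gets engagement, can be illustrated with a toy simulation; this is not any platform's actual recommender, just the reinforcement dynamic at its core:

```python
import random
from collections import Counter

# Toy illustration of the engagement feedback loop behind filter bubbles:
# whatever the simulated user engages with gets weighted more heavily,
# so it gets recommended more, so it gets engaged with more.
CATEGORIES = ["news", "sports", "music", "conspiracy", "cooking"]

def recommend(weights: Counter) -> str:
    total = sum(weights.values())
    return random.choices(CATEGORIES, [weights[c] / total for c in CATEGORIES])[0]

weights = Counter({c: 1.0 for c in CATEGORIES})   # start with no preference
for step in range(200):
    shown = recommend(weights)
    # The user reliably lingers on one topic, and only occasionally on others.
    engaged = shown == "conspiracy" or random.random() < 0.2
    if engaged:
        weights[shown] += 1.0    # engagement is fed straight back into ranking

print(weights.most_common())     # the engaged-with category dominates the feed
```

After a few hundred steps, the single category the simulated user reliably engages with dominates the recommendation weights, which is exactly the narrowing that filter bubbles describe.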
In order to fight the filter bubble,
we have to seek out diverse content.
We have to follow people whose opinions
are different from our own.
We have to do searches for content
that is contrary to what we believe.
We have to demonstrate to the hype machine,
to the social media industrial complex,
that we're interested in diversity,
and that we are seeking a diversity
of opinions that are different from our own.
That will help us break out of the filter bubbles
that we find ourselves in with these algorithms.
You can easily spot a deepfake.
Deepfakes are synthetic videos generated
by machine learning algorithms called
generative adversarial networks.
These networks have a generator and a discriminator,
where the discriminator's job is to tell real
from fake videos and the generator tries
to generate more and more convincing synthetic video
'til it fools the discriminator
into believing that it's true.
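For readers who want to see that generator-versus-discriminator game in code, here is a heavily simplified sketch on toy one-dimensional data, assuming PyTorch; real deepfake models are vastly larger, but the adversarial training loop has the same shape:

```python
import torch
import torch.nn as nn

# Tiny generator and discriminator on 1-D toy data, not video.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0     # "real" samples drawn from N(3, 0.5)
    fake = generator(torch.randn(64, 8))      # generator turns noise into samples

    # Discriminator step: learn to label real as 1 and fake as 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to fool the discriminator into labeling fakes as 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 8)).detach().flatten())  # samples should drift toward ~3
```

The discriminator is rewarded for telling real samples from generated ones, the generator is rewarded for fooling it, and over many rounds the generated samples drift toward the real distribution.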
Now, the problem with deepfakes is
that they're more difficult to spot every day that goes by.
There are instances of audio deepfakes
where companies have been defrauded out
of millions of dollars,
where the CFO will be called by a synthetic attacker
using the voice of the CEO, requesting
that large sums of money be transferred
before the end of the quarter or to close a deal.
The reason why deepfakes are so troubling
is because seeing is believing
and a picture is worth a thousand words.
I have seen some incredibly professionally created
and convincing deepfakes for instance,
of President Barack Obama,
or Mark Zuckerberg,
or Prime Minister Boris Johnson,
or Kim Jong-Un that really kind of skate the line
between is this convincing or is this not?
As deepfakes become more commonplace
as the technology used to create them
becomes more democratized
and more people have access to it,
I think we're going to see a rising wave of synthetic audio
and video that could become very dangerous
in a political environment or in a commercial environment,
either through fraud or through political manipulation.
I think the most effective way to spot a deepfake is
to scrutinize the content of what's being said in the video.
If you can't imagine those words coming out of the mouth
of the person that you're viewing,
that is a good sign that this is a deepfake.
Social media can bring about positive change.
Most recently we've been focused on the potential disasters
that social media can create in our world,
but it's important not
to forget about the tremendous promise
that social media can also bring.
We know for instance,
that when Nepal experienced the greatest earthquake
it had seen in 100 years,
Facebook spun up a donate now button
and raised $15.5 million
from 770,000 people in over 100 countries,
which just shows you the mobilization potential
of this technology.
It's certainly played a catalyzing and accelerant role
in important social movements around the world,
like Black Lives Matter,
the Arab Spring, the Snow Revolution in Russia,
social mobilization in Japan and Hong Kong.
These kinds of social movements
can really be accelerated by social media.
Research at MIT and at Stanford shows
that Facebook creates $370 billion a year
in consumer surplus in the United States alone.
Imagine that for the entire world.
That's economic opportunity,
that's the ability to find jobs,
access to life saving health information,
and real human connection.
In some countries around the world,
Facebook is the internet.
It's the way that people conduct any number
of human activities,
from market transactions,
to running their businesses,
to staying in touch with their friends and family,
or finding out about where to vote or how to get healthcare.
These types of benefits are actually tremendous.
Social media is a very powerful tool
for creating such change in society.
The real question is what are we gonna use it for?
Are we gonna use it for the nefarious purposes
that we've seen it be used for recently
or are we gonna use it to bring about a better world?