Tech News

On Election Day, Facebook and Twitter made their products worse

That gust of wind you felt coming from Silicon Valley on Wednesday morning was the social media industry's preliminary sigh of relief.

For the past four years, executives at Facebook, Twitter, YouTube, and other social media companies have been obsessed with one overarching goal: not to be blamed for ruining the 2020 U.S. election the way they were blamed for the 2016 election, when Russian trolls and disinformation peddlers ran roughshod over their defenses.

So they wrote new rules. They built new products and hired new people. They ran elaborate tabletop exercises to plan for every possible election outcome. And on election day, they assigned huge teams to crack down on hoaxes and false claims around the clock.

So far, these efforts seem to have averted the worst. Despite the desperate (and entirely predictable) attempts by President Trump and his allies to undermine the legitimacy of voting in the states he is losing, no major foreign meddling campaigns have been exposed this week, and election day itself was relatively calm. Fake accounts and potentially dangerous groups were removed quickly, and Facebook and Twitter have been unusually proactive about slapping labels and warnings on premature claims of victory. (YouTube has been a different story, as evidenced by the company's slow, lukewarm response to a video falsely claiming that Mr. Trump had won the election.)

The week is young, of course, and there is still plenty of time for trouble. Election-related disinformation is already trending – some of it aimed at Latinos – and it will only grow as votes are challenged in court and conspiracy theorists seize on every uncertainty to undermine confidence in the eventual outcome.

Still, the platforms' worst fears have not yet come to pass. That is a good thing, and credit is due to the employees at these companies who have been busy enforcing their rules.

At the same time, it is worth examining how Twitter, Facebook, and YouTube have averted election-related trouble so far, because it sheds light on the very real problems they still face.

For months, nearly every step these companies have taken to secure the election has involved slowing down, shutting off, or otherwise hampering core parts of their products – in effect, defending democracy by degrading their apps.

They added friction to processes, like buying political ads, that used to be smooth and seamless. They brought in human experts to root out extremist groups and intervened manually to slow the spread of sketchy stories. They overrode their own algorithms to insert information from trusted experts into users' feeds. And as the results came in, they relied on calls from news organizations like The Associated Press rather than trusting their systems to surface the truth on their own.

Nowhere was this shift more evident than at Facebook, which for years envisioned itself as a kind of post-human communication platform. Mark Zuckerberg, the company's chief executive, spoke often about his philosophy of “frictionless” design, which aims to make things as easy as possible for users. Other executives I spoke to seemed to believe that Facebook would eventually become a kind of self-policing machine, with artificial intelligence doing most of the dirty work and humans intervening as little as possible.

In the run-up to the 2020 election, however, Facebook went in the opposite direction. It introduced a new, cumbersome approval process for political advertisers and blocked new political ads after Election Day. It throttled false claims and put in place a "virality circuit breaker" to give fact-checkers time to evaluate suspicious stories. And it temporarily turned off its recommendation algorithm for certain types of private groups to reduce the possibility of violent civil unrest.

All of these changes made Facebook genuinely safer. But they also meant walking back features that had been driving the platform's growth for years. It was a telling act of self-awareness, as if Ferrari had realized that the only way to keep its cars from crashing was to swap their engines for go-kart engines.

"Essentially, when you look at Facebook's election results, it was important to get a lot of traffic and attention to these hubs, which were curated by people," said Eli Pariser, a longtime media manager and activist at Civic Signals, a new one Project works that tries to reinterpret social media as public space. "This is an indication that when you have information that is really important, there is ultimately no substitute for human judgment."

Twitter, another platform that spent years trying to make sharing as smooth as possible, has spent much of the past four years putting on the brakes. It brought in more moderators, revised its rules, and put features like trending topics under heavier human oversight. In the months leading up to the election, it banned political ads and disabled sharing features for tweets containing misleading information about the election results, including some from the president's account.

YouTube hasn't acted nearly as aggressively this week, but it, too, has changed its platform in revealing ways. Last year, it tweaked its vaunted recommendation algorithm to slow the spread of borderline content. And it began promoting "authoritative sources" during breaking news events to keep hoaxers and conspiracy theorists from filling up search results.

All of this raises the critical question of what exactly happens when the election is over and the spotlight moves away from Silicon Valley. Will the warning labels and circuit breakers be taken out of service? Will the throttled algorithms be switched back on? Will it just be back to social media as usual?

Camille François, chief innovation officer at Graphika, a firm that investigates disinformation on social media, said it was too early to tell whether these companies' precautionary measures had worked as intended. But she acknowledged that this level of hypervigilance might not last.

"Many emergency processes have been set up on the platforms," ​​she said. "The sustainability and scalability of these processes is a fair question."

Mr. Pariser said the platforms' work to prevent election-related disorder this year raises bigger questions about how they will respond to other threats.

"These platforms are used every day for really important conversations," said Pariser. “If you are doing this for US elections, why not for elections in other countries? Why not climate change? Why not acts of violence? "

Those are the right questions. The social media companies may have made it through election night without a disaster. But, as with the election itself, the real battles are still to come.
