
Social media wasn’t ready for this war. It needs a plan for the next one.

Facebook, YouTube, TikTok and Twitter are hastily rewriting their rules on hate, violence, and propaganda in Ukraine — and setting precedents they might regret

Analysis by Staff writer
March 25, 2022 at 6:00 a.m. EDT
Facebook has relaxed its rules on hate speech in Ukraine to allow users to praise the Azov Battalion, a neo-Nazi militia, in the context of its resistance to the Russian invasion. A soldier from the battalion is shown here in Kharkiv, Ukraine, on March 22. (Andrzej Lange/EPA-EFE/REX/Shutterstock)

A month ago, praising a neo-Nazi militia or calling for violence against Russians could get you suspended from Facebook in Ukraine. Now, both are allowed in the context of the war between the two countries. Meanwhile, Russian state media organizations that once posted freely are blocked in Europe on the platform.

It isn’t just Facebook that’s rewriting its rules in response to Russia’s bloody, unprovoked invasion of Ukraine. From Google barring ads in Russia and taking down YouTube videos that trivialize the war, to Twitter refusing to recommend tweets that link to Russian state media and TikTok suspending all video uploads from the country in response to its “fake news” law — each of the largest social media platforms has taken ad hoc actions in recent weeks that go beyond or contradict its previous policies.

The moves illustrate how Internet platforms have been scrambling to adapt content policies built around notions of political neutrality to a wartime context. And they suggest that those rule books — the ones that govern who can say what online — need a new chapter on geopolitical conflicts.

“The companies are building precedent as they go along,” says Katie Harbath, CEO of the tech policy consulting firm Anchor Change and a former public policy director at Facebook. “Part of my concern is that we’re all thinking about the short term” in Ukraine, she says, rather than the underlying principles that should guide how platforms approach wars around the world.

Moving fast in response to a crisis isn’t a bad thing in itself. For tech companies that have become de facto stewards of online information, reacting quickly to global events, and changing the rules where necessary, is essential. On the whole, social media giants have shown an unusual willingness to take a stand against the invasion, prioritizing their responsibilities to Ukrainian users and their ties to democratic governments over their desire to remain neutral, even at the cost of being banned from Russia.


The problem is that they’re grafting their responses to the war onto the same global, one-size-fits-all frameworks that they use to moderate content in peacetime, says Emerson T. Brooking, a senior resident fellow at the Atlantic Council’s Digital Forensic Research Lab. And their often opaque decision-making processes leave their policies vulnerable to misinterpretation and questions of legitimacy.

The big tech companies now have playbooks for terrorist attacks, elections, and pandemics — but not wars.

What platforms such as Facebook, Instagram, YouTube and TikTok need, Brooking argues, is not another hard-and-fast set of rules that can be generalized to every conflict, but a process and a set of protocols for wartime that can be applied flexibly and contextually when fighting breaks out, loosely analogous to the commitments tech companies made to address terror content after the 2019 Christchurch massacre in New Zealand. Facebook and other platforms have also developed special protocols over the years for elections, from “war rooms” that monitor for foreign interference or disinformation campaigns to policies specifically prohibiting misinformation about how to vote, and they did the same for the covid-19 pandemic.

The war in Ukraine should be the impetus for them to think in the same systematic way about the sort of “break glass” policy measures that may be needed specifically in cases of wars, uprisings, or sectarian fighting, says Harbath of Anchor Change — and about what the criteria would be for applying them, not only in Ukraine but in conflicts around the world, including those that command less public and media attention.

Facebook, for its part, has at least started along this path. The company says it began forming dedicated teams in 2018 to “better understand and address the way social media is used in countries experiencing conflict,” and that it has been hiring more people with local and subject-area expertise in Myanmar and Ethiopia. Still, its actions in Ukraine, a country that had struggled to focus Facebook’s attention on Russian disinformation as early as 2015, show the company has more work to do.


The Atlantic Council’s Brooking believes Facebook probably made the right call in instructing its moderators not to enforce the company’s normal rules against calls for violence when Ukrainians express outrage at the Russian invasion. Banning Ukrainians from saying anything mean about Russians online while their cities are being bombed would be cruelly heavy-handed. But the way those changes came to light, via a leak to the news agency Reuters, led to mischaracterizations, which Russian leaders capitalized on to demonize the company as Russophobic.

After an initial backlash, including threats from Russia to ban Facebook and Instagram, parent company Meta clarified that calling for the death of Russian leader Vladimir Putin was still against its rules, perhaps hoping to salvage its presence there. If so, it didn’t work: A Russian court on Monday officially enacted the ban, and Russian authorities are pushing to have Meta ruled an “extremist organization” amid a crackdown on speech and media.

In reality, Meta’s moves appear to have been consistent with its approach in at least some prior conflicts. As Brooking noted in Slate, Facebook also seems to have quietly relaxed its enforcement of rules against calling for or glorifying violence against the Islamic State in Iraq in 2017, against the Taliban in Afghanistan last year, and on both sides of the war between Armenia and Azerbaijan in 2020. If the company hoped that tweaking its moderation guidelines piecemeal and in secret for each conflict would allow it to avert scrutiny, the Russia debacle proves otherwise.

Ideally, in the case of wars, tech giants would have a framework for making such fraught decisions in concert with experts on human rights, Internet access and cybersecurity, as well as experts on the region in question and perhaps even officials from relevant governments, Brooking suggests.

In the absence of established processes, major social platforms ended up banning Russian state media in Europe reactively rather than proactively, framing it as compliance with the requests of the European Union and European governments. Meanwhile, the same accounts stayed active in the United States on some platforms, reinforcing the perception that the takedowns were not their choice. That risks setting a precedent that could come back to haunt the companies when authoritarian governments demand bans on outside media or even their own country’s opposition parties in the future.


Wars also pose particular problems for tech platforms’ policies on political neutrality, misinformation and depictions of graphic violence.

U.S.-based tech companies have clearly picked a side in Ukraine, and it has come at a cost: Facebook, Instagram, Twitter and now Google News have all been blocked in Russia, and YouTube could be next.

Yet the companies haven’t clearly articulated the basis on which they’ve taken that stand, or how that might apply in other settings, from Kashmir to Nagorno-Karabakh, Yemen and the West Bank. While some, including Facebook, have developed comprehensive state-media policies, others have cracked down on Russian outlets without spelling out the criteria on which they might take similar actions against, say, Chinese state media.

Harbath, the former Facebook official, said a hypothetical conflict involving China is the kind of thing that tech giants — along with other major Western institutions — should be planning ahead for now, rather than relying on the reactive approach they’ve used in Ukraine.

“This is easier said than done, but I’d like to see them building out the capacity for more long-term thinking,” Harbath says. “The world keeps careening from crisis to crisis. They need a group of people who are not going to be consumed by the day-to-day,” who can “think through some of the strategic playbooks” they’ll turn to in future wars.

Facebook, Twitter and YouTube have embraced the concept of “misinformation” as a descriptor for false or misleading content about voting, covid-19, or vaccines, with mixed results. But the war in Ukraine highlights the inadequacy of that term for distinguishing between, say, pro-Russian disinformation campaigns and pro-Ukrainian myths such as the “Ghost of Kyiv.” Both may be factually dubious, but they play very different roles in the information battle.

The platforms seem to understand this intuitively: There were no widespread crackdowns on Ukrainian media outlets for spreading what might fairly be deemed resistance propaganda. Yet they’re still struggling to adapt old vocabulary and policies to such distinctions.

For instance, Twitter justified taking down Russian disinformation about the Mariupol hospital bombing under its policies on “abusive behavior” and “denying mass casualty events,” the latter of which was designed for behavior such as Alex Jones’ dismissal of the Sandy Hook shootings. YouTube cited an analogous 2019 policy on “hateful” content, including Holocaust denial, in announcing that it would prohibit any videos that minimize Russia’s invasion.


As for depictions of graphic violence, it makes sense for a platform such as YouTube to prohibit, say, videos of corpses or killings under normal circumstances. But in wars, such footage could be crucial evidence of war crimes, and taking it down could help the perpetrators conceal them.

YouTube and other platforms have exemptions to their policies for newsworthy or documentary content. And, to their credit, they seem to be treating such videos and images with relative care in Ukraine, says Dia Kayyali, associate director for advocacy at Mnemonic, a nonprofit devoted to archiving evidence of human rights violations. But that raises questions of consistency.

“They’re doing a lot of things in Ukraine that advocates around the world have asked them for in other circumstances, that they haven’t been willing to provide,” Kayyali says. In the Palestinian territories, for instance, platforms take down “a lot of political speech, a lot of people speaking out against Israel, against human rights violations.” Facebook has also been accused in the past of censoring posts that highlight police brutality against Muslims in Kashmir.

Of course, it isn’t only tech companies that have paid closer attention to — and taken a stronger stand on — Ukraine than other human rights crises around the world. One could say the same of the media, governments and the public at large. But for Silicon Valley giants that pride themselves on being global and systematic in their outlook — even if their actions don’t always reflect it — a more coherent set of criteria for responding to conflicts seems like a reasonable ask.

“I would love to see the level of contextual analysis that Meta is doing for their exceptions to rules against urging violence to Russian soldiers, and to their allowance of praise for the Azov Battalion” — the Ukrainian neo-Nazi militia that has been resisting the Russian invasion — “applied to conflicts in the Arabic-speaking world,” Kayyali says. “It’s not too late for them to start doing some of these things in other places.”

Correction: An earlier version of this story used an incorrect term for the Arabic language.