(Original link: theverge.com)
It was a big news day for bans: Twitch temporarily banned Donald Trump, Reddit banned The_Donald, YouTube banned a group of far-right creators, and India banned TikTok. But I still haven’t written about the Facebook ad boycott, which has accelerated since I last wrote — so let’s talk about that today, and we’ll get to the rest later this week.
A social media advertising boycott that began with some outerwear brands picked up steam over the weekend, and has been joined by some of the giants of consumer brand advertising. Unilever, Verizon, Starbucks, Coca-Cola, and Clorox are among those who have pulled their ads. (Microsoft did so quietly in May.) Some pulled their ads for a month; some put their ads on an indefinite “pause.” Some pulled their ads from Facebook only; others pulled them from Twitter and YouTube as well. Some joined an official boycott led by a coalition of civil rights groups that includes the Anti-Defamation League and the NAACP; others nodded respectfully at the boycott but said they were doing their own thing.
Most of the attention has focused on the Facebook-related aspects of the boycott, so let’s start there: What exactly do the advertisers want? The civil rights groups put up a web page with some “recommendations,” starting with hiring a “C-suite level executive with civil rights expertise to evaluate products and policies for discrimination, bias, and hate.” (My sense is that Facebook’s chief diversity officer does at least some of this already, if somewhat informally.) It also asked Facebook to “submit to regular, third party, independent audits of identity-based hate and misinformation.” (Like this one?)
Then there’s a part where they ask for their money back:
Provide audit of and refund to advertisers whose ads were shown next to content that was later removed for violations of terms of service.
The remainder is a mix of requests: things Facebook already does or has a policy against (“stop recommending or otherwise amplifying groups or content from groups associated with hate”; “removing misinformation related to voting”); things it sort of already has a policy against (“Find and remove public and private groups focused on white supremacy, militia, antisemitism, violent conspiracies, Holocaust denialism, vaccine misinformation, and climate denialism”); and things it thought about doing but decided against (fact-checking political ads).
To be fair, there are some original ideas in here. (My favorite, and something every platform should absolutely do: “Enable individuals facing severe hate and harassment to connect with a live Facebook employee.”) But in their public statements, most of the brands have spoken as if Facebook doesn’t ban hate speech at all.
Take Unilever, which removed ads from Twitter as well as Facebook. Here are Suzanne Vranica and Deepa Seetharaman in the Wall Street Journal:
“Based on the current polarization and the election that we are having in the U.S., there needs to be much more enforcement in the area of hate speech,” Luis Di Como, Unilever’s executive vice president of global media, said.
“Continuing to advertise on these platforms at this time would not add value to people and society,” Unilever said. The ban also will cover Instagram.
If advertising Hellmann’s mayonnaise on Facebook and Twitter was “adding value to people and society” before, it’s news to me. But the larger point is that what Unilever and other brands say they want — “more enforcement” — is so vague as to be nearly meaningless.
For instance, take a look at the statement Adidas and Reebok made when they pulled ads on Facebook and Instagram through July: “Racist, discriminatory, and hateful online content have no place in our brand or in society.” And here is Facebook’s policy on hate speech: “We do not allow hate speech on Facebook because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence.”
This would suggest that what is at stake here, to the extent that the boycott is actually about hate speech, is not what is allowed but what is enforced. And if that’s the conversation you want to have, you need to ask different questions. Questions like: How swiftly should violating content be removed? How much of it should be identified by automated systems? And how many mistakes are you willing to tolerate, both for posts removed in error and posts left up in error?
What makes the last one tricky is that given Facebook’s vast size, even a 1 percent error rate means that thousands of mistakes will be made every day. It’s not possible to let 1.73 billion people a day post freely on your services and have them all comply with your rules. Maybe your reaction to that is: it’s OK, some mistakes are fine. Maybe your reaction is: that’s terrible, and we should get rid of the law that makes all that posting possible. (This is the stated position of the Republican and Democratic candidates for president.)
Or maybe your reaction is, how did Facebook get so...