San Francisco
COMPANIES are usually accountable to no one but their shareholders.
Internet companies are a different breed. Because they traffic in speech - rather than, say, corn syrup or warplanes - they make decisions every day about what kind of expression is allowed where. And occasionally they come under pressure to explain how they decide, on whose laws and values they rely, and how they distinguish between toxic speech that must be taken down and that which can remain.
The storm over an incendiary anti-Islamic video posted on YouTube has stirred fresh debate on these issues. Google, which owns YouTube, restricted access to the video in Egypt and Libya after the killing of a United States ambassador and three other Americans. Then it pulled the plug on the video in five other countries, where the content violated local laws.
Some countries blocked YouTube altogether, though that did not stop the bloodshed: in Pakistan, where elections are expected soon, riots on Friday left 19 people dead.
The company pointed to its internal edicts to explain why it rebuffed calls to take down the video altogether. The video did not meet YouTube's definition of hate speech, the company said, and so it was allowed to stay up on the Web. It said little more.
That explanation revealed not only the challenges that confront companies like Google but also how opaque they can be in explaining their verdicts on what can be said on their platforms. Google, Facebook and Twitter receive hundreds of thousands of complaints about content every week.
“We are just awakening to the need for some scrutiny or oversight or public attention to the decisions of the most powerful private speech controllers,” said Tim Wu, a Columbia University law professor who briefly advised the Obama administration on consumer protection regulations online.
Google was right, Mr. Wu believes, to selectively restrict access to the crude anti-Islam video in light of the extraordinary violence that broke out. But he said the public deserved to know more about how private firms made those decisions in the first place, every day, all over the world. After all, he added, they are setting case law, just as courts do in sovereign countries.
Mr. Wu offered some unsolicited advice: Why not set up an oversight board of regional experts or serious YouTube users from around the world to make the especially tough decisions?
Google has not responded to his proposal, which he outlined in a blog post for The New Republic.
Certainly, the scale and nature of YouTube make this a daunting task. Any analysis requires combing through over a billion videos and weighing them against the laws and mores of different countries. It is unclear whether expert panels would allow for unpopular minority opinion anyway. The company said in a statement on Friday that, like newspapers, it, too, made “nuanced” judgments about content: “It's why user-generated content sites typically have clear community guidelines and remove videos or posts that break them.”
Privately, companies have been wrestling with these issues for some time.
The Global Network Initiative, a conclave of executives, academics and advocates, has issued voluntary guidelines on how to respond to government requests to filter content.
And the Anti-Defamation League has convened executives, government officials and advocates to discuss how to define hate speech and what to do about it.
Hate speech is a pliable notion, and there will be arguments about whether it covers speech that is likely to lead to violence (think Rwanda) or demeans a group (think Holocaust denial), just as there will be calls for absolute free expression.
Behind closed doors, Internet companies routinely make tough decisions on content.
Apple and Google earlier this year yanked a mobile application produced by Hezbollah. In 2010, YouTube removed links to speeches by an American-born cleric, Anwar al-Awlaki, in which he advocated terrorist violence; at the time, the company said it proscribed posts that could incite “violent acts.”
ON rare occasions, Google has taken steps to educate users about offensive content. For instance, the top results that come up when you search for the word “Jew” include a link to a virulently anti-Jewish site, followed by a promoted link from Google, boxed in pink. It links to a page that lays out Google's rationale: the company says it does not censor search results, despite complaints.
Susan Benesch, who studies hate speech that incites violence, said it would be wise to have many more explanations like this, not least to promote debate. “They certainly don't have to,” said Ms. Benesch, director of the Dangerous Speech Project at the World Policy Institute. “But we can encourage them to because of the enormous power they have.”
The companies point out that they obey the laws of every country in which they do business. And their employees and algorithms vet content that may violate their user guidelines, which are public.
YouTube prohibits hate speech, which it defines as speech that “attacks or demeans a group” based on its race, religion and so on; Facebook's ban likewise covers “content that attacks people” on the basis of identity. Twitter does not explicitly ban hate speech. And anyway, legal scholars say, it is exceedingly difficult to devise a universal definition of hate speech.
Shibley Telhami, a political scientist at the University of Maryland, said he hoped the violence over the video would encourage a nuanced conversation about how to balance free expression against other values, like public safety. “It's really about at what point does speech become action; that's a boundary that becomes difficult to draw, and it's a slippery slope,” Mr. Telhami said.
He cautioned that some countries, like Russia, which threatened to block YouTube altogether, would be thrilled to have any excuse to squelch speech. “Does Russia really care about this film?” Mr. Telhami asked.
International law does not protect speech that is designed to cause violence. Several people have been convicted in international courts of incitement to genocide in Rwanda.
One of the challenges of the digital age, as the YouTube case shows, is that speech articulated in one part of the world can spark mayhem in another. Can the companies that run those speech platforms predict what words and images might set off carnage elsewhere? Whoever builds that algorithm may end up saving lives.