A question I see coming up frequently in the news is what role tech companies should play in combating terrorism. Many are calling on social media companies like Twitter to do more to inhibit the spread of extremist propaganda on their platforms. In attempting to address these requests, tech companies are struggling to figure out a balance between preventing hatred and violence and maintaining a commitment to freedom of speech.
While ISIS’ tech savvy has been a concern for months, events in recent weeks have heightened the level of fear. Last week it was revealed that three British teenagers made their way into Syria after initiating contact with an ISIS recruiter via Twitter. A report released yesterday by the Brookings Institution estimated that at least 46,000 Twitter accounts operate on behalf of the Islamic State. The report also stated that Twitter has suspended only 1,000 of these accounts, though the social media site says the actual number is higher.1 While the number of ISIS accounts seems high, they actually represent only a very small fraction of users. Instead, the report says, their success comes primarily from their ability to repeatedly tweet and amplify their message from around 2,000 accounts.2 While Twitter is committed to cracking down on these accounts, its process is somewhat slow. Because the platform has close to 288 million accounts, it doesn’t have “the manpower to go into every one of their accounts and determine their origin.”3 Instead, it must rely on user complaints, which have greatly increased lately. Because of these account suspensions, ISIS has issued threats against Twitter and anyone affiliated with the company.
All of this raises the question of what role tech companies should play in this fight and how they can best strike a balance between the First Amendment and national security.
First, we know that the First Amendment isn’t all-protecting. There are exceptions to free speech, including language that could endanger other people. While the Supreme Court has been relatively protective of hateful language, it does not protect such language when it will lead to “imminent lawless action.” In Brandenburg v. Ohio, “[t]he Court used a two-pronged test to evaluate speech acts: (1) speech can be prohibited if it is ‘directed at inciting or producing imminent lawless action’ and (2) it is ‘likely to incite or produce such action.’”4
[quote author=”United States Supreme Court, Brandenburg v. Ohio (1969)”]Freedoms of speech and press do not permit a State to forbid advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.[/quote]
Simplified, this means that speech that will clearly lead to illegal acts is not protected. Speech that could endanger others is another exception. As we described in a previous blog post, the First Amendment would not, for example, protect someone falsely yelling “Fire!” in a crowded movie theatre, as doing so would create a “clear and present danger.”5
[quote author=”Justice Oliver W. Holmes, Schenck v. United States (1919)”]The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic … The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger.[/quote]
Most social media platforms have also established their own policies to prevent hate speech and violence. Facebook’s terms and conditions read, “You will not post content that: is hate speech, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence.”6 Similarly, Instagram forbids users from posting “violent, nude, partially nude, discriminatory, unlawful, infringing, hateful, pornographic or sexually suggestive photos or other content via the Service.”7 Interestingly, YouTube does not explicitly prohibit hate speech or language promoting violence on its site, but we do know that YouTube was quick to remove much of the Islamic State’s content from its platform. Twitter’s Terms of Service are more lenient, warning users that they “may be exposed to Content that might be offensive, harmful, inaccurate or otherwise inappropriate, or in some cases, postings that have been mislabeled or are otherwise deceptive.”8 However, in another part of the site titled “The Twitter Rules,” the social media platform does say that users “may not publish or post direct, specific threats of violence against others.”9
Despite these policies, there has still been some difficulty in managing the spread of extremist content on social media sites. Because of this, the public sector continues to seek help from the tech community. A few weeks ago the White House hosted a “Summit to Counter Violent Extremism,” attended by Google, Facebook, and Twitter, to strategize on how best to continue this fight.10 Congressman Ted Poe (R-TX) has also requested that Twitter remove ISIS accounts from its platform, “add ‘promoting terrorism’ to its defined list of violations, in order to allow users to more easily mark abusive content,” and create a designated team to monitor such content quickly.11 And it’s not just lawmakers in the United States asking for help. In mid-February, French Interior Minister Bernard Cazeneuve asked tech companies to help and urged them “to realize that they have an important role to play.”12 It’s clear that there is still more to be done.
Though their policies and the law seem to support social media platforms removing violent content, I understand why companies are concerned about a slippery slope. This delicate balance between freedom of speech and fighting terrorism is even more sensitive considering the recent attacks on Charlie Hebdo in France, attacks that were fueled by the magazine’s depiction of the prophet Muhammad. What is clear, though, is that technology companies have a role in this fight. We often hear the tech community saying it wants to use its skills to serve, and maybe this is one of those opportunities. As the Brookings Institution report concludes, “approaches to the problem of extremist use of social media […] are most likely to succeed when they are mainstreamed into wider dialogues among the broad range of community, private, and public stakeholders.”13 In other words, this challenge requires public-private partnerships.
1 Gladstone, R. and Goel, V. (2015, March 5). ISIS Is Adept on Twitter, Study Finds. The New York Times. Retrieved from http://www.nytimes.com/2015/03/06/world/middleeast/isis-is-skilled-on-twitter-using-thousands-of-accounts-study-says.html
2 Gladstone, R. and Goel, V.
3 Gladstone, R. and Goel, V.
4 Brandenburg v. Ohio. (n.d.). The Oyez Project at IIT Chicago-Kent College of Law. Retrieved from http://www.oyez.org/cases/1960-1969/1968/1968_492
5 Schenck v. United States. (n.d.). The Oyez Project at IIT Chicago-Kent College of Law. Retrieved from http://www.oyez.org/cases/1901-1939/1918/1918_437
6 Statement of Rights and Responsibilities. (2015, January 30). Facebook. Retrieved from https://www.facebook.com/legal/terms
8 Terms of Service. (2015, September 8). Twitter. https://twitter.com/tos?lang=en
9 The Twitter Rules. (2014). Twitter. https://support.twitter.com/articles/18311-the-twitter-rules
10 Liebelson, D. (2015, February 23). How The Obama Administration Is Asking Tech Companies To Help Combat ISIS. The Huffington Post.
11 Volz, D. (2015, February 24). GOP Lawmaker Wants Twitter to Ban ISIS. The National Journal.
12 Gauthier-Villars, D. and Schechner, S. (2015, February 17). Tech Companies Are Caught in the Middle of Terror Fight. The Wall Street Journal.
13 Berger, J.M. and Morgan, J. (2015, March). The ISIS Twitter census: Defining and describing the population of ISIS supporters on Twitter. The Brookings Institution. Retrieved from http://www.brookings.edu/research/papers/2015/03/isis-twitter-census-berger-morgan