According to University of Haifa researchers, extremists exploit an environment of vulnerability created by TikTok’s popularity among younger users
New York, June 24, 2020 — A newly published study reveals that TikTok, today’s fastest-growing social networking application, is also a growing hotbed of far-right extremism.
Led by researchers Gabriel Weimann, a professor of communication at the University of Haifa and senior researcher at the Institute for Counter Terrorism (ICT), and Natalie Masri, a research assistant and graduate student at ICT, the “Spreading Hate on TikTok” report scanned TikTok for far-right content through a systematic content analysis. This scan of TikTok videos, conducted from February through May 2020, revealed 196 postings related to far-right extremism. The posts encompassed the far-right ideologies of fascism, racism, anti-Semitism, anti-immigration, chauvinism, nativism and xenophobia; the postings’ activity ranged from espousing violence to promoting conspiracy theories to glorifying terrorists.
Developed by the Chinese company ByteDance, TikTok allows its 1.5 billion active users to upload lip-synched videos of up to 60 seconds with a variety of creative and interactive features. Yet the researchers explain that the app “has a darker side,” particularly in light of TikTok’s popularity among the younger generation. Forty-one percent of TikTok’s users are between ages 16 and 24, and although its Terms of Service prohibit users under age 13, many users who appear in videos are clearly younger. This creates an environment of vulnerability that is exploited by extremist groups.
“First, unlike all other social media, TikTok’s users are almost all young children, who are more naïve and gullible when it comes to malicious contents,” states the study, which first appeared in the Studies in Conflict & Terrorism journal. “Second, TikTok is the youngest platform thus severely lagging behind its rivals, who have had more time to grapple with how to protect their users from disturbing and harmful contents. Yet, TikTok should have learned from these other platforms’ experiences and apply TikTok’s own Terms of Service that does not allow postings that are deliberately designed to provoke or antagonize people, or are intended to harass, harm, hurt, scare, distress, embarrass or upset people or include threats of physical violence.”
Under the study’s methodology, Weimann and Masri first identified TikTok accounts of known far-right groups, including neo-fascists, neo-Nazis, anti-Semites, white supremacists and other extremist groups and organizations that feature ultranationalist, chauvinist, xenophobic, theocratic, racist, homophobic, anti-communist or reactionary views. They then collected posts on TikTok that featured hashtags associated with far-right movements. Finally, they examined the aforementioned accounts and posts, as well as accounts that showed interest in extremism by liking, commenting on or following those accounts.
“While most of the scholarly attention focused on social media has examined content from leading platforms like Twitter, Facebook or Instagram, the extremism occurring on other platforms like TikTok had gone largely unnoticed until this new study,” said Karen L. Berman, CEO of the American Society of the University of Haifa. “We hope that the insights and data revealed in the report will influentially inform the efforts of social media platforms, regulatory bodies and the general public to expunge hate and extremism from the internet.”