Originally published by Brian Hubbard in Artificial Intelligence on Medium
Can Media Intelligence Prevent Online Radicalization?
The right tools can detect extremist activity before the digital world and the real world meet
My introduction to the free internet was through 4chan. For me, it was the birthplace of memes, edgy humor, and the raw chaos we see behind the comments section of our everyday internet experience. Somewhere on the timeline between message boards and Twitter, it’s where I went to observe and take part in the unfiltered conversation behind more mainstream social sites like Myspace.
It was a place to laugh and yell at other faceless users at little to no consequence. An “image-based bulletin board” at its core, it maintains its reputation as a wild and uncensored collection of niche discussion boards where all manner of posting unfolds. These sub-communities are home to anything from lonely strangers communing over a shared taste in elegiac wallpaper to hate groups uniting around the symbol of a green frog.
It was the first place I learned how the internet, despite allowing for free speech and encouraging discourse with people from all over the world, was capable of fostering dangerous echo chambers.
Fast forward to today, with the recent release of popular documentaries like The Social Dilemma, and the online world is a little more aware of the dangers of echo chambers. Studies show that when you only follow people who share your opinions, political or otherwise, your feed is far more likely to reinforce existing beliefs and feed an unconscious confirmation bias, creating barriers to critical discourse rather than lifting them.
While it plays a role in the important conversation about our relationship with technology, some criticisms of The Social Dilemma take aim at the documentary’s dramatic simplification of a complex problem and its shortsightedness in only casting blame at major players in the “social industry.”
Although major tech companies have been shown to use algorithms that silently guide you toward more influencers or videos, all in the game of keeping you on their platform, the internet is a big place and home to many more communities than those found exclusively on the blue bird app.
Twitter still has a long way to go in policing its most fervent users, whose vitriolic, if not outright racist, views are nurtured by the platform’s algorithms, but it has already cleared out some of the worst accounts. This year it finally permanently banned former KKK grand wizard David Duke after letting him use the platform for more than a decade, and it has maintained its ban against conspiracy theorist Laura Loomer, nominee in Florida’s 21st Congressional District.
Even with limited restrictions, the far-right responded to what they believed was an infringement on their right to free speech by founding bubble platforms like Gab and Parler in the spirit of encouraging more “rational” debate, free of the language policing of the SJW and leftist agenda. A marketplace of ideas, if you will.
However, to no great surprise, these alternative platforms have become echo chambers in the same way their predecessors did, fanning the flames of extremism and giving a space for dangerous ideologies to commune and plan. A magnet for radical thought, conversations on Gab have gone so far as to openly call for terrorist attacks against minorities. While not necessarily the source of these ideas, online forums can accelerate the process of radicalization in a way that offline groups can’t. People are prone to take more risks behind a screen than in person and members of a group can meet from anywhere at any time.
By now it’s almost a truism: while the principles of the internet as a forum for free thought have given a voice to the voiceless and allowed organic movements like BLM and #metoo to be witnessed worldwide, they have also given terrorist groups the same gathering place in which to organize.
Using the Islamic State as its case study, one study contends that as social media develops, online and offline radicalization will remain inextricably linked, merging into a more integrated state the authors call “onlife.” The study cites how ISIS used video to radicalize and connect online recruits with jihadists on the ground in the post-war Middle East. Techniques like this bring to mind the Christchurch massacre and how Facebook had to quickly take down the video of the shooting for fear of it inciting other terrorists by way of example.
The problem of online radicalization is complicated, and will only grow more so as the technology develops and algorithms learn, but there are tools we can leverage to help detect and identify potentially dangerous activity.
Exercising Media Intelligence
Removing all toxic extremist communities from the internet is an unrealistic, if not impossible task. However, monitoring popular social media sites isn’t, at least not anymore with the aid of powerful, AI-driven media intelligence platforms that scan websites, mining them for content that might be indicative of extremist activity.
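As a toy illustration of what that kind of scanning might look like, consider flagging posts whose text matches a watchlist of extremist-associated phrases. This is a hypothetical sketch, not Zignal Labs’ or any vendor’s actual pipeline; real platforms rely on trained models rather than static keyword lists, and the watchlist and posts below are invented for illustration:

```python
# Hypothetical watchlist of phrases associated with extremist rhetoric.
# Real media intelligence platforms use trained classifiers, not keywords.
WATCHLIST = {"race war", "accelerate the collapse", "day of the rope"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any watchlisted phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in WATCHLIST)

# Illustrative posts run through the filter.
posts = [
    "Lovely weather for a picnic today.",
    "They want to accelerate the collapse of society.",
]
flagged = [p for p in posts if flag_post(p)]
```

Even this crude approach captures the core idea: the content is already public, so the hard part is not access but sifting signal from the noise at scale.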
Traditionally used to monitor news and information relevant to an organization’s brand, media intelligence is now being deployed to counteract extremist activity by catching it in its larval stage, while it’s still open and visible in a public forum. Zignal Labs is one such company, employing real-time analytics in an effort to combat civil unrest and terrorism before it evolves offline.
In a series of two reports, the company demonstrates how its technology was used to segment far-right groups in order to better understand their distinct behaviors and to provide data on the growth of their activity. By compiling frequently used words drawn from top white supremacist profiles into a “word cloud,” the briefs reveal a common narrative of victimization, with links to the white nationalist organization the National Vanguard. Another feature charts spikes in mentions and hashtags, much like Google Trends, underlining the rising volume of these topics and concluding that the white supremacist/neo-Nazi community has in fact grown online recently.
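Under the hood, both features reduce to frequency counting: term frequencies across posts feed the word cloud, and per-day mention counts feed the trend chart. A minimal sketch with invented sample data (real platforms ingest millions of posts in real time):

```python
from collections import Counter
from datetime import date

# Hypothetical timestamped posts, invented for illustration.
posts = [
    (date(2020, 10, 1), "they are replacing us, we are the victims"),
    (date(2020, 10, 1), "the victims of a rigged system"),
    (date(2020, 10, 2), "victims everywhere, wake up"),
]

# Word-cloud input: term frequencies across all posts, minus common words.
STOPWORDS = {"the", "a", "of", "we", "are", "they", "us"}
terms = Counter(
    word
    for _, text in posts
    for word in text.replace(",", "").split()
    if word not in STOPWORDS
)

# Trend line: mentions of a tracked term per day, as on Google Trends.
mentions_per_day = Counter(day for day, text in posts if "victims" in text)
```

The dominant term in `terms` surfaces the shared “victims” narrative, and `mentions_per_day` is exactly the kind of time series whose spikes the reports visualize.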
Another takeaway from the study was that not all right-wing extremists act the same. By filtering the content it ingests by topic, the platform was able to draw distinctions between different groups’ incentives and identify where they overlap, providing a more accurate picture of who is most likely to take up arms in the real world.
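One way to think about that segmentation is as tagging each group’s content with topic labels and intersecting the sets. The group names and topics below are hypothetical, and a production system would derive the labels from classifiers rather than hand-curated sets:

```python
# Hypothetical topic tags per group, as might be distilled from ingested content.
group_topics = {
    "group_a": {"immigration", "gun rights", "anti-government"},
    "group_b": {"immigration", "conspiracy", "antisemitism"},
    "group_c": {"gun rights", "anti-government", "militia organizing"},
}

def topic_overlap(g1: str, g2: str) -> set:
    """Topics shared by two groups — where their incentives align."""
    return group_topics[g1] & group_topics[g2]

shared = topic_overlap("group_a", "group_c")
```

A large overlap between a rhetoric-focused group and one organized around armed activity is precisely the kind of signal that narrows down who is most likely to act offline.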
In acknowledging how dangerous political ideologies ferment in online echo chambers, accelerating radicalization on social media and throughout the web, we understand where to begin. Media intelligence is an important step in the long process of combating violent extremist activity, providing an opportunity to track and analyze these conversations as they surface throughout the media landscape.
Media intelligence is still a relatively young phenomenon, but it’s quickly developing and the pieces are there to build an effective digital counter-terrorism strategy. Now it’s a question of whether it’ll be implemented alongside the necessary policy responses and regulation in order to mitigate extremist exploitation of the social media sphere. Further research into the field and strategic communication between the government and the private sector in fighting terrorists online is the way forward, highlighting once again how much it pays to be a good listener.