Original article was published on Artificial Intelligence on Medium
Most people over the age of 25 in the United States today remember with vivid detail where they were and what they were doing on the morning of Tuesday, September 11, 2001.
I was in the waiting room of our pediatrician’s office with my wife and 7-month-old daughter as the “Breaking News” banner appeared on the TV and the nightmare unfolded.
2,977 innocent people were killed due to the actions of 19 Muslim extremist hijackers: 2,606 in and around the World Trade Center in New York City, including more than 400 first responders; 246 on the four planes; and 125 at the Pentagon.
Over the following days, weeks, and months, our nation underwent a transformation that was at once inspiring and chilling.
The realm of counterterrorism was distinctly analog in the years prior to 9/11, relying on chains of physical evidence, wiretaps, and spycraft.
But a new realm was just beginning to come into its own in the background. As the internet boomed, online interaction via message boards and social media created what can best be described as a lawless “wild west” in which everyone with internet access could participate from behind a wall of digital anonymity.
With such a vast frontier to confront, the challenges faced by law enforcement and those working in counterterror became exponentially more difficult. To achieve some kind of parity, they must take advantage of the same technology that enables this new era of terror.
Artificial intelligence relies on analyzing massive sets of data to generate predictive models that can point to potential threats. Pattern analysis of that data helps AI most in four key areas: detection of online terror groups, detection of suspicious individual actors, detection of suspicious objects, and detection of suspicious transactions.
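To make the last of those areas concrete, here is a minimal sketch of what pattern analysis of transaction data can look like. This is a hypothetical toy example, not any agency's actual method: it flags amounts that sit far outside an account's normal pattern using a simple z-score test, the kind of statistical baseline on which more sophisticated machine-learning detectors build.

```python
from statistics import mean, stdev

def flag_suspicious(transactions, threshold=3.0):
    """Flag transactions whose amounts deviate strongly from the norm.

    A toy z-score test: any amount more than `threshold` standard
    deviations above the mean is flagged for human review.
    """
    amounts = [t["amount"] for t in transactions]
    mu, sigma = mean(amounts), stdev(amounts)
    return [t for t in transactions
            if sigma > 0 and (t["amount"] - mu) / sigma > threshold]

# A transfer far outside the account's usual range gets flagged.
history = [{"id": i, "amount": a} for i, a in enumerate(
    [40, 55, 38, 62, 47, 51, 44, 9500])]
print(flag_suspicious(history, threshold=2.0))
```

Real systems replace the z-score with learned models and score many features at once (counterparty, timing, geography), but the principle is the same: learn what normal looks like, then surface the outliers for human analysts.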
Detection, described above, is the first of four broad steps in dealing with possible terror threats.
[A]…research team at UMD’s Laboratory for Computational Cultural Dynamics determined that combating future attacks from the group requires a cocktail of actions, including fostering dissent within [terrorist group Lashkar-e-Taiba] LeT, hampering the organization’s ability to conduct communication campaigns or provide social services, and disrupting the links between LeT and other Islamist terror groups. The UMD researchers came to this conclusion after analyzing 20 years’ worth of data and around 770 different variables regarding [LeT], [and] searching for trends in attacks, such as the 2008 bombings and shootings in Mumbai.
The above effort and the recommendations resulting from it used sophisticated algorithms to mine data and discover probabilistic rules that can help predict terrorist actions. Using machine learning, these findings then became the basis for actions that law enforcement could take that would have direct negative effects on the goals of the terrorist group, including hacking the group’s online interactions.
Furthermore, the data analysis was able to find patterns in the frequency and triggers of the attacks, such as various geopolitical events, that continue to evolve the understanding of all terrorist actions.
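The rule-mining approach described above can be illustrated with a small sketch. The variable names and data here are invented for illustration (the actual UMD dataset spanned 20 years and roughly 770 variables); the idea is to estimate, from historical records, the conditional probability of an attack given each observed condition, and keep only rules with enough support and confidence.

```python
from collections import Counter

def mine_rules(records, outcome="attack", min_support=3, min_conf=0.7):
    """Mine simple probabilistic rules 'variable=value -> outcome',
    scored by confidence P(outcome | variable=value)."""
    cond_counts = Counter()   # how often each (variable, value) occurs
    joint_counts = Counter()  # how often it co-occurs with the outcome
    for rec in records:
        for var, val in rec.items():
            if var == outcome:
                continue
            cond_counts[(var, val)] += 1
            if rec[outcome]:
                joint_counts[(var, val)] += 1
    rules = []
    for cond, n in cond_counts.items():
        conf = joint_counts[cond] / n
        if n >= min_support and conf >= min_conf:
            rules.append((cond, round(conf, 2)))
    return sorted(rules, key=lambda r: -r[1])

# Hypothetical records: did a communications campaign or an election
# period precede an attack?
records = [
    {"comms_campaign": True,  "election_period": False, "attack": True},
    {"comms_campaign": True,  "election_period": True,  "attack": True},
    {"comms_campaign": True,  "election_period": False, "attack": True},
    {"comms_campaign": True,  "election_period": True,  "attack": False},
    {"comms_campaign": False, "election_period": False, "attack": False},
    {"comms_campaign": False, "election_period": True,  "attack": False},
]
print(mine_rules(records))
```

On this toy data the only surviving rule says that a communications campaign preceded an attack 75% of the time, which is exactly the kind of finding that would justify the recommendation to hamper the group's communication campaigns.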
Additionally, the use of malware to carry out cyber-attacks is an ongoing threat that is more prevalent than most people know. The contest between cybercrime and cyberterror on one side and cyber law enforcement on the other may be a zero-sum game, with the enemies of the people almost always hiding behind the anonymity of the web, but a score of 0–0, in which no attacks succeed, is far preferable to trading wins and losses. The aim of counterterror should be primarily to prevent attacks rather than to catch the bad guys. Sure, everyone wants to catch them and put them on trial, but that cannot be the priority if we want to save lives. This is where spending more on AI and machine learning makes sense rather than more on physical manpower and hardware. It may be a cliché, but knowing really is half the battle.
Whether we apprehend all of the terrorists or not isn’t the most important thing. Preventing the loss of life from planned attacks that were never carried out is a win in itself.
Counterterror technology is becoming increasingly ingrained in society. Deep neural networks power facial recognition, detection of suspicious actions and objects, license plate reading, and the mapping of criminal and terror networks. The urban world is filled with data intake points, from CCTVs to smart lights to gunshot and glass-break detectors and alarm systems, and even gas and smoke detectors that are networked and online.
Urban planners are even working with experts in economics, terrorism, and crime to “design out” the triggers of radicalization and crime from environments. This kind of preventive tactic is truly next-level, and admirable, as everyone benefits: worn-down neighborhoods get renewed, better food and education opportunities are provided for citizens, and cities are built with more light and green spaces to counter the concrete and crowding. These efforts address the most core problems behind many of our modern societal ills, and in the long term may do more good than almost anything else.
As we come to rely more and more on AI to help us manage terror and crime in the future, we must also be aware that some of the data that informs those systems is inherently flawed, as it is generated by fallible humans: police officers, lawyers, federal officials. Though machine learning can yield powerful insights, human touch and human judgment will always be required to ensure decisions are made appropriately. An AI at or beyond human level that can make better decisions than the best-trained humans is far off, and even then, it may not be something we are comfortable relying upon.
In the nearer term, the proactive approach of trying to counter the very causes of terrorism itself is our best bet. Beyond that, over the next several decades, the response to and mitigation of terror activities can be aided by augmenting members of law enforcement with integrated technologies, including vision and reaction-speed enhancements and wireless direct-brain connections to data and communications.
Ultimately, terror attacks will never end. They have been a part of human history as long as violence and war itself. However, we are in an age when the reach of a single terror act is global, and the recruitment of extremists and their radicalization is abetted by advanced technology. It is truly a heyday for dark forces. With the power of AI, we can combat those forces and, maybe, keep them from ever again hurting as many people as they did on 9/11.