Original article was published by Krishiv Shah on Artificial Intelligence on Medium
TweetBot: Using Twitter for Good
Let’s face it: Twitter is toxic. Twitter, a popular short-form content platform, has been at the centre of controversy ever since its inception. The platform’s popularity and its controversial nature stem from the same root: unfiltered, unmoderated and highly visible expression. Almost every day, millions of trolls and bots on Twitter target individuals, communities and even countries. These troll armies are seldom censored and hardly ever face repercussions.
Since little can be done about the trolls themselves, there is only one side we can help: the victims. Such hate against individuals or even communities can push people to take extreme steps. And unless a victim is popular, their situation is almost never acknowledged. There have been several instances of people tweeting about their suicide, not to mention the hundreds of indicative tweets before that.
So many of these people might have refused to seek help. But, what if they didn’t have to seek help? What if they didn’t have to ask to be helped? What if the help came to them? What if they didn’t have to die for us to acknowledge their existence and feelings? One answer: TweetBot.
So what is TweetBot?
TweetBot, in the simplest terms, is a suicidal-tweet detection bot.
Using the Twitter developer API, the bot live-streams tweets onto the local device. These tweets are processed into text-only strings, compiled into a lightweight database, and then exported as a comma-separated-value (CSV) table. Each tweet is then passed through a sentiment analysis model, trained on high-quality data and among the most accurate available, which predicts whether the tweet is positive or negative in nature. All negative tweets are then checked against a comprehensive list of words and phrases, and the results of this check predict whether a tweet is suicidal or indicative.
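The detection pipeline can be sketched roughly as below. This is a minimal, self-contained illustration, not the bot’s actual code: the live stream and the trained sentiment model are replaced by a toy word-lexicon scorer, and the phrase list is a small hypothetical sample rather than the bot’s comprehensive one.

```python
import re

# Toy stand-in for the trained sentiment model: a tiny word lexicon (hypothetical).
NEGATIVE_WORDS = {"hopeless", "worthless", "alone", "tired", "hate"}
POSITIVE_WORDS = {"happy", "great", "love", "excited"}

# Small illustrative sample of the detection phrase list (hypothetical).
INDICATIVE_PHRASES = ["want to die", "end it all", "no reason to live"]

def clean_tweet(raw: str) -> str:
    """Convert a raw tweet into a text-only string: drop URLs and mentions."""
    text = re.sub(r"https?://\S+", "", raw)
    text = re.sub(r"[@#]\w+", "", text)
    return re.sub(r"\s+", " ", text).strip().lower()

def sentiment(text: str) -> str:
    """Placeholder for the sentiment model: classify positive vs negative."""
    words = set(text.split())
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    return "positive" if score >= 0 else "negative"

def is_indicative(text: str) -> bool:
    """Negative tweets are checked against the phrase list."""
    return any(phrase in text for phrase in INDICATIVE_PHRASES)

def flag_tweet(raw: str) -> bool:
    """Full per-tweet pipeline: clean, run sentiment, then the phrase check."""
    text = clean_tweet(raw)
    return sentiment(text) == "negative" and is_indicative(text)
```

In a real deployment the cleaned strings would be written to the CSV table first, and a trained classifier would replace the lexicon scorer.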
Upon finding such a tweet from a particular account, the algorithm extracts the last 20 tweets from that account and passes each of them through the same procedure. When the number of flagged tweets exceeds a threshold, the algorithm can take a number of actions.
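The escalation step might look like this sketch. Here the account’s recent tweets are passed in as plain strings (fetching them would need the Twitter API and credentials), the detector is a simple phrase check standing in for the full per-tweet procedure, and the threshold value is illustrative.

```python
from typing import Callable, List

FLAG_THRESHOLD = 5  # illustrative; would be tuned in a real deployment

def should_escalate(recent_tweets: List[str],
                    is_flagged: Callable[[str], bool],
                    threshold: int = FLAG_THRESHOLD) -> bool:
    """Re-run detection over an account's recent tweets; escalate past threshold."""
    flagged = sum(1 for t in recent_tweets if is_flagged(t))
    return flagged >= threshold

# Placeholder detector: phrase check only (the real bot runs sentiment first).
PHRASES = ["want to die", "end it all", "no reason to live"]
def simple_detector(text: str) -> bool:
    t = text.lower()
    return any(p in t for p in PHRASES)
```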
Before getting there, I would like to show you the kind of data this algorithm extracts, processes and visualises.
These data visualisations can be helpful to both authorities and assistive organizations.
Coming back to the ‘What next?’: upon such a detection, the bot can currently only send the user a personal message and/or reply to their tweets. However, with the assistance of the appropriate organizations, and even governmental bodies, this mechanism could be made much more elaborate. The algorithm has the potential to alert designated individuals who can then personally help a person in a precarious situation, but this requires support from the aforementioned entities. If anyone can help take this ahead, please reach out: here or here 🙂
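The current action step could be sketched as below. Rather than calling the Twitter API directly (which needs credentials), this returns a description of the actions to take; the action names and the message text are hypothetical, not the bot’s actual wording.

```python
def choose_actions(username: str, can_dm: bool) -> list:
    """Decide outreach actions for a flagged account: a personal
    message if the account accepts DMs, plus a reply to their tweets."""
    message = ("You are not alone. If you are struggling, please consider "
               "talking to someone you trust or a local helpline.")
    actions = []
    if can_dm:
        actions.append({"type": "direct_message", "to": username, "text": message})
    actions.append({"type": "reply", "to": username, "text": message})
    return actions
```

A more elaborate version, as suggested above, would add an action that notifies a designated helper instead of (or alongside) the automated messages.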
I will not be going into the math and computations involved, at least in this article. However, you can find them here.
Although the algorithm is quite powerful in its current state, it also holds immense potential. Chief among these possibilities is automating the update of the search and detection phrases using a machine learning model. The bot could also collect a wealth of data from users, which could be used to streamline the searching system and increase its efficiency. A governmental collaboration could ensure implementation on a much larger scale, owing in part to the system’s scalability.