Source: Deep Learning on Medium
The discussion pages on Wikipedia are a crucial mechanism for editors to coordinate their work and protect pages from defacement. Unfortunately these discussions are also a major avenue by which editors experience toxic and harassing comments.
To protect editors, the Wikimedia Foundation has started the Wikipedia Detox research project to develop tools that can automatically detect toxic comments using machine learning models.
Personally, I believe this is the future of social media. I’m confident that in about 5 years all major platforms will have automatic abuse and cyber-bullying filters to stop this problem at the source.
But why wait 5 years? Let’s build a filter right now, using C#, .NET Core, and the ML.NET machine learning library!
ML.NET is Microsoft’s new machine learning library. It can run linear regression, logistic regression, clustering, deep learning, and many other machine learning algorithms.
And .NET Core is Microsoft’s cross-platform .NET runtime that runs on Windows, macOS, and Linux. It’s the future of cross-platform .NET development.
The first thing I need is a data file with lots of toxic Wikipedia comments. I’m going to use the 40k dataset from the Wiki Detox Project. This dataset has 40,000 labelled toxic and non-toxic comments.
It’s a tab-separated file with 8 columns:
- Label: 0 for a non-toxic comment and 1 for a toxic comment
- Revision ID: the unique identifier for the comment
- Text: the text of the comment
- Year: the year when the comment was published
- LoggedIn: indicates if the author of the comment was a logged-in user
- Namespace: the Wikipedia namespace of the discussion in which the comment was posted
- Sample: which random sample the comment came from
- Split: the data partition the comment came from
I will only focus on the label and comment text, and ignore all other data columns. I’ll build a binary classification machine learning model that reads in all comments and predicts whether each comment is toxic or non-toxic.
Let’s get started. Here’s how to set up a new console project in .NET Core:
$ dotnet new console -o Sentiment
$ cd Sentiment
Next, I need to install the ML.NET base package:
$ dotnet add package Microsoft.ML
Now I’m ready to add some classes. I’ll need one to hold a labelled comment, and one to hold my model’s predictions.
I will add both classes to the Program.cs file.
The SentimentIssue class holds one single comment. Note how each field is adorned with a LoadColumn attribute that tells the data loading code which column to import data from.
I’m also declaring a SentimentPrediction class which holds a single comment prediction. There’s a boolean classification Prediction, a toxicity Score, and the Probability that the comment is toxic.
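Here’s a sketch of what these two classes might look like. I’m assuming the ML.NET 1.x attribute names, and the column indices follow the 8-column layout described above (Label in column 0, Text in column 2):

```csharp
using Microsoft.ML.Data;

// Holds one labelled comment loaded from the TSV file.
public class SentimentIssue
{
    [LoadColumn(0)]
    public bool Label { get; set; }

    [LoadColumn(2)]
    public string Text { get; set; }
}

// Holds one prediction made by the model.
public class SentimentPrediction
{
    [ColumnName("PredictedLabel")]
    public bool Prediction { get; set; }

    public float Score { get; set; }

    public float Probability { get; set; }
}
```

The ColumnName attribute maps the PredictedLabel output column that ML.NET binary classifiers produce onto the Prediction property.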
Now I’m going to load the data into memory:
This code uses the method LoadFromTextFile to load the TSV data directly into memory. The class field annotations tell the method how to store the loaded data in the SentimentIssue class.
Note that I have a single file, so I need to split it into a training and a test partition. The TrainTestSplit method splits the data and reserves 80% for training and 20% for testing. We often use this ratio in data science.
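As a sketch, the loading code could look like this (the file name is my assumption; adjust it to wherever you saved the dataset):

```csharp
using Microsoft.ML;

var mlContext = new MLContext();

// Load the TSV file; the LoadColumn attributes on SentimentIssue
// tell ML.NET which columns to read into which fields.
var data = mlContext.Data.LoadFromTextFile<SentimentIssue>(
    "wikipedia-detox.tsv",   // hypothetical file name
    hasHeader: true);

// Reserve 80% of the rows for training and 20% for testing.
var partitions = mlContext.Data.TrainTestSplit(data, testFraction: 0.2);
```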
Now let’s build the machine learning pipeline:
Machine learning models in ML.NET are built with pipelines, which are sequences of data-loading, transformation, and learning components.
My pipeline has the following components:
- FeaturizeText, which converts the text of each comment into a vector of numeric feature values. This is a required step because machine learning models cannot handle text data directly.
- A FastTree classification learner, which will train the model to make accurate predictions.
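The pipeline might be assembled like this (a sketch with default FastTree hyperparameters; depending on your ML.NET version you may also need to install the separate Microsoft.ML.FastTree package):

```csharp
// Convert the comment text into a numeric feature vector called "Features",
// then append a FastTree binary classification trainer.
var pipeline = mlContext.Transforms.Text.FeaturizeText(
        outputColumnName: "Features",
        inputColumnName: nameof(SentimentIssue.Text))
    .Append(mlContext.BinaryClassification.Trainers.FastTree());

// Train the model on the training partition.
var model = pipeline.Fit(partitions.TrainSet);
```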
The FeaturizeText component is a very nice solution for handling text input data. The component performs a number of transformations on the text to prepare it for model training:
- Normalize the text (remove punctuation and diacritics, convert to lowercase, etc.)
- Tokenize the text into words
- Remove all stopwords
- Extract Ngrams and skip-grams
- TF-IDF rescaling
- Bag of words conversion
The result is that each message is converted to a vector of numeric values that can easily be processed by the model.
Finally, I train my model on the training partition with a call to Fit(…).
Now let’s test the model on the data in the test partition:
I call Transform(…) to set up predictions for every comment in the test partition. The Evaluate(…) method compares these predictions to the actual truth and automatically calculates the following metrics for me:
- Accuracy: this is the number of correct predictions divided by the total number of predictions.
- AUC: the area under the ROC curve, which indicates how well the model separates the two classes: 0 = the model is wrong all the time, 0.5 = the model produces random output, 1 = the model is correct all the time. An AUC of 0.8 or higher is considered good.
- AUCPRC: the area under the precision-recall curve, an alternative to AUC that is more informative for heavily imbalanced datasets with many more negative results than positive.
- F1Score: this is a metric that strikes a balance between Precision and Recall. It’s useful for imbalanced datasets with many more negative results than positive.
- LogLoss: this is a metric that expresses the size of the error in the predictions the model is making. A logloss of zero means every prediction is correct, and the loss value rises as the model makes more and more mistakes.
- LogLossReduction: this metric is also called the Reduction in Information Gain (RIG). It expresses how much better the model’s predictions are compared to random guessing.
- PositivePrecision: also called ‘Precision’, this is the fraction of positive predictions that are correct. This is a good metric to use when the cost of a false positive prediction is high.
- PositiveRecall: also called ‘Recall’, this is the fraction of all actual positive cases that the model predicts correctly. This is a good metric to use when the cost of a false negative is high.
- NegativePrecision: this is the fraction of negative predictions that are correct.
- NegativeRecall: this is the fraction of all actual negative cases that the model predicts correctly.
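The evaluation step described above might look like this sketch. The property names follow the ML.NET 1.x CalibratedBinaryClassificationMetrics class:

```csharp
// Generate predictions for every comment in the test partition,
// then compare them against the actual labels.
var predictions = model.Transform(partitions.TestSet);
var metrics = mlContext.BinaryClassification.Evaluate(predictions);

Console.WriteLine($"Accuracy:          {metrics.Accuracy:P2}");
Console.WriteLine($"AUC:               {metrics.AreaUnderRocCurve:P2}");
Console.WriteLine($"AUCPRC:            {metrics.AreaUnderPrecisionRecallCurve:P2}");
Console.WriteLine($"F1Score:           {metrics.F1Score:P2}");
Console.WriteLine($"LogLoss:           {metrics.LogLoss:0.##}");
Console.WriteLine($"LogLossReduction:  {metrics.LogLossReduction:0.##}");
Console.WriteLine($"PositivePrecision: {metrics.PositivePrecision:0.##}");
Console.WriteLine($"PositiveRecall:    {metrics.PositiveRecall:0.##}");
Console.WriteLine($"NegativePrecision: {metrics.NegativePrecision:0.##}");
Console.WriteLine($"NegativeRecall:    {metrics.NegativeRecall:0.##}");
```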
When filtering toxic comments, I definitely want to avoid false positives because I don’t want to be blocking valid comments and frustrating the work of legitimate Wikipedia editors.
I also want to avoid false negatives but they are not as bad as a false positive. Having some toxic comments slipping through the filter is bad, but it’s not the end of the world.
So I’m going to focus on Precision and AUC to evaluate this model.
As a final step, I’m going to run a toxicity scan on the following comment:
“With all due respect, you are a moron”
Ouch. Well, at least the author tries to be polite.
Here’s how to do it.
I use the CreatePredictionEngine method to set up a prediction engine. The two type arguments are the input data class and the class to hold the prediction.
And once my prediction engine is set up, I can simply call Predict(…) on the sample comment to make a prediction.
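Sketched out, the prediction code could look like this:

```csharp
// Set up a prediction engine that scores one comment at a time.
// The type arguments are the input data class and the prediction class.
var engine = mlContext.Model
    .CreatePredictionEngine<SentimentIssue, SentimentPrediction>(model);

// Score the sample comment.
var issue = new SentimentIssue { Text = "With all due respect, you are a moron" };
var prediction = engine.Predict(issue);

Console.WriteLine($"Prediction:  {(prediction.Prediction ? "Toxic" : "Non-toxic")}");
Console.WriteLine($"Probability: {prediction.Probability:P2}");
Console.WriteLine($"Score:       {prediction.Score:0.##}");
```

Note that a prediction engine is convenient for single ad-hoc predictions like this one; for scoring many comments at once, Transform(…) on a loaded data view is the faster route.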
So what’s the output going to look like?
When I run the app (I tested it both in the Visual Studio Code debugger on my Mac and in a zsh shell), I get the following results:
The model AUC is 0.96 which indicates that my model has excellent predictive ability.
The model precision is 0.9. This means that 90% of all toxicity predictions made by the model are correct. Only 10 out of every 100 comments flagged as toxic are actually non-toxic.
But the recall is 0.6, which means that my model only detects 60% of all toxic comments. Four in ten toxic comments will not be detected and will slip through the filter.
This is still a good result. The precision is my most important metric because the cost of false positives is high, and I’ve got it at 90%.
But you can also see how AUCPRC and the F1 Score take the recall into account, and how they are much lower than the AUC and the precision. This reflects the model’s struggle with false negatives.
Finally, let’s take a look at the sample comment. The prediction engine flags my comment as toxic with a probability of 92.9% and a score of 6.43.
Not bad. I could put this filter to good use right away.
So what do you think?
Are you ready to start writing C# machine learning apps with ML.NET?