Can a machine learning model detect flirty text messages more accurately than humans? I bet it can!



Admit it, sometimes we don’t know when someone is hitting on us. Or is it just me? I don’t think so! Or maybe I’m just shy!

Have you been in this situation before? We overthink text messages from our crush or potential partner, wondering if he/she is into us, and we tend to hand our smartphone to friends, thinking they can be objective about it. They may give you their point of view, but they can’t figure it out with a hundred percent certainty.

Imagine you could use a machine that has collected and learned from hundreds of thousands of text messages with flirty implications. That sounds fun!

A group of researchers at Stanford University found that humans are kind of terrible at detecting flirtatious behavior, and they designed a flirtation-detection model, trained on dialogue and lexical features, that detects a speaker’s intent to flirt with up to 71.5% accuracy.

Check the paper here:

There are many reasons to get the wrong impression about something, thanks to our emotions and desires. That’s where machines come in! A model can provide an objective answer to this task. So I believe a machine can detect whether someone is flirting or just being friendly with you through text messages, and with decent accuracy.

I will try to replicate the same experiment using data I collected and Microsoft Azure Machine Learning Studio, framing it as a sentiment-analysis-style prediction. The model will identify two classes: Flirty and Not Flirty text messages.

To start this experiment, I collected almost a thousand text messages in the wild:

  • Searched Twitter for hashtags such as #flirttext and collected some tweets
  • Collected example text messages from relationship experts and from interviews with real people that I found on different websites.

The collected data looks like this:

Let’s analyze the data!

Wordcloud for flirty text messages

What insights do you get from this wordcloud?

  • Night and Day are the biggest words in this wordcloud. People flirting with you tend to send you “Good morning” and “Good night” texts religiously. That never goes away, it’s a classic! It’s a big sign that someone is into you.
  • Love is always a strong word. No explanation needed.
  • Smile, a good thing to admire in a person. When someone compliments your smile, it’s something special.
  • Beautiful. I don’t need to explain this word!

Wordcloud for non-flirty texts

Can you spot an interesting insight in this wordcloud?

Want: This word appeared a lot in text messages like the following:

  • I like and respect you and want to be straightforward to be fair…I just don’t think I’m the right fit.
  • I don’t want to get serious because I don’t want to be in a long-distance relationship.
  • I just want to keep things simple.
  • I don’t want to rush into anything too soon.

Busy, Afraid, Hurt, Connection, Work, Career, Rush, and Focus are some of the words I find very interesting in this kind of text. These words come up a lot when someone expresses that he/she is not into you.

Creating a Machine Learning Model to detect flirty texts using Azure Machine Learning Studio

https://cdn-images-1.medium.com/max/1000/1*9EezIdL-DsHBzZGmzm3ukg.png

Last month, I enrolled in Introduction to Artificial Intelligence on edX, a massive open online course provider. edX hosts online university-level courses in a wide range of disciplines for a worldwide student body, including some courses at no charge.

To work on the projects from this course, I needed a Microsoft Azure account. I like how easily you can create a model in Azure by dragging and dropping elements onto the workspace.

So this time, instead of using a Jupyter Notebook, I will use Azure to create and deploy the model.

Below are the main steps to create a sentiment analysis project in Azure. You can find the same steps in the Microsoft documentation for building a sentiment analysis model in Azure Machine Learning Studio.

  1. Clean and preprocess text dataset
  2. Extract numeric feature vectors from pre-processed text
  3. Train classification or regression model
  4. Score and validate the model
  5. Deploy the model to production

Step 1: Clean and preprocess text dataset

The first step is to divide the text messages into categorical low and high buckets to formulate the problem as two-class classification. I used the Edit Metadata and Group Categorical Values modules for this.

Then I clean the text using the Preprocess Text module. The cleaning reduces the noise in the dataset, helps you find the most important features, and improves the accuracy of the final model. I remove stopwords (common words such as “the” or “a”), numbers, special characters, duplicated characters, email addresses, and URLs. I also convert the text to lowercase, lemmatize the words, and detect sentence boundaries, which are then indicated by the “|||” symbol in the pre-processed text.

After the preprocessing is complete, I split the data into train and test sets.
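For readers who prefer code over drag-and-drop, here is a rough Python sketch of what this cleaning and splitting step amounts to. It is only an approximation of the Preprocess Text module, and it assumes a hypothetical CSV with “text” and “label” columns (Flirty / Not Flirty) plus NLTK for stopwords and lemmatization.

```python
import re
import pandas as pd
from nltk.corpus import stopwords            # requires nltk.download("stopwords")
from nltk.stem import WordNetLemmatizer      # requires nltk.download("wordnet")
from sklearn.model_selection import train_test_split

# Hypothetical dataset: a "text" column and a "label" column ("Flirty" / "Not Flirty")
df = pd.read_csv("flirty_texts.csv")

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def clean(text):
    text = text.lower()
    text = re.sub(r"http\S+|\S+@\S+", " ", text)   # drop URLs and email addresses
    text = re.sub(r"[^a-z\s]", " ", text)          # drop numbers and special characters
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)     # collapse duplicated characters ("soooo" -> "soo")
    tokens = [lemmatizer.lemmatize(t) for t in text.split() if t not in stop_words]
    return " ".join(tokens)

df["clean_text"] = df["text"].apply(clean)

# Split into train and test sets, preserving the class balance
X_train, X_test, y_train, y_test = train_test_split(
    df["clean_text"], df["label"], test_size=0.3, random_state=42, stratify=df["label"])
```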

Step 2: Extract numeric feature vectors from pre-processed text

To build a model for text data, you typically have to convert free-form text into numeric feature vectors. In this example, I used the Extract N-Gram Features module to transform the text data into that format. This module takes a column of whitespace-separated words and computes a dictionary of the words, or N-grams of words, that appear in your dataset. Then it counts how many times each word, or N-gram, appears in each record and creates feature vectors from those counts. In this example, I set the N-gram size to 2, so the feature vectors include single words and combinations of two consecutive words.
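Outside of Azure, the counting part of this module could be sketched with scikit-learn’s CountVectorizer, assuming the cleaned training texts from the earlier sketch (TF-IDF weighting comes in the next section):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Count unigrams and bigrams, mirroring the module's N-gram size of 2
count_vec = CountVectorizer(ngram_range=(1, 2))
X_train_counts = count_vec.fit_transform(X_train)

print(X_train_counts.shape)                      # (number of messages, number of unique N-grams)
print(count_vec.get_feature_names_out()[:10])    # a peek at the learned vocabulary
```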

TF-IDF

I applied TF-IDF (Term Frequency-Inverse Document Frequency) weighting to the N-gram counts. This approach adds weight to words that appear frequently in a single record but are rare across the entire dataset. Other options include binary, TF, and graph weighting.

Such text features often have high dimensionality. For example, if your corpus has 100,000 unique words, your feature space would have 100,000 dimensions, or more if N-grams are used. The Extract N-Gram Features module gives you a set of options to reduce the dimensionality. You can choose to exclude words that are too short or too long, or too uncommon or too frequent, since those tend to have little predictive value. In this experiment, I exclude N-grams that appear in fewer than 5 records or in more than 80% of records.
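In scikit-learn terms, TF-IDF weighting plus those frequency filters could look like the sketch below; min_df and max_df play the role of the “fewer than 5 records” and “more than 80% of records” thresholds.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# TF-IDF weighted unigrams and bigrams, dropping N-grams that appear in
# fewer than 5 messages or in more than 80% of all messages
tfidf_vec = TfidfVectorizer(ngram_range=(1, 2), min_df=5, max_df=0.8)
X_train_tfidf = tfidf_vec.fit_transform(X_train)
```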

You can also use feature selection to keep only the features that are most correlated with your prediction target. I used Chi-Squared feature selection to select 1,000 features. You can view the vocabulary of selected words or N-grams by clicking the right output of the Extract N-Gram Features module.
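Chi-Squared selection has a direct counterpart in scikit-learn as well; a minimal sketch, assuming the TF-IDF matrix and labels from the previous sketches (with a small dataset you may need fewer than 1,000 features):

```python
from sklearn.feature_selection import SelectKBest, chi2

# Keep the 1,000 N-grams most correlated with the Flirty / Not Flirty label
selector = SelectKBest(chi2, k=1000)
X_train_selected = selector.fit_transform(X_train_tfidf, y_train)

# Inspect which N-grams survived the selection
selected_ngrams = tfidf_vec.get_feature_names_out()[selector.get_support()]
print(selected_ngrams[:20])
```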

As an alternative to the Extract N-Gram Features module, you can use the Feature Hashing module. Note, though, that Feature Hashing has no built-in feature selection capabilities or TF-IDF weighting.
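The rough scikit-learn analogue of Feature Hashing is HashingVectorizer: it hashes N-grams into a fixed-size space instead of learning a vocabulary, which is why there is nothing to weight or select afterwards.

```python
from sklearn.feature_extraction.text import HashingVectorizer

# Hash unigrams and bigrams into a fixed-size feature space;
# no vocabulary is stored, so nothing can be inspected or reused at scoring time
hash_vec = HashingVectorizer(ngram_range=(1, 2), n_features=2**16, alternate_sign=False)
X_train_hashed = hash_vec.transform(X_train)
```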

Step 3: Train classification or regression model

Now the text has been transformed into numeric feature columns. The dataset still contains string columns from the previous stages, so I use Select Columns in Dataset to exclude them.

Then I used Two-Class Logistic Regression to predict my target: a high or low text message score. At this point, the text analytics problem has been transformed into a regular classification problem, and you can use the tools available in Azure Machine Learning to improve the model. For example, you can experiment with different classifiers to find out how accurate their results are, or use hyperparameter tuning to improve the accuracy.
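As a point of comparison, here is a rough scikit-learn version of this training step, including a tiny grid search over the regularization strength C as an example of the hyperparameter tuning mentioned above (the parameter grid is just an illustration):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Two-class logistic regression on the selected TF-IDF features,
# tuning the regularization strength with 5-fold cross-validation
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    cv=5,
    scoring="accuracy")
grid.fit(X_train_selected, y_train)

clf = grid.best_estimator_
print(grid.best_params_, grid.best_score_)
```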

Step 4: Score and validate the model

How would you validate the trained model? I score it against the test dataset and evaluate the accuracy. However, the model learned the vocabulary of N-grams and their weights from the training dataset, so I should use that vocabulary and those weights when extracting features from the test data, rather than creating the vocabulary anew. Therefore, I add an Extract N-Gram Features module to the scoring branch of the experiment, connect the output vocabulary from the training branch, and set the vocabulary mode to read-only. I also disable the filtering of N-grams by frequency, by setting the minimum to 1 instance and the maximum to 100%, and turn off feature selection.

After the text column in the test data has been transformed into numeric feature columns, I exclude the string columns from the previous stages, just as in the training branch. I then use the Score Model module to make predictions and the Evaluate Model module to evaluate the accuracy.
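In code, “reuse the vocabulary and weights” simply means calling transform (never fit_transform) on the test set. A sketch, assuming the fitted vectorizer, selector, and classifier from the earlier sketches:

```python
from sklearn.metrics import accuracy_score, classification_report

# Transform the test texts with the vocabulary and IDF weights learned on the training set
X_test_tfidf = tfidf_vec.transform(X_test)
X_test_selected = selector.transform(X_test_tfidf)

# Score the model and evaluate its accuracy
y_pred = clf.predict(X_test_selected)
print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```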

Step 5: Deploy the model to production

The model is almost ready to be deployed to production. When deployed as a web service, it takes a free-form text string as input and returns a prediction of “high” or “low.” It uses the learned N-gram vocabulary to transform the text into features and the trained logistic regression model to make a prediction from those features.

To set up the predictive experiment, I first save the N-gram vocabulary as a dataset, along with the trained logistic regression model from the training branch of the experiment. Then I save the experiment using “Save As” to create an experiment graph for the predictive experiment, remove the Split Data module and the training branch, and connect the previously saved N-gram vocabulary and model to the Extract N-Gram Features and Score Model modules, respectively. I also remove the Evaluate Model module.

I insert a Select Columns in Dataset module before the Preprocess Text module to remove the label column, and I unselect the “Append score column to dataset” option in the Score Model module. That way, the web service does not ask for the label it is trying to predict and does not echo the input features in its response.
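Conceptually, the predictive experiment boils down to a single function: clean the incoming message, transform it with the saved vocabulary, and return the model’s prediction. Azure ML Studio wires this up graphically, but an illustrative Python sketch (with hypothetical file names, reusing clean() from the first sketch) would look like this:

```python
import joblib

# Persist the fitted pieces once, analogous to saving the N-gram vocabulary and trained model
joblib.dump(tfidf_vec, "tfidf_vec.joblib")
joblib.dump(selector, "selector.joblib")
joblib.dump(clf, "flirty_clf.joblib")

def predict_flirty(message: str) -> str:
    """Return the predicted class plus the model's confidence for one text message."""
    vec = joblib.load("tfidf_vec.joblib")
    sel = joblib.load("selector.joblib")
    model = joblib.load("flirty_clf.joblib")
    features = sel.transform(vec.transform([clean(message)]))
    label = model.predict(features)[0]
    confidence = model.predict_proba(features).max()
    return f"{label} ({confidence:.0%})"

print(predict_flirty("Good night, beautiful! Sweet dreams :)"))
```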

Testing the deployed model

This is my deployed model! It’s very simple: you write a text message in the text box and click the button to get the prediction, High or Low, along with the scored probability. You can see some examples below:
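If you would rather call the deployed web service programmatically than through the test page, the request looks roughly like the sketch below. The endpoint URL, API key, and input column name all come from your own service’s API help page in Azure ML Studio, so treat the payload here as an approximation rather than the exact schema.

```python
import requests

url = "https://<region>.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/execute?api-version=2.0"
api_key = "<your-api-key>"

payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["text"],                               # the name of your text column
            "Values": [["Good night, beautiful! Sweet dreams :)"]]
        }
    },
    "GlobalParameters": {}
}

response = requests.post(url, json=payload,
                         headers={"Authorization": "Bearer " + api_key})
print(response.json())   # contains the predicted class and the scored probability
```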

Conclusions

  • This was my first experiment using Microsoft Azure Machine Learning Studio, and I learned how to deploy a model and get it ready for production!
  • The accuracy could improve if I work with a larger and better dataset, and that’s what I will do next.
  • It would be awesome to consider other factors when predicting whether a text is flirty or not.