Predicting review scores using neural networks — Kaggle University Club Winter’18 Hackathon



In November and December of 2018, our team took part in the Kaggle University Club Winter Hackathon and ended up among the winners, along with teams from Penn State University and Hanyang University. Here is what we did.


Problem Set

Teams were given drug review data from https://www.kaggle.com/jessicali9530/kuc-hackathon-winter-2018, and the task was, essentially, to "do something awesome".

Drug review data, divided into train/test parts

Our approach

The dataset provides several numerical and categorical values for each review. We chose the rating as the target value to predict, and we decided to use only the text data (the patient review itself). The main reason was to prevent a major data leak and to build a genuinely useful model.
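As a minimal sketch of this preparation step, the snippet below loads the raw CSVs and keeps only the review text and the rating. The file paths and the "review"/"rating" column names are our assumptions about the dataset layout; adjust them to your environment.

```python
import pandas as pd

# Assumed file names from the Kaggle dataset; the exact input path
# depends on how the data is mounted in your kernel.
train = pd.read_csv("../input/drugsComTrain_raw.csv")
test = pd.read_csv("../input/drugsComTest_raw.csv")

# Keep only the review text as input and the rating as the target;
# every other numerical/categorical column is dropped to avoid leakage.
X_train, y_train = train["review"], train["rating"].astype(float)
X_test, y_test = test["review"], test["rating"].astype(float)
```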

We used GloVe Common Crawl vectors as embeddings, plus two types of models:

  1. A 2-layer BiLSTM + Attention model, focused on capturing sequence information (a rough sketch of this model appears after the architecture figures below).
  2. A BiGRU + CNN model, used for extracting sentence structure (see https://arxiv.org/pdf/1806.11316v1.pdf).

2-layer BiLSTM + Attention architecture

BiGRU + CNN + 2-layer dense network architecture
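Here is a rough PyTorch sketch of the first model type (a BiLSTM with a simple additive attention head on top). The hyperparameters and the exact attention formulation are illustrative assumptions, not necessarily the ones we used in the kernel.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, embedding_matrix, hidden_size=64):
        super().__init__()
        # Frozen pre-trained GloVe Common Crawl embeddings.
        self.embedding = nn.Embedding.from_pretrained(
            torch.tensor(embedding_matrix, dtype=torch.float), freeze=True)
        self.lstm = nn.LSTM(embedding_matrix.shape[1], hidden_size,
                            num_layers=2, bidirectional=True, batch_first=True)
        # Additive attention: one score per time step.
        self.attn = nn.Linear(2 * hidden_size, 1)
        self.out = nn.Linear(2 * hidden_size, 1)

    def forward(self, x):                     # x: (batch, seq_len) token ids
        h, _ = self.lstm(self.embedding(x))   # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over time
        context = (weights * h).sum(dim=1)    # weighted sum of hidden states
        return self.out(context).squeeze(-1)  # predicted rating
```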

We combined these models into an ensemble of 10 networks of each type. Training used MAE as the criterion, the Adam optimizer, and early stopping for regularization (a condensed sketch of this setup is shown below). To comply with the competition rules, all computation was done in Kaggle Kernels (with GPU support, obviously).
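The following is a condensed sketch of that training setup: L1 (MAE) loss, Adam, and early stopping on validation MAE. The `model`, `train_loader`, and `val_loader` objects are assumed to exist, and the learning rate and patience values are placeholders.

```python
import copy
import torch

criterion = torch.nn.L1Loss()  # L1 loss == MAE
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

best_mae, best_state, patience, bad_epochs = float("inf"), None, 3, 0
for epoch in range(100):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    # Compute validation MAE for the early-stopping check.
    model.eval()
    with torch.no_grad():
        errors = [criterion(model(x), y).item() for x, y in val_loader]
    val_mae = sum(errors) / len(errors)

    if val_mae < best_mae:
        best_mae, bad_epochs = val_mae, 0
        best_state = copy.deepcopy(model.state_dict())
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # no improvement for `patience` epochs: stop training

model.load_state_dict(best_state)  # restore the best checkpoint
```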

The final MAE on the test partition was 1.5932330707737263.

Full details are available in our kernel: https://www.kaggle.com/stasian/predicting-review-scores-using-neural-networks

Yours,

PFUR Confederation Team “5-top-100”