First step toward AI-innovated HR system — CVChain: Named entity recognition

Original article was published by Techainer on Artificial Intelligence on Medium


Named entity recognition (NER) is well known as a fundamental task in natural language processing and is applied in many real-world applications. Our system employs this technique in the second step of CVChain.
You can read about the overview of CVChain and about the first step of our system, layout analysis, in our previous articles.

This part introduces the most important component of the system, which extracts information from raw text. After the layout analysis step produces groups of text that are hypothesized to contain key information, we use an NLP model to process them. Our model analyzes the input context to find important words and phrases. Our target is to accurately and completely find all information about personal details, education, experience, skills, awards, etc., which will later be used to assess the resume.

Using sequence tagging to extract information from structured text

Our model is a sequence tagging model from the Flair framework. We conducted many experiments with other popular models such as BERT and spaCy, and the different results led us to the following conclusions:

  • spaCy is fast, but its accuracy is not good enough.
  • BERT/ALBERT produce good predictions, but the model is very hard to control because the pre-trained model cannot be edited.
  • Flair is lighter than BERT (and therefore faster), and we can control its predictions by configuring the Flair embeddings (the language model).
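Whichever framework is chosen, a sequence tagger emits one BIO tag per token, and entity spans must be decoded from that tag sequence before they can be used downstream. A minimal sketch of this decoding step follows; the tag names (`B-SKILL`, `B-NAME`, etc.) are illustrative, not our actual label set:

```python
def decode_bio(tokens, tags):
    """Collapse per-token BIO tags into (label, phrase) entity spans."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A new entity begins; close any entity still open.
            if current:
                entities.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            # Same label continues the open entity.
            current[1].append(token)
        else:
            # "O" or an inconsistent tag closes the open entity.
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(label, " ".join(words)) for label, words in entities]
```

For example, `decode_bio(["Proficient", "in", "Python"], ["O", "O", "B-SKILL"])` yields `[("SKILL", "Python")]`.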

We will not go too deep into the technical details of Flair (or BERT) in this article (I will dig deeper in other articles). Note that although Flair is faster than BERT, its time performance is still low because it uses two different LSTMs (one in the embedding layer and the other in the sequence tagging model). To increase prediction efficiency, we convert the input into batches with a fixed small length by dividing overly long sentences into shorter ones. Borrowing from BERT, we use an overlapping window to make sure each divided sentence retains enough context.
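The overlapping-window split described above can be sketched as follows; the window and overlap sizes here are illustrative placeholders, not our production values:

```python
def split_with_overlap(tokens, window=64, overlap=16):
    """Split a long token list into fixed-size windows that share
    `overlap` tokens, so entities near a window boundary still see
    context from the neighbouring chunk."""
    if window <= overlap:
        raise ValueError("window must exceed overlap")
    step = window - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break  # last window already covers the tail
    return chunks
```

The resulting chunks can be fed to the tagger as one batch; predictions in the overlapped region are then reconciled (e.g. by keeping the prediction from the window where the token sits farther from the boundary).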

After this step, the extracted information consists of words or phrases, but it still does not carry as much information as we expect. So in the final step, entity grouping, we do some higher-level processing to make the predictions more useful.