AI to Detect Speaker in a Speech


Using AI to detect the speaker in a speech from the voice data

Picture by icons8 on Unsplash

With the advancement of AI, one can come up with many interesting and helpful AI applications. These applications can be useful in Health, Retail, Finance, and various other domains. The main idea is to keep thinking about how we can utilize these advanced technologies and come up with interesting use cases.

Through this blog post, I intend to cover an AI application where one can detect the speaker from their voice. I will also explain the process by which I created the dataset. The code and dataset are made available here. There are a few blog posts around this topic, but this one is different in two ways. First, it provides a clear guide on how to detect the speaker effectively using some best practices, without falling into common pitfalls. Second, at the end I cover some really interesting use cases/applications that can be extended from this work. So, let’s get started.

Creating the Dataset

I created a dataset consisting of the voices of 5 celebrities/popular figures from India.

Dataset created from 5 celebrities/popular figures from India. Image source: Wikipedia

I took many speeches/interviews of these celebrities from YouTube and converted them into MP3 files.

Further, I converted these MP3 files into spectrograms using the popular Librosa Python library, generating one spectrogram for every 90-second interval of each clip.

import math
import librosa
import librosa.display
import matplotlib.pyplot as plt

def generate_spectrogram(file, path, jump=90):
    total_time = librosa.get_duration(filename=file)
    till = math.ceil(total_time / jump)
    for i in range(till):
        # Plot the log-frequency spectrogram of the next `jump`-second window
        x, sr = librosa.load(file, offset=i * jump, duration=jump)
        X = librosa.stft(x)
        Xdb = librosa.amplitude_to_db(abs(X))
        librosa.display.specshow(Xdb, sr=sr, x_axis='time', y_axis='log', cmap='gray_r')
        file_save = f"{path}/{i}.png"  # assumed naming: one image per window
        plt.savefig(file_save, dpi=1200)
        plt.close()
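
For example, with hypothetical file paths, a downloaded clip could be processed like this:

# Hypothetical paths for illustration
generate_spectrogram('speeches/clip1.mp3', 'spectrograms/speaker_a')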

These spectrograms look like:

Spectrogram from Voice

There’s a really good read on Librosa and Music Genre Classification.

Once we have converted the audio clips to images, we can train a supervised Convolutional Neural Network (CNN) model.

Some Challenges

Developing such an application came with its own challenges:

  1. Our dataset contains human voices that sound fairly similar to one another. Distinguishing a gunshot from a dog barking is not very difficult, as these are very different sounds; differentiating one person’s voice from another’s is a much harder problem.
  2. We created the dataset from YouTube speeches/interviews of these celebrities, so the clips often contain background noise, another person/interviewer speaking in between, or a crowd applauding.
  3. The dataset has at most 6–7 clips per person, which hampers accuracy. A richer dataset would give better accuracy and more confidence in detecting the person correctly.

Best Practices to Train such Models

While training this application, some things didn’t work well for me, and some things worked like a charm, boosting the model’s performance. In this section, I will call out the best practices for training such models without falling into the pitfalls.

  1. Train the model on black-and-white spectrograms rather than colored ones. This increased model accuracy for me; one reason could be that the less complex data made things easier for the model to learn. This can be done via the cmap argument of specshow: librosa.display.specshow(Xdb, sr=sr, x_axis='time', y_axis='log', cmap='gray_r')
  2. Remove all kinds of labels and ticks from the spectrogram plots; the unwanted text just confuses the model (see the sketch after this list).
  3. Training a ResNet or VGG architecture (or any other architecture of choice) from scratch, rather than training only the last layers, improves performance. One reason could be that ImageNet data is quite different from spectrogram plots, so letting every layer train individually boosts the model’s performance.
  4. Test the model’s performance on completely different clips/videos of these people/celebrities. Spectrograms coming from the same clip, even from different time frames, are likely to share the same noise, background, and recording device, and we want the model to perform well on genuinely new recordings.
  5. The model’s accuracy can be increased by enriching the training data with different clips of each person speaking. This enables the model to generalize.
  6. Provide the model with high-resolution spectrograms to train on. This can be done by raising the dpi argument when saving the plot:
plt.savefig(file_save, dpi=1200)
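
Putting points 1, 2, and 6 together, a minimal sketch of the plotting step could look like this (reusing Xdb, sr, and file_save from the earlier function; the axis-stripping calls shown are one reasonable approach, not the only one):

# Grayscale spectrogram with no ticks, labels, or margins, saved at high dpi
fig, ax = plt.subplots()
librosa.display.specshow(Xdb, sr=sr, cmap='gray_r', ax=ax)
ax.set_axis_off()  # drop the ticks and labels that confuse the model
plt.subplots_adjust(left=0, right=1, top=1, bottom=0)  # drop margins
plt.savefig(file_save, dpi=1200, bbox_inches='tight', pad_inches=0)
plt.close(fig)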

Model Training and Accuracy

I trained the model using the FastAI library, with a ResNet architecture for the CNN. The dataset created and the code are made available here.
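
As a rough sketch, the training setup with fastai’s high-level (v2) API could look like the following. The folder layout, the resnet34 choice, and the epoch count are illustrative assumptions rather than the exact configuration:

from fastai.vision.all import *

# Assumed layout: spectrograms/<celebrity_name>/<image>.png
dls = ImageDataLoaders.from_folder(
    Path('spectrograms'), valid_pct=0.2, item_tfms=Resize(224))

# pretrained=False trains every layer from scratch, since ImageNet
# features transfer poorly to spectrograms (best practice 3 above)
learn = cnn_learner(dls, resnet34, metrics=accuracy, pretrained=False)
learn.fit_one_cycle(10)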

The model achieved an accuracy of about 80–85% on completely unseen test data (from different clips) when trained on this limited training set. The performance can be improved further by enriching the training dataset.

Other Interesting Possible Use Cases

A properly trained application like this could be used to:

  1. Automatically tag the speaker in a video/audio clip.
  2. Check how well one can mimic a celebrity by comparing the model’s score for that celebrity (see the sketch after this list).
  3. Create an application that guesses the singer of a random song and compare how well the AI detects it. It’s like playing against the AI.
  4. Help crime investigations by detecting, with high confidence, the speaker in tapped phone conversations.
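
As a sketch of use case 2, and assuming the trained learn object from the sketch above, the model’s class probability can serve as a mimicry score (mimic.png is a hypothetical spectrogram of the impression):

# Hypothetical input: a spectrogram of someone mimicking a celebrity
pred_class, pred_idx, probs = learn.predict(PILImage.create('mimic.png'))
print(f"Predicted {pred_class} with probability {probs[pred_idx]:.2f}")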

Conclusion

Through this blog post, I covered an AI application that detects the speaker from their voice. I also emphasized the best practices for training such models and the interesting use cases that can grow out of this work. The dataset created and the code are made available here.

If you have any doubts or queries, do reach out to me. I would also be interested to know if you have an interesting AI application/use case in mind to work on.

About the author:

Abhishek Mungoli is a seasoned Data Scientist with experience in the ML field, a Computer Science background spanning various domains, and a problem-solving mindset. He has excelled at various Machine Learning and Optimization problems specific to Retail, and is enthusiastic about implementing Machine Learning models at scale and about knowledge sharing via blogs, talks, meetups, papers, etc.

My motive is always to simplify the toughest of things to their most simplified version. I love problem-solving, data science, product development, and scaling solutions. I love exploring new places and working out in my leisure time. Follow me on Medium, LinkedIn, or Instagram and check out my previous posts. I welcome feedback and constructive criticism.