Original article was published on Artificial Intelligence on Medium
AI to Detect Speaker in a Speech
Using AI to detect the speaker in a speech from the voice data
With the advancement of AI, one can come up with many interesting and helpful AI applications. These applications can be useful in Health, Retail, Finance, and various other domains. The main idea is to keep thinking about how we can utilize these advanced technologies and come up with interesting use cases.
Through this blog post, I intend to cover an AI application that detects a speaker from their voice. I will also explain the process by which I created the dataset. The code and dataset are made available here. There are a few blog posts around this topic, but this one is different in two ways: first, it provides a clear guide on how to detect the speaker effectively using some best practices while avoiding common pitfalls; second, at the end, I cover some really interesting use cases/applications that can be extended from this work. So, let's get started.
Creating the Dataset
I created a dataset consisting of speeches from 5 celebrities/popular figures from India. Each audio clip is split into 90-second chunks, and each chunk is converted into a spectrogram image using librosa:
import math
import librosa
import librosa.display
import matplotlib.pyplot as plt

def generate_spectrogram(file, path, jump=90):
    # Split the audio into `jump`-second chunks and save one spectrogram per chunk
    total_time = librosa.get_duration(filename=file)
    till = math.ceil(total_time / jump)
    for i in range(till):
        x, sr = librosa.load(file, offset=i * jump, duration=jump)
        X = librosa.stft(x)
        Xdb = librosa.amplitude_to_db(abs(X))
        librosa.display.specshow(Xdb, sr=sr, x_axis='time', y_axis='log', cmap='gray_r')
        plt.savefig('%s/%d.png' % (path, i))  # `path` assumed to be an output folder
        plt.close()
These spectrograms look like:
There’s a really good read on Librosa and Music Genre Classification.
Once we have converted the audio clips to images, we can train a supervised Convolutional Neural Network (CNN) model.
Developing such an application came with its own challenges:
- Our dataset contains voices of fairly similar people. Distinguishing a gunshot from a dog barking is not very difficult, as these are very different sounds. In our case, differentiating one person's voice from another's is a much harder problem.
- We created the dataset from YouTube speeches/interviews of these celebrities, so there is often background noise, another person/interviewer speaking in between, or the crowd applauding.
- The dataset has at most 6–7 clips per person, which hampers accuracy. A richer dataset would give better accuracy and confidence in detecting the person accurately.
Best Practices to Train such Models
While training this application, some things didn't work well for me, and some things worked like a charm, boosting the model's performance. In this section, I will call out the best practices for training such models without falling into the pitfalls.
- Train the model on black-and-white spectrograms rather than colored ones. This increased model accuracy for me; one reason could be that the less complex data made things easier for the model to learn. This can be done by changing the 'cmap' property of the library:
librosa.display.specshow(Xdb, sr=sr, x_axis='time', y_axis='log', cmap='gray_r')
- Remove all kinds of labels and ticks from the spectrogram plots; the unwanted text just confuses the model.
- Training a ResNet or VGG architecture (or any other architecture of choice) from scratch, rather than training only the last layers, improves performance. One reason could be that ImageNet data is quite different from spectrogram plots, so giving each layer the leeway to train individually boosts the model's performance.
- Test the model's performance on completely different clips/videos of these people/celebrities. Spectrograms coming from the same clip, even from different time frames, are likely to contain the same type of noise, background, or recording device, and we want the model to perform well in general.
- The model's accuracy can be increased by enriching the training data with different clips of each person speaking. This enables the model to generalize.
- Provide the model with high-quality spectrograms to train on. This can be done by increasing the dpi property while saving the plot.
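The evaluation point above implies that chunks of the same clip should never be split across train and test. A minimal sketch of such a clip-level split (the 'speaker/clipID_chunkIndex.png' file-naming convention is my assumption, not something from the post):

```python
import random
from collections import defaultdict

def split_by_clip(files, test_fraction=0.3, seed=42):
    """Split spectrogram files so all chunks of one source clip stay together.

    `files` are names like 'speaker/clipID_chunkIndex.png' (assumed convention);
    everything before the last '_' identifies the source clip.
    """
    clips = defaultdict(list)
    for f in files:
        clip_id = f.rsplit('_', 1)[0]
        clips[clip_id].append(f)

    # Shuffle whole clips, then hold out a fraction of them for testing.
    clip_ids = sorted(clips)
    random.Random(seed).shuffle(clip_ids)
    n_test = max(1, int(len(clip_ids) * test_fraction))
    test_clips = set(clip_ids[:n_test])

    train = [f for c, fs in clips.items() if c not in test_clips for f in fs]
    test = [f for c, fs in clips.items() if c in test_clips for f in fs]
    return train, test
```

Because whole clips are held out, no spectrogram in the test set shares a recording (and therefore its noise or device signature) with the training set.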
Model Training and Accuracy
The model gave an accuracy of about 80–85% on the completely unseen test data (from different clips) when trained on a limited training set. The model performance can be improved by enriching the training dataset.
Other Interesting Possible Use Cases
Such a properly trained application can be used to:
- Automatically tag the speaker in a video/audio.
- Check how well one can mimic a celebrity by comparing the score the model gives for that celebrity.
- Create an application that guesses the singer of a random song and see how well the AI detects them. It's like playing against the AI.
- Detect, with high confidence, the person/speaker in tapped phone conversations during a crime investigation.
Through this blog post, I covered an AI application that detects a speaker from their voice. I also emphasized the best practices for training such models and other interesting use cases possible out of it. The dataset and code are made available here.
If you have any doubts or queries, do reach out to me. I would also be interested to know if you have an interesting AI application/use case in mind to work on.
About the Author:
Abhishek Mungoli is a seasoned Data Scientist with experience in the ML field, a Computer Science background spanning various domains, and a problem-solving mindset. He has excelled in various Machine Learning and Optimization problems specific to Retail, and is enthusiastic about implementing Machine Learning models at scale and sharing knowledge via blogs, talks, meetups, papers, etc.
My motive is always to simplify the toughest of things to their most simplified version. I love problem-solving, data science, product development, and scaling solutions. I love exploring new places and working out in my leisure time. Follow me on Medium, Linkedin, or Instagram and check out my previous posts. I welcome feedback and constructive criticism. Some of my blogs –