Original article was published by Apoorv Yadav on Deep Learning on Medium
However, we might run into an issue: it is possible for a user to be interested in other types of products, like ancient history, which our system would then fail to recommend.
2. Collaborative Filtering Systems: These are widely popular and usually better than content-based systems. Their principal idea is that people who are similar tend to have similar interests.
3. Hybrid Systems: These combine both principles to build a better recommendation engine, generally to avoid the issues each approach generates on its own.
What is a Candidate?
Initially, our system might have thousands of items that could act as recommendations, but we can't serve them all. Imagine YouTube suggesting 500 videos in your recommendations. What we need is more than just a list of items: we need to evaluate them, score them, and then take in some additional information before producing the final rankings. Only after all of this can we fairly say that we tried our best to give you a recommendation. It all starts with the initial list of potential recommendations, aka Candidates.
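The candidate-then-rank idea above can be sketched as a tiny two-stage pipeline. Everything here is illustrative: the catalog, the topic-matching filter, and the popularity-based score are made-up stand-ins for real candidate generators and ranking models.

```python
# Minimal sketch of a two-stage recommender:
# (1) generate a broad candidate list, (2) score and keep the top-k.

def generate_candidates(all_items, user_interests):
    """Stage 1: cheap filter -- keep items matching any user interest."""
    return [item for item in all_items if item["topic"] in user_interests]

def score(item, user_interests):
    """Stage 2: toy score -- here just popularity; a real system would
    use a learned model with many more signals."""
    return item["popularity"]

def recommend(all_items, user_interests, k=3):
    candidates = generate_candidates(all_items, user_interests)
    ranked = sorted(candidates, key=lambda it: score(it, user_interests), reverse=True)
    return [it["title"] for it in ranked[:k]]

# Hypothetical catalog for illustration.
catalog = [
    {"title": "Video A", "topic": "cooking", "popularity": 0.9},
    {"title": "Video B", "topic": "history", "popularity": 0.7},
    {"title": "Video C", "topic": "cooking", "popularity": 0.4},
    {"title": "Video D", "topic": "sports",  "popularity": 0.8},
]

print(recommend(catalog, {"cooking", "history"}, k=2))  # ['Video A', 'Video B']
```

The point of the two stages is cost: the cheap filter shrinks thousands of items down to a handful, so the expensive scoring step only runs on candidates.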
How do we store the Information required to make a recommendation?
For any problem in data science, data preparation is the foundation. Since a recommendation system requires lots of data on both users and items, as well as the interactions between them, we need to know how to structure that data for our use.
These entities are the ones your system recommends. They vary according to the service you provide. For example,
For a Video Streaming service, Items would be Movies/T.V. Series.
For an Audio Player, Items would be Songs/Artists/Albums.
For an E-Com Website, Items would be Products.
For Social Media, Items would be Blogs.
I hope you have a clear picture in your mind for items.
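One simple way to structure such item data is as records with a common identifier plus service-specific fields. The field names and values below are assumptions for illustration, not a fixed schema.

```python
# Hypothetical item records for a few of the services above.
items = [
    {"item_id": "m1", "type": "movie",   "title": "Some Movie",  "genres": ["drama"]},
    {"item_id": "s1", "type": "song",    "title": "Some Song",   "artist": "Some Artist"},
    {"item_id": "p1", "type": "product", "title": "Some Gadget", "category": "electronics"},
]

# Indexing by type makes per-service lookups straightforward.
by_type = {}
for item in items:
    by_type.setdefault(item["type"], []).append(item)

print(sorted(by_type))  # ['movie', 'product', 'song']
```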
— also known as Context. This refers to the information used to make a recommendation. In general, the more data we have, the better our predictions can be. So what kind of knowledge do we need? Typically, we need details about the user and the user-item interactions.
Additional information like time of day, location, etc. might help. For example, some people use a map service frequently during their office commute, so the service can automatically suggest directions. The same users might not search on weekends, and thus our map service can avoid giving such irrelevant suggestions then.
User-item interactions help recommender systems a lot. However, capturing them requires careful thought about which interactions to record and on which items, and then defining a structure for them so they can be fully utilized. Some examples are rating an item, writing a review, or searching for an item.
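A common way to give such interactions structure is an event log: one record per interaction. The event names and fields below are assumptions for illustration; the aggregation shows one simple signal (average rating per item) a recommender could derive from the log.

```python
# Sketch of capturing user-item interactions as structured events.
from collections import defaultdict

interactions = [
    {"user": "u1", "item": "i9", "event": "rating", "value": 4},
    {"user": "u1", "item": "i3", "event": "search", "value": None},
    {"user": "u2", "item": "i9", "event": "review", "value": "great"},
    {"user": "u2", "item": "i9", "event": "rating", "value": 5},
]

# Aggregate explicit ratings per item.
ratings = defaultdict(list)
for ev in interactions:
    if ev["event"] == "rating":
        ratings[ev["item"]].append(ev["value"])

avg_rating = {item: sum(vals) / len(vals) for item, vals in ratings.items()}
print(avg_rating)  # {'i9': 4.5}
```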
Have you ever heard this term before? If you have, it was probably in the context of word embeddings.
Let’s just revise word embeddings first and come back to this one later. When dealing with textual data, we can represent words using a sparse representation: a vector of vocabulary size with all zeros except for a one at the index of the word. This vector is huge and does not carry much meaningful information.
The other option is representing them with a lower-dimensional dense vector (like 300 or 512 dimensions), where each element is a floating-point number. This gives a meaningful representation: similar words have a smaller distance between them.
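The contrast between the two representations can be made concrete with a toy vocabulary. The words and dense values below are made up for illustration; real embeddings are learned and have hundreds of dimensions.

```python
# Sparse one-hot vectors vs. small dense embeddings.
vocab = ["king", "queen", "apple"]

def one_hot(word):
    """Sparse: all zeros except a 1 at the word's index."""
    vec = [0.0] * len(vocab)
    vec[vocab.index(word)] = 1.0
    return vec

# Dense vectors (3-dimensional here only for readability).
dense = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.12],
    "apple": [0.10, 0.05, 0.90],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# One-hot vectors of different words are always orthogonal...
print(dot(one_hot("king"), one_hot("queen")))  # 0.0
# ...while dense vectors let related words score closer together.
print(dot(dense["king"], dense["queen"]) > dot(dense["king"], dense["apple"]))  # True
```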
Coming back to our system, we can learn embeddings for users and items. This requires training a neural network to obtain the embedding matrices, and with neural networks comes a great requirement: data!
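To see the idea behind learning user and item embedding matrices, here is a minimal sketch using plain matrix factorization trained by gradient descent; the neural-network route the article points to builds on the same idea with more layers. The ratings, sizes, and learning rate are all made up for illustration.

```python
# Learn user/item embeddings so that their dot product predicts ratings.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 5, 3

# Observed interactions: (user, item, rating) triples -- toy data.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 3, 2.0), (3, 4, 5.0)]

U = rng.normal(scale=0.1, size=(n_users, dim))  # user embedding matrix
V = rng.normal(scale=0.1, size=(n_items, dim))  # item embedding matrix

lr = 0.05
for epoch in range(300):
    for u, i, r in ratings:
        err = U[u] @ V[i] - r        # prediction error for this observation
        grad_u = err * V[i]          # gradient of squared error w.r.t. U[u]
        grad_v = err * U[u]          # gradient of squared error w.r.t. V[i]
        U[u] -= lr * grad_u
        V[i] -= lr * grad_v

# After training, dot products approximate the observed ratings.
print(float(U[0] @ V[0]))
```

Once trained, the rows of `U` and `V` are exactly the kind of embeddings the text describes: users and items living in the same low-dimensional space, so a dot product (or distance) between them serves as a relevance score even for unseen user-item pairs.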