Original article was published on Deep Learning on Medium
Facebook Inverse Cooking Algorithm
Predicting a full recipe from an image more accurately than the average human
This recipe generation algorithm was developed by Facebook AI Research. It predicts the ingredients, the cooking instructions, and a title for a recipe directly from an image (Figure 2).
In the past, recipe prediction was framed as a retrieval problem: the input image was matched against a static dataset of recipes by similarity in a learned embedding space. This approach is highly dependent on the quality of the learned embedding and on the size and variability of the dataset, so it fails whenever the input image has no close match in the dataset.
Instead of retrieving a recipe directly from an image, the inverse cooking algorithm proposes a pipeline with an intermediate step in which the set of ingredients is predicted first. The cooking instructions are then generated conditioned not only on the image but also on the predicted ingredients (Figure 1).
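The two-stage pipeline can be sketched in a few lines of Python. This is a minimal illustrative stand-in, not the Facebook AI Research implementation: the function names, the toy ingredient vocabulary, and the trivial "models" are all hypothetical placeholders (the paper uses a ResNet image encoder, a set-prediction head for ingredients, and a transformer decoder for instructions).

```python
# Illustrative sketch of the inverse cooking pipeline.
# All names and toy "models" here are hypothetical stand-ins.

def encode_image(image):
    # Stand-in for a CNN image encoder (a ResNet in the paper).
    # Here an "image" is just a list of integers.
    return [pixel % 97 for pixel in image]

def predict_ingredients(image_features):
    # Stage 1: predict a SET of ingredients from the image features
    # (the paper treats this as set prediction over a vocabulary).
    vocab = ["flour", "egg", "sugar", "butter", "tomato", "basil"]
    return {vocab[f % len(vocab)] for f in image_features}

def generate_instructions(image_features, ingredients):
    # Stage 2: generate instructions conditioned on BOTH the image
    # features and the predicted ingredients (a transformer decoder
    # in the paper); here, one trivial step per ingredient.
    return [f"Prepare the {ing}." for ing in sorted(ingredients)]

def inverse_cooking(image):
    feats = encode_image(image)
    ingredients = predict_ingredients(feats)
    instructions = generate_instructions(feats, ingredients)
    return ingredients, instructions
```

The key design point the sketch preserves is that `generate_instructions` receives the ingredient set as an explicit input, rather than the instructions being produced from the image alone.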
One of the major achievements of this method is that it predicts ingredients from an image more accurately than both a baseline recipe retrieval system and the average human.
The inverse cooking algorithm was included in a food recommendation web application, developed and published here. Based on the predicted ingredients, the application offers the user several suggestions, such as different ingredient combinations (Figure 1).
References
[1] A. Salvador, M. Drozdzal, X. Giro-i-Nieto and A. Romero, "Inverse Cooking: Recipe Generation from Food Images," Computer Vision and Pattern Recognition, 2018.
[2] A. Salvador, N. Hynes, Y. Aytar, J. Marin, F. Ofli, I. Weber and A. Torralba, "Learning Cross-Modal Embeddings for Cooking Recipes and Food Images," Computer Vision and Pattern Recognition, 2017.