Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images
Javier Marín, Aritro Biswas, Ferda Ofli, Nicholas Hynes, Amaia Salvador, Yusuf Aytar, Ingmar Weber, Antonio Torralba

Abstract
In this paper, we introduce Recipe1M+, a new large-scale, structured corpus of over one million cooking recipes and 13 million food images. As the largest publicly available collection of recipe data, Recipe1M+ affords the ability to train high-capacity models on aligned, multimodal data. Using these data, we train a neural network to learn a joint embedding of recipes and images that yields impressive results on an image-recipe retrieval task. Moreover, we demonstrate that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic. We postulate that these embeddings will provide a basis for further exploration of the Recipe1M+ dataset and food and cooking in general. Code, data and models are publicly available.
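To make the training setup concrete, below is a minimal sketch of a joint embedding with classification-based semantic regularization. This is not the authors' exact architecture: the paper encodes recipes with recurrent networks over ingredients and instructions and images with a CNN, whereas here both encoders are reduced to linear projections over precomputed features, and the feature dimensions, class count, and loss weight `alpha` are placeholder assumptions. The loss shown also uses only matching pairs; a real training run would additionally sample non-matching (image, recipe) pairs with target -1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    """Projects image and recipe features into a shared space and attaches
    a classifier to that space for semantic regularization (sketch only)."""

    def __init__(self, img_dim=2048, rec_dim=1024, emb_dim=1024, num_classes=1000):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)  # e.g., over pooled CNN features
        self.rec_proj = nn.Linear(rec_dim, emb_dim)  # e.g., over an RNN recipe encoding
        self.classifier = nn.Linear(emb_dim, num_classes)  # shared across modalities

    def forward(self, img_feat, rec_feat):
        # L2-normalize so dot products equal cosine similarities.
        v = F.normalize(self.img_proj(img_feat), dim=-1)
        r = F.normalize(self.rec_proj(rec_feat), dim=-1)
        return v, r

def joint_loss(model, v, r, labels, alpha=0.02):
    # Alignment term: pull each matching (image, recipe) pair together.
    align = F.cosine_embedding_loss(v, r, torch.ones(v.size(0), device=v.device))
    # Regularization term: both embeddings should predict the same semantic class.
    reg = F.cross_entropy(model.classifier(v), labels) + \
          F.cross_entropy(model.classifier(r), labels)
    return align + alpha * reg

# Toy usage with random features standing in for real encoder outputs.
model = JointEmbedding()
v, r = model(torch.randn(8, 2048), torch.randn(8, 1024))
loss = joint_loss(model, v, r, torch.randint(0, 1000, (8,)))
loss.backward()
```

Sharing one classifier across both projected modalities forces images and recipes of the same high-level class toward the same region of the embedding space, which is what makes the semantic vector arithmetic mentioned above possible.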
Benchmarks
| Benchmark | Methodology | Image-to-text R@1 | Text-to-image R@1 |
|---|---|---|---|
| cross-modal-retrieval-on-recipe1m-1 | Marin et al. | 17 | 21 |
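The R@1 scores above are recall-at-1: the fraction of queries whose ground-truth match ranks first among the candidates. For reference, here is a minimal sketch of how recall@K is typically computed from L2-normalized embeddings; it assumes row i of each matrix corresponds to the same image-recipe pair, and the function name is illustrative rather than from the released code.

```python
import torch

def recall_at_k(img_emb: torch.Tensor, rec_emb: torch.Tensor, k: int = 1) -> float:
    """Fraction of image queries whose matching recipe ranks in the top k.
    Assumes both matrices are L2-normalized so dot products are cosine
    similarities, and that row i of each matrix forms a ground-truth pair."""
    sims = img_emb @ rec_emb.t()                    # (N, N) similarity matrix
    topk = sims.topk(k, dim=1).indices              # top-k recipe indices per image query
    targets = torch.arange(sims.size(0)).unsqueeze(1)
    return (topk == targets).any(dim=1).float().mean().item()
```

Swapping the argument order gives the text-to-image direction of the same metric.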