Robotic chefs are a promising technology that could bring sizeable health and economic benefits if deployed ubiquitously. Such deployment is hindered by the costly process of programming robots to cook specific dishes, whereas humans learn new dishes from observation or from freely available videos. In this paper, we propose an algorithm that incrementally adds recipes to a robot's cookbook based on visual observation of a human chef, enabling easier and cheaper deployment of robotic chefs. A new recipe is added only if the current observation differs substantially from every recipe already in the cookbook, which is decided by computing the similarity between their vector representations. Using off-the-shelf neural networks for computer vision, the algorithm correctly recognizes known recipes in 93% of the demonstrations and successfully learns new recipes when they are shown. These results suggest that videos and demonstrations are viable data sources for programming robotic chefs, and that the approach could be extended to massive publicly available sources such as YouTube.
INDEX TERMS Computer vision, hidden Markov model, learning by demonstration, robotic chef, salad chef.
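The recipe-admission rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of cosine similarity, and the threshold value are all assumptions made for the example.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two recipe feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def maybe_add_recipe(cookbook, observation, threshold=0.8):
    # Add the observed demonstration as a new recipe only if its
    # similarity to every stored recipe vector falls below the
    # threshold; otherwise treat it as a known recipe.
    # The threshold of 0.8 is illustrative, not from the paper.
    if all(cosine(observation, recipe) < threshold for recipe in cookbook):
        cookbook.append(observation)
        return True   # new recipe learned
    return False      # recognized as a known recipe
```

In practice the vectors would come from the paper's vision pipeline (off-the-shelf neural networks applied to the demonstration video); here they are plain Python lists for clarity.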