Automatic Generation of Interactive Cooking Video with Semantic Annotation
Kyeong-Jin Oh (Inha University, Korea)
Myung-Duk Hong (Inha University, Korea)
Ui-Nyoung Yoon (Inha University, Korea)
Geun-Sik Jo (Inha University, Korea)
Abstract: Videos are among the most frequently used multimedia resources. People want to interact with videos to find a specific part or to obtain relevant information. To support such interactions, conventional videos must be transformed into interactive videos. This paper proposes a system that automatically generates interactive cooking videos. To do so, the proposed system performs semantic video annotation on cooking videos. The semantic video annotation process comprises three parts: synchronization between recipes and the corresponding cooking videos based on a caption-recipe alignment algorithm, information extraction from food recipes using lexico-syntactic patterns, and semantic entity interconnection between recognized entities and Semantic Web entities. A cooking video annotation ontology is modeled to handle the annotation data. To evaluate the proposed system, comparative experiments are performed on the caption-recipe alignment algorithm, and the accuracy of information extraction and semantic entity interconnection is also measured. Experimental results show that the proposed system outperforms the compared algorithms on alignment, and that the information extraction and semantic entity interconnection methods each achieve accuracy above 95%. Consequently, the proposed system generates interactive cooking videos with high accuracy and supports user interaction through an interface that allows users to easily find specific scenes and obtain detailed information about objects they are interested in.
Keywords: caption-recipe alignment, cooking video annotation, entity identification, interactive cooking video, ontology, semantic video annotation
Categories: L.3.0, L.3.2, M.0, M.1, M.7
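The caption-recipe alignment mentioned in the abstract can be illustrated with a minimal sketch: monotonic alignment between a sequence of video captions and a sequence of recipe steps via dynamic programming, using word-overlap (Jaccard) similarity as the matching score. This is an assumption-laden toy illustration, not the authors' algorithm; the `similarity` and `align` functions and their scoring scheme are hypothetical choices for demonstration only.

```python
def similarity(caption, step):
    """Word-overlap (Jaccard) similarity between a caption and a recipe step.

    A simplistic stand-in scoring function; the paper's actual
    alignment algorithm is not reproduced here.
    """
    a, b = set(caption.lower().split()), set(step.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def align(captions, steps):
    """Monotonic caption-to-step alignment via dynamic programming.

    Assigns each caption (in temporal order) to one recipe step so that
    step indices never decrease, maximizing total similarity.
    Returns a list of (caption_index, step_index) pairs.
    """
    n, m = len(captions), len(steps)
    # score[i][j]: best total similarity aligning captions[:i+1]
    # with caption i assigned to step j
    score = [[0.0] * m for _ in range(n)]
    back = [[0] * m for _ in range(n)]
    for j in range(m):
        score[0][j] = similarity(captions[0], steps[j])
    for i in range(1, n):
        for j in range(m):
            # monotonicity: the previous caption used step k <= j
            best_k = max(range(j + 1), key=lambda k: score[i - 1][k])
            score[i][j] = score[i - 1][best_k] + similarity(captions[i], steps[j])
            back[i][j] = best_k
    # trace back the highest-scoring path
    j = max(range(m), key=lambda k: score[n - 1][k])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    path.reverse()
    return list(enumerate(path))
```

For example, captions ["chop the onion", "heat oil in a pan", "fry onion until golden"] align one-to-one with steps ["chop one onion", "heat oil in the pan", "fry the onion"], since each caption overlaps most with its corresponding step and the ordering constraint rules out out-of-order matches.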