Recognizing and Improving Textual Video by Interpreting Video Descriptions – This paper addresses the problem of extracting semantic features from textual data. We first present a new semantic segmentation method, Multistructure-Based Semantic Segmentation (MBSSE), which leverages a semantic segmentation model to obtain better semantic features than existing methods. Empirical evaluations on three datasets, including the MS-10 dataset, demonstrate performance improvements over those methods. We further compare MBSSE with a state-of-the-art semantic segmentation method based on Multistructure-based Temporal Segmentation.
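The abstract above reports empirical evaluation of a segmentation method but gives no detail on the metric. A standard choice for comparing semantic segmentation outputs is mean intersection-over-union (mIoU); the sketch below is a minimal illustration of that metric on toy label maps, not the paper's actual evaluation protocol (the `mean_iou` helper and the toy arrays are our own assumptions).

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union, a common semantic-segmentation metric.

    Classes absent from both prediction and target are skipped so they
    do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 label maps with two classes (illustrative only).
pred   = np.array([[0, 0, 1], [1, 1, 0]])
target = np.array([[0, 0, 1], [1, 0, 0]])
miou = mean_iou(pred, target, num_classes=2)  # class IoUs: 3/4 and 2/3
```

Per-class IoUs are averaged with equal weight, so rare classes count as much as frequent ones, which is the usual convention when reporting mIoU.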
In this manuscript we propose a novel approach to image-based semantic prediction that takes large-scale datasets as input and learns semantic information from them. We first learn the semantic information via a deep recurrent neural network, which we update within a learning-theoretic framework. We then apply this network to the semantic prediction task. We show that the learned semantic information and the learned visual features are complementary across a wide variety of tasks, yielding a significant improvement in semantic classification and semantic prediction over previous state-of-the-art visual recognition methods. Our neural network thus provides a simple approach to semantic prediction.
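The abstract describes running a recurrent network over inputs and classifying from the learned representation. As a rough illustration of that pipeline, the following is a minimal Elman-style RNN that consumes a feature sequence and emits class probabilities from its final hidden state; all dimensions, weight initializations, and the `predict` helper are our own assumptions, and the training step ("updated within a learning-theoretic framework") is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8-dim input features, 16 hidden units, 3 classes.
D_IN, D_H, N_CLASSES = 8, 16, 3
Wx = rng.normal(scale=0.1, size=(D_H, D_IN))       # input-to-hidden weights
Wh = rng.normal(scale=0.1, size=(D_H, D_H))        # hidden-to-hidden weights
Wo = rng.normal(scale=0.1, size=(N_CLASSES, D_H))  # hidden-to-output weights

def predict(sequence):
    """Run a plain recurrent network over the sequence and classify
    from the final hidden state via a softmax."""
    h = np.zeros(D_H)
    for x in sequence:
        h = np.tanh(Wx @ x + Wh @ h)     # recurrent state update
    logits = Wo @ h
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    return probs / probs.sum()

seq = rng.normal(size=(5, D_IN))  # toy 5-step input sequence
probs = predict(seq)              # probability over the semantic classes
```

Classifying from the final hidden state is the simplest sequence-to-label design; attention or pooling over all hidden states are common alternatives, but nothing in the abstract pins down which the authors use.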
Boosting for the Development of Robotic Surgery
Falling Fruit Eaters Over Higher-Order Tensor Networks
Frequency-based Feature Selection for Imbalanced Time-Series Data
Sparse and Accurate Image Classification by Exploiting the Optimal Entropy