Multi-modal Multi-domain Attention for Automatic Quality Assessment of Health Products – This paper presents a novel learning-based framework for identifying the causal structure (i.e., the influence of factors such as social, cultural, and technical ones) underlying an individual's performance. We propose an algorithm that recovers causal relations from data captured across multiple domains, pairing a product from one domain with products from other domains. Experiments on a public dataset of US adults show that our framework outperforms state-of-the-art methods on a variety of benchmarks.
This paper presents a novel method for learning to recognize human actions in a 3D environment using convolutional neural networks (CNNs). Our approach is a multi-level CNN in which the first stage receives a low-level representation of the user object model. The network is first trained with two layers; an end-to-end CNN built on the first layer then learns the next layer without access to the user object model. This end-to-end CNN learns a model of the user: its feature representation is computed from the low-level representation, and the network is trained to predict the next layer. The end-to-end CNN is further adapted to represent the user model through the low-level representation of the user object model. Experimental evaluation on the MNIST dataset demonstrates that the proposed approach significantly outperforms state-of-the-art approaches.
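The staged design sketched in this abstract (a first low-level convolutional stage, with a second stage trained on top of the first stage's features alone) can be illustrated with a minimal forward-pass sketch. This is a pure-NumPy illustration under assumed shapes and kernel sizes; the channel counts, 3×3 kernels, and 28×28 MNIST-sized input are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def conv2d(x, kernels):
    # Naive valid cross-correlation: x (H, W), kernels (K, kh, kw) -> (K, H-kh+1, W-kw+1)
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def conv2d_multi(x, kernels):
    # Multi-channel conv: x (C, H, W), kernels (K, C, kh, kw) -> (K, H-kh+1, W-kw+1)
    K, C, kh, kw = kernels.shape
    out = np.zeros((K, x.shape[1] - kh + 1, x.shape[2] - kw + 1))
    for k in range(K):
        for c in range(C):
            out[k] += conv2d(x[c], kernels[k, c:c + 1])[0]
    return out

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))                       # MNIST-sized input (assumed)
low_level_kernels = rng.standard_normal((8, 3, 3)) * 0.1    # first, low-level stage
second_kernels = rng.standard_normal((16, 8, 3, 3)) * 0.1   # second stage over 8 channels

# Stage 1: compute the low-level feature representation.
feat = relu(conv2d(image, low_level_kernels))               # shape (8, 26, 26)

# Stage 2: the next layer sees only the low-level features,
# not the original input (the "without the user object model" step).
feat2 = relu(conv2d_multi(feat, second_kernels))            # shape (16, 24, 24)

# Global-average-pool and project to 10 MNIST classes (weights are random here;
# a real system would train all stages, e.g. with backprop).
logits = feat2.mean(axis=(1, 2)) @ rng.standard_normal((16, 10))
```

The sketch only shows the forward pass and the data flow between stages; training each stage end-to-end, as the abstract describes, would require a loss and gradient updates on top of this structure.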
Learning to Play the Game of Guess Who? by Training CNNs with Chess
Recurrent Neural Networks with Word-Partitioned LSTM for Action Recognition