Deep CNN-LSTM Networks – We explore the use of a non-convex loss for solving the minimization problem in the presence of non-convex constraints. We develop a variant of this loss, the non-convex CNN-LSTM loss, whose objective is to minimize a non-convex function together with its non-convex bound, i.e. the non-linearity, in a data-dependent way. We analyze the problem on graph-structured data and derive generalization bounds for the non-convex loss. The results are promising and suggest a more efficient algorithm that improves the error of the minimizer by learning the optimality of the LSTM from data.
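The abstract does not specify the network architecture beyond the name, so the following is a minimal sketch of a generic CNN-LSTM in PyTorch: a small per-frame convolutional encoder feeding an LSTM whose final hidden state drives a linear classifier. The layer sizes, the frame-wise encoder, and the classification head are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM: per-step CNN features aggregated by an LSTM."""
    def __init__(self, num_classes: int = 10, hidden_size: int = 128):
        super().__init__()
        # CNN encoder applied independently to each time step (e.g. each video frame).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch*time, 32, 1, 1)
        )
        # LSTM aggregates the per-step CNN features over time.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, c, h, w)).reshape(b, t, 32)
        _, (h_n, _) = self.lstm(feats)   # h_n: (num_layers, batch, hidden_size)
        return self.head(h_n[-1])        # logits from the final hidden state

# Usage: a batch of 4 sequences, each with 8 RGB frames of size 32x32.
logits = CNNLSTM()(torch.randn(4, 8, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```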
Convergent inference algorithms are widely used to achieve state-of-the-art performance, particularly on large-scale, real-world problems. In this paper, we propose a simple optimization algorithm for such problems that reduces the amount of computation inference algorithms must perform and enables machine learning to cope with large-scale applications in which much of the real-world data is missing. The framework is described in detail and is illustrated experimentally with a simulated robot example, evaluating a simulated learning task on a real-world data set.
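The abstract does not describe the proposed algorithm concretely, so the sketch below shows only one common way to handle the missing-data aspect it mentions: masking unobserved entries out of the loss so that a training step touches observed values only. The placeholder linear model, the mask convention (1 = observed), and the masked squared-error loss are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 16)                  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(32, 16)                    # inputs
y = torch.randn(32, 16)                    # targets, some entries missing
mask = (torch.rand(32, 16) > 0.3).float()  # 1 where the target is observed

pred = model(x)
# Average the squared error over observed entries only.
loss = ((pred - y) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(float(loss))
```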
End-to-End Learning of Interactive Video Game Scripts with Deep Recurrent Neural Networks
On the Performance of Deep Convolutional Neural Networks in Semi-Supervised Learning of Point Clouds
Pseudo Generative Adversarial Networks: Learning Conditional Gradient and Less-Predictive Parameter
Convergence of Levenberg-Marquardt Pruning for Random Boolean Computation