Learning Stereo Visual Odometry using Restricted Boltzmann Machines – Recurrent neural network (RNN) models are becoming an important part of modern neural inference and computation. The ability of RNNs to capture long-term dependencies is crucial for learning the features and structures needed to perform well in environments with large amounts of data. In this paper, we demonstrate that deep RNN models achieve state-of-the-art performance on the visual odometry problem, which is challenging due to its complexity. In particular, we demonstrate the ability of deep RNNs and related models to extract the feature representations that are critical for modeling short-term dependencies in large environments. Furthermore, we propose a simple RNN model that learns both short-term and long-term dependencies. We show that the proposed model successfully learns the features and structure of a large-scale environment from visual odometry data.
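The abstract's core idea, an RNN that carries temporal state across frames to regress motion from visual odometry data, can be sketched minimally. Everything below is an illustrative assumption, not the paper's method: the feature size, hidden size, the Elman-style recurrence, and the random weights (a real system would learn them from stereo imagery).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: per-frame stereo feature size,
# hidden state size, and a 6-DoF pose increment per frame.
FEAT, HIDDEN, POSE = 64, 32, 6

# Randomly initialised weights stand in for learned parameters.
W_in = rng.standard_normal((HIDDEN, FEAT)) * 0.1
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_out = rng.standard_normal((POSE, HIDDEN)) * 0.1

def rnn_vo(features):
    """Run an Elman-style RNN over a sequence of per-frame stereo
    features, emitting one 6-DoF pose increment per frame."""
    h = np.zeros(HIDDEN)
    poses = []
    for x in features:
        # The tanh recurrence carries information across frames;
        # this hidden state is what lets the network model
        # short- and long-term temporal dependencies.
        h = np.tanh(W_in @ x + W_h @ h)
        poses.append(W_out @ h)
    return np.array(poses)

seq = rng.standard_normal((10, FEAT))   # 10 frames of stereo features
deltas = rnn_vo(seq)
print(deltas.shape)  # (10, 6)
```

The design point the abstract hinges on is the recurrence `W_h @ h`: without it, each frame's pose estimate would be independent, and no temporal dependency could be learned.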

We explore statistical learning in the context of Bayesian networks. Specifically, we use a latent space to model the structure (in terms of features) of data sets by performing Bayesian inference in that latent space. We show that a simple model such as a Bayesian network can learn far more informative representations of data than a generic random process built from a priori knowledge, and our experiments on synthetic data show that even a priori and probabilistic knowledge can be learned by the latent model. Finally, we show that learning Bayesian network representations from data sets is challenging, since each hidden variable is not independent of its neighbors, and the latent space must therefore be adapted to learn useful information. This is especially true in environments with high noise and computational overhead.
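"Bayesian inference in the latent space" can be illustrated in its simplest form: a single hidden variable with a prior, an observation model, and an exact posterior via Bayes' rule. This is a toy sketch, not the abstract's model; the two-state latent variable, the Gaussian likelihood, and all numbers are hypothetical.

```python
import numpy as np

# Hypothetical two-state latent variable Z with a prior, and a
# Gaussian observation model P(x | Z); all numbers are illustrative.
prior = np.array([0.7, 0.3])   # P(Z = 0), P(Z = 1)
means = np.array([0.0, 4.0])   # observation mean under each latent state

def posterior(x, sigma=1.0):
    """Exact Bayesian inference over the latent state given one
    observation: posterior = likelihood * prior, renormalised."""
    lik = np.exp(-0.5 * ((x - means) / sigma) ** 2)
    unnorm = lik * prior
    return unnorm / unnorm.sum()

p = posterior(3.5)
print(p)  # the observation pulls most posterior mass onto Z = 1
```

In a full Bayesian network the posterior over many dependent hidden variables is no longer computable by one normalisation like this, which is one way to read the abstract's claim that hidden variables' dependence on their neighbors makes learning the latent representation hard.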

Composite and Multiplicative Models of Interaction between People and Places

Improving MT Transcription by reducing the need for prior knowledge

Sparse and Hierarchical Bipartite Clustering

Fast and Accurate Stochastic Variational Inference