Recurrent Topic Models for Sequential Segmentation – This thesis addresses how to improve the performance of neural network models that predict future events from observations of past events. We study the supervised learning setting in which past events are observed in a given data set and future events must be predicted over a given time frame. We propose an efficient method for both training and prediction in this setting, and show that the supervised learning algorithm learns to predict future events from a simple model of the observed actions. We also present a simple linear baseline for predicting potential future events. Both approaches are evaluated on several data sets used to train the neural network model.
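The abstract does not specify the form of the linear baseline; as a minimal sketch only, the code below assumes a toy linear autoregressive predictor that maps a fixed window of past observations to the next value, trained with stochastic gradient descent. All function names and the toy data are hypothetical, not the thesis's actual model.

```python
# Hedged sketch of a linear "predict the next event" baseline (assumed,
# not the thesis's exact method): fit weights w so that the dot product
# of w with a window of past values approximates the next value.

def make_windows(series, k):
    """Split a sequence into (past-window, next-value) training pairs."""
    return [(series[i:i + k], series[i + k]) for i in range(len(series) - k)]

def train_linear_predictor(series, k=3, lr=0.5, epochs=2000):
    """Fit w by stochastic gradient descent on squared prediction error."""
    pairs = make_windows(series, k)
    w = [0.0] * k
    for _ in range(epochs):
        for x, y in pairs:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            # gradient of 0.5 * err**2 with respect to w is err * x
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def predict_next(w, window):
    """Predict the next value from the most recent window."""
    return sum(wi * xi for wi, xi in zip(w, window))
```

On an arithmetic sequence the relation x_{t+1} = 2·x_t − x_{t−1} is exactly linear in the window, so the learned weights extrapolate the next step correctly.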

Deep learning models are a promising and efficient approach to statistical inference. This work investigates the performance of an efficient non-parametric predictor learning method on a broad class of sparse estimation problems. We show that the sample complexity of sparse prediction is significantly larger than that of Bayesian estimation of the same function on the same data set, yet exponentially smaller than that of generic non-parametric inference schemes, whose parameter count grows exponentially with the number of examples. We introduce a new non-parametric predictor learning method that is robust to the size of the predictor, and show how it learns to predict the number of examples for a given class directly from the data. Empirical results demonstrate that the method achieves state-of-the-art performance when all parameters of the predictor are sparse.
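The abstract does not name the estimator it uses; as one concrete, hedged illustration of learning a linear predictor whose parameters are sparse, the sketch below uses L1-regularized least squares solved by ISTA (proximal gradient descent with soft-thresholding). The function names and toy data are hypothetical and stand in for the paper's unspecified method.

```python
# Hedged illustration (not the paper's method): sparse linear prediction
# via L1 regularization. The soft-threshold proximal step drives most
# coordinates of the weight vector exactly to zero.

def soft_threshold(x, t):
    """Proximal operator of t * |x|: shrink toward zero, clip to zero."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista(X, y, lam=0.05, lr=0.1, iters=3000):
    """Minimize (1/2n) * ||Xw - y||^2 + lam * ||w||_1 by proximal gradient."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        # gradient of the smooth least-squares term
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(d):
                grad[j] += err * xi[j] / n
        # gradient step, then the L1 proximal (soft-threshold) step
        w = [soft_threshold(wj - lr * gj, lr * lam) for wj, gj in zip(w, grad)]
    return w
```

With targets that depend on only one feature, the recovered weight vector is sparse: the irrelevant coordinates are exactly zero, while the relevant one is close to its true value (shrunk slightly by the L1 penalty).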

Bayesian Networks and Hybrid Bayesian Models

Learning to Know the Rules of Learning

# Recurrent Topic Models for Sequential Segmentation

DeepFace: Learning to see people in real-time

Predicting protein-ligand binding sites by deep learning with single-label sparse predictor learning