-
On Unifying Information-based and Information-based Suggestive Word Extraction
On Unifying Information-based and Information-based Suggestive Word Extraction – In this paper, we present a new, unified approach to word embedding that enables direct learning of word boundaries within a single unsupervised learning task. The approach realizes unsupervised learning through a series of supervised transformations. First, we propose an ensemble […]
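The unified method itself is not described beyond this excerpt; as a point of reference, here is a minimal skip-gram word-embedding sketch using gensim. The toy corpus and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Minimal skip-gram word-embedding sketch with gensim (not the paper's
# unified method; corpus and hyperparameters are illustrative only).
from gensim.models import Word2Vec

corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]

# sg=1 selects the skip-gram objective; vector_size/window are toy values.
model = Word2Vec(sentences=corpus, vector_size=16, window=2,
                 min_count=1, sg=1, epochs=50, seed=0)

print(model.wv["cat"][:4])            # first few embedding dimensions
print(model.wv.most_similar("cat"))   # nearest neighbours in embedding space
```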
-
Nonlinear Sparse PCA
Nonlinear Sparse PCA – The basic structure of an optimization problem is defined by its objective function, such as the Euclidean distance between two points. It is argued that in many practical situations the Euclidean distance between two points is not as informative as a nonlinear measure of similarity. However, even though the […]
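The paper's nonlinear sparse PCA is not specified in this excerpt; the sketch below only shows the two standard baselines it presumably combines, scikit-learn's (linear) SparsePCA and (non-sparse) KernelPCA, on toy data.

```python
# Baseline sketches only: scikit-learn's linear SparsePCA and dense KernelPCA;
# a nonlinear *sparse* PCA like the paper's is not part of scikit-learn.
import numpy as np
from sklearn.decomposition import SparsePCA, KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))        # toy data: 100 samples, 10 features

# Sparse PCA: components are pushed toward sparsity by an L1 penalty (alpha).
sparse_codes = SparsePCA(n_components=3, alpha=1.0,
                         random_state=0).fit_transform(X)

# Kernel PCA: nonlinearity via an RBF kernel, but components are not sparse.
kernel_codes = KernelPCA(n_components=3, kernel="rbf").fit_transform(X)

print(sparse_codes.shape, kernel_codes.shape)   # (100, 3) (100, 3)
```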
-
Machine Learning for the Classification of High Dimensional Data With Partial Inference
Machine Learning for the Classification of High Dimensional Data With Partial Inference – In this paper, we present a new classification method based on non-Gaussian conditional random fields. The non-Gaussian conditional random field (NG-Field) has many useful properties; for instance, it can be used to predict the true state of a function […]
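The non-Gaussian parameterisation is not given in this excerpt; as background, here is a minimal numpy sketch of the forward (log-partition) recursion that underlies any linear-chain conditional random field, with random illustrative potentials.

```python
# Forward (log-partition) computation for a linear-chain CRF, the standard
# building block behind CRF classifiers. Shapes and scores are illustrative;
# a non-Gaussian variant would change how the potentials are parameterised,
# not this recursion.
import numpy as np
from scipy.special import logsumexp

def crf_log_partition(emissions, transitions):
    """emissions: (T, K) per-position label scores; transitions: (K, K)."""
    alpha = emissions[0]                          # log-potentials at t = 0
    for t in range(1, len(emissions)):
        # alpha[j] = logsumexp_i(alpha[i] + transitions[i, j]) + emissions[t, j]
        alpha = logsumexp(alpha[:, None] + transitions, axis=0) + emissions[t]
    return logsumexp(alpha)                       # log Z, normaliser of p(y|x)

rng = np.random.default_rng(0)
T, K = 5, 3                                       # sequence length, label count
print(crf_log_partition(rng.normal(size=(T, K)), rng.normal(size=(K, K))))
```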
-
Towards machine understanding of human behavior and the nature of reward motivation
Towards machine understanding of human behavior and the nature of reward motivation – In this paper we address the problem of learning a set of rules for a distributed knowledge hierarchy (DKH). Given a distribution of knowledge, agents must ensure that the hierarchy follows the rules of their DKH. We propose a framework to […]
-
Deep-MNIST: Recurrent Neural Network based Support Vector Learning
Deep-MNIST: Recurrent Neural Network based Support Vector Learning – We present Multi-layer Convolutional Neural Networks (ML-CNNs). We generalize standard CNNs and ML-CNNs, each with multiple layers, into two distinct models that differ in several important respects. One concerns the amount of training data: ML-CNNs usually learn the entire network […]
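No architecture details survive the excerpt; as a concrete baseline, a minimal Keras CNN for MNIST looks like the following, with layer counts and sizes chosen purely for illustration.

```python
# Minimal MNIST CNN sketch in Keras; the abstract's ML-CNN architecture is
# not specified, so the layers and sizes here are illustrative guesses.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0      # add channel dim, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128,
          validation_data=(x_test, y_test))
```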
-
Deep Convolutional LSTM for Large-scale Feature Analysis of Time Series
Deep Convolutional LSTM for Large-scale Feature Analysis of Time Series – As humans have become increasingly capable of detecting, managing, and interacting with complex objects, the brain's ability to recognize and handle such objects has opened up new possibilities for learning to perform intelligent actions. Yet there are some limitations […]
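The paper's exact convolutional-LSTM architecture is not given; one common layout that matches the title, a Conv1D stage feeding an LSTM, can be sketched in Keras as follows, with all sizes illustrative.

```python
# Convolutional-LSTM stack for time-series features; this Conv1D -> LSTM
# layout is only one common way to combine convolutional and recurrent
# layers, not necessarily the paper's architecture.
import numpy as np
import tensorflow as tf

T, D = 128, 4                                     # timesteps, input channels
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation="relu", input_shape=(T, D)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),                     # summarise the conv features
    tf.keras.layers.Dense(1),                     # e.g. one-step-ahead forecast
])
model.compile(optimizer="adam", loss="mse")

x = np.random.randn(8, T, D).astype("float32")    # toy batch
print(model(x).shape)                             # (8, 1)
```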
-
Determining Quality from Quality-Quality Interval for User Score Variation
Determining Quality from Quality-Quality Interval for User Score Variation – We present an algorithm for optimizing a multi-agent system that performs well according to a set of metrics characterized by each agent's average metric value. We illustrate this by showing how a new metric, MultiAgent Score, can be […]
-
Practical Geometric Algorithms
Practical Geometric Algorithms – This paper describes a classification approach based on a spectral clustering method combined with a two-objective classification method. The results are obtained with these two methods in the same manner as before, including the use of spectral clustering to identify clusters […]
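The two-objective classification step cannot be reproduced from this excerpt, but the spectral-clustering component it builds on is standard; a minimal scikit-learn sketch on toy data:

```python
# Minimal spectral-clustering sketch with scikit-learn; the paper's
# two-objective classification step is not reproduced here.
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

# A nearest-neighbour affinity handles the non-convex "moons" geometry
# that plain k-means would split incorrectly.
labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            random_state=0).fit_predict(X)
print(labels[:10])
```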
-
Binary Quadratic Programming for Training DNNs with Sparseness-Driven Up Convergence
Binary Quadratic Programming for Training DNNs with Sparseness-Driven Up Convergence – The problem of image classification has received much attention recently. However, it remains very challenging due to the large number of classes being represented, mainly because a small number of classes are only sparsely labeled. In […]
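The coupling to DNN training is not described in the excerpt; the sketch below only illustrates the binary quadratic programming objective itself, minimised by brute force on a tiny hypothetical instance.

```python
# Binary quadratic programming (BQP) objective: minimise x^T Q x + c^T x
# over x in {0, 1}^n. Brute force is only feasible for tiny n; the paper's
# interest is in scaling such problems to DNN training.
import itertools
import numpy as np

def solve_bqp_bruteforce(Q, c):
    n = len(c)
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        val = x @ Q @ x + c @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

rng = np.random.default_rng(0)
n = 8
Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2    # symmetrise Q
c = rng.normal(size=n)
print(solve_bqp_bruteforce(Q, c))
```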
-
Scalable and Expressive Convex Optimization Beyond Stochastic Gradient
Scalable and Expressive Convex Optimization Beyond Stochastic Gradient – We present a new learning algorithm in the context of sparse vector analysis. We construct a matrix of the Euclidean distance norm $\Omega$ and apply a greedy algorithm for computing its maximum precision. As an example of a greedy algorithm, we present a case […]
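The abstract's greedy algorithm over the distance matrix $\Omega$ is not specified; as a generic stand-in, here is a short numpy sketch of orthogonal matching pursuit, a standard greedy method for sparse vector analysis.

```python
# Orthogonal matching pursuit (OMP): a standard greedy method for sparse
# vector recovery, used here as a stand-in for the abstract's unspecified
# greedy algorithm.
import numpy as np

def omp(A, y, k):
    """Greedily pick k columns of A that best explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Greedy step: column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit on the chosen support via least squares, update residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[[3, 17, 58]] = [1.5, -2.0, 0.7]
print(np.nonzero(omp(A, A @ x_true, 3))[0])       # recovers {3, 17, 58}
```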