Towards the Application of Machine Learning to Predict Astrocytoma Detection – This paper proposes a deep convolutional neural network (CNN) architecture for astrocytoma classification. The architecture uses an iteratively updated convolutional network to map cytoplasmic features onto a shared local representation, so that cells associated with the same disease type occupy the same region of the feature space. Two CNNs extract these local representations, and a deep neural network (DNN) is then trained on the combined features to distinguish the different disease types. The learned network improves accuracy on the astrocytoma classification task, and results show that a network trained in this manner is able to classify all of the disease types considered.
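To make the described two-CNN-plus-DNN pipeline concrete, the following is a minimal sketch, assuming a PyTorch implementation: two small convolutional branches each pool their features globally, and a fully connected DNN head classifies the concatenated representation. The layer sizes, the 128x128 single-channel input, and the four-class output are illustrative assumptions, not the paper's published configuration.

```python
# Hypothetical sketch of the two-CNN + DNN classifier described above.
# All module names, layer sizes, and the input resolution are
# illustrative assumptions, not the authors' published configuration.
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """One of the two convolutional feature extractors."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.AdaptiveAvgPool2d(1),              # global average pool
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).flatten(1)        # (N, 32)

class AstrocytomaClassifier(nn.Module):
    """Two CNN branches whose pooled features feed a small DNN head."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.branch_a = ConvBranch()
        self.branch_b = ConvBranch()
        self.head = nn.Sequential(
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.head(feats)                   # raw class logits

if __name__ == "__main__":
    model = AstrocytomaClassifier(num_classes=4)
    patch = torch.randn(8, 1, 128, 128)           # dummy batch of patches
    print(model(patch).shape)                     # torch.Size([8, 4])
```

Concatenating the two branches before the DNN head mirrors the abstract's claim that each CNN captures a different group of disease-related features; whether the branches share weights or inputs in the original work is not stated, so here they are kept independent.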
Learning from Humans: Deep Face Recognition for Early Visual History and Motion Recognition – We present a method for learning new faces without relying on hand-crafted, per-user features. The method uses a convolutional neural network (CNN) to extract face features and a multi-task multi-layer CNN (M-CNN) to process them. The network is trained on faces in real-world scenes to retrieve information relevant to each face, and a deeper convolutional branch extracts semantic information that is used to perform semantic segmentation. Experiments show that our method outperforms the CNN-DNN baseline on both tasks.
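A minimal sketch of such a multi-task network, again assuming a PyTorch implementation: a shared convolutional trunk feeds both a face-embedding head and a per-pixel semantic-segmentation head, so the two tasks are trained from one feature extractor. The layer sizes, embedding dimension, and segmentation class count are assumptions for illustration, not the M-CNN's actual design.

```python
# Hypothetical sketch of a multi-task CNN in the spirit of the M-CNN
# described above: a shared trunk feeds a face-feature (embedding) head
# and a semantic-segmentation head. All sizes are illustrative.
import torch
import torch.nn as nn

class MultiTaskFaceCNN(nn.Module):
    def __init__(self, embed_dim: int = 128, seg_classes: int = 2):
        super().__init__()
        self.trunk = nn.Sequential(                 # shared feature extractor
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                        # H -> H/2
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Embedding head: global pooling + linear projection.
        self.embed_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Segmentation head: 1x1 conv + upsampling back to input size.
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, seg_classes, kernel_size=1),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )

    def forward(self, x: torch.Tensor):
        shared = self.trunk(x)
        return self.embed_head(shared), self.seg_head(shared)

if __name__ == "__main__":
    model = MultiTaskFaceCNN()
    imgs = torch.randn(4, 3, 64, 64)                # dummy face crops
    emb, seg = model(imgs)
    print(emb.shape, seg.shape)                     # (4, 128) (4, 2, 64, 64)
```

Sharing the trunk is the standard way to realize "both tasks" from one network; in training, a recognition loss on the embedding and a per-pixel cross-entropy loss on the segmentation map would be summed, though the abstract does not specify the loss weighting.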
Feature Ranking based on Bayesian Inference for General Network Routing
A Survey on Semantic Similarity and Topic Modeling