This paper proposes an optimized pedestrian and vehicle detection method based on deep learning techniques. We optimize the convolutional neural network architecture using three main methods. The first is the choice of the learning policy. The second is simplification of the convolutional neural network architecture. The last is the careful choice of training samples. With limited loss of accuracy,...
In this work, we investigate the hardware implementation of Support Vector Machine (SVM) prediction on an FPGA platform for industrial ultrasound applications. Specifically, the SVM is used as a classifier for identifying ultrasonic A-scan signals as signals with or without flaws. Hardware acceleration using an FPGA is the main theme of the presented work. The architecture used to implement the...
Traffic sign recognition is an important step for integrating smart vehicles into existing road transportation systems. In this paper, an NVIDIA Jetson TX1-based traffic sign recognition system is introduced for driver assistance applications. The system incorporates two major operations, traffic sign detection and recognition. Color- and shape-based image detection is used to locate potential signs...
As a popular deep learning technique, the convolutional neural network has been widely used in many tasks such as image classification and object recognition. Convolutional neural networks exploit spatial correlations in images by performing convolution operations over local receptive fields. They are preferred over fully connected neural networks because they have fewer weights...
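The parameter savings described above can be made concrete with a minimal sketch. The convolution below is pure Python, and the layer sizes in the parameter-count comparison are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch: a shared filter slid over local receptive fields,
# versus a dense (fully connected) mapping. Sizes are assumptions.

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation) in pure Python."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Each output value depends only on a local receptive field
            # and reuses the same small set of kernel weights.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[1, 0], [0, -1]]          # one shared 2x2 filter: only 4 weights
print(conv2d(image, kernel))        # -> [[-4, -4], [-4, -4]]

# Weight counts for an assumed 28x28 input producing a 26x26 output:
conv_params = 3 * 3                 # one shared 3x3 filter
fc_params = 28 * 28 * 26 * 26       # dense input-to-output mapping
print(conv_params, fc_params)       # 9 vs 529984
```

The shared filter is what makes the weight count independent of the image size; a fully connected layer's weight count grows with both input and output dimensions.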
Recently we have shown that an architecture based on resistive processing unit (RPU) devices has the potential to achieve significant acceleration in deep neural network (DNN) training compared to today's software-based DNN implementations running on CPUs/GPUs. However, currently available device candidates based on non-volatile memory technologies do not satisfy all the requirements to realize the RPU...
Spiking Neural Networks (SNNs) are the third generation of artificial neural networks that closely mimic the time encoding and information processing aspects of the human brain. It has been postulated that these networks are more efficient for realizing cognitive computing systems compared to second generation networks that are widely used in machine learning algorithms today. In this paper, we review...
Genetic mutations are the first warning of the onset of lung cancer. The ability to predict these mutations early could open the door to targeted treatment options for lung cancer patients. Three top candidate genes have previously been reported to have the highest frequency of lung cancer mutations. Each gene is encoded as a symbolic sequence of four letters. A novel method for gene representation is introduced...
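A gene as a symbolic four-letter sequence can be illustrated with a simple one-hot encoding of the DNA alphabet {A, C, G, T}. The paper introduces its own (different) representation, which the truncated abstract does not specify; the encoding below is only an illustrative assumption:

```python
# Minimal sketch: one-hot encoding of a DNA string over the alphabet ACGT.
# This is a common baseline representation, assumed here for illustration;
# it is not the novel representation the paper proposes.

BASES = "ACGT"

def one_hot(seq):
    """Map a DNA string to a list of 4-element indicator vectors."""
    return [[1 if base == b else 0 for b in BASES] for base in seq]

print(one_hot("ACGT"))
# -> [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

Each position becomes a vector with exactly one 1, so a length-L gene maps to an L x 4 numeric array suitable as classifier input.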
In this paper, we present artificial neural network (ANN) models to predict hard and soft responses of three configurations of arbiter-based physical unclonable functions (PUFs): standard, feed-forward (FF) and modified feed-forward (MFF). The models are trained using data extracted from 32-stage arbiter PUF circuits fabricated using the IBM 32 nm HKMG process. The contributions of this paper are two-fold...
A face recognition system which represents each image as a superposition of the dominant components in two transform domains is proposed. The two domains are the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT). By the end of the Training mode, each pose in the gallery will have two final matrices. The Feature Extraction step in Training includes transforming the preprocessed...
Spiking Neural Networks offer low-precision communication, robustness, and low power consumption, and are attractive for autonomous applications. One of the well-accepted learning rules for these networks is spike-timing-dependent plasticity (STDP), which is governed by the pre- and postsynaptic spike timings. To stabilize the plasticity and avoid saturation in these learning rules, synaptic normalization is...
Command extraction from human beings becomes easier for a machine if it can analyze nonverbal modes of communication such as emotions. This paper focuses on improving the efficiency of extracting emotion from images of human facial expressions. The features extracted in this experiment were obtained from the JAFFE (Japanese Female Facial Expression) database, which includes 213 images of different...
Incremental learning allows incorporating new data into a classifier model without full retraining, for computational efficiency. In this paper, we present two ways of performing incremental learning on Grassmann manifolds. In a Grassmann kernel learning framework, data are embedded on subspaces and kernels are constructed to map data subspaces to a projection space for classification. As new data samples...
Recurrent neural networks with various types of hidden units have been used to solve a diverse range of problems involving sequence data. Two of the most recent proposals, gated recurrent units (GRU) and minimal gated units (MGU), have shown comparable promising results on example public datasets. In this paper, we introduce three model variants of the minimal gated unit which further simplify that...
The paper evaluates three variants of the Gated Recurrent Unit (GRU) in recurrent neural networks (RNNs) by retaining the structure and systematically reducing parameters in the update and reset gates. We evaluate the three variant GRU models on MNIST and IMDB datasets and show that these GRU-RNN variant models perform as well as the original GRU RNN model while reducing the computational expense...
The standard LSTM recurrent neural network, while very powerful in sequence applications with long-range dependencies, has a highly complex structure and a relatively large number of (adaptive) parameters. In this work, we present an empirical comparison between the standard LSTM recurrent neural network architecture and three new parameter-reduced variants obtained by eliminating combinations of the input signal, bias,...
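The parameter savings these gate-reduced variants target can be counted directly. The sketch below compares a standard GRU against one illustrative reduction in which the update and reset gates drop their input-weight matrices and keep only the recurrent matrix and bias; this is one of the kinds of reductions the abstracts above describe, but the exact variants in each paper differ, and the sizes are assumptions:

```python
# Hedged sketch: parameter counts for a standard GRU versus an assumed
# reduced variant whose gates use only the recurrent state and a bias.

def gru_params(n, m):
    """Standard GRU with input size n, hidden size m: the update gate,
    reset gate, and candidate state each use an input matrix (m x n),
    a recurrent matrix (m x m), and a bias vector (m)."""
    per_gate = m * n + m * m + m
    return 3 * per_gate

def reduced_gru_params(n, m):
    """Illustrative variant: the update and reset gates keep only the
    recurrent matrix and bias; the candidate state is unchanged."""
    gate = m * m + m
    candidate = m * n + m * m + m
    return 2 * gate + candidate

n, m = 128, 256  # illustrative input/hidden sizes (assumptions)
print(gru_params(n, m), reduced_gru_params(n, m))  # 295680 230144
```

The savings grow with the input size n, since the dropped matrices are the m x n input projections of the two gates.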