Compact online learning architectures could enhance Internet of Things (IoT) devices by allowing them to learn directly from incoming data instead of shipping that data to a remote server for training. This saves communication energy and improves privacy and security, since the data is never shared. Such architectures can also be used in high performance computing and in traditional computing systems to learn approximations of the functions being performed based on runtime activity. This paper presents Socrates-D, a digital multicore on-chip learning architecture for deep neural networks. Each neural core contains internal memories that store its synaptic weights, and the architecture can process a variety of deep learning applications. The system-level area and power benefits of the specialized architecture are compared against an NVIDIA GeForce GTX 980 Ti GPGPU. Our experimental evaluations show that the proposed architecture provides significant area and energy efficiencies over GPGPUs for both training and inference.
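The on-device learning idea described above can be illustrated with a minimal sketch: a single "neural core" that keeps its synaptic weights in local memory and updates them in place from streaming samples, so no raw data ever leaves the device. The `NeuralCore` class, its method names, and the perceptron-style update are illustrative assumptions for exposition only, not the Socrates-D hardware interface or its actual learning rule.

```python
import random

class NeuralCore:
    """Toy stand-in for one neural core with local weight memory."""

    def __init__(self, n_inputs, lr=0.1):
        self.weights = [0.0] * n_inputs   # synaptic weights stored on-core
        self.bias = 0.0
        self.lr = lr

    def infer(self, x):
        # forward pass using only locally stored weights
        s = self.bias + sum(w * xi for w, xi in zip(self.weights, x))
        return 1.0 if s > 0 else 0.0

    def learn(self, x, target):
        # perceptron-style update performed in place on the device;
        # the sample x is consumed locally, never transmitted
        err = target - self.infer(x)
        self.weights = [w + self.lr * err * xi
                        for w, xi in zip(self.weights, x)]
        self.bias += self.lr * err

random.seed(0)
core = NeuralCore(2)
# stream of samples arriving at the device: learn the OR function
for _ in range(200):
    x = [random.randint(0, 1), random.randint(0, 1)]
    core.learn(x, 1.0 if (x[0] or x[1]) else 0.0)
```

The same loop structure applies to deeper networks: training and inference both read and write the core-local weight memories, which is what removes the communication (and its energy cost) that a remote-server or GPGPU setup would require.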