AVS 65th International Symposium & Exhibition
Manufacturing Science and Technology Group | Tuesday Sessions
Session MS+MI+RM-TuM |
Session: IoT Session: Challenges of Neuromorphic Computing and Memristor Manufacturing (8:00-10:00 am) / Federal Funding Opportunities (11:40 am-12:20 pm)
Presenter: Hsinyu Tsai, IBM Almaden Research Center
Authors: H. Tsai, S. Ambrogio, P. Narayanan, R.M. Shelby, G.W. Burr (IBM Almaden Research Center)
In this presentation, we will focus on hardware acceleration of large fully connected (FC) DNNs using phase change memory (PCM) devices [1]. PCM device conductance can be modulated between the fully amorphous (low-conductance) state and the fully crystalline (high-conductance) state by applying voltage pulses that gradually increase the crystalline volume. This characteristic is crucial for memory-based AI hardware acceleration because synaptic weights can then be encoded in an analog fashion and updated gradually during training [2,3]. Vector-matrix multiplication can then be performed by applying voltage pulses at one end of a memory crossbar array and accumulating charge at the other end. By designing the analog memory unit cell with one pair of PCM devices encoding the more significant portion of each weight and another pair of memory devices encoding the less significant portion, we achieved classification accuracies equivalent to a full software implementation on the MNIST handwritten digit recognition dataset [4]. The improved accuracy results from a larger dynamic range, more accurate closed-loop tuning of the more significant weights, and better linearity and variation mitigation in the less significant weight updates. We will discuss what this new design implies for analog memory device requirements and how it generalizes to other deep learning problems.
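The weight scheme and crossbar readout described above can be sketched numerically. In this illustrative model (names and the significance factor `F` are assumptions, not values from the paper), each signed weight is formed from a more significant conductance pair scaled by `F` plus a less significant pair, and the crossbar vector-matrix multiply reduces to a matrix product over those conductances:

```python
import numpy as np

# Illustrative sketch: each synaptic weight is encoded in four analog
# conductances -- a "more significant" pair (G+, G-) scaled by a gain
# factor F, plus a "less significant" pair (g+, g-). The names and the
# value of F are assumptions for this example, not from the paper.

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
F = 3.0  # assumed significance factor between the two conductance pairs

# Conductances in arbitrary normalized units [0, 1]
G_plus  = rng.uniform(0, 1, (n_in, n_out))
G_minus = rng.uniform(0, 1, (n_in, n_out))
g_plus  = rng.uniform(0, 1, (n_in, n_out))
g_minus = rng.uniform(0, 1, (n_in, n_out))

# Effective signed weight reconstructed from the four conductances
W = F * (G_plus - G_minus) + (g_plus - g_minus)

# Analog vector-matrix multiply: read voltages drive the rows and charge
# accumulates on the columns, so Ohm's and Kirchhoff's laws give y = x @ W.
x = rng.uniform(-1, 1, n_in)  # input activations as read voltages
y = F * (x @ G_plus - x @ G_minus) + (x @ g_plus - x @ g_minus)

assert np.allclose(y, x @ W)  # column-wise charge sums match the matrix product
```

The point of the pair-based encoding is that positive and negative weight contributions come from separate devices, while the `F`-scaled pair extends dynamic range and the unscaled pair absorbs small, frequent updates during training.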
1. G. W. Burr et al., “Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses), using phase-change memory as the synaptic weight element,” IEDM Tech. Digest, 29.5 (2014).
2. S. Sidler et al., “Large-scale neural networks implemented with non-volatile memory as the synaptic weight element: impact of conductance response,” ESSDERC Proc., 440 (2016).
3. T. Gokmen et al., “Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations,” Frontiers in Neuroscience, 10 (2016).
4. S. Ambrogio et al., “Equivalent-Accuracy Accelerated Neural Network Training using Analog Memory,” Nature, to appear (2018).