Authors: Amogh Agrawal, Indranil Chakraborty, Deboleena Roy, Utkarsh Saxena, Saima Sharmin
DOI: 10.1109/TVLSI.2020.2991679
Keywords:
Abstract: In this era of nanoscale technologies, the inherent characteristics of some nonvolatile devices, such as resistive random access memory (ReRAM), phase-change material (PCM), and spintronics, can emulate stochastic functionalities. Traditionally, these devices have been engineered to suppress stochastic switching behavior, since it poses reliability concerns for storage and logic applications. However, leveraging stochasticity in computing has led to a renewed interest in hardware–software codesign of algorithms, since CMOS-based implementations involve cumbersome circuitry to generate "stochastic bits." In this article, we consider two classes of problems: deep neural networks (DNNs) and combinatorial optimization. The rapidly growing demands of artificial intelligence (AI) have sparked interest in the energy-efficient implementation of large DNNs, with binary representations of synaptic weights and neuronal activities. Stochasticity plays an important role in the benefits of such binary representations, leading to model compression and optimization during training. For combinatorial optimization, such as graph coloring or traveling salesman problems, algorithms based on the Ising computing model have been shown to be effective. These problems otherwise require exhaustive computational procedures; stochasticity is used as a natural annealing agent to achieve near-optimal solutions in a reasonable timescale, without getting stuck in "local minima." We present a broad review of stochastic computing primitives based on these emerging technologies. We show how such primitives enable efficient hardware for both local learning and inference. Directly mapping the algorithms to device characteristics removes the need for storing stochastic bits in separate memory, leading to efficient use of hardware.