Authors: Kyle Thomas, Muhammad Santriaji, David Mohaisen, Yan Solihin
DOI:
Keywords:
Abstract: Neural networks (NNs) are increasingly deployed to solve complex classification problems and produce accurate results on reliable systems. However, their accuracy degrades quickly in the presence of bit flips caused by memory errors or targeted attacks on dynamic random-access memory (DRAM) used as main memory. Prior work has shown that even a few bit errors significantly reduce NN accuracy, but it remains unclear which bits have an outsized impact on network accuracy and why. This article first investigates how the number representation used for NN parameters determines the impact of bit flips on NN accuracy. We then present a bit flip detection framework consisting of four software-based error detectors that detect bit flips independently of NN topology. We discuss our key findings and evaluate each detector's efficacy, characteristics, and tradeoffs.
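To make the link between number representation and bit flip impact concrete, the following minimal Python sketch (an illustration, not code from the article; the helper `flip_bit` is hypothetical) shows that the position of a flipped bit in an IEEE-754 float32 weight determines whether the value is barely perturbed or grows by many orders of magnitude.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with bit index `bit` (0 = least significant) flipped
    in its 32-bit IEEE-754 representation."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

weight = 0.05  # a typical small NN weight (illustrative value)
# Flipping the most significant exponent bit (bit 30) turns the weight
# into an enormous value, which can dominate downstream activations.
print(flip_bit(weight, 30))
# Flipping the lowest mantissa bit (bit 0) leaves the weight nearly unchanged.
print(flip_bit(weight, 0))
```

This kind of asymmetry is why, under a floating-point representation, flips in high-order exponent bits tend to be far more damaging to accuracy than flips in low-order mantissa bits.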