Author: Zbigniew Hajduk
DOI: 10.1016/J.NEUCOM.2018.04.077
Keywords:
Abstract: This brief paper presents two implementations of feed-forward artificial neural networks in FPGAs. They differ in FPGA resource requirements and calculation speed. Both use floating-point arithmetic, employ a very high-accuracy realization of the activation function, and allow easy alteration of the network's structure without re-implementing the entire project.