Authors: Patrick Siegl, Rainer Buchty, Mladen Berekovic
Keywords:
Abstract: A major shift from compute-centric to data-centric computing systems can be perceived, as novel big data workloads like cognitive computing and machine learning strongly enforce embarrassingly parallel and highly efficient processor architectures. With Moore's law having surrendered, innovative architectural concepts as well as technologies are urgently required to enable a path for tackling exascale and beyond -- even though current architectures face the inevitable instruction-level parallelism, power, memory, and bandwidth walls. As part of any computing system, the general perception of memories depicts unreliability, power hungriness, and slowness, resulting in a prospective future bottleneck. With the latter being an outcome of the pin limitation derived from packaging constraints, a tremendous unexploited row bandwidth is determinable, which off-chip diminishes to a bare minimum. Building upon the shift towards data-centric systems, the near-memory processing concept seems most promising, since efficiency and performance increase by co-locating tasks on bandwidth-rich in-memory processing units, whereas data motion is mitigated by the avoidance of entire memory hierarchies. By considering the umbrella of near-data processing an urgently required breakthrough, this survey presents its derivations with special emphasis on Processing-In-Memory (PIM), highlighting historical achievements in technology and architecture while depicting its advantages and obstacles.