Authors: David Boland, George A. Constantinides
Keywords:
Abstract: In embedded computing, typically some form of silicon area or power budget restricts the potential performance achievable. For algorithms with limited dynamic range, custom hardware accelerators manage to extract significant additional performance for such a budget by mapping the operations in the algorithm to fixed-point arithmetic. However, for complex applications requiring floating-point computation, the improvement over software is reduced. Nonetheless, custom hardware can still customize the precision of floating-point operators, unlike software, which is restricted to the IEEE-standard single and double precision, to increase the overall performance at the cost of increasing the error observed in the final computational result. Unfortunately, because it is difficult to determine whether this error is tolerable, this task is rarely performed. We present a new analytical technique to calculate bounds on the range or relative error of output variables, enabling hardware to be made tolerant to floating-point errors by design. In contrast to existing tools that perform this task, our approach scales to larger examples and obtains tighter bounds, within a smaller execution time. Furthermore, it allows the user to trade the quality of the bounds against the execution time of the procedure, making it suitable for both small and large-scale algorithms.
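To make the quantities in the abstract concrete, the following is a minimal sketch (not the authors' technique, which is more sophisticated and yields tighter bounds) of how one might propagate a value range and a worst-case relative-error bound through a few operations under the standard floating-point model fl(a op b) = (a op b)(1 + d), |d| <= 2**-p, for a custom significand width p. The class `Var`, the functions `add`/`mul`/`unit_roundoff`, and the example expression are all hypothetical illustrations.

```python
# Sketch only: naive interval range analysis plus relative-error propagation
# under the standard floating-point rounding model. Not the paper's method.

class Var:
    def __init__(self, lo, hi, rel_err=0.0):
        self.lo, self.hi = lo, hi      # bounds on the exact (infinitely precise) value
        self.rel_err = rel_err         # bound on relative error of the computed value

def unit_roundoff(p):
    # Relative error of one round-to-nearest at p significand bits.
    return 2.0 ** -p

def add(x, y, p):
    u = unit_roundoff(p)
    lo, hi = x.lo + y.lo, x.hi + y.hi
    # Inherited absolute error of the inputs, converted back to a relative error.
    # Cancellation can amplify it, so a finite bound needs the sum bounded away from zero.
    max_abs_in = max(abs(x.lo), abs(x.hi)) * x.rel_err + max(abs(y.lo), abs(y.hi)) * y.rel_err
    if lo * hi > 0:
        inherited = max_abs_in / min(abs(lo), abs(hi))
    else:
        inherited = float('inf')       # sum may cancel to zero: no finite relative bound
    return Var(lo, hi, (1 + inherited) * (1 + u) - 1)

def mul(x, y, p):
    u = unit_roundoff(p)
    corners = [x.lo * y.lo, x.lo * y.hi, x.hi * y.lo, x.hi * y.hi]
    # Relative errors compose multiplicatively for a product, plus one rounding.
    rel = (1 + x.rel_err) * (1 + y.rel_err) * (1 + u) - 1
    return Var(min(corners), max(corners), rel)

# Example: bound z = (a + b) * c for a in [1, 2], b in [3, 4], c in [0.5, 1.5],
# comparing a custom 10-bit significand against IEEE single precision (24 bits).
for p in (10, 24):
    a, b, c = Var(1, 2), Var(3, 4), Var(0.5, 1.5)
    z = mul(add(a, b, p), c, p)
    print(f"p={p}: range=[{z.lo}, {z.hi}], relative error <= {z.rel_err:.3e}")
```

Run as a script, this prints the same value range for both precisions but a relative-error bound roughly 2**14 times larger at p=10, which is exactly the trade-off the abstract describes: reducing operator precision improves area and performance but inflates the error bound on the final result, and the designer needs a guarantee that the inflated bound is still tolerable.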