Non-steady relaxation and critical exponents at the depinning transition. Supplemental Material: GPU-based numerical implementation

Authors: E. E. Ferrero, S. Bustingorry, A. B. Kolton

Abstract: The acronym GPGPU stands for "General Purpose Graphics Processing Unit". GPGPU computing is a common name for the practice of using GPUs (Graphics Processing Units) as hardware accelerators for general-purpose programs, beyond their original role of rendering graphics. The use of the GPU as an accelerator collaborating with the CPU is just one example of heterogeneous computing. Heterogeneous architectures emerged to mitigate the technical barriers encountered in the development of ever-faster processors [1]. For more than a decade, the gain in GFlops of modern computers has come more from their ability to process applications in parallel and from other hardware benefits (such as cached memory throughput) than from increases in processor clock frequency. As a consequence, software in general has been pushed toward parallel implementations. Improvements in compilers alleviate the lack of parallel implementations in some cases, but in many others applications have to be reformulated to fit the new architectures. In this sense, scientific-computing software is no exception, and we should rethink our usual codes and numerical simulation techniques. Regardless of manufacturer, generation, or model, all GPUs share the same Single Instruction Multiple Thread (SIMT) concurrency paradigm in order to exploit their high parallelism and high memory bandwidth [2]. Basically, programming within the SIMT paradigm consists of coding for what is, for all practical purposes, an unlimited number of parallel threads [3] (typically one thread per component of the system). A remarkable point is that, from the programmer …
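The "one thread per component" idea mentioned above can be illustrated with a minimal CUDA sketch. This is not the authors' implementation: the kernel name update_sites, the arrays u and force, and the toy local update rule are hypothetical. Each GPU thread computes a global index and independently updates a single component of a one-dimensional system.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical SIMT sketch: one thread per component of a 1D system.
__global__ void update_sites(float *u, const float *force, int L)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < L)                                      // guard: launched threads may exceed L
        u[i] += force[i];                           // independent update of component i
}

int main()
{
    const int L = 1 << 20;                          // number of components (system size)
    float *u, *force;
    cudaMallocManaged(&u, L * sizeof(float));       // unified memory, visible to CPU and GPU
    cudaMallocManaged(&force, L * sizeof(float));
    for (int i = 0; i < L; ++i) { u[i] = 0.0f; force[i] = 1.0f; }

    const int threadsPerBlock = 256;
    const int blocks = (L + threadsPerBlock - 1) / threadsPerBlock;  // enough blocks to cover all L sites
    update_sites<<<blocks, threadsPerBlock>>>(u, force, L);
    cudaDeviceSynchronize();                        // wait for the kernel to finish

    printf("u[0] = %f\n", u[0]);
    cudaFree(u);
    cudaFree(force);
    return 0;
}

Launching blocks * threadsPerBlock threads, each one responsible for exactly one site, is the basic SIMT pattern the abstract refers to: the hardware schedules as many threads as requested, and the programmer reasons as if the thread count were unlimited.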
