Authors: Erhard Rahm, Donald Ferguson
DOI: 10.1016/0306-4379(93)90017-U
Keywords: Cache, Asynchronous communication, Cache algorithms, CPU cache, Job scheduler, Parallel computing, Locality of reference, Scheduling (computing), Computer science, Sequential access
Abstract: This paper presents a new set of cache management algorithms for shared data objects that are accessed sequentially. I/O delay on sequentially accessed data is a dominant performance factor in many application domains, in particular batch processing. Our algorithms fall into three classes: replacement, prefetching, and scheduling strategies. The replacement algorithms empirically estimate the rate at which jobs are proceeding through the data. These velocity estimates are used to project the next reference times of cached objects; our algorithms replace the object with the longest time until re-use. The second type of algorithm performs asynchronous prefetching. It uses the velocity estimates to predict future cache misses and attempts to preload data to avoid these misses. Finally, we present a simple job scheduling strategy that increases locality of reference between jobs. The algorithms are evaluated in a detailed simulation study. The experiments show that they substantially improve performance compared to traditional cache management.
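The replacement idea in the abstract can be sketched as follows. This is our own illustrative reconstruction, not code from the paper: we assume each job scans a file forward at an estimated velocity (blocks per unit time), project when each cached block will next be referenced, and evict the block whose projected re-use lies farthest in the future. All names and the single-file simplification are assumptions.

```python
def projected_next_reference(block, jobs, now=0.0):
    """Earliest projected time at which any job reaches `block`.

    `jobs` is a list of (position, velocity) pairs for jobs scanning
    the file in increasing block order; velocity is in blocks per
    unit time. Returns infinity if no job will reach the block.
    """
    times = [now + (block - pos) / vel
             for pos, vel in jobs
             if pos <= block and vel > 0]
    return min(times) if times else float("inf")


def choose_victim(cached_blocks, jobs, now=0.0):
    """Evict the cached block with the longest projected time
    until re-use (the velocity-based replacement criterion)."""
    return max(cached_blocks,
               key=lambda b: projected_next_reference(b, jobs, now))


# Example: two jobs at blocks 10 and 50, moving at 2 and 1
# blocks per unit time. Block 100 is re-used last, so it is evicted.
jobs = [(10, 2.0), (50, 1.0)]
victim = choose_victim([12, 20, 55, 100], jobs)  # -> 100
```

The prefetching strategy described in the abstract would use the same velocity projections in the opposite direction: predict which uncached blocks a job will reach soonest and preload them asynchronously before the miss occurs.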