Disk drives are optimized for sequential access, and they continue
prefetching data into the disk cache even after a read operation is
completed [17]. Chunking for a read IO request is illustrated in
Figure 2. The x-axis shows time, and the
two horizontal time lines depict the activity on the IO bus and the
disk head, respectively. Employing chunking, a large IO request is
divided into smaller chunk transfers issued in succession. The first
read command issued on the IO bus is for the first chunk. Due to the
prefetching mechanism, all chunk transfers following the first one are
serviced from the disk cache rather than the disk media. Thus, the
data transfers on the IO bus (the small dark bars shown on the IO bus
line in the figure) and the data transfer into the disk cache (the
dark shaded bar on the disk-head line in the figure) occur
concurrently. The disk head continuously transfers data after the
first read command, thereby fully utilizing the disk throughput.
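The chunking mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: `read_chunk` stands in for issuing one disk read command, and the chunk size is an assumed value; preemption of a higher-priority request would occur between successive `read_chunk` calls.

```python
CHUNK_SIZE = 64 * 1024  # assumed chunk size in bytes (illustrative)

def chunked_read(read_chunk, offset, length, chunk_size=CHUNK_SIZE):
    """Divide one large read into successive chunk reads.

    `read_chunk(offset, size)` models issuing one disk read command.
    After the first chunk reaches the disk media, the drive's
    prefetching typically services the later chunks from the disk
    cache, so the bus and head transfers overlap as in Figure 2.
    """
    data = bytearray()
    pos = offset
    end = offset + length
    while pos < end:
        size = min(chunk_size, end - pos)
        # A higher-priority request could be serviced between chunks.
        data += read_chunk(pos, size)
        pos += size
    return bytes(data)
```

A scheduler built this way trades a small per-command overhead for bounded waiting time, since a pending request waits at most one chunk transfer rather than the whole IO.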
Figure 3 illustrates the effect of the chunk size on the
disk throughput using a mock disk. The optimal chunk size lies
between a lower and an upper bound. A smaller chunk size reduces the
waiting time for a higher-priority request. Hence, Semi-preemptible IO
uses a chunk size close to, but larger than, the lower bound. For
chunk sizes below the lower bound, the overhead associated with
issuing a disk command for each chunk makes the IO bus the bottleneck.
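The lower bound on the chunk size can be derived under a simple model; this is an assumption for illustration, not the paper's exact derivation. Per chunk, the bus is busy for the chunk transfer plus a fixed per-command overhead, while the disk head needs the chunk time at the media rate; the bus keeps up only when its busy time does not exceed the head's.

```python
def min_chunk_size(disk_rate, bus_rate, cmd_overhead):
    """Smallest chunk size (bytes) for which the IO bus keeps up.

    Model (an assumption): per chunk of size c, the bus spends
    c/bus_rate + cmd_overhead seconds and the disk head spends
    c/disk_rate seconds. Requiring
        c/bus_rate + cmd_overhead <= c/disk_rate
    gives
        c >= cmd_overhead * disk_rate * bus_rate / (bus_rate - disk_rate).
    """
    if bus_rate <= disk_rate:
        raise ValueError("bus must be faster than the disk media rate")
    return cmd_overhead * disk_rate * bus_rate / (bus_rate - disk_rate)

# Illustrative numbers (assumed, not from the paper): 25 MB/s media
# rate, 160 MB/s bus, 0.3 ms per-command overhead.
lower_bound = min_chunk_size(25e6, 160e6, 0.3e-3)  # roughly 8.9 KB here
```

Below this bound the per-command overhead dominates and the bus, not the head, limits throughput, which matches the behavior described for small chunk sizes.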
The point marked in Figure 3 denotes the chunk size beyond which the
performance of the disk cache may be sub-optimal.