To estimate the response time for higher-priority IO requests, we conducted experiments wherein higher-priority requests were inserted into the IO queue at a constant rate. While a constant arrival rate may seem unrealistic, the main purpose of this set of experiments is only to ``estimate'' the benefits and overheads associated with preempting an ongoing Semi-preemptible IO request to service a higher-priority request.
Table 2 presents the response time for a higher-priority request when using Semi-preemptible IO in two possible scenarios: (1) when the higher-priority request is serviced only after the ongoing IO completes (non-preemptible IO), and (2) when the ongoing IO is preempted to service the higher-priority request (Semi-preemptible IO). If the ongoing IO request is not preempted, then all higher-priority requests that arrive while it is being serviced must wait until the IO completes. The results in Table 2 illustrate the case when the ongoing request is a read request; the results for the write case are presented in Table 4.
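The intuition behind the two scenarios can be captured with a simple back-of-envelope model (a simplification introduced here for illustration, not taken from the measurements in Table 2). Let $T$ denote the total duration of the ongoing low-priority IO and $T_{\mathrm{chunk}}$ the duration of its longest non-preemptible component; both symbols are assumptions of this sketch. If a higher-priority request arrives at a uniformly random instant during the ongoing IO, its expected waiting time before the disk becomes available is roughly
\[
E[W_{\mathrm{non\text{-}preemptible}}] \approx \frac{T}{2},
\qquad
E[W_{\mathrm{semi\text{-}preemptible}}] \approx \frac{T_{\mathrm{chunk}}}{2},
\]
since without preemption it must wait for the remainder of the entire IO, whereas with Semi-preemptible IO it only waits for the current non-preemptible component to finish. The gap between the two grows with the size of the ongoing IO, which is consistent with the trend reported in Table 2.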
Preemption of IO requests is not possible without overhead. Each time a higher-priority request preempts a low-priority IO request for disk access, an extra seek is required to resume servicing the preempted request after the higher-priority request has been completed. Table 2 presents the average response time and the disk throughput for different arrival rates of higher-priority requests. For a fixed low-priority IO request size, the average response time does not increase significantly as the arrival rate of higher-priority requests grows. However, the disk throughput does decrease with an increasing arrival rate. As explained earlier, this reduction is expected since the overhead of IO preemption is an extra seek operation per preemption. For applications that require short response times, the performance penalty of IO preemption seems acceptable.
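The throughput penalty can be estimated with a minimal sketch, assuming (as a simplification not drawn from Table 2) that each preemption costs one extra seek of fixed duration and that higher-priority requests arrive at a constant rate; the transfer rate and seek cost below are hypothetical placeholder values.

\begin{verbatim}
/* Back-of-envelope estimate of throughput loss due to IO preemption.
 * Assumptions (illustrative only): each preemption of an ongoing
 * low-priority IO costs one extra seek of t_seek seconds, and
 * higher-priority requests arrive at a constant rate lambda (req/s). */
#include <stdio.h>

int main(void) {
    const double disk_rate_mbs = 25.0;  /* hypothetical sequential transfer rate, MB/s */
    const double t_seek = 0.010;        /* hypothetical extra seek cost per preemption, s */
    const double lambdas[] = {0.0, 1.0, 5.0, 10.0, 20.0};

    printf("%10s %18s\n", "lambda", "throughput (MB/s)");
    for (size_t i = 0; i < sizeof(lambdas) / sizeof(lambdas[0]); i++) {
        /* In one second of wall-clock time, lambda preemptions waste
         * lambda * t_seek seconds on extra seeks; the rest is spent
         * transferring low-priority data. */
        double busy_fraction = 1.0 - lambdas[i] * t_seek;
        if (busy_fraction < 0.0)
            busy_fraction = 0.0;
        printf("%10.1f %18.2f\n", lambdas[i], disk_rate_mbs * busy_fraction);
    }
    return 0;
}
\end{verbatim}

Under these assumed parameters the model predicts a roughly linear drop in low-priority throughput with the higher-priority arrival rate, which mirrors the qualitative trend described above.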