
Scheduling in Multimedia Systems - Introduction, Data Placement and Scheduling, Cycle-based Scheduling, Deadline-driven Scheduling, Other Scheduling Policies


Seon Ho Kim
University of Denver, Denver, USA

Definition: Streaming media servers employ the scheduling of available disk bandwidth to guarantee a continuous display of streaming media and to maximize throughput by minimizing the wasteful work of disk drives.

Introduction

A general software structure of a multimedia server supporting continuous displays of streaming media (SM) objects such as audio and video consists of three main components: data placement and scheduling, buffering, and admission control. As shown in Figure 1, SM objects are stored on multiple magnetic disk drives following a specific data placement scheme. When a user requests the display of an object X, the server schedules the retrieval of the data blocks of X, while the network ensures the timely delivery of these blocks to the client. The SM server stages a block of X (say X_i) from disk into a memory buffer and initiates its delivery to the display station (via the network). The server schedules the retrieval and delivery of the next block X_i+1 prior to the completion of the display of X_i. This ensures a smooth transition between the two blocks in support of continuous display. This process is repeated until all blocks of X have been retrieved, delivered, and displayed.

Upon the arrival of a user request to display an object, admission control estimates the resource requirement of the request and admits it only when the required resources are available. This ensures that the quality of service required for a smooth display is maintained. Once the request is accepted, the requested object is retrieved from the disk array following a pre-defined scheme and staged in a memory buffer. Various buffering schemes can absorb the variance in block retrieval time and provide smooth data transmission over the network. Some buffering schemes, such as batching, can also increase throughput by grouping requests for the same object that arrive within a short duration of time.
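The admission decision above can be sketched as a simple bandwidth check. This is a minimal illustration, not the server's actual algorithm; the class name, the bandwidth figures, and the single-resource model are assumptions for the example.

```python
# Minimal admission-control sketch (hypothetical names and values):
# a request is admitted only if the disk bandwidth it needs is still available.

class AdmissionController:
    def __init__(self, total_bandwidth_mbps):
        self.total = total_bandwidth_mbps   # total disk bandwidth (Mb/s)
        self.used = 0.0                     # bandwidth committed so far

    def admit(self, required_mbps):
        """Accept the request only when enough bandwidth remains."""
        if self.used + required_mbps <= self.total:
            self.used += required_mbps
            return True
        return False

ac = AdmissionController(total_bandwidth_mbps=80)
print([ac.admit(30) for _ in range(3)])  # → [True, True, False]
```

A real server would track several resources (bandwidth per disk, buffer memory) rather than one aggregate figure.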

Data Placement and Scheduling

Traditionally, in conjunction with data placement, SM servers employ the scheduling of available disk bandwidth to guarantee a continuous display and to maximize throughput by minimizing the wasteful work of disk drives. Among the various possible disk scheduling algorithms, two approaches are widely utilized: deadline-driven and cycle-based.

The deadline-driven approach is a technique for real-time scheduling of I/O tasks and can be applied to the scheduling of SM data retrievals. With this approach, a request for a block is tagged with a deadline that ensures a continuous display. Disks are scheduled to service requests with the Earliest Deadline First scheme. A limitation of this approach is that seek times may not be optimized because the sequence of block retrievals is determined by deadlines.

Cycle-based scheduling is an approach that exploits the periodic nature of SM display. To support continuous display of an object X, it is partitioned into n equi-sized blocks: X_0, X_1, ..., X_n-1, where n is a function of the block size (B) and the size of X. A block is laid out contiguously on the disk and is the unit of transfer from disk to main memory. The time required to display a block is defined as a time period (T_p). A time period is partitioned into a number of time slots such that the duration of a slot is long enough to retrieve a block from a disk. The number of slots denotes the number of simultaneous displays supported by a disk. During a given time period (or cycle), the system retrieves up to N blocks, only one block for each display. Block requests for the next cycle are not issued until the current cycle ends. During the next cycle, the system retrieves the next blocks for the displays in a cyclic manner. For example, assuming that the system can support up to three block retrievals during a time period, suppose that the system retrieves X_i, Y_j, and Z_k in a given time period. During the next time period, it retrieves X_i+1, Y_j+1, and Z_k+1.
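The cyclic retrieval pattern described above can be sketched in a few lines. The object names, starting block indices, and slot count are illustrative assumptions matching the X, Y, Z example.

```python
# Cycle-based scheduling sketch: each time period (cycle) retrieves at most
# n_slots blocks, one per active display, advancing each display by one block.

def cycle_schedule(displays, n_slots, cycles):
    """displays: dict of object -> next block index.
    Returns, per cycle, the list of (object, block_index) retrievals."""
    positions = dict(displays)
    schedule = []
    for _ in range(cycles):
        retrievals = []
        for obj in list(positions)[:n_slots]:   # one block per display
            retrievals.append((obj, positions[obj]))
            positions[obj] += 1                  # next block next cycle
        schedule.append(retrievals)
    return schedule

# Three displays X, Y, Z at blocks i=0, j=5, k=9; three slots per period.
for cycle in cycle_schedule({"X": 0, "Y": 5, "Z": 9}, n_slots=3, cycles=2):
    print(cycle)
```

The first cycle retrieves (X, 0), (Y, 5), (Z, 9); the next retrieves (X, 1), (Y, 6), (Z, 10), mirroring the X_i, Y_j, Z_k example.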

In general, the display time of a block is longer than its retrieval time from a disk drive. Thus, the bandwidth of a disk drive can be multiplexed among multiple simultaneous displays accessing the same disk drive. For example, with the 4 Mb/s display bandwidth requirement of MPEG-2 encoded objects, a disk drive with a data transfer rate of 80 Mb/s can support up to 20 simultaneous displays. This is the ideal case, when there is no overhead in disk operation. In reality, however, a magnetic disk drive is a mechanical device and incurs a delay when required to retrieve data. This delay consists of: 1) the seek time to reposition the disk head from the current track to the target track, and 2) the rotational latency to wait until the data block arrives under the disk head. These are wasteful operations that prevent a disk drive from transferring data. Both the number of their occurrences and the duration of each occurrence must be reduced in order to maximize the number of simultaneous displays supported by a disk drive. Thus, the performance of SM servers depends significantly on the physical characteristics of magnetic disk drives such as data transfer rate, seek times, and rotational latency. For example, it is obvious that we can increase the throughput of a server by using disk drives with a higher data transfer rate.
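The ideal upper bound in the example above is simple arithmetic: with no seek or rotational overhead, the number of streams is the disk transfer rate divided by the per-stream display rate.

```python
# Ideal (overhead-free) multiplexing bound from the text:
# an 80 Mb/s disk serving 4 Mb/s MPEG-2 streams.

R_D = 80   # disk transfer rate, Mb/s
R_C = 4    # display bandwidth per stream, Mb/s
print(R_D // R_C)  # → 20 simultaneous displays
```

The sections that follow show how seek time and rotational latency shrink this bound in practice.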

In SM servers with multiple disk drives, data blocks are assigned to disks in order to distribute the load of a display evenly across the disks. Thus, data placement can affect the continuous display and performance of SM servers in conjunction with scheduling techniques. There are two well-known approaches to assigning the blocks of an object across multiple disk drives: constrained and unconstrained. A typical example of constrained data placement is round-robin. As its name suggests, this technique assigns the blocks of an object to disks in a round-robin manner, starting with an arbitrarily chosen disk. Assuming d disks in the system, if the first block of an object X is assigned to disk d_i, the j-th block of X is assigned to disk d_(i+j-1) mod d. An example of unconstrained data placement is random placement, which assigns data blocks to disk drives using a random number generator.
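The round-robin mapping in the text is a one-line modular formula. The function name and the example disk array are illustrative.

```python
# Round-robin placement: with d disks, if block 1 of an object is on disk i,
# block j (1-based) goes to disk (i + j - 1) mod d.

def round_robin_disk(i, j, d):
    """Disk index for the j-th block of an object whose first block is on disk i."""
    return (i + j - 1) % d

# An object starting on disk 2 of a 4-disk array:
print([round_robin_disk(2, j, 4) for j in range(1, 7)])  # → [2, 3, 0, 1, 2, 3]
```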

With the combination of cycle-based scheduling and round-robin data placement, one block is retrieved from each disk drive for each display in every time period. Thus, assuming d disk drives in the system, data retrieval for a display cycles through all d disks in d successive time periods, following the round-robin data placement in a lock-step manner. The system load should be distributed across disk drives to prevent the formation of bottlenecks. This load can be balanced by intentionally delaying the retrieval of the first block of a requested object whenever a bottleneck forms on a disk drive. Due to the harmony of round-robin data placement and periodic cycle-based data retrieval, this approach provides a deterministic service guarantee for a hiccup-free display of an SM object once its retrieval is initiated. This approach maximizes the utilization of disk bandwidth by distributing the load of a display evenly across disks. Thus, the system throughput scales linearly as a function of the number of disk drives in the system. The drawback of this approach is that the worst-case startup latency scales linearly as a function of the number of disks, while the average startup latency increases sub-linearly. This is because the system might delay the initiation of data retrievals of objects. Thus, this approach is suitable for applications that require a high throughput and can tolerate a longer startup latency, such as movie-on-demand systems. To minimize this limitation, data replication techniques have been proposed to reduce latency while maintaining the same throughput.

Deadline-driven scheduling with random data placement controls the deadlines of block retrievals. This approach can provide a shorter startup latency than the cycle-based and round-robin approach. Hence, it is more appropriate for applications requiring a short startup latency, such as digital editing systems. However, this approach may suffer from statistical variation in the number of block retrievals at a disk drive. Due to the nature of random data placement, a disk might receive more than its fair share of requests. The formation of a bottleneck at a disk drive may result in the violation of the deadlines set on requested blocks, causing some displays to incur hiccups. This hiccup probability might be significant depending on the system load. To address this limitation, a bulk prefetching technique has been proposed to minimize the hiccup probability while maintaining a short startup latency.
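The Earliest Deadline First service order at a disk queue can be sketched with a priority queue. The block names and deadlines are illustrative.

```python
import heapq

# EDF sketch: each block request carries a deadline, and the disk always
# services the pending request with the earliest deadline first.

def edf_order(requests):
    """requests: list of (deadline, block_name). Returns the service order."""
    heap = list(requests)
    heapq.heapify(heap)                    # min-heap keyed on deadline
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(edf_order([(3, "X5"), (1, "Y2"), (2, "Z7")]))  # → ['Y2', 'Z7', 'X5']
```

Note that, as the text observes, this order is dictated by deadlines alone, so consecutive retrievals may sit on distant cylinders and seek times are not optimized.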

Cycle-based Scheduling

The system is configured with a fixed amount of memory and a disk array. The transfer rate of a disk drive is denoted as R_D. The objects that constitute the SM server belong to a single media type and require a fixed bandwidth for their display (R_C). Without loss of generality, R_D > R_C. Then, in order to support simultaneous displays of several objects, a time period is partitioned into fixed-size slots, each slot corresponding to the retrieval time of a block from the disk drive. The number of slots in a time period defines the number of simultaneous displays that can be supported by a disk drive. Figure 2 demonstrates the concept of time periods and time slots. Each box represents a block reading time. Assuming that each block is stored contiguously on the surface of the disk, the disk incurs a disk head movement, termed a seek, every time it switches from one block of an object to another.

Since the blocks of different objects are scattered across the disk surface, a basic technique (Basic) should assume the maximum seek time, i.e., a full stroke, which is the time to reposition the disk head from the outermost cylinder to the innermost one, between two adjacent block retrievals. Moreover, it should also assume the maximum rotational latency of the disk drive. Otherwise, a continuous display of each object cannot be guaranteed. Thus, the following inequality should be satisfied to support a maximum of N simultaneous displays; the worst-case time to retrieve N blocks must fit within one time period, T_p = B/R_C:

N × (B/R_D + seek(#cyl) + max_rot) ≤ B/R_C    (1)

where seek(#cyl) denotes a full-stroke seek over all #cyl cylinders and max_rot denotes the maximum rotational latency. For a fixed block size, N is:

N = ⌊ (B/R_C) / (B/R_D + seek(#cyl) + max_rot) ⌋    (2)

The maximum startup latency (l) observed by a request with this technique is one T_p, because a request might arrive just too late to use the empty slot in the current time period. Note that the average latency is l/2 when the number of active users is smaller than N. If the number of active displays exceeds N, then l should be extended with appropriate queuing models.
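The Basic bound, the largest N whose worst-case retrievals fit within one time period B/R_C, can be computed directly. The parameter values below (block size, seek and rotation times) are illustrative assumptions, not figures from the text.

```python
import math

# Worst-case admission bound for Basic: the largest N such that
# N * (B/R_D + max_seek + max_rot) <= B/R_C.

def max_displays(B, R_D, R_C, max_seek, max_rot):
    """Largest N whose worst-case block retrievals fit in one time period."""
    worst_retrieval = B / R_D + max_seek + max_rot   # seconds per block
    return math.floor((B / R_C) / worst_retrieval)

# 8 Mb (1 MB) blocks, 80 Mb/s disk, 4 Mb/s streams,
# 20 ms full-stroke seek, 10 ms maximum rotational latency.
B = 8.0
N = max_displays(B, R_D=80, R_C=4, max_seek=0.020, max_rot=0.010)
Tp = B / 4            # time period = display time of one block, seconds
print(N, Tp)          # → 15 2.0
```

With these assumed overheads the disk supports 15 displays rather than the ideal 20, showing how seeks and rotational latency erode throughput.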

One approach to reducing the worst-case seek time is to schedule the disk bandwidth for multiple block retrievals in a time period. One can apply the SCAN algorithm to the block retrievals during a time period: the system sorts the order of block retrievals based on the location of the blocks on the disk. The movement of the disk head to retrieve blocks during a time period abides by the SCAN algorithm, in order to reduce the seek times incurred between retrievals. However, a hiccup may happen if the system initiates the display of a block immediately after its retrieval, as in Basic. This is because the time elapsed between two consecutive block retrievals can be greater than a time period. In order to prevent hiccups, the displays of all the blocks retrieved during the current time period must start at the beginning of the next time period. Figure 3 demonstrates a continuous display with SCAN. The blocks A_i, B_j, and F_k are retrieved during the first time period. The displays of these blocks are initiated at the beginning of the next time period.

Eqs. (1) and (2) still hold with SCAN, but with a reduced seek time, seek(#cyl/N). SCAN requires two buffers per display because a block is displayed from one buffer while the next block is being retrieved from the disk into the other buffer. The maximum startup latency occurs when a request arrives just after a SCAN begins in the current time period and the retrieval of the first block is scheduled at the end of the next time period. Thus, it is 2 T_p.
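SCAN's reordering step is just a sort on cylinder position, so the head sweeps in one direction across the requested blocks. The cylinder numbers and block names are illustrative.

```python
# SCAN within a time period: sort the blocks to retrieve by cylinder so the
# head sweeps in one direction, shrinking the seek between retrievals.

def scan_order(requests):
    """requests: list of (cylinder, block). Returns blocks in sweep order."""
    return [blk for cyl, blk in sorted(requests)]

# Blocks A_i, B_j, F_k scattered across the disk:
print(scan_order([(900, "F_k"), (120, "A_i"), (450, "B_j")]))
# → ['A_i', 'B_j', 'F_k']
```

Because the retrieval order now depends on block locations rather than display order, all displays must wait for the start of the next time period, which is why the worst-case latency grows to 2 T_p.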

A more general scheduling technique is the Grouped Sweeping Scheme (GSS). GSS groups the N active requests in a T_p into g groups. This divides a T_p into g subcycles, each corresponding to the retrieval of ⌈N/g⌉ blocks. The movement of the disk head to retrieve the blocks within a group abides by the SCAN algorithm, in order to reduce the seek times incurred within a group. Across the groups, there is no constraint on the disk head movement. To support the SCAN policy within a group, GSS shuffles the order in which the blocks are retrieved. For example, assuming A_i, B_j, C_k, and D_l are retrieved within a time period, the sequence of block retrievals might be A_i followed by B_j during the first subcycle of T_p, and C_k followed by D_l during the second subcycle of the same time period. In the next time period, the retrieval order might change to B_j+1 followed by A_i+1, and then D_l+1 followed by C_k+1, during the first and second subcycle, respectively. In this case, the display of (say) A might suffer from hiccups because the time elapsed between the retrievals of A_i and A_i+1 is greater than one time period. Thus, in GSS, the displays of all the blocks retrieved during subcycle i start at the beginning of subcycle i+1. To illustrate, consider Figure 4, where g = 2 and N = 4. The blocks A_i and B_j are retrieved during the first subcycle. Their displays are initiated at the beginning of subcycle 2 and last for two subcycles. Therefore, while it is important to preserve the order of groups across the T_p's, it is no longer necessary to maintain the order of block retrievals within a group. Eqs. (1) and (2) still hold with a further reduced seek time, seek(#cyl/⌈N/g⌉).

The maximum startup latency observed with this technique is the sum of one time period (if the request arrives when the empty slot is just missed) and the duration of a subcycle, i.e., l = T_p + T_p/g. It may appear that GSS results in a higher latency than Basic. However, this is not necessarily true because the duration of a time period differs between these two techniques due to the choice of different block sizes; the duration of a time period is a function of the block size. GSS is a generalization of the other techniques: observe that g = N simulates Basic and g = 1 simulates SCAN.
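GSS's structure, groups served in fixed order with SCAN applied inside each group, can be sketched as follows. The requests and cylinder numbers are illustrative; the grouping here simply takes consecutive requests, whereas a real server would assign requests to groups on admission.

```python
# GSS sketch: split the N retrievals of a time period into g groups
# (subcycles); SCAN-sort inside each group, keep the group order fixed.
# g = N degenerates to Basic, g = 1 to SCAN.

def gss_schedule(requests, g):
    """requests: list of (cylinder, block). Returns g subcycles, SCAN-sorted."""
    per_group = -(-len(requests) // g)   # ceil(N / g) blocks per subcycle
    groups = [requests[i:i + per_group]
              for i in range(0, len(requests), per_group)]
    return [[blk for cyl, blk in sorted(grp)] for grp in groups]

reqs = [(700, "A_i"), (100, "B_j"), (500, "C_k"), (300, "D_l")]
print(gss_schedule(reqs, g=2))  # two subcycles, SCAN within each
print(gss_schedule(reqs, g=1))  # one subcycle: pure SCAN
```

With g = 2 the head sweeps twice per time period over smaller groups; with g = 1 it makes one full SCAN over all four blocks.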

Deadline-driven Scheduling

With random placement, the system may suffer from statistical variation in the number of block requests at a disk. Even though all the objects are displayed at a constant rate, the probability that a disk receives a higher number of requests in a given time than other disks might be significant. Thus, the time to retrieve a block might be greater than a time period, resulting in a hiccup. Each disk has its own queue, and requests remain in the queue until serviced. For a simple discussion, assume that each block request is tagged with the same deadline, one T_p, and that each disk drive can support up to three block retrievals during a T_p. Then, all requests can be serviced within a T_p when there are fewer than 4 requests in a queue. However, with more than 3 requests, say 5, the deadlines of the 4th and 5th requests are violated and hiccups happen.

We can employ a queueing model to quantify the expected hiccup probability with deadline-driven scheduling. With a Poisson arrival process and a deterministic service process, this is the probability that a request remains in the system for longer than the duration of a given deadline, T_p (including the waiting time in the queue and the service time). In particular, when a new request finds N or more waiting requests in the queue of a disk, this request cannot be serviced within a T_p and will experience a hiccup. Suppose there are d disks in the system and each disk supports a maximum of N requests in a T_p. When there are k active requests in the system, each disk receives k/d requests on average per T_p. This is because blocks are randomly distributed across disks and a request accesses a specific disk with a probability of 1/d. Using an M/D/1 queueing model, the probability that a request spends a time less than or equal to t in the system can be calculated using the queue length distribution, p_n,

where js ≤ t < (j+1)s, j = 1, 2, …, W is the random variable describing the total time a request spends in the queueing system, and s is the average service time of a block request. The probability that n requests are in the queue, p_n, is:

where ρ is the system utilization (load). Then, the probability of hiccups is:

The average startup latency with deadline-driven scheduling and random placement can be defined as the average time in the queueing system (the average queueing time plus the service time):
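In place of the closed-form M/D/1 expressions, the hiccup probability for a single disk can also be estimated by simulation: Poisson arrivals, deterministic service time s, and a deadline of T_p = N·s per request. This is a rough Monte Carlo sketch with illustrative parameters, not the analysis from the text.

```python
import random

# Monte Carlo sketch of the hiccup probability at one disk under
# deadline-driven scheduling: a request misses its deadline T_p = N*s
# when the backlog queued ahead of it plus its own service exceeds T_p.

def hiccup_probability(lam, s, N, n_arrivals, seed=1):
    """Fraction of requests whose time in the system exceeds N*s."""
    random.seed(seed)
    t, work_done_at, hiccups = 0.0, 0.0, 0
    for _ in range(n_arrivals):
        t += random.expovariate(lam)             # next Poisson arrival time
        wait = max(0.0, work_done_at - t)        # backlog ahead of this request
        if wait + s > N * s:                     # misses its deadline T_p
            hiccups += 1
        work_done_at = max(work_done_at, t) + s  # completion of queued work
    return hiccups / n_arrivals

# Utilization rho = lam * s = 0.8 with N = 3 slots per time period:
p = hiccup_probability(lam=0.8, s=1.0, N=3, n_arrivals=20000)
print(round(p, 3))   # a modest but nonzero probability at this load
```

As the text notes, this probability grows with the system load; rerunning with a lower lam drives it toward zero.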

Other Scheduling Policies

Above mentioned techniques mainly consider retrieval scheduling of data blocks placed across multiple disks without any consideration of the location of blocks within a disk drive. There are some techniques constraining block locations within a disk as well as placing across disks. Organ-pipe placement tries to place more frequently accessed blocks near the center of a disk, i.e., cylinders in the middle of the disk. This approach can reduce the average seek time when objects are accessed with a deterministic access frequency. However, it fails when access frequency of objects varies over time. A region based block allocation ( REBECA ) was introduced to reduce seek times between two adjacent block retrievals by dividing a disk space into multiple regions and constraining disk accesses within a region during a certain time period. This technique can significantly reduce seek times by increasing the number of regions in a disk but it increases at the same time the maximum startup latency as a function of the number of regions.
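The region constraint in a REBECA-style scheme amounts to partitioning the cylinders into r contiguous regions and serving only one region per period. This is an illustrative sketch of that partitioning only, not the REBECA algorithm itself.

```python
# REBECA-style region sketch (illustrative): cylinders are split into r
# contiguous regions; in a given period the scheduler serves only blocks
# whose cylinder falls in the active region, then moves to the next region.

def region_of(cylinder, total_cyls, r):
    """Region index (0..r-1) that a cylinder belongs to."""
    return min(cylinder * r // total_cyls, r - 1)

# 1000 cylinders split into 4 regions:
print([region_of(c, 1000, 4) for c in (0, 249, 250, 999)])  # → [0, 0, 1, 3]
```

Seeks within a period are then bounded by the region width (total_cyls / r), which is why more regions mean shorter seeks but a longer wait for a region to become active again.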

Some techniques have studied the scheduling of data blocks on multi-zone disks. Modern hard disks consist of multiple zones, where each zone has a different storage capacity and data transfer rate. Disk manufacturers utilize this zoning technique to store more bits on outer cylinders than on inner cylinders, based on the fact that physically longer outer tracks provide more space than shorter inner tracks at a constant bit density (bits per inch). However, zoning makes a deterministic guarantee difficult when servicing real-time multimedia objects because of the greater variance in block retrieval times. Various techniques to handle multi-zone disks can be found in the literature.
