
Large-Scale Video-on-Demand Systems - Caching, Broadcasting, Patching, VCR Functionality


J. Feng
City University of Hong Kong, Hong Kong

W.F. Poon and K.T. Lo
The Hong Kong Polytechnic University, Hong Kong

Definition: Large-scale video-on-demand systems allow distributed users to interactively access video files from remote servers and consist of four components: video server, directory server, proxy server, and clients/set-top-box.

With advances in digital video technology and high-speed networking, video-on-demand (VoD) services have come into practice in recent years. A VoD service allows geographically distributed users to interactively access video files from remote VoD servers. Users can request videos from a server at any time and enjoy flexible control of video playback with VCR-like functions. Nevertheless, such systems have not yet been a commercial success because server and network requirements limit the wide deployment of such services. Engineers have thus tried to minimize the resource requirements and increase system scalability. There are two main approaches to building a large-scale VoD system cost-effectively. The first is to use a proxy server to minimize the backbone network transmission cost. The second is to use multicasting/broadcasting techniques to share system resources. Currently, caching, broadcasting and patching are the major data-sharing techniques for providing a cost-effective VoD service.

Figure 1 shows a general large-scale VoD system architecture that basically consists of four components: video server, directory server, proxy server and clients/set-top-boxes. The video server is responsible for streaming videos to the distributed clients. It determines whether there are sufficient resources, such as disk bandwidth and available network channels, to serve the clients. The server application can support both unicast and broadcast connections depending on the Quality of Service (QoS) requirements of the clients. When the server receives video requests, the video data are retrieved from a repository such as a RAID and then transmitted to the clients. In a good VoD system design, the server should handle most of the complexity. The set-top-box is only responsible for sending/receiving signals to/from the VoD server. Its buffer stores video data before playback to maintain a continuous, jitter-free display.

Caching

To reduce backbone network traffic, a proxy server can be installed between the central server and the clients as shown in Figure 1. In this hierarchical VoD system, video data can be temporarily stored in a proxy server. In general, the proxy server caches the most popular videos for users' repeat requests in order to minimize the backbone transmission cost. When the proxy server receives a user's request, it streams the video back if the video has already been cached. Otherwise, it forwards the request to the higher level and retrieves the requested video from the central server. One of the challenges in this hierarchical architecture is deciding which videos, and how many copies, to maintain at each proxy server. Alternatively, instead of storing videos in their entirety, the proxy servers can pre-cache just the beginning portion of each video to support the local customers, making efficient use of the limited proxy server capacity.
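The hit/miss flow just described can be sketched as follows (a minimal illustration with hypothetical names, not a real proxy implementation):

```python
def handle_request(video_id, proxy_cache, central_server):
    """Serve a request at the proxy: hit -> serve locally, miss -> fetch upstream."""
    if video_id in proxy_cache:
        return proxy_cache[video_id]      # cache hit: no backbone traffic needed
    data = central_server[video_id]       # cache miss: retrieve from central server
    proxy_cache[video_id] = data          # keep a copy for repeat requests
    return data
```

In practice the "store a copy" step is governed by the caching policy discussed next, since the proxy cannot hold every video.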

Apart from updating the video content in the proxy server periodically, say every 1 or 2 days, dynamic replacement algorithms may also be implemented to maximize the server's utilization. The least frequently used (LFU) algorithm exploits the history of accesses to predict the access probability of an object. However, it was originally designed for web caching and does not work well for continuous media data like videos. A resource-based caching (RBC) algorithm for constant-bit-rate encoded videos was proposed that uses the caching gain and resource requirement of each video to make decisions in a limited cache resource environment. In addition, some researchers suggested encoding the videos into a number of fixed-size or variable-size segments, each serving as an atomic unit for the caching policy. For a heterogeneous network environment such as the Internet, the system may use layered encoded videos to flexibly provide different video stream qualities according to the clients' available bandwidth. In this case, only some layers of the videos will be stored in the proxy server.
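As a rough illustration of the LFU policy mentioned above (a minimal sketch with an invented interface; it does not model the RBC or segment-based schemes):

```python
from collections import defaultdict

class LFUCache:
    """Minimal LFU replacement sketch: when full, evict the object
    with the fewest recorded accesses."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                 # cached objects
        self.hits = defaultdict(int)    # access counts per object

    def access(self, key, value=None):
        if key in self.store:
            self.hits[key] += 1         # record the access
            return self.store[key]
        if value is not None:
            if len(self.store) >= self.capacity:
                # evict the least frequently used object
                victim = min(self.store, key=lambda k: self.hits[k])
                del self.store[victim]
                del self.hits[victim]
            self.store[key] = value
            self.hits[key] = 1
        return value
```

This counting behavior is precisely why LFU suits web objects better than videos: a long video accessed once generates one count, regardless of how much data was actually streamed.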

Broadcasting

If the network supports broadcast/multicast traffic, data broadcasting techniques can further increase the system scalability. For example, if the video server is installed in a broadcast/multicast-enabled network (see Figure 1) in which routers/switches support multicast protocols such as the Internet Group Management Protocol (IGMP), all the users in this domain can enjoy broadcast/multicast transmission and the video server can simply use broadcast/multicast channels to serve a large group of users.

Staggered broadcasting (STB) is the simplest broadcasting protocol, proposed in the early days. The approach of STB is to open the video channels at a fixed regular interval. Suppose that the video length is L minutes and the video data rate is C. The protocol allocates K channels, each with bandwidth C, to transmit the whole video. The video is then broadcast at its transmission rate over the channels at a phase delay, and the start-up latency is equal to L/K. Figure 2 illustrates a video composed of 4 segments delivered over 4 broadcasting channels. If the video is 120 minutes long, the maximum start-up delay, i.e. the waiting time of the user, will be as long as 30 minutes.
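The staggered schedule and its worst-case delay can be computed directly (an illustrative sketch of the L/K relationship above):

```python
def stb_schedule(video_len_min, num_channels):
    """Channel start times (in minutes) and the worst-case start-up
    delay L/K for staggered broadcasting."""
    phase = video_len_min / num_channels          # phase delay between channels
    starts = [i * phase for i in range(num_channels)]
    return starts, phase                          # max wait = one full phase
```

For the 120-minute, 4-channel example in the text, the channels start at 0, 30, 60 and 90 minutes, and a client arriving just after a start waits the full 30-minute phase.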

To reduce the start-up delay, pyramid broadcasting (PB) was introduced. In this scheme, the access time is reduced at the cost of a large receiver buffer. The principle behind PB is to divide the physical channel into K logical channels of equal bandwidth B/K, where B is the total bandwidth of the network. Each video is partitioned into K segments of geometrically increasing size so that the i-th logical channel periodically broadcasts the i-th video segment of each of the M videos in turn. No other segments are transmitted through this channel. With the PB scheme, both the client I/O capacity and the buffer requirement are very high.

To overcome the problems of PB, skyscraper broadcasting (SB) was developed. In SB, a new video fragmentation function divides the video into segments of different sizes. The system then broadcasts the video segments over K channels. The size of the i-th video segment, in units of the first segment size D1,SB, is given by eqn. (1).

At the client side, reception of segments is done in terms of transmission groups, where a transmission group is defined as a run of consecutive segments of the same size. Users need to download data from at most two channels at any time, and the receiver buffer requirement is constrained by the size of the last segment. Figure 3 shows the transmission schedule of SB when K is equal to 5, and the shaded area shows the receiver's reception schedule when the customer arrives at time t.
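The fragmentation function itself (eqn. (1)) is not reproduced here; the sketch below assumes the commonly cited skyscraper series 1, 2, 2, 5, 5, 12, 12, 25, 25, ..., with each size capped at D_SBMAX:

```python
def skyscraper_sizes(k, d_max=52):
    """Segment sizes in units of the first segment for K channels,
    assuming the widely cited skyscraper series, capped at d_max (D_SBMAX)."""
    sizes = []
    for i in range(1, k + 1):
        if i == 1:
            s = 1
        elif i in (2, 3):
            s = 2
        elif i % 4 == 0:
            s = 2 * sizes[-1] + 1     # odd-step growth
        elif i % 4 == 2:
            s = 2 * sizes[-1] + 2     # even-step growth
        else:                         # i % 4 in (1, 3): repeat previous size
            s = sizes[-1]
        sizes.append(min(s, d_max))
    return sizes
```

Note how sizes come in equal pairs: those pairs are exactly the transmission groups the client tunes into, which is why at most two channels are needed at once.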

DSBMAX is defined to restrict the segments from becoming too large. Thus, the start-up latency TSB, which is equal to the size of the first segment, is:

The storage requirement BufSB can be computed by eqn. (3).

With a similar approach, fast-data broadcasting (FB) can further reduce the start-up latency by dividing a video into a geometric series of segments [1, 2, 4, ..., 2^(K-2), 2^(K-1)]. This protocol is very efficient in terms of server bandwidth requirement, but the receiver is required to download video segments from all K channels simultaneously. Other broadcasting protocols such as harmonic broadcasting, cautious harmonic broadcasting (CHB) and quasi-harmonic broadcasting (QHB) were also developed. In addition, some hybrid protocols, called pagoda-based broadcasting, were derived. These protocols partition each video into fixed-size segments and map them onto video streams of equal bandwidth, using time-division multiplexing to minimize the access latency and bandwidth requirement.
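The FB trade-off is easy to quantify (an illustrative sketch: the first, smallest segment determines the worst-case wait):

```python
def fast_broadcast_sizes(k):
    """Geometric segment sizes [1, 2, 4, ..., 2^(K-1)] over K channels."""
    return [2 ** i for i in range(k)]

def fb_startup_fraction(k):
    """Worst-case start-up delay as a fraction of the video length:
    the first segment over the total size, i.e. 1 / (2^K - 1)."""
    return 1 / (2 ** k - 1)
```

With K = 4 channels a 120-minute video splits as [1, 2, 4, 8] units, so the maximum wait is 120/15 = 8 minutes, versus 30 minutes for STB with the same 4 channels; the price is receiving on all K channels at once.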

Patching

As described above, all the video broadcasting protocols introduce a start-up latency ranging from tens of seconds to tens of minutes. The delay depends on the efficiency of the broadcasting scheme, the channel bandwidth requirement and the receiver buffer size. The patching scheme was therefore designed to provide a true (zero-delay) VoD service in a broadcast environment. The idea of patching is that clients can download data on two channels simultaneously when they request a video. They can then merge into the broadcasting channels while watching the video. In this case, video playback does not have to synchronize with the transmission schedule of the broadcasting channels, and clients can start the video as soon as they make their requests.

Figure 4 shows the patching scheme based on the staggered broadcasting protocol. When a client arrives at time t1, instead of waiting for the video data from broadcasting channel 2 at t2, he/she receives the missing portion of the video, s1, from a unicast channel and at the same time buffers the video data from broadcasting channel 1. Once s1 has been received, the unicast channel can be disconnected, and the client is served by the broadcasting channel until the end of the video. Thus, compared with a traditional VoD system in which each client is served by a unicast channel, the patching scheme can significantly reduce the resource requirements.
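The length of the unicast patch is simply the gap back to the most recent broadcast start (an illustrative calculation with hypothetical helper names):

```python
def patch_length(arrival_min, channel_starts):
    """Minutes of unicast 'patch' a client needs under patching over STB:
    the time elapsed since the latest broadcast start at or before arrival."""
    prior = [s for s in channel_starts if s <= arrival_min]
    return arrival_min - max(prior)   # catch up to the ongoing broadcast
```

For example, with channels starting at 0, 30, 60 and 90 minutes, a client arriving at minute 10 downloads a 10-minute patch while buffering channel 1, then watches from the buffer for the rest of the video; the unicast channel is held for only 10 minutes instead of the full 120.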

Based on a similar idea, the hierarchical stream merging (HSM) scheme hierarchically merges clients into broadcasting groups so that the bandwidth requirement can be further reduced compared with the simple patching scheme.

VCR Functionality

In a VoD system, clients may request different types of VCR functions such as pause and fast forward. Implementing continuous VCR functions in the above broadcasting systems remains an open issue, except for the STB protocol. To implement the pause function in the STB system, the receiver buffer is used together with the jumping-group property. Figure 5 illustrates how the system provides the pause operation. For example, suppose a client being served by broadcasting group i pauses the video at t1. Since the client continues to receive video data, his/her buffer will be full at t1 + W, where W is the start-up delay. At this moment, he/she jumps from broadcasting group i to the succeeding group i + 1. After the change of broadcasting group, the duplicated buffer content is removed, and the data from group i + 1 continue to fill the buffer. In general, the customer can jump to group i + 2, i + 3, and so on until he/she resumes from the pause operation.
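The group-jumping behavior during a pause can be modeled with a toy calculation (a sketch under the assumption, stated above, that the buffer fills every W minutes of pause):

```python
import math

def group_after_pause(current_group, pause_minutes, startup_delay_w):
    """Jumping-group sketch for pause in STB: each time the buffer fills
    (every W minutes of pause), the client hops to the next staggered group."""
    jumps = math.floor(pause_minutes / startup_delay_w)
    return current_group + jumps
```

So with W = 30 minutes, a client in group 1 who pauses for 75 minutes ends up in group 3, having jumped twice while paused; a pause shorter than W requires no jump at all.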

In the STB system, the Split-And-Merge (SAM) protocol may be applied to handle the other interaction types, including Jump Forward/Backward and Fast Forward/Reverse. The basic idea of SAM is to use separate channels, called interactive (unicast) channels, to support the VCR functions. When playout resumes, the VoD server attempts to merge the interactive user back into one of the broadcasting channels. Each interactive customer is allocated a buffer of maximum size SB, located at the access nodes, for synchronization purposes. Instead of using a shared buffer in the access nodes, it has been suggested that a decentralized, non-shared buffer is more suitable for a large-scale VoD system providing interactive functions. Figure 6 shows the state diagram of an interactive user's behavior.

Apart from using contingency channels, a buffer management scheme called Active Buffer Management (ABM) was developed to support VCR functions using the receiver buffer. The ABM scheme assumes that the receiver can listen to and receive data from several broadcasting channels at the same time. Data loading strategies maintain the play point in the middle of the receiver buffer so as to support more VCR requests. Since the buffer size is limited, however, the system cannot guarantee continuous VCR functions.
