
Quality of Service in Multimedia Networks - QoS Elements, Realization of QoS, Network Scalability and Application Layer Multicasting


Abdulmotaleb El Saddik
School of Information Technology and Engineering
University of Ottawa, Ontario, Canada

Definition: Multimedia applications must adjust their QoS according to heterogeneous terminals with variable QoS requirements and support.

During the last decade, the advances attained in terminal computers, the introduction of mobile hand-held devices, and the deployment of high-speed networks have led to a recent surge of interest in Quality of Service (QoS) for multimedia applications. Computer networks able to support multimedia applications with diverse QoS performance requirements are evolving. To ensure that multimedia applications will be guaranteed the required QoS, it is not enough to merely commit resources. It is important that distributed multimedia applications ensure end-to-end QoS of media streams, considering both the networks and the end terminals. Degradation of the contracted QoS is often unavoidable; thus there is a need for real-time QoS monitoring that is not only capable of observing the QoS support in the network but can also take action in a real-time manner to sustain an acceptable multimedia presentation quality when the QoS level degrades.

Presently, various kinds of networks, wired and wireless, co-exist with each other. These networks have drastically different QoS characteristics, and the variability of their QoS parameters, such as bandwidth, delay, and jitter, differs considerably. Furthermore, there are various kinds of terminals, such as desktop computers, laptop computers, personal digital assistants (PDAs), and cell phones, each with some multimedia support. Thus, multimedia applications must adjust their QoS according to the heterogeneous terminals with variable QoS requirements and support. ISO defines QoS as a concept for determining the quality of the offered networking services. In this article, we consider the following three issues: a) QoS elements, b) realization of QoS, and c) manifesting QoS in an appropriate network.

QoS Elements

We can divide Quality of Service into the following three layers:

  • Application level Quality of Service: specifies those parameters related to user requirements and expectations. Frame size, sample rate, image and audio clarity are some parameters of this level.
  • System level Quality of Service: includes operating system and CPU requirements, such as processing time, CPU utilization, and media relations like synchronization.
  • Network level Quality of Service: defines communication requirements, such as throughput, delay, jitter, loss, and reliability.

The main QoS parameters are discussed next.

Bandwidth or throughput is a network QoS parameter that refers to the data rate supported by a network connection or interface. Bandwidth is most commonly expressed in bits per second (bps), i.e., the effective number of data units transported per unit of time. Multimedia applications usually require high bandwidth compared to other general applications. For example, MPEG-2 video requires around 4 Mbps, while MPEG-1 video requires about 1-2 Mbps. Network technologies that do not support such high bandwidth cannot play multimedia content. For instance, Bluetooth technology version 1 only supports a maximum bandwidth of 746 kbps, and thus devices relying on Bluetooth for connectivity cannot play MPEG-1 videos. Better throughput means better QoS received by the end-user.
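As a rough illustration of this comparison, the following Python sketch (using the approximate figures quoted above and an assumed allowance for protocol overhead) checks whether a link's nominal bandwidth can sustain a given stream bit rate:

    # Rough feasibility check: can a link carry a stream of a given bit rate?
    # The figures below are the approximate values quoted in the text above.

    STREAMS_KBPS = {
        "MPEG-1 video": 1500,    # roughly 1-2 Mbps
        "MPEG-2 video": 4000,    # roughly 4 Mbps
    }

    LINKS_KBPS = {
        "Bluetooth 1.x": 746,    # maximum bandwidth quoted above
        "10 Mbps Ethernet": 10000,
    }

    def can_carry(link_kbps, stream_kbps, overhead=0.1):
        """A link can carry a stream if its capacity exceeds the stream's
        bit rate plus an assumed fractional allowance for protocol overhead."""
        return link_kbps >= stream_kbps * (1 + overhead)

    for link, capacity in LINKS_KBPS.items():
        for stream, rate in STREAMS_KBPS.items():
            verdict = "can" if can_carry(capacity, rate) else "cannot"
            print(f"{link} ({capacity} kbps) {verdict} carry {stream} ({rate} kbps)")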

Delay is defined as the time interval elapsed between the departure of data from the source and its arrival at the destination. In the case of a communication system, delay refers to the time lag between the departure of a signal from the source and its arrival at the destination. This can range from a few nanoseconds or microseconds in local area networks (LANs) to about 0.25 s in satellite communication systems. Greater delays can occur as a result of the time required for packets to make their way through land-based cables and nodes of the Internet. Because of the clock synchronization problem, it is difficult to measure one-way delays; therefore, round-trip delays (i.e., the forward and return paths on the Internet combined) are used.
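Since one-way measurement requires synchronized clocks at both ends, round-trip time (RTT) is what is measured in practice. The sketch below is a minimal illustration rather than a production measurement tool: it estimates RTT from the duration of a TCP connection handshake to an arbitrary host.

    import socket
    import time

    def tcp_rtt_ms(host, port=80, timeout=2.0):
        """Estimate round-trip delay by timing a TCP connection handshake.
        Returns the elapsed time in milliseconds."""
        start = time.perf_counter()
        # The three-way handshake alone gives a rough round-trip estimate.
        with socket.create_connection((host, port), timeout=timeout):
            pass
        return (time.perf_counter() - start) * 1000.0

    if __name__ == "__main__":
        print(f"approximate RTT to example.com: {tcp_rtt_ms('example.com'):.1f} ms")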

Jitter refers to the variation in time between packets arriving at the destination. It is caused by network congestion, timing drift, or route changes. Depending on the multimedia application type, jitter may or may not be significant. For example, audio or video conferencing applications are not tolerant of jitter because of the very limited buffering in live presentations, whereas prerecorded multimedia playback is usually tolerant of jitter, as modern players buffer around 5 seconds to alleviate its effect. The deviation can be in terms of amplitude, phase timing, or the width of the signal pulse.
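A common way for a receiver to quantify jitter (used, for example, by RTP receivers) is a running estimate of the variation in packet transit times. The following sketch assumes each packet carries a sender timestamp; the packet timings are made up for illustration.

    def update_jitter(jitter, prev_transit, transit):
        """Running interarrival-jitter estimate in the RTP style:
        jitter += (|D| - jitter) / 16, where D is the change in transit time."""
        d = abs(transit - prev_transit)
        return jitter + (d - jitter) / 16.0

    # Hypothetical (send_time, receive_time) pairs in milliseconds;
    # the arrival spacing varies, which is exactly what jitter captures.
    packets = [(0, 40), (20, 62), (40, 95), (60, 101), (80, 150)]

    jitter = 0.0
    prev_transit = packets[0][1] - packets[0][0]
    for sent, received in packets[1:]:
        transit = received - sent
        jitter = update_jitter(jitter, prev_transit, transit)
        prev_transit = transit
        print(f"packet sent at {sent} ms: transit {transit} ms, jitter estimate {jitter:.2f} ms")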

Loss mainly denotes the amount of data that did not make it to the destination in a specified time period. The higher the loss, the lower the QoS delivered to the application. Different methods can be used to reduce the impact of loss: either providing dedicated channels or guaranteed bandwidth for specific data transmissions, or retransmitting data for loss recovery.
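A receiver typically measures loss by comparing the sequence numbers it actually received against the number of packets the sender transmitted; the following is a minimal sketch of that bookkeeping.

    def loss_rate(received_seq_nums, sent_count):
        """Fraction of packets that never arrived, based on the unique
        sequence numbers seen versus the number the sender transmitted."""
        if sent_count == 0:
            return 0.0
        lost = sent_count - len(set(received_seq_nums))
        return max(lost, 0) / sent_count

    # Example: the sender emitted 10 packets (0..9); packets 3 and 7 were lost.
    received = [0, 1, 2, 4, 5, 6, 8, 9]
    print(f"loss rate: {loss_rate(received, 10):.0%}")   # -> 20%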

Reliability: some multimedia applications require real-time processing, which makes packet retransmission impossible. Multimedia applications usually employ recovery mechanisms, such as Forward Error Correction (FEC), to deal with packet loss. Most multimedia applications are error tolerant to a certain extent. However, a few multimedia applications, such as distance-learning examinations or tele-surgery, are sensitive to packet loss; the successful delivery of all packets of the multimedia content in such applications is vital. System reliability depends on many factors in the network. Reliability is inversely proportional to the failure rate, meaning that reliability deteriorates as the failure rate rises.
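FEC schemes vary widely; as a minimal illustration of the idea, the sketch below adds a single XOR parity packet per block, which lets a receiver reconstruct any one lost packet without retransmission (production codes such as Reed-Solomon are considerably more powerful).

    from functools import reduce

    def xor_parity(block):
        """Parity packet: byte-wise XOR of all packets in the block
        (packets are assumed to be padded to equal length)."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), block)

    def recover_one(block_with_gap, parity):
        """Rebuild a single missing packet (marked None) by XOR-ing the
        parity packet with all surviving packets."""
        survivors = [p for p in block_with_gap if p is not None]
        missing = xor_parity(survivors + [parity])
        return [p if p is not None else missing for p in block_with_gap]

    packets = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_parity(packets)
    damaged = [b"AAAA", None, b"CCCC"]              # the second packet was lost in transit
    print(recover_one(damaged, parity))             # -> [b'AAAA', b'BBBB', b'CCCC']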

Frame size defines the size of the video image on the display screen. A bigger frame size requires a higher bandwidth. In QoS adaptation, the frame size can be modified according to the available QoS support. For example, if a video is transmitted at a frame size of 800 × 600 pixels and network congestion is experienced, the frame size can be reduced to 640 × 480 pixels to lower the bandwidth requirement for the video transmission.
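Under the simplifying assumption that the bit rate scales roughly with the pixel count at a fixed frame rate and compression level (an assumption for illustration, not a property of any particular codec), the effect of such a reduction can be estimated as follows:

    def scaled_bitrate(bitrate_kbps, old_res, new_res):
        """Rough estimate: the bit rate scales with the number of pixels per
        frame, assuming frame rate and compression efficiency stay the same."""
        old_pixels = old_res[0] * old_res[1]
        new_pixels = new_res[0] * new_res[1]
        return bitrate_kbps * new_pixels / old_pixels

    # Dropping from 800 x 600 to 640 x 480 cuts the pixel count, and hence the
    # approximate bandwidth requirement, to 64% of the original.
    print(f"{scaled_bitrate(2000, (800, 600), (640, 480)):.0f} kbps")   # -> 1280 kbps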

Frame rate defines the number of frames sent to the network per unit of time. A higher frame rate requires higher bandwidth. If QoS adaptation is required, the frame rate can be modified to lessen the bandwidth requirements at the cost of video quality.

Image clarity refers to the perceptual quality of the image that a user perceives for a certain delivery of multimedia content. Human eyes are less sensitive to the high-frequency components of an image, so those components can be suppressed without any noticeable loss in image quality.

Audio quality is determined by the sampling rate and the encoding bit rate. A higher sampling rate or bit rate renders better audio quality. For example, audio encoded at 128 kbps is higher in quality than audio encoded at 16 kbps. Usually, 64 kbps is adequate for audio conversation; however, 128 kbps or higher is required for stereo music content.
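For uncompressed audio, the bit rate follows directly from the sampling parameters, as sketched below; compressed formats such as MP3 reach the quoted 64-128 kbps figures from much higher raw rates.

    def pcm_bitrate_kbps(sample_rate_hz, bits_per_sample, channels):
        """Bit rate of uncompressed PCM audio: samples/s x bits/sample x channels."""
        return sample_rate_hz * bits_per_sample * channels / 1000

    print(f"telephone-quality mono (8 kHz, 8-bit): {pcm_bitrate_kbps(8000, 8, 1):.0f} kbps")    # 64 kbps
    print(f"CD-quality stereo (44.1 kHz, 16-bit): {pcm_bitrate_kbps(44100, 16, 2):.0f} kbps")   # ~1411 kbps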

Audio Confidentiality and Integrity dictates the security and access-right restrictions for an audio stream. Depending on the media content, this parameter may or may not be of great importance. For example, security is vital for an audio conference between two campuses of a company discussing a new product line, whereas it is not important for a news broadcast over the Internet.

Video Confidentiality and Integrity defines the security and access-right restrictions of video (a sequence of images). Like audio confidentiality, video confidentiality may or may not be important, depending on the multimedia content and the business model used.

Realization of QoS

To realize and implement QoS parameters in a network, the following traffic-handling mechanisms need to be explained: packet classification, congestion management, congestion avoidance, traffic shaping and policing, and link efficiency management.

Packet or traffic classification identifies and splits traffic into different classes and marks them accordingly. Packet classification allows for different treatments of transmitted data, for example giving audio packets higher priority if the underlying network supports it. Figure 1 is an example of classification. Traffic classes can be determined in several ways, including the physical ingress interface, the ISO/OSI Layer 2 or Layer 3 address, the Layer 4 port number, or the Uniform Resource Locator (URL).
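As a toy illustration of classification by Layer 4 port number, one of the criteria listed above, the sketch below sorts packets into classes that a scheduler could then treat differently; the port-to-class mapping is purely hypothetical.

    # Hypothetical mapping of destination ports to traffic classes.
    PORT_CLASSES = {
        5004: "voice",          # e.g. an RTP voice stream
        554:  "video",          # e.g. RTSP-controlled video
        80:   "best-effort",    # ordinary web traffic
    }

    def classify(dst_port):
        """Assign a traffic class based on the destination port; anything
        unrecognized falls into the default best-effort class."""
        return PORT_CLASSES.get(dst_port, "best-effort")

    for port in (5004, 80, 22):
        print(f"port {port} -> class '{classify(port)}'")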

Congestion management prioritizes traffic based on markings. It encompasses several mechanisms and algorithms, such as FIFO, PQ, CQ, FBWFQ, CBWFQ, IP RTP, and LLQ. First-in first-out (FIFO) is the default algorithm used for congestion management; it forwards packets in order of arrival when the network is not congested. FIFO may drop some important packets because of its limited buffer space. Priority queuing (PQ) places packets into four levels based on assigned priority: high, medium, normal, and low. Custom queuing (CQ) reserves a percentage of an interface's available bandwidth for each selected traffic type, across up to 16 queues. Weighted fair queuing (WFQ) is designed to minimize configuration effort and automatically adapts to changing network traffic conditions. It schedules interactive traffic to the front of the queue to reduce response time, and fairly shares the remaining bandwidth among high-bandwidth flows. Flow-based weighted fair queuing (FBWFQ) allows each queue to be serviced fairly in terms of byte count. Class-based weighted fair queuing (CBWFQ) extends the standard WFQ function by providing support for user-defined traffic classes. IP Real-time Transport Protocol queuing (IP RTP), also called PQ-WFQ, provides strict priority to time-sensitive traffic, such as voice traffic. Low latency queuing (LLQ) is PQ + CBWFQ; it also provides strict priority to time-sensitive traffic.
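The sketch below illustrates the simplest of these schedulers, strict priority queuing, in which packets are always drained from the highest non-empty priority level first; it is a minimal model for illustration, not any vendor's implementation.

    from collections import deque

    # Four priority levels, as in PQ: high, medium, normal, and low.
    LEVELS = ("high", "medium", "normal", "low")

    class PriorityQueuing:
        def __init__(self):
            self.queues = {level: deque() for level in LEVELS}

        def enqueue(self, packet, level):
            self.queues[level].append(packet)

        def dequeue(self):
            """Always serve the highest-priority non-empty queue first."""
            for level in LEVELS:
                if self.queues[level]:
                    return self.queues[level].popleft()
            return None

    pq = PriorityQueuing()
    pq.enqueue("web page", "normal")
    pq.enqueue("voice frame", "high")
    pq.enqueue("backup chunk", "low")
    print([pq.dequeue() for _ in range(3)])   # voice frame first, then web page, then backup chunk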

Congestion avoidance predicts and avoids congestion on the network. It is achieved by packet-dropping techniques. Random Early Detection (RED), Weighted Random Early Detection (WRED), Random Early Detection with In/Out (RIO), Adaptive Random Early Detection (ARED), and Flow Random Early Detection (FRED) are the main congestion-avoidance algorithms.

The goal of RED is to control the average queue size by alerting end hosts when to temporarily slow down their transmission rate. RED avoids congestion by monitoring the traffic load at points in the network and stochastically discarding packets before a queue becomes full. WRED combines the RED algorithm with a weight that is decided by the IP Precedence. This effectively assigns higher-precedence traffic flows lower drop rates, and thus QoS is guaranteed. RIO uses packet marking to modify the RED algorithm on a packet-by-packet basis. RIO assumes that packets have already passed through an upstream marker, and a single bit in the packet header signifies whether the packet is "In" or "Out" of profile. More specifically, when the packet is within the limits of a specified policy, it is marked as "In" profile; when the packet crosses the limits of the specified policy, it is marked as "Out" of profile. ARED addresses RED's limitations by modifying the RED parameters based on recent congestion history. FRED provides greater fairness to all flows through an interface with regard to how packets are dropped. With WRED, packets are dropped based on the average queue length: when the average queue length exceeds the minimum threshold, the router interface begins to drop arriving packets regardless of the type of flow to which they belong. Therefore, WRED applies the same loss rate to all kinds of packet flows.
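In outline, RED keeps an exponentially weighted average of the queue length and drops arriving packets with a probability that rises linearly between a minimum and a maximum threshold. The following simplified sketch shows that decision; the thresholds, weight, and maximum probability are illustrative values, not recommended settings.

    import random

    def update_avg(avg, current_qlen, weight=0.002):
        """Exponentially weighted moving average of the instantaneous queue length."""
        return (1 - weight) * avg + weight * current_qlen

    def red_drop(avg_qlen, min_th=20, max_th=60, max_p=0.1):
        """Simplified RED decision: never drop below min_th, always drop above
        max_th, and drop with linearly increasing probability in between."""
        if avg_qlen < min_th:
            return False
        if avg_qlen >= max_th:
            return True
        p = max_p * (avg_qlen - min_th) / (max_th - min_th)
        return random.random() < p

    avg = 0.0
    for sample in (10, 30, 50, 70):
        avg = update_avg(avg, sample, weight=0.5)   # large weight only to make the example visible
        print(f"average queue length {avg:.1f}: drop arriving packet? {red_drop(avg)}")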

Traffic shaping and policing can also be used within certain types of networks to manage ingress and egress traffic and data flow. The main reasons for using traffic shaping are controlling access to available bandwidth, ensuring that traffic conforms to the policies established for it, and regulating the flow of traffic in order to avoid congestion.
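Shaping and policing are commonly implemented with a token bucket: tokens accumulate at the contracted rate up to a burst limit, and a packet is conformant only if enough tokens are available. A minimal sketch follows; the rate and bucket size are illustrative.

    import time

    class TokenBucket:
        """Minimal token-bucket shaper: tokens accrue at `rate` bytes per second,
        up to `capacity` bytes; a packet is conformant only if enough tokens remain."""

        def __init__(self, rate, capacity):
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True
            # A shaper would delay the packet here; a policer would drop or re-mark it.
            return False

    bucket = TokenBucket(rate=125_000, capacity=10_000)    # ~1 Mbps contract, 10 kB burst
    print([bucket.allow(1500) for _ in range(8)])          # packets beyond the burst are non-conformant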

Link efficiency management uses compression techniques (see the chapter on the MPEG coding standard) and/or media scaling to increase throughput, and by doing so improves link efficiency.

Network Scalability and Application Layer Multicasting

The traditional Internet unicasting service is based on one-to-one data communication. Unicasting is well suited when communication occurs between two specific hosts (e.g., e-mail, web browsing). However, many applications today require one-to-many and many-to-many communications, among them shared virtual spaces, multimedia distribution services, video conferencing, etc. For these kinds of applications, which involve transmitting a large amount of data to multiple recipients, the unicast service is no longer efficient. For example, consider a multimedia content provider that uses unicasting to distribute a 2 Mbps streamed video. To support 1,000 viewers, the server needs a 2 Gbps access link. Therefore, the server interface capacity can very well be a significant bottleneck, limiting the number of unicast video streams per video server. Replicated unicast transmissions also eat up a lot of bandwidth within the network, which is another significant limitation.

To overcome this unnecessary consumption of bandwidth, an efficient one-to-many communication scheme, multicasting, has been developed. Multicasting is an efficient way of distributing data from one sender to multiple receivers with minimal data duplication. Traditionally, multicasting is implemented at the network layer, where network routers define the data delivery tree; as packets flow through this tree, they are replicated by routers at its branch points. This is the IP Multicast architecture. IP Multicasting eliminates traffic redundancy and improves bandwidth usage in group data delivery on wide-area networks. However, although IP Multicasting has been available for years, today's Internet service providers are still reluctant to provide a wide-area multicast routing service, for technical as well as non-technical reasons. The first of these reasons is scalability: IP Multicasting is designed for a hierarchical routing infrastructure and does not scale well in terms of supporting a large number of concurrent groups. Secondly, there are deployment hurdles: current deployment practices of IP Multicasting require manual configuration at routers to form the MBone, which makes the MBone expensive to set up and maintain. Lastly, there are marketing reasons: the traditional charging model is that only downstream traffic is charged, but in multicasting any participant may introduce a large amount of upstream traffic (e.g., video conferencing), so the charging model needs to be redefined.

Therefore, an alternative has been proposed that shifts multicast support to the end systems: Application-Layer Multicasting (ALM). In ALM, data packets are replicated at end-hosts, not routers. The end-hosts form an overlay network, and the goal of ALM is to construct and maintain an efficient overlay for data transmission. Application-Layer Multicasting has advantages over IP Multicast. Since the routing information is maintained by the application, it is scalable and supports a large number of concurrent groups; and since it needs no infrastructure support, it is fully deployable on the Internet. By using ALM, it is possible for content providers to deliver bandwidth-intensive content, such as TV programs, to a vast number of clients in real time via the Internet. This was impractical before because the bottleneck bandwidth between content providers and consumers is considerably "narrower" than the natural consumption rate of such media. The network-layer multicasting and application-layer multicasting models are shown in Figure 3.
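The server-side arithmetic in the unicast example above, together with the corresponding source load when each receiver forwards the stream to a few peers in an ALM overlay, can be sketched as follows; the fan-out value is illustrative.

    def unicast_server_rate(stream_mbps, viewers):
        """With unicast, the source sends one full copy of the stream per viewer."""
        return stream_mbps * viewers

    def alm_source_rate(stream_mbps, fanout):
        """With application-layer multicast, the source only feeds its direct
        children in the overlay tree; the receivers forward the stream onward."""
        return stream_mbps * fanout

    print(f"unicast: {unicast_server_rate(2, 1000):.0f} Mbps at the server")    # 2000 Mbps = 2 Gbps
    print(f"ALM (fan-out 4): {alm_source_rate(2, 4):.0f} Mbps at the source")   # 8 Mbps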

However, ALM does come with a tradeoff: more bandwidth consumption and delay (compared to IP Multicasting) for the sake of supporting more users and better scalability. It has been shown, though, that ALM-based algorithms can have "acceptable" performance penalties with respect to IP Multicasting and when compared to other practical solutions.
