Thursday, April 10, 2008

MULTICASTING

Interactive applications such as video conferencing, delivery of live stock quotes, and shared whiteboard applications rely on multicast traffic, both within intranets and on the Internet. Multicasting saves bandwidth by forcing the network to replicate packets only when necessary. In addition, multicasting allows hosts to dynamically join and leave groups at any time, unrestricted by the number of members in the group or by the location of the group within the network.
Fundamental to multicasting is the concept of a process joining a multicast group on a given interface on a host. Membership in a multicast group on a given interface is dynamic (that is, it changes over time as processes join and leave the group). Thus end users can dynamically join multicast groups based on the applications they execute.
Previously, you learned that Layer 2 switches operate at the data link or Media Access Control (MAC) layer. Layer 2 LAN switches process multicast MAC addresses like Layer 2 broadcasts, so multicast traffic is flooded to all switch ports. At this point, the effectiveness of both Layer 2 switching and IP multicasting is reduced.
Layer 2 switching and IP multicasting are rapidly gaining popularity and deployment within campus environments. Layer 2 switching has been embraced for providing scalable bandwidth for end users and servers, and IP multicasting has been embraced for providing an efficient, multipoint, Layer 3 transport mechanism for delivering audio, video, and data to networked devices. Applications such as IP/TV®, and multicast backbone (MBONE) tools such as VIC and VAT, for example, rely on IP multicasting to deliver multimedia traffic to groups of IP-addressable devices.
With a multicast design, applications can send one copy of each packet and address it to a group address; the client decides whether or not to listen to the multicast address. Multicasting is helpful in controlling network traffic while curbing network and host processing by eliminating traffic redundancy.
The following figure shows an example of multicast transmission:
In a multicast environment, a video server needs to send only one video stream to a multicast address; any number of clients can listen to the multicast address and receive the video stream. In this scenario, the server requires only 1.5 Mbps of bandwidth and leaves the rest of the bandwidth free for other uses.
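As a rough illustration of the sending side, the Python sketch below addresses a single datagram to a multicast group; the group address 239.1.1.1 and port 5004 are arbitrary values chosen for the example, not addresses taken from this discussion.

import socket

GROUP = "239.1.1.1"   # hypothetical group address used only for illustration
PORT = 5004           # hypothetical UDP port

# The server addresses one datagram to the group; the network replicates it
# toward however many clients have joined, so the sender's cost stays constant.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep it on the local net
sock.sendto(b"one copy of the video data", (GROUP, PORT))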
Multicast transmission can be implemented at both the data link layer (Layer 2) and the network layer (Layer 3). Ethernet and Fiber Distributed Data Interface (FDDI) support unicast, multicast, and broadcast addresses. Token Ring also supports the concept of multicast addressing but uses a different technique. Token Rings have functional addresses that can be used to address groups of receivers. If the scope of an application is limited to a single LAN, using a data link layer multicast technique is sufficient. However, many multipoint applications are valuable precisely because they are not limited to a single LAN.
When a multipoint application is extended to a campus environment that consists of different media types, such as Ethernet, Token Ring, FDDI, Asynchronous Transfer Mode (ATM), Frame Relay, Switched Multimegabit Data Service (SMDS), and other networking technologies, it is best to implement multicast at the network layer (Layer 3) so that the traffic can be contained to only those segments that request it.
The set of hosts listening to a particular IP multicast address is called a host group. A host group can span multiple networks. Membership in a group is dynamic—hosts may join and leave host groups.
Some multicast group addresses are assigned as well-known addresses by the Internet Assigned Numbers Authority (IANA). These groups are called permanent host groups, similar in concept to the well-known TCP and User Datagram Protocol (UDP) port numbers.
In addition to the reserved Class D addresses, the IANA owns a block of Ethernet addresses whose high-order 24 bits, in hexadecimal, are 00:00:5e. This means the block includes addresses in the range 00:00:5e:00:00:00 through 00:00:5e:ff:ff:ff. The IANA allocates half of this block for multicast addresses.
Given that the first byte of any Ethernet address must be 01 to specify a multicast address, then the Ethernet addresses corresponding to IP multicasting are in the range 01:00:5e:00:00:00 through 01:00:5e:7f:ff:ff. This allocation allows for 23 bits in the Ethernet address to correspond to the IP multicast group ID. The mapping places the low-order 23 bits of the multicast group ID into these 23 bits of the Ethernet address, as shown in the following figure:
Because the upper five bits of the 28-bit multicast group ID are ignored in this mapping, the mapping is not unique: thirty-two different multicast group IDs map to each Ethernet address. Since the mapping is not unique, the device driver or the IP module must perform filtering, because the interface card may receive multicast frames in which the host is not interested.
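A small Python sketch of this address mapping (the function name and sample addresses are chosen only for illustration) shows both the 01:00:5e prefix and the 32-to-1 ambiguity:

import ipaddress

def multicast_mac(group):
    """Map an IPv4 multicast group address to its Ethernet MAC address by
    placing the low-order 23 bits of the group ID behind the 01:00:5e prefix."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    mac = 0x01005E000000 | low23
    return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

# 224.1.1.1 and 225.1.1.1 differ only in bits this mapping ignores, so both
# yield 01:00:5e:01:01:01 -- the 32-to-1 ambiguity described above.
print(multicast_mac("224.1.1.1"), multicast_mac("225.1.1.1"))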
The sending process specifies a destination IP address that is a multicast address, and then the device driver converts this address to the corresponding Ethernet address and sends it. The receiving processes must notify their IP layers that they want to receive datagrams destined for a given multicast address, and the device driver must enable reception of these multicast frames. This process is handled by joining a multicast group.
When a multicast datagram is received by a host, it must deliver a copy to all the processes that belong to that group. This scenario is different from UDP, where a single process receives an incoming unicast UDP datagram. With multicast, it is possible for multiple processes on a given host to belong to the same multicast group.
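On the receiving side, a minimal Python sketch of joining a group might look like the following; the group address and port are the same hypothetical values as before, and the details of how a given operating system notifies the driver vary.

import socket
import struct

GROUP = "239.1.1.1"   # hypothetical group address used only for illustration
PORT = 5004           # hypothetical UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Tell the IP layer we want datagrams for this group; the kernel issues the
# IGMP membership report and the driver begins accepting the mapped MAC frames.
membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

data, sender = sock.recvfrom(1500)   # blocks until a multicast datagram arrives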
Complications arise when extending multicasting beyond a single physical network. In a Layer 2 design, destination multicast MAC addresses are processed just like a Layer 2 broadcast and are delivered to all ports. All switches that receive the multicast frame subsequently forward it out all their interfaces, and so on. In many cases, the result is excess traffic. Fortunately, routers at Layer 3 remedy the problem by not propagating Layer 2 broadcast or multicast frames, but since routers do not inherently forward multicast traffic, a protocol must be implemented to instruct routers when to forward and when not to forward multicast traffic. This function is handled by IGMP, the Internet Group Management Protocol, which tells the router whether any hosts on a given physical network belong to a given multicast group.
The IP multicast traffic for a particular source and destination group pair is transmitted via a spanning tree that connects all the hosts in the group. Different IP multicast routing protocols use different techniques to construct these multicast spanning trees. After a multicast spanning tree is constructed, all multicast traffic is distributed over it.
MULTICAST ROUTING PROTOCOLS
Three multicast routing protocols are in common use:
Distance Vector Multicast Routing Protocol (DVMRP)
Multicast Open Shortest Path First (MOSPF)
Protocol Independent Multicast (PIM)
The goal in each is to establish paths in the network so that multicast traffic can effectively reach all group members.
Described in RFC 1075 (with an Internet draft update) and widely used on the MBONE, DVMRP uses a technique known as Reverse Path Forwarding. When a router receives a packet, it floods the packet out all interfaces except the one that leads back to the source of the packet, allowing a datastream to reach all LANs (possibly multiple times).
If a router is attached to a set of LANs that do not want to receive a particular multicast group, the router can send a "prune" message back up the distribution tree to stop subsequent packets from traveling where there are no members. DVMRP periodically refloods in order to reach any new hosts that want to receive a particular group. There is a direct relationship between the time it takes for a new receiver to get the datastream and the frequency of flooding.
DVMRP implements its own unicast routing protocol in order to determine which interface leads back to the source of the datastream. This unicast routing protocol is very much like Routing Information Protocol (RIP) and is based purely on hop count. As a result, the path that the multicast traffic follows may not be the same as the path that the unicast traffic follows.
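The flood-and-prune behavior described above can be modeled very roughly in Python; rpf_interface() stands in for a lookup in DVMRP's own hop-count-based routing table, and all names here are illustrative assumptions rather than any router's actual API.

# Illustrative model of Reverse Path Forwarding with pruning.
def forward_multicast(source, group, arrival_if, rpf_interface, interfaces, pruned):
    """Return the interfaces a multicast packet should be flooded out of."""
    if arrival_if != rpf_interface(source):
        return []        # failed the RPF check: drop to avoid loops and duplicates
    return [i for i in interfaces
            if i != arrival_if and (source, group, i) not in pruned]

# Example: a packet from 10.0.0.1 for group 239.1.1.1 arrives on "eth0";
# "eth2" has sent a prune for this group, so only "eth1" gets a copy.
print(forward_multicast("10.0.0.1", "239.1.1.1", "eth0",
                        lambda src: "eth0",
                        ["eth0", "eth1", "eth2"],
                        {("10.0.0.1", "239.1.1.1", "eth2")}))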
DVMRP has significant scaling problems because it floods frequently, a limitation made worse by the fact that early implementations of DVMRP did not implement pruning. As a result, DVMRP deployments typically use tunneling mechanisms to limit the effects of flooding and, in some cases, of the missing pruning support. DVMRP has been used to build the MBONE (a multicast backbone across the public Internet) by building tunnels between DVMRP-capable machines. The MBONE is used widely in the research community to transmit the proceedings of various conferences and to permit desktop conferencing. In the near future, the MBONE will move away from DVMRP, opting to use PIM instead because of its greater efficiency.
MOSPF was defined as an extension to the OSPF unicast routing protocol. OSPF works by having each router in a network understand all the available links in the network; each OSPF router calculates routes from itself to all possible destinations. MOSPF works by including multicast information in OSPF link-state advertisements, so an MOSPF router learns which multicast groups are active on which LANs. MOSPF builds a distribution tree for each source/group pair, computing a tree only for sources actively sending to the group. The tree state is cached on all routers, and trees must be recomputed when a link-state change occurs or when the cache times out. This recomputation can eventually slow multicast performance, depending upon the size of the network and the volatility of the multicast groups.
MOSPF works only in internetworks that are using Open Shortest Path First (OSPF) protocol. MOSPF is best suited for environments that have relatively few source/group pairs active at any given time. It will work less well in environments that have many active sources or environments that have unstable links.
Unlike MOSPF, which is OSPF-dependent, PIM works with all existing unicast routing protocols. And unlike DVMRP, which has inherent scaling problems, PIM offers two different types of multipoint traffic distribution patterns to address multicast routing scalability: dense mode and sparse mode.
PIM dense mode is most useful when:
Senders and receivers are in close proximity to one another
There are few senders and many receivers
The volume of multicast traffic is high
The stream of multicast traffic is constant
Dense-mode PIM uses Reverse Path Forwarding and looks much like DVMRP. The most significant difference between DVMRP and dense-mode PIM is that PIM works with whatever unicast protocol is being used.
In dense mode (shown in the following figure), PIM floods the network and prunes back based on multicast group membership information. In a LAN TV multicast environment, for instance, dense mode is effective because the probability is high that there are group members on most or all subnets, so flooding the network requires little pruning. Cisco IOS® software supports PIM dense mode.
PIM sparse mode is most useful when:
There are few receivers in a group
Senders and receivers are separated by WAN links
The type of traffic is intermittent
Sparse-mode PIM (shown in the following figure) is optimized for environments where there are many multipoint datastreams and each multicast stream goes to a relatively small number of the LANs in the internetwork.
For these types of groups, Reverse Path Forwarding techniques make inefficient use of the network bandwidth. Sparse-mode PIM works by defining a rendezvous point. When a sender wants to send data, it first sends to the rendezvous point; when a receiver wants to receive data, it registers with the rendezvous point. When the datastream begins to flow from sender to rendezvous point to receiver, the routers in the path optimize the path automatically to remove any unnecessary hops. Sparse-mode PIM assumes that no hosts want the multicast traffic unless they specifically ask for it. Cisco IOS software supports PIM sparse mode.
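A toy Python model of the rendezvous-point idea follows; the class and method names are invented for illustration, and real PIM sparse-mode state (and the later switch to a shortest-path tree) is considerably more involved.

# Toy model of a rendezvous point: receivers register interest in a group,
# and a sender's traffic is relayed through the RP only to those receivers.
class RendezvousPoint:
    def __init__(self):
        self.receivers = {}                      # group -> set of receiver ids

    def join(self, group, receiver):
        """A receiver registers its interest in a group with the RP."""
        self.receivers.setdefault(group, set()).add(receiver)

    def register(self, group, packet, deliver):
        """A sender's traffic arrives at the RP and is relayed only to joiners."""
        for receiver in self.receivers.get(group, ()):
            deliver(receiver, packet)

# Example usage: one join, then a packet relayed to that single receiver.
rp = RendezvousPoint()
rp.join("239.1.1.1", "client-B")
rp.register("239.1.1.1", b"video frame", lambda r, p: print(r, len(p), "bytes"))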

The Class D addressing scheme, IGMP, and PIM offer a well-defined mechanism for distributing IP multicast traffic in Cisco-routed environments. However, the mechanism requires a distributed Layer 3 framework. With distributed Layer 3 devices, a variety of Layer 3 mechanisms are available to control IP multicast transmissions. Simply disabling multicasting on a particular router interface, for example, helps to contain a multicast transmission. Similarly, manually reducing the Time To Live (TTL) on a particular router interface can also help to contain multicast transmission. At some point, however, it is inevitable that the IP multicast traffic will traverse a Layer 2 switch, especially in campus environments.
IGMP
IP hosts use Internet Group Management Protocol (IGMP) to report their group membership to directly connected multicast routers. Defined in RFC 1112, Host Extensions for IP Multicasting, IGMP is an integral part of IP.
IGMP uses group addresses, which are Class D IP addresses. The high-order four bits of a Class D address are 1110. This means that host group addresses can be in the range 224.0.0.0 to 239.255.255.255.
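A quick Python check of this range, for illustration only:

def is_class_d(address):
    """True when the dotted-quad IPv4 address is a Class D (multicast) address,
    i.e. its high-order four bits are 1110 (first octet 224 through 239)."""
    return 224 <= int(address.split(".")[0]) <= 239

assert is_class_d("224.0.0.1") and is_class_d("239.255.255.255")
assert not is_class_d("192.168.1.1")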
CGMP
Cisco Group Management Protocol (CGMP) is a dynamic, Cisco-proprietary protocol that allows Catalyst switches to leverage IGMP information on Cisco routers to make Layer 2 forwarding decisions. CGMP manages multicast traffic in Catalyst 5000 series switches by allowing directed switching of IP multicast traffic within a network at rates greater than one million packets per second.
Although the primary thrust with CGMP is to enable IP multicasting in Layer 2 switches, it is not IP-specific. CGMP will work with other Layer 3 multicast technologies such as Simple Multicast Routing Protocol (SMRP) to ensure effective transmission in Layer 2 Catalyst switch fabrics.
If a LAN switch receives a frame with a broadcast or multicast destination address, the Layer 2 switch broadcasts the frame to all ports; this is exactly what happens to IP multicast traffic when a Layer 2 switch receives it. It is important to note that CGMP operation does not adversely affect Layer 2 forwarding performance.
As you learned earlier, IP multicast traffic maps to a corresponding Layer 2 multicast address, causing the traffic to be delivered to all ports of a Layer 2 switch. Consider video server A and video client B in the figure below.
The video client wants to watch a 1.5-Mbps IP multicast-based video feed coming from a corporate video server. The process starts by sending an IGMP join message toward the video server. The client's next hop router logs the IGMP join message and uses PIM to add this segment to the PIM distribution tree. At this point, IP multicast traffic is transmitted downstream toward the video client.
The switch detects the incoming traffic and examines the destination MAC address to determine where the traffic should be forwarded. Since the destination MAC address is a multicast address and there are no entries in the switching table for where the traffic should go, the 1.5-Mbps video feed is simply sent to all ports, clearly an inefficient strategy.
Using the 1.5-Mbps IP multicast video feed again illustrates the CGMP advantage. As before, the process starts by sending an IGMP join message toward the video server. But this time, when the next-hop router receives the IGMP join message, it records the source MAC address of the IGMP message and issues a CGMP join message downstream to the Catalyst 5000 switch.
Without CGMP, multicast traffic is flooded to the entire Layer 2 switch fabric. The upstream router prevents the multicast traffic from hitting the campus backbone, but does nothing to control the traffic in the switch fabric.
With CGMP, however, the multicast traffic can be controlled, not only in the Catalyst 5000 switch directly connected to the router, but also in the downstream Catalyst switches. This feature is accomplished because CGMP uses a well-known Layer 2 multicast address that all Catalyst switches listen to.
The Catalyst 5000 switch uses the CGMP message to dynamically build an entry in the switching table that maps the multicast traffic to the switch port of the client. In this example, the 1.5-Mbps video feed is delivered only to those switch ports that are in the switching table, sparing other ports that don't need the data.
CGMP requires only a software upgrade on the Catalyst 5000 series switch and at least one Cisco router running software Release 11.1(3) or later, and offers the following benefits:
Allows IP multicast packets to be switched only to those ports that have IP multicast clients
Saves network bandwidth on user segments by not propagating spurious IP multicast traffic
Does not require changes to the end-host systems
Does not incur the overhead of creating a separate virtual LAN (VLAN) for each multicast group in the switched network
CGMP software components run on both the router and the Catalyst 5000 series switch. A CGMP-capable IP multicast router sees all IGMP packets and can inform the Catalyst 5000 series switch when specific hosts join or leave IP multicast groups.
When the CGMP-capable router receives an IGMP control packet, it creates a CGMP packet that contains the request type (either join or leave), the multicast group address, and the actual MAC address of the host. The router then sends the CGMP packet to a well-known address to which all Catalyst 5000 series switches listen. When a switch receives the CGMP packet, the Supervisor Engine module interprets the packet and modifies the encoded address recognition logic (EARL) forwarding table automatically, without user intervention.
You can explicitly set up multicast groups by entering the set cam static command. User-specified multicast group settings are static, whereas multicast groups learned through CGMP are dynamic. If you specify group membership for a multicast group address, your static setting supersedes any automatic manipulation by CGMP.
Multicast group membership lists can consist of both user-defined and CGMP-learned settings. If a spanning-tree VLAN topology changes, the CGMP-learned multicast groups on the VLAN are purged and the CGMP-capable router generates new multicast group information. If a CGMP-learned port link is disabled for any reason, CGMP removes that port from any multicast group memberships.
When a host wants to join an IP multicast group, it sends an IGMP join message specifying its MAC address and which IP multicast group it wants to join. The CGMP-capable router then builds a CGMP join message and multicasts the join message to the well-known address to which the Catalyst 5000 series switches listen.
Upon receipt of the join message, each Catalyst 5000 series switch searches its EARL table to determine if it contains the MAC address of the host asking to join the multicast group. If a switch finds the MAC address of the host in its EARL table associating the MAC address with a nontrunking port, the switch creates a multicast forwarding entry in the EARL forwarding table. The host associated with that port then receives multicast traffic for that multicast group. As explained previously, the EARL automatically learns the MAC addresses and port numbers of the IP multicast hosts.
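The switch-side handling just described can be sketched approximately in Python; this is a simplified illustrative model, not the Catalyst implementation, and the MAC addresses and port names in the example are invented.

# Simplified model of switch-side CGMP handling: a CGMP join adds the client's
# port to the group's forwarding entry, but only if the host MAC is already
# known in the EARL table on a (nontrunking) port.
class CgmpSwitch:
    def __init__(self, earl_table):
        self.earl = earl_table          # host MAC -> switch port, learned normally
        self.groups = {}                # group MAC -> set of ports to forward to

    def cgmp_join(self, group_mac, host_mac):
        port = self.earl.get(host_mac)
        if port is not None:
            self.groups.setdefault(group_mac, set()).add(port)

    def cgmp_leave_group(self, group_mac):
        self.groups.pop(group_mac, None)

    def forward_ports(self, group_mac):
        # With no entry the frame is flooded; with an entry it is constrained.
        return self.groups.get(group_mac, "flood to all ports")

# Example: the video client on port 3/7 joins group 01:00:5e:01:01:01.
switch = CgmpSwitch({"00:a0:c9:12:34:56": "3/7"})
switch.cgmp_join("01:00:5e:01:01:01", "00:a0:c9:12:34:56")
print(switch.forward_ports("01:00:5e:01:01:01"))   # {'3/7'}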
The CGMP-capable router sends periodic multicast-group queries. If a host wants to remain in a multicast group, it responds to the query from the router. If a host does not want to remain in the multicast group, it does not respond to the router query. If after a number of queries the router receives no reports from any host in a multicast group, the router sends a CGMP command to the Catalyst 5000 series switch, telling it to remove the multicast group from its forwarding tables.
If there are other hosts in the same multicast group and they do respond to the multicast-group query, the router does not tell the switch to remove the group from its forwarding tables. The router does not remove a multicast group from the forwarding tables of the switch until none of the hosts in the group respond to the multicast-group query.
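A rough Python model of this router-side query logic follows; the three-query limit and all names are assumptions made for illustration.

# After a number of consecutive unanswered queries for a group (three is an
# assumed value), the router tells the switch to remove the group.
MAX_MISSED_QUERIES = 3

class QueryTracker:
    def __init__(self, send_cgmp_delete):
        self.missed = {}                    # group -> consecutive unanswered queries
        self.send_cgmp_delete = send_cgmp_delete

    def report_received(self, group):
        self.missed[group] = 0              # any member's report keeps the group alive

    def query_expired(self, group):
        self.missed[group] = self.missed.get(group, 0) + 1
        if self.missed[group] >= MAX_MISSED_QUERIES:
            self.send_cgmp_delete(group)    # no members responded: prune at the switch
            self.missed.pop(group)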
The CGMP fast-leave-processing feature allows the Catalyst 5000 series Supervisor Engine module to detect IGMP version 2 (IGMPv2) leave messages sent on the all-routers multicast address by hosts on any of the Supervisor Engine module ports. When the Supervisor Engine module receives a leave message, it starts a query-response timer. If this timer expires before a join message (an IGMP membership report) is received, the port is pruned from the multicast tree for the multicast group specified in the original leave message. Fast-leave processing ensures optimal bandwidth management for all hosts on a switched network, even when multiple multicast groups are in use simultaneously.
BROADCAST/MULTICAST SUPPRESSION
Broadcast/multicast suppression prevents switched ports on a LAN from being disrupted by a broadcast storm on one of the ports. A LAN broadcast storm occurs when broadcast or multicast packets flood the LAN, creating excessive traffic and degrading network performance.
Since switched LANs act as a single LAN, a broadcast storm on one port can adversely affect the entire LAN. Errors in the protocol-stack implementation or in the network configuration can cause a broadcast storm. Because Catalyst 5000 series LAN switches operate at Layer 2, broadcast/multicast suppression is a critical element in preventing network performance degradation.

The combination of the suppression threshold and the measurement time interval determines the granularity with which the broadcast/multicast suppression algorithm operates. A higher threshold allows more broadcast/multicast packets to pass through. Broadcast/multicast suppression is implemented in either hardware or software: hardware broadcast/multicast suppression is bandwidth-based, and software broadcast/multicast suppression is packet-based.
Hardware broadcast/multicast suppression circuitry in Catalyst 5000 series switches monitors packets passing from a port to the Catalyst 5000 switching bus. Using the Individual/Group bit in the packet destination address, the broadcast/multicast suppression circuitry determines if the packet is a unicast or broadcast/multicast packet. It keeps track of the current count of broadcast/multicast words within the one-second time interval, and when a threshold is reached, filters out subsequent broadcast/multicast packets.
Because hardware broadcast/multicast suppression uses a bandwidth-based method of measuring broadcast/multicast activity, the most significant implementation factor is setting the percentage of total available bandwidth that can be used by broadcast/multicast traffic.
Software broadcast/multicast suppression is supported in all Ethernet line cards that support hardware broadcast/multicast suppression. It is not available for use with ATM or FDDI cards. Because software broadcast/multicast suppression uses a packet-based method of measuring broadcast/multicast activity, the most significant implementation factor is setting a threshold value for the number of broadcast packets per second allowed.
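As an illustration of the packet-based approach, the Python sketch below counts broadcast/multicast packets in one-second intervals and suppresses any beyond a configured threshold; the names and exact windowing are assumptions, and the hardware method is bandwidth-based as described above.

import time

# Sketch of packet-based (software) suppression: count broadcast/multicast
# packets in each one-second window and drop any that exceed the threshold.
class BroadcastSuppressor:
    def __init__(self, packets_per_second):
        self.threshold = packets_per_second
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self, is_broadcast_or_multicast):
        if not is_broadcast_or_multicast:
            return True                          # unicast traffic is never suppressed
        now = time.monotonic()
        if now - self.window_start >= 1.0:       # start a new one-second interval
            self.window_start, self.count = now, 0
        self.count += 1
        return self.count <= self.threshold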
