
Ethernet Switching

Layer 2 bridging


As more nodes are added to an Ethernet physical segment, contention for the medium increases. Ethernet is a shared medium, which means only one node can transmit data at a time. Each additional node increases the demand on the available bandwidth and places an additional load on the medium. As the number of nodes on a single segment grows, the probability of collisions increases, resulting in more retransmissions. A solution to the problem is to break the large segment into parts, separating it into isolated collision domains.

To accomplish this, a bridge keeps a table of MAC addresses and their associated ports. The bridge then forwards or discards frames based on the table entries. The following steps illustrate the operation of a bridge:

  • The bridge has just been started so the bridge table is empty. The bridge just waits for traffic on the segment. When traffic is detected, it is processed by the bridge.
  • Host A is pinging Host B. Since the data is transmitted on the entire collision domain segment, both the bridge and Host B process the frame.
  • The bridge adds the source address of the frame to its bridge table. Since the address was in the source address field and the frame was received on port 1, the frame must be associated with port 1 in the table.
  • The destination address of the frame is checked against the bridge table. Since the address is not in the table, even though it is on the same collision domain, the frame is forwarded to the other segment. The address of Host B has not been recorded yet as only the source address of a frame is recorded.
  • Host B processes the ping request and transmits a ping reply back to Host A. The data is transmitted over the whole collision domain. Both Host A and the bridge receive the frame and process it.
  • The bridge adds the source address of the frame to its bridge table. Since the source address was not in the bridge table and the frame was received on port 1, the source address of the frame must be associated with port 1 in the table. The destination address of the frame is checked against the bridge table to see if its entry is there. Since the address is in the table, the port assignment is checked. The address of Host A is associated with the port the frame came in on, so the frame is not forwarded.
  • Host A is now going to ping Host C. Since the data is transmitted on the entire collision domain segment, both the bridge and Host B process the frame. Host B discards the frame as it was not the intended destination.
  • The bridge adds the source address of the frame to its bridge table. Since the address is already entered into the bridge table the entry is just renewed.
  • The destination address of the frame is checked against the bridge table to see if its entry is there. Since the address is not in the table, the frame is forwarded to the other segment. The address of Host C has not been recorded yet as only the source address of a frame is recorded.
  • Host C processes the ping request and transmits a ping reply back to Host A. The data is transmitted over the whole collision domain. Both Host D and the bridge receive the frame and process it. Host D discards the frame, as it was not the intended destination.
  • The bridge adds the source address of the frame to its bridge table. Since the address was in the source address field and the frame was received on port 2, the frame must be associated with port 2 in the table.
  • The destination address of the frame is checked against the bridge table to see if its entry is present. The address is in the table but it is associated with port 1, so the frame is forwarded to the other segment.
  • When Host D transmits data, its MAC address will also be recorded in the bridge table. This is how the bridge controls traffic between two collision domains.

These are the steps that a bridge uses to forward and discard frames that are received on any of its ports.
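The learning and filtering behavior described above can be sketched in a few lines of code. This is a minimal illustration, not a real bridge implementation; the two-port layout, MAC strings, and class name are invented for the example:

```python
# A minimal sketch of transparent-bridge learning and forwarding.
# The two ports and MAC addresses are illustrative, not from any real device.

class Bridge:
    def __init__(self, ports=(1, 2)):
        self.table = {}          # MAC address -> port
        self.ports = ports

    def receive(self, src_mac, dst_mac, in_port):
        # Learning: associate the frame's source address with the port
        # it arrived on (an existing entry is simply renewed).
        self.table[src_mac] = in_port

        # Forwarding decision: look up the destination address.
        out_port = self.table.get(dst_mac)
        if out_port == in_port:
            return None          # same segment: filter (discard) the frame
        if out_port is not None:
            return [out_port]    # known destination: forward to that port only
        # Unknown destination: flood out all other ports.
        return [p for p in self.ports if p != in_port]

bridge = Bridge()
# Host A (port 1) pings Host B (also port 1): B is unknown, so flood.
print(bridge.receive("AA", "BB", in_port=1))  # [2]
# Host B replies: A is now known on port 1, same segment, so filter.
print(bridge.receive("BB", "AA", in_port=1))  # None
```

Note that the flooding of unknown destinations is exactly why the first ping in the walkthrough crosses to the other segment even though Host B is local to port 1.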

Layer 2 switching


Generally, a bridge has only two ports and divides a collision domain into two parts. All decisions made by a bridge are based on MAC or Layer 2 addressing and do not affect the logical or Layer 3 addressing. Thus, a bridge will divide a collision domain but has no effect on a logical or broadcast domain. No matter how many bridges are in a network, unless there is a device such as a router that works on Layer 3 addressing, the entire network will share the same logical broadcast address space. A bridge will create more collision domains but will not add broadcast domains.

A switch is essentially a fast, multi-port bridge, which can contain dozens of ports. Rather than creating two collision domains, each port creates its own collision domain. In a network of twenty nodes, twenty collision domains exist if each node is plugged into its own switch port. If an uplink port is included, one switch creates twenty-one single-node collision domains. A switch dynamically builds and maintains a Content-Addressable Memory (CAM) table, holding all of the necessary MAC information for each port.

Switch operation

A switch is simply a bridge with many ports. When only one node is connected to a switch port, the collision domain on the shared media contains only two nodes. The two nodes in this small segment, or collision domain, consist of the switch port and the host connected to it. These small physical segments are called microsegments. Another capability emerges when only two nodes are connected. In a network that uses twisted-pair cabling, one pair is used to carry the transmitted signal from one node to the other node. A separate pair is used for the return or received signal. It is possible for signals to pass through both pairs simultaneously. The capability of communication in both directions at once is known as full duplex. Most switches are capable of supporting full duplex, as are most network interface cards (NICs). In full duplex mode, there is no contention for the media. Thus, a collision domain no longer exists. Theoretically, the bandwidth is doubled when using full duplex.

In addition to faster microprocessors and memory, two other technological advances made switches possible. Content-addressable memory (CAM) is memory that essentially works backwards compared to conventional memory. Entering data into the memory will return the associated address. Using CAM allows a switch to directly find the port that is associated with a MAC address without using search algorithms. An application-specific integrated circuit (ASIC) is a device consisting of undedicated logic gates that can be programmed to perform functions at logic speeds. Operations that might have been done in software can now be done in hardware using an ASIC. The use of these technologies greatly reduced the delays caused by software processing and enabled a switch to keep pace with the data demands of many microsegments and high bit rates.
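The advantage CAM gives a switch can be pictured with an ordinary hash table, which likewise returns a value directly from a key instead of scanning entries one by one. The addresses and port numbers below are made up for illustration:

```python
# CAM lookup analogy: a hash table keyed by MAC address returns the
# outgoing port directly, replacing a linear search over table entries.

cam_table = {
    "00:1b:44:11:3a:b7": 3,
    "00:1b:44:11:3a:b8": 7,
}

def lookup_port(mac):
    # One constant-time lookup, no search algorithm needed.
    # None means the address is unknown and the frame must be flooded.
    return cam_table.get(mac)

print(lookup_port("00:1b:44:11:3a:b7"))  # 3
print(lookup_port("ff:ff:ff:ff:ff:ff"))  # None
```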

Latency

Latency is the delay between the time a frame first starts to leave the source device and the time the first part of the frame reaches its destination. A wide variety of conditions can cause delays as a frame travels from source to destination:

  • Media delays caused by the finite speed that signals can travel through the physical media.
  • Circuit delays caused by the electronics that process the signal along the path.
  • Software delays caused by the decisions that software must make to implement switching and protocols.
  • Delays caused by the content of the frame and where in the frame switching decisions can be made. For example, a switch cannot forward a frame to its destination until the destination MAC address has been read.
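The first two sources of delay above can be put into rough numbers. The propagation speed, link length, and bit rate below are typical ballpark values chosen for illustration, not vendor figures:

```python
# Rough, illustrative latency arithmetic for one 100 m copper link.

PROP_SPEED = 2.0e8        # m/s, roughly 2/3 the speed of light, typical for copper
LINK_LEN = 100            # metres of cable
BIT_RATE = 100e6          # 100 Mbps Fast Ethernet
FRAME_BITS = 64 * 8       # minimum-size Ethernet frame

# Media delay: time for a signal to travel the length of the cable.
media_delay = LINK_LEN / PROP_SPEED

# Serialization delay: time to clock every bit of the frame onto the wire.
serialization = FRAME_BITS / BIT_RATE

print(f"media delay:   {media_delay * 1e6:.2f} us")   # 0.50 us
print(f"serialization: {serialization * 1e6:.2f} us") # 5.12 us
```

Even on a short link, serialization dominates propagation here, which is why the choice of when a switch starts forwarding (discussed next) matters for latency.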

Switch modes

How a frame is switched to the destination port is a trade-off between latency and reliability. A switch can start to transfer the frame as soon as the destination MAC address is received. Switching at this point is called cut-through switching and results in the lowest latency through the switch. However, no error checking is available. At the other extreme, the switch can receive the entire frame before sending it out the destination port. This gives the switch software an opportunity to verify the Frame Check Sequence (FCS) to ensure that the frame was reliably received before sending it to the destination. If the frame is found to be invalid, it is discarded at this switch rather than at the ultimate destination. Since the entire frame is stored before being forwarded, this mode is called store-and-forward. A compromise between the cut-through and store-and-forward modes is the fragment-free mode. Fragment-free reads the first 64 bytes, which includes the frame header, and switching begins before the entire data field and checksum are read. This mode verifies the reliability of the addressing and Logical Link Control (LLC) protocol information to ensure the destination and handling of the data will be correct.
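The three modes differ only in how much of the frame must arrive before forwarding begins, which can be summarized in a short sketch. The function name is invented, and the byte counts assume a standard Ethernet frame with the 6-byte destination MAC first:

```python
# Where each switching mode begins forwarding, assuming a standard
# Ethernet frame layout (6-byte destination MAC at the start).

def bytes_before_forwarding(mode, frame_len):
    if mode == "cut-through":
        return 6                 # forward as soon as the destination MAC is read
    if mode == "fragment-free":
        return 64                # read past the 64-byte collision window first
    if mode == "store-and-forward":
        return frame_len         # buffer the whole frame, then verify the FCS
    raise ValueError(f"unknown mode: {mode}")

for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, bytes_before_forwarding(mode, frame_len=1518))
```

The 64-byte threshold for fragment-free is not arbitrary: collision fragments are shorter than the minimum frame size, so a frame that survives its first 64 bytes cannot be a collision fragment.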

When using cut-through methods of switching, both the source port and destination port must be operating at the same bit rate in order to keep the frame intact. This is called synchronous switching. If the bit rates are not the same, the frame must be stored at one bit rate before it is sent out at the other bit rate. This is known as asynchronous switching. Store-and-forward mode must be used for asynchronous switching.

Asymmetric switching provides switched connections between ports of unlike bandwidths, such as a combination of 100 Mbps and 1000 Mbps. Asymmetric switching is optimized for client/server traffic flows in which multiple clients simultaneously communicate with a server, requiring more bandwidth dedicated to the server port to prevent a bottleneck at that port.

Spanning-Tree Protocol

When multiple switches are arranged in a simple hierarchical tree, switching loops are unlikely to occur. However, switched networks are often designed with redundant paths to provide for reliability and fault tolerance. While redundant paths are desirable, they can have undesirable side effects. Switching loops are one such side effect. Switching loops can occur by design or by accident, and they can lead to broadcast storms that will rapidly overwhelm a network. To counteract the possibility of loops, switches are provided with a standards-based protocol called the Spanning-Tree Protocol (STP). Each switch in a LAN using STP sends special messages called Bridge Protocol Data Units (BPDUs) out all its ports to let other switches know of its existence and to elect a root bridge for the network. The switches then use the Spanning-Tree Algorithm (STA) to resolve and shut down the redundant paths.
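The root-bridge election mentioned above is decided by the lowest bridge ID, which combines a configurable priority with the switch's MAC address. The sketch below shows only that comparison; the priorities and addresses are invented, and real STP exchanges this information incrementally via BPDUs rather than in one list:

```python
# Simplified root-bridge election: the switch with the lowest bridge ID
# (priority first, MAC address as tie-breaker) becomes the root.

bridges = [
    {"priority": 32768, "mac": "00:0c:22:aa:00:02"},
    {"priority": 4096,  "mac": "00:0c:22:aa:00:09"},
    {"priority": 32768, "mac": "00:0c:22:aa:00:01"},
]

root = min(bridges, key=lambda b: (b["priority"], b["mac"]))
print(root["mac"])  # the priority-4096 switch wins despite its higher MAC
```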

Each port on a switch using Spanning-Tree Protocol exists in one of the following five states:

  • Blocking
  • Listening
  • Learning
  • Forwarding
  • Disabled

A port moves through these five states as follows:

  • From initialization to blocking
  • From blocking to listening or to disabled
  • From listening to learning or to disabled
  • From learning to forwarding or to disabled
  • From forwarding to disabled

The result of resolving and eliminating loops using STP is to create a logical hierarchical tree with no loops. However, the alternate paths are still available should they be needed.
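The five port states and the legal transitions listed above form a small state machine, sketched below. This is only a model of the transition rules; the timers and BPDU processing that drive a real STP port between states are omitted:

```python
# The five STP port states and their legal transitions, as a
# table-driven state machine (timers and BPDU handling omitted).

TRANSITIONS = {
    "initialization": {"blocking"},
    "blocking":       {"listening", "disabled"},
    "listening":      {"learning", "disabled"},
    "learning":       {"forwarding", "disabled"},
    "forwarding":     {"disabled"},
    "disabled":       set(),
}

def advance(state, target):
    # Refuse any transition the protocol does not allow.
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# A port coming up walks the full path to forwarding.
state = "initialization"
for nxt in ("blocking", "listening", "learning", "forwarding"):
    state = advance(state, nxt)
print(state)  # forwarding
```

Note that there is no direct path from blocking to forwarding; a port must pass through listening and learning, which is what keeps a redundant link from forwarding traffic before the topology has settled.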

Cisco Systems, Inc.
