
Ethernet Fundamentals

Introduction to Ethernet

Most of the traffic on the Internet originates and terminates on Ethernet connections. From its beginning in the 1970s, Ethernet has evolved to meet the increasing demand for high-speed LANs. When a new medium such as optical fiber became available, Ethernet adapted to take advantage of the superior bandwidth and low error rate that fiber offers. Now, the same protocol that transported data at 3 Mbps in 1973 is carrying data at 10 Gbps.

The success of Ethernet is due to the following factors:

  • Simplicity and ease of maintenance
  • Ability to incorporate new technologies
  • Reliability
  • Low cost of installation and upgrade

With the introduction of Gigabit Ethernet, what started as a LAN technology now extends out to distances that make Ethernet a metropolitan-area network (MAN) and wide-area network (WAN) standard.

The original idea for Ethernet grew out of the problem of allowing two or more hosts to use the same medium and prevent the signals from interfering with each other. This problem of multiple user access to a shared medium was studied in the early 1970s at the University of Hawaii. A system called Alohanet was developed to allow various stations on the Hawaiian Islands structured access to the shared radio frequency band in the atmosphere. This work later formed the basis for the Ethernet access method known as CSMA/CD.

The first LAN in the world was the original version of Ethernet. Robert Metcalfe and his coworkers at Xerox designed it more than thirty years ago. The first Ethernet standard was published in 1980 by a consortium of Digital Equipment Corporation, Intel, and Xerox (DIX). Metcalfe wanted Ethernet to be a shared standard from which everyone could benefit, so it was released as an open standard. The first products developed using the Ethernet standard were sold during the early 1980s. Ethernet transmitted at up to 10 Mbps over thick coaxial cable up to a distance of two kilometers. This type of coaxial cable was referred to as thicknet and was about the width of a small finger.

In 1985, the Institute of Electrical and Electronics Engineers (IEEE) standards committee for Local and Metropolitan Networks published standards for LANs. These standards start with the number 802. The standard for Ethernet is 802.3. The IEEE wanted to make sure that its standards were compatible with the International Organization for Standardization (ISO)/OSI model. To do this, the IEEE 802.3 standard had to address the needs of Layer 1 and the lower portion of Layer 2 of the OSI model. As a result, some small modifications to the original Ethernet standard were made in 802.3.

The differences between the two standards were so minor that any Ethernet network interface card (NIC) can transmit and receive both Ethernet and 802.3 frames. Essentially, Ethernet and IEEE 802.3 are the same standard.

The 10-Mbps bandwidth of Ethernet was more than enough for the slow personal computers (PCs) of the 1980s. By the early 1990s PCs became much faster, file sizes increased, and data flow bottlenecks were occurring. Most were caused by the low availability of bandwidth. In 1995, IEEE announced a standard for a 100-Mbps Ethernet. This was followed by standards for gigabit per second (Gbps, 1 billion bits per second) Ethernet in 1998 and 1999.

All the standards are essentially compatible with the original Ethernet standard. An Ethernet frame could leave an older coax 10-Mbps NIC in a PC, be placed onto a 10-Gbps Ethernet fiber link, and end up at a 100-Mbps NIC. As long as the frame stays on Ethernet networks, it is not changed. For this reason Ethernet is considered very scalable. The bandwidth of the network could be increased many times without changing the underlying Ethernet technology.

The original Ethernet standard has been amended a number of times in order to manage new transmission media and higher transmission rates. These amendments provide standards for the emerging technologies and maintain compatibility between Ethernet variations.

IEEE Ethernet naming rules

Ethernet is not one networking technology, but a family of networking technologies that includes Legacy, Fast Ethernet, and Gigabit Ethernet. Ethernet speeds can be 10, 100, 1000, or 10,000 Mbps. The basic frame format and the IEEE sublayers of OSI Layers 1 and 2 remain consistent across all forms of Ethernet.

When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new supplement to the 802.3 standard. The new supplements are given a one- or two-letter designation such as 802.3u. An abbreviated description (called an identifier) is also assigned to the supplement.

The abbreviated description consists of:

  • A number indicating the number of Mbps transmitted.
  • The word base, indicating that baseband signaling is used.
  • One or more letters of the alphabet indicating the type of medium used (F = fiber-optic cable, T = copper unshielded twisted pair).
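As a sketch, the identifier convention above can be decoded mechanically. The helper below is illustrative only (the function name and the regular expression are assumptions, not part of any IEEE specification); it splits a shorthand such as 100BASE-TX into its three parts:

```python
import re

def parse_identifier(identifier):
    """Split an IEEE shorthand identifier such as '100BASE-TX' into
    speed (Mbps), signaling method, and medium designation.
    Illustrative helper; the regex is an assumption, not an IEEE API."""
    match = re.match(r"(\d+)(BASE)-?(\w+)", identifier.upper())
    if not match:
        raise ValueError(f"not a recognizable identifier: {identifier}")
    speed, signaling, medium = match.groups()
    return {
        "speed_mbps": int(speed),
        "signaling": "baseband" if signaling == "BASE" else signaling,
        "medium": medium,  # e.g. T = twisted pair, F/FX/SX/LX = fiber
    }
```

For example, `parse_identifier("100BASE-TX")` yields a 100-Mbps, baseband, TX-medium breakdown, and `parse_identifier("10BASE-T")` likewise separates speed, signaling, and medium.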

Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium. The data signal is transmitted directly over the transmission medium. In broadband signaling, not used by Ethernet, the data signal is never placed directly on the transmission medium. An analog signal (carrier signal) is modulated by the data signal and the modulated carrier signal is transmitted. Radio broadcasts and cable TV use broadband signaling.

The IEEE cannot force manufacturers of networking equipment to fully comply with all the particulars of any standard. The IEEE hopes to achieve the following:

  • Supply the engineering information necessary to build devices that comply with Ethernet standards.
  • Promote innovation by manufacturers.
Ethernet and the OSI model


Ethernet operates in two areas of the OSI model: the lower half of the data link layer, known as the MAC sublayer, and the physical layer.

To move data between one Ethernet station and another, the data often passes through a repeater. All other stations in the same collision domain see traffic that passes through a repeater. A collision domain is then a shared resource. Problems originating in one part of the collision domain will usually impact the entire collision domain.

A repeater is responsible for forwarding all traffic to all other ports. Traffic received by a repeater is never sent out the originating port. Any signal detected by a repeater will be forwarded. If the signal is degraded through attenuation or noise, the repeater will attempt to reconstruct and regenerate the signal.

Standards guarantee minimum bandwidth and operability by specifying the maximum number of stations per segment, maximum segment length, maximum number of repeaters between stations, etc. Stations separated by repeaters are within the same collision domain. Stations separated by bridges or routers are in different collision domains.

The figure maps a variety of Ethernet technologies to the lower half of OSI Layer 2 and all of Layer 1. Ethernet at Layer 1 involves interfacing with media, signals, bit streams that travel on the media, components that put signals on media, and various topologies. Ethernet Layer 1 performs a key role in the communication that takes place between devices, but each of its functions has limitations. Layer 2 addresses these limitations.

Data link sublayers contribute significantly to technology compatibility and computer communication. The MAC sublayer is concerned with the physical components that will be used to communicate the information. The Logical Link Control (LLC) sublayer remains relatively independent of the physical equipment that will be used for the communication process.

While there are other varieties of Ethernet, the technologies shown in the figure are the most widely used.

Naming


To allow for local delivery of frames on the Ethernet, there must be an addressing system, a way of uniquely identifying computers and interfaces. Ethernet uses MAC addresses that are 48 bits in length and expressed as twelve hexadecimal digits. The first six hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor. This portion of the MAC address is known as the Organizationally Unique Identifier (OUI). The remaining six hexadecimal digits represent the interface serial number, or another value administered by the specific equipment manufacturer. MAC addresses are sometimes referred to as burned-in addresses (BIA) because they are burned into read-only memory (ROM) and are copied into random-access memory (RAM) when the NIC initializes.
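The OUI/serial split described above can be illustrated with a small helper (a hypothetical function, not a standard API) that normalizes the common colon, dash, and dot notations and divides the twelve hexadecimal digits:

```python
def split_mac(mac):
    """Split a MAC address string into its IEEE-assigned OUI (first six
    hex digits) and the vendor-assigned portion (last six hex digits).
    Hypothetical helper for illustration."""
    digits = mac.replace(":", "").replace("-", "").replace(".", "").upper()
    if len(digits) != 12 or any(c not in "0123456789ABCDEF" for c in digits):
        raise ValueError(f"not a valid 48-bit MAC address: {mac}")
    return digits[:6], digits[6:]
```

For example, `split_mac("00:60:2F:3A:07:BC")` returns the OUI `"00602F"` and the vendor-assigned portion `"3A07BC"`.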

At the data link layer MAC headers and trailers are added to upper layer data. The header and trailer contain control information intended for the data link layer in the destination system. Data from upper layer entities is encapsulated in the data link layer header and trailer.

The NIC uses the MAC address to assess whether the message should be passed on to the upper layers of the OSI model. The NIC makes this assessment without using CPU processing time, enabling better communication times on an Ethernet network.

On an Ethernet network, when one device sends data it can open a communication pathway to the other device by using the destination MAC address. The source device attaches a header with the MAC address of the intended destination and sends data onto the network. As this data propagates along the network media the NIC in each device on the network checks to see if the MAC address matches the physical destination address carried by the data frame. If there is no match, the NIC discards the data frame. When the data reaches the destination node, the NIC makes a copy and passes the frame up the OSI layers. On an Ethernet network, all nodes must examine the MAC header even if the communicating nodes are side by side.
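A minimal sketch of the NIC's acceptance decision might look like the following, assuming a frame is given as raw bytes beginning with the destination address (multicast group filtering is omitted for brevity, and the function name is an assumption):

```python
BROADCAST = bytes.fromhex("ffffffffffff")  # all-ones broadcast address

def nic_accepts(frame, my_mac):
    """Decide whether a NIC should pass a frame up the stack.
    The first six octets of an Ethernet frame (after preamble/SFD)
    are the destination MAC address. Simplified sketch: accepts only
    our unicast address or broadcast; multicast filters are omitted."""
    dest = frame[:6]
    return dest == my_mac or dest == BROADCAST
```

A frame addressed to another station fails the comparison and is discarded without the CPU ever seeing it, which is the behavior the paragraph above describes.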

All devices that are connected to the Ethernet LAN have MAC-addressed interfaces, including workstations, printers, routers, and switches.

Layer 2 framing

Encoded bit streams (data) on physical media represent a tremendous technological accomplishment, but they alone are not enough to make communication happen. Framing helps obtain essential information that could not otherwise be obtained with coded bit streams alone. Examples of such information are:
  • Which computers are communicating with one another
  • When communication between individual computers begins and when it terminates
  • A method for detecting errors that occurred during the communication
  • Whose turn it is to "talk" in a computer "conversation"

Framing is the Layer 2 encapsulation process. A frame is the Layer 2 protocol data unit.

A voltage vs. time graph could be used to visualize bits. However, when dealing with larger units of data and with addressing and control information, a voltage vs. time graph can become large and confusing. Another type of diagram that can be used is the frame format diagram, which is based on voltage vs. time graphs. Frame format diagrams are read from left to right, just like an oscilloscope graph. The frame format diagram shows different groupings of bits (fields) that perform different functions.

There are many different types of frames described by various standards. A single generic frame has sections called fields, and each field is composed of bytes. The names of the fields are as follows:

  • Start frame field
  • Address field
  • Length / type field
  • Data field
  • Frame check sequence field

When computers are connected to a physical medium, there must be a way they can get the attention of other computers to announce, "Here comes a frame!" Various technologies do this differently, but all frames, regardless of technology, have a beginning signaling sequence of bytes.

All frames contain naming information, such as the name of the source node (MAC address) and the name of the destination node (MAC address).

Most frames have some specialized fields. In some technologies, a length field specifies the exact length of a frame in bytes. Some frames have a type field, which specifies the Layer 3 protocol making the sending request.

The reason for sending frames is to get upper layer data, ultimately the user application data, from the source to the destination. The data package has two parts, the user application data and the encapsulated bytes to be sent to the destination computer. Padding bytes may be added so frames have a minimum length for timing purposes. Logical link control (LLC) bytes are also included with the data field in the IEEE standard frames. The LLC sub-layer takes the network protocol data, an IP packet, and adds control information to help deliver that IP packet to the destination node. Layer 2 communicates with the upper-level layers through LLC.

All frames and the bits, bytes, and fields contained within them, are susceptible to errors from a variety of sources. The Frame Check Sequence (FCS) field contains a number that is calculated by the source node based on the data in the frame. This FCS is then added to the end of the frame that is being sent. When the destination node receives the frame the FCS number is recalculated and compared with the FCS number included in the frame. If the two numbers are different, an error is assumed, the frame is discarded, and the source is asked to retransmit.

There are three primary ways to calculate the Frame Check Sequence number:

  • Cyclic Redundancy Check (CRC) – performs polynomial calculations on the data.
  • Two-dimensional parity – arranges the data in rows and columns and adds a parity bit to each row and each column.
  • Internet checksum – adds the values of all of the data bits to arrive at a sum.
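As an illustration of the third method, here is a sketch of the 16-bit one's-complement Internet checksum (in the style of RFC 1071); the function name is an assumption for illustration:

```python
def internet_checksum(data):
    """16-bit one's-complement sum in the style of the Internet
    checksum (RFC 1071): add the data 16 bits at a time, fold any
    carry back into the low 16 bits, and complement the result."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data = data + b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return (~total) & 0xFFFF
```

The receiver repeats the same summation over the received data; if the data arrived intact, recomputing over data plus checksum produces all ones, so any change to a bit shows up as a mismatch.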

The node that transmits data must get the attention of other devices to start a frame, and again to end the frame. The length field implies the end, and the frame is considered ended after the FCS. Sometimes there is a formal byte sequence referred to as an end-frame delimiter.

Ethernet frame structure


At the data link layer the frame structure is nearly identical for all speeds of Ethernet from 10 Mbps to 10,000 Mbps. However, at the physical layer almost all versions of Ethernet are substantially different from one another with each speed having a distinct set of architecture design rules.

In the version of Ethernet that was developed by DIX prior to the adoption of the IEEE 802.3 version of Ethernet, the Preamble and Start Frame Delimiter (SFD) were combined into a single field, though the binary pattern was identical. The field labeled Length/Type was only listed as Length in the early IEEE versions and only as Type in the DIX version. These two uses of the field were officially combined in a later IEEE version, as both uses of the field were common throughout industry.

The Ethernet II Type field is incorporated into the current 802.3 frame definition. The receiving node must determine which higher-layer protocol is present in an incoming frame by examining the Length/Type field. If the two-octet value is equal to or greater than 0x600 (hexadecimal), then the frame is interpreted according to the Ethernet II type code indicated.

Ethernet frame fields


Some of the fields permitted or required in an 802.3 Ethernet Frame are:
  • Preamble
  • Start Frame Delimiter
  • Destination Address
  • Source Address
  • Length/Type
  • Data and Pad
  • FCS
  • Extension
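The widths the standard defines for these fields can be tabulated to recover the familiar frame-size limits. The snippet below is a sketch (the Extension field, used only by half-duplex Gigabit Ethernet, is omitted, and the helper name is an assumption):

```python
# Field widths in octets for a basic 802.3 frame (Extension omitted).
FIELD_OCTETS = {
    "Preamble": 7,
    "Start Frame Delimiter": 1,
    "Destination Address": 6,
    "Source Address": 6,
    "Length/Type": 2,
    "Data and Pad": (46, 1500),   # (minimum, maximum)
    "FCS": 4,
}

def frame_size_range():
    """Minimum and maximum frame size in octets, excluding the
    preamble and SFD, which are not counted in the frame length."""
    fixed = 6 + 6 + 2 + 4          # addresses + Length/Type + FCS
    lo, hi = FIELD_OCTETS["Data and Pad"]
    return fixed + lo, fixed + hi
```

Summing the fixed fields with the 46-to-1500-octet Data and Pad field gives the 64-to-1518-octet frame-size range commonly quoted for Ethernet.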

The Preamble is an alternating pattern of ones and zeroes used for timing synchronization in the asynchronous 10 Mbps and slower implementations of Ethernet. Faster versions of Ethernet are synchronous, and this timing information is redundant but retained for compatibility.

A Start Frame Delimiter consists of a one-octet field that marks the end of the timing information, and contains the bit sequence 10101011.

The Destination Address field contains the MAC destination address. The destination address can be unicast, multicast (group), or broadcast (all nodes).

The Source Address field contains the MAC source address. The source address is generally the unicast address of the transmitting Ethernet node. There are, however, an increasing number of virtual protocols in use that use and sometimes share a specific source MAC address to identify the virtual entity.

The Length/Type field supports two different uses. If the value is less than 1536 decimal (0x600 hexadecimal), the value indicates length: the number of octets of data that follow this field. The length interpretation is used where the LLC layer provides the protocol identification. If the value is equal to or greater than 1536 decimal (0x600 hexadecimal), the value indicates type: the contents of the Data field are decoded per the upper-layer protocol indicated, and that protocol receives the data after Ethernet processing is completed.
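The two interpretations amount to a simple threshold test on the two-octet value, sketched below (the helper name is hypothetical; the EtherType values in the comment are well-known assignments):

```python
def interpret_length_type(value):
    """Classify the two-octet Length/Type field per IEEE 802.3:
    values below 0x600 (1536 decimal) are a length, values of
    0x600 and above are a type identifying the upper-layer protocol."""
    if value < 0x600:
        return ("length", value)   # octets of LLC data that follow
    return ("type", value)         # e.g. 0x0800 = IPv4, 0x0806 = ARP
```

So a frame carrying 0x0800 in this field is handed to IPv4 processing, while a value such as 100 simply states how many octets of LLC data follow.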

The Data and Pad field may be of any length that does not cause the frame to exceed the maximum frame size. The maximum transmission unit (MTU) for Ethernet is 1500 octets, so the data should not exceed that size. The content of this field is unspecified. An unspecified pad is inserted immediately after the user data when there is not enough user data for the frame to meet the minimum frame length. Ethernet requires that the Data and Pad field be not less than 46 octets, so that the frame (excluding the preamble and SFD) is not less than 64 octets or more than 1518 octets.
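The padding rule can be sketched as follows, assuming zero-filled pad bytes (the standard leaves the pad content unspecified) and a hypothetical helper name:

```python
def pad_data(payload):
    """Pad upper-layer data to the 46-octet minimum of the Data and
    Pad field, and reject data longer than the 1500-octet Ethernet
    MTU. Pad content is unspecified by the standard; zeroes here."""
    if len(payload) > 1500:
        raise ValueError("payload exceeds Ethernet MTU of 1500 octets")
    if len(payload) < 46:
        payload = payload + b"\x00" * (46 - len(payload))
    return payload
```

A two-octet payload is thus padded out to 46 octets so the frame meets the minimum length, while a 100-octet payload passes through unchanged.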

A FCS contains a four-byte CRC value that is created by the sending device and is recalculated by the receiving device to check for damaged frames. Since the corruption of a single bit anywhere from the beginning of the Destination Address through the end of the FCS field will cause the check to fail, the coverage of the FCS effectively includes the FCS itself. It is not possible to distinguish between corruption of the FCS itself and corruption of any preceding field used in the calculation.
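The compute/recompute/compare flow can be sketched with Python's `zlib.crc32`, which uses the same generator polynomial as the 802.3 FCS. This is a simplified illustration: the byte ordering used on the wire is glossed over, and the helper names are assumptions:

```python
import zlib

def append_fcs(frame_without_fcs):
    """Compute a CRC-32 over the frame (Destination Address through
    Data and Pad) and append it as four octets, as the sender does.
    zlib.crc32 shares the 802.3 generator polynomial; wire byte
    order is simplified in this sketch."""
    fcs = zlib.crc32(frame_without_fcs) & 0xFFFFFFFF
    return frame_without_fcs + fcs.to_bytes(4, "little")

def fcs_ok(frame):
    """Recompute the CRC over everything but the last four octets
    and compare with the received FCS, as the destination does."""
    data, received = frame[:-4], frame[-4:]
    return zlib.crc32(data).to_bytes(4, "little") == received
```

Flipping any single bit in the protected span changes the recomputed CRC, so the receiver's comparison fails and the damaged frame is discarded.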

Cisco Systems, Inc.
