Designing - And Redesigning - Today's Local Area Network
by Art Wittmann
Appendix: Concepts and Definitions
Segment - The smallest piece of a network on which stations can exchange data without intervention from another intelligent device.
Extended segment - A number of segments joined together by bridges. Any broadcast or multicast made on the extended segment should be seen by all stations on the extended segment.
Network - The term itself has come to be rather ambiguous, referring to a segment, extended segment or internetwork. We often call any of these "The Network."
Internetwork - A set of segments or extended segments joined together by a router.
Unicast - A data packet addressed to a single station. An example might be data from a client to its server.
Multicast - A data packet addressed to a group of stations. The destination address is formed in such a way that stations realize that the packet may be destined for many other stations.
Broadcast - A data packet addressed to any and all stations on the local segment. Broadcasts are often used by stations that have just joined the network - broadcasts are made to find out information about the segment they have just joined.
Repeater - A device that facilitates connecting stations onto the segment. It does not understand network addresses - it merely copies data bit by bit from and to the physical media to which it is attached. On Token-Ring segments, this device is often called a Media Access Unit or MAU. A repeater is not considered an intelligent device.
Bridge - A bridge is used to connect two or more similar segments together (for example, Token-Ring to Token-Ring or Ethernet to Ethernet). A bridge has two purposes. The first is to extend the length and number of stations that a segment can support. The second is to reduce overall traffic flow by only passing data packets that are not destined for a hardware address on the local segment. All broadcast and multicast traffic must cross a bridge, since no true destination can be known. In recent years, bridging technology has been used between dissimilar media (for example, Ethernet to FDDI); this can sometimes cause problems, as we will see later. A bridge is considered an intelligent device.
Router - Sometimes called a gateway, it is used to connect two or more (potentially extended) segments. The segments may be similar or dissimilar. Routing information beyond the hardware address must be contained within the data packet. Virtually no broadcasts or multicasts are ever propagated across a router, since no exact destination information is typically contained within these packets. Hardware addresses have only local significance to a router - higher level routing information is globally significant.
Sharing data on a network means multiplexing it on the basis of
either frequency or time, and arranging for some sharing, or contention, scheme among the attached stations.
Some of the earliest data networks devised used Frequency Division
Multiplexing or FDM. These networks are also known as broadband
networks. Just as we divide up the radio spectrum into channels,
so can we divide up the spectrum over a cable. A possible example
would be to use, say, 200 MHz of bandwidth and divide it into
20 channels of 10 MHz each. Each 10-MHz channel could, theoretically,
be used to transmit up to 10 Mbps of data.
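To make the channel arithmetic concrete, here is a minimal Python sketch using the purely illustrative figures from the example above; the one-bit-per-hertz assumption is a rough rule of thumb, not a property of any particular broadband system.

```python
# Illustrative only: divide a broadband cable's spectrum into fixed channels,
# using the example figures from the text (200 MHz total, 10 MHz per channel).
total_bandwidth_mhz = 200
channel_width_mhz = 10
bits_per_hz = 1            # assume roughly 1 bit/s per Hz of channel width

channels = total_bandwidth_mhz // channel_width_mhz
per_channel_mbps = channel_width_mhz * bits_per_hz
print(f"{channels} channels, ~{per_channel_mbps} Mbps each, "
      f"~{channels * per_channel_mbps} Mbps aggregate")
# -> 20 channels, ~10 Mbps each, ~200 Mbps aggregate
```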
Apart from the obvious advantage of providing more bandwidth,
the analog techniques used can also permit signals to be transmitted
over fairly long distances on standard cabling - sometimes up
to ten times further or more than straight digital signals on
the same cable. So broadband systems can be used to provide very
high data rates over fairly long distances.
The advantages of broadband seem significant. However, it is seldom
used for local area networks today. In fact, other than digital
systems that piggy-backed on private CATV systems, broadband is
almost strictly the domain of telephone carriers. There are good
reasons for this. First, analog broadband networks are expensive
to build and maintain. They must be tuned and retuned as the network
is extended. The equipment required to do this is expensive and
the expertise is rare. Equipment that is attached to a broadband
network is also expensive as it must have a digital-to-analog
modem and transmitter - much like any radio system would have.
The real nail in the coffin for broadband systems was the timing of their introduction. Indeed, a good broadband network could accommodate 500 Mbps or more of data traffic and do it over a few square miles. The problem was that in the mid-70's to early 80's no one had computers that used 500 Mbps of bandwidth, and we had
barely begun to build local area networks. Few had even thought
about extending the network over areas like a few square miles.
About the only commercial example of broadband being used for
LANs was IBM's PC-LAN product. It used only two fairly narrow channels of a broadband network and hence could only transmit
and receive data at about 1 Mbps.
If sharing a network by dividing up the frequency spectrum doesn't
fly - then sharing by dividing up time is the only other alternative.
The idea here is to use baseband signaling - essentially putting
digital signals right on the wire - and sharing it by devising
mechanisms for computers to take turns accessing the bandwidth.
Through the 70's, a few companies devised ways to build baseband
networks. The four most popular systems were IBM's Token-Ring, Xerox's Ethernet, Datapoint's ARCnet and Apple's LocalTalk.
These all use a few basic techniques for arbitrating bandwidth.
When Xerox began building high-end printers that would produce
copy directly from workstations, they needed a mechanism to get
images from the moderately priced workstations to the fairly expensive
printers. In the 70's at Xerox's Palo Alto Research Center, work was under way to develop a shared data network that could do the job.
The goal was to find a simple algorithm - one that could be implemented
in the fairly basic silicon available at the time. Ethernet was
the result of the efforts and the method for sharing the wire
was called Carrier Sense Multiple Access with Collision Detection
or CSMA/CD. The idea was a fairly simple one. Listen before you talk - that's the multiple access part - and stop talking if you hear someone else - that's the collision detection part.
Essentially, if a station found no traffic on the wire, it could
start putting data onto the network. If, while it was putting
data on the network, it sensed a collision, the station would
stop immediately and wait a random amount of time before transmitting
again. The specifics of Ethernet were designed so that once a
station got a few bytes into its transmission, all stations on
the segment should be able to detect the signal and remain silent
until the transmitting station finished. So, on a properly implemented
Ethernet, the collision rate should only be a few percent of the
packets even when the network was 60 to 70 percent busy or more.
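As a rough illustration of that listen-then-talk logic, here is a simplified Python sketch. The `channel` object and its methods (`is_busy`, `transmit`, `collision_detected`) are hypothetical stand-ins for the physical layer, and the back-off arithmetic only approximates what real adapters do.

```python
import random
import time

def csma_cd_send(channel, frame, max_attempts=16):
    """Highly simplified CSMA/CD: listen before talking, back off on collision.

    `channel` is assumed to expose is_busy(), transmit(frame) and
    collision_detected() - hypothetical methods standing in for the
    physical layer, not a real API.
    """
    for attempt in range(max_attempts):
        while channel.is_busy():          # carrier sense: wait for silence
            time.sleep(0.0001)
        channel.transmit(frame)           # multiple access: start sending
        if not channel.collision_detected():
            return True                   # frame got through
        # Collision: wait a random number of slot times (binary exponential
        # back-off), then try again.
        slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        time.sleep(slots * 0.0000512)     # 51.2-microsecond slot time
    return False                          # give up after too many collisions
```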
With its 10-Mbps data rate, Ethernet was easily able to handle
the transmissions of many PDP-11 or VAX 750 machines (the prominent
candidates for networking at the time) without difficulty. Networking
a computer at the introduction of Ethernet was not a trivial decision.
While today it is rare to find an Ethernet adapter with a price
tag above a few hundred dollars, a network interface from the
late 70's was likely to cost $5,000 or so.
Today, however, Ethernet has been reduced to a single-chip solution and is provided by many vendors right on the motherboard of $1,200
PCs. In these configurations, Ethernet now adds no more than $25
to the price of a PC.
Token-Ring, and later FDDI, which employs token-ring techniques,
is a newer and more complex method for sharing network bandwidth.
Before we get
into the specifics of Token-Ring, let's talk a bit
about why anyone would find it necessary to build a more complex
technology than Ethernet.
In the early 80's, shortly after the introduction of the IBM PC,
it was observed that on a network with fast minicomputers and
comparatively slow PCs, the PCs could be starved on the network.
Further, due to the variety and quality of Ethernet implementations
available, many found that Ethernet was unusable when network
utilization reached only into the 20 or 30 percent range or so.
Because Ethernet employed random back-offs and was subject to
network hogs, it was thought to be unsuitable for mission critical
networking. The term bandied about was non-deterministic. In other
words, there was no way to mathematically assure that a given
station could transmit a given amount of data within any particular
time frame. In fact, the folks who thought up Ethernet could show that they could make assurances with high probability - but
that wasn't good enough.
Token Ring's approach was to arrange stations into a logical ring.
Once the ring was formed, a token was generated and passed between
the stations on the ring. If a station had data to transmit, it
removed the token from the network, transmitted its data and then
passed the token along to the next station. Each station could
transmit data up to some maximum time or until it was out of data
to send - whichever was shorter. In this way, every station is
assured access to the network regardless of the station's speed
or network interface design.
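A toy Python sketch of that token-passing idea may help; the per-hold frame limit below stands in for the Token Holding Time and is an invented figure, not a value from any standard.

```python
from collections import deque

def token_ring_round(stations, max_frames_per_hold=3):
    """One pass of a toy token-passing loop.

    `stations` is a list of queues (deques of frames waiting to be sent).
    Each station may transmit while it holds the token, up to a per-hold
    limit standing in for the Token Holding Time, then must pass the token on.
    """
    delivered = []
    for index, queue in enumerate(stations):   # token visits stations in ring order
        sent = 0
        while queue and sent < max_frames_per_hold:
            delivered.append((index, queue.popleft()))
            sent += 1
        # The token is passed on regardless of how much data is left, so every
        # station gets regular access no matter how busy its neighbors are.
    return delivered

# Example: three stations with different amounts of queued data.
ring = [deque(["a1", "a2", "a3", "a4"]), deque(), deque(["c1"])]
print(token_ring_round(ring))   # station 0 sends 3 frames, station 2 sends 1
```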
This sounds really good, doesn't it? There are, of course, trade-offs. First and foremost, the algorithm behind Token-Ring is an order of magnitude more complex than Ethernet's. When the ring is running normally, Token-Ring seems pretty simple. However, consider the added complexity of losing a token, finding two tokens on the network, unexpectedly losing a station from the ring or bringing a new station into the ring. All of these complex cases must be handled correctly. It's just
a lot harder to implement than Ethernet's listen-then-talk model.
Add to this the fact that IBM picked 4 Mbps as the base data rate
for Token-Ring and you can see how simpler, faster Ethernet might
be more attractive. Token-Ring also had the problem that if it
became deterministic (that is, every station held the token as
long as it could), it, too, became painfully slow to use.
If ASIC technology had been as advanced then as it is today, we might have Token-Ring networks everywhere. It wasn't, though; Token-Ring cards were, and are, expensive when compared to Ethernet, and it is hard to make an argument for using IBM's style of Token-Ring.
In either case, whether your technology is Token-Ring, Ethernet
or some other shared medium technology, the success of the shared
network depends on each station having comparatively little data
to transmit or on having relatively few stations on each segment
of the network. The more a station can saturate a network, the
better it is to have fewer stations on the network.
Switching is basically a technology that is meant to facilitate
reducing the number of stations per segment. The term switching
is taken from the telecommunications industry where the devices
that routed telephone calls were originally called mechanical
switches. Switching has come to imply an architecture where any
inbound traffic can be redirected to any outbound port with relatively
little concern for traffic loss or congestion.
The way in which a switch decides how to direct traffic can be almost any mechanism. It could use bridging or routing techniques, or it could use some other mechanism to predetermine the path that subsequent data transmissions will take.
To realize the goal of very few stations per switched port, the
switch market aims to provide the bandwidth advantages of bridges
and routers at a price closer to that of a repeater. The only
way to achieve this sort of price point is to rely heavily on
ASICs and other custom silicon which has only recently become
reasonably cheap to produce.
These chips rip into packets and determine just enough to decide
how to direct the packet. Virtually every major networking vendor
has either developed such chips or is working on them. As new
generations of chips emerge, the chip count on switches goes down
and so do the prices. In fact, in the current generations of Ethernet
switches in particular, the high-speed uplink ports are the most
expensive pieces of the switch. In some cases, an FDDI or ATM
port can cost as much as 10 or 12 Ethernet ports.
As good as packet-based switching is, there are certain types of traffic for which it is not ideally suited. Further, the complexity associated with handling variable-length packets, each containing its own detailed addressing, makes packet-based switching still a fairly expensive proposition.
Cell-based switching is a solution aimed at handling non-data
traffic (for example, voice and video) along with data. One problem
with router and bridge-based systems is the latency that they
introduce to the network. Routers and bridges almost always fully capture and then forward a data packet. Even if the router or bridge could process the packet instantly, up to 1.4 milliseconds of delay is introduced
to Ethernet packets traveling through them. Routers, in particular,
are likely to introduce more delay because they often must process
the packet with a single central CPU.
For data networks, these delays are usually not serious - in fact
they usually go unnoticed. However, these delays are significant
for video and audio traffic. Cell-based networks in general, and
ATM in particular, are architected to handle general digitized
data including voice, video and computer-originated data.
The idea behind cell-based networks is to chop standard data packets
into much smaller fixed-length cells. In ATM's case, these cells
are 48 bytes long with another five bytes for addressing and control.
One fact that should immediately become obvious is that a five-byte
address field is too small to hold even a single six-byte physical
address. Obviously, there must be something else going on. Indeed,
ATM requires that a route be determined before data starts flowing.
The five-byte addresses are only relevant from an end station
to a switch or between switches. Each switch then builds a table
that includes the translation of incoming addresses to outgoing addresses and ports.
By predetermining the flow of data, and using that predetermined path
throughout a data exchange, ATM assures that cells will arrive
in the proper order at the end station. In fact, ATM includes
no mechanism for retransmission of cells. Higher order protocols
must take care of any data lost in the ATM network. However, when
cells do arrive in the correct order and in a timely fashion, it can be a simple (that is, cost-effective) matter to retrieve the data, voice or video information contained within the cells.
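To show the segmentation arithmetic, here is a simplified Python sketch. The 5-byte header built here (a circuit number plus a last-cell flag) is only a stand-in for the real ATM header layout.

```python
CELL_PAYLOAD = 48   # bytes of user data per ATM cell
CELL_HEADER = 5     # bytes of addressing/control per cell

def segment_packet(packet: bytes, circuit_id: int):
    """Chop a variable-length packet into fixed-length cell payloads.

    The 5-byte header built here (circuit id + a last-cell marker) is a
    simplified stand-in for the real ATM header, just to show the idea.
    """
    cells = []
    for offset in range(0, len(packet), CELL_PAYLOAD):
        chunk = packet[offset:offset + CELL_PAYLOAD]
        chunk = chunk.ljust(CELL_PAYLOAD, b"\x00")       # pad the final cell
        last = offset + CELL_PAYLOAD >= len(packet)
        header = circuit_id.to_bytes(4, "big") + bytes([1 if last else 0])
        cells.append(header + chunk)                     # 53 bytes total
    return cells

cells = segment_packet(b"x" * 1500, circuit_id=42)
print(len(cells), "cells of", len(cells[0]), "bytes")    # 32 cells of 53 bytes
```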
We will touch on ATM only lightly here. A much more in-depth discussion
will be included later.
Since all traffic on a network must be seen by each and every
station on the network, there must be some way to designate which
data packets are destined for which stations. In other words,
each station must have an address that is unique to its hardware.
It seems clear that if two stations are on two completely separate
networks, they really don't need to have different hardware addresses.
After all, they'll never see each other's traffic. Up to this
point, we haven't talked about what defines a network, so we must more closely define some terms. These terms are used to describe the pieces of hardware that tie a network together.
It should be obvious that hardware addresses and their uniqueness
is most important on segments and extended segments. To that extent,
the hardware address of any station could be set by the local
administrator of any particular segment.
This is indeed how ARCnet works. Up to 255 addresses may be configured
for ARCnet stations. These addresses are usually set by configuring
jumpers on the network card itself. Apple's LocalTalk takes a
slightly different tack. Rather than worry about setting addresses,
each station just picks one and then broadcasts to see if any
other station is using it. For small segments, both of these techniques
work well. However, the bigger the network, the less comfortable
this scheme becomes.
Ethernet, Token-Ring and FDDI have employed a different technique.
Their hardware addresses are considerably longer - six bytes long
rather than just one or two. The upper three bytes are assigned
to hardware manufacturers who then assign the lower three bytes
themselves. The scheme allows for 16 million manufacturers, each
of whom can then assign 16 million addresses to their products.
This scheme was originally administered by Xerox for Ethernet
until the standards for Ethernet were turned over to the IEEE, which
now administers hardware address assignment as well.
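A small Python sketch of how a six-byte hardware address breaks into its two administered halves; the example address is made up.

```python
def split_hardware_address(mac: str):
    """Split a six-byte hardware address into its two administered halves.

    The upper three bytes (the vendor prefix) are assigned to a manufacturer
    by the registration authority; the lower three are assigned by the
    manufacturer. The example address below is made up for illustration.
    """
    octets = bytes(int(part, 16) for part in mac.split(":"))
    assert len(octets) == 6, "hardware addresses are six bytes long"
    vendor, device = octets[:3], octets[3:]
    return vendor.hex(":"), device.hex(":")

print(split_hardware_address("00:a0:24:1b:2c:3d"))
# -> ('00:a0:24', '1b:2c:3d')  three bytes of vendor id, three assigned by the vendor
```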
Each packet that is sent on a network must contain a source and
destination hardware address. Some topologies have allowed for
different length addresses as well as local assignment of addresses.
However, in almost all cases, the globally administered six-byte
addresses are used.
We've described the basic functioning of bridges. They essentially
build a list of known physical addresses and note on which port those addresses reside. These addresses are valid only for a certain length of time, after which, if no traffic has been seen from the address, it is removed from the table. Any packet that has a destination address not known on the originating segment is retransmitted on all ports of the bridge except the originating port.
If the bridge is a little smarter, it will determine if the address
is known on a different port and only transmit the packet on that
port which contains the known address. This is essentially the
functioning of a very basic switch.
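Here is a minimal Python sketch of that learn-and-forward behavior; the class name, port numbers and aging interval are illustrative inventions, not any vendor's implementation.

```python
import time

class LearningBridge:
    """Toy learning bridge: remember which port each source address arrived on,
    forward to the known port when possible, flood everything else."""

    def __init__(self, ports, age_limit=300):
        self.ports = ports              # e.g. [0, 1, 2, 3]
        self.age_limit = age_limit      # seconds before a learned entry expires
        self.table = {}                 # address -> (port, last_seen)

    def handle(self, frame_src, frame_dst, in_port):
        now = time.time()
        self.table[frame_src] = (in_port, now)              # learn the source
        # Drop entries that have not been heard from recently.
        self.table = {a: (p, t) for a, (p, t) in self.table.items()
                      if now - t < self.age_limit}
        entry = self.table.get(frame_dst)
        if frame_dst == "broadcast" or entry is None:
            # Unknown or broadcast destination: send out every port except the
            # one the frame arrived on.
            return [p for p in self.ports if p != in_port]
        return [entry[0]] if entry[0] != in_port else []     # already local

bridge = LearningBridge(ports=[0, 1, 2, 3])
print(bridge.handle("A", "broadcast", in_port=0))   # flood: [1, 2, 3]
print(bridge.handle("B", "A", in_port=2))           # A is known: [0]
```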
Remember that broadcasts have no known destination and therefore
must be sent on all ports of the bridge. This can lead to problems
on large networks.
Other problems can occur when media are mixed on a bridge. The most significant problem here occurs when one medium has a different maximum allowable packet size than the other (known as the MTU or Maximum Transmission Unit).
Some protocols provide for a mechanism called MTU discovery. This
is fine as long as the stations are using some connection-oriented
protocol and it makes sense to store the discovered MTU. However,
if they are using a connectionless protocol, it makes no sense
to rediscover the MTU with each transmission.
In general, the solution to the MTU problem is to make bridges
that are at least smart enough about higher-layer protocols to
participate in MTU discovery if it is used. In the case of TCP/IP
specifically, the bridge must be capable of fragmenting packets.
Packet fragmentation is normally performed by routers and it can
be a fairly taxing task for some routers. IP fragmentation is
a process of taking large packets and breaking them down into
packets as small or smaller than the MTU of the destination media.
Bridges (or switches for that matter) that can perform IP fragmentation
are generally able to handle any protocol that might be thrown at them.
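A rough Python sketch of the size arithmetic involved; real IP fragmentation also tracks 8-byte offsets, identification fields and header flags, which this sketch deliberately glosses over.

```python
def fragment(payload: bytes, mtu_payload: int):
    """Break a large payload into pieces no larger than the destination
    medium's MTU. Real IP fragmentation tracks 8-byte offsets and flags in
    the IP header; this sketch only shows the size arithmetic."""
    fragments = []
    for offset in range(0, len(payload), mtu_payload):
        fragments.append({
            "offset": offset,
            "more": offset + mtu_payload < len(payload),   # more-fragments flag
            "data": payload[offset:offset + mtu_payload],
        })
    return fragments

# A 4,500-byte Token-Ring-sized packet crossing onto Ethernet (1,500-byte MTU).
pieces = fragment(b"x" * 4500, mtu_payload=1480)
print(len(pieces), [len(p["data"]) for p in pieces])   # 4 fragments
```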
The bridge's requirement to pass on all broadcasts can cause problems,
too. On large networks, usually ones with tens of bridges and
hundreds of stations, the propagation of broadcasts through the
network can result in other stations creating broadcasts as well.
This is known as a broadcast storm. They can last a while and
consume as much network bandwidth as is available.
A more common problem occurs when a significant number of broadcasts
occur on a fast backbone and have to be propagated to slower media.
If broadcasts consume 5 percent of the bandwidth on 100-Mbps media,
it probably isn't a problem. However, those same broadcasts would
saturate a 4-Mbps Token-Ring segment or take 50 percent of the
available bandwidth on an Ethernet segment. That is a significant problem.
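The arithmetic behind that observation, sketched in Python with the figures used above:

```python
# The same absolute broadcast load looks very different on different media.
broadcast_mbps = 0.05 * 100          # 5 percent of a 100-Mbps backbone = 5 Mbps

for name, capacity_mbps in [("100-Mbps backbone", 100),
                            ("10-Mbps Ethernet", 10),
                            ("4-Mbps Token-Ring", 4)]:
    share = broadcast_mbps / capacity_mbps * 100
    print(f"{name}: {share:.0f}% of capacity consumed by broadcasts")
# 100-Mbps backbone: 5%, 10-Mbps Ethernet: 50%, 4-Mbps Token-Ring: 125% (saturated)
```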
Most bridges provide mechanisms for filtering broadcasts and in
some cases, this may provide an adequate solution. However, on
larger networks at least some routers should be used.
TCP/IP, IPX/SPX, AppleTalk and a bunch of other protocols all
operate at the network layer. That is, they employ at least two levels of addressing, whereas bridged systems have a flat, universal addressing scheme. Bridging's technique of forwarding packets with unknown destination addresses doesn't scale to global proportions; indeed, it doesn't scale well past a few hundred nodes.
By dividing addresses into a network field and a node field, it
is possible to more accurately direct packets. In fact, just this
two-level hierarchy is enough to build a global network.
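A small Python sketch of that two-level decision, using made-up IP-style addresses and a made-up table; the point is only that the network field, not the full station address, drives the forwarding choice.

```python
import ipaddress

def next_hop(destination: str, routing_table):
    """Pick a next hop by matching the network part of the destination address.

    The table entries and addresses here are invented examples; only the
    network field is consulted, not the full station address."""
    dest = ipaddress.ip_address(destination)
    best = None
    for network, hop in routing_table:
        net = ipaddress.ip_network(network)
        if dest in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, hop)
    return best[1] if best else "default gateway"

table = [("192.168.10.0/24", "port 1"), ("192.168.0.0/16", "port 2")]
print(next_hop("192.168.10.7", table))   # port 1 (most specific network wins)
print(next_hop("10.1.2.3", table))       # default gateway
```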
If a router's job were just to steer packets around an internetwork,
we'd probably have much cheaper routers than we do. The fact of
the matter is that routers usually do much more. They also store
and rebroadcast information about the internetwork, keep protocol
dependent tables, enforce administrative rules on network traffic
and provide redirection for special purpose broadcasts. All of
this is fairly CPU intensive, and routers, as a result, tend to
be bottlenecks in networks.
For all that, routers do have their uses and should in no way
be avoided. There is no better way to erect a wall between two
different parts of an organization (say, marketing and engineering).
Routers are also the only game in town when it comes to connecting
your private network to a public network (like the Internet).
Further, routers are the thing to use when connecting networks
via comparatively slow wide area networks. If you're paying for
wide area bandwidth, you'll want all the control possible over
the data that flows across the network.
These are the instances where there is no substitute for routing.
However, in the local area network, routing is not the best way
to increase the overall bandwidth within your network. That is
best done with switches that have some routing smarts.
Switching has matured beyond simple multiport bridging. There
are a number of important features that not only make switching
the most economical way to get more bandwidth in your network,
they also make a switched network much easier to administer.
In terms of bandwidth, switches provide high-speed, low-latency bandwidth. Latency is usually much lower than for routers, as there is usually less processing going on in a switch and the work is spread across many processors (most often ASICs). In instances where traffic is flowing
between like media (say Token-Ring to Token-Ring), switches can
begin retransmitting the packet before they have completely received
the packet. This is called cut-through bridging (as opposed to
store and forward) and can reduce latency even more.
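A back-of-the-envelope comparison of the two approaches on 10-Mbps Ethernet, with illustrative figures (the exact delay depends on frame size and framing overhead):

```python
# Rough serialization-delay arithmetic for a store-and-forward device versus
# a cut-through device on 10-Mbps Ethernet. Figures are illustrative.
link_bps = 10_000_000
frame_bytes = 1518            # a maximum-length Ethernet frame
header_bytes = 14             # enough to read the destination address

store_and_forward_s = frame_bytes * 8 / link_bps     # wait for the whole frame
cut_through_s = header_bytes * 8 / link_bps          # start forwarding after the header

print(f"store-and-forward: {store_and_forward_s * 1000:.2f} ms per hop")   # ~1.21 ms
print(f"cut-through:       {cut_through_s * 1_000_000:.1f} microseconds")  # ~11 us
```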
On the administrative side, virtual LAN (VLAN) support is now a feature commonly found on switches. VLAN technology addresses some of the flaws in bridging without necessarily introducing the complexities of routing.
The idea of VLANs is to take some group of ports on the switch
and treat them together as a LAN segment. The net effect of this
is to create broadcast domains, since all other traffic is still directed only at the port for which it is destined.
Traffic flowing between VLANs must be routed. However, VLANs can
usually encompass many more segments than a regular bridged network
might have. This reduces the number of router ports needed and often results in low levels of traffic between VLANs.
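A toy Python sketch of that port-grouping idea; the port numbers and VLAN names are invented.

```python
# Toy VLAN table: broadcasts are confined to ports in the sender's VLAN.
# Port numbers and VLAN names are made up for illustration.
vlan_of_port = {1: "engineering", 2: "engineering", 3: "marketing",
                4: "marketing", 5: "engineering"}

def broadcast_ports(in_port):
    """Return the ports a broadcast received on `in_port` should be copied to:
    every other port in the same VLAN, and nothing outside it."""
    vlan = vlan_of_port[in_port]
    return [p for p, v in vlan_of_port.items() if v == vlan and p != in_port]

print(broadcast_ports(1))   # [2, 5] - the broadcast never reaches marketing's ports
```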
Some switch vendors have built routing functions into their switches
and others have chosen not to. While some route IPX, IP, AppleTalk
and DECnet, most only handle IPX and IP - bridging all other protocols.
Depending on the configuration of your network and the ease with
which you can reconfigure your network addresses, routing may
be worth its additional cost.
Switching, particularly as a means to accessing an ATM backbone,
will likely be the preferred mechanism for building high bandwidth
networks over the next three to five years. Virtually any network
that has outgrown a single segment design can benefit from switching.
Probably the bigger issue is converting networks that currently
employ routers. Reworking network addresses can be a challenge
and in some environments it can be almost impossible.
Ethernet has been essentially described in four specifications
from the IEEE. These build upon the work done initially by Xerox
and later by Xerox, Intel and Digital Equipment Corp. together.
These specifications involve various types of cables, connection
rules and other hardware considerations. However, they all employ
the general CSMA/CD algorithms discussed earlier.
Note that in order for CSMA/CD to work properly, there must be
a minimum packet size on the network. That minimum size has been set at 64 bytes, and the maximum length of the various network segments where more than two transceivers can exist has been determined based upon the propagation speed of data over the media.
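A quick calculation consistent with those figures, assuming 10-Mbps Ethernet:

```python
# Why a minimum frame size matters for collision detection: the sender must
# still be transmitting when news of a collision gets back to it.
bit_rate = 10_000_000                 # 10-Mbps Ethernet
min_frame_bits = 64 * 8               # 64-byte minimum frame

slot_time_s = min_frame_bits / bit_rate
print(f"slot time: {slot_time_s * 1e6:.1f} microseconds")   # 51.2 us

# The worst-case round trip across the network (propagation plus repeater
# delays) must fit inside that window, which is what caps segment lengths.
```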
10BASE-5 is the original Ethernet system. It employs a roughly 0.4-inch diameter, 50-ohm coax cable (with a minimum bend radius of 10 inches). 10BASE-5 segments can run in length up to 500 meters
with as many as 100 transceiver connections spaced at least 2.75 yards apart.
10BASE-5 transceivers access the media by piercing the thick coaxial
cable. These transceiver taps are known as vampire taps. Since
they don't actually require breaking the physical cable, the electrical
signals over the cable are typically fairly clean.
10BASE-5 systems were originally envisioned to be cheap and fairly
easy to build. The large cable needed simply to be run by rooms
where computing equipment would be located. Taps would be made
into the cable by using external transceivers. As it turned out,
the requirement of an external transceiver and the thick cable,
which was expensive and difficult to work with, limited its use.
Thin Ethernet was a fairly popular specification and is still
used in many environments today. With a maximum segment length
of 203.5 yards, it requires that the 50 ohm cable be only .2 inches
thick (a bend radius of two inches). It also uses standard BNC
connectors and "T's" to provide access to the media.
Typically, T's are connected directly to the back of network interface cards, thus eliminating the need for an external transceiver. Only 30 transceivers can be inserted onto a Thin Ethernet segment and they must be spaced at least 19.69 inches apart. 3Com was heavily involved in developing Thin Ethernet hardware, much as it is involved in Ethernet hardware today. Its hardware was able to handle slightly longer segments, up to 220 yards in length. Unfortunately, mixing other vendors' equipment into an environment where cable runs exceed 203.5 yards can cause problems. For this reason, keeping total lengths to 203.5 yards is a good idea.
Neither of the coax-based Ethernet specifications lent themselves
well to the structured wiring plants that telco workers had been
building for decades. Using telco-style wiring was seen as necessary
if networked computers were to populate most every desk in the office.
Various vendors realized this and began making Ethernet implementations that could run over standard category 3 twisted pair wiring - the same wiring that drives most every telephone in the world.
The standard eventually came down to supporting 110-yard segments
of category-3 cable with a maximum of two transceivers per cable
(the end station being one and the hub being the other). Standard
RJ-45 phone jacks are used for host connections and transceivers
are almost always built onto the network interface card, making
the connecting hardware and card very economical.
10BASE-F is an Ethernet over fiber-optics specification. Its main
purpose is to provide long Ethernet runs and electrical isolation
either up building risers or between buildings.
Like most other multimode fiber specifications, 10BASE-F segments
can go as long as 1.24 miles and accommodate only two transceivers.
Token-Ring is heavily used in IBM mainframe environments. Its standardization has taken place in the IEEE 802.5 committee. Token passing need not use a ring topology; IEEE 802.4 defines Token Bus. However, the ring topology is good since a station that puts data on the ring can also take it off, and therefore knows whether the data made it all the way around uncorrupted.
Transmission speeds of 4 and 16 Mbps have been standardized. Transmission units are always at least 22 bytes long and their maximum length
is determined by the Token Holding Time (THT), which usually allows
for packets up to approximately 4,500 bytes.
One station on each Token-Ring segment will act as the monitor.
This is usually the first station to enter the network, but each
station must be capable of acting as the monitor. The monitor
has a few very important responsibilities. It must create the
original token, compensate for ring jitter, be able to store one whole token so that the token is occasionally fully removed from the ring, remove unowned or mangled packets from the ring and
finally establish the order of stations on the ring.
Each station must receive and retransmit each packet on the Token-Ring
network, so the major concern in Token-Ring is the aggregate differences
between the clocks on all of the Token Ring cards. This difference
in clock rates - and the potential data loss - is known as 'jitter.'
Almost all of the difficulties associated with multivendor Token-Ring
networks center around jitter problems.
Token-Ring can make use of a wide variety of topologies. The most
common today is through active hubs with end-station runs using
telephone grade wire. However, different limitations exist for four different Token-Ring topologies. For both 4-Mbps and 16-Mbps Token-Ring, there are rules governing use over unshielded twisted pair (UTP) wiring as well as shielded twisted pair (STP).
Until as recently as late 1991, IBM was unwilling to admit that
16-Mbps Token-Ring could or should be run over UTP wiring, preferring
STP wiring. Indeed, the Manchester II encoding used to put Token-Ring
data onto a wire (it's the same encoding mechanism as used for Ethernet) requires a physical signaling rate of 32 MHz, and the
FCC is quite careful about systems that run at these rates as
they can interfere with a number of broadcast technologies. Companies
such as Proteon and SynOptics (now Bay Networks) had shown working 16-Mbps UTP systems, and IBM has since agreed to standardize the practice.
For any ring, 4 Mbps or 16 Mbps, the maximum number of stations
has been set at 260 stations. The limit on the number of stations
is due to total jitter present throughout the ring. Each station
has a small buffer that can be used to compensate for differences
in clock rates around the network. Only the monitor station, however,
is responsible for correcting the ring's apparent jitter. This
allows for the one-bit delay between stations. If more than 260
stations are present on a ring, the monitor may not have enough
"room" in its latency buffer to account for all the
jitter present in the ring.
In reality, it is difficult to build a ring of 260 stations. Somewhat less than 100 is probably a more realistic number. Each adapter must maintain its own clock, and not every adapter on the network may meet the timing requirements for a 260-station ring. Fewer stations is the more practical approach.
Passive token ring MAUs or Multistation Access Units were the original means for building a ring as envisioned by IBM. MAUs were unpowered devices that simply allowed for star-shaped rings, thus permitting a structured wiring plant (like the telephone network). Passive MAUs have given way to active devices that have a number of advantages. Active devices can act as repeaters, and thus alleviate concern for signal degradation due to overall ring length.
Whether active or passive, each MAU has a Ring-IN port and a Ring-OUT
port. These ports are used to extend the ring beyond the MAU.
As with Ethernet, fiber can be used on these ports for distances of up to 1.24 miles, or up to 550 yards of IBM type 1 STP cable. Conversely, NICs are supposed to be able to drive signals on up to 770 yards of type 1 STP cable. Lobes from MAU to end station and back may not exceed 110 yards when using type 1 cable (since 110 + 550 + 110 would total the 770 yards a single station can drive).
If UTP is used for end station runs, then no more than 72 stations
are permitted on the ring. Essentially all type 1 cable measurements can be used; however, a conversion of 1.1 yards of type 1 cable to .495 yards of type 3 cable must be made.
Fiber Distributed Data Interface (FDDI) was the first standardized
100-Mbps technology. From day one, it was and is envisioned to
be a backbone technology. Station management, redundant links
and fairly flexible architecture give FDDI its backbone flavor.
They also make it a fairly expensive technology - especially compared
with other 100-Mbps technologies like Fast Ethernet and 100VG-anyLAN.
As the name implies, FDDI was intended to run on fiber. Standards
have been written for data grade UTP (category 5) as well as STP
wiring. FDDI's raw baud rate is actually 125 Mbps, as FDDI's minimum data units are expanded from four bits to five bits. The additional bit allows bit patterns to be chosen so that long series of '0's are not permitted. FDDI uses a non-return-to-zero, invert-on-1's encoding technique. By not allowing long runs of 0's, FDDI can maintain its clocking at a frequency of 125 MHz rather than the doubled frequencies required by Ethernet (although not Fast Ethernet).
FDDI uses token passing for its media access arbitration just
as Token Ring does. However, rather than specify a flat Token
Holding Time, FDDI uses a Target Token Rotation time. The Token
Holding Time is then calculated by dividing the target rotation
time by the number of active stations on the ring. This addresses
one of the faults of Token Ring - that crowded rings get quite
slow, and stations may not get a chance to send data for 50 milliseconds
or more. On most FDDI rings, the TRT is usually 5 to 10 milliseconds,
assuring that each station will have regular access to the ring.
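The Token Holding Time rule described above, worked through in a short Python sketch with an assumed 10-millisecond target rotation time:

```python
# Token Holding Time on FDDI, per the rule described above:
# divide the target rotation time by the number of active stations.
def token_holding_time_ms(target_rotation_ms, active_stations):
    return target_rotation_ms / active_stations

for stations in (10, 50, 100):
    tht = token_holding_time_ms(10.0, stations)       # a 10-ms target rotation
    print(f"{stations} stations -> {tht:.2f} ms holding time each")
# Even a crowded ring still gives every station a slice of every rotation.
```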
The proper setting for the TRT is some matter for debate. Those who want to see lots of data on the ring and very little time spent passing the token encourage a high TRT. Those who want to see very regular access and are less concerned about ring utilization push for low TRTs. The 5 milliseconds mentioned above should be viewed as a fairly low TRT, 10 milliseconds is moderate and anything above that is a high TRT. This number is usually configurable on a station-by-station basis. When new stations enter the ring, a new TRT will be determined; it will be the lowest TRT requested by any station on the ring.
Jitter is less of a concern on FDDI rings than on Token-Rings.
Each station on the FDDI ring has a buffer that is used to compensate
for differences in clock rates - as opposed to Token-Ring, where only one station is responsible for managing jitter compensation. As a result, FDDI has a maximum station count of 500 nodes. Each ring can be up to 62 miles in length and the distance between stations can be up to 2 km (1.24 miles) using multimode fiber. Single-mode
fiber can be used for distances of 12.4 miles, but get your wallet
out as single-mode transceivers and fiber are extremely expensive.
All stations directly on the ring must be dual-attached stations.
That is, four fibers will be used to build two distinct rings
and each station must be attached to both rings. The secondary
ring is normally not used for data transmission. It is there only
to fix faults that may occur. Packets on the two rings flow in
opposite directions and should there be a fault (broken cable
or down station), stations adjacent to the fault wrap their transmit
and receive lines from the two rings, essentially forming one big ring. This bigger ring may actually be up to twice the normal maximum ring length. The
stations that have wrapped to form the new ring will constantly
probe for a fix in the fault and will return the ring to its normal
operation when the fault is no longer detected.
Single-attached stations, including UTP-attached stations, must go through concentrators to attach to the main ring. Concentrators are active devices that manage the insertion of stations into the ring as well as provide for link integrity tests and other connection
management functions. They also make the architecture flexible
in that an end station with a dual-attach card can be "dual
homed" to two different concentrators. If the primary connection
should fail for any reason, the secondary connection can still
be used to access the ring - a good fault-tolerant option.
We have described ATM briefly here as a cell-based technology.
The technology will be discussed further in a future chapter devoted
just to ATM. It is worth pointing out some of the differences
between ATM and the technologies that we have talked about up
to this point.
ATM is a point-to-point technology. There is no concept of sharing
ATM's media. While this seems like a fairly odd choice, considering
how expensive it can be to dedicate media and bandwidth to each
and every station on the network, it is in fact a fairly logical
choice. ATM was originally conceived as a wide area transport
for use by telcos. In the telco world, the idea of many stations
attaching to the same wire is as outdated as party lines.
This is not to say that two stations' data will never travel over
the same wire, indeed this happens all the time, and must happen
for the whole notion of a network to be reasonable. However, only
two devices share each wire - one on each end. For that reason,
the arbitration mechanisms that we've fairly carefully described
for shared media networks are not appropriate for ATM. Rather,
mechanisms need to be developed to arbitrate the bandwidth that
will be available to two stations through the life of their data exchange.
The ATM specification for traffic flow control is called ABR or
Available Bit Rate. It is a complex specification that must be
implemented in silicon at the same level where packets are segmented
into cells and cells are reassembled into packets. In the short term, this makes the economic promise of simple SAR (Segmentation and Re-assembly) chips a little hard to realize.
Another problem that faces ATM networks is their point to point
nature. By definition, point-to-point, connection-oriented networks
cannot support broadcasts and multicasts. However, we know that
upper-layer protocols like TCP/IP and IPX require broadcasts to operate.
Finding a way to map the existing networking protocols onto ATM
is a complex task. There are two approaches to attacking the problem.
One is called LAN Emulation and it basically follows the same
model as bridging. The other is called MPOA or MultiProtocol Over ATM.
LAN Emulation (LANE) provides mapping of six-byte LAN addresses
into 20-byte ATM addresses as well as providing mechanisms for
setting up virtual circuits between stations wishing to communicate,
providing broadcast resolution mechanisms and handling for unknown
packets. In this way, the ATM network looks like a bridge with
various components exploded throughout the network. The devices
that provide access from a shared LAN technology like Ethernet
into the ATM network are called edge switches. Under LANE, the
edge switches need to send broadcast packets to the device on
the ATM network that can handle them. However, once a virtual circuit is set up, the edge switch sends directly to the intended end station without outside intervention.
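A toy Python sketch of that address-mapping idea; the addresses are invented, and the real LANE servers (the LAN Emulation Server and the broadcast-and-unknown server) are reduced here to a single fallback string.

```python
# Toy LANE-style address resolution: map a six-byte LAN address to the
# 20-byte ATM address of the edge device that can reach it. The addresses
# below are invented, and the real LANE server machinery is not modeled.
lan_to_atm = {
    "00:a0:24:1b:2c:3d": bytes(range(20)),        # a station behind edge switch 1
    "00:60:8c:4d:5e:6f": bytes(range(20, 40)),    # a station behind edge switch 2
}

def resolve(lan_address: str):
    atm = lan_to_atm.get(lan_address)
    if atm is None:
        return "unknown - forward to the broadcast/unknown handling device"
    return f"set up a virtual circuit to ATM address {atm.hex()}"

print(resolve("00:a0:24:1b:2c:3d"))
print(resolve("02:00:00:00:00:01"))
```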
The problem with LANE is that when stations need to communicate
with other stations not on the same emulated LAN, they must go
through a router. That router is a potential bottleneck. A better
solution might be to have the ATM network emulate a router rather
than a bridge. That is, provide mechanisms for resolving addresses
at the network layer and making the edge switches smart enough
to determine the route without a router.
That is essentially what MPOA does. It includes the mechanisms
of LANE and adds a route server that builds routing tables and
pushes them out to the edge switches. The edge switches then need
only consult the table to determine the proper path to configure
for the destination packet.
While this sounds simple, it isn't. Each protocol needs particular
handling and may require more processing than simple routing.
For example, if a packet originates on a Token-Ring node and is
destined for an Ethernet node, the original data packet may be
as large as 4,500 bytes - three times as large as the Ethernet node allows. Each routed protocol handles problems like this in different
ways and each must be accommodated.
Of course, the ideal way for a station to operate on an ATM network
is to set up its own virtual circuits after consulting some central
registry for an address (think of this as directory assistance
or the White Pages). However, we have a couple of decades of development invested in our present applications, and we can't just throw all that away - so LANE and MPOA are important to the
success of ATM.
A number of speeds have been suggested for ATM. Perhaps the first
commonly implemented ATM system was the so-called ATM-TAXI system.
TAXI is a chipset intended to implement FDDI's physical layer
and therefore gave us 100-Mbps ATM. While this technology was
instructional, it will not be commonly used in the end.
The telco industry has settled on 155 Mbps (also known as OC-3) as the basic rate for ATM service. The next step up will be OC-12 or 622 Mbps. The step down from 155 Mbps is still a bit unclear. However, right now IBM's 25.6-Mbps ATM specification is winning favor as it uses many of the physical interface elements of 16-Mbps Token-Ring. In fact, the first switch port and end-station card combination to come to market with a price less than $1,000 was based on this 25-Mbps technology.
The problem for ATM25, as it is known, is that it may not represent
enough of an advantage over switched Ethernet used in conjunction
with LANE or MPOA. On the other hand, virtually any voice or video technology likely to run to personal computers during this century will likely work just fine over a 25-Mbps full-duplex system. They may not all work so well over Ethernet. These three rates, therefore, are likely to be the ones to see significant volume over the coming months and probably years.
As might be guessed, ATM622 will require fiber. ATM155 will run
over fiber or category 5 UTP cabling. ATM25 will work over category
3 or category 5 UTP wiring.