Chapter 5

Purchasing Networking Hardware



In this chapter, you learn how to

Windows NT Server 4.0 is a stable and robust network operating system, equally well suited to environments ranging from small offices to multinational corporations. Choosing the proper hardware to operate it will ensure that your network works reliably and is easy to maintain and expand. Spending some time planning your installation will pay off down the road in fewer problems and easier growth.

Compared with other modern network operating systems, such as Novell NetWare 4.1, Windows NT Server makes surprisingly modest hardware demands. Microsoft's specified minimum requirements, however, are unrealistically low for most situations. Fortunately, Windows NT Server scales very well. Whereas Novell NetWare requires adding substantial amounts of memory to support additional disk storage, Windows NT Server has no such direct correlation between RAM requirements and the amount of disk storage supported. A Windows NT server that's properly configured initially with adequate processor power and RAM will support substantial growth before requiring replacement or significant upgrading of the server.

Selecting a Media Access Method

The first important decision you must make is which media access method to use. This decision is comparable in importance to a railroad's choice of which gauge to use (how far apart do you put the tracks?). Everything else follows from this decision, including your choice of active network components, the type of cabling that you'll install, and ultimately the performance, reliability, and cost of your network.

In Chapter 4, "Choosing Network Protocols," you learn about the OSI Model and the purpose of its various layers. Media access methods are addressed primarily in the second layer, or data-link layer, with some overlap into the first, or physical layer. The data-link layer is the subject of an Institute of Electrical and Electronics Engineers (IEEE) standard.

See "Understanding the OSI Seven-Layer Model," (Ch 4)

The IEEE is a standards body that codifies and publishes official standards documents. Among these standards are the IEEE 802 series, which cover media access methods for local area networks and other related issues. The IEEE develops these standards in cooperation with various industry bodies. When a standard is adopted and promulgated, all manufacturers of equipment addressed by that standard ensure that their products fully comply with the standard in question. This is why, for example, you can freely mix different manufacturers' Ethernet cards and be assured that they'll all talk to each other properly.

ARCnet

One of the oldest media access methods still in use is ARCnet. Developed in 1977 by DataPoint Corporation, ARCnet (for Attached Resource Computer Network) was deservedly popular in the early and mid-1980s. Its combination of simplicity, low cost, and reasonable performance made it a good choice at a time when Ethernet was extremely expensive and Token Ring didn't yet exist.

Thousands of Novell NetWare 2.x networks were installed using ARCnet, many of which are still running today. Nevertheless, ARCnet is an obsolete media access method. If you happen to have an ARCnet network in place, you may choose to run a Windows NT server on it. Otherwise, you shouldn't consider ARCnet.

Token Ring

Introduced by IBM in 1986, Token Ring has a lot going for it, balanced by a few negatives. Token Ring is fast, reliable, and well supported by IBM in both the PC and mainframe environments. Balanced against this are the following factors:

Like ARCnet, Token Ring uses a token-passing access method, but with a somewhat different mechanism. In Token Ring, a station with data to be transmitted first waits for an idle token. It changes the status of the token to busy (creating what's called a burdened busy token), appends a destination address, attaches its data, and passes the burdened busy token to the next station in the ring. Each station in a Token Ring functions as a unidirectional repeater, receiving data from its upstream neighbor, regenerating it, and passing it to its downstream neighbor. Each station in turn regenerates the burdened busy token and passes it along until it reaches its destination. At the destination, the recipient extracts the data from the token and sends an acknowledgment (ACK) to the originating station. On receipt of this ACK, the originating station generates a new idle token and passes it to the next station. Figure 5.1 illustrates a token passing between four computers in the ring.


Figure 5.1: Passing a token between four computers in a Token Ring network.

Because only one token may exist on the ring at any given time, one station is designated the active monitor. This station's responsibility is to watch the ring for abnormal conditions, including the absence of a token, a damaged token, or multiple coexisting tokens. When such a condition is found, the active monitor corrects the problem and returns the ring to normal operation.
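The token-passing sequence just described can be sketched in a few lines of Python. This is purely illustrative; the station names, the simplified frame fields, and the ACK handling are assumptions made for the example, not details of the IEEE 802.5 specification.

# Minimal sketch of Token Ring token passing (illustrative only; station
# names, frame layout, and the simplified ACK handling are invented here,
# not taken from the IEEE 802.5 specification).

class Station:
    def __init__(self, name):
        self.name = name
        self.outbound = None          # (destination, data) waiting to be sent

    def handle(self, token):
        # A station holding data converts an idle token to a busy one.
        if token["state"] == "idle" and self.outbound:
            dest, data = self.outbound
            self.outbound = None
            print(f"{self.name}: seizes idle token, sends to {dest}")
            return {"state": "busy", "src": self.name, "dest": dest, "data": data}

        # The destination copies the data and marks the frame acknowledged.
        if token["state"] == "busy" and token["dest"] == self.name:
            print(f"{self.name}: receives {token['data']!r}, sets ACK")
            token["ack"] = True
            return token

        # The originator strips its acknowledged frame and frees the ring.
        if token["state"] == "busy" and token["src"] == self.name and token.get("ack"):
            print(f"{self.name}: sees ACK, releases new idle token")
            return {"state": "idle"}

        # Every other station simply repeats the token downstream unchanged.
        return token


ring = [Station(n) for n in ("A", "B", "C", "D")]
ring[0].outbound = ("C", "hello")

token = {"state": "idle"}
for _ in range(3):                    # three trips around the ring
    for station in ring:              # unidirectional, station to station
        token = station.handle(token)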

Token Ring uses a star-wired ring topology in which each station is connected by an individual cable run to a concentrator called a multistation access unit (MAU). Token Ring originally operated on 150-ohm shielded twisted-pair (STP) cable arranged in a star-wired ring with data throughput of 4mbps. Newer Token Ring equipment can operate on unshielded twisted-pair (UTP) cable and fiber-optic cable, as well as the original STP cable. The throughput has also been upgraded to 16mbps. Many newer Token Ring components can operate at either 4mbps or 16mbps to ease transition from the older technology to the newer. A ring must be exclusively 4mbps or 16mbps; the two speeds can't coexist on the same ring.

Token Ring, including its enhancements, has been adopted as standard 802.5 by the IEEE. Its market share is second only to Ethernet, although the growth of new Token Ring networks has slowed as Ethernet has increased its dominance. Although large numbers of Token Ring networks are installed, Token Ring remains in essence a niche product. It's primarily limited to exclusive IBM shops and to those locations with a need to interconnect their networks to IBM minicomputers and mainframes. Although IBM's commitment to Token Ring remains strong, even IBM has been forced to bow to market realities and begin offering Ethernet as an option.

Ethernet

Invented by Bob Metcalfe in 1973 and developed as a commercial product by Digital Equipment Corporation, Intel, and Xerox (DIX), Ethernet is the media access method of choice for most new networks being installed today. Ethernet is inexpensive, performs well, is easily extensible, and is supported by every manufacturer of network equipment. Ethernet has become the networking standard by which all others are judged.

Ethernet is based on a contention scheme for media access known as CSMA/CD, or Carrier Sense Multiple Access with Collision Detection. Carrier Sense means that each Ethernet device on the network constantly monitors the carrier present on the cable and can determine when that carrier is idle and when it's in use. Multiple Access means that all Ethernet devices on the cable have an equal right to access the carrier signal without obtaining prior permission. Ethernet is called a baseband system because only one transmission can be present on the wire at any one time. Collision Detection means that if two or more Ethernet devices transmit simultaneously and thereby corrupt the data, the collision is detected and the affected stations retransmit after a delay.

Unlike stations on token-based networks, an Ethernet station is permitted to transmit any time it has data to send. When a station needs to transmit, it first listens to make sure that the carrier is free. If the carrier is free, the station begins transmitting the data.
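A minimal sketch of this listen-then-transmit behavior, including the binary exponential backoff that follows a collision, might look like the following Python. The slot counts and the simulated collision rate are assumptions for illustration, not the exact IEEE 802.3 procedure.

import random

def try_to_send(frame, carrier_busy, collision_occurs, max_attempts=16):
    for attempt in range(max_attempts):
        # Carrier Sense: wait until no other station is transmitting.
        while carrier_busy():
            pass                                    # keep listening

        # Multiple Access: transmit without asking permission.
        if not collision_occurs():
            return f"sent {frame!r} after {attempt} collision(s)"

        # Collision Detection: back off a random number of slot times,
        # doubling the range after each collision (binary exponential backoff).
        slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        print(f"collision on attempt {attempt + 1}; backing off {slots} slots")

    return "gave up: excessive collisions"

# Example run with a quiet wire and a 30 percent simulated collision rate.
print(try_to_send("frame-1",
                  carrier_busy=lambda: False,
                  collision_occurs=lambda: random.random() < 0.3))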

The original Ethernet specification, referred to as 10Base5, operated on 50-ohm thick coaxial cable in a bus topology with a data throughput of 10mbps and a maximum segment length of 500 meters. Concerns about the cost and difficulty of working with thick coax cable brought about the 10Base2 specification, which runs the same data rate over thinner and cheaper RG-58 coaxial cable, but limits segment length to 185 meters. 10Base2 also is called thin Ethernet or thinnet. Figure 5.2 illustrates 10Base2 cabling to two Ethernet NICs.


Figure 5.2: Connections between 10Base2 cabling and Ethernet NICs.

With the introduction of the 10BaseT specification, Ethernet was extended from the original physical bus topology to a physical star topology running on inexpensive UTP cable, again with a data rate of 10mbps. Unlike 10Base2, 10BaseT requires a hub (described later in the "Simple Hubs" section) to interconnect more than two PCs (see fig. 5.3). You can create a two-PC 10BaseT network with a special cross-connect cable. 10BaseT uses RJ-45 modular connectors, shown in figure 5.4, which were originally designed for commercial telephone applications.


Figure 5.3: Three PCs connected by 10BaseT to a hub.


Figure 5.4: An RJ-45 modular connector.

High-Speed Ethernet Variants

As the load on an Ethernet cable increases, the number of collisions increases and the cumulative throughput begins to fall. The traditional solution to this problem has been to divide a heavily loaded network into multiple segments by using bridges, which are described later in the "Bridges" section. The use of bridges to create additional segments essentially breaks the network into multiple neighborhoods. Local traffic within a neighborhood is kept local, and only traffic that needs to cross segment boundaries does so.

To address these perceived Ethernet performance problems, three contending high-speed challengers have arisen. The first, Full Duplex Ethernet, doubles bandwidth to 20mbps. The remaining two, 100BaseT and 100VG-AnyLAN, compete at the 100mbps level. All these competitors propose to achieve their bandwidth increases using existing cable in at least one of their variants.

Full Duplex Ethernet.

Full Duplex Ethernet is one recent scheme for increasing bandwidth beyond the 10mbps available with standard Ethernet. A standard Ethernet device may be transmitting or receiving, but it can't do both simultaneously. Full Duplex Ethernet devices, on the other hand, transmit and receive data simultaneously, thereby doubling effective bandwidth to 20mbps.

Full Duplex Ethernet devices are designed to work on 10BaseT networks using UTP cabling, and achieve their higher data rates simply by placing data on the transmit pair and receive pair simultaneously. Although Full Duplex Ethernet can coexist with standard 10BaseT Ethernet on the same cabling system, implementing Full Duplex Ethernet requires that NICs, hubs, bridges, routers, and other active network components be upgraded or replaced to support it.

Full Duplex Ethernet is likely to disappear as superior high-speed alternatives become available to replace it. Don't consider it for your network.

100BaseT.

Of the two competing 100mbps proposed standards, 100BaseT was first out of the starting gate. 100BaseT remains conceptually close to its Ethernet roots, using CSMA/CD media access combined with one of two signaling methods. The 100BaseX signaling method combines CSMA/CD with the FDDI physical layer specification and runs on two pairs of Category 5 cable; it can also run on STP or fiber. The 4T+ signaling method runs CSMA/CD on four pairs of Category 3 cable.

At the data-link layer, the major change involves timing. CSMA/CD-based systems function on the assumption that all collisions are detected. The maximum cabling length in an Ethernet network is limited by Round Trip Delay (RTD) time. The RTD of the entire network must be small enough that the two devices located farthest from each other can detect a collision in progress. The much higher signaling speed used by 100BaseT means that collisions are present on the cable, and therefore detectable, for a much shorter time. As a result, the maximum end-to-end cabling length for a 100BaseT network is 250 meters, or one-tenth that of a 10BaseT network.
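A back-of-the-envelope calculation shows why a tenfold increase in signaling speed shrinks the collision domain roughly tenfold. The 512-bit slot time and the 0.66c propagation speed used below are standard Ethernet figures; the result deliberately ignores repeater and interface delays, so real-world limits (such as the 250-meter figure above) are considerably smaller.

SLOT_BITS = 512                      # minimum frame: 64 bytes = 512 bit times
PROPAGATION = 0.66 * 3.0e8           # signal speed in copper, meters/second

for rate_mbps in (10, 100):
    slot_time = SLOT_BITS / (rate_mbps * 1e6)        # seconds
    round_trip = slot_time * PROPAGATION             # meters out and back
    print(f"{rate_mbps}mbps: slot time {slot_time * 1e6:.1f} microseconds, "
          f"one-way reach about {round_trip / 2:.0f} meters")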

100VG-AnyLAN.

The second of the 100mbps proposed standards is 100VG-AnyLAN. Although it's often considered to be an extended version of Ethernet, 100VG-AnyLAN is really not Ethernet at all. It replaces Ethernet's CSMA/CD media access method with a demand-priority method more similar conceptually to that used by Token Ring, although access to the cable is arbitrated by a centralized controller rather than by possession of a token. In fact, the 100VG-AnyLAN specification permits using Token Ring frames as well as Ethernet frames.

100VG-AnyLAN is named for a composite of its 100mbps transmission rate, its capability to run on four pairs of Category 3 Voice Grade (VG) cable, and its support for both Ethernet and Token Ring frame types. 100VG-AnyLAN also can be run on two pairs of Category 5 cable, on STP, and on fiber.

With 100VG-AnyLAN, all stations constantly transmit an IDLE signal to the hub. When a station has data to transmit, it sends a request to transmit to the hub. When a request arrives at the hub, the hub transmits an incoming (INC) signal to the other stations. These stations then cease transmitting the IDLE signal, which frees the line for traffic. After the hub receives the frame from the source station and forwards it to the destination, it sends an IDLE signal to all stations that didn't receive the frame, and the cycle begins again.

A request can be either a Normal Priority Request (NPR) or a High Priority Request (HPR). An NPR arriving at the hub is serviced immediately if no HPRs are outstanding. HPRs are always given priority in processing over NPRs. However, if HPRs exist when an NPR arrives at the hub, a timer is started on the NPR. After a certain period passes, the NPR is promoted to an HPR and processed. This demand-priority system allows high-priority traffic such as real-time video and audio frames to be processed isochronously.
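The following Python sketch models this arbitration: the hub grants outstanding HPRs first, but promotes any NPR that has waited past a threshold. The promotion threshold and the queue mechanics are illustrative assumptions rather than figures from the 802.12 specification.

import heapq, itertools

PROMOTE_AFTER = 2                    # ticks an NPR may wait before promotion (assumed)
counter = itertools.count()          # tie-breaker to keep FIFO order

def service(requests):
    # Each request: (priority, arrival_tick, station). Priority 0 = HPR, 1 = NPR.
    queue = []
    clock = 0
    for priority, arrival, station in requests:
        heapq.heappush(queue, (priority, arrival, next(counter), station))

    while queue:
        clock += 1
        # Promote any NPR that has waited longer than the threshold.
        promoted = [(0 if p == 1 and clock - a > PROMOTE_AFTER else p, a, c, s)
                    for p, a, c, s in queue]
        heapq.heapify(promoted)
        queue = promoted

        priority, arrival, _, station = heapq.heappop(queue)
        kind = "HPR" if priority == 0 else "NPR"
        print(f"tick {clock}: hub grants {kind} from {station} (arrived {arrival})")

service([(1, 0, "client-A"), (0, 1, "video-1"), (0, 2, "video-2"),
         (0, 3, "video-3"), (1, 4, "client-B")])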

The major drawback to 100VG-AnyLAN is that the complexity and processing power required at the hub is similar to that required for a switching hub but doesn't provide the benefits of switching. For the cost of the required 100VG-AnyLAN network cards and hubs, you can instead install switched Ethernet with much less effort and superior results.

Other High-Speed Media Access Methods

Several other high-speed Ethernet alternatives exist. Each of these has, at one time or another, been proposed as a desktop standard, and each has failed to become such a standard because of the high cost of NICs and hub ports. As a result, each has been largely relegated to use in network backbones.

Fiber Distributed Data Interface (FDDI).

Developed by the ANSI X3T9.5 Committee, the FDDI specification describes a high-speed token-passing network operating on fiber-optic cable (see fig. 5.5). In its original incarnation, FDDI was intended to operate on dual counter-rotating fiber rings with a maximum of 1,000 clients and a transmission rate of 100mbps. FDDI can support a large physical network. Stations can be located as far as 2 kilometers apart, and the total network can span 200 kilometers. Like Token Ring, FDDI is deterministic and offers predictable network response, making it suitable for isochronous applications such as real-time video or multimedia.


Figure 5.5: A fiber-optic cable with a typical FDDI connector.

Two classes of FDDI stations exist. Class A stations, also called dual-attached stations, attach to both rings using four fiber strands. This offers full redundancy, because failure of one ring simply causes traffic to shift to the other, allowing communication to continue uninterrupted. Class B stations, also called singly attached stations, connect only to the primary ring and do so via a star-wired hub. During a ring failure, only Class A stations participate in the automatic ring reconfiguration process.

FDDI is commonly seen in network equipment rooms, providing the network backbone used to link hubs and other active network components. FDDI to the client is rarely seen, appearing primarily in U.S. government and large corporate locations, where the resistance of fiber-optic cable to electronic eavesdropping outweighs the large additional cost of installing FDDI.

Copper Distributed Data Interface (CDDI).

The FDDI specification actually provides for two physical-layer Physical Media Dependent (PMD) specifications. Traditional FDDI operates on a fiber PMD. CDDI, more properly referred to as Twisted-Pair Physical Media Dependent (TP-PMD), is exactly analogous to FDDI, but runs on Category 5 UTP cable rather than fiber.

The high cost of buying and installing fiber-optic cable was the original motivation for the development of TP-PMD. As fiber installation costs have decreased dramatically, TP-PMD has become harder to justify, because fiber-optic cable offers other advantages. By its nature, fiber-optic cable doesn't generate electrical interference, nor is it subject to interference. Also, because FDDI is often used to link separate buildings on a campus, the fact that fiber is an electrical insulator allows you to use it without considering issues such as lightning protection and building grounds. Don't consider using TP-PMD in your network.

Asynchronous Transfer Mode (ATM).

ATM competes with FDDI for the network backbone. Although it's usually thought of as running at 155mbps on fiber media, ATM hardware is in fact available running at data rates ranging from 1.544mbps T1 through IBM's 25mbps implementation up through the telephone company's OC-48 multigigabit-per-second rates and on various media, including fiber-optic cable and unshielded twisted pair. Although ATM to the desktop has from time to time been proposed as a standard, the extremely high cost of ATM adapters and hub ports has limited ATM to use as a network backbone technology.

ATM is defined at the physical and data-link layers of the OSI Reference Model. It's a high-bandwidth, low-latency transmission and switching methodology similar in concept to traditional packet-switching networks. It combines the high efficiency and bandwidth usage of packet-switching networks with the guaranteed bandwidth availability of circuit-switching networks. ATM uses fixed-length 53-byte cells, each of which contains 48 bytes of data and 5 bytes of header information. Fixed cell length reduces processing overhead and simplifies the design and construction of the very high-speed switches required for ATM speeds.
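A short sketch makes the fixed-cell idea concrete: a message is chopped into 48-byte payloads, each carried behind a 5-byte header for a 53-byte cell. The header layout below is deliberately simplified for illustration and is not the real ATM header format.

CELL_SIZE, HEADER_SIZE, PAYLOAD_SIZE = 53, 5, 48

def segment(data: bytes, vpi: int, vci: int):
    cells = []
    for offset in range(0, len(data), PAYLOAD_SIZE):
        # Pad the final payload so every cell is exactly 53 bytes long.
        payload = data[offset:offset + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        # Simplified 5-byte header: 1 byte of VPI, 2 of VCI, 2 reserved.
        header = vpi.to_bytes(1, "big") + vci.to_bytes(2, "big") + b"\x00\x00"
        cells.append(header + payload)
    return cells

cells = segment(b"A" * 100, vpi=1, vci=42)
print(len(cells), "cells,", all(len(c) == CELL_SIZE for c in cells))
# -> 3 cells, True  (100 bytes of data padded into 3 fixed-length cells)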

The speed and switching capabilities of ATM were major factors in its selection by the ITU (formerly CCITT) as the methodology of choice for broadband Metropolitan Area Networking. ATM is a demand-priority isochronous technology, and therefore suitable for real-time applications such as voice, video, and multimedia.

Media Access Recommendations

If you have an existing ARCnet, Token Ring, or Ethernet network in place that's adequate for your needs, there's no question: use what you have. All of these are supported by Windows NT Server 4.0, and all will provide at least adequate performance.

If you'll be installing a new network or substantially upgrading and expanding an existing one, give strong consideration to using Ethernet. The performance and expandability constraints of ARCnet make it a strong candidate to be replaced when your network needs to grow. Most firms replacing ARCnet networks choose Ethernet over the newer high-performance ARCnet variants. Similarly, many firms considering large-scale expansions of an existing Token Ring network, particularly one running at 4mbps, find that the limited selection and high cost of 16mbps Token Ring NICs and active network components make it possible to replace the entire Token Ring network with 10BaseT Ethernet and still spend less than they would simply to upgrade the Token Ring network.

Consider installing a new Token Ring network only if you find yourself in one of the following two special situations:

Consider installing 100BaseT if you're planning a large network. Dual 100/10mbps NICs are standardized and are available now for little more than 10BaseT 10mbps cards. Although 100BaseT hub ports are still quite expensive, you may want to consider installing 100/10mbps-capable cards with the intent of upgrading your hubs and other active network components later as needed.

In the end, most organizations find 10BaseT Ethernet is the way to go. A properly designed Ethernet network offers high performance and reliability, and does so at a price much lower than that of the alternatives. With a properly designed cabling system, Ethernet networks are easily upgradable by replacing hubs and other central components as needed while leaving most of the network, including client NICs, in place. Also, Ethernet's market dominance ensures that you always have many competing products to choose from. In a competitive market such as Ethernet, manufacturers are constantly improving performance and reliability, and reducing prices.

Designing a Network Cabling System

The importance of cabling to the performance and reliability of your network can't be overstated. The cabling system is to your network what your circulatory system is to your body. Experienced network managers tell you that the majority of problems in most networks are due to cabling and connector problems. A properly designed, installed, documented, and maintained cabling system allows you to avoid these problems and to concentrate on running your network. Craftsmanship is the single most important factor in the reliability of a cabling system.

Another issue addressed by a well-designed and properly documented cabling system is that of maintainability. If your cabling system is a rat's nest, adding a station can turn into an all-day affair as you attempt to locate spare pairs, determine in which equipment closet they terminate, and so forth. By contrast, adding a station on a properly structured cabling system takes only a few minutes. If your company is like most, you probably don't have enough skilled network staff to do all that needs to be done. In terms of time saved, the presence of a good cabling system can be just like having one more full-time staff member.

The question of installing the cabling system yourself versus contracting it out often doesn't receive the attention it deserves. Many companies make the mistake of pricing only the cable and connecting hardware needed to do the job and forgetting that cable installation is a very labor-intensive process. When quotes arrive from the cabling contractors, they're shocked at the price difference between the raw materials cost and the quoted prices, and decide on that basis to do it themselves. For most companies, this is a major mistake. Unless your company is large enough to have staff members dedicated to installing LAN cabling, you should probably contract it out. Don't expect the voice guys in the telecommunications division to install LAN cabling. They don't have the equipment or the experience to do the job properly.

When this chapter was written, professionally installed LAN cabling started at about $50 per run. The price can be much greater, depending on the type of cable required, the number of runs to be made, the difficulty of installation, and so forth. Obviously, you can expect to pay more if you live in New York City than if you live in Winston-Salem.

Having made the decision to contract your LAN cabling out, don't make the mistake of deciding on the installer purely on the basis of price. If you put your cabling contract out to bid, as you should, you'll probably find that most of the low-priced responses come from companies that specialize in business telephone systems. Although these types of contractors may appear to know how to install LAN cabling, many don't.

Probably the best approach when deciding among bids to install cable is to focus on those companies that specialize in installing LAN cabling. When making the final decision, use the following guidelines to help you choose the right cabling company for you.

Network Topologies

You can arrange your cabling in many ways. These are referred to as topologies. Physical topology describes the way in which the physical components of your cabling are arranged. Electrical topology describes the way in which it functions as an electrical circuit. Logical topology describes the way in which the system functions as a whole. Physical and logical topologies don't necessarily go hand in hand. It's possible to have a physical topology of one type supporting a logical topology of another. You also can have a hybrid physical topology.

Only three of the many possible physical and logical topologies are commonly used in LAN cabling: the bus, the star, and the ring.


Figure 5.6: Three PCs connected by a bus, such as 10Base2 Ethernet.

Ethernet runs on a bus topology. The older 10Base5 and 10Base2 Ethernet implementations use a logical bus running on a physical coaxial cable bus. The newer 10BaseT Ethernet implementation uses a logical bus running on a physical UTP cable star. ARCnet also runs on a logical bus. Early ARCnet implementations ran a physical bus on RG-62 coaxial cable. Later ones use a logical bus running on a physical UTP star.


Figure 5.7: Six PCs connected to a hub in a star network.

10BaseT Ethernet and newer implementations of ARCnet run a logical bus on a physical star. Token Ring runs a logical ring on a physical star. No common networking method uses a logical star topology.


Figure 5.8: Four PCs connected in a ring network.

FDDI runs a logical ring on a physical ring. Token Ring runs a logical ring on a physical star.

Cable Types

Cable is available in thousands of varieties, but only a few of these are suitable for connecting a LAN. You don't need to know everything about cable to install a LAN successfully, but you should know at least the basics. The following sections describe the main types of cable used for LANs, and explain how to select the cable that's most appropriate for your environment.

Coaxial Cable.

Coaxial cable, called coax for short, is the oldest type of cable used in network installations. It's still in use and is still being installed today. Coax comprises a single central conductor surrounded by a dielectric insulator, which is in turn surrounded by a braided conducting shield (see fig. 5.9). An insulating sheath covers this entire assemblage. Coaxial cable is familiar to anyone with cable television.


Figure 5.9: Construction of a doubly shielded coaxial cable.

Coax has a lot going for it. It's a mature technology, is inexpensive, offers high data throughput, and is resistant to electrical interference. These advantages are balanced by a few disadvantages. Coax is difficult to terminate properly and reproducibly. Because coax is typically used in a physical bus topology, many connections are required, increasing the chances of a bad connection taking down the entire network.

The following three types of coax cable are commonly seen in network installations:

If the network you're building is very small (fewer than 10 clients, all located in close physical proximity), you may be tempted to use thinnet coax to avoid the cost of buying a hub. If you insist on doing so, at least buy premade cables in the appropriate lengths to connect each device rather than try to terminate the coax yourself. For any larger network, don't even consider using coax.

Shielded Twisted-Pair Cable.

Shielded twisted-pair (STP) cable comprises insulated wires twisted together to form pairs. These pairs are then twisted with each other and enclosed in a sheath. A foil shield protects the cable against electrical interference. Depending on the type of cable, this shield may lie immediately beneath the sheath, protecting the cable as a whole, or individual pairs may be shielded from other pairs. Figure 5.10 shows the construction of a typical STP cable.


Figure 5.10: Construction of a typical shielded twisted-pair cable.

The sole advantage of STP cable relative to UTP cable is the greater resistance of STP cable to electromagnetic interference. STP cable is very expensive, typically costing from two to five times as much as equivalent UTP cable. IBM Token Ring was originally designed to run on STP cable, and that's still about the only place where you're likely to see it installed. Most new Token Ring installations are done using UTP cable. Don't consider using STP cable for your network.

Unshielded Twisted-Pair Cable.

Unshielded twisted-pair (UTP) cable dominates network cabling installations today. Like STP cable, UTP cable is constructed from twisted pairs of wire, which are then in turn twisted with each other and enclosed in a sheath (see fig. 5.11). The UTP cable normally used for client cabling includes four such twisted pairs, although UTP cable is available in pair counts ranging from two to more than 1,000. The twist reduces both spurious emissions from the cable and the effects of outside electromagnetic interference on the signal carried by the cable. UTP cable is available in thousands of varieties, differing in pair count, sheath type, signal-carrying capability, and other respects. UTP cable used for network wiring has a nominal impedance of 100 ohms.


Figure 5.11: A UTP cable with four pairs.

UTP cable is now the de facto and de jure standard for new network cabling installations. It's inexpensive, easy to install, supports high data rates, and is reasonably resistant to electrical interference, although less so than the other types of cable discussed here. Typical four-pair UTP cable ranges in price from 3 cents to 50 cents per foot, depending on the signal-carrying capability of the cable and the type of sheath. Don't consider using anything but UTP cable for your network. Even small networks benefit from the advantages of UTP cable.

On casual examination, a piece of UTP cable appears to be a simple item. Nothing could be further from the truth. A quick glance at just one manufacturer's catalog reveals hundreds of different types of UTP cable. Only a few of these are appropriate for any given task, and one is probably exactly right. The following sections discuss some of the variables to consider when specifying UTP cable.

Conductor Size.

UTP cable is commonly available in 22, 24, and 26 AWG (American Wire Gauge) using either solid or stranded conductors. Although 22 AWG cable has some advantage in terms of the distance a signal can travel without degradation, this is primarily of concern in analog phone systems and is outweighed by the additional cost and other factors for data wiring. Data station cabling should be done using 24 AWG solid conductor cable. Use stranded cable only for patch cords and similar jumpers, in which the flexibility of stranded conductors is needed.

Pair Count.

UTP cable is available in pair counts ranging from two to more than a thousand. UTP cable runs made to the client are almost universally four-pair cables. High pair-count cables (25-pair, 50-pair, 100-pair, and greater) are used primarily in voice wiring as riser cables to concentrate station runs and link floors within a building. Pair count is inextricably linked to the signal-carrying capability of the cable, with higher signaling rates being limited to smaller cables.

Until 1994, not even 25-pair cable was available that met the minimal Category 3 data standards. Although such 25-pair cable is now available from AT&T, you should still avoid using riser cables to carry data. When you find yourself wanting a riser cable, it probably means that you should instead install a hub at that location and use fiber or another backbone media to link that hub to the rest of the network.

Sheath Material.

You can buy UTP cable, designed for inside wiring, with either general-purpose sheathing or plenum-rated sheathing. The sheath of general-purpose cable is usually constructed of polyvinyl chloride (PVC). PVC sheathing is acceptable for wiring that runs inside conduit or in spaces not used to route HVAC (Heating, Ventilation, and Air Conditioning) source and return air. Plenum cable uses sheathing constructed of Teflon or a similar compound. Its use is mandated by law and by the National Electrical Code (NEC) if the cable is exposed within an HVAC plenum. When exposed to flame, PVC sheathing produces toxic gases, whereas plenum sheathing doesn't.

Unfortunately, plenum sheathing is extremely expensive. The same box of Category 5 cable that costs $130 with PVC sheathing may cost nearly $400 with plenum sheathing.

Don't even think about using PVC cable where you're required by the NEC to use plenum. First, doing so breaks the law. Second, if an inspector finds PVC cable where you should have used plenum, you're required to remove and replace it. Third, if a fatal fire ever does occur at your site, you would hate to wonder for the rest of your life whether your decision to save a few bucks on cable cost someone his life.

Quality Grades.

UTP cable varies widely in terms of how fast a signaling rate it supports and over what distance. The terms Type, Level, and Category are often used, and misused, interchangeably. The IBM Cabling System defines a variety of cables, each of which is designated a Type. Underwriters Laboratories (UL) runs a LAN cabling certification program that assigns Level ratings to various grades of UTP cabling. The Electronic Industries Association/Telecommunications Industry Association (EIA/TIA) has a similar program that rates UTP cables by Category.

To offer just one example, IBM Type 3 is a UTP cable used in the IBM Cabling System, and is, in fact, the only UTP cable defined within that system. Type 3 cable is intended to carry analog telephone signals and low-speed data traffic: 1mbps and 2mbps terminal traffic and 4mbps Token Ring. In terms of electrical characteristics and signal-carrying capability, Type 3 cable corresponds roughly to UL Level 2 cable. The lowest Category assigned by the EIA/TIA system is Category 3, which is substantially superior to Type 3. UL rates cable at Levels 1 through 5, with Levels 3, 4, and 5 corresponding to EIA/TIA Categories 3, 4, and 5. So although you may hear someone refer to Category 2 cable, no such thing exists.

You can spend a long time learning everything there is to know about UTP cable. Fortunately, you don't need to. The EIA/TIA has developed minimum specifications (categories) for grades of cable appropriate for various uses. All you have to do is pick a cable category that meets your requirements and your budget.

Category 3 cable (Cat 3, for short) is the minimum grade you should consider for any new cable installation. Using two of its four available pairs, Cat 3 cable supports data rates as high as 10mbps Ethernet, and can be used over short runs for 16mbps Token Ring. Cat 3 can support the newer 100mbps technologies such as 100BaseT and 100VG-AnyLAN, but only by using all four pairs. Cat 3 typically costs about $60 per 1,000-foot spool in PVC sheathing, and about $125 per spool in plenum sheathing. Cat 3 is now used almost exclusively for voice cabling, with only about 10 percent of new data cabling being done using Cat 3.

Consider very carefully before you decide to install Category 3 cable. Although you can certainly save a few dollars now by installing Cat 3 instead of a higher-grade cable, that wire is going to be in your walls for a long time. In five or ten years, you may rue the decision to install Cat 3. Do so only if your budget leaves you no other option.

Category 4 (Cat 4) cable is obsolete. Its reason for existing was simply that it was marginally better than Cat 3, allowing 16mbps Token Ring networks to operate over greater distances than are supported on Cat 3. It's now stuck in the middle: not much better than Cat 3 and not much cheaper than Cat 5. Don't consider Category 4 cable for your network.

Category 5 (Cat 5) cable is the cable of choice for most new network installations. Cat 5 cable supports 100mbps data rates using only two pairs, and offers the potential for carrying data at gigabit rates. Cat 5 typically costs about $130 per 1,000-foot spool in PVC sheathing, and about $350 per spool in plenum sheathing. About 80 to 85 percent of all new data cabling is done using Cat 5 cable. The huge demand for Cat 5 cable has left manufacturers of the sheathing material unable to keep up, particularly for the plenum variety, so Cat 5 cable can be hard to come by.

For flexibility, some organizations have gone to the extent of specifying Cat 5 cable for all their cabling, including voice. You should specify Cat 5 cable for your network. Because the majority of the cost to run cable is in labor, you'll likely pay only a 10 to 20 percent overall premium to use Cat 5 rather than Cat 3. This additional cost can easily be justified for most organizations by the additional flexibility Cat 5 offers over the 15- to 25-year life of a typical cabling plant.
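Using this chapter's own figures (roughly $50 per professionally installed run, and $60 versus $130 per 1,000-foot PVC spool), a quick calculation shows how modest the whole-job premium for Cat 5 is. The 150-foot average run length and the 100-run job size below are assumptions made for illustration.

RUNS = 100
AVG_RUN_FEET = 150
LABOR_AND_HARDWARE_PER_RUN = 50.00     # from the "$50 per run" figure above

def job_cost(cost_per_1000_ft):
    cable = RUNS * AVG_RUN_FEET * (cost_per_1000_ft / 1000)
    return RUNS * LABOR_AND_HARDWARE_PER_RUN + cable

cat3, cat5 = job_cost(60), job_cost(130)
print(f"Cat 3 job: ${cat3:,.0f}  Cat 5 job: ${cat5:,.0f}  "
      f"premium: {100 * (cat5 - cat3) / cat3:.0f}%")
# -> roughly an 18 percent overall premium under these assumptions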

Fiber-Optic Cable.

Fiber-optic cable (usually called fiber) has become increasingly common in network installations. Just a few years ago, fiber was an esoteric and expensive technology. Although fiber-optic cable was initially positioned for use as a network backbone, most networks continued to use coax to carry backbone traffic. Fiber itself was expensive, the active components were costly and in short supply, and even something as simple as putting a connector on a fiber-optic cable was a difficult and expensive process. In recent years, large reductions in the cost of the fiber media itself, improvements in the means used to terminate it, and the widespread availability of inexpensive fiber-based active network components have made fiber-optic cable a realistic alternative for use as a network backbone.

Fiber-optic cable offers the highest bandwidth of any cable type. It has very low signal attenuation, allowing it to carry signals over long distances. The complete absence of crosstalk allows multiple high-bandwidth links to exist within a single sheath. Fiber-optic cable is completely immune to electromagnetic interference and, because it doesn't radiate, is secure against eavesdropping.

Fiber-optic cable is much smaller in diameter and lighter than copper cables capable of carrying the same bandwidth, and is therefore easier to install. Lastly, because many fiber-optic cables have no metallic components, they are electrical insulators and can therefore be used to link buildings without concerns about lightning protection and grounding issues.

Set against all these advantages are a few disadvantages. The installed cost of fiber-optic cable is still somewhat higher than that of copper, and the active network components designed to work with it also cost more. Also, fiber-optic cable is typically more fragile than copper cable, requiring careful attention to bend radii limits and handling requirements during installation.

Structured Cabling Systems

The Electronic Industries Association and the Telecommunications Industry Association (EIA/TIA) recognized the need for a truly standards-based wiring specification. In cooperation with IBM, AT&T, DEC, and other vendors, the EIA/TIA developed the EIA/TIA 568 Commercial Building Telecommunications Wiring Standard. This specification incorporated the best features of each competing standard to produce a composite standard that supports any vendor's equipment while ensuring backward compatibility with existing cabling systems installed under each participating vendor's proprietary cabling standards. The vendors, in turn, updated their own specifications to full compliance with EIA/TIA 568. As a result, you can buy a cabling system from many vendors with the assurance that, as long as it's EIA/TIA 568 compliant, it will support any industry-standard networking scheme.

If you buy an installed cabling system, you should specify full compliance with both the EIA/TIA 568 Wiring Standard and the EIA/TIA 569 Pathways and Spaces Standard. If you design and install the cabling system yourself, you should pay close attention to the provisions of both the EIA/TIA 568 and 569 specifications, even if you don't intend your cabling system to achieve full compliance.

Both of these specifications can be purchased from Global Engineering Documents at 800-854-7179. Order these documents if you plan to design and install the cabling system yourself. If you intend to contract the installation, you probably don't need to buy the documents.

Cabling System Recommendations

Whether your cabling system needs to support five clients in a small office or 500 clients in a multibuilding campus, you should adhere to several guidelines to ensure that your cabling system provides the service that you expect:

Designing for Expandability and Ease of Maintenance

The first principle is to overwire your station runs. It costs only a little more to run two cables than it does to run one. Even if you have a location that you're absolutely positive will never need more than one cable run, go ahead and run a spare anyway, leaving it unterminated with enough slack on both ends for future use. Although you may end up with several $10 pieces of cable left unused in the wall for eternity, the first time you need one of those spare runs, it will more than pay for every extra run you made. Don't scrimp on cable runs. They're cheap and easy to make when the cabling system is being installed. They're expensive and difficult, or impossible, to make later on.
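A quick sketch of the arithmetic behind this advice, using the $10-per-spare-cable and $50-per-installed-run figures from this chapter; the threefold retrofit multiplier is an assumption for illustration.

SPARE_CABLE_COST = 10.00          # unterminated spare pulled with the first run
INSTALLED_RUN_COST = 50.00        # professionally installed run, from this chapter
RETROFIT_MULTIPLIER = 3           # assumed premium for opening finished walls later

spares_pulled = 20                # e.g., one spare at each of 20 locations
spares_ever_needed = 3            # even a few later additions tip the balance

cost_of_overwiring = spares_pulled * SPARE_CABLE_COST
cost_of_waiting = spares_ever_needed * INSTALLED_RUN_COST * RETROFIT_MULTIPLIER
print(f"overwire now: ${cost_of_overwiring:.0f}, retrofit later: ${cost_of_waiting:.0f}")
# -> overwire now: $200, retrofit later: $450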

The second principle is to overwire your backbone and riser runs. If a 25-pair riser does the job now, put in a 50-pair or even a 100-pair riser. If a six-fiber cable carries your present traffic, install a 12-fiber cable instead. The cable itself is cheap. What costs money is labor, and most of the labor is spent terminating the cable. By installing more pairs than you need and leaving them unterminated, you have the best of both worlds. Having the extra pairs in place costs next to nothing, but they are there and can be terminated if and when you need them.

The third principle is to pay close attention to the physical environment of your wiring closets and equipment rooms. Nothing is more miserable than trying to maintain a cabling system in an overheated or freezing closet while holding a flashlight in your teeth. Make sure that your wiring closets and equipment rooms are well-lighted, properly ventilated, and provided with more electrical receptacles than you think you'll ever need. Your equipment will be much happier, and so will you when you need to make changes to the cabling.

Selecting Active Network Components

If the cabling system is the arteries and veins of your network, the active network components are its heart. By itself, the cabling system is just dead wire. Active network components bring that wire to life, putting data on it and allowing the various parts of your network to communicate with each other.

If you remember only one thing from this section, make it this: The single most important factor in choosing active network components is to maintain consistency. Having chosen one manufacturer as your supplier, it's easy to be seduced by another manufacturer's component that's a bit cheaper or has a nice feature or better specifications. Don't fall into this trap. You'll pay the price later in dollars, time, and aggravation when you find that you can't solve a problem related to the interaction of the two manufacturers' components because neither will take responsibility, or that you have to buy a second $5,000 network management software package because neither manufacturer's management software will work with the other's product.

Network Interface Cards

A network interface card, commonly referred to as a NIC, is used to connect each device on the network to the network cabling system. NICs operate at the physical and data-link layers of the OSI Reference Model. At the physical layer, a NIC provides the physical and electrical connection required to access the network cabling and to use the cabling to convey data as a bit stream. At the data-link layer, the NIC provides the processing that assembles and disassembles the bit stream on the cable into frames suitable for the media access method in use. Every device connected to the network must be equipped with a network interface, either by means of a peripheral NIC or by means of similar circuitry built directly into the component.

A NIC is media-dependent, media-access-method-dependent, and protocol-independent. Media-dependent means that the NIC must have a physical connector appropriate for the cabling system to which it's to be connected. Media-access-method-dependent means that even a card that can be physically connected to the cabling system must also be of a type that supports the media access method in use. For example, an Ethernet card designed for UTP cable can be connected to but doesn't function on a Token Ring network wired with UTP cable. Protocol-independent means that a particular NIC, assuming that appropriate drivers are available for it, can communicate using a variety of higher-level protocols, either individually or simultaneously. For example, an Ethernet NIC can be used to connect a client simultaneously to a LAN server running IPX/SPX and to a UNIX host running TCP/IP.

It's common in the industry to differentiate between NICs intended for use in clients and those intended for use in servers. Conventional wisdom says that inexpensive NICs are fine for use in a client where performance is less of an issue, but that for your server you should buy the highest-performance NIC you can afford. In fact, although you can use any NIC in any computer system it fits (client or server), you should pay close attention to selecting NICs for both your clients and your servers. Installing high-performance NICs in your server and using low-performance NICs in your clients is actually counterproductive. In this situation, because the server NIC can deliver data so much faster than the client NIC can accept it, the network is flooded with retransmissions requested by the client NIC. These constant retransmissions greatly degrade network performance. The solution is simple. Balance the performance of the NICs you use in your clients with the performance provided by the server NIC.

The following sections provide guidance on choosing the right NIC for clients and servers.

NIC Bus Types for Client PCs.

The first consideration in choosing a client NIC is the client's bus type. There are two schools of thought on this issue:

There's something to be said for each of these positions, but, on balance, standardizing on ISA NICs for 10mbps clients usually makes sense. Clients using 100BaseT NICs, however, are likely to have PCI buses; thus, you're likely to need to support both ISA and PCI NICs.

For clients running Windows NT Workstation 4.0, make sure that the client NIC is listed on the Windows NT Hardware Compatibility List (HCL) described later in the "Conformance to the Microsoft Windows NT Hardware Compatibility List" section.

Software-Configurable or Jumper-Configurable NICs.

A NIC has various settings that usually need adjustment before you can use the card. Nearly all cards are configurable for IRQ. Many require setting DMA, base address, and media type. All ARCnet cards and some Token Ring cards allow the network address to be set by the user.

One important factor that contributes to the usability of a NIC is whether these settings are made using physical jumpers or by using a software utility. A card that requires setting physical jumpers is inconvenient. Making a change involves disassembling the computer, removing the card, setting the jumpers, and then putting everything back together. It's often necessary to go through this process just to examine how the card is now set. A software configurable card, on the other hand, can be both examined and set simply by running a program. Fortunately, most cards shipping today are configurable with software. Don't consider buying one that isn't.

Remote Boot Clients.

Some NICs support remote boot. Remote-boot capability means that the NIC carries a boot ROM containing the code needed to start the client and attach it to the network; the boot image files themselves are generated by the network administrator, stored on the server, and downloaded by the boot ROM when the client starts.

Several years ago, diskless workstations had a brief vogue based on a two-fold justification:

The Network Computer (NC), proposed by Larry Ellison of Oracle Corp. and other competitors of Microsoft, is similar in concept to a diskless workstation. The NC contains sufficient software in read-only memory (ROM) to boot the operating system and connect to a TCP/IP network. NCs download applications, such as a Web browser, from the network and run the applications in RAM. There's no reason, however, that an NC can't be designed to boot from the network.

Name-Brand NICs vs. Clone NICs.

The best reason to pay the premium for a name-brand NIC is that part of that premium goes to pay for better quality control and reliability. Experienced network administrators who have worked with thousands of name-brand and clone NICs over the years report that they seldom see a name-brand NIC that was damaged out of the box, and almost never see one fail for any reason other than an easily explained problem such as a lightning strike. The choice is yours, but most LAN administrators recognize that the relatively small amount of money saved by buying clone NICs can be rapidly swamped by the cost of just one network failure attributable to using cheap cards.

Server NICs.

All the same factors that apply to choosing client NICs also apply to choosing server NICs, with a few more. Whereas a client NIC is responsible for handling only the traffic of its own client, a server NIC must process traffic from many clients in rapid sequence. Doing so demands a high-speed link to the I/O system of the server. Accordingly, NICs intended for use in servers are universally designed to use a high-speed bus connection.

Until recently, the requirement for speed meant that server NICs were either EISA or MCA cards. The advent of PCI, which transfers 4 bytes at a time on a 33 MHz mezzanine bus for a total throughput of 132MB/sec, has made both EISA and MCA obsolete. The pending extension of PCI to 64 bits will allow it to transfer 8 bytes at a time, doubling throughput to 264MB/sec. All newly designed servers are PCI-based. If they offer EISA or MCA slots, it's only for compatibility with legacy expansion cards. Even IBM, the originator and long the sole champion of MCA, has begun offering PCI-based servers.
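The throughput figures quoted above follow directly from the bus width and the clock rate; here's the arithmetic as a quick sketch.

CLOCK_HZ = 33_000_000
for width_bytes in (4, 8):            # 32-bit PCI and the 64-bit extension
    peak = width_bytes * CLOCK_HZ / 1_000_000
    print(f"{width_bytes * 8}-bit PCI at 33 MHz: {peak:.0f}MB/sec peak")
# -> 32-bit PCI at 33 MHz: 132MB/sec peak
# -> 64-bit PCI at 33 MHz: 264MB/sec peak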

Another factor that's of little concern in client NICs but more important in server NICs is processor usage. All NICs require some assistance from the system processor, but a properly designed server NIC should minimize this dependence. This issue is of particular concern with 100BaseT server NICs. In some cases, for instance, a server equipped with four such NICs may lose more than 50 percent of its processor capacity just to handling overhead for the NICs. Find out what level of processor usage your proposed server NICs require, and favor those with low usage.

Deciding which NIC to install in your server is straightforward. If you have a PCI server (as recommended in the later section "Choosing a Bus Type"), install a name-brand PCI NIC. In choosing the brand of NIC, simplify global network management issues by giving first preference to a card made by the manufacturer of your other active network components. Alternatively, choose a card made by the server manufacturer to simplify server management.

Make sure that your server NIC is listed on the Windows NT Hardware Compatibility List (HCL) described later in the "Conformance to the Microsoft Windows NT Hardware Compatibility List" section.

One of the most valuable pieces of real estate in your network is a server expansion slot. Even though a typical server has many more slots than a client, the competition for these slots is intense. Multiport NICs combine the function of more than one NIC onto a single card. They're available in dual- and quad-NIC versions from a variety of suppliers. If your LAN runs Ethernet, and if you segment it, consider carefully how many server slots you have available and whether you should buy a dual-port or quad-port NIC.

Repeaters

Repeaters are used to clean up, amplify, and rebroadcast a signal, extending the distance that the signal can be run reliably on a particular type of cable. They function exclusively at the physical layer of the OSI Reference Model. Repeaters are media-dependent and protocol-independent. As purely electrical devices, repeaters are unaware of the content of the signals that they process. They simply regenerate the signal and pass it on. Figure 5.12 illustrates, in the context of the OSI Model, a repeater between two network segments.


Figure 5.12: Connecting two network segments with a repeater.

Local repeaters are used to extend signaling distance within the confines of a LAN. Remote repeaters are used to extend a LAN segment, often by means of fiber-optic cable, to a remote location without requiring installation of a bridge or router. The clients at the remote site appear logically to be on the same local segment as that to which the remote repeater is attached.

Hubs, bridges, routers, gateways, and all other active network components include basic repeater functionality. Use stand-alone local repeaters as needed to manage cable length limitations within your network, or to extend your LAN to remote locations that don't justify installation of a bridge or router.

Simple Hubs

Hubs are used as concentrators to join multiple clients with a single link to the rest of the LAN. A hub has several ports to which clients are connected directly, and one or more ports that can be used in turn to connect the hub to the backbone or to other active network components.

A hub functions as a multiport repeater. Signals received on any port are immediately retransmitted to all other ports on the hub. Hubs function at the physical layer of the OSI Reference Model. They're media-dependent and protocol-independent. Hubs come in a wide variety of sizes, types, and price ranges.
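As a sketch of the multiport-repeater behavior just described, the following hypothetical Python class retransmits whatever arrives on one port out every other port, with no notion of addresses or frames. The class and method names are illustrative, not drawn from any product.

class Hub:
    def __init__(self, port_count):
        self.ports = {n: [] for n in range(port_count)}   # per-port outbound queues

    def receive(self, in_port, signal):
        for port, queue in self.ports.items():
            if port != in_port:            # repeat to every port except the source
                queue.append(signal)

hub = Hub(port_count=8)
hub.receive(in_port=2, signal="bits from client on port 2")
print(sum(len(q) for q in hub.ports.values()))   # -> 7: all other ports hear it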

Stand-Alone Hubs.

Stand-alone hubs are simple and inexpensive devices, intended primarily to serve a self-contained workgroup in which the server is connected to one port and the clients to other ports, and no interconnection to a larger network exists. They range in size from four to 24 ports. Depending on port count, stand-alone hubs vary from about the size of a modem to about the size of a pizza box. They normally operate on low voltage supplied by a power brick similar to that used to power external modems and consume little power.

Stand-alone hubs offer little or no expandability. This isn't usually a problem because these hubs are inexpensive and can simply be purchased in a size appropriate for your needs. Stand-alone hubs usually offer no provision for management, although high-end stand-alone hubs may have management features standard or as an option. Manageability (the ability to control the hub from a remote management station) isn't normally a major issue in the small workgroup environment for which these hubs are intended.

A typical name-brand, eight-port, stand-alone, unmanaged hub has a street price of less than $200, or less than $25 per port. Hubs with higher port counts may cost somewhat more on a per-port basis. Hubs that include management features may approach $100 per port. If your network comprises only a few clients that are all located in close proximity, a stand-alone hub is an excellent and inexpensive way to connect these systems.

Make sure that any stand-alone hub you buy has jabber protection. This feature monitors each port for constant uncontrolled transmission that will flood the network. If this condition is detected, jabber protection temporarily shuts down the offending port while allowing the remainder of the ports to continue functioning.

Stackable Hubs.

Stackable hubs are a fairly recent development. In the past, you had to choose between a stand-alone hub, which provided little functionality but at a low price, and an enterprise hub (also called a chassis hub), which provided enterprise-level functionality but at a very high cost. Stackable hubs do a good job of blending the best characteristics of both other types of hub. They offer much of the expandability and management capabilities of enterprise hubs at not much greater cost than stand-alone hubs.

What distinguishes a true stackable hub is that it possesses a backplane used to link it directly to a similar hub or hubs, melding the assembled stack into a single hub running a single Ethernet bus. Although stand-alone hubs can be physically stacked and joined simply by connecting a port on one to a port on the next, the result isn't a true stack. To understand why, you need to understand that Ethernet places a limit on the maximum number of repeaters that are allowed to separate any two stations. A device connected to any port of any hub in a stack of stackable hubs is connected directly to the single bus represented by that stack. As a result, any two devices connected to that stack can communicate while going through only the one repeater represented by that stack. The stack of stand-alone hubs, on the other hand, results in two stations connected to different hubs within that stack going through at least two repeaters to communicate.

Stackable hubs are commonly available in 12-port and 24-port versions. Some manufacturers also offer 48-port versions. The street price of stackable hubs ranges from about $75 to $250 per port, depending on port density and what, if any, management options are installed. For most organizations, these hubs offer the best combination of price, features, and expandability available.

Hub Manageability.

Manageability is another consideration in choosing stand-alone or stackable hubs. This is simply a hub's capability to be configured and monitored from a remote client running specialized software. Although protocols such as Simple Network Management Protocol (SNMP) and Remote Monitoring (RMON) are standardized, their implementations aren't. This means that it's unlikely that you'll be able to manage one manufacturer's hub with remote software intended for use with another manufacturer's hub. (This is yet another reason to stick with one manufacturer for your active network components.)

Although hub manufacturers attempt to represent dumb or unmanaged hubs as a separate category from smart or manageable hubs, the reality is that manageability is just one more feature that may be standard, optional, or unavailable on a particular hub. Most low-end stand-alone hubs aren't manageable and can't be upgraded. High-end stand-alone hubs and most stackable hubs either come with management standard, or at least offer it as an option. Unless yours is a very small network, you should buy manageable hubs, or at least ones that can be upgraded to manageability.

Bridges

Bridges are used to divide a network into mutually isolated segments while maintaining the whole as a single network. Bridges operate at the data-link layer of the OSI Reference Model (see fig. 5.13). They work with frames, which are organized assemblages of data rather than the raw bit stream on which hubs, repeaters, and other physical-layer devices operate.


5.13

Connecting two network segments with a bridge.

Bridges are media-dependent. A bridge designed to connect to UTP cabling can't connect to a coax cable, and vice versa. Bridges are protocol-independent above the data-link layer. It doesn't matter to an Ethernet frame or to the bridge that directs it whether that frame encapsulates an IP packet, an IPX packet, or some other type of packet at the logical level. A bridge simply sees that the frame originated from a particular hardware address and needs to be sent to another hardware address.

Frames include, in addition to the raw data itself, a header that identifies the address of the source station and the address of the destination station. Frames use physical rather than logical addresses. When a client transmits an Ethernet frame to the server, the source address is the MAC or hardware address of the client's Ethernet card, and the destination address is the MAC address of the Ethernet card in the server.

A bridge divides a single network cable into two or more physical and logical segments. The bridge listens to all traffic on all segments and examines the destination hardware address of each frame. If the source and destination hardware addresses are located on the same segment, the bridge simply discards that frame because the destination can hear the source directly. If the source and destination addresses are located on different segments, the bridge repeats the frame onto the segment where the destination address is located. Traffic with both source and destination addresses on the same segment is kept local to that segment and isn't heard by the rest of the network. Only traffic whose source and destination addresses are on different segments is broadcast to the network as a whole.
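To make the forward-or-discard decision concrete, the following minimal Python sketch models it. The hardware addresses, the address-to-segment table, and the function name are invented for illustration; a real bridge learns its table automatically by watching the source addresses of the frames it hears, rather than being configured by hand.

    # Illustrative address-to-segment table; a real bridge builds this
    # by noting which segment each source address appears on.
    segment_of = {
        "00-00-C0-11-22-33": "A",   # client on segment A
        "00-00-C0-44-55-66": "A",   # another client on segment A
        "00-00-C0-77-88-99": "B",   # server on segment B
    }

    def bridge_frame(source, destination):
        """Return the segment to repeat the frame onto, or None to discard it."""
        if segment_of[source] == segment_of[destination]:
            return None                    # both stations local; discard
        return segment_of[destination]     # repeat onto the other segment

    # Traffic between two segment-A stations stays on segment A:
    print(bridge_frame("00-00-C0-11-22-33", "00-00-C0-44-55-66"))   # None
    # Traffic from segment A to the server on segment B is forwarded:
    print(bridge_frame("00-00-C0-11-22-33", "00-00-C0-77-88-99"))   # B

The important point is that the bridge needs nothing more than the two hardware addresses to decide whether a frame crosses between segments.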

When used properly, bridges can significantly reduce the total volume of traffic on the network. This is particularly important on Ethernet networks, because as the network grows and the traffic volume increases, collisions become more frequent and the overall performance of the network degrades.

Another important characteristic of bridges is that they reset the repeater count. Ethernet allows a maximum of four repeaters to intervene between the source and destination stations. By using a bridge, a frame on the source segment may pass through as many as the legal limit of four repeaters before reaching the bridge. When the bridge places that frame on the destination segment, it may again pass through as many as four repeaters on its way to the destination station. This can be an important design consideration in Ethernet networks, particularly those that are large and physically widespread.

Defined in the IEEE 802.1D specification, the Spanning Tree Algorithm (STA) detects loops within a bridged network. STA allows a bridge to determine which available alternative path is most efficient and to forward frames via that path. Should that path become unavailable, a bridge using STA can detect that failure and select an alternate path to prevent communications from being severed.

Routers

Routers are used to connect one network to another network. Routers operate at the network layer of the OSI Reference Model (see fig. 5.14). They work with packets, which carry logical addressing information and are themselves encapsulated within data-link layer frames.


5.14

Connecting two network segments with a router.

Routers are similar to bridges but are media-independent. A router designed to process IP packets can do so whether it's physically connected to UTP cabling or to a coax cable. Routers are protocol-dependent above the data-link layer. A router designed to process IP packets can't process IPX packets and vice versa, although multiprotocol routers do exist that are designed to process more than one type of network-layer packet.

Packets are encapsulated within a data-link layer frame. Packets include a header, which identifies the addresses of the source station and the destination station. Packets use logical rather than physical addresses. Unlike bridges, which require the source and destination address to be on the same network, router addressing allows the destination address to be on a different network than the source address.

Routers are much more complex than bridges and are accordingly more expensive and require more configuration. A bridge makes a simple decision. If the destination address is on the same segment as the source address, it discards the frame. If the two addresses are on different segments, it repeats the frame. A router, on the other hand, must make a complex decision about how to deliver a particular packet to a distant network. A router may have to choose the best available route from a variety of alternatives.
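As a rough illustration of that decision, the sketch below picks the lowest-cost route to a destination network from a table of alternatives. The network numbers, next-hop names, and hop counts are invented for the example; a real router builds and updates such a table by using a routing protocol rather than a hard-coded dictionary.

    # Hypothetical routing table: destination network -> list of
    # (next hop, hop count) alternatives learned from routing protocols.
    routing_table = {
        "192.168.5.0": [("Router-B", 2), ("Router-C", 4)],
        "192.168.9.0": [("Router-C", 1)],
    }

    def choose_route(destination_network):
        """Pick the route with the fewest hops, or None if none is known."""
        candidates = routing_table.get(destination_network)
        if not candidates:
            return None
        return min(candidates, key=lambda route: route[1])

    print(choose_route("192.168.5.0"))   # ('Router-B', 2)
    print(choose_route("10.0.0.0"))      # None -- no known route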

Routers work with the logical addressing information contained in network-layer packets, and not all upper-layer protocols provide this information. Windows NT Server includes native support for TCP/IP, IPX/SPX, and NetBEUI. TCP/IP and IPX/SPX packets include the network-layer logical addresses needed by routers and are referred to as routed or routable protocols. NetBEUI packets don't include this network-layer information, and accordingly are referred to as non-routed or non-routable. This means that if you're using TCP/IP or IPX/SPX transport, you have the choice of designing your network around bridges, routers, or both. If instead you're using NetBEUI, bridging is your only alternative, because routers can't handle NetBEUI packets.

Routing vs. Bridging

A great deal of confusion exists about the differences between bridges and routers and about when the use of each is appropriate. This confusion is increased by the availability of brouters, which simply combine the functions of bridges and routers.

The differences between routers and bridges are the differences between a single network and an internetwork. A group of connected devices that shares a common network-layer address (a network number) is defined as a single network. Bridges are used to join segments within a single network. Routers are used to join separate networks.

Bridges function at the data-link layer and work with hardware addresses, which provide no information about the geographical location of the device. Routers function at the network layer and work with logical addresses, which can be mapped to geographical locations.

Gateways

A gateway is used to translate between incompatible protocols, and can function at any one layer of the OSI Reference Model, or at several layers simultaneously (see fig. 5.15). Gateways are most commonly used at the upper three layers of the OSI Model. For example, if you have some users using the Simple Mail Transfer Protocol (SMTP) for e-mail and others using the Message Handling System (MHS) protocol, a gateway can be used to translate between these two protocols so that all users can exchange e-mail.


5.15

Connecting two networks using dissimilar protocols with a gateway.

Because of the work they must do to translate between fundamentally incompatible protocols, gateways are usually very processor-intensive and therefore run relatively slowly. In particular, using a gateway to translate transport protocols, such as TCP/IP to and from IPX/SPX, usually creates a bottleneck. Use a gateway only if it's the only solution available, and then only on an interim basis while you migrate to common shared protocols. Gateways typically are difficult to install and maintain, and the translations they provide are often imperfect at best, particularly at the upper OSI layers.

Enterprise Hubs, Collapsed Backbones, Ethernet Switches, and Virtual LANs

Simple networks use hubs, bridges, and routers as building blocks. As networks become larger and more complex, performance and manageability requirements often make it necessary to make fundamental changes to the network architecture. The following sections describe how you can use enterprise hubs, collapsed backbones, Ethernet switches, and virtual LANs to accommodate the demands of large and complex networks.

Enterprise Hubs.

Enterprise hubs are in fact more than hubs. These devices are based on a wall-mounted or rack-mounted chassis that provides two or more passive backplanes into which feature cards are inserted to provide the exact functionality needed (see fig. 5.16). Cards are available to provide hubs for Ethernet, Token Ring, FDDI, and ATM networks. Other cards can be installed to provide routing, bridging, gateway, management, and other functions.


5.16

A typical enterprise hub with plug-in cards.

An enterprise hub is something you buy only if you have to. The unpopulated chassis alone can cost more than $10,000. The cards used to populate an enterprise hub are likewise expensive, both in absolute terms and on a per-port basis. Why, then, would anyone buy an enterprise hub instead of a pile of stackables? For three reasons: to establish a collapsed backbone, to implement Ethernet switching, and to build a virtual LAN.

Collapsed Backbones.

Collapsing the backbone simply means to move all active network components into a single physical cabinet and then to link them by using a very high-speed bus. Rather than locate hubs and other active network components in equipment rooms near the clients, these components are reduced to cards contained in a single chassis sited centrally. This scheme has certain perceived advantages both in terms of management and in terms of implementing technologies such as switching and virtual LANs.

Ethernet Switches.

To understand Ethernet switches, you need to understand how hub design and functionality have progressed from simple stand-alone and stackable hubs to the modern full-featured enterprise hub.

The single characteristic that distinguishes a hub from a switch is that, on the hub, all traffic generated by any port is repeated and heard by all other ports on the hub, while a switch instead establishes virtual circuits that connect the two ports directly (see fig. 5.17). Traffic on this virtual circuit can't be heard by ports not a part of the virtual circuit. In essence, the hub resembles a CB radio conversation, whereas the switch resembles a telephone conversation. To look at it another way, each device attached to a switch port is to all intents and purposes connected to its own dedicated bridge or router port.


5.17

Traffic between six PCs with a switched Ethernet hub.
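The difference between the two is easy to see in a small sketch. The port numbers and the address-to-port table below are invented for illustration; a real switch learns which addresses appear on which ports by itself.

    ports = {1: "PC-1", 2: "PC-2", 3: "PC-3", 4: "Server"}
    port_of = {name: number for number, name in ports.items()}

    def hub_delivery(source_port):
        """A hub repeats an inbound frame to every port except the source."""
        return [p for p in ports if p != source_port]

    def switch_delivery(source_port, destination):
        """A switch delivers the frame only to the destination's port."""
        return [port_of[destination]]

    print(hub_delivery(1))               # [2, 3, 4] -- every station hears it
    print(switch_delivery(1, "Server"))  # [4] -- a private virtual circuit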

Switches use one or both of two methods to process traffic. Store-and-forward processing receives and buffers the entire inbound frame before processing it, whereas cut-through begins processing the inbound frame as soon as enough of it has arrived to provide the destination address. A cut-through switch therefore never has possession of the entire frame at any one time. The advantage of cut-through is raw speed and increased throughput. The advantage of store-and-forward is that because the entire frame is available at once, the switch can perform error checking, enhanced filtering, and other processing on the frame. Both methods are used successfully and, in fact, many switches incorporate elements of both types of processing.

The first and most common type of switch is the segment switch, which is conceptually similar to a bridge or a router. Functioning as a switched learning bridge, the segment switch provides basic filtering and forwarding between multiple segments.

The next step up in switches is the private LAN switch, in which each port can connect to any other port via a dedicated virtual segment built up and torn down as needed by the switch itself. As long as the software is properly written and the backplane has sufficient cumulative bandwidth, every pair of ports can, in theory, communicate with each other at full network bandwidth without contention or collision. In practice, of course, some ports (such as servers) are more popular than others, so the theoretical cumulative throughput of any switch can never be reached. A private LAN switch simply extends the concept of bridging multiple segments to reduce traffic to its ultimate conclusion of providing each device with its own segment.

You also can add routing functionality to a switch. The switches discussed previously filter and forward frames and otherwise function as bridges at the data-link layer of the OSI Reference Model. Routing switches extend this functionality to working with the network layer of the OSI Model. The level of routing functionality varies by manufacturer and model, from those on the low end (which provide basic routing) to those on the high end (which essentially provide each device with a dedicated router port). Routing switches are expensive components, and as the level of routing functionality increases, so does the price.

Virtual LANs.

Switching is itself the logical conclusion of the use of segmenting to reduce traffic. However, reducing traffic to the extent that a particular frame or packet is received only by the designated port raises a fundamental problem.

Until now, the discussion has focused only on user traffic, those frames and packets that contain user data. There is, however, another type of traffic on a LAN. Overhead or administrative traffic comprises frames and packets used by the network to maintain itself. Novell servers generate Service Advertising Protocol (SAP) traffic to inform all devices on the LAN of the services available from that server. Routers use Routing Information Protocol (RIP) and other protocols to communicate routing information to each other. All of this administrative traffic is essential. Without it, the LAN doesn't run. For switching to reach its ultimate utility, you must somehow ensure that this overhead traffic is heard by all devices that need to hear it.

The solution to this problem is the virtual LAN (VLAN). A VLAN allows any arbitrary group of ports to be clustered under programmatic control. Just as all the users attached to a traditional Ethernet segment hear all the traffic present on that segment, all the users on a VLAN hear all the traffic on the VLAN. The difference is that VLAN users can be located on different physical networks and scattered broadly geographically.

With a traditional hub, the physical port into which a client is connected determines which network that client belongs to. With a switch, the physical port no longer determines which network the client belongs to, but the client must still belong to a network physically located on that switch. With a VLAN, any client plugged into any port located on any switch can be logically configured to be a member of any network. This feature of VLANs breaks the dependence of network membership on physical location, allowing users to be relocated anywhere within the internetwork and still remain members of the same network.
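A hypothetical membership table makes the idea concrete. The switch names, port numbers, and VLAN names below are invented for illustration; real products configure this mapping through their management software.

    # (switch, port) -> VLAN membership, assigned under programmatic control
    vlan_of = {
        ("Switch-1", 3):  "Accounting",
        ("Switch-1", 4):  "Engineering",
        ("Switch-7", 12): "Accounting",    # different switch, same logical LAN
    }

    def same_network(port_a, port_b):
        """Two ports share broadcast traffic only if they share a VLAN."""
        return vlan_of[port_a] == vlan_of[port_b]

    # A user moved from Switch-1 port 3 to Switch-7 port 12 remains a
    # member of the Accounting network:
    print(same_network(("Switch-1", 3), ("Switch-7", 12)))   # True
    print(same_network(("Switch-1", 3), ("Switch-1", 4)))    # False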

Some of the building blocks of VLAN technology have been around for several years, but the technology as a whole hasn't yet been fully implemented. The high cost of fast data links has also contributed to the slow adoption of VLANs. As the technology continues to mature and the cost of high-speed data links continues to drop, VLANs will become an increasingly common part of the network landscape.

Summary Recommendations for Active Network Components

The following list summarizes the recommendations for active network components discussed in the preceding sections, with additional suggestions for power conditioning and spares:

Specifying Server Hardware

The most important computer on your network is the server. The server runs Windows NT Server software to provide shared access to programs, data files, and printers. On many Windows NT Server networks, programs actually run centrally on the server itself. For small networks, the server represents the majority of the cost of the network. On larger networks, the cost of the server becomes a correspondingly smaller part of the total cost, but the server is no less important for this fact.

Many Windows NT server installations use a standard PC with upgraded memory and disk storage as their server. Others are designed around a true purpose-built server. Although using an upgraded PC as a server can be viable for a very small network, purpose-built servers have many advantages in architecture, performance, and redundancy features. Using a purpose-built server benefits any network, and should be considered mandatory if your network will have more than a handful of users.

Installing a standard PC system board in a tower case with a lot of RAM and some large high-performance SCSI disk drives doesn't make a server. Many mail-order vendors have done just this and represent the result as a true server. The system may even be certified by Microsoft to run Windows NT Server 4.0. Think twice before choosing such a system as your server. Look instead at servers from the large, well-established computer companies offering on-site maintenance performed by locally resident technicians. You'll find that such systems may cost a bit more than servers from mail-order PC companies, but purpose-built servers offer a host of advantages in performance, reliability, and maintainability.

General Types of Servers

For convenience and marketing reasons as much as anything else, true servers are usually classified by the size of network they're intended to support. There's a great deal of overlap in these categories, and a fully configured entry-level server may, in fact, exceed a minimally configured mid-range server in both price and capability.

Workgroup Servers.

Workgroup servers are the entry-level server. These servers are designed to support up to 25 or so users running primarily file and print sharing, but can function as light-duty application servers as well. Workgroup servers cost little more than an equivalently configured PC, running from about $4,000 on the low end to perhaps $8,000 or $9,000 when fully expanded. They're usually based on a single Pentium processor and usually can't be upgraded to multiple processors. Disk arrays and Error Checking and Correcting (ECC) memory are optional items, if they're offered at all. Workgroup servers are usually built in a mini- or mid-tower case, although some use a full-tower case. Examples of this class of server are the Compaq ProSignia 300 and ProLiant 1000, the DEC Prioris LX, the Hewlett-Packard NetServer LC/LF lines, and the IBM PC Server 320.

Departmental Servers.

Departmental servers are the mid-range servers, supporting from 20 to 150 users. These servers can do double duty as both file and print servers, and as moderate-duty application servers. The price of departmental servers ranges from about $15,000 on the low end to perhaps $40,000 on the high end. At this level, disk arrays, ECC memory, and redundant power supplies are often standard equipment, or are at least available as options. Departmental servers are based on 133-MHz or faster Pentium processors.

Although they may be offered at the entry level with a single processor, these servers are designed to support symmetrical multiprocessing (SMP) with at least two processors. Cases are always at least full-tower, and may be double-wide cubes or rack-mountable. Examples of this class of server are the Compaq ProLiant 2000, the DEC Prioris HX and Prioris XL, the Hewlett-Packard NetServer LH/LM lines, and the IBM PC Server 520.

Enterprise Servers.

Enterprise servers are the high-end products. These servers can provide file and print sharing services for 100 to 500 users, and at the same time support heavy-duty application server functions in a transaction processing environment. Enterprise servers range in price from perhaps $30,000 to $100,000 and more. At this level, fault tolerance and redundancy elements, such as disk arrays, ECC memory, and duplicate power supplies, are standard.

Enterprise servers available today are based on 133-MHz and faster Pentium processors, and offer SMP support for at least four and often more processors. Cases are monsters, providing as many as 28 drive bays. Examples of this class of server are the Compaq ProLiant 4000, the DEC Prioris HX/5100, the HP LS, and the IBM PC Server 720.

Server Type Recommendations.

General server types don't really end with the enterprise server. Companies such as TriCord and NetFrame produce Intel-based servers in the $100,000 to $1 million price range, equipped with scores of processors and gigabytes of RAM. DEC, Hewlett-Packard, IBM, and other traditional minicomputer and mainframe manufacturers produce equally powerful servers based on non-Intel processors. These extremely high-end servers have very limited application. Most companies can rule them out based on cost alone. Even if you can afford one, the question becomes whether it makes sense to buy one.

File and print sharing services can be handled for any reasonable number of users, such as fewer than 250, by an enterprise or even a departmental server. Even if your user count exceeds the capabilities of one of these servers, it usually makes more sense to organize your users into workgroups and provide each workgroup with its own smaller server than it does to put all your users on a single huge server. About the only place a very large server makes sense is as an application server in a heavy transaction processing environment accessing huge databases. Such large-scale client/server software development is problematic, so it makes sense at this level to consider traditional multiuser minicomputer and mainframe solutions, thus rendering the very large server a solution in search of a problem.

Conformance to the Microsoft Windows NT Hardware Compatibility List

Make sure that any server you consider, with all its peripheral components, appears on the Microsoft Windows NT Server Hardware Compatibility List. A printed booklet with this information ships with the Windows NT Server software. However, the list is updated frequently to account for rapid changes within the industry, and you should download the most recent version when you're ready to place the order for your server. You can retrieve the most recent version of the Hardware Compatibility List from http://www.microsoft.com/BackOffice/ntserver/hcl or via anonymous FTP from ftp://ftp.microsoft.com/bussys/winnt/winnt-docs/hcl/. It can also be found in Library 1 of the WINNT forum or Library 17 of the MSWIN32 forum on CompuServe Information Service.

Warranties, Maintainability, and Local Service

Server downtime can cost your company hundreds or even thousands of dollars per hour. More important, it can cost the LAN manager his job. When the LAN is down, employees still draw salaries, queries go unanswered, orders go unfilled, and work backs up. The issues of reliability and service are therefore an important aspect-some would say the most important aspect-of choosing a server.

You can configure your server with every fault-tolerant and redundancy feature offered by the manufacturer. Despite every precaution you take, the day will still come when your server goes down. What happens then can make the difference between an unpleasant episode and a disaster. The time to consider this problem is when you buy your server.

Choosing the Server Components

As you read earlier, the most critical part of your network is the cabling and active network components. This isn't to say, however, that the decision of which server to buy is a trivial one. This section discusses the most important factors to consider when deciding on the right server for your installation.

Know what role your server will play and the tasks it will be required to perform. Older generation network operating systems, such as Novell NetWare, focus almost exclusively on providing shared file and print services, a use that Windows NT Server also supports admirably. However, unlike Novell NetWare, Windows NT Server is also a stable and reliable application server platform.

Providing only file and printer sharing puts a relatively light load on a server. Raw processor speed is much less important than disk subsystem performance and network I/O. Although any server operating system likes memory, a file server has relatively modest RAM requirements.

An application server needs more of everything than a file server:

There's no hard-and-fast dividing line between file servers and application servers. Any computer that can run Windows NT Server 4.0 can function as an application server. What distinguishes the two server types is what you run on them and what hardware you provide to support what you choose to run. There's a great deal of overlap between configurations appropriate for each. Following are common configurations for file/printer sharing and application servers:

The first system might be a perfectly adequate application server in a small network with light application server demands, whereas the second configuration might not even be an adequate file server in a large location with hundreds of clients using it to share files and printers.

The second factor to consider is how much fault tolerance you need and how much you're willing to pay to get it. If your server will be supporting mission-critical applications, you would do well to place fault tolerance, redundancy, and maintenance issues at the top of your list of priorities. Modern purpose-built servers are designed to minimize single points of failure, and have many fault-tolerance features built in, including RAID disk subsystems, ECC memory, and redundant power supplies. If your application is truly critical, weigh the costs versus the benefits of providing complete redundancy, including one or more hot standby servers, standby power generators, and so forth.

Because components with moving parts-disk subsystems, power supplies, and so on-are by far the most likely to fail, purpose-built servers typically address these components first by using redundant disk subsystems and overbuilt or duplicated power supplies. For most users, this provides an adequate level of fault tolerance at a reasonable price. When you get into the realm of standby servers, SCSI switches, and so forth, costs begin to mount rapidly.

The third factor to consider is whether to buy a new server or to convert an existing machine to run Windows NT Server 4.0. Particularly at smaller locations, the latter choice often seems attractive. Although it may seem that you can save money by just adding some memory and disk to an existing system, the reality is almost always that attempting to do so results in a server that costs nearly as much as a new purpose-built server and is substantially inferior to it.

Deciding on the Processor(s).

The fundamental nature of a server is determined by the type of processor it uses-and how many. Windows NT Server gives you a wide range of flexibility in making this choice. Windows NT Server runs on popular processors from Intel, DEC, IBM, and other manufacturers. The following sections examine the issues involved in selecting the processor that your server will use.

RISC-Based Servers.

Windows NT Server 4.0 was designed from the ground up to be a processor-independent operating system. In addition to running on the Intel x86 series of processors, Windows NT Server is available for several RISC processors from other manufacturers, including the DEC Alpha, the MIPS, and the PowerPC.

CISC stands for Complex Instruction Set Computer and RISC for Reduced Instruction Set Computer. Intel processors, including the Pentium and Pentium Pro, are considered to be CISC processors. Processors such as the DEC Alpha, the MIPS, and the PowerPC are considered to be RISC processors. In fact, this line is nowhere near as clearly defined as many believe it to be. Intel processors incorporate many RISC elements, and many RISC processors have instruction sets that can be difficult to differentiate from a CISC instruction set.

In theory, RISC processors are designed with relatively few simple processor instructions. These simple instructions are used as building blocks to accomplish more complex tasks. CISC processors have a larger selection of basic instructions, many of which are more complex than RISC instructions and accomplish more in a single instruction.

Comparing the current crop of RISC and CISC processors in a real-world environment shows that all of them are very fast. They have similar performance on integer operations, with RISC showing somewhat better performance on floating-point operations. Judged purely on performance benchmarks, then, RISC would seem to be the way to go. This turns out not to be the case.

Intel's market dominance ensures that software is written first, and possibly only, for Intel processors, greatly limiting both the breadth and depth of your choices on RISC-based competitors. Even if the database server or other software you intend to use is available today on a particular RISC platform, updates are likely to be slower in coming to this platform than to Intel. Also, a real risk exists that sales for a specific RISC platform may not meet expectations, and support for it may be dropped entirely, leaving your hardware and software orphaned.

Choosing Intel Processors.

The Pentium is the mainstream processor for Windows NT Server 4.0. It's available in versions running at 75 MHz, 90 MHz, 100 MHz, 120 MHz, 133 MHz, 150 MHz, and 166 MHz. The 90-MHz and 100-MHz versions are closely related, as are the 120-MHz and 133-MHz versions. When this book was written, most purpose-built servers used one or two 133-MHz or faster Pentium processors.

The Pentium Pro offers some significant architectural and performance advantages over the Pentium when running 32-bit software, such as Windows NT Server. It shows up to 50 percent greater performance compared with the Pentium of the same clock speed. The Pentium Pro was designed from the ground up to be used in multiple-processor systems, and allows the use of up to four processors without the additional complex and expensive multiprocessing circuitry required by the Pentium.

There's little doubt that the Pentium Pro will become the processor of choice for servers. When this book was written, single-processor versions of Pentium Pro systems were beginning to appear. Release of multiprocessor Pentium Pro servers has been delayed by problems Intel encountered with support for SMP systems based on the Pentium Pro. Multiple Pentium Pro servers are expected to become available in late 1996.

If your server is to be a single-processor one, it makes sense to consider the Pentium Pro for its superior performance with Windows NT Server. If you need the additional power of symmetrical multiprocessing, your only choice at the moment is the standard Pentium.

Sizing Processor Cache Memory.

Processor cache memory (usually just called cache) consists of a relatively small amount of very high-speed memory interposed between the processor and the main system memory. Cache buffers the very high-speed demands of the processor against the relatively modest speed of the main system's dynamic RAM.

All modern Intel processors have processor cache memory built in. This internal cache is called Level 1, or L1, cache. Pentium processors have 16K of L1 cache (8K for instructions and 8K for data); Pentium Pro processors add 256K or 512K of second-level cache packaged with the processor itself. L1 cache is essential; without it, the processor would spend most of its time waiting for the slower main memory to provide data. By itself, however, the Pentium's small L1 cache isn't enough to provide adequate performance.

The key determinant of the value of a cache is its hit rate, which is simply a percentage expressing how often the required data is actually present in the cache when it's needed. Hit rate can be increased either by increasing the size of the cache itself or by improving the efficiency of the algorithms used to determine which data is kept in cache and which is discarded.
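A little arithmetic shows why the hit rate matters so much. The access times below are illustrative assumptions, not figures for any particular system.

    CACHE_TIME = 10    # nanoseconds to satisfy a request from cache (assumed)
    MEMORY_TIME = 70   # nanoseconds to go out to main DRAM (assumed)

    def effective_access_time(hit_rate):
        """Average access time for a given cache hit rate (0.0 to 1.0)."""
        return hit_rate * CACHE_TIME + (1 - hit_rate) * MEMORY_TIME

    for rate in (0.80, 0.90, 0.95):
        print(rate, round(effective_access_time(rate)), "ns")
    # 0.80 -> 22 ns, 0.90 -> 16 ns, 0.95 -> 13 ns: raising the hit rate
    # from 80 to 95 percent cuts the average access time by roughly 40 percent.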

Increasing the cache hit rate to improve performance must be done by adding a second level of cache externally. This external cache is called Level 2, or L2, cache. Following are the three types of external L2 cache:

  • Asynchronous cache, the oldest and least expensive type, which doesn't coordinate its accesses with the system clock and is the slowest of the three.
  • Synchronous (burst) cache, which operates in step with the system clock and delivers data considerably faster than asynchronous cache.
  • Pipeline burst cache, a synchronous design that overlaps consecutive accesses; it performs nearly as well as plain synchronous cache and has become the most common choice because of its lower cost.

The question of how much L2 processor cache to get and of what type may be resolved by the server you choose. You may find that the amount and type of cache is fixed, or you may find that you have alternatives as to size and type. In general, prefer quality to quantity. A system with 128K of synchronous cache usually outperforms a system with four times as much asynchronous cache. If you're given the option, consider 128K of pipeline burst cache as a minimum for your single-processor Pentium server, but get 256K or 512K if you can.

Deciding on a Multiprocessor System.

As demands for processor power increase, it might seem apparent that one solution is to simply add more processors. Doing so is not the quick fix that it first appears to be, however. Broadly speaking, there are two methods to use multiple processors:

  • Asymmetric multiprocessing (ASMP), in which each processor is dedicated to a specific task, such as running the operating system or handling I/O.
  • Symmetric multiprocessing (SMP), in which any processor can run any task, with the operating system distributing the workload across all processors.

Windows NT Server 4.0 has native support for up to 32 processors in an SMP arrangement, although commonly available servers usually support only two or four processors, with a few servers offering support for eight. Adding processors doesn't scale linearly-that is to say, a system with two processors isn't twice as fast as the same system with only one. This is because overhead is involved in managing work assigned to multiple processors and in maintaining cache coherency. Experience has shown that a system with two processors is perhaps 70 or 80 percent faster than the same system with only one processor. Equipping a system with four Pentium processors can be expected to result in a server about three times faster than the same server with only one processor.
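The rough figures above can be reproduced with a back-of-the-envelope model in which each added processor contributes a bit less useful work than the one before it. The 0.8 factor below is an assumption chosen only to match those figures, not a measured property of any server.

    def relative_throughput(processors, factor=0.8):
        """Crude scaling model: each extra CPU adds 'factor' times as much
        useful work as the previous one did."""
        total, contribution = 0.0, 1.0
        for _ in range(processors):
            total += contribution
            contribution *= factor
        return total

    print(round(relative_throughput(2), 2))   # 1.8  -- about 80 percent faster
    print(round(relative_throughput(4), 2))   # 2.95 -- roughly three times faster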

See "Server Scalability," (Ch 1)

You should, therefore, give some weight to a server's capability to handle more than one processor. Dual-processor motherboards carry a modest price premium over the single-processor variety. Even if your current requirements are fully met by a single processor server, being able to add a second processor can mean the difference between having to upgrade the server and having to replace it.

Determining Memory Requirements.

Servers run on memory. Every time Windows NT Server has to swap data from RAM to disk, you take a big performance hit. A server using a slow processor with lots of memory outperforms a server with a fast processor and less memory every time. Memory makes up a relatively small percentage of the overall cost of a typical server. The drastic price reduction for dynamic RAM that occurred in early 1996 has made additional memory very affordable.

Memory comes in a variety of physical forms, densities, and types. The Single Inline Memory Module (SIMM) has become the standard packaging in the last few years, taking over from the Dual Inline Package (DIP) chips used formerly. The Single Inline Pin Package (SIPP) was an early attempt to package memory more densely, but has since disappeared because of the physical and other advantages of SIMM packaging. Almost any server you buy uses SIMM memory, with the exception of a few that use the newly introduced Dual Inline Memory Modules (DIMMs). DIMMs are uncommon and often are of a proprietary design. SIMMs, on the other hand, are the industry standard and therefore are preferred.

SIMMs were originally manufactured in 30-pin packages. Nearly all 30-pin SIMMs are 1 byte wide and must therefore be added four at a time to a 32-bit wide system bus. Newer SIMMs are manufactured in 72-pin packages and are 4 bytes wide, allowing them to be added one at a time to most systems. Some systems, particularly servers, have a memory architecture that requires adding 72-pin SIMMs two at a time. Even this, however, gives you substantially more flexibility than the older 30-pin SIMMs. No current server worthy of that name uses 30-pin SIMMs.

Memory can be designed with one or more extra bits that are used to increase its reliability. Parity memory has one such extra bit. A 30-pin parity SIMM is actually 9 bits wide rather than 8, with the extra bit assigned as a parity check bit. Similarly, a 72-pin parity SIMM is actually 36 bits wide rather than 32, with the extra 4 bits used as a parity check. Parity can detect only single-bit errors, and it can't correct them. When it detects such an error, the result is the infamous Parity Check Error--System Halted message.
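The sketch below shows even parity at work on a single byte; the stored value and the flipped bits are arbitrary examples. A single flipped bit changes the parity and is caught, but flipping two bits leaves the parity unchanged and slips through, which is why parity alone can detect but never correct errors.

    def parity_bit(byte):
        """Return the even-parity bit for an 8-bit value."""
        return bin(byte).count("1") % 2

    stored_data = 0b10110010
    stored_parity = parity_bit(stored_data)

    one_bit_error = stored_data ^ 0b00000100    # one bit flipped in storage
    print(parity_bit(one_bit_error) != stored_parity)   # True  -- detected

    two_bit_error = stored_data ^ 0b00001100    # two bits flipped in storage
    print(parity_bit(two_bit_error) != stored_parity)   # False -- missed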

For clients, the future is no doubt non-parity memory. In fact, the Intel Triton chip set used by most Pentium system boards doesn't even support parity memory; the Triton chip set ignores the ninth bit if it's present. That's a clue, by the way. If the server you're considering for purchase uses the Triton chip set, you can be assured that the system board wasn't intended for use in a server.

Memory reliability is so important in a server that even parity memory isn't a sufficient safeguard. Borrowing from mainframe technology, manufacturers of Intel-based servers have begun using Error Checking and Correcting (ECC) memory. ECC memory dedicates several extra check bits to each group of data bits-typically 7 check bits for each 32 data bits, or 8 check bits for each 64. ECC memory can detect and correct single-bit errors transparently. Single-bit errors are the vast majority, so ECC memory avoids nearly all the lockups that would otherwise occur with parity memory. Just as important, ECC memory can at least detect multiple-bit errors and, depending on the type of ECC memory, may also be able to correct some of these errors. Although ECC memory may lock the system on some multiple-bit errors, this is far preferable to allowing data to be corrupted, which can occur with simple parity memory.

Although ECC memory is 11 bits wide or wider, its implementations normally use standard SIMMs and simply allocate an extra SIMM to providing the ECC function. This allows the system to use standard SIMM memory rather than proprietary and costly memory designed especially as ECC memory. ECC memory is usually standard on enterprise-level servers, standard or optional on departmental-level servers, and may be optional or not available on workgroup-level servers. You should install ECC memory if possible.

Although Windows NT Server loads in 16M of system memory, you won't have much of a server with only that amount. As with any network operating system, the more memory, the better the performance. Following are recommendations for the initial amount of RAM you need to install for different server environments:

  • If you're configuring your Windows NT server to provide only file and print sharing, consider 24M the absolute minimum. 32M is a better starting point, and if your server supports many users, you should probably consider equipping it with 64M to start.
  • If your Windows NT server provides application server functions, consider 64M as the bare minimum and realize that some products may not run in this little memory. 128M is usually a safer bet, and many choose to start at 256M.
  • As a general rule of thumb, calculate what you consider to be a proper amount of memory given your particular configuration and the software you plan to run, and then double it as your starting point. Monitor your server's performance, and add more memory as needed.
  • Consider the availability of SIMM sockets when you're configuring memory. Buy the densest SIMMs available for your server to conserve SIMM sockets. Choose two 32M SIMMs in preference to four 16M SIMMs. Bear in mind that the day may come when you have no more room to add memory without swapping out what you already have installed.

    Choosing a Bus Type.

    All newly designed servers use the PCI bus; make sure that the server you buy uses PCI. All other bus designs for servers are obsolete, including EISA and VLB buses. PCI still places a limit on the number of slots available on a single PCI bus, so larger servers use either dual PCI or bridged PCI buses. These systems simply use supporting circuitry to double the bus or to extend a single PCI bus to provide additional PCI slots.

    Disk Subsystems.

    The disk subsystem comprises the disk controllers and fixed-disk drives of your server. Choosing a good disk subsystem is a key element in making sure that your server performs up to expectations. The following sections discuss fixed-disk drives, with emphasis on SCSI-2 devices and SCSI-2 host adapter cards, and introduce you to clustering technology. Tape backup systems, which almost always are SCSI-2 devices, are the subject of Chapter 8, "Installing File Backup Systems."

    Enhanced Integrated Drive Electronics (EIDE) Drives.

    The Enhanced Integrated Drive Electronics (EIDE) specification, developed by Western Digital, has gained wide industry support, although Seagate and some other manufacturers have proposed a competing standard called Fast ATA. These two standards are very similar and are merging as this is written.

    EIDE supports as many as four drives. It allows a maximum drive size of 8.4G. It provides data-transfer rates of between 9 MBps and 13 MBps, similar to those of SCSI-2. It makes provision under the ATA Packet Interface (ATAPI) Specification for connecting devices other than hard drives. ATAPI CD-ROM drives are now common, and ATAPI tape drives are becoming more so.

    Small Computer System Interface (SCSI).

    Small Computer System Interface (SCSI, pronounced scuzzy) is a general-purpose hardware interface that allows the connection of a variety of peripherals-hard-disk drives, tape drives, CD-ROM drives, scanners, printers, and so forth-to a host adapter that occupies only a single expansion slot. You can connect up to seven such devices to a single host adapter and install more than one host adapter in a system, allowing for the connection of a large number of peripheral devices to a system.

    SCSI is the dominant drive technology in servers for the following two reasons:

  • SCSI supports many devices, and many types of devices, on a single host adapter. Expansion slots are precious resources in a server. SCSI's capability to conserve these slots by daisy-chaining many devices from a single host adapter is in itself a strong argument for its use in servers.
  • SCSI provides request queuing and elevator seeking. Other drive technologies process disk requests in the order in which they're received. SCSI instead queues requests and services them in the order in which the data can be most efficiently accessed from the disk. This isn't a particular advantage in a single-user, single-tasking environment because requests are being generated one at a time by a single user.
    In Windows NT Server's multiuser, multitasking environment, however, queuing and elevator seeking offer major performance advantages. Rather than service disk requests in the order in which they're received, SCSI determines the location of each requested item on the disk and then retrieves it as the read head passes that location. This method results in much greater overall disk performance and a shorter average wait for data to be retrieved and delivered to the requester.
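    To see why this pays off, the rough sketch below compares servicing a queue of requests in arrival order with servicing it in a single sweep of the read head. The cylinder numbers are invented, and the simple sweep shown here only illustrates the principle rather than any particular drive's firmware.

        pending = [83, 14, 60, 2, 75]   # requested cylinders, in arrival order
        start = 50                      # current head position

        def travel(order, position):
            """Total head movement to service requests in the given order."""
            moved = 0
            for cylinder in order:
                moved += abs(cylinder - position)
                position = cylinder
            return moved

        # First-come, first-served: service requests in arrival order.
        print(travel(pending, start))                       # 279 cylinders

        # Elevator seek: sweep upward past waiting requests, then downward.
        upward = sorted(c for c in pending if c >= start)
        downward = sorted((c for c in pending if c < start), reverse=True)
        print(travel(upward + downward, start))             # 114 cylinders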
    SCSI is a bus technology. In fact, it's convenient to think of SCSI as a small LAN contained within your server. Up to eight devices can be attached to a standard SCSI bus. The SCSI host adapter itself counts as one device, so as many as seven additional devices can be attached to a single host adapter. Any two of these devices can communicate at any one time, either host to peripheral or peripheral to peripheral.

    SCSI can transfer data using one of two methods. Asynchronous SCSI, also referred to as Slow SCSI, uses a handshake at each data transfer. Synchronous, or Fast, SCSI reduces this handshaking to effect a doubling in throughput.

    SCSI uses a variety of electrical connections, differing in the number of lines used to carry the signal. Single-ended SCSI uses unbalanced transmission, where the voltage on one wire determines the line's state. Differential SCSI uses balanced transmission, where the difference in voltage on a pair of wires determines the line's state. Single-ended SCSI allows a maximum cable length of three meters for Fast SCSI and six meters for Slow SCSI. Differential SCSI allows cable lengths up to 25 meters. Single-ended SCSI is intended primarily for use within a single cabinet, whereas Differential SCSI allows the use of expansion cabinets for disk farms.

    You need to be aware of the following terminology used to refer to SCSI subsystems:

  • SCSI-1 uses an 8-bit bus connection and 50-pin D-Ribbon (Centronics) or D-Sub (DB50) external device connectors. (25-pin DB25 connectors also are used.) SCSI-1 offers a maximum data transfer rate of 2.5 MBps asynchronous and 5 MBps synchronous. SCSI-1 is obsolete.
  • SCSI-2 uses an 8-bit, 16-bit, or 32-bit bus connection. The data-transfer rates range from 2.5 MBps for 8-bit asynchronous connections to 40 MBps for 32-bit synchronous connections.
  • Fast SCSI is a SCSI-2 option that uses synchronous transfers to double the data transfer rate compared with systems using Asynchronous or Slow SCSI. Fast SCSI requires a 50-pin Micro-D external device connector, which is considerably smaller than the D-Sub connector.
  • Wide SCSI is a SCSI-2 option that uses a 16-bit or 32-bit wide connection to double or quadruple the data-transfer rate compared with the 8-bit wide connection used by SCSI-1. 16-bit Wide SCSI uses a 68-pin Micro-D connector for external devices. Fast and Wide SCSI-2 is quickly becoming the standard for server fixed-disk drives.
  • Wide SCSI lets you connect 15 SCSI devices to a single host adapter. Dual host adapters, such as the Adaptec AHA-3940UW Ultra Wide adapter, consist of two host adapters on a single PCI card, letting you connect up to 30 devices via two internal and one external SCSI cable.

  • Ultra SCSI is a subset of the SCSI-3 specification, which was pending approval when this book was written. Ultra is Fast SCSI with a doubled clock rate, which provides twice the potential throughput. Ultra and Ultra Wide SCSI host adapters are readily available, but delivery of large quantities of Ultra SCSI and Ultra Wide SCSI drives isn't likely to occur until late 1996.

    EIDE vs. SCSI Drives for Servers.

    Although EIDE has addressed many of the theoretical drawbacks of IDE versus SCSI for use in servers, actual EIDE implementations continue to lag SCSI. Maximum supported drive size is becoming similar, although current EIDE drives top out at 4.3G versus 23G for Ultra Wide SCSI. Similarly, drive rotation rates, which are an important factor in determining real-world throughput, top out at 5,400 rpm in current EIDE drives, versus 7,200 rpm for SCSI drives. Theoretical throughput is similar, but Fast and Wide SCSI-2 drives, such as Seagate's ST15150W 4.3G Barracuda drive, have greater throughput than any currently available EIDE drive.

    The key advantage of SCSI for servers remains its support for queued requests and elevator seeking, along with its support for daisy-chaining many devices. Its sole drawback relative to EIDE is its somewhat higher cost. Although EIDE drives can be used successfully in a small server with very light demands on its disk subsystem, you should use SCSI if your server will do anything more than provide file and printer sharing for perhaps four or five users.

    SCSI Bus Termination.

    Proper bus termination is essential for successful data transfer from SCSI devices to the SCSI host adapter card. You must terminate both the internal (ribbon cable) and external SCSI buses. If you're using only internal or only external devices, the adapter card's built-in termination circuitry must be enabled. Internal termination must be disabled if you connect a combination of internal and external devices. In most cases, you determine whether on-card termination is enabled by setting the adapter's BIOS parameters during the hardware boot process. Figure 5.18 illustrates SCSI termination for internal devices; figure 5.19 shows termination of a combination of typical internal and external SCSI devices.


    5.18

    SCSI termination for internal devices.


    5.19

    SCSI termination for a combination of internal and external devices.

    Internal devices include termination circuitry, which usually is enabled or disabled with a jumper. Drives that conform to the SCAM (SCSI Configured AutoMagically) specification don't require setting a jumper. You terminate external devices with a connector that contains the termination circuitry. Active (rather than passive) termination is necessary for Fast, Wide, and Ultra SCSI devices.

    Disk Controllers.

    If you buy a purpose-built server, the choice of host adapter is straightforward. Low-end servers normally are equipped with a standard SCSI-2 host adapter and offer few options, except perhaps the amount of cache RAM on the adapter. High-end servers normally ship with a host adapter, which provides various hardware RAID options. Mid-range servers usually offer a choice between a standard host adapter and an upgraded adapter with hardware RAID support. In each case, your options are limited to those offered by the server manufacturer. This is really no limitation at all, because server manufacturers carefully balance their systems to optimize throughput with the chosen adapter.

    When you choose a disk subsystem, you must consider how much disk redundancy you want to provide. At the simplest level, a Windows NT server may have a single host adapter connected to one or more disk drives. Disk redundancy increases the safety and performance of your disk subsystem at the cost of requiring additional disk drives and more complex host adapters. Windows NT's built-in support for software RAID (Redundant Array of Inexpensive Disks) and hardware RAID controllers are the subject of Chapter 7, "Setting Up Redundant Arrays of Inexpensive Disks (RAID)."

    If you're assembling your own server, following are recommendations for purchasing PCI SCSI-2 host adapters to be used as conventional (independent) drives or with Windows NT 4.0's built-in software RAID system:

  • If you must support both conventional (narrow) and Wide SCSI-2 devices with a single adapter, buy an adapter with both 50-pin and 68-pin connectors. Don't try to mix narrow and Wide SCSI devices on a single cable. The Adaptec AHA-2940UW adapter, shown in figure 5.20, provides both internal and external Wide SCSI-2 connectors for high-performance disk drives, plus an internal narrow SCSI-2 connector for CD-ROM and conventional SCSI-2 drives.
  • If you don't need to support existing narrow SCSI-2 devices, consider using a dual Fast and Wide SCSI-2 host adapter, such as the Adaptec AHA3940UW shown in figure 5.21. Your Pentium or Pentium Pro motherboard must support PCI bridging to use the AHA3940UW. You gain a significant speed advantage by connecting drives in pairs to the two internal SCSI cables.

    5.20

    A single Ultra SCSI adapter with a narrow and Wide internal connector and a single Wide external connector (courtesy of Adaptec Inc.).


    5.21

    A dual Ultra Wide SCSI adapter with two Wide internal connectors and a single Wide external connector (courtesy of Adaptec Inc.).

    Clustering.

    A cluster is defined as an interconnected group of two or more servers that appear transparently to a user as a single server. Clustering uses a combination of specialized high-speed connectivity hardware with software running on each server to link the servers into a cluster and provide this transparent access.

    See "Forging Alliances for Scalable Windows NT Server Hardware," (Ch. 1)

    Although clustering is common in the minicomputer environment-particularly with the Digital Equipment Corporation line of VAX minicomputers-clustering for PC servers is just beginning to appear. Microsoft is committed to a clustering strategy for Windows NT Server, which uses industry-standard hardware components to provide inexpensive clustering functionality.

    Uninterruptible Power Supplies and Power Protection.

    All this expensive hardware does you no good whatsoever unless you keep it supplied with clean, reliable AC power provided by an uninterruptible power supply (UPS). A UPS performs the following two functions:

  • It supplies auxiliary power to keep your network components running when the mains power fails. UPSs vary in how much power they supply, the quality of that power, and for how long they can supply it.
  • It conditions the mains power to protect your equipment against spikes, surges, drops, brownouts, and electrical noise, all of which can wreak havoc with your servers, clients, and active network components.

    Understanding Power Protection Issues.

    Power companies do a remarkably good job of supplying consistent power in most areas of the United States. Still, when the power leaves the power plant-more particularly, when it leaves the distribution transformer near you-the electric utility ceases to have complete control. Voltage is subject to variations due to loads placed on a circuit and to the higher voltage circuit that supplies it. Following are some of the problems that commonly affect your power:

  • A sudden and complete loss of power for an extended period is called a blackout or outage. Blackouts can be caused by something as common as a cable being cut or a drunk hitting a power pole, or by something as unusual as load-shedding, where at times of peak demand the power company intentionally cuts service to whole areas to protect the rest of the grid. Blackouts are what cause most firms to consider purchasing UPSs.
  • A high-voltage, high-current device, such as an elevator motor starting, can cause the voltage to drop significantly on all 110-volt circuits supplied by the same high-voltage circuit. This drop, called a brownout, is noticeable as a dimming of incandescent lights. Brownouts can last up to several seconds if caused by an overload. Short-term brownouts are referred to as sags. At times of peak demand, the electric utility may intentionally reduce the delivered voltage, thereby causing a long-term brownout. Brownouts are potentially more destructive to your equipment than any other common electrical problem.
  • A short-duration absence of voltage is called a drop. Drops are common but aren't often noticed. No dimming is seen in incandescent lamps because the filament doesn't have a chance to cool down. PC power supplies have large capacitors, giving them enough electrical momentum to cover the drop.
  • A sudden drop in load, such as an elevator motor coming off line, can cause a surge. A surge is a temporary sustained overvoltage condition in which the delivered voltage is substantially higher than the nominal 108 to 125 volts. Surges can last from a fraction of a second to several seconds, and are much less dangerous to equipment than are extended brownouts. Most equipment can deal successfully with any surges likely to occur. Still, surges shouldn't be discounted as a potential problem.
  • A sudden peak in voltage that lasts for only a fraction of a second is called a spike or a transient. Spikes are produced by a variety of sources, including motors, lightning-induced voltages, and transformer failures. Although spikes can have extremely high voltage, they're usually also of very short duration, so the total energy delivered is normally quite small. The switching power supplies common in computer equipment are actually quite good at suppressing transient voltages, but they make for expensive surge protectors.
  • One factor often overlooked in considering electrical protection is that damage to components can be incremental and cumulative. It's easy to spot the damage after an overvoltage event, such as a lightning strike. Chips are charred and may literally be smoking. It isn't as easy to spot the damage after a less severe event. The component may pass all diagnostics and run flawlessly, but be damaged nonetheless.

    Most chips are designed to work at 5 volts. A spike at 100 or even 1,000 times that voltage may leave the chip apparently undamaged, but may in fact have done near-fatal damage to delicate circuitry and traces. Because silicon doesn't heal itself, this damage is cumulative. The next minor overvoltage event may be the straw that breaks the camel's back, leaving you with a dead network and no idea what caused it. Choosing and using good UPSs will ensure that this doesn't happen to you.

    Understanding UPS Types.

    The original UPS designs comprised a battery, charging circuitry, and an inverter. The load-your equipment-is never connected directly to mains power but is instead powered at all times by the inverter, driven by battery power. The battery is charged constantly as long as there's mains power (see fig. 5.22). This type of UPS is called a true UPS or an online UPS.


    5.22

    Power flow in an online UPS.

    An online UPS has several advantages. First, because the load is driven directly by battery power at all times rather than switched to battery when mains power fails, there's no switch-over time. Second, because there's no switch, the switch can't fail. Third, full-time battery operation allows the equipment to be completely isolated from the AC mains power, thereby guaranteeing clean power.

    Balanced against these advantages are a few drawbacks. First, an online UPS is expensive, often costing 50 to 100 percent more than alternatives. Second, because an online UPS constantly converts mains AC voltage to DC voltage for battery charging and then back into AC to power the equipment, efficiencies are often 70 percent or lower, compared to near 100 percent for other methods. In large installations, this may noticeably increase your power bill. Third, battery maintenance becomes a more important issue with a true UPS, because the battery is in use constantly.

    The high cost of online UPS technology led to the development of a less expensive alternative that was originally called a standby power supply (SPS). Like an online UPS, an SPS includes a battery, charging circuitry, and an inverter to convert battery power to 120-volt AC. Unlike the online UPS, the SPS also includes a switch. In ordinary conditions, this switch routes mains power directly to the equipment being powered (see fig. 5.23, top). When the mains power fails, the switch quickly transfers the load to the battery-powered inverter, thus maintaining power to the equipment (see fig. 5.23, bottom).


    5.23

    A standby power supply with mains power (top) and powering the server during a power outage (bottom).

    Five or 10 years ago, the switching time of an SPS was a major concern. For example, IBM used power supplies from two different manufacturers in the PC/XT. One of these had sufficient "inertia" to continue powering the system for between 8 and 15 milliseconds (ms) after power failed, while the other allowed only 2 ms or 3 ms grace. Systems that used the first type of power supply worked just fine with SPSs of that era. Systems using the second type of power supply crashed immediately when power failed, because the switching time required by the SPS was longer than the power supply could stand.

    Switching time is of much less concern today for two reasons. First, SPSs have been improved to reduce switching times from the 5 ms to 8 ms range common several years ago to the 1 ms range today. Second, PC power supplies have also been improved. A typical PC power supply today can continue to operate without input power for a substantial fraction of a full 60 Hz cycle (about 16.7 ms). These two factors combined make it unlikely that the short switching time required by a modern SPS will cause the PC to lose power.

    A simple SPS functions in only two modes. When mains power is at or above the minimum threshold voltage required, it's passed directly to the computer. When mains power voltage falls below the threshold, the SPS switches to battery power. This means that each time a sag or brownout occurs, your systems are running on battery power. A typical simple SPS may switch to battery if the incoming voltage drops below 102 volts-a relatively common occurrence.

    A line-interactive SPS adds power-conditioning functions to monitor and correct the voltage of the incoming mains power. When a sag or brownout occurs, the line-interactive SPS draws more current at the lower voltage in order to continue providing standard voltage to your equipment without using battery power. A typical line-interactive SPS may continue to provide standard AC voltage to your equipment without using battery power when mains power voltage drops as low as 85 volts, thereby reserving its battery for a true blackout situation.
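
    The short Python sketch below isn't from the original text; it simply contrasts the two behaviors just described, using the 102-volt and 85-volt figures quoted above. The function names and structure are assumptions for illustration only.

        # Hypothetical sketch contrasting a simple SPS with a line-interactive SPS.
        # The 102-volt switch threshold and 85-volt boost floor are the illustrative
        # figures quoted in the text; everything else here is assumed for clarity.

        def simple_sps(input_volts, threshold=102):
            """A simple SPS either passes mains power through or switches to battery."""
            return "battery" if input_volts < threshold else "mains (pass-through)"

        def line_interactive_sps(input_volts, boost_floor=85, threshold=102):
            """A line-interactive SPS boosts sags and brownouts without using the battery."""
            if input_volts >= threshold:
                return "mains (pass-through)"
            if input_volts >= boost_floor:
                return "mains (boosted to standard voltage)"
            return "battery"

        for volts in (118, 95, 80):
            print(volts, simple_sps(volts), "|", line_interactive_sps(volts))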

    Understanding UPS Specifications.

    The three critical elements of a UPS are how much power it can provide, of what quality, and for how long. The first two are determined by the size and quality of the inverter circuitry. The third is determined by the size of the battery connected to the UPS.

    The VA (volt-ampere) rating of a UPS determines the maximum amount of power that the UPS can supply and is determined by the rating of the components used in the inverter circuitry of the unit. For example, an SPS rated at 600 VA can supply at most 5 amps of current at 120 volts. Trying to draw more than 5 amps overloads and eventually destroys the inverter circuitry.
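
    If it helps to see the arithmetic, the short Python sketch below (not from the original text) converts a VA rating to the maximum current it can deliver and to a rough real-power capacity in watts. The 0.7 power factor is an illustrative assumption for equipment with switching power supplies, not a figure from this chapter.

        # Hypothetical sketch: relate a UPS's VA rating to maximum current and
        # to an approximate real-power (watt) capacity. The 0.7 power factor is
        # an assumption for illustration, not a value taken from this chapter.

        def max_amps(va_rating, volts=120.0):
            """Maximum current the inverter can supply at the given voltage."""
            return va_rating / volts

        def approx_watt_capacity(va_rating, power_factor=0.7):
            """Rough real-power capacity for loads with the assumed power factor."""
            return va_rating * power_factor

        print(max_amps(600))              # 5.0 amps at 120 volts
        print(approx_watt_capacity(600))  # roughly 420 watts of real load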

    The second essential issue in choosing a UPS is the quality of power it provides, which is determined by the design of the inverter circuitry. Nearly all online UPSs and the better grade SPSs provide true sine wave output from their inverters. Less expensive units with lower-quality inverters provide only an approximation of sine wave output, using square wave, modified square wave, or sawtooth waveforms. Any of these departures from true sine wave put an additional burden on the power supply of connected components, causing overheating of the component power supply and making its failure more likely. Don't settle for less than true sine wave output in your UPS.

    The third essential issue in choosing a UPS is how long it can continue to provide backup power for a given load. This is determined by the type and amp-hour rating of the battery. The amp-hour rating is simply the product of how many amps of current the battery is rated to provide, times the number of hours it can provide that current.

    Unfortunately, the total amp-hours deliverable by a given battery depend on the load. For example, you might expect that a battery capable of delivering 5 amps for 12 minutes, or 1 amp-hour, could instead deliver 10 amps for six minutes, also 1 amp-hour. This isn't the case, however. High current draws reduce the total amp-hours deliverable, while low draws increase them. The battery described might provide 10 amps for only three minutes, or 0.5 amp-hour, whereas it might allow a draw of 1 amp for 120 minutes, or 2 amp-hours.

    What this means in real terms is that you can't assume that a UPS that powers two servers for 10 minutes can power four servers for five minutes, even if the load is within the VA rating of that UPS. For example, the author's 600 VA unit is rated to supply a full 600 VA for only five minutes, but can supply 300 VA for 22 minutes. In other words, cutting the load in half more than quadruples the runtime.
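
    To make the nonlinearity concrete, the Python sketch below (not from the original text) interpolates runtime between two published (load, runtime) points, using the figures quoted above for the author's 600 VA unit. Fitting a power curve through the two points is an assumption about how runtime scales, not a manufacturer's formula; for real planning, use the runtime tables the UPS vendor publishes.

        import math

        # Hypothetical sketch: estimate runtime at an arbitrary load by fitting
        # runtime = a * load**b through two published (VA load, minutes) points.
        # The two points are those quoted above (600 VA for 5 minutes, 300 VA for
        # 22 minutes); the power-curve model itself is an assumption.

        def estimate_runtime(load_va, points=((600, 5.0), (300, 22.0))):
            (l1, t1), (l2, t2) = points
            b = (math.log(t2) - math.log(t1)) / (math.log(l2) - math.log(l1))
            a = t1 / (l1 ** b)
            return a * (load_va ** b)

        print(round(estimate_runtime(450), 1))  # roughly 9 minutes at a 450 VA load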

    With most small UPSs, you take what you're given with respect to batteries. The batteries are built-in, and you have few or no choices about battery type or VA rating. Some larger UPS models-typically starting at 1,000 VA or more-use external batteries, allowing you to choose both the type and number of batteries to be used. Some modular systems allow you to add batteries in a daisy-chain arrangement, allowing whatever runtime you're willing to pay for.

    Sizing a UPS.

    Calculating UPS loads and runtimes is imprecise at best. Servers and other components, even when fully configured, usually draw considerably less power than the rating of their power supplies, which are rated for the maximum load the system can support. Even nominally identical components can vary significantly in their power draw. Loads also vary dynamically as tape drives, Energy Star-compliant monitors, and other components cycle on and off.

    Although you can use commercially available ammeters in an attempt to determine exactly what each of your components really draws and size your UPS accordingly, few system managers do so. Most elect to simply sum the component wattages (applying the power factor correction for components with switching power supplies), add a fudge factor for safety, and then buy the next size up.
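
    The Python sketch below (not from the original text) spells out that rule of thumb: sum the wattages, apply a power factor correction, add a safety margin, and round up to the next standard size. The component wattages, the 0.7 power factor, the 25 percent margin, and the list of standard sizes are all illustrative assumptions, not recommendations from this chapter.

        # Hypothetical sketch of the sizing rule of thumb described above.
        # All numbers here are assumptions used only to show the arithmetic.

        STANDARD_SIZES_VA = [300, 450, 600, 900, 1000, 1400, 2200, 3000]

        def size_ups(component_watts, power_factor=0.7, safety_margin=0.25):
            va_load = sum(component_watts) / power_factor  # convert watts to VA
            va_load *= 1 + safety_margin                   # add a fudge factor
            for size in STANDARD_SIZES_VA:                 # buy the next size up
                if size >= va_load:
                    return size
            raise ValueError("Load exceeds the largest listed UPS; split it across units.")

        # Example: a server, a monitor, and an external drive array.
        print(size_ups([250, 90, 60]))  # prints 900 for this assumed load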

    UPS Manageability.

    Manageability is another issue you need to consider when choosing a UPS. There are two aspects to UPS manageability, as follows:

  • All but the most inexpensive UPSs make provision for interaction with the network operating system, allowing the UPS to signal the server to shut down gracefully when backup power is about to run out, thereby avoiding the disk corruption and other problems that result from a sudden, unexpected loss of power. Windows NT Server provides this function directly, but only for supported UPS models. Check the latest version of the Hardware Compatibility List to verify that your UPS is supported. Even if it isn't, the UPS manufacturer may provide software that runs on Windows NT Server 4.0 to add this functionality.
  • True remote management is provided by an SNMP agent included with the UPS. This allows the UPS to be monitored, tested, and controlled from a remote management client. If you're running such a remote management package to control other aspects of your network, make sure that the UPS model you choose is supported by your management software package.

    From Here...

    This long chapter covered the three primary categories of hardware required to set up a Windows NT network: network cabling, active network components, and server systems. Ethernet was recommended as the media access method, 10BaseT and/or 100BaseT for cabling, ISA NICs for 10BaseT clients, and PCI NICs for 100BaseT clients and servers running either 10BaseT or 100BaseT. Guidelines for choosing Pentium- and Pentium Pro-based servers, along with recommendations for SCSI-2 drives and host adapter cards, were included. The chapter concluded with a description of the two basic types of uninterruptible power supplies.

    The following chapters provide further information on the topics discussed in this chapter:

  • Chapter 6, "Making the Initial Server Installation," guides you through Windows NT Server 4.0's setup application, with emphasis on preparing you to answer the questions posed in the setup dialogs.
  • Chapter 7, "Setting Up Redundant Arrays of Inexpensive Disks (RAID)," describes how to choose between software and hardware RAID implementation and how best to take advantage of Windows NT Server 4.0's software RAID capabilities.
  • Chapter 8, "Installing File Backup Systems," explains the tradeoffs between different types and formats of backup systems, and how to manage server backup operations to ensure against lost data in case of a fixed-disk failure.
  • Chapter 18, "Managing Remote Access Service," explains how to select and install modems, and configure Windows NT Server 4.0's RAS component to support dial-in access to shared network resources by mobile users.
