6 Steps to a Simpler Network
There's a saying in IT that
"complexity is the enemy of security." It's also the enemy of efficiency, troubleshooting
and other critical network functions. Here are six ways to untangle that crowded web
- By Bill Heldman
- May 01, 2005
Has your single LAN of the '90s evolved into a gargantuan enterprise? If your shop is like most, it started out with a handful of
Windows NT, Unix and Novell servers on a little
network. Now you're awash in a sea of servers (for which
you might have little solid software and hardware inventory information); you're reasonably certain some percentage
of your equipment has little to no fault-tolerance or
redundancy protection associated with it; bandwidth usage
is out of control; you're nowhere near level-set in terms
of your end-user computers' OSes, Office and miscellaneous
application installations, not to mention BIOS versions; and you're vulnerable to the virus du jour. On top of it all, your mobile and wireless users are increasing at an astronomical rate.
Sound familiar? If so, you're probably wondering how to make sense of it all—or if that's even possible at this point. Well, here are some practical steps you can take to simplify your network.
Figure 1. This building has an unwieldy and overly complex subnet structure, with multiple subnets per floor and limited IP addresses per subnet. This will eventually lead to problems. (Click image to view larger version.)
Start with the Subnets
First, take a look at your subnet structure, because nowhere can things get more kludged than a poorly engineered subnet plan. It can start with a wonderful idea like the 10-dot private addressing scheme. Then you add a bizarre subnet mask to it, assign a subnet to each little handful of users in various corners of the building, and
wind up with a rat's nest. To top it off, you associate the whole thing with switched VLANs. Poorly engineered TCP/IP subnet plans are difficult to understand (especially at 3:00 a.m. when you're trying to figure out the problem with your network), and might needlessly stress network switch and routing gear. If this is you, re-invent your subnet plan. Use standard subnet masks, and break things out into logical divisions. The subnets will fall right out at you.
Take a look at Figure 1, and note that Floors 1 and 3 (and, we can presume, the other floors as well) use a 255.255.128.0 subnet mask, meaning that each subnet has half the IP addresses it normally would. (For simplicity and clarity, avoid using anything other than a straight Class A, B or C mask.) Further, the second octet is incremented while the third octet stays the same across subnets. This works, but it's messy and confusing because there are eight subnets per floor. As you go up the floors, you have to remember which grouping of subnets belongs to which floor.
Figure 2. The re-engineered subnet plan is less confusing, more logical and simpler. As you can see, there is one subnet per floor, and double the number of IP addresses available per subnet. (Click image to view larger version.)
Now look at the revamped subnet structure in Figure 2, in which the first floor's eight subnets are consolidated into a single subnet with a standard Class B mask. It's much easier to tell at a glance which floor you're dealing with, and you don't run the risk of running out of IP addresses on a given subnet. Whether you keep the VLANs is a networking decision, but in either case you'll have to go in and tweak the closet switches on each floor to reflect the new addressing scheme.
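To see how a clean plan falls out of standard masks almost automatically, here's a small Python sketch using the standard-library ipaddress module. The one-/16-per-floor layout and the 10.0.0.0/8 base are illustrative assumptions, not taken from the article's figures:

```python
import ipaddress

# Illustrative plan: carve the 10-dot private space into one /16 per floor.
# The /16 boundary (255.255.0.0, a straight Class B mask) ties the second
# octet to the floor, so a glance at any address tells you where it lives.
building = ipaddress.ip_network("10.0.0.0/8")
floors = list(building.subnets(new_prefix=16))[:12]  # a 12-story building

for floor_number, subnet in enumerate(floors, start=1):
    print(f"Floor {floor_number}: {subnet} ({subnet.num_addresses - 2} usable hosts)")
```

An address's second octet then maps straight to a floor, which is exactly the at-a-glance property the re-engineered plan in Figure 2 is after.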
Pare Down the Name Servers
A big offender in adding unnecessary complexity to the network is the proliferation of WINS and DNS boxes. By keeping a multitude of name servers in your environment, you run the risk of an amateur administrator keying a static record into the database, preventing Windows from automatically discovering and creating records for the device (which happened to me in one of my jobs). You also increase the chance of errors due to replication latency, and the complexity of the installation confuses people who have to follow your lead. Besides all that, you simply don't need a bunch of name servers on your network.
A well-architected name server implementation requires only a handful of servers for even the largest of enterprises. In the case of name server quantities, less equals more. Here are some of the most important considerations:
- If you have to maintain WINS, no more than three WINS servers is a pretty good rule of thumb, regardless of the size of the organization.
- If you can avoid it, do not use the LMHOSTS file on local client computers or on servers, as this creates even more complexity and difficulty in troubleshooting.
- If you use an image to install clients, disable LMHOSTS lookup in your client network configuration. In an imaged installation, LMHOSTS is blank; if a computer trying to find a host falls back to LMHOSTS, the lookup will fail, of course, and the computer has wasted time on a useless exercise.
- If you can get by without WINS, do so, sticking strictly with DNS for name resolution. However, realize that unless everything is up-to-date—all applications, servers and users—it may be tough to dispense with WINS, at least for the next several years.
- Try to keep your internal DNS environment to three servers.
I'm not a fan of forest administrators keeping a secondary DNS server, as this, too, adds complexity. However, I understand why an admin would want to maintain his own DNS server. The trick here is to have one or two top people (keepers of the root) architect and manage the DNS deployment, and communicate on a routine basis what's happening, so that it's understood how DNS will roll out. Otherwise, the servers will procreate like rabbits and no one will be able to resolve a name. It is vital that someone own the DNS implementation, lock, stock and barrel.
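One payoff of a small, well-owned name-server setup is that resolution problems are easy to audit. Here's a minimal Python sketch; the hostnames are placeholders, and it exercises whatever resolver the machine running it is configured with:

```python
import socket

def check_resolution(hostnames):
    """Try to resolve each name; return {hostname: address or None}.

    Running this list from a few representative clients makes stale
    or hand-keyed static records easy to spot.
    """
    results = {}
    for name in hostnames:
        try:
            results[name] = socket.gethostbyname(name)
        except socket.gaierror:
            results[name] = None
    return results

report = check_resolution(["localhost", "no-such-host.invalid"])
print(report)
```

The ".invalid" top-level domain is reserved and guaranteed never to resolve, which makes it a handy negative control for a script like this.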
Simplification Through "Stream"-lining
Suppose you were told you could package all of your users' apps with a simple, wizard-driven product, store them on a server as a file and send the resulting application icons to a designated set of users. When a user clicks on one, a small percentage of the app streams to the user's computer, then launches.
This is the idea behind "streaming applications." The app acts like it's running locally, but in fact nothing is installed on the user's desktop—no Registry entries, no files. That certainly simplifies your network, but it goes even further than that: the program isn't even installed on the server. The idea revolves around the packaging software watching an app install itself, then creating a file that represents the app to the server and to the user. The app thinks it's running in the regular framework for which it was written, but in reality, the user is simply utilizing a cache file on his computer.
In this scenario, the user clicks an application and part or all of it—depending on whether it's a desktop or mobile user—is streamed to his computer, as opposed to running directly from the server, as in the Citrix/Terminal Services model. The program instead runs from the app-streaming server. The app-streaming servers represent the apps to your Citrix or Terminal Services servers and they, in turn, represent them to the user. You don't even have to have a Citrix or Terminal Services box to use streaming app server software. Two major players in this space, AppStream and Softricity, both allow you to host the apps without Citrix or Terminal Services.
Level-Set Your Software
When it comes to Total Cost of Ownership (TCO), one of the worst things you can do is maintain an installed base of every version of Windows and Office under the sun. By level-setting your users' OSes and application versions, you gain some important simplification benefits:
- You avoid having to carry around a bevy of CDs
- Support costs are greatly reduced
- Upgrades are easier ("Let's see, is it SP4 for Win2K and SP1 for XP or vice-versa?")
- Training is easier
- You don't have to cope with software glitches spread across four or five version levels
I've seen shops with Windows 3.11, 95, 98, ME, NT, 2000 and XP, plus a couple of old DOS machines. There are shops where a small percentage of the user base insists on staying with WordPerfect instead of joining the rest of the Office crowd (or vice-versa). One time, my CFO was adamant that he would not migrate from his "Act!" program to the Outlook calendar—never mind that the rest of the enterprise was scheduling Outlook meetings he never showed up for, because he didn't know he'd been invited.
The same thing goes for servers—keep them level-set for greater efficiencies. One trend starting to take hold in the server world is the idea of "automatic provisioning." You have a rack of "bare metal" servers sitting in your data center, just waiting for loads to increase. When they do, your management software is smart enough to provision (some call it "inflate") a new server for the need, regardless of where the need is. This sort of provisioning technology might require standardization, at least in terms of the OS and associated service packs and security updates.
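The level-set idea reduces to a simple audit: compare each machine's recorded build against the standard and flag the stragglers. A hedged sketch, with a made-up inventory and a made-up baseline:

```python
# Hypothetical standard build; real values would come from your own baseline.
STANDARD = {"os": "Windows XP", "service_pack": "SP2", "office": "Office 2003"}

def off_standard(inventory):
    """Return the machines whose recorded build differs from the standard."""
    return [m for m in inventory
            if any(m.get(key) != value for key, value in STANDARD.items())]

fleet = [
    {"host": "pc-001", "os": "Windows XP", "service_pack": "SP2", "office": "Office 2003"},
    {"host": "pc-002", "os": "Windows 98", "service_pack": None, "office": "Office 97"},
]
stragglers = off_standard(fleet)
print([m["host"] for m in stragglers])  # only pc-002 is off the baseline
```

In practice the inventory dictionaries would be fed by whatever management tool you use; the point is that once the data is collected, the audit itself is trivial.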
Automate Everything You Can
Savvy administrators know how important automation is to making, or keeping, a network simple. They get help from today's management tools (SMS/MOM, Altiris, NetIQ, LANDesk and others), which have come a long way from the days of SMS 1.0. One overlooked area of automation, though, is configuration management. If you've ever had to change the subnet mask on a couple hundred closet switches all over your company, you'll love this class of software.
Suppose, in the example above, that you have 250 network switches sitting in 25 different closets around your company and decide to re-engineer your subnets, as advised in Step 1. Without automated configuration management, you'll have to Telnet or browse (HTTP) into each switch to make the configuration change, or visit each switch with a laptop and null modem cable and make the change one switch at a time.
Configuration management software discovers the managed devices. Once it does, you set up the subnet change and issue the command to all 250 switches at once. Cool, huh?
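The fan-out itself is simple once discovery is done. Here's a Python sketch of the push step; the transport is a stand-in (a real tool would speak Telnet, SSH or SNMP to each device), and the command syntax is invented:

```python
def push_change(switches, command, send):
    """Issue the same command to every switch; collect per-device results."""
    return {switch: send(switch, command) for switch in switches}

# Fake transport for illustration: just records what would have been sent.
sent = []
def fake_send(switch, command):
    sent.append((switch, command))
    return "ok"

# 250 switches in 25 closets, as in the example above.
switches = [f"closet{c:02d}-sw{n}" for c in range(1, 26) for n in range(1, 11)]
results = push_change(switches, "set subnet-mask 255.255.255.0", fake_send)
print(f"{sum(1 for r in results.values() if r == 'ok')} of {len(results)} switches updated")
```

Collecting per-device results matters: in the real world a handful of switches will be unreachable, and you want the tool to tell you which ones to visit with the laptop.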
Simplify Your Printing
Question: What procreates faster than warm, moist yeast?
In a 12-story building of about 900 users, guess how many printers my shop supported? 900! The printer insanity has to stop.
To simplify this grotesque situation, consider leased, networked, enterprise-class Multi-Function Devices (MFDs) that can print in color and black and white, fax, scan and copy. (Some of them make espresso and heat up your morning bagels, too.) Several strong vendors play in this space, including Ricoh, Canon and Xerox. These devices can be centrally managed, they're rugged, and they aren't subject to breakdowns like the little ink-jet and laser-jet units are. Users can send a variety of jobs to them, whether it's scanning a document on the platen to send to the desktop or sending a 500-page report from the desktop to hit the three-hole paper bin.
Because of the tremendous duty-cycle these MFDs can handle, you can design an implementation that strategically locates them around the building—instead of in every nook and cranny in your office. Best of all, with the right leasing plan, support is handled by the leasing company, freeing you up for more important duties.
Don't Put It Off
Many of these tips take time to implement. Some, like the subnet, require a great deal of preparation and testing. You may feel like you don't have the time and resources to undertake some of these changes, but consider the alternative: having an inefficient, needlessly complex network that slows you down every day. In the end, the extra effort you spend now will save you much effort in the future, not to mention money that you can spend on something other than aspirin.
More Information
Here are four more ways to simplify your network, plus handy links to the vendors mentioned throughout the article:
7. Storage Simplification
8. Simplify Your Backups
9. Server Simplification Through Virtualization
10. Simplify Your Phone Network
Links to Vendors
7. Storage Simplification
There are disks everywhere; every time you order a new server, you provision at least one RAID 5 controller and lots of storage, don't you? Yet industry insiders suggest you're using only 20 percent to 30 percent of the storage you bought, while the remaining 70 percent to 80 percent goes unused. Oddly enough, a shop can have storage capacity problems and still not be using most of the space it already owns.
There are a number of reasons for this:
- You assume more is better when provisioning servers. It's easy to reconcile in your mind that $200 more for an extra 20GB to 40GB of disk space isn't going to break the bank. Why not order extra just in case you run out later on and need some?
- You probably have uses in mind for all that space—it might be a good place, for example, to back up users' PST files.
- You may have been burned once by running out of disk capacity for something.
Given the above, it's easy to accumulate dozens of servers, all of which might have an extra 20GB to 80GB of storage sitting on them. Then you don't take the time, or don't know how, to thread all that disk space together for some practical purpose. (You might not be able to use all the disk even if you want to, if your servers are in different locations. Keeping track of the data you've put up on each little disk segment can get really confusing. Worst of all, the disk sits idle, even though you've paid for it.) I'm certainly not saying all shops are like this, but industry averages show that most have a lot more disk than they're using.
Several leading storage vendors can run a utility that will examine each of your servers and can give you a complete picture of how much goes unused. Unused disk is a resource that you've paid for. It's like buying 20 gallons of paint when you know you're only going to use 10 to paint the house, but rationalizing that you can do something with the other 10 gallons later on.
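The utilization math behind those vendor reports is straightforward. A sketch with made-up per-server figures; a vendor's discovery utility would fill in the real ones:

```python
# Hypothetical per-server figures, standing in for a discovery utility's output.
servers = {
    "srv-01": {"capacity_gb": 120, "used_gb": 30},
    "srv-02": {"capacity_gb": 200, "used_gb": 55},
    "srv-03": {"capacity_gb": 80, "used_gb": 20},
}

total = sum(s["capacity_gb"] for s in servers.values())
used = sum(s["used_gb"] for s in servers.values())
idle = total - used
print(f"Using {used} of {total} GB ({used / total:.0%}); {idle} GB paid for but idle")
```

Even this toy fleet lands in the 20-to-30-percent utilization range the insiders describe, which is the argument for consolidating that idle capacity onto shared storage.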
If you're struggling with storage issues, it's time to hop aboard the Storage Area Network (SAN) and Network Attached Storage (NAS) trains. Besides consolidation, these storage types eliminate the hassle of using the distributed file system (Dfs) to chase down files in exotic locations. Sophisticated SAN/NAS implementations allow you to copy data in real-time to more than one source for safekeeping and work by different groups.
The idea behind SAN and NAS is that the majority of user data (including databases) represented on the SAN/NAS is centralized: highly available, highly fault-tolerant and easy to back up. Lots of technology companies are jumping into this game; you should learn all there is to know about this extensive world.
SAN/NAS technologies can provide a central place for user, server and database files. With NAS, you can share out the space so that users can connect to it over the network (using NetBIOS or NFS). SAN is space dedicated to servers directly connected to it, typically by fiber-optic cabling using Fibre Channel cards (called Host Bus Adapters—HBAs). There are numerous SAN benefits:
- High fault tolerance. Often these systems use "phone-home" technology, alerting the vendor's service department to a problem—usually before you know about it. Repairs are made lickety-split and your users never even see a blip on the radar screen
- The device constantly monitors disk activity and moves data off of detected bad sectors before it can become corrupted. The bad sectors are marked as such
- Easy backup, since all data is in one place
- Devices can be connected together across a segment, whether from one campus building to another or one geographical location to another
- Writing technology can vary. You can set up a disk array so that one write is made to two or more devices at the same time, or perform a batch write (write to one device, accumulate changes, then write to another device at a preset time)
- You can take a snapshot of the data and copy it to part of the array's storage for faster backups, data warehousing or other operations when you need a point in time snapshot
- Storage operations can be proactively managed using state-of-the-art management tools written specifically for the array. You can apply some Key Performance Indicators (KPIs) to your storage, providing some key metrics regarding storage usage and functionality
- Clustering can happen across a wide-area network (In EMC's case, you can use GeoSpan software in tandem with Microsoft Clustering Service [MSCS] for a clustered environment across Windows servers and geographically disparate storage arrays)
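The point-in-time snapshot in the list above is worth a toy illustration: the snapshot freezes a view of the data while writes continue against the live copy, so a backup job reads a consistent image. (Real arrays do this with copy-on-write at the block level; the Python dict here is just a stand-in.)

```python
live = {"block0": "aaaa", "block1": "bbbb"}
snapshot = dict(live)      # freeze a point-in-time view of the blocks

live["block1"] = "cccc"    # production writes keep landing on the live copy

# The backup job reads the frozen image, not the moving target.
print(snapshot["block1"])  # still the pre-write contents
```

That separation is what lets you back up, warehouse or analyze data without quiescing the applications writing to it.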
Big players in the SAN/NAS industry include:
If you get into the bigger gear and get involved with Fibre Channel switches, you'll find there are two main players:
Here are some companies targeting SMB-size with their SAN/NAS gear:
8. Simplify Your Backups
Backups are the bane of every administrator. They're tricky to set up, require care and feeding to keep going, work slowly, and are scary when you're ready to try a restore. (If you've ever been on the admin side of a restoration that bombed because the data you thought was there wasn't there, you know what I'm talking about.)
But they don't have to be intimidating. One aspect of simplifying your network is getting things done faster, and moving from stodgy old tape backups to an ATA-class disk array gets the job done much quicker, along with giving you the capacity to grow the array to match your storage needs. Backups that used to take hours take much less time on ATA. In certain array settings, you can even take a snapshot of the data at a point in time, then back that up instead of backing up the live data. If your backups grow from hundreds of gigabytes to terabytes or more, rather than finding room for more tape devices, you just add more disk to the array and off you go.
You also won't have to throw out your present backup software to make the change to ATA. Tape-virtualization software tricks your backup software into thinking it's talking to a tape device, when in reality it's talking to disk. The job is done faster and more reliably, restores are easier and more reliable, and you can at last have some peace of mind with regard to your backups.
9. Server Simplification Through Virtualization
You were planning on buying a replacement server for that outdated, tired old box anyway, right? VMware and Microsoft's Virtual Server 2005 allow you to host multiple server environments, all on one computer. Yes, you have to buy an enterprise-class computer to host a virtual server environment (you can't use those old Pentium 500s your boss got on the cheap through eBay), and there's some additional care and feeding required, but the trade-off is consolidating your server farm into manageable chunks.
Suppose you decide to get into the virtualized server game. Here's how the whole thing would play out:
- Buy an enterprise class server (or repurpose one from a different use)
- Decide how many multiple NICs to use or, alternatively, to stack IPs on a single NIC (I prefer the former to the latter for fault-tolerance and increased speed)
- Install the virtual server software
- Tell the software how many partitions you want (and in VMware's case, what kind: Linux or Windows)
- Install the OS on each of the partitions and configure them
- Test everything in an exactly-duplicated network
- Your users work exactly as they did before; the only difference is that the apps are now hosted on virtual server sessions
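Before step one, it's worth checking that the host can actually absorb the servers you plan to consolidate. A back-of-the-envelope Python sketch, with invented guest sizes and a headroom fudge factor held back for the virtualization layer itself:

```python
HOST = {"cpus": 8, "ram_gb": 16}  # the enterprise-class box

guests = [
    {"name": "dns-01", "cpus": 1, "ram_gb": 2},
    {"name": "file-01", "cpus": 2, "ram_gb": 4},
    {"name": "app-01", "cpus": 2, "ram_gb": 4},
]

def fits(host, guests, headroom=0.25):
    """True if the guests fit with some capacity reserved for the hypervisor."""
    budget_cpus = host["cpus"] * (1 - headroom)
    budget_ram = host["ram_gb"] * (1 - headroom)
    return (sum(g["cpus"] for g in guests) <= budget_cpus
            and sum(g["ram_gb"] for g in guests) <= budget_ram)

print(fits(HOST, guests))
```

The 25 percent headroom figure is an assumption, not a vendor recommendation; the point is simply to budget for the "additional care and feeding" before you commit to the consolidation.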
10. Simplify Your Phone Network
I remember reading a 1995-or-so edition of Computer Telephony Magazine in which the cover story was titled "Voice over IP—The Yellow Brick Road!" The gist was that the technology was vibrant, exciting and would quickly take over the conventional telephony world. OK, so it took 10 years, but that's all right.
Here's the deal: You've got a ton of money wrapped up in proprietary Private Branch Exchanges (PBXs), switches and key telephone systems, not to mention the expensive software that goes along with them. The problem is that the ongoing maintenance and support costs for these beasts are eating you alive. By switching to a VoIP system, you get yourself out of yesterday's "big iron" and into today's server environment, the one all of us have come to know and love. Advantages include:
- Possibly saving hundreds of thousands of dollars in new equipment and ongoing maintenance and support costs (depending on the size of your network)
- Cheap server-based, rather than expensive PBX-based, technology
- Software runs on well-known platforms (Windows, Unix), thus relieving you of having to have a staff of PBX subject matter experts
- Interfaces with your office automation and e-mail software for a more streamlined user experience
- Runs on today's very fast network gear
- Brings a ton of different software add-ons with it—fax, voicemail as an attachment, and so on
- Has interfaces for Microsoft's Systems Management Server (SMS), wireless and handheld technologies
Today's VoIP systems from folks like Cisco and Avaya are awesome. The call quality is clear and crisp, the phone sets (although expensive) are high-tech and exciting, there's a ton of software you can add and, best of all, it's technology you're familiar with—servers, routers, TCP/IP. Get to know your VoIP vendors well, and ask them to do a no-nonsense ROI/TCO survey demonstrating the benefits of a switch.
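That ROI/TCO survey boils down to a payback calculation. Here's the back-of-the-envelope version; every figure below is a made-up placeholder that a vendor survey would replace with your real numbers:

```python
pbx_annual_support = 120_000   # what the old "big iron" costs to keep alive (placeholder)
voip_annual_support = 40_000   # projected support cost of the VoIP replacement (placeholder)
voip_capital = 300_000         # handsets, gateways, servers up front (placeholder)

def payback_years(capital, annual_savings):
    """Years until the up-front spend is recovered by lower running costs."""
    return capital / annual_savings

savings = pbx_annual_support - voip_annual_support
print(f"Annual savings: ${savings:,}; payback in {payback_years(voip_capital, savings):.1f} years")
```

A no-nonsense vendor survey should show you exactly this arithmetic with defensible inputs; if the payback period runs longer than the expected life of the handsets, keep negotiating.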
A caution here: You're probably not going to pull off a complete switch-out in just a few months' time. Changing out to VoIP takes planning with a lot of people, strategizing, coming up with a project plan, and most importantly, buy-in from the stakeholders that can authorize you to go forward.