In-Depth
Systems Engineering: Stressing Out over IIS
Test with the Microsoft Web Stress Tool to tune IIS before it hits the wall. Then use Performance Monitor to measure the improvements.
- By Pat Filoteo
- January 01, 2000
Microsoft Internet Information Server (IIS) 4.0 is one
of the fastest and most scalable Web solutions available.
For most activities, such as running intranets or informational-type
Web sites, the out-of-the-box configuration needs little
optimization. On the other hand, a high stress (load or
availability) environment, such as an e-commerce, search,
or mission-critical intranet, will probably require tuning
and benchmarking to be successful. If you’re eyeing a
Windows 2000 and IIS 5.0 installation any time soon, be
assured that the stress testing process and tools I describe
here will remain the same.
How can you create an environment that allows your solution
to grow with increasing traffic or complexity using IIS?
This article discusses how to tune IIS, along with why
these changes are necessary. I describe the use of Performance
Monitor in conjunction with the Microsoft Web Stress Tool,
so you can measure the improvement. The article primarily
targets IIS administrators, rather than developers. Developers
looking for improved performance (and curious administrators)
should review the MSDN documentation available at www.msdn.microsoft.com
(no subscription necessary).
Before tuning IIS, it’s important to first understand
how it’s set up. When IIS was first released, a Pentium
Pro 200 (single processor) server was considered very
fast. So, most of IIS’s default settings are tuned for a less capable machine. Faster hardware and processors hide this inefficiency, but to get the server’s full potential, you need to make certain modifications.
Before touching your keyboard, read the complete article
and come up with a plan prior to implementing changes.
The order of change doesn’t matter, and some rebooting
(and time) can be saved with some forethought.
Change the Services
The first setting to change is the Server service, located in the Network properties dialog (right-click Network Neighborhood, choose Properties, and select Server on the Services tab). Its Properties dialog offers several optimization options, including Maximize Throughput for File Sharing and Maximize Throughput for Network Applications. Select Maximize Throughput for Network Applications and reboot to initialize the change. This setting controls how Windows NT balances resources between network listeners and memory used for caching.
Change the Application
Next, modify the performance settings from within Microsoft
Management Console Internet Services Manager by right-clicking
on the Web site and choosing Properties | Performance.
Slide the performance bar to the far right (100,000-plus hits per day). If the Web server is a dedicated machine, there’s no reason to “tune it down.” This setting directly affects the memory and the number of worker and listener threads that IIS allows to be in use.
Next, make sure that the “keep-alives” box is checked. This setting is one of the least understood performance enhancements available. Without it, every request requires a brand-new TCP connection, complete with its own handshake. Needless to say, that’s a lot of overhead for your clients (not to mention the server). Persistent connections are part of the HTTP 1.1 specification, so older proxies and browsers that don’t support them simply ignore the setting.
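To illustrate (the host name is hypothetical), here’s what a keep-alive exchange looks like on the wire. Both requests below travel over a single TCP connection; the Connection header shown is the older HTTP 1.0-style extension, while HTTP 1.1 clients hold connections open by default:

```
GET /default.asp HTTP/1.1
Host: www.example.com
Connection: Keep-Alive

GET /images/logo.gif HTTP/1.1
Host: www.example.com
Connection: Keep-Alive
```

Without keep-alives, each of these requests would pay for its own three-way TCP handshake before any content moved.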
Close the dialog and right-click on the Machine Name.
Choose Properties | Edit WWW Master Properties | Home
Directory | Configuration | Process
Options. You need to modify two settings. First, change
the default script engines cached from 30 to 250. For
future reference, Microsoft increased this default setting
in IIS 5.0 to 250. Under a heavy stress, this allows IIS
to keep more interpreters available without unnecessary
loading or unloading. Next, change the “templates cached” value from unlimited to a value equal to 25 percent of the number of ASP pages on the site. A template is the compiled, in-memory form of an ASP page, essentially a precompiled version that IIS can re-execute without reparsing the source, analogous to SQL Server’s caching of stored procedure execution plans.
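Since the templates-cached value depends on how many ASP pages your site holds, a quick script can derive it. This is a minimal sketch (the webroot path is left as an argument) that counts .asp files and returns 25 percent of the total:

```python
import os

def template_cache_size(webroot):
    """Count ASP pages under webroot and return 25 percent of that count."""
    asp_count = 0
    for _dirpath, _dirs, files in os.walk(webroot):
        asp_count += sum(1 for f in files if f.lower().endswith(".asp"))
    # Cache one template for every four ASP pages, with a floor of 1
    return max(1, asp_count // 4)
```

A site with 400 ASP pages, for example, would get a templates-cached value of 100.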
The rest of the settings aren’t exposed via the interface
and will need to be modified in either the Metabase or
the Registry. It’s important to back these up in case
something goes wrong.
First, modify the Metabase with the following changes:
cscript adsutil.vbs set w3svc/AspQueueTimeout 30
cscript adsutil.vbs set w3svc/ServerListenBackLog 1000
cscript adsutil.vbs set w3svc/MaxEndPointConnections 1000
If the default installation location was used, these
need to be run from the c:\winnt\system32\inetsrv\adminsamples
directory.
The AspQueueTimeout is set to infinity by default. This
code changes it to 30 seconds. Because very few users
will wait more than 30 seconds for a response without
hitting “Refresh,” there’s no reason to keep a page in
the queue for any longer. ServerListenBackLog and MaxEndPointConnections must be set to the same value; together they control how many pending client connections IIS will queue and accept. Keep in mind, this doesn’t limit you to 1,000 clients, because they won’t all be hitting the machine at precisely the same instant. Instead, it allows for several thousand simultaneous users
(depending on content and network latency). Content latency
is the time it takes content to be served from the disk
or cache to the wire; network latency is the time it takes
a request to travel from the client to the server (or
vice versa).
By default, IIS allows 10 ASP worker threads per processor.
So, in a dual-processor machine, only 20 ASP pages may
execute at a time. This was done when IIS was first released
to keep it from saturating weaker machines. A Registry
parameter needs to be added to the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC\ASP\Parameters
Dword: ProcessorThreadMax
Value: 14 (hex, 20 in decimal)
This doubles the maximum worker threads. Raise this value
further only after extensive testing—increasing the value
too high could actually slow the server down.
If your Web site makes connections to a database (and
how many big sites don’t?), the next setting might provide
some significant improvement. Microsoft Data Access Components
(MDAC), by default, allocates 7M of contiguous memory
for every recordset. The memory is released as soon as
the recordset is closed, but that’s still a big chunk
of memory to give up on a busy server.
HKEY_CLASSES_ROOT\CLSID\{c8b522cb-5cf3-11ce-ade5-00aa0044773d}\Flags
Dword: MaxBlock
Value: 00004000 (hex, 16,384 in decimal)
This setting reduces the default allocation to 16K (16,384 bytes). Needless to say, a busy server can find many more free 16K blocks than 7M blocks.
Another database performance setting is the MaxPoolThreads.
The setting is actually for network requests from IIS
(among other things) and can, therefore, limit the number
of simultaneous links (connections) to a remote database.
The default setting is eight.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\
InetInfo\Parameters
Dword: MaxPoolThreads
Value: 00000020 (hex, 32 in decimal)
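For repeatability, the three Registry additions described above can be collected into a single .reg file (REGEDIT4 format, which NT 4.0’s Registry Editor imports). This is a sketch assuming the exact keys and values given in this article; back up the Registry before importing it:

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC\ASP\Parameters]
"ProcessorThreadMax"=dword:00000014

[HKEY_CLASSES_ROOT\CLSID\{c8b522cb-5cf3-11ce-ade5-00aa0044773d}\Flags]
"MaxBlock"=dword:00004000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\InetInfo\Parameters]
"MaxPoolThreads"=dword:00000020
```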
A Note on Service Packs
Finally, the latest NT Service Pack (SP5 at the time
of this writing) should be applied. Service Pack 4 included
some significant improvements in the throughput and IP
stack in general. SP5 includes these improvements, plus
a new optimization for the ASP cache, memory handling
(user layer) by the operating system, and other sundry
bug fixes. Pursuant to Microsoft’s new policy regarding service packs, SP6 was released after most of this article was written and has been designated an “optional” installation. I strongly recommend that you review Knowledge Base article Q241211 (the SP6 list of fixes) before deciding not to apply it. As always, you should test any software
before placing it on the production servers.
Unfortunately, most IT shops have a relatively delayed SP rollout schedule due to the sheer number of environments (DB, Web, Mail, and so on) that must be tested. Because the SPs contain security updates as well as bug fixes, I suggest testing the Web servers separately to minimize the delay.
Time to Test
Now that all the settings have been changed, it’s time
to stress test and measure how much traffic the server
can really handle. The tools to use include Performance
Monitor and the Microsoft Web Stress Tool (formerly known
as “Homer”). If you already have a stress tool, feel free
to use it.
For the first run, start Performance Monitor with these
counters, which I recommend you log and chart, to make
sure you don’t miss something:
From the ASP object:
- Requests per second
- Requests executing
- Requests queued
- Request execution time
From the Processor object:
- Percentage of processor time
From the Process object:
- Private bytes (inetinfo.EXE)
From the Web Service object:
- Current connections
Once the counters are set up, save these settings to speed up later testing. Make sure to change the chart’s update interval to at least six seconds between ticks. A six-second interval makes the chart show the last 10 minutes of activity. This should be plenty for most testing runs.
However, as a pre-rollout, you should perform a 12-hour
(or longer) test. After a few runs, you can remove the
unneeded counters to clear out the clutter. After testing
becomes a habit, it’ll become fairly obvious when a change
makes a difference in performance (beneficial or otherwise).
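The 10-minute figure follows from the chart’s width: Performance Monitor plots 100 samples across the chart, so the visible window is simply the sample interval times 100. A quick check (the 100-sample width is the assumption here):

```python
CHART_SAMPLES = 100  # Performance Monitor chart holds 100 data points (assumption)

def chart_window_minutes(interval_seconds):
    """Minutes of history visible on the chart at a given sample interval."""
    return CHART_SAMPLES * interval_seconds / 60

print(chart_window_minutes(6))  # 6-second ticks -> 10.0 minutes of history
```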
You can obtain the Web Stress Tool at http://homer.rte.microsoft.com,
where you’ll also find documentation on its use. The tool
creates a Web stress service, and if you’re in the Administrators group on multiple machines that have the tool installed, you can generate load from many machines at once.
After the service is installed, record a script by browsing
your site. The more pages you record, the more realistic
the test. There will be times, such as during a troubleshooting
session, when a particular page will need to be stressed.
This is OK too, but in this article, you’ll be performing
a general test. After the script is prepared, start the
stress test.
With the stress test running, watch the counters to see
where potential bottlenecks might be occurring. Each counter
relays this information a little differently. Here’s a
generic interpretation of the counters and what a given
result might mean.
- Requests per second:
This is used as a gauge of how much stress the machine
is under. If this value is too low for the test you’re
attempting (say, one where you want to answer the question:
How many requests can this server handle?), you might
need to modify the script or stress engine settings.
- Requests executing:
By default, only 10 requests can execute simultaneously.
Earlier you changed this value to 20 (14 hex). If the
counter is maxing out at 20, but the processor isn’t
reaching 100 percent utilization, then increasing the
count might help. Another suggestion is to streamline
the ASP code so it executes faster, having fewer lines
to interpret (include files add greatly to the length).
If the processor is already pegged at 100 percent, increasing
this value won’t help.
- Requests queued:
This value constantly increasing means the stress test
is overwhelming the server. It isn’t unusual for a request
to queue, but a constant increase means that the server
isn’t keeping up. This indicates a poor execution time
or a script that artificially generates more requests
than is realistic. Real-world usage typically shows
that it takes 10 simultaneous users to keep one ASP
page executing.
- Request execution time:
This value is measured in milliseconds. If it’s consistently more than 2,000 (2 seconds), an improvement needs to be made.
ASP pages (or any request type) should ideally finish
in less than 1.5 seconds. Perhaps the database isn’t
keeping up or the logic in the ASP is too complex for
a quick interpretation. This indicates that more in-depth
testing is needed (single page, execution of query outside
IIS, and so on).
- Percentage of processor time:
This shows the CPU load. If the CPU is maxed out at
100 percent, increasing the load won’t make the server
display more pages. Most test boxes perform double duty
as database servers, lacking the horsepower of the production
machines. Try to minimize differences between actual
production and testing situations to improve the correlations.
Changes that degrade performance will still show up,
but it’s easier to get a one-to-one comparison if the
environment is the same.
- Private bytes (inetinfo.EXE):
This value dances around as objects are loaded and unloaded.
The important item to note is whether there’s an upward
trend, which indicates a memory leak. I just completed
a 1.5 million hit stress test against IIS 4.0 (SP5),
and no leaks showed. If it’s leaking, find out what
else is installed and try to isolate the problem.
- Current connections:
Like requests executing, this is a general gauge of the load you’re placing on the Web server.
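Several of these counters are tied together by a back-of-the-envelope identity (Little’s law): requests executing should roughly equal requests per second multiplied by average execution time in seconds. The sketch below, with hypothetical readings, shows how to sanity-check your own numbers:

```python
def expected_executing(requests_per_sec, avg_exec_ms):
    """Little's law: concurrency = arrival rate x average time in service."""
    return requests_per_sec * (avg_exec_ms / 1000.0)

# Hypothetical Performance Monitor readings: 40 requests/sec at 500 ms each
print(expected_executing(40, 500))  # 20.0, right at the doubled thread limit
```

If the observed requests executing counter is pinned at the thread limit while requests queued climbs, the worker threads are the bottleneck; if it matches the estimate with idle CPU to spare, raising ProcessorThreadMax may help.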
Consider this stress testing primer as a place to start.
Other considerations not discussed include the network,
I/O, and Web farm scenarios (stress the firewall, DNS,
load balancer, etc.). As I said earlier, with IIS 5.0,
which will ship with Windows 2000, the process (and tools)
will remain the same. Microsoft will be making some additional
improvements, such as removing all settings from the Registry
(100 percent Metabase), improved SMP support, and additional
intrinsic objects. Performance Monitor is getting a facelift
to become more user friendly. However, the testing procedures
covered here (although not the actual settings) will be
the same and should provide good insight into what works
and what doesn’t.
About the Author
Pat Filoteo, MCSE, is a network engineer currently working in the Pacific Northwest. He’s been implementing NT solutions for about six years.