In-Depth

Systems Engineering: Home-Built Intrusion Detection

With a bit of SQL and the NT Resource Kit, you can create a log file management system to stay on top of potential intrusions.

If you’re like me, you’re constantly under pressure to answer questions like: Are we experiencing any Internet attacks? Are internal or external users probing us for vulnerabilities? What machines are people trying to hack into? Of course, if you have sufficient logging enabled, the answers to some of these questions can be found in the log files of your servers or border routers. But let’s be honest—with hundreds of Windows NT workstations and servers all over the place, who has time to dig through a sea of log files to find some trace of intrusion? What we need is a way to consolidate all of these logs into a central location, preferably where we can run all kinds of queries against this data. That’s exactly what we’re going to do in this article using two tools: Microsoft SQL Server 7.0 (SQL 6.5 or Microsoft Data Engine should work fine as well) and the Windows NT Server Resource Kit (if you don’t have it, get it—now!).

I’ll focus on two types of logs: NT Server event logs and IIS HTTP logs; keep in mind that the log types you import are ultimately up to you and your imagination. I’ll discuss ways to optimize data collection and reporting techniques. I should note that there are several commercial packages for doing this type of log analysis; however, who ever got a promotion by signing a purchase order? Let’s show our self-reliance and maybe we’ll think of something the Major Players haven’t.

Population Control
If you’re lazy like me, then you don’t want to create a table and key in all those machine names. So create the following batch file and run it with domain administrator privileges on a machine with the Windows NT Resource Kit and osql (SQL Server or MSDE) installed.

@echo off
setlocal
for /f %%I in ('netdom /NOVERBOSE member') do call :insertsvr "%%I"
endlocal
goto end
:insertsvr
osql -E -S yourSQLserver -Q "use LogRepository insert servers (servername) values ('%1')"
:end

This batch file uses the NETDOM utility in the Resource Kit to obtain a list of all domain members. It cycles through the list and adds every member to the SERVERS table. We could also add another osql statement at the beginning to delete all records in the servers table and schedule this batch file to run on a regular basis. This would give us the ability to pull logs off of servers as soon as they’re added to the domain without having to periodically repopulate the SERVERS table manually.
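A hedged sketch of that scheduled variant might look like the following (it assumes the same server name, database, and table as above; the DELETE at the top clears the table so machines removed from the domain drop out as well):

```
@echo off
setlocal
rem Wipe the table first so the list always mirrors current domain membership
osql -E -S yourSQLserver -Q "use LogRepository delete servers"
rem Re-add every current domain member
for /f %%I in ('netdom /NOVERBOSE member') do call :insertsvr "%%I"
endlocal
goto end
:insertsvr
osql -E -S yourSQLserver -Q "use LogRepository insert servers (servername) values ('%1')"
:end
```

Schedule it with the AT command or your favorite scheduler and the SERVERS table stays current on its own.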

Obviously, you need to replace “yourSQLServer” with the actual name of your SQL Server. I used a trusted connection, but you could also use the –U and –P switches to provide SQL Server authentication.
—Chip Andrews

Preparing SQL Server

We’ll begin by creating a SQL Server database called “LogRepository,” making sure that plenty of free space exists for us to start collecting data. I’ll show all SQL commands for creating the necessary database objects, but you’re welcome to do all of this using the Enterprise Manager if you’re so inclined. Of course, if you’re using MSDE, using osql.exe and the commands below may be your only option. First, let’s create the database:

USE master
GO
CREATE DATABASE LogRepository
ON
( NAME = LogRepository,
FILENAME = 'c:\mssql7\data\logrepository.mdf',
SIZE = 50MB,
FILEGROWTH = 5MB )
LOG ON
( NAME = 'LogRepository_log',
FILENAME = 'c:\mssql7\data\logrepository_log.ldf',
SIZE = 5MB,
FILEGROWTH = 5MB )
GO

Keep in mind that you may need to change drive letters or directory names for your configuration. I’m only being specific to help those who may want to get started with a simple configuration and learn from there. Also, you may want to change the TRUNCATE LOG ON CHECKPOINT and SELECT INTO/BULKCOPY flags for speed—and so that we don’t fill up the transaction log on an application that isn’t mission critical. Use Books Online to see the details on this option if you’d like a full description of why we’re doing this.

sp_dboption 'LogRepository', 'trunc. log on chkpt.', true
GO
sp_dboption 'LogRepository', 'select into/bulkcopy', true
GO

Next, we need a list of all of our servers so we know where to tell SQL Server to start looking for all this data. Create a table using the following code:

CREATE TABLE servers (servername varchar(17))
GO

Now add your NT servers and workstations to the table using Enterprise Manager or do them one at a time with insert statements:

INSERT servers (servername) values ('myserver')
GO

Now create your log tables:

CREATE TABLE [NTSecurityLogs] (
[logDate] [datetime] NULL ,
[logTime] [varchar] (50) NULL ,
[logEventType] [int] NULL ,
[logEventCategory] [int] NULL ,
[logEventID] [int] NULL ,
[logEventSource] [varchar] (20) NULL ,
[logUser] [varchar] (50) NULL ,
[logBuffer] [varchar] (1) NULL ,
[logComputer] [varchar] (50) NULL ,
[logStrings] [varchar] (2000) NULL
)
GO

CREATE TABLE [HTTPLog] (
[ClientHost] [varchar] (255) NULL ,
[Username] [varchar] (255) NULL ,
[LogTime] [datetime] NULL ,
[Service] [varchar] (255) NULL ,
[Machine] [varchar] (255) NULL ,
[ServerIP] [varchar] (255) NULL ,
[ProcessingTime] [int] NULL ,
[BytesRecvd] [int] NULL ,
[BytesSent] [int] NULL ,
[ServiceStatus] [int] NULL ,
[Win32Status] [int] NULL ,
[Operation] [varchar] (255) NULL ,
[Target] [varchar] (255) NULL ,
[Parameters] [varchar] (255) NULL
)
GO

We’re ready to get started: we have our raw database, and now all we need is to gather the data. Note that we didn’t create any indexes on these tables, so once you start importing data, create indexes on the fields you repeatedly query against if performance becomes a problem during your reporting.
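For example, a sketch of what those indexes might look like (the index names are my own, and you should adjust the column list to match the queries you actually run):

```sql
-- Speed up the event-ID reports against the security logs
CREATE INDEX ix_ntsec_eventid ON NTSecurityLogs (logEventID)
GO
-- Help queries that group or filter by machine
CREATE INDEX ix_ntsec_computer ON NTSecurityLogs (logComputer)
GO
-- Help queries against the requested URL in the HTTP logs
CREATE INDEX ix_http_target ON HTTPLog (Target)
GO
```

Keep in mind that a LIKE pattern with a leading wildcard (such as the probe signatures later in this article) can’t seek on an index, so don’t expect indexes to rescue every query.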

NT Event Logs: Log Clog

The NT Server event log has been the bane of many an administrator. Logging occurs on every single server and workstation across the enterprise without any facility to consolidate these logs. The Resource Kit provides a utility to help us create our own solution. DUMPEL (DUMP Event Log) allows you to save the event log (system, security, or application) as a delimited text file. We’ll import this file into our database.

Figure 1. DUMPEL lets you save the event log as a delimited text file, which you can then use to populate your database.

Our method of import will be a stored procedure that steps through all of the servers in our SERVERS table and imports their security logs into the database. For the purposes of speed and simplicity, we’ll use the BULK INSERT statement to import the data. You may wish to use Data Transformation Services or BCP depending on your security requirements (BULK INSERT requires sysadmin role) and expertise. Let’s examine the stored procedure that will bring in our data:

CREATE PROCEDURE sp_getSecurityLogs AS

-- Declare our variables
DECLARE @current_server varchar(15),
@cmdline varchar(100)

-- Suppress record counts
SET NOCOUNT ON

-- Wipe the current log. You can omit this if you want to use
-- LogRepository as your permanent log dumping ground.
DELETE from NtSecurityLogs

-- Here we will loop through all the servers in our SERVERS table and
-- import their logs. Change the location of c:\logfile.out if you wish
DECLARE Server_Cursor CURSOR FOR
SELECT servername
FROM servers
OPEN Server_Cursor
FETCH NEXT FROM Server_Cursor
INTO @current_server
WHILE @@FETCH_STATUS = 0
BEGIN
set @cmdline = 'dumpel -f c:\logfile.out -s ' +
@current_server + ' -l security -t'
PRINT @cmdline
exec master..xp_cmdshell @cmdline
BULK INSERT LogRepository.dbo.NtSecurityLogs
FROM 'c:\logfile.out'
FETCH NEXT FROM Server_Cursor INTO @current_server
END
CLOSE Server_Cursor
DEALLOCATE Server_Cursor

PRINT 'Log Importing Complete'

Simply execute sp_getSecurityLogs as a scheduled task or on demand whenever you wish to do an audit. The security logs of all servers in the SERVERS table we created earlier will be imported into the NTSecurityLogs table. (Note: Make sure the DUMPEL program is in your search path or specify the exact location in the stored procedure.)
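One hedged way to schedule the import (assuming osql and the server name used earlier) is a one-line command file driven by the NT AT command and the Schedule service:

```
rem getlogs.cmd -- runs the nightly import; schedule it with something like:
rem   AT 02:00 /every:M,T,W,Th,F c:\scripts\getlogs.cmd
osql -E -S yourSQLserver -d LogRepository -Q "exec sp_getSecurityLogs"
```

The 2:00 AM time, weekday list, and c:\scripts path are placeholders; SQL Server Agent jobs work just as well if you prefer to keep the scheduling inside SQL Server.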

You may notice that the xp_cmdshell command is used for this task. Generally I recommend that if this extended stored procedure is enabled, you take special care to lock this machine down. See www.sqlsecurity.com for tips on doing this.

IIS Log Files: Immediate Gratification

If you’ve ever been the target of a security audit/scan, you know Web servers are prime targets for potential exploits. They’re generally designed to be used by anonymous users, seldom monitored for probes, and full of potential holes if not properly secured. Too often the evidence of a probe will remain in the log files of the affected server without any warning being fed to the administrator.

We could go through almost the same steps with the IIS HTTP logs as we did with the NT event logs, but let’s try something new. IIS supports direct ODBC logging, so it’s possible to send all log info to the SQL Server database directly and avoid the lag time that comes from periodic imports. Using this type of logging we can analyze data as an attack occurs, so there might still be time to react. A periodic query might be run against, say, a list of known exploits that could send an alert to the security administrator.

To configure ODBC logging, start the Internet Service Manager through the start menu or Microsoft Management Console (MMC). Then right-click on the Default Web Site (or whatever site you wish to log) and change the Active Log Format to ODBC logging. Clicking properties beside that will take you to a screen where you can choose the DSN table, username, and password for accessing the SQL Server table we’ve created. Don’t forget to set up the System Data Source Name (DSN) under the “ODBC Data Sources” icon in your control panel.

Of course, you may need to monitor performance to make sure this doesn’t adversely affect your application and to watch for locking issues if you want to send multiple server logs to a single SQL Server log repository. The old saying stands here as well: Test, test, test, and then test some more. If all else fails, bring the logs over via the same method as the NT Event logs and don’t forget to alter your table structure depending upon the logging method you choose (Microsoft IIS Log File Format, NCSA Common Log File Format, or W3C Extended Log File Format).

Figure 2. To tap an IIS log, the first step is to configure ODBC logging.

Reporting and Alerts

Finally, we can get to the fun part of this log collecting exercise: analysis. OK, so it doesn’t sound like a lot of fun, but it is—I promise. Here’s where you get to show off your SQL skills (or develop some) for the purpose of extracting information from all of this raw data. We’ll begin with some NT event log analysis reports and finish up with IIS reporting and alerting techniques.

Here are some sample queries we can run against our NT security logs:

Number of failed login attempts per server due to a bad username or password:

select logComputer, count(*) as FailedAttempts
from NTSecurityLogs
where logEventID = 529
group by logComputer

Last 10 attempts to access disabled accounts:

select top 10 logDate, logTime, logComputer, logStrings
from NTSecurityLogs
where logEventID = 531
order by logDate desc, logTime desc

Attempts to access locked-out accounts (the client details appear in logStrings):

select logDate, logComputer, logStrings
from NTSecurityLogs
where logEventID = 539
order by logDate

Obviously, we could create similar reports for the HTTP logs to give us most requested URLs, most requests per client, and requests by date/time. However, I recommend setting up a table of possible probe signatures. We can use this table to check the contents of the client requests for these signatures and let that be our signal that a vulnerability probe is taking place.

A good tool for finding probe signatures is in the freeware utility Whisker, available at www.wiretrip.net/rfp/bins/whisker/whisker.tar.gz. (Be aware that the utility requires the scripting language Perl, freely available at www.activestate.com.) Simply fire up Whisker and probe your own Web server. Then, check your HTTP log and examine the different types of requests made against your servers. For brevity, we’ll create only two signatures to show how this might be done, but I’m sure you’ll want more signatures than this.

CREATE TABLE signatures (signaturename varchar(30),
signature varchar(50))
GO
INSERT signatures (signaturename, signature)
values ('RDS Probe', '%msadcs.dll')
GO
INSERT signatures (signaturename, signature)
values ('Showcode Probe', '%showcode.asp')
GO

Now we can run a query such as the one below to show us the who, what, when, and where of possible probes, letting us monitor these addresses or store the information as evidence should the attacker later commit a crime. A simple (and not terribly efficient) example query might be:

SELECT h.ClientHost, h.Username,
h.LogTime, h.Service, h.Machine,
h.ServerIP, h.ServiceStatus,
h.Operation, h.Target,
h.Parameters
FROM HTTPLog h inner join signatures s on h.target LIKE s.signature
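To turn this into the alert mentioned earlier, a hedged sketch along the following lines could be scheduled to run every few minutes (it assumes SQL Mail is configured for xp_sendmail, and the recipient address is a placeholder):

```sql
-- Alert if any request in the last 10 minutes matches a probe signature
IF EXISTS (SELECT *
           FROM HTTPLog h
           INNER JOIN signatures s ON h.Target LIKE s.signature
           WHERE h.LogTime > DATEADD(minute, -10, GETDATE()))
    EXEC master..xp_sendmail
        @recipients = 'security@yourcompany.com',
        @subject = 'Possible probe detected',
        @message = 'A request matching a probe signature was logged. Check the HTTPLog table.'
GO
```

The 10-minute window should match your schedule interval so the same probe isn’t reported twice, or missed entirely.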

That should be enough to get you started and on your way to a robust, customizable, and free (aside from any SQL Server licensing issues) intrusion detection system. Of course, it’s best to build a more presentable reporting interface for these queries.

WinDump: Cleaning Up

There’s no way I could end this article without pointing out how to take this tool to the next level. If you’d like to capture network traffic and analyze it with your newfound collector, then you’ll want to use a utility called WinDump (a Windows port of the popular Unix utility TCPDump). If you don’t already have WinDump, point your favorite browser to http://netgroup-serv.polito.it/windump and let the fun begin. This nifty tool (which deserves an article unto itself) allows users to sniff traffic on their network and save it to a file. Now you can monitor incoming and outgoing traffic for volume, trends, probes, and exploits. Did I mention it’s free? The work comes in analyzing the data it collects. I would hope that by using the techniques I’ve shown in my examples, you’ll be more than capable of logging almost any type of activity that occurs in your realm. The scripts used above as well as my progress on WinDump logging can be found at a site I participate in, www.sqlsecurity.com.
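As a hedged starting point (WinDump takes the same switches as TCPDump, and the capture file location is up to you):

```
rem Capture 1,000 packets with numeric addresses to a text file for later import
windump -n -c 1000 > c:\capture.txt
```

The -n switch skips name resolution so the capture keeps up with traffic, and -c caps the packet count; drop it for a continuous capture.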

Do the Job

I hope you’ll feel confident enough now to get a handle on security. As your organization grows, you may not have the time to continue to upgrade and develop your new custom intrusion detection solution. At that point you may need to consider a commercial package with real-time network monitoring capabilities and hundreds of canned reports. Until then, you’ve got the tools to do the job. So let’s go out there and get our bosses the information they crave.
