Users Report Frustration with Redmond's Recent Security Record
- By Scott Bekker
- June 17, 2001
By all accounts, Microsoft Corp. suffered through a rocky stretch in May, when a serious vulnerability was discovered in IIS 5.0 - only days after the software giant finally shipped its long-awaited Service Pack 2 release for Windows 2000.
It doesn't look like June will offer much of a respite for Microsoft. Just last week, for example, Redmond confirmed the existence of a dangerous new vulnerability in its Exchange 5.5 and Exchange 2000 messaging and mail platforms. To date, Microsoft has issued three - yes, that's three - patches to correct it. Why the embarrassment of patches? Because versions 1.0 and 2.0 of the security update that Microsoft originally released (and subsequently re-released) to fix the problem actually shipped with an undocumented and to a large extent undesirable "feature" in tow: The ability to crash an Exchange server.
And this just in -- despite patching Exchange Server on three separate occasions over the course of a mere eight days (that's once every 2.67 days, for those of you who keep track of these things), Microsoft yesterday found time to confirm and patch a newly detected vulnerability in its SQL Server 7.0 and SQL Server 2000 databases. No word yet on whether Microsoft's latest patch boasts any undocumented "features" of its own.
Is security in the Windows NT and Windows 2000 worlds really the Keystone Kops-esque proposition that it has lately appeared to be? Or are users asking too much of Microsoft when they demand both quick and reliable fixes to software bugs and security holes?
A Universal Phenomenon
Although Microsoft garners its fair share of negative publicity when it comes to the frequency with which security shortcomings are detected in its software, the reality is that new security vulnerabilities of all kinds are discovered on an almost daily basis for all platforms and with varying degrees of severity.
According to the CERT Coordination Center, security vulnerabilities were discovered at a clip of about three per day in 2000. Indeed, with the exception of a two-year stretch from 1997 to 1998, during which time the number of certified vulnerabilities actually declined from 345 to 311 (1997) and then to 262 (1998), security vulnerabilities have been on the rise of late - and annual totals are now more than doubling in year-over-year comparisons. In 1999, for example, CERT recorded 417 vulnerabilities - up 59 percent from 1998's total; in 2000, CERT logged 1,090 vulnerabilities - 161 percent more than in 1999 and by far the most since it began tracking vulnerabilities in 1995. And thus far this year, we're on pace to smash 2000's dubious record: CERT recorded 633 vulnerabilities in Q1 2001 alone.
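For readers who keep track of these things, the year-over-year arithmetic behind those CERT totals is easy to check. A minimal Python sketch (the annual counts are simply the figures quoted above; the variable and function names are our own):

```python
# Annual vulnerability totals cited in this article (source: CERT/CC).
cert_counts = {1997: 311, 1998: 262, 1999: 417, 2000: 1090}

def yoy_change(prev, curr):
    """Percentage change from one year's total to the next, rounded to whole percent."""
    return round((curr - prev) / prev * 100)

for year in (1998, 1999, 2000):
    change = yoy_change(cert_counts[year - 1], cert_counts[year])
    print(f"{year}: {cert_counts[year]:>4} vulnerabilities ({change:+d}% vs. {year - 1})")
```

Running the sketch shows the dip from 1997 to 1998, the rebound in 1999, and the better-than-doubling from 1999 to 2000.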
Needless to say, Microsoft can't be blamed for all or even for most of these issues. In February, for example, a worm attack called "sadmind/IIS" exploited vulnerabilities both in IIS and in a tool (called "sadmind") that ships with the Solstice systems and network management environment from Sun Microsystems Inc. If nothing else, Microsoft had the luxury of sharing the negative publicity on this occasion. And in April 2000, Linux kingpin Red Hat Software Inc. - then in the midst of an initiative touting its Linux distribution as a secure platform for e-business - was forced to patch an embarrassing vulnerability that, if exploited, could give an attacker complete control over a compromised system. Sun and Red Hat are by no means alone in this respect; check the annals of Bugtraq, CERT and others and you'll find dozens of similar vulnerabilities affecting a variety of platforms.
Towards a New Understanding of Security?
According to Christopher DeMarco, a Unix and Windows NT administrator with sysadmin outsourcing specialist Taos, it's naive to expect any platform to be completely secure and free of potential vulnerabilities in today's hyper-wired world. Rather, DeMarco argues, the degree to which a platform can be called "secure" is often a function of the rapidity with which potential bugs and vulnerabilities are identified and patched - and also of the reliability of those patches once they're in place.
"Microsoft has gotten much better about acknowledging potential problems, but they still seem bloody boneheaded about resolving and fixing them once they're discovered," he suggests. "This sort of thing [flawed security patches] just doesn't happen in the open source [software] community."
For his part, Roger Seielstad, a senior network administrator with consulting and infrastructure management specialist Peregrine Systems Inc., suggests that security should be re-cast as a "team effort" that comprises the best efforts and practices of vendors and IT organizations alike. "Each member of the team has to add their resources to the team, and the combination should be able to provide a reasonably secure system," he affirms. "From the vendor's perspective, code quality is key. Vendors need to provide audited code with few bugs to their customers."
Not surprisingly, it's in the area of code quality - especially as it relates to software patches and to security updates - that Microsoft has traditionally most bedeviled IT organizations. In the last three years alone, the software giant has botched two of its Service Pack releases; two of its Office service release updates; and several of the hotfixes it's issued to quickly patch bugs and other vulnerabilities in Windows NT and Windows 2000. That kind of track record can try the patience of even the most reasonable of IT managers, maintains Edward Ko, a network coordinator with the Pennsylvania State University's College of Communications.
"I like to think that I'm realistic because I don't expect any system to be completely secure out of the box, and I also don't expect it to be secure over the life of its use," he observes. "But when problems do occur, I expect a software vendor to provide me with timely fixes that are thoroughly regression-tested and completely reliable. I definitely don't want them to have to make three attempts to get it right. That's inexcusable."
It's more than inexcusable, contends Gavin Burris, a network administrator with Penn State's digital media and visualization group: it's dangerous. As Burris sees it, security today is predicated to a large extent upon a pragmatic assessment of risk - i.e., all systems are vulnerable - which means it's more important than ever that IT organizations trust the sources of the software that they're deploying.
"You know when you build a system that it's not completely secure and that it probably can't be made completely secure," he explains. "But you have to trust that the people who designed it know what they're doing and will be able to reliably support it when problems arise. But do you think that there's anyone out there who can honestly say that they trust Microsoft after what just happened?"
Although in many ways things look as if they can't get much worse, Peregrine Systems' Seielstad sees room for optimism.
"Microsoft still focuses their development on features, in terms of enhancing the user experience, over quality. These features are one of the reasons they have become the dominant software vendor in their markets," he says. "Increasingly, corporations will demand better quality over flashy features."
A sea change of the kind predicted by Seielstad is already happening, argues Steve Lipner, a manager with Microsoft's Security Response Center, who points to the software giant's much-hyped Secure Windows Initiative - launched in March with the rallying cry of a "War on Hostile Code" - as a case in point.
"Like Microsoft's initiatives to promote Internet development and manageability, the 'War on Hostile code' is fueled by customer demand," Lipner says. -- Stephen Swoyer