Microsoft Lets Orgs Test Their AI Systems by Attacking Them

Microsoft has released an open source tool that lets organizations run attack-style tests against their artificial intelligence (AI) systems.

"Counterfit" is now available as an open source project on GitHub, Microsoft announced on Monday. Microsoft enlisted testing support from partners, organizations and government agencies to build Counterfit, which is a command-line interface tool for conducting automated attacks at scale on AI systems. It works across AI models used on-premises, in the cloud or at the edge, regardless of the type of data used.

Microsoft built Counterfit as part of its own "red team" attack-testing efforts. Organizations can use the tool to attempt to "evade and steal AI models," Microsoft indicated. It also has a logging capability that provides "telemetry" information, which can be used to understand AI model failures.
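To illustrate the kind of model evasion such red-team tools probe for, here is a minimal conceptual sketch (not Counterfit's actual API): a query-based attack on a toy "black-box" classifier, where the attacker repeatedly nudges an input until the model's prediction flips. All names and values here are assumptions made up for the example.

```python
import numpy as np

# Toy "black-box" model: a fixed linear classifier the attacker can only query.
WEIGHTS = np.array([1.0, -2.0, 0.5])

def predict(x):
    """Return the model's class label (1 or 0) for input x."""
    return int(np.dot(WEIGHTS, x) > 0)

def evade(x, step=0.1, max_queries=200):
    """Nudge x along a heuristic direction until the predicted label flips,
    treating the model as a black box (no direct access to WEIGHTS)."""
    original = predict(x)
    direction = np.array([-1.0, 1.0, -1.0])  # assumed perturbation direction
    adv = x.copy()
    for _ in range(max_queries):
        adv = adv + step * direction
        if predict(adv) != original:
            return adv  # evasion succeeded: the label flipped
    return None  # attack failed within the query budget

sample = np.array([2.0, 0.5, 1.0])   # originally classified as 1
adversarial = evade(sample)
print(predict(sample), predict(adversarial))  # 1 0
```

Real attacks against production models are far more sophisticated, but the shape is the same: repeated queries, small perturbations, and a success condition of a changed prediction, which is why query logging and telemetry are useful for detection.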

Counterfit, which is similar to other attack tools such as "Metasploit or PowerShell Empire," can also hook into "existing offensive tools," Microsoft indicated.

On that front, Microsoft recommends using Counterfit with its other tool, the Adversarial ML Threat Matrix solution, which is described as "an ATT&CK style framework released by MITRE and Microsoft for security analysts to orient to threats against AI systems."

Microsoft uses Counterfit to attack its own AI systems that are in production to find vulnerabilities. The tool is also being "piloted" by Microsoft for use in the AI software development phase to "catch vulnerabilities in AI systems before they hit production," the announcement indicated.

The announcement pointed to a number of resources that organizations can use to understand machine learning failures, including a "Threat Modeling" guide for developers of AI and machine learning systems. That guide identified "data poisoning" as the greatest threat to machine learning systems today, simply because it's hard to detect. Attackers can force e-mails to be labeled as spam, craft inputs that lead to misclassifications and "contaminate" training data.
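A minimal sketch of what such poisoning can look like, assuming a toy spam filter (this example is invented for illustration and is not taken from the Microsoft guide): with a 1-nearest-neighbor classifier, a single mislabeled training example planted near a legitimate message is enough to flip its classification.

```python
import numpy as np

def predict_1nn(X, y, x):
    """1-nearest-neighbor classifier: return the label of the closest training point."""
    return int(y[np.argmin(np.linalg.norm(X - x, axis=1))])

# Tiny spam-filter training set: feature = [exclamation_count, link_count].
X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 3.0], [6.0, 4.0]])
y = np.array([0, 0, 1, 1])            # 0 = legitimate, 1 = spam

target = np.array([1.2, 0.0])         # an ordinary legitimate message
print(predict_1nn(X, y, target))      # 0 -- classified as legitimate

# Poisoning: attacker slips one mislabeled example into the training data,
# placed near the target but tagged as spam.
X_poisoned = np.vstack([X, [1.25, 0.0]])
y_poisoned = np.append(y, 1)
print(predict_1nn(X_poisoned, y_poisoned, target))  # 1 -- now flagged as spam
```

The poisoned point looks innocuous sitting among thousands of training records, which is exactly why the guide flags data poisoning as hard to detect: the attack happens before the model is ever deployed.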

Microsoft is planning to discuss Counterfit further in a May 10 webinar with Ann Johnson, corporate vice president of security, compliance and identity business development, and Dr. Hyrum Anderson, a Microsoft principal architect. Sign-up for the webinar can be found at this page.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.