
Microsoft Lets Orgs Test Their AI Systems by Attacking Them

Microsoft has released an open source tool that lets organizations use attack-testing methods on their artificial intelligence (AI) software solutions.

"Counterfit" is now available as as an open source project on GitHub, Microsoft announced on Monday. Microsoft enlisted testing support from partners, organizations and government agencies to build Counterfit, which is a command-line interface tool for conducting automated attacks at scale on AI systems. It works across AI models used on-premises, in the cloud or at the edge, regardless of the type of data used.

Microsoft built it as part of its own "red team" attack-testing efforts. Organizations can use the tool to attempt to "evade and steal AI models," Microsoft indicated. It has a logging capability that provides "telemetry" information, which can be used to understand AI model failures.
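To make "evade" concrete, here is a minimal sketch of an evasion attack against a toy linear classifier. The weights, input and step size are invented for illustration; tools like Counterfit automate far more sophisticated versions of this idea against real models.

```python
# Minimal evasion-attack sketch (illustrative only, not Counterfit code).
# The idea: nudge an input in the direction that most lowers the model's score,
# so a correctly classified sample ends up misclassified ("evading" the model).
import numpy as np

# Toy logistic-regression model with assumed, fixed weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    """Probability of class 1 under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A sample the model currently classifies as class 1.
x = np.array([1.0, -0.2, 0.3])
print("original score:", predict_proba(x))          # well above 0.5

# FGSM-style step: move against the sign of the gradient of the score w.r.t. x.
# For a linear model that gradient direction is simply w.
epsilon = 1.0
x_adv = x - epsilon * np.sign(w)
print("adversarial score:", predict_proba(x_adv))   # drops below 0.5 -- misclassified
```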

Counterfit, which is similar to other attack tools, such as "Metasploit or PowerShell Empire," can hook into "existing offensive tools" as well, Microsoft indicated.

On that front, Microsoft recommends using Counterfit alongside its Adversarial ML Threat Matrix, which is described as "an ATT&CK style framework released by MITRE and Microsoft for security analysts to orient to threats against AI systems."

Microsoft uses Counterfit to attack its own AI systems that are in production to find vulnerabilities. The tool is also being "piloted" by Microsoft for use in the AI software development phase to "catch vulnerabilities in AI systems before they hit production," the announcement indicated.

The announcement pointed to several resources that organizations can use to understand machine learning failures, including a "Threat Modeling" guide for developers of AI and machine learning systems. That guide identifies "data poisoning" as the greatest threat to machine learning systems today, simply because it is hard to detect. Attackers can force e-mails to be labeled as spam, craft inputs that lead to misclassifications and "contaminate" training data.
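As a rough illustration of the data-poisoning failure mode, the sketch below uses scikit-learn on synthetic data and flips a fraction of the training labels (the 30 percent rate is an arbitrary choice for this example), then compares test accuracy against a model trained on clean labels.

```python
# Sketch of label-flipping data poisoning (illustrative only): contaminating the
# training labels -- e.g. marking legitimate e-mail as spam -- degrades the model
# that is later trained on that data, while the test data stays untouched.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

# Poison 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned).score(X_test, y_test)

print(f"accuracy with clean labels:    {clean_acc:.3f}")
print(f"accuracy with poisoned labels: {poisoned_acc:.3f}")
```

In runs like this the degradation is often gradual rather than obvious, which echoes the guide's point that poisoning can be hard to detect.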

Microsoft plans to discuss Counterfit further in a May 10 webinar featuring Ann Johnson, corporate vice president of security, compliance and identity business development, and Dr. Hyrum Anderson, a Microsoft principal architect. Sign-up for the webinar can be found at this page.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.
