Bekker's Blog

Microsoft Contest To Pit Security Machine Learning Models Against Each Other

It's time to let the security machine learning (ML) models punch it out.

Microsoft on Monday unveiled an ML contest to run later this summer that will pit security defenders against attackers. With the "Machine Learning Security Evasion Competition," Microsoft hopes to engage both ML researchers and security professionals in developing cutting-edge machine learning models related to security.

The idea builds on a contest held last summer at DEF CON 27, where contestants attacked static malware ML models in a white-box setting, with full visibility into the models they were trying to evade.

For its part, Microsoft, along with partners CUJO AI, VMRay and MRG Effitas, will run a two-stage contest with ML playing a part in each stage. First comes a Defender Challenge running from June 15 through July 23. Participants must provide novel countermeasures that will be judged based on their ability to detect real-world malware without triggering too many false positives.

A few weeks later comes the Attacker Challenge. Unlike the DEF CON competition, the Attacker Challenge will be black box: attackers will have API access to hosted antimalware models, including models developed in the Defender Challenge, but no visibility into the models' internals. That part of the competition runs from Aug. 6 through Sept. 18.
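
For readers curious what "black-box API access" looks like in practice, here is a minimal sketch of that kind of interaction: submit a sample, get a verdict back, and see nothing else. The endpoint URL, authentication header, and response fields below are hypothetical placeholders invented for illustration, not the competition's published interface.

```python
# Minimal sketch of black-box interaction with a hosted antimalware model.
# The endpoint URL, API key, and JSON response shape are hypothetical
# placeholders, not the competition's actual API.
import requests

API_URL = "https://example-competition-host/api/models/{model}/score"  # hypothetical
API_KEY = "YOUR-API-KEY"  # hypothetical credential issued to registered participants


def score_sample(model_name: str, sample_path: str) -> float:
    """Submit a binary to one hosted model and return its maliciousness score."""
    with open(sample_path, "rb") as f:
        payload = f.read()
    resp = requests.post(
        API_URL.format(model=model_name),
        headers={"Authorization": f"Bearer {API_KEY}"},
        data=payload,
        timeout=30,
    )
    resp.raise_for_status()
    # Assume the service returns JSON like {"score": 0.97}. Attackers only ever
    # see this output, never the model's weights or features -- that is what
    # makes the challenge "black box."
    return resp.json()["score"]


if __name__ == "__main__":
    score = score_sample("defender-model-1", "sample.exe")
    print("malicious" if score >= 0.5 else "benign", score)
```

The design point is the asymmetry: defenders ship full models, while attackers must craft evasive samples using nothing but scores returned from queries like the one above.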

The winner of each challenge will receive $2,500 in Azure credits, with the runner-up earning $500 in Azure credits.

By combining defense and attack and bringing together different groups of experts, Microsoft hopes to improve the maturity of machine learning in security and make security professionals more aware of both the potential and the threat of machine learning.

"One desired outcome of this competition is to encourage ML researchers who have experience in evading image recognition systems, for example, to be introduced to a threat model common to information security," Hyrum Anderson, principal architect for Enterprise Protection and Detection wrote in an entry on the Microsoft Security Research Center blog. "Concurrently, security practitioners can gain deeper insights into what it means to secure ML systems in the context of a domain they are already know."

Posted by Scott Bekker on June 01, 2020

