Microsoft Contest To Pit Security Machine Learning Models Against Each Other
It's time to let the security machine learning (ML) models punch it out.
Microsoft on Monday unveiled an ML contest to run later this summer that will pit security defenders against attackers. With the "Machine Learning Security Evasion Competition," Microsoft hopes to engage both ML researchers and security professionals in developing cutting-edge machine learning models for security.
The idea builds on a contest held last summer at DEF CON 27, where contestants attacked a white box containing static malware ML models.
For its part, Microsoft, along with partners CUJO AI, VMRay and MRG Effitas, will run a two-stage contest with ML playing a part in each stage. First comes a Defender Challenge running from June 15 through July 23. Participants must provide novel countermeasures that will be judged based on their ability to detect real-world malware without triggering too many false positives.
A few weeks later comes an Attacker Challenge. Unlike the DEF CON competition, the Attacker Challenge will use a black-box model: attackers will have only API access to hosted antimalware models, including models developed in the Defender Challenge. That part of the competition will run from Aug. 6 to Sept. 18.
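To illustrate the black-box threat model the Attacker Challenge is built around, here is a minimal, purely hypothetical Python sketch. The `black_box_score` function is a toy stand-in for a hosted antimalware API (the real contest endpoint is not shown here): the attacker observes only a maliciousness score per query, never the model internals, and perturbs a sample until it slips under the detection threshold.

```python
# Toy sketch of black-box evasion: the attacker has query access only.

def black_box_score(sample: bytes) -> float:
    """Hypothetical stand-in for a hosted antimalware model's API.
    Toy scoring rule: fraction of 0xCC 'suspicious' bytes in the sample."""
    if not sample:
        return 0.0
    return sample.count(0xCC) / len(sample)

def evade(sample: bytes, threshold: float = 0.5, max_queries: int = 100) -> bytes:
    """Append benign padding until the score drops below the threshold,
    using only repeated queries -- the essence of a black-box attack."""
    for _ in range(max_queries):
        if black_box_score(sample) < threshold:
            break
        sample += b"\x00" * 16  # functionality-preserving padding (toy)
    return sample

malware = bytes([0xCC]) * 64           # toy "malware" sample
evaded = evade(malware)
print(black_box_score(malware))        # 1.0 -- flagged
print(black_box_score(evaded) < 0.5)   # True -- now evades the toy model
```

Real contest entries must, of course, preserve the malware's actual functionality while evading detection; the padding trick above only stands in for that constraint.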
The winner of each challenge will receive $2,500 in Azure credits, with the runner-up earning $500 in Azure credits.
By combining defense and attack and bringing together different groups of experts, Microsoft hopes to improve the maturity of machine learning in security and make security professionals more aware of the potential, and threat, of machine learning.
"One desired outcome of this competition is to encourage ML researchers who have experience in evading image recognition systems, for example, to be introduced to a threat model common to information security," Hyrum Anderson, principal architect for Enterprise Protection and Detection, wrote in an entry on the Microsoft Security Research Center blog. "Concurrently, security practitioners can gain deeper insights into what it means to secure ML systems in the context of a domain they already know."
Posted by Scott Bekker on June 01, 2020 at 3:49 PM