The history of antimalware security solutions has shown that malware detection is a cat-and-mouse game: for every new detection technique, there's a new evasion method. When signature-based detection was introduced, cybercriminals evaded it with packers, compressors, metamorphism, polymorphism, and obfuscation. When behavior-based detection followed, API hooking and code injection methods were developed to evade it. By the time security solutions started using machine learning (ML)-based detection technologies, it was already expected that cybercriminals would develop new tricks to evade ML.
To stay one step ahead of cybercriminals, one way to harden an ML system against evasion tactics is to generate adversarial samples: input data deliberately modified to cause an ML system to misclassify it. Interestingly, while adversarial samples can be designed to make ML systems malfunction, they can, for the same reason, also be used to improve the robustness of ML systems, for example by retraining a model on the very samples it misclassified.
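As a concrete illustration, here is a minimal sketch of one common way to generate adversarial samples, the fast gradient sign method (FGSM), followed by a single adversarial training step. The toy model, feature dimensions, and epsilon value are hypothetical placeholders for illustration only, not details from the article, and a real malware classifier would be far more complex.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in classifier over 16 features, 2 classes
# (e.g., benign vs. malicious); a real detector would be far larger.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_sample(x, label, epsilon=0.1):
    """Perturb x in the direction that maximizes the loss (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    # Nudge each feature by +/- epsilon along the gradient's sign,
    # producing an input the model is likely to misclassify.
    return (x + epsilon * x.grad.sign()).detach()

# Hypothetical feature vector with its true label (class 0).
x = torch.randn(1, 16)
y = torch.tensor([0])
x_adv = fgsm_sample(x, y)

# Adversarial training: feed the perturbed sample back with its true
# label so the model learns to classify it correctly, improving
# robustness against this kind of evasion.
opt = torch.optim.SGD(model.parameters(), lr=0.01)
opt.zero_grad()
loss_fn(model(x_adv), y).backward()
opt.step()
```

In practice, such samples are generated in bulk and mixed into the training set over many epochs, so the defender effectively rehearses the attacker's evasion moves before they appear in the wild.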