Date of Award
5-21-2020
Publication Type
Master's Thesis
Degree Name
M.Sc.
Department
Computer Science
Keywords
Interpretability, Machine Learning, Malware Analysis, Malware Detection, Neural Networks
Supervisor
Saad Sherif
Rights
info:eu-repo/semantics/embargoedAccess
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
The ever-increasing processing power of modern computers, along with the availability of large and complex data sets, has driven an explosion in machine learning research. The result is increasingly complex algorithms, such as convolutional neural networks, applied to increasingly complex problems, such as malware detection. Recently, malware authors have become increasingly successful at bypassing traditional detection methods, partly through advanced evasion techniques such as obfuscation and server-side polymorphism. Furthermore, new programming paradigms such as fileless malware, that is, malware that exists only in the main memory (RAM) of the infected host, add to the challenges of modern malware detection. This has led security specialists to turn to machine learning to augment their malware detection systems. However, the new technology brings new challenges, one of which is the need for interpretability. Machine learning interpretability is the process of explaining a model's predictions to humans. Rather than attempting to understand everything the model has learnt, it seeks intuitive explanations that are simple enough to follow and that provide relevant information for downstream tasks. Cybersecurity analysts prefer interpretable solutions because they need to fine-tune them; if analysts cannot interpret the reason behind a misclassification, they will not accept a non-interpretable, black-box detector.

In this thesis, we provide an overview of machine learning and discuss its role in cybersecurity, the challenges it faces, and potential improvements to current approaches in the literature. We showcase the necessity of machine learning, given new computing paradigms, by implementing a proof-of-concept fileless malware in JavaScript. We then present techniques for interpreting machine learning-based detectors that leverage n-gram analysis, and we put forward a novel, fully interpretable approach to malware detection that uses convolutional neural networks. Finally, we define a novel approach for evaluating the robustness of a machine learning-based detector.
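To make the n-gram interpretability idea concrete, the sketch below (an illustration, not the thesis's implementation) trains a linear detector on byte n-gram counts. Because every learned weight maps back to one specific byte sequence, the model's reasons for flagging a sample can be read directly from its coefficients. All function names, parameters, and the choice of logistic regression here are assumptions for the sake of the example.

    # Minimal sketch of an interpretable n-gram detector. Illustrative only;
    # names, parameters, and the linear model are assumptions, not the
    # thesis's actual detector.
    from collections import Counter
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    def byte_ngrams(data: bytes, n: int = 4) -> Counter:
        # Encode each n-gram as a hex string so it doubles as a readable feature name.
        return Counter(data[i:i + n].hex() for i in range(len(data) - n + 1))

    def train_interpretable_detector(samples, n=4):
        # samples: iterable of (raw_bytes, label) pairs; label 1 = malware, 0 = benign.
        features = [byte_ngrams(data, n) for data, _ in samples]
        labels = [label for _, label in samples]
        vectorizer = DictVectorizer()
        X = vectorizer.fit_transform(features)
        model = LogisticRegression(max_iter=1000).fit(X, labels)
        # Each coefficient belongs to one concrete byte sequence, so the most
        # malware-indicative n-grams fall straight out of the learned weights.
        top = sorted(zip(model.coef_[0], vectorizer.get_feature_names_out()),
                     reverse=True)[:10]
        return model, vectorizer, top

A convolutional network over raw bytes can be approached in a similar spirit, by relating each convolutional filter to the byte n-grams it responds to, which is closer to the fully interpretable CNN approach the abstract describes.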
Recommended Citation
Briguglio, William Ryan, "Machine Learning Interpretability in Malware Detection" (2020). Electronic Theses and Dissertations. 8331.
https://scholar.uwindsor.ca/etd/8331