Date of Award

9-25-2024

Publication Type

Thesis

Degree Name

M.Sc.

Department

Computer Science

Keywords

Adversarial Attacks;Ensemble Models;Evasion Attacks;Moving Target Defense;Poisoning Attacks

Supervisors

Sherif Saad

Saeed Samet

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Abstract

This thesis addresses the critical challenge of safeguarding machine learning models against adversarial attacks, which pose significant risks to applications such as autonomous driving, healthcare, and cybersecurity. The first part of the thesis explores the practical feasibility of adversarial attacks across various machine learning models, revealing the gap between theoretical attack strategies and their real-world applicability. Factors such as attack complexity, attacker knowledge, and resource constraints are shown to significantly influence the success of these attacks, highlighting the need for more realistic approaches in adversarial research. Building on these findings, the second part of the thesis introduces HybridMTD, a novel defense strategy that combines Moving Target Defense (MTD) with ensemble neural network models. HybridMTD dynamically selects subsets of models from a diverse pool and employs majority voting, increasing the unpredictability of the defense mechanism and enhancing model robustness against a wide range of adversarial attacks. Extensive experiments demonstrate that HybridMTD consistently outperforms traditional single-model defenses, maintaining high accuracy and resilience across different datasets and attack scenarios. The research contributes to the ongoing efforts to secure machine learning systems, providing valuable insights into both the vulnerabilities of these systems and the potential of advanced defense strategies.
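The core HybridMTD mechanism described above (a diverse pool of models, a randomly chosen subset per query, and majority voting over the subset's predictions) can be sketched as follows. This is a minimal illustration of the idea, not the thesis's implementation: the class structure, parameter names, and toy models are assumptions introduced for clarity.

```python
import random
from collections import Counter

class HybridMTD:
    """Sketch of the HybridMTD idea: for each prediction, randomly
    select a subset of models from a diverse pool (the moving-target
    step) and return the majority vote of that subset."""

    def __init__(self, model_pool, subset_size, seed=None):
        assert 1 <= subset_size <= len(model_pool)
        self.model_pool = list(model_pool)
        self.subset_size = subset_size
        self.rng = random.Random(seed)

    def predict(self, x):
        # A fresh random subset per query makes the effective
        # decision boundary unpredictable to an attacker.
        subset = self.rng.sample(self.model_pool, self.subset_size)
        votes = [model(x) for model in subset]
        # Majority vote over the subset's predicted labels.
        return Counter(votes).most_common(1)[0][0]

# Toy "models": simple callables standing in for trained networks.
pool = [lambda x: x % 2, lambda x: x % 2, lambda x: (x + 1) % 2]
defense = HybridMTD(pool, subset_size=3, seed=0)
print(defense.predict(4))  # majority of votes [0, 0, 1] -> 0
```

In a real deployment the pool would hold independently trained neural networks with diverse architectures, so that an adversarial example crafted against one model is unlikely to transfer to the randomly drawn ensemble.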

Available for download on Monday, March 24, 2025
