Description
The field of cyber-defense has played catch-up in the cat-and-mouse game of finding vulnerabilities and inventing patches to defend against them. Given the complexity and scale of modern software, it is difficult to ensure that all known vulnerabilities are patched; moreover, the attacker, with reconnaissance on their side, will eventually discover and leverage them. To take away the attacker's inherent advantage of reconnaissance, researchers have proposed proactive defenses such as Moving Target Defense (MTD) in cyber-security. In this thesis, I make three key contributions that help improve the effectiveness of MTD.
First, I argue that naive movement strategies for MTD systems, designed based on intuition, are detrimental to both security and performance. To answer the question of how to move, I (1) model MTD as a leader-follower game and formally characterize the notion of optimal movement strategies, (2) leverage expert-curated public data and formal representation methods used in cyber-security to obtain the parameters of the game, and (3) propose optimization methods to infer strategies at Strong Stackelberg Equilibrium, addressing issues of scalability and switching costs. Second, when one cannot readily obtain the parameters of the game-theoretic model but can interact with the system, I propose a novel multi-agent reinforcement learning approach that finds the optimal movement strategy. Third, I investigate the novel use of MTD in three domains: cyber-deception, machine learning, and critical infrastructure networks. I show that the question of what to move poses non-trivial challenges in these domains. To address them, I propose methods for patch-set selection in the deployment of honey-patches, characterize the notion of differential immunity in deep neural networks, and develop optimization problems that guarantee differential immunity for dynamic sensor placement in power networks.
Details
Title
- The What, When, and How of Strategic Movement in Adversarial Settings: A Syncretic View of AI and Security
Contributors
- Sengupta, Sailik (Author)
- Kambhampati, Subbarao (Thesis advisor)
- Bao, Tiffany (Youzhi) (Committee member)
- Huang, Dijiang (Committee member)
- Xue, Guoliang (Committee member)
- Arizona State University (Publisher)
Date Created
2020
Note
- Doctoral Dissertation, Computer Science, 2020