Description
A defense-by-randomization framework is proposed as an effective defense mechanism against several types of adversarial attacks on neural networks. Experiments were conducted with combinations of differently constructed image classification neural networks to determine which combinations, when applied within this framework, were most effective at maximizing classification accuracy. Furthermore, the reasons why particular combinations were more effective than others are explored.
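The abstract describes the defense only at a high level: each query is answered by a classifier drawn from a pool of differently constructed networks, so an attacker cannot tailor perturbations to one fixed model. The following is a minimal sketch of that randomization idea, not the thesis's actual implementation; the pool architectures, class names (MovingTargetClassifier, make_pool), and uniform random selection are assumptions, and the models are left untrained for brevity.

```python
import random
import torch
import torch.nn as nn


def make_pool(num_classes: int = 10):
    """Hypothetical pool of differently constructed classifiers for
    MNIST-sized (1x28x28) inputs; the thesis's architectures may differ."""
    return [
        nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, num_classes)),
        nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
                      nn.Linear(256, num_classes)),
        nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 28 * 28, num_classes)),
    ]


class MovingTargetClassifier:
    """Answers each query with a model chosen at random from the pool,
    so the 'target' seen by an adversary keeps moving between queries."""

    def __init__(self, models):
        self.models = models

    @torch.no_grad()
    def predict(self, x: torch.Tensor) -> torch.Tensor:
        model = random.choice(self.models)  # randomized target selection
        model.eval()
        return model(x).argmax(dim=1)       # predicted class labels


if __name__ == "__main__":
    defender = MovingTargetClassifier(make_pool())
    batch = torch.randn(4, 1, 28, 28)       # stand-in for a batch of images
    print(defender.predict(batch))
```

In this reading, the experiments compare which pools (combinations of architectures) keep accuracy highest when the random selection is in place; the sketch only shows the selection mechanism itself.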
Details
Title
- Moving Target Defense: Defending against Adversarial Attacks
Contributors
- Mazboudi, Yassine Ahmad (Author)
- Yang, Yezhou (Thesis director)
- Ren, Yi (Committee member)
- School of Mathematical and Statistical Sciences (Contributor)
- Economics Program in CLAS (Contributor)
- Barrett, The Honors College (Contributor)
Date Created
2019-05