Artificial intelligence facial recognition programs are inherently racially biased. These programs are not necessarily created with the intent to disproportionately harm marginalized communities; rather, through the data-driven process by which they learn, they can become biased when the data used to train them encodes biased patterns. Biased data is difficult to spot because the programming field is homogeneous, and the problem reflects underlying societal biases. Facial recognition programs do not identify minorities at the same rate as their Caucasian counterparts, leading to false-positive identifications and increased encounters with law enforcement. AI lacks the capacity for the kind of perspective-taking judgment a human can exercise, and its use should therefore be limited until a more equitable program is developed and thoroughly tested.
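The disparity in identification rates described above is commonly quantified by computing a system's false-positive rate separately for each demographic group. The sketch below illustrates that calculation; the group labels and error counts are purely illustrative assumptions, not results from any real benchmark or from this thesis.

```python
# Minimal sketch of measuring demographic disparity in a face-recognition
# system: compute the false-positive rate (FPR) per group and compare.
# All counts below are fabricated for illustration only.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): the share of non-matches wrongly flagged as matches."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical per-group evaluation counts (illustrative assumptions).
results = {
    "group_a": {"fp": 5, "tn": 995},
    "group_b": {"fp": 40, "tn": 960},
}

rates = {g: false_positive_rate(c["fp"], c["tn"]) for g, c in results.items()}
disparity = max(rates.values()) / min(rates.values())

for group, rate in rates.items():
    print(f"{group}: FPR = {rate:.3f}")
print(f"disparity ratio (max/min FPR) = {disparity:.1f}x")
```

Even when every group's error rate looks small in isolation, the ratio between groups can be large, which is why per-group evaluation, rather than a single aggregate accuracy figure, is the standard way to surface this kind of bias.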
Details
- Artificial Biasedness: How Inherent Biases in the Creation of Algorithms make Artificial Intelligence Facial Recognition Programs Racially Biased
- Gurtler, Charles William (Author)
- Iheduru, Okechukwu (Thesis director)
- Fette, Donald (Committee member)
- Economics Program in CLAS (Contributor)
- School of Politics and Global Studies (Contributor)
- Barrett, The Honors College (Contributor)