The Influence of Error on Perceptions of Machine Learning vs. Clinician-Based Risk Assessments
Description
Risk assessments are key legal tools that inform a number of legal decisions, such as parole and sentencing, and predict recidivism rates. Because these assessments have historically been performed by humans, they can be prone to bias and have come under considerable scrutiny. The increased capability and application of machine learning technology has led the justice system to incorporate algorithms to increase accuracy and reliability. This study examined laypersons' attitudes towards these algorithms and how those attitudes change when people are exposed to an algorithm that makes errors in the risk assessment process. Participants were tasked with reading two vignettes and answering a series of questions to assess the differences in their perceptions of machine learning and clinician-based risk assessments. The findings showed that individuals placed more trust in clinicians and had more confidence in their assessments than in those of machines, but were not significantly more punitive when attributing blame and judgement for the consequences of an incorrect risk assessment. Participants held a significantly more positive attitude towards clinician-based risk assessments, rating them as more reliable, informed, and trustworthy. At the end of the study, participants were also asked to reach a parole decision using the assessment of either a clinician or a machine learning algorithm and to rate their confidence in that decision. Results showed that participants were significantly less confident in their decision only when they had been exposed to previous instances of risk assessments containing error; there was no significant difference in confidence based solely on who conducted the assessment.