A new study suggests that commercial software widely used to predict which criminals will commit crimes again is no more accurate at foreseeing recidivism than untrained people. Previous research has suggested that the criminal risk assessment tool, Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, which incorporates 137 distinct features to predict recidivism, appears to favor white defendants over black defendants by underpredicting recidivism for the former group. While the debate over COMPAS's algorithmic fairness continues, Julia Dressel and Hany Farid sought to explore a more fundamental question: whether algorithms like the one used in COMPAS are any better than untrained humans at predicting recidivism.

To find out, they conducted an online survey of people of varying ages and education levels. Participants, presumably none of them criminal justice experts, saw a description of a defendant that did not include the defendant's race. With considerably less information than COMPAS uses (only 7 features compared to COMPAS's 137), these individuals predicted whether each defendant would recidivate within two years of their most recent crime. The researchers report that the human predictions were approximately as accurate as COMPAS's -- correctly predicting whether a defendant would recidivate in roughly 65% of cases. False positive rates (cases in which a defendant was predicted to recidivate but did not) were similar between COMPAS and the human participants, and were equally unfair to black defendants, the authors say.
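The fairness comparison above hinges on the false positive rate: among defendants who did not reoffend, the fraction who were nonetheless predicted to recidivate. As a minimal illustration of how that quantity is computed per group (using made-up predictions and outcomes, not the study's data):

```python
def false_positive_rate(predictions, outcomes):
    """FPR = fraction of non-recidivists (outcome 0) who were predicted to recidivate (prediction 1)."""
    false_positives = sum(1 for p, o in zip(predictions, outcomes) if p == 1 and o == 0)
    non_recidivists = sum(1 for o in outcomes if o == 0)
    return false_positives / non_recidivists

# Hypothetical toy data for one defendant group -- not from the study.
preds = [1, 0, 1, 1, 0, 0]      # 1 = predicted to recidivate
outcomes = [0, 0, 1, 0, 1, 0]   # 1 = actually recidivated

rate = false_positive_rate(preds, outcomes)  # 2 false positives / 4 non-recidivists = 0.5
```

Computing this rate separately for black and white defendants, and comparing the two, is the kind of check behind the "equally unfair" finding.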
Notably, in work to understand how sophisticated the COMPAS algorithm is, Farid and Dressel ultimately found that a simpler classifier based on only 2 features -- age and total number of previous convictions -- is all that is required to match COMPAS's prediction accuracy.
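A two-feature classifier of this kind can be as simple as a logistic regression. The sketch below is an illustrative stand-in, trained by stochastic gradient descent on made-up, normalized (age, prior-convictions) pairs -- not the study's actual data, model, or coefficients:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain SGD logistic regression on two features; returns weights and bias."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(X, y):
            z = w[0] * x1 + w[1] * x2 + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of recidivism
            err = p - target                # gradient of the log-loss w.r.t. z
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x1, x2):
    """Classify as recidivist (1) if the linear score is positive."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Hypothetical normalized (age, prior convictions) pairs -- NOT the study's data.
# Lower age and more priors are made to correlate with recidivism here.
X = [(0.2, 0.9), (0.3, 0.8), (0.25, 0.7), (0.8, 0.1), (0.9, 0.2), (0.7, 0.0)]
y = [1, 1, 1, 0, 0, 0]  # 1 = recidivated within two years

w, b = train_logistic(X, y)
preds = [predict(w, b, x1, x2) for x1, x2 in X]
```

The point of the finding is not this particular model but that a linear rule over just these two features reaches roughly the same 65% accuracy as the 137-feature commercial tool.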