COURTS ARE USING AI TO SENTENCE CRIMINALS. THAT MUST STOP NOW | WHAT REALLY HAPPENED



Currently, courts and corrections departments around the US use algorithms to score a defendant's "risk" — everything from the probability that an individual will commit another crime to the likelihood that a defendant will appear for his or her court date. These algorithmic outputs inform decisions about bail, sentencing, and parole. Each tool aspires to improve on the accuracy of human decision-making and to allow a better allocation of finite resources.
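To make the idea concrete, a risk tool of this kind can be sketched as a simple statistical model that weights a few facts about a defendant and squashes them into a score between 0 and 1. The features and weights below are purely hypothetical, invented for illustration; the commercial tools the article describes keep theirs secret, which is precisely the problem.

```python
import math

def risk_score(prior_convictions, age, failed_appearances):
    """Toy logistic model of a pretrial 'risk' score.

    All weights are invented for demonstration; real proprietary
    tools use undisclosed features and coefficients.
    """
    z = (0.45 * prior_convictions
         - 0.03 * age
         + 0.60 * failed_appearances)
    # Logistic function maps the weighted sum to a 0-1 "risk"
    return 1.0 / (1.0 + math.exp(-z))

# Two hypothetical defendants facing identical charges
low = risk_score(prior_convictions=0, age=40, failed_appearances=0)
high = risk_score(prior_convictions=4, age=22, failed_appearances=2)
print(round(low, 2), round(high, 2))  # prints 0.23 0.91
```

Even this transparent toy shows why scrutiny matters: the choice of inputs and weights, not the math itself, determines who is scored "high risk", and in a black-boxed product those choices are invisible to the court relying on the output.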

Typically, government agencies do not write their own algorithms; they buy them from private businesses. This often means the algorithm is proprietary, or "black boxed": only the owners, and to a limited degree the purchaser, can see how the software makes decisions. Currently, no federal law sets standards for these tools or requires their inspection, the way the FDA does with new drugs.

Webmaster's Commentary: 

The alleged American "correctional system" generally does not correct criminal behaviour: by the time convicts have served their time and been released into the general population, they are generally worse off than when they began their sentences and, as recidivism rates in this country show, very likely to re-offend.

Adding predictive algorithms to the mix — without either clarity about how those algorithms were created or the kind of rehabilitation found in more enlightened prison systems, such as Sweden's — merely rubs salt into the wounds of incarceration, and does not help judges reach any reasonable conclusions about the likely outcomes of the sentences they hand down.
