
Tuesday, August 01, 2017

Machine Bias: There’s software used across the country to predict future criminals, and it’s biased against blacks


From ProPublica:
ON A SPRING AFTERNOON IN 2014, Brisha Borden was running late to pick up her god-sister from school when she spotted an unlocked kid’s blue Huffy bicycle and a silver Razor scooter. 
Borden and a friend grabbed the bike and scooter and tried to ride them down the street in the Fort Lauderdale suburb of Coral Springs. Just as the 18-year-old girls were realizing they were too big for the tiny conveyances — which belonged to a 6-year-old boy — a woman came running after them saying, “That’s my kid’s stuff.” Borden and her friend immediately dropped the bike and scooter and walked away. 
But it was too late — a neighbor who witnessed the heist had already called the police. Borden and her friend were arrested and charged with burglary and petty theft for the items, which were valued at a total of $80. 
Compare their crime with a similar one: The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store. 
Prater was the more seasoned criminal. He had already been convicted of armed robbery and attempted armed robbery, for which he served five years in prison, in addition to another armed robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was a juvenile. 
Yet something odd happened when Borden and Prater were booked into jail: A computer program spat out a score predicting the likelihood of each committing a future crime. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.
By the way, check out the discussion in the following Tweet:

4 comments:

  1. Two people out of three hundred million. Yes, seems like a large enough sample to draw conclusions.

  2. My understanding is that this is a common problem with AI.

    And the reason I said to go read the comments is that the comments section is full of coders discussing what the problem is. They are all familiar with it. The consensus seems to be that it's not going to be solved any time soon.

  3. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.

    Based on what? Maybe the comments at Twitter will explain. Off I go to check.

  4. Over at Twitter:

    It’s not an easy solution at all, models can still learn them even if they aren’t explicit inputs

    and

    Regs should not focus on the algorithm but what data should be used, algo transparency, and accountability for potential disparate impacts

    Because AI can research databases of crime stats?

    There's also this:

    FB's AI robots shut down after talking to each other in their own language.

    Twilight Zone, anyone?

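One of the tweets quoted in the comments above says that models can still learn a protected attribute even when it isn't an explicit input. Here is a minimal sketch of what that looks like, using entirely synthetic data and a hypothetical "proxy" feature; it is an illustration of the general mechanism, not the software discussed in the article.

```python
# Illustrative sketch (synthetic data, not the COMPAS model): a classifier
# trained WITHOUT the protected attribute can still reproduce group
# differences by learning them from a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute -- never given to the model.
group = rng.integers(0, 2, size=n)

# Proxy feature correlated with group, e.g. a coarse neighborhood index.
proxy = group + rng.normal(0.0, 0.3, size=n)

# A legitimately predictive feature, e.g. count of prior offenses.
priors = rng.poisson(1.5, size=n)

# Synthetic "historical" labels skewed by group, standing in for biased
# past data such as uneven arrest rates.
logit = 0.8 * priors + 1.0 * group - 2.0
label = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train on proxy + priors only; 'group' is not an explicit input.
X = np.column_stack([proxy, priors])
model = LogisticRegression().fit(X, label)
scores = model.predict_proba(X)[:, 1]

# The group gap in average risk scores survives even though 'group'
# was never a feature the model could see directly.
print("mean score, group 0:", round(scores[group == 0].mean(), 3))
print("mean score, group 1:", round(scores[group == 1].mean(), 3))
```

The point of the sketch is simply that dropping the sensitive column is not enough: as long as some input is correlated with it and the historical labels reflect a group difference, the model can recover the pattern on its own.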