An Allegheny County data analysis program received an in-depth look in a New York Times Magazine story in which writer Dan Hurley asked, "Can an Algorithm Tell When Kids Are in Danger?"
The analysis program is helping the Pittsburgh child protection agency make better judgment calls on the threat level of a complaint.
Hurley was given behind-the-scenes access during an 18-month investigation of the Allegheny Family Screening Tool, sitting in with call screeners and even accompanying social workers on family visits.
As calls come into the hotline of the county's Children and Youth Services office, the operators must make immediate assessments of the risk to the children in question. The screening tool gives call screeners a detailed risk evaluation within minutes by working through a maze of county data that would otherwise take hours to untangle.
In August 2016, Allegheny County became the first jurisdiction in the United States, or anywhere else, to let a predictive-analytics algorithm — the same kind of sophisticated pattern analysis used in credit reports, the automated buying and selling of stocks and the hiring, firing and fielding of baseball players on World Series-winning teams — offer up a second opinion on every incoming call, in hopes of doing a better job of identifying the families most in need of intervention.
Predictive data programs of this kind have their critics, but according to Hurley, the Allegheny County program is a step in the right direction.
“Given the early results from Pittsburgh, predictive analytics looks like one of the most exciting innovations in child protection in the last 20 years,” says Brett Drake, a professor in the Brown School of Social Work at Washington University in St. Louis. As an author of a recent study showing that one in three United States children is the subject of a child-welfare investigation by age 18, he believes agencies must do everything possible to sharpen their focus.
Though still new, the algorithm continues to be updated and refined to improve its effectiveness. Its accuracy at predicting bad outcomes has already risen from around 78 percent at launch to more than 90 percent.
Read the complete report here.