Data analysis technology is being used to make predictions in areas as diverse as dog racing, airfare costs and the success of Hollywood movies. Credit card companies monitor their customers’ purchasing habits to identify patterns and project trends. The National Security Agency analyzes millions of bits of information to identify potential terrorists. And some researchers now think they have found a way to apply this technology to safeguard the welfare of children.
Ira Schwartz has more than 30 years’ experience in the child welfare field. The former provost of Temple University and dean of the University of Pennsylvania School of Social Work, Schwartz has conducted research into the uses of technology in child welfare cases. His more recent work has convinced him of the potential benefits of artificial neural networks, which are based on “smart” and adaptable software. He says these tools can make a marked improvement in the decision-making process in child welfare cases by providing more accurate information to workers in the field, such as caseworkers at New York City’s Administration for Children’s Services (ACS).
Using historical data, neural networks run complex algorithms that identify which variables best predict outcomes. As more data is entered into the system, the software updates its predictions. “It is a dynamic system that can learn on a daily basis based on the information put into it,” Schwartz says. So in the credit card industry, for example, a neural network might identify a series of aberrant purchases – say, three mink coat purchases in as many days – and flag the account. In that case, the system would have analyzed historical data, found the recent activity to be significantly out of character for the customer, and essentially inferred that the card had been stolen.
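To make the mechanics concrete, here is a minimal sketch of that kind of adaptive classifier in Python, using scikit-learn’s MLPClassifier. Everything in it – the feature names, the numbers, the thresholds – is invented for illustration; this is not the software Schwartz describes.

```python
# A minimal sketch of an adaptive fraud classifier, assuming scikit-learn.
# All feature names and numbers are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical purchase history: [amount in $1,000s, purchases that day,
# distance from home in 100s of miles]. Label 1 = confirmed fraud.
normal = rng.normal(loc=[0.06, 1.0, 0.05], scale=[0.03, 0.5, 0.03], size=(500, 3))
fraud = rng.normal(loc=[4.0, 3.0, 2.0], scale=[0.5, 0.5, 0.5], size=(25, 3))
X = np.vstack([normal, fraud])
y = np.array([0] * 500 + [1] * 25)

# Train a small neural network on the historical data.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

# "Learning on a daily basis": each day's confirmed transactions update
# the network incrementally instead of retraining it from scratch.
todays_legit = rng.normal(loc=[0.06, 1.0, 0.05], scale=[0.03, 0.5, 0.03], size=(20, 3))
net.partial_fit(todays_legit, np.zeros(20, dtype=int))

# Three mink-coat-sized purchases in one day, far from home: flag it.
print(net.predict([[5.0, 3.0, 1.5]]))  # expected: [1], i.e. flag the account
```

The partial_fit call is what gives the system its “dynamic” quality: the model’s parameters shift slightly with each new batch of data rather than being rebuilt from the ground up.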
In two separate studies, published in Children and Youth Services Review in 2004 and in the Temple Law Review last year, Schwartz and colleagues analyzed data from thousands of cases of child abuse and neglect nationwide. A neural network was asked to predict which cases would meet the “harm standard,” the most serious classification of abuse. The system predicted risk accurately 90 percent of the time – a figure the researchers could verify against the cases’ actual outcomes – with very few false positives or false negatives. In other words, the neural network was able to determine which variables were most closely associated with child abuse, then identify the cases matching those variables. The first study concludes, “Neural networks…are tools that could help to increase accuracy, reduce errors, and facilitate more effective decisions in child welfare and child protective service organizations.”
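For readers unfamiliar with the terms, false positives and false negatives are what a confusion matrix counts, and that is the standard way a claim like the one above gets checked against known outcomes. The labels below are invented for illustration, not data from the published studies.

```python
# Checking predictions against known outcomes with a confusion matrix.
# The labels here are invented, not data from Schwartz's studies.
from sklearn.metrics import accuracy_score, confusion_matrix

# 1 = case actually met the "harm standard", 0 = it did not
actual    = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 0, 0, 0, 1, 1]  # the network's risk calls

tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()
print(f"accuracy: {accuracy_score(actual, predicted):.0%}")  # 80% here
print(f"false positives: {fp}")  # flagged, but no harm occurred
print(f"false negatives: {fn}")  # missed a case that met the standard
```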
In an actual child welfare case, the neural network would be able to analyze a number of variables and see how they relate to each other before determining the level of risk. Some of the factors taken into account in such a risk assessment are the age of the child, the age of the child’s parents or guardians and whether they are employed, and whether there is an unrelated adult living in the household. These and many other factors would each be assigned a statistical weight, and the algorithm would calculate what the factors mean individually and in concert.
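As a rough illustration of what “statistical weight” means here, consider a toy scoring function. The factor names, weights, and logistic form below are assumptions made for demonstration, not the model from Schwartz’s papers.

```python
# A toy illustration of weighted risk factors combined into one score.
# Factor names and weights are hypothetical, chosen for demonstration.
import math

def risk_score(factors, weights, bias=-3.0):
    """Combine weighted factors into a probability-like score in [0, 1]."""
    total = bias + sum(weights[name] * value for name, value in factors.items())
    return 1 / (1 + math.exp(-total))  # logistic squashing

weights = {  # hypothetical learned weights
    "child_under_3": 1.2,
    "caregiver_unemployed": 0.6,
    "unrelated_adult_in_home": 0.9,
    "prior_investigations": 1.5,
}

case = {
    "child_under_3": 1,
    "caregiver_unemployed": 1,
    "unrelated_adult_in_home": 0,
    "prior_investigations": 2,
}

print(f"estimated risk: {risk_score(case, weights):.0%}")  # about 86%
```

A real neural network inserts hidden layers between the inputs and the output, which is what lets it weigh the factors “in concert” – capturing interactions that a flat weighted sum like this one misses.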
Risk assessment is a critical step in child welfare evaluations. Inaccurate assessments can put children at risk of abuse or neglect on the one hand, or can lead to unnecessary removals from home on the other. Many child protective agencies across the country rely on a consensus-based approach to risk assessment, in which decisions are made based on criteria that local child welfare authorities deem important. “When a lot of subjectivity under the cloak of professional judgment goes into the making of life-or-death decisions about children,” Schwartz says, “it is tantamount to playing Russian roulette.” But he does not advocate for technology completely replacing human decision-making. Rather, he thinks neural networks should be used to enhance the work of child welfare agencies, with each agency deciding how best to employ these tools.
Another problem with risk assessment as it is practiced today is that investigators have incentives to let outside matters, unrelated to the home environment, affect a decision. According to Richard Wexler, executive director of the Alexandria, Va.-based National Coalition for Child Protection Reform, what risk assessment sometimes boils down to is a caseworker thinking, “Am I at risk of landing on the front page tomorrow if I leave this child at home and something goes wrong?”
But some are not yet convinced that neural networks are the answer to the risk assessment challenge in child welfare. For one thing, no research yet definitively shows that this is the best way to do risk assessment, according to Aron Shlonsky, an associate professor of social work at the University of Toronto, recently of the Columbia University School of Social Work, who specializes in risk assessment instruments in child protection cases. He is not sure that neural networks will prove more accurate than other methods of risk assessment, but says, “We should be testing whether this is the case.”
Others are concerned that even if neural networks are the most accurate risk assessment tools, they will still leave room for subjective error. Wexler, who read Schwartz’s research, notes that the technology would make its predictions based on what caseworkers say they saw and heard, but “the case files are notoriously inaccurate, and often biased.” He worries that neural networks would not be able to eliminate such mistakes.
Andrew White agrees that the challenge of child protective work lies at least as much in data collection – which a neural network does little to improve – as in data analysis. White, who edits the NYC journal Child Welfare Watch, asks, “How is a computer going to make a better decision if the data’s not there?” Well-trained people can make good decisions, says White (a former City Limits editor), calling social work “a human competency.” But unlike police investigations, which seek to discover the reality of an event that already happened, child protection work is largely predictive. “In that sense, they’re on to something. Prediction is what this is about,” he says. White has not read Schwartz’s academic papers, however.
In response to several high-profile child welfare cases in 2005 and 2006, including the death of 7-year-old Nixzmary Brown, ACS has implemented changes to its abuse protection and prevention services. One of the most far-reaching reforms ACS is putting into place is ChildStat, a numbers-based tracking and accountability tool modeled after the NYPD’s successful CompStat initiative. Under the new program, ACS holds weekly meetings with leaders from every division of the agency, during which zone performance data – such as caseload averages and the percentage of investigations completed on time – is analyzed and an open child protective case is reviewed. According to ACS Commissioner John Mattingly’s written testimony to the City Council’s general welfare committee, “Since ChildStat began in July 2006, more than 100 cases involving approximately 300 children have been reviewed.”
ACS also has been trying to strengthen its investigations by hiring hundreds more child protective staff and 20 new “investigative consultants” with law enforcement experience. Mayor Bloomberg recently authorized the hiring of an additional 100 investigative consultants. They will join ACS caseworkers on home visits and assist in the risk assessment process.
“There is a very specific, very detailed risk assessment process that caseworkers use while conducting an investigation,” said ACS spokeswoman Sharman Stein, who declined to comment on the potential use of neural networks specifically. If a report of abuse is accepted, ACS assigns the case to a child protective specialist, who then must contact the child’s family within 24 hours and complete an investigation within 60 days. Current ACS risk assessment protocol guides caseworkers through the investigative process, Stein said.
During home visits, investigators will observe a child physically, speak to the child and other household members individually, and examine the home for dangerous or unsanitary conditions. Investigators must also speak with “collateral contacts” such as other family members, teachers, doctors, and the individual who initially reported the abuse. They may also look at the child’s school attendance records, consider any previous ACS investigations into the household, and request medical examinations and drug tests. Caseworkers will then discuss their findings with a supervisor and decide whether abuse or neglect is indicated, whether the family should receive voluntary or required support services, and whether the child needs to be removed from the home. Overall, the process relies heavily on the quality of investigative training and the expertise of caseworkers.
Across the country, child protection agencies are hesitant to switch from traditional approaches to newer technology-assisted tools. Some see this as a typical reluctance to change, one that spans many professional fields. Ian Ayres, a Yale Law School professor and author of the new book “Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart,” has studied the use of data analysis technology in a variety of fields – from sales to civil rights to gun control – and in each case, he says, “Traditional experts tend to resist yielding power and control to statistical prediction.” They think the introduction of new technologies will invariably limit their own ability to make key decisions.
As Shlonsky points out, however, the real-world knowledge that caseworkers possess is essential in child welfare investigations. It helps determine how best to work with a family and how to create a plan of action to address the issues that brought the family to child welfare in the first place. “Risk assessment instruments and clinical assessment skills are two entirely different but necessary components of high-quality child welfare services,” Shlonsky says. The technology would supplement rather than replace the work of child welfare experts.
Schwartz, now working outside academia, would like to see agencies begin to test neural networks as part of the risk assessment process. He estimates it might cost up to $1 million for a locality to phase in a preliminary system. Neural networks have yet to be tested in the field, but as more research is done, Schwartz hopes that others will see the logic of the system. If implemented properly, he argues, neural networks could produce a sizable drop in rates of re-abuse and of fatalities among children known to the system. In a city like New York, where thousands of cases are investigated each year and 44 children “known to the system” died in 2006, such changes could make a lasting difference. The bottom line, he says, is that “More accurate risk assessment tools can be developed using technology that is readily available and already being used in many other fields.”
This story has been corrected to accurately reflect that Richard Wexler reviewed the academic papers in question before commenting for the story.