This is The Marshall Project’s Closing Argument newsletter, a weekly deep dive into a key criminal justice issue. Want this delivered to your inbox? Sign up for future newsletters.
In July, Tesla fans lined up for hours in Los Angeles to check out the new “retro-futuristic” diner and charging station opened by Elon Musk. Among the attractions was the company’s “Optimus” robot, which served popcorn to hungry customers near the humans grilling Wagyu burgers. Fifty miles east in Chino, Delinia Lewis, the associate warden of the California Institution for Women, hopes to one day put AI-powered machines like these to work in her prison doing far more important jobs than slinging snacks. As staffing shortages continue to plague prisons around the country, Lewis believes AI could help close the gap.
“Medicine distribution, cell feeding, security searches, package searches for fentanyl, all the hazardous and routine tasks that staff don’t want to do,” said Lewis. “Why not let the robot do it? Then staff can focus on more intricate parts of the job.”
Lewis has written about the use of AI in corrections, and said she is forming a business to produce AI-driven robots for use in corrections settings. While she hopes the tech could be employed within the next 10 years, the state’s budget crisis makes acquiring cutting-edge AI tools tough.
“Who knows when California will be back in the green,” Lewis said of the state’s budget, “but we are losing staff at a record rate, so the bridge has got to break, and we’ve gotta really take advantage of technology.”
Robots behind bars may be a ways off, but prisons and jails have been rapidly adopting other AI and machine-learning tools. Advocates critical of the technology are concerned about opaque data collection processes, privacy violations and bias.
Prison telecommunications companies were some of the first to dip their toes into AI. In 2017, LeoTech began marketing Verus, a phone surveillance tool that records and monitors calls. The company uses Amazon’s cloud and transcription services to flag keywords that might alert staff to “valuable intelligence.” At least three states used the tool to monitor phone calls for mentions of coronavirus during the pandemic, in an attempt to track outbreaks, according to The Intercept. While tools like Verus were originally marketed as add-ons to existing phone services, many prison telecommunications giants have since made AI call monitoring a default part of their services.
“Given Securus and Global Tel Link are now providing it, it means it’s going to be a lot more accessible in a lot more places,” said Beryl Lipton, an expert on law enforcement and prison surveillance tools at the Electronic Frontier Foundation.
The use of these tools has led to serious breaches of attorney-client privilege. Over the last five years, lawsuits have been filed in several states against Securus, alleging that the company recorded privileged calls. Securus has settled some of the lawsuits and has denied purposely recording protected calls. The controversy hasn’t stopped corrections departments from using the technology, or vendors from marketing it. LeoTech has been lobbying in Ohio, where lawmakers passed a budget this year that includes $1 million for the state’s prison system to pay for software that will “transcribe and analyze all inmate phone calls” beginning next year, according to Signal Ohio. Florida inked a deal with LeoTech in 2023.
Lipton’s primary concern with the AI tools in prisons and police departments is how the data they gather is stored, retained, and later fed into other systems.
“Law enforcement and the companies helping them do this are very interested in collecting all the information they possibly can collect on somebody, because they think this is going to aid them in solving or preventing a future crime,” said Lipton.
While some AI technology is making its way into the system, in some ways, the U.S. is playing catch-up with other countries. Last month, the United Kingdom’s Ministry of Justice laid out its plan to embed AI across prisons, probation services and courts. Some of the agency’s goals include integrating AI transcription and document processing tools for probation officers, and the creation of a “digital assistant…to help families resolve child arrangement disputes outside of court.”
But the star of the announcement is a new “AI violence predictor” that promises to prevent prison violence by analyzing data, including an incarcerated person’s age and previous involvement in violent incidents. If this sounds familiar, you might be thinking of the risk assessment tools long used across the U.S., which ProPublica found nearly 10 years ago were rife with racial bias and “remarkably unreliable in forecasting violent crime.” The older tools generally assess risk by considering a set of weighted variables — such as age and prior convictions — either manually or by using an algorithm. AI-driven “predictors” are like risk assessment tools on steroids, drawing on much larger datasets.
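For readers curious what “a set of weighted variables” means in practice, here is a minimal sketch of how such a score could be computed. Everything in it — the variable names, the weights, the numbers — is invented for illustration and does not correspond to any actual assessment instrument or vendor product:

```python
# Hypothetical illustration of a simple weighted risk score of the kind
# older assessment tools compute. All weights are arbitrary placeholders,
# NOT values from any real tool.

def risk_score(age: int, prior_convictions: int, prior_violent_incidents: int) -> float:
    """Combine a few weighted inputs into a single number."""
    score = 0.0
    score += 2.0 if age < 25 else 0.0        # youth treated as higher risk
    score += 1.5 * prior_convictions         # each prior conviction adds weight
    score += 3.0 * prior_violent_incidents   # violent history weighted heaviest
    return score

# A fixed cutoff then turns the score into a label such as "high risk."
print(risk_score(age=22, prior_convictions=2, prior_violent_incidents=0))
```

The point critics make is that both the choice of inputs and the weights attached to them can encode bias from the underlying data, and an AI-driven version of this idea inherits the same problem at a much larger scale.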
While today’s AI-driven tools are more sophisticated in some ways, the risk of bias and error remains, and the efficacy of predictive tools has repeatedly been called into question.
“A lot of these predictive tools can create unintended errors where certain communities are underserved or misunderstood because of how the model missed or wrongly accounted for individuals’ risks in that community,” said Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project, who has studied AI surveillance in prisons.
In addition to predicting violence against others, some correctional staff are looking to use “biometric behavioral profiling” tools in combination with AI to prevent in-custody deaths and medical emergencies. The Maricopa County Sheriff’s Office, in Arizona, wants to buy wearable technology to track heart rate, body temperature, and other “key indicators,” according to AZ Central. Jails in Colorado, Alabama, and elsewhere in Arizona have already begun using similar tools.
Lewis, the associate warden in California, is well aware of the ethical concerns that come with AI tools, and believes criticism will ultimately produce better outcomes.
“I welcome concerns, because that gives us an opportunity to do more research and resolve those concerns,” said Lewis. “I don’t think it’s going to inhibit us, I think it’s just going to help us make a more advanced and a better product.”