A federal judge has raised concerns about immigration agents using artificial intelligence to write use-of-force reports, warning that the practice could lead to inaccuracies and erode public trust in law enforcement.
According to The Associated Press, U.S. District Judge Sara Ellis highlighted the issue in a two-sentence footnote in a 223-page opinion last week.
She noted that at least one agent reportedly asked ChatGPT to compile a report narrative after providing a brief description and several images, a process Ellis said “may explain the inaccuracy of these reports.”
She also pointed out discrepancies between body camera footage and the official account of the events.
Experts say using AI in this way is deeply problematic.
Ian Adams, an assistant professor of criminology at the University of South Carolina and a member of the Council on Criminal Justice's task force on AI, called it “the worst of all worlds.”
“Giving it a single sentence and a few pictures — if that’s true, if that’s what happened here — that goes against every bit of advice we have out there. It’s a nightmare scenario,” Adams said.
Few law enforcement agencies have clear policies on AI use, particularly for high-stakes reports like those justifying the use of force.
Adams stressed that courts rely on the specific perspective of the officer when evaluating whether force was reasonable.
“We need the specific articulated events of that event and the specific thoughts of that specific officer to let us know if this was a justified use of force,” he said.
AI-generated reports also raise privacy concerns. Katie Kinsey, chief of staff at the Policing Project at NYU School of Law, noted that uploading images to a public AI tool may result in the data entering the public domain, potentially exposing it to misuse.
“You would rather do things the other way around, where you understand the risks and develop guardrails around the risks,” Kinsey said.
Some tech companies, including Axon, offer AI tools that integrate with body cameras, but these systems typically draft narratives from audio rather than visuals, which experts say are difficult for AI to interpret accurately.
Andrew Guthrie Ferguson, a law professor at George Washington University, said, “It’s about what the model thinks should have happened, but might not be what actually happened. You don’t want it to be what ends up in court, to justify your actions.”
As law enforcement continues to explore AI, experts emphasize that transparency, strict policies, and careful safeguards are essential to prevent errors and protect both officers and the public.













