Federal Judge to ICE agents: Don't ask ChatGPT for help, as it will …

A federal judge has criticized ICE agents for using AI tools like ChatGPT to draft use-of-force reports, warning it undermines agent credibility and leads to inaccuracies. Evidence showed an agent inputting minimal details and images, resulting in AI-generated narratives that contradicted body camera footage, raising serious concerns about accuracy and privacy.
A federal judge has condemned the practice of US Immigration and Customs Enforcement (ICE) agents using artificial intelligence (AI) tools, such as ChatGPT, to draft official use-of-force reports, warning that the practice “undermines the agents’ credibility and may explain the inaccuracy of these reports.” US District Judge Sara Ellis issued the condemnation in a two-sentence footnote tucked into a 223-page opinion last week concerning law enforcement responses to immigration protests in the Chicago area.

What the judge said on factual discrepancies in reports

The judge highlighted specific evidence found in body camera footage, noting that at least one agent was observed asking ChatGPT to compile a narrative for a report after providing the program with only “a brief sentence of description and several images.”
She highlighted factual discrepancies between the official narratives generated by the AI and what the body camera footage actually showed, as per a report in Fortune. Experts warn that the failure to draw on an officer’s actual experience raises serious concerns about accuracy and privacy in high-stakes legal documentation. They are calling the incident a severe breach of protocol, especially for reports that justify law enforcement actions.
“What this guy did is the worst of all worlds. Giving it a single sentence and a few pictures — if that’s true, if that’s what happened here — that goes against every bit of advice we have out there. It’s a nightmare scenario,” Ian Adams, an assistant criminology professor at the University of South Carolina and a member of the Council for Criminal Justice’s AI task force, was quoted as saying. “We need the specific articulated events of that event and the specific thoughts of that specific officer... That is the worst-case scenario, other than explicitly telling it to make up facts,” he added.

Meanwhile, Katie Kinsey, tech policy counsel at the Policing Project, noted that the use of public AI tools also creates critical privacy risks. She highlighted that the agent may have unwittingly violated policy by uploading images to a public version of ChatGPT, potentially making them part of the public domain.

About the Author: TOI Tech Desk

The TOI Tech Desk is a dedicated team of journalists committed to delivering the latest and most relevant news from the world of technology to readers of The Times of India. TOI Tech Desk’s news coverage spans a wide spectrum across gadget launches, gadget reviews, trends, in-depth analysis, exclusive reports and breaking stories that impact technology and the digital universe. Be it how-tos or the latest happenings in AI, cybersecurity, personal gadgets, platforms like WhatsApp, Instagram, Facebook and more; TOI Tech Desk brings the news with accuracy and authenticity.
