DOJ ramps up AI for legal work, crime predictions, surveillance, inventory shows


From litigation to federal prisons to criminal investigations, artificial intelligence appears to have touched nearly every corner of the Department of Justice in the past year. 

Just two years ago, the DOJ reported four use cases of AI at the agency. In its most recent 2025 use case inventory, the agency logged 315 cases, a 31% increase from last year. The use cases varied widely in function, though technology and privacy experts took particular note of instances where AI was deployed at the agency for crime prediction, public surveillance, and litigation. 

Of these cases, 114 were deemed “high-impact” by the agency. Under the latest guidance, high-impact AI includes models that could have “significant impacts” when deployed, including for decisions or actions with a “legal, material, binding or significant effect on rights and safety.” 

Jay Stanley, a senior policy analyst with the American Civil Liberties Union’s Speech, Privacy, and Technology Project, told FedScoop that the DOJ’s 2025 inventory provides a “snapshot” of how the federal government “is aggressively seeking to test and exploit a wide variety of AI algorithms and sifting through data on ordinary people.” 

The deeper look into the DOJ’s day-to-day work with the emerging technology comes amid heightened scrutiny of the agency, as it helps carry out the Trump administration’s aggressive enforcement priorities, including immigration. 

Predictive AI

From intake to release, the Federal Bureau of Prisons uses AI across several stages of the federal imprisonment process. Among the most controversial cases is the use of predictive AI to analyze inmate behavior, with experts expressing concerns about the biases and real-life consequences of such technology. 

In one case, called “BRAVO Classification,” the FBOP uses “statistical techniques to predict potential for misconduct for newly admitted inmates,” and assigns appropriate security levels in turn, according to the inventory. The DOJ division is also using AI to predict rates of recidivism for current inmates and sex offenders. 
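The inventory does not publish BRAVO's model, but the "statistical techniques" it describes typically amount to scoring intake features and mapping the score to a security tier. The following is a hypothetical sketch of that pattern; the features, weights, and thresholds are invented for illustration and are not BRAVO's.

```python
import math

def risk_score(features, weights, bias=0.0):
    """Logistic score in [0, 1] from a weighted sum of intake features."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def security_level(score):
    """Map a misconduct-risk score to a coarse security tier."""
    if score < 0.25:
        return "minimum"
    if score < 0.6:
        return "medium"
    return "high"

# Example intake record (placeholder features: age band, prior
# incidents, sentence length) with invented weights.
score = risk_score([0.3, 2.0, 0.8], [0.5, 0.9, 0.4], bias=-2.0)
level = security_level(score)
```

Note how the critics' concern enters at exactly this step: whatever biases are baked into the training data surface as weights, and the resulting score directly determines an inmate's security level.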

“Any sort of tool that determines, that aims to predict criminality or recidivism, is going to make inferences not based on the actual future behavior of somebody, because nobody can tell the future — not even AI — that’s going to be making inferences on a number of factors, many of which lead to bias,” Jacob Hoffman-Andrews, a senior staff technologist at the Electronic Frontier Foundation, told FedScoop. “Even if you tell us specifically not to, AI is very good at learning correlated variables and figuring out how to be biased.” 

In another high-impact case, the DOJ’s Office of Justice Programs revealed it is using agentic AI for a “prisoner assessment tool targeting estimated risk and needs,” or PATTERN. Like the FBOP cases, this tool is designed to predict the risk of recidivism for incarcerated adults. The tool, which has been in use since July 2025, is intended to reduce the likelihood of re-engagement with the justice system, according to the inventory. 

“That may sound nice and fluffy … but of course, that can also have the flipside of this AI essentially deciding who has access to certain programming and services,” Hoffman-Andrews added. “The concept of an AI that predicts the future in terms of whether somebody is going to commit a crime is fundamentally problematic.”

Notably, PATTERN is one of four agentic AI use cases across the entire 2025 inventory. The previous year’s inventory did not report any agentic uses.

The DOJ’s Bureau of Alcohol, Tobacco, Firearms, and Explosives is also leveraging AI for predictions, such as through its airline travel intelligence program. According to the inventory, the ATF is using classical or predictive machine learning to analyze travel data and identify “atypical” routes or passenger movements more quickly. 
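The ATF system is not public, but "atypical route" detection is commonly a frequency-based anomaly flag: routes that appear rarely in historical records get surfaced for review. This is a hedged sketch of that idea; the data and threshold are invented.

```python
from collections import Counter

# Toy historical travel records (origin, destination) — placeholders,
# not real ATF data.
history = [
    ("JFK", "LAX"), ("JFK", "LAX"), ("JFK", "LAX"),
    ("ATL", "ORD"), ("ATL", "ORD"),
    ("BDL", "GDL"),  # appears once, so "atypical" under this rule
]

counts = Counter(history)
total = len(history)

def is_atypical(route, min_share=0.2):
    """Flag a route seen in less than min_share of all records."""
    return counts[route] / total < min_share

flags = [r for r in set(history) if is_atypical(r)]
```

Even this toy version illustrates the critique that follows: "atypical" is defined purely by deviation from the statistical norm, so the flag rewards conformity rather than identifying wrongdoing.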

But as AI models get trained on what an agency perceives as “suspicious,” Stanley suggested it is “hard to think of something that would do more to encourage conformism and fear of breaching norms than that.” 

“Any person, because [of] some AI algorithm that nobody understands how it works, can identify you as suspicious and bring you to the attention of law enforcement,” he added. 

Turning to AI in litigation

Behind the Federal Bureau of Investigation, the DOJ’s Civil Division (CIV) saw the second-highest increase in AI use, jumping from one case in 2024 to 17 in 2025. Of those 17, seven were marked as high-impact, serving functions ranging from reducing the backlog at the Office of Immigration Litigation to detecting fraud in large-scale data collections or compensation claims programs. 

Several cases aim to improve the workflows of Civil Division teams by reducing time-consuming tasks, such as legal research and project management, as identified in the inventory. In one high-impact case titled “AI evidence and claim consolidation,” the DOJ said it is exploring how AI can be used to synthesize records, summarize expert reports and depositions, and identify duplicate claims. 
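The inventory does not say how the claim-consolidation tool identifies duplicates, but a common baseline is comparing token overlap between claim texts. This is an illustrative sketch of that baseline, not the DOJ's actual method; the claims and the threshold are invented.

```python
def jaccard(a, b):
    """Jaccard similarity between the word sets of two texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

claims = [
    "water damage to basement after storm on march 3",
    "basement water damage after the march 3 storm",
    "vehicle stolen from driveway in april",
]

# Pairs above the similarity threshold are flagged for human review.
dupes = [
    (i, j)
    for i in range(len(claims))
    for j in range(i + 1, len(claims))
    if jaccard(claims[i], claims[j]) > 0.5
]
```

In practice, production systems replace word-overlap with learned text embeddings, but the flag-for-review structure is the same.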

While AI can help legal teams work more quickly and access more data, Hoffman-Andrews noted that its use in a legal setting raises questions about the “balance of power.” 

“If the prosecuting side has access to tools that allow them to analyze a ton of evidence and come up with the most useful stuff for them, the question is, does the defense have the same ability to synthesize all the possible evidence and find all the possible best cases for the defendant to be exonerated?” he said. 

In another non-high-impact case, the DOJ said it is exploring how AI can be used to detect other AI-generated content, stating federal litigators “face increasing challenges with AI-manipulated evidence and documents from opposing parties.” 

Of note, only one of CIV’s use cases listed a vendor — Salesforce — which was a recurring use case from 2024. 

The agency’s Executive Office for United States Attorneys, which supports U.S. attorneys’ offices across the country, had just one use case. That use case lists data analytics and software giant Palantir as its vendor. 

The high-impact use case is classified as generative AI and is used for “integration and analysis of case information” and to reduce the time needed to update and maintain an accurate case management system, per the inventory. The tool was deployed in June 2025. 

Palantir, a major government contractor that has drawn controversy for its work with immigration efforts at the Department of Homeland Security, did not immediately respond to FedScoop’s request for comment. 

Multiple DOJ divisions continue using facial recognition tech

Among the 315 reported use cases, facial recognition technology drew significant attention from privacy experts. In addition to the FBI’s deployment of the tool for multiple uses, the U.S. Marshals Service (USMS) is also engaging with the technology. 

AWS Rekognition, an image- and video-analysis software offering facial recognition capabilities that appeared in the FBI’s 2023 use case inventory, is now being considered for use in USMS, the latest inventory shows. The use case is marked as high-impact and is in the pre-deployment phase, meaning it is in development or acquisition. 

The DOJ said the product would be used to prevent duplicate records in USMS’s Capture System — the division’s information system that manages prisoner and operational information and processes. The agency left all of the risk management categories blank for the use case. 

“The application would index faces already in the Capture system then search new entries against the existing database to flag and help determine if a possible existing record exists for a new subject,” the inventory states, adding that the application would give the agency “higher data quality, lowered risk of duplicate FID [federal ID number] and faster intakes.” 
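The index-then-search workflow the inventory describes can be sketched with a toy in-memory gallery: new face embeddings are compared against indexed ones, and close matches are flagged as possible existing records. The embeddings, threshold, and distance metric below are placeholders; Rekognition's actual representations are not public.

```python
import math

gallery = {}  # record ID -> stored face embedding

def index_face(record_id, embedding):
    """Add a face embedding to the searchable gallery."""
    gallery[record_id] = embedding

def search_face(embedding, threshold=0.5):
    """Return record IDs whose stored embedding is within threshold."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [rid for rid, emb in gallery.items() if dist(emb, embedding) < threshold]

index_face("FID-001", (0.1, 0.9, 0.3))
index_face("FID-002", (0.8, 0.2, 0.5))

# A new intake photo close to FID-001 gets flagged as a possible duplicate.
matches = search_face((0.12, 0.88, 0.31))
```

The threshold is where the policy questions live: set it loose and unrelated people are flagged as the same record; set it tight and true duplicates slip through.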

According to the AWS website, Rekognition can detect “face liveness” and distinguish real users from bad actors, along with face detection and analysis, custom labels, text and video segment detection, and more. 

USMS’s use of Rekognition follows Amazon’s 2020 decision to place a moratorium on police departments’ use of the tool in connection with criminal investigations. Amazon previously told other media outlets its moratorium on providing facial recognition to police was extended indefinitely. It is not clear whether USMS’s use case would violate this moratorium, and AWS did not provide a comment on the question. 

The inventory further showed that Clearview AI, a facial recognition company that has scraped billions of images from the internet for its database, is still being used at USMS, which first reported the use case in 2024. 

The DOJ said the technology “assists with the possible identification of an investigative subject,” but only serves as an “investigative lead” and is “never grounds for law enforcement actions.”

“All leads generated with this AI use must be corroborated with additional law enforcement techniques before actioned,” the agency wrote in its inventory. 

But for some experts, this is not enough reassurance, as AI presents new opportunities for bias and privacy violations. 

“There has always been a long-standing history of racist and xenophobic surveillance in the U.S.,” Will Owen, communications director at the Surveillance Technology Oversight Project, told FedScoop. “This moment is bringing a major acceleration in that use of surveillance technology.” 

Researchers have warned over the past two decades that AI will be used for surveillance and the monitoring of people at scale, Stanley noted, adding that these predictions are playing out in real time. 

“Maybe you have a facility that has 200 video cameras for 20,000 hours of video. Nobody’s gonna pay to watch all that video,” Stanley said. “You can get an AI to watch all that video … sift through it, and it becomes [a] much more powerful surveillance tool.” 

Today’s surveillance, he added, is “more powerful” than during the first Trump administration. 

Neither the DOJ nor Clearview responded to FedScoop’s request for comment by publication time. 


Written by Miranda Nazzaro

Miranda Nazzaro is a reporter for FedScoop in Washington, D.C., covering government technology. Prior to joining FedScoop, Miranda was a reporter at The Hill, where she covered technology and politics. She was also a part of the digital team at WJAR-TV in Rhode Island, near her hometown in Connecticut. She is a graduate of the George Washington University School of Media and Public Affairs. You can reach her via email at miranda.nazzaro@fedscoop.com or on Signal at miranda.952.