Case Explained: Inside India's courts, AI's growing role sparks concern – Legal Perspective

The case seemed straightforward enough: a land dispute in the southern Indian state of Andhra Pradesh, a commissioner appointed to survey the property, and a set of objections to be ruled on.

The judge resolved it by citing four legal precedents. There was just one problem. None of those precedents existed.

All four had been generated by an AI tool — plausible-sounding judgments, complete with case names and legal reasoning, conjured from thin air. The error surfaced only on appeal, climbing all the way to India’s Supreme Court, the country’s highest judicial authority.

The top court did not treat it as an honest mistake. In late February, a bench declared that a ruling built on fabricated AI citations was not simply “an error in the decision-making process.” It was, the bench said, “misconduct.” Notices went out to India’s attorney general, solicitor general, and the Bar Council of India — the statutory body that licenses the country’s roughly 1.8 million lawyers.

“It is not a question of whether we should integrate AI or not but it is the question of how far the due diligence should be,” said Sindoora VNL, a lawyer who represents the defendants. “The court indicated this might be a question of misconduct. Now we have to see how far they are willing to take it.”

ChatGPT in the courtroom

India is not the only country asking that question. Across the world, in courtrooms that range from the sophisticated to the severely underfunded, artificial intelligence is quietly being woven into the machinery of justice, often faster than anyone has figured out how to govern it.

For instance, in 2023, a judge in Colombia included a transcript of a conversation with ChatGPT in a ruling involving an autistic child’s medical treatment, saying the tool had helped “assist, not replace” his reasoning. Months later, two lawyers in New York City were sanctioned after submitting a legal brief that cited six cases invented by the chatbot — precedents that had never existed.

Predictive policing: When AI predicts criminal activity

India’s most striking courtroom AI moment came not from a scandal, but from a judge who chose to be unusually candid.

In March 2023, a judge of the Punjab and Haryana High Court — the court with jurisdiction over two Indian states and one federal territory — stopped a bail hearing to type a question into ChatGPT. The man before him had been charged with murder. The judge wanted broader context on bail jurisprudence in cases where an assault involved cruelty.

The judge denied bail. But he was transparent enough to say, in his written order, that he had consulted the chatbot.

The transparency itself became the story. Legal advocates warned that ChatGPT was prone to inventing facts and encoding biases from its training data.

“AI cannot replace human conscience in justice delivery,” said Mimansa Ambastha, founder of Starlex Consultants and a strategic counsel on AI and cybersecurity in India. “The danger is that the balance between assistance and deference can slip. And when it slips in a bail hearing, a person’s liberty is at stake.” 

Bail in India is not a procedural formality. Hundreds of thousands of people are held as undertrial prisoners — accused who have not been convicted but spend years behind bars while their cases inch forward.

To understand the desperation that drives these AI misadventures in India, a single number helps: 55 million. That is roughly how many cases are currently pending across India’s judiciary, from the Supreme Court in New Delhi down to district courts in small towns where a judge may be managing hundreds of active files simultaneously.

More than 180,000 of those cases have been in trial for over 30 years without resolution.

The consequences of such delays can stretch across generations. Last year, for instance, the High Court in the northern Indian state of Uttar Pradesh acquitted three men who had spent 38 years in prison for a 1982 murder. By the time the verdict came, nearly four decades had passed since their arrest.

A 2018 government paper estimated it would take 324 years to clear the backlog at then-current rates.

It is into this crisis that AI has walked, offering the seductive promise of speed.

Ambastha said the crisis creates exactly the wrong conditions for unchecked AI adoption, because an overwhelmed system is the one most tempted by the promise of speed. “But the judiciary must always choose surety over speed,” she told DW.

Even the country’s top judges acknowledge the complications. During recent hearings, India’s Chief Justice Surya Kant observed from the bench that AI is paradoxically adding work because court staff must now verify whether AI-generated legal citations actually exist before proceedings can continue.

An Indian police officer prepares to close one of the gates at Tihar Jail, the largest complex of prisons in South Asia, in New Delhi, India
Hundreds of thousands of people in India are held as undertrial prisoners — accused who have not been convicted but spend years behind bars while their cases inch forward. Image: Saurabh Das/AP Photo/picture alliance

When algorithms reflect old biases

The deeper concern is not only that AI can invent facts; it can also inherit the biases embedded in the legal data used to train it.

Legal datasets are built from decades of judgments, police records and legal filings. Those records reflect the inequalities of the societies that produced them. When algorithms learn from them, experts say, they can quietly reproduce those patterns in new decisions.

“AI systems do not create bias out of thin air. They replicate what they are trained on,” said Ambastha. “If historical data contains discrimination, the model will absorb it and present the output as if it were objective.”

That risk is particularly troubling in criminal cases, where algorithmic assessments could influence decisions about bail, sentencing or recidivism. In India’s overcrowded prisons, where the majority of inmates are undertrial prisoners awaiting judgment, even small shifts in how risk is interpreted could affect thousands of lives.

The country’s prison data already reflects deep social disparities. According to data compiled by the National Crime Records Bureau, people from marginalized communities — including Dalits, tribal groups and Muslims — make up a disproportionately large share of inmates compared with their share of the population.

Muslims constitute about 14.2% of India’s population but account for roughly 18.7% of undertrial prisoners. Dalits, who make up about 16.6% of the population, account for about 21% of undertrials in Indian jails.

For experts studying AI in law, those patterns illustrate the danger. If predictive systems are trained on historical policing or incarceration data, they may treat those disparities as indicators of risk rather than evidence of structural inequality.

Research on large language models used in India has already found that such systems can reproduce stereotypes related to caste and religion present in their training data.

“If we are not careful, algorithms can end up reinforcing the same social hierarchies that the justice system is supposed to correct,” Ambastha said.

AI reproduces human bias in US justice system

Matheus Puppe, a Brazilian lawyer and researcher who studies the intersection of artificial intelligence and law, says the danger lies in how easily algorithmic outputs can appear authoritative. Judges and lawyers may treat machine-generated analysis as neutral simply because it is computational.

“The concern is that AI may reproduce structural distortions embedded in legal systems,” Puppe told DW. “Once those patterns are translated into algorithms, they gain a veneer of scientific legitimacy.”

The warning carries particular weight in countries like Brazil, which has moved aggressively to integrate AI tools into its courts to manage large caseloads. Puppe said that while such systems can help sort documents or flag relevant precedents, they also risk encoding the inequalities already present in judicial data.

The concern is not hypothetical. Studies of algorithmic risk-assessment tools used in the United States have found that some systems were more likely to label Black defendants as high risk than white defendants accused of similar crimes. Legal scholars say similar patterns could emerge anywhere AI models rely on historical criminal justice data.

For that reason, experts say AI must remain a strictly assistive tool inside courtrooms.

“Technology can help organize information or identify precedents,” Ambastha said. “But the moral judgment of the law cannot be delegated to a machine.”

AI is already helping

Despite these concerns, the legal system is not rejecting artificial intelligence outright. Instead, it is cautiously carving out limited roles where the technology can assist without influencing judicial reasoning.

In India, that experimentation is already underway. Sudipto Ghosh, founder of InLegalLLaMA, a large language model built for the judiciary, said his model is designed to work on Indian legal corpora, including statutes, judgments and procedural law, allowing it to retrieve relevant case law, generate structured summaries and assist in drafting basic legal arguments.

“The system is trained to understand the structure of Indian law,” Ghosh told DW. “It can map a query to applicable statutes and precedents, which is where much of the time in litigation is actually spent.”

That distinction matters in a system burdened by paperwork. Much of a lawyer’s and judge’s time goes not into adjudication, but into locating, organizing and interpreting past rulings — a process that AI can accelerate without formally entering the decision itself.

AI is revolutionizing policing but at what cost?

The Indian judiciary has been exploring similar tools from within. SUPACE (Supreme Court Portal for Assistance in Courts Efficiency), an AI-based research assistant developed under the Supreme Court’s e-committee, is intended to help judges sift through large volumes of case law, extract relevant passages and present them in an accessible format. Officials have emphasized that the system does not make recommendations or decisions, but functions as a backend aid to improve efficiency.

Other jurisdictions have moved further. In Brazil, where courts handle millions of repetitive cases each year, AI systems are already being used to group similar petitions, identify patterns in litigation and automate routine procedural steps.

Ricardo Augusto Ferreira e Silva, who studies the use of AI in Brazil’s judiciary, said these tools are most effective in high-volume environments. “They are designed to deal with scale, especially where cases follow predictable structures,” he said. “But their role has to remain operational, not decisional.”

Even among proponents, the caution is consistent. Ghosh said systems like InLegalLLaMA are still vulnerable to generating confident but incorrect outputs if used without verification. “You can reduce time, you can improve access,” he said. “But you cannot outsource judgment.”

This article was supported by the Tarbell Center for AI Journalism.

Edited by: Srinivas Mazumdaru