Inside India’s courts, AI’s growing role sparks concern
Artificial intelligence is becoming more prevalent in Indian courtrooms, raising alarms that it could amplify bias and deepen systemic inequities.

A seemingly simple case in southern India involved a land dispute, a property surveyor, and a series of objections. The judge relied on four legal references to reach a verdict. The problem: those references were entirely fabricated. Generated by an AI tool, they looked credible, complete with case names and reasoning, yet had no basis in reality. The fabrication came to light on appeal, when the case reached the Supreme Court, India’s apex judicial body.

The court did not dismiss it as an innocent oversight. In late February, a panel labeled the decision not just an error but “misconduct,” and issued notices to the attorney general, the solicitor general, and the Bar Council of India, which oversees approximately 1.8 million legal professionals nationwide.
A Case of Fabricated Precedents
The ruling’s use of AI-created citations highlighted the risks of overreliance on technology. Sindoora VNL, a defense lawyer, emphasized that the court’s stance underscores the need for careful scrutiny. “It is not about whether we should embrace AI, but how thoroughly we should vet its use,” she stated. “The court’s warning is clear: they expect accountability.” This incident is part of a larger trend, with AI being integrated into judicial processes globally, often outpacing regulatory frameworks.
Global AI in Courtrooms
Similar scenarios are unfolding worldwide, from well-resourced courts to those with limited resources. In 2023, a Colombian judge referenced a ChatGPT conversation in a case involving a child’s medical care, noting the tool aided rather than replaced his judgment. In New York, two attorneys faced penalties after citing six fabricated cases in a legal brief. Meanwhile, in India, a judge in Punjab and Haryana took a different approach. During a murder bail hearing, he paused proceedings to consult ChatGPT, seeking clarity on legal standards. The judge later acknowledged the AI’s input in his written order, drawing attention to its role in decision-making.
Legal experts warn that AI can distort facts and reproduce biases embedded in its training data. Mimansa Ambastha, founder of Starlex Consultants, argued that the technology cannot substitute for human judgment. “The risk lies in blurring the line between assistance and substitution,” she said. “When AI influences critical decisions, like bail, it could threaten a person’s freedom.” That concern is amplified by India’s extensive case backlog, the very problem driving the adoption of AI in the name of efficiency.
The Backlog Crisis
In India, bail is not a mere formality. Many individuals remain incarcerated for years without conviction while their cases wind through the courts. The sheer volume of unresolved litigation reveals the system’s strain: more than 180,000 cases have been pending for over three decades. Last year, three men in Uttar Pradesh were acquitted after serving 38 years over a 1982 murder. A 2018 government report estimated that clearing the existing backlog would take 324 years. Against that backdrop, AI offers a tempting shortcut, promising to expedite justice. Experts caution, however, that the judiciary must balance innovation with due diligence.