The increasing use of artificial intelligence technologies in Indian courts raises concerns about exacerbating biases and structural flaws within the legal system. Cases have come to light in which AI tools produced fictitious legal precedents, resulting in serious legal complications. In one land dispute in the state of Andhra Pradesh, a judge relied on four precedents that did not exist, all fabricated by an AI tool.
The issue was discovered only on appeal, when the case reached the Supreme Court of India, which held that a ruling based on fabricated citations constitutes misconduct rather than a mere error in decision-making. The court issued notices to the Attorney General of India and the Bar Council of India, underscoring the growing concern over the use of AI in the judicial system.
Details of the Incident
In March 2023, the Punjab and Haryana High Court witnessed an unconventional moment when a judge paused a bail hearing to consult ChatGPT for additional information on legal principles governing bail in assault cases. Although the judge denied bail, he cited his consultation with the AI in the order, raising questions about the reliability of such tools in legal decision-making.
The concern is not limited to India; it extends to other countries where AI is being integrated into courts. In Colombia, a judge incorporated a conversation with ChatGPT into a ruling concerning the medical treatment of a child with autism, while two lawyers in New York were sanctioned for submitting a legal filing that cited fictitious precedents invented by ChatGPT.
Background & Context
India faces a significant legal crisis, with an estimated 55 million cases pending in the judicial system, leading to long delays. In one striking example, three men in Uttar Pradesh were exonerated after spending 38 years in prison for a murder that occurred in 1982. Such delays raise serious concerns about justice, as many defendants remain in jail for years without trial.
In these circumstances, AI may look like a quick fix, but it carries substantial risks. AI-based tools can reinforce biases already embedded in the legal data used to train them, producing unfair outcomes.
Impact & Consequences
The greatest concern is that AI could entrench social bias and discrimination. Data show that marginalized communities, such as Dalits, tribal groups, and Muslims, make up a higher share of the prison population than of society at large. These disparities demand caution in how the judicial system deploys AI, since algorithm-driven decisions can affect thousands of lives.
Judges and lawyers therefore need to remain alert to the risks of relying on AI. There are also calls to confine AI to an assistive role, rather than letting it substitute for legal judgment.
Regional Significance
The issue of AI in courts is also significant for the Arab region. As technology becomes increasingly integrated into judicial systems, Arab countries must stay alert to the risks of social bias and discrimination. India's experience could offer valuable lessons for Arab nations on how to integrate AI into legal systems responsibly.
In conclusion, the use of AI in courts requires a delicate balance between efficiency and justice. Human values and fairness must remain at the core of any technology deployed within the judicial system.