A recent judicial proceeding has cast a spotlight on the ethical and professional boundaries of using artificial intelligence in legal practice. Surendra Singh and Associates, a law firm based in Pietermaritzburg, South Africa, is at the center of controversy after Judge Elsja-Marie Bezuidenhout criticized the firm for allegedly using AI and internet sources to cite non-existent legal cases in court documents.
The case unfolded when controversial KwaZulu-Natal politician Godfrey Mvundla, represented by Surendra Singh and Associates, challenged his suspension as Mayor of Umvoti. During the court proceedings, Judge Bezuidenhout noticed irregularities in the case citations provided by Mvundla’s counsel, Ms S Pillay, who was briefed by the firm. On further examination, it emerged that only two of the nine cited cases existed, and one of those had been cited incorrectly.
The scrutiny deepened when an articled clerk, tasked with drafting the notice of appeal, admitted to sourcing the citations through her academic portal at the University of South Africa (Unisa), although she could not specify which journals. The judge asked whether an AI program such as ChatGPT had been used, which the clerk denied. Subsequent checks, and a court-ordered search for the original judgments in recognized law reports and databases such as SAFLII, proved fruitless, pointing to an unsettling reliance on possibly AI-generated fictitious cases.
The situation escalated when the clerk and Suren Singh, the law firm's owner, struggled to substantiate the cited cases, with Singh refusing to cover copying costs at the court library. Judge Bezuidenhout’s judgment expressed dismay over the firm's inability and unwillingness to verify the case references, which she considered "downright unprofessional" and a serious misrepresentation in legal practice.
This case not only highlighted the pitfalls of unsupervised AI use in legal settings but also raised questions about the integrity and diligence expected in legal representation. The judge referred the matter to the Legal Practice Council (LPC) for further investigation and possible disciplinary action against the involved parties, emphasizing the need for legal professionals to thoroughly check and ensure the validity of cited authorities.
In her condemnation, Judge Bezuidenhout underscored that reliance on fabricated case authorities by legal professionals should not be taken lightly, as it undermines trust in the judicial system and the principle that justice rests on factual, verifiable authorities.
This incident serves as a cautionary tale for legal practitioners incorporating AI technologies into their practice, highlighting the importance of balancing technological advancement with ethical legal standards. It underscores the imperative for attorneys to maintain direct oversight and verify the accuracy of AI-generated content, thereby safeguarding the integrity and reliability of legal proceedings.