Artificial Intelligence in the Legal Field
Does ChatGPT have a place in disputes?
AI-generated correspondence often lacks the clear and persuasive language needed to resolve disputes; instead, it tends to summarize legal principles at the expense of properly engaging with the issues at hand. AI can assist with research and review when used responsibly and its output is verified, but it is no substitute for professional legal input.
AI tends to fabricate information where it has a knowledge gap. This occurred in the recent case of Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others, where a candidate legal practitioner drafted a supplementary notice of appeal that referenced incorrectly cited case law, several of the cases being non-existent or quoted out of context. The candidate stated that these cases were obtained from law journals, but upon further investigation and research requested by Judge E Bezuidenhout, only 2 of the 8 cases could be found to exist, and even then the citation of one was incorrect. It was later determined that the cases had been drawn from AI-generated work and that neither the candidate attorney nor their supervisor had reviewed them for accuracy, or indeed for the existence of the case law.
Judge Bezuidenhout was of the view that the failure to supervise the candidate and the lack of disclosure of sources were of a very serious nature, and that “relying on AI technologies when doing legal research is irresponsible and downright unprofessional”. Judge Bezuidenhout went on to order the Registrar to refer her judgment to the Legal Practice Council of KwaZulu-Natal for investigation and further action.
Further concerns arose in the decision of Parker v Forsyth NO and Others. In that matter, a legal practitioner (the plaintiff’s attorney) submitted a list of authorities to the defendant’s attorney, who could not locate any of the cases referenced and requested the source of the authorities. The plaintiff’s attorney ultimately admitted that they had neither accessed nor read the cases and could not source them. It came to light that the cases referenced had been sourced from an AI chatbot, namely ChatGPT.
In dealing with the issue of using artificial intelligence for research, the court stated:
“However, the Plaintiff’s legal team did not submit these cases to the court as binding authorities, they submitted them to the defendants’ attorneys as being the cases that they would rely on prior to realizing the error of their proposed actions. It seems to the court that they placed undue faith in the veracity of the legal research generated by artificial intelligence and lazily omitted to verify the research. Ordinarily, if the court was satisfied that the attorneys had attempted to mislead the court, the consequences would have been far more grave. Not only would it have attracted a cost order de bonis propriis against the relevant attorney, but the court would have been compelled to report the attorney’s conduct to the Legal Practice Council. As it happens, the court is quite confident that neither the plaintiff’s attorney nor her counsel attempted to mislead the court. It seems that the attorneys were simultaneously simply overzealous and careless. This incident serves as a timely reminder to, at least, the lawyers involved in this matter that when it comes to legal research, the efficiency of modern technology still needs to be infused with a dose of good old-fashioned independent reading.”
In another recent case, Northbound Processing v SA Diamond Regulator, Smit AJ noted that it came to his attention while drafting the judgment that two cases cited in Northbound’s heads of argument for key propositions on the mandamus, which could have been dispositive of the matter had they applied, did not exist. When the parties were approached to clarify the position in relation to these non-existent authorities, junior counsel for Northbound stated in his first response that an incorrect version of the heads had been filed. In response to a direct question from the court as to whether the incorrect citations constituted so-called “artificial intelligence hallucinations”, counsel confirmed that they appeared to be so. Counsel then explained that, owing to time pressure caused by various factors, he had used an online subscription tool called “Legal Genius”, which claimed to be “exclusively trained on South African legal judgements and legislation”. As a consequence, the same order was made as in the Mavundla case: the conduct of the applicant’s legal practitioners was referred to the Legal Practice Council for investigation.
While AI is becoming an increasingly useful tool, it cannot take the place of legal expertise. Tools like ChatGPT can assist with legal research and document review, but their uncritical use in legal disputes presents serious risks. The cases above underscore that AI cannot replace sound legal judgment and due diligence: failure to independently verify AI-assisted research is not only unprofessional but potentially unethical and harmful to legal proceedings.

