Shades of Grey - AI and Junior Lawyers

By Helen Driscoll, Associate, Rose Litigation Lawyers (Commercial Litigation) 


Generative AI and the wonders of our startlingly fast-evolving technology have taken us all by storm. 

 

While we continue to marvel, experiment, and criticise (eg, “that Concerns Notice we asked it to draft is not quite on point - see, humans will always be better!”), one thing is for sure: the legal profession is increasingly integrating this new learning technology into drafting, research, discovery and analysis tasks which would usually take days and days, performed by those in the junior ranks. 

 

Those who are not yet dipping their toes in could quickly be left behind. 

 

It is not all sunshine and rainbows (or algorithms and transformers). There are perils in blindly trusting our newfound R2-D2, and I am not talking about the Terminator coming to take over - I am talking about ethics, accuracy, and some of the pitfalls for junior lawyers in relying on the machine. 

 

Accuracy 

We have all heard about (amongst others) the New York lawyer who used ChatGPT to generate an affidavit and court briefs which cited fake cases in a personal injuries matter (Mata v Avianca). 

 

The court noted that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

 

The plaintiff’s solicitor later filed an affidavit attesting that ChatGPT had not only provided the legal sources, but had assured the solicitor of the reliability of the opinions and citations that the court had called into question.

 

When challenged further by the (human) solicitor, the chatbot said the cases cited “can be found in reputable legal databases such as LexisNexis and Westlaw.”

The cases were indeed fully hallucinated by ChatGPT, and had never been independently verified by the human user. 

I recently tried to obtain a summary of a semi-recent Queensland Supreme Court case through ChatGPT. ChatGPT produced an immaculate little summary of my case, complete with quotes from the ‘judgment’ - except it wasn’t actually what happened in the case at all. Fortunately, I had read the case previously and immediately knew that this was a tale completely fabricated by my robotic pal. 

Moral here - always verify the information spat out at you!


Ethics 

Rule 9 of the Australian Solicitors’ Conduct Rules states that a solicitor must not disclose any information which is confidential to a client and acquired by the solicitor during the course of the retainer. 

 

We all know this rule well, but how does it work with machine learning AI? The output of generative AI is not expressly programmed; it is produced in response to users’ input. The model trains on the data it is given, including the further prompting and reinforcement users provide when seeking output. 

 

Therefore, inputting your client’s confidential information into a public generative system to produce a letter of demand, complete with all the specifics, is an obvious and major pitfall. If using a closed system, it is critical for lawyers to understand the security and data protection measures in place for that system. 

 

Summing up 

The moral of the story is: it is great to put in the prompts, but use the ‘letter of demand’ generated as a template only, and if you ask for cases, make sure you search the citation and pull up the reported version! Lastly, make sure you aren’t accidentally inputting client data into a system (like ChatGPT or similar models) which learns from everyone’s input. 

 

Arguably, there isn’t really a question anymore of whether the profession should use technology such as generative AI, but rather which technology, how to use it, and to what degree. 



 

*This article was not written with AI - just streams of human consciousness.