1. Introduction
1.1. Who should read this Guidance Statement?
This Guidance Statement is for solicitors and law practices.
1.2. What is the issue?
The purpose of this Guidance Statement is to provide Queensland legal practitioners with a framework for the responsible and ethical use of artificial intelligence (‘AI’) in legal practice.
1.3. Status of this Guidance Statement
This Guidance Statement is issued by the Queensland Law Society (‘QLS’) Ethics and Practice Centre for the use and benefit of solicitors.
This Guidance Statement does not have any legislative or statutory effect. However, having regard to its content may make it easier for you to account for your actions if a complaint is later made to the Legal Services Commission.
This Guidance Statement is not legal advice, nor will it necessarily provide a defence to complaints of unsatisfactory professional conduct or professional misconduct.
This Guidance Statement represents a standard of good practice and is supported by the QLS Council on the recommendation of the QLS Ethics Advisory Committee.
1.4. Companion material
This Guidance Statement considers ethical considerations and risks that arise from AI use. Companion material providing implementation and risk management information can be found on the QLS Website.
1.5. Fundamental position on the use of AI in legal practice
The Queensland Law Society does not seek to discourage use of AI tools in legal practice, nor does QLS view such use as incompatible with ethical duties. AI has the potential to significantly enhance access to justice[1] and lessen the burden of routine work on practitioners.
However, such use must be carefully considered and undertaken within appropriate guardrails. At the date of publication, many AI applications and their providers are in their infancy. Start-up companies in a rapidly changing environment may not be able to offer the consistency and stability a legal practice commonly expects from software providers. Plan what you will do if a service becomes unexpectedly unavailable.
2. Ethical principles
Australian Solicitors’ Conduct Rules (‘ASCR’) 4, 5, 9, 17, 19 and 37 apply to this Guidance Statement.
These Rules impose professional obligations on all solicitors.
Fiduciary obligations are also relevant, especially when permitting third party access to client data.
3. Useful AI Concepts
Note: these foundations are an important precursor to understanding a practitioner’s ethical obligations.
Artificial Intelligence (AI): AI enables machines to perform tasks that typically require human intelligence, such as interaction using natural language, sorting data and solving problems. However, intelligence and understanding are only simulated, not replicated.
Large Language Model (LLM): a type of AI tool that interacts naturally with humans using human language rather than traditional computer code or instructions. It will typically be very good at simulating communication, but performance in undertaking research or performing calculations can be erratic.
Machine learning: programming a computer using a large body of relevant data (“training data”). The machine is told the objective but not how to achieve it, and uses trial and error to develop a model which solves the problem it was set. The model self-corrects by checking whether it is “right” in comparison to outcomes reflected in the dataset (“model training”). The resulting model can often improve over time through further interaction and data ingestion (“model refinement” or “model tuning”).
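By way of illustration only, the following sketch (in Python, using invented data) shows the trial-and-error loop described above. The program is never told the rule connecting inputs to outcomes; it adjusts a single parameter until its predictions match the training examples.

```python
# A deliberately simplified illustration of "model training":
# the program is given example inputs and outcomes, but not the rule.
# It guesses a parameter, checks its error against the data, and
# self-corrects until its predictions match the training examples.

training_data = [(1, 2.0), (2, 4.0), (3, 6.0)]  # inputs with known outcomes

weight = 0.0  # the model's single adjustable parameter, initially a guess
for step in range(1000):
    for x, outcome in training_data:
        prediction = weight * x
        error = prediction - outcome   # "Am I right?" check against the data
        weight -= 0.01 * error * x     # small correction in the right direction

print(weight)  # converges towards 2.0 -- the pattern implicit in the data
```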
Ingestion: the process by which data is used in machine learning to train or fine-tune an AI model. The data may have been “tokenised”[2] (often de-identified and usually no longer legible to humans) and may therefore, arguably, no longer fall within many common definitions of “confidential information”. Ingestion will absorb the patterns in training data but will not necessarily replicate the data verbatim in a way that would traditionally be equated with copyright infringement or breach of confidentiality.
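The sketch below illustrates tokenisation using the open-source tiktoken library (an illustrative choice; commercial providers use their own tokenisers). Note that while the integer IDs are not legible to a human, the original text remains recoverable by whoever holds the tokeniser, which is why the word “arguably” above matters.

```python
# Tokenisation illustrated with the open-source tiktoken library
# (pip install tiktoken); commercial AI providers use their own tokenisers.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Smith v Jones: the settlement sum is confidential.")

print(tokens)              # a list of integers, not legible to a human reader
print(enc.decode(tokens))  # but the original text is fully recoverable
```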
Generative AI: a type of AI technology that can create new content, such as text, images or music. The content generated is novel, in the sense that the exact combination of words, pixels or notes may never have existed before, but the system can only produce content with structural patterns similar to the training data it ingested. Generative AI is very different from a search engine, which only locates pre-existing information. A generative AI system is predictive, based on its training data, and is therefore subject to limitations in that data (see: Bias, Hallucination).
Hallucination: the propensity of a generative AI system to create output in which certain components appear plausible but have been invented to fill gaps in the surrounding material. Current AI does not understand the content it generates, and cannot be relied upon to validate its own output. For example, a generalist LLM might produce superficially convincing legal submissions but fill them with fictional case references and invented judicial statements. Hallucination risk is likely to reduce as systems become more refined and specialised, but is unlikely to be eradicated completely.
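A hedged example of one mitigation: rather than asking the AI to validate itself, extract every citation from the output so a human can verify each one against an authoritative database. The pattern and text below are illustrative only; this does not detect hallucinations by itself.

```python
# A minimal sketch: extract Australian medium-neutral citations from
# AI-generated text so each can be checked manually against a law report
# database. The regex and sample text are illustrative only -- a human
# must still verify every citation.
import re

CITATION_PATTERN = re.compile(r"\[\d{4}\]\s+[A-Z][A-Za-z]*\s+\d+")

ai_output = (
    "As held in Glencore International AG v Commissioner of Taxation "
    "[2019] HCA 26, privilege is a shield, not a sword ([2023] QSC 999)."
)

for citation in CITATION_PATTERN.findall(ai_output):
    print("Verify before relying on:", citation)
```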
Bias: a tool created using machine learning will inevitably reflect limitations in its training data. Where the training data does not represent the population or problem the tool is applied to, “bias” or higher rates of inaccuracy may occur. For example, facial recognition software may be less accurate for some genders or ethnicities, depending on the training dataset. Software trained to negotiate contracts in one jurisdiction or language may be effective in another context; however, this cannot be assumed without careful validation.
4. Issues
Due to the rapidly evolving nature of AI technology, no ethics statement can cover all relevant issues. Even more than other ethics guidance, this resource should be read as a statement of principle rather than an attempt to codify the obligations arising from AI use.
Some considerations include:
4.1. Competence and diligence / education
Ethical Duty: If a practitioner uses or permits use of AI technology in their practice, they must ensure that they and relevant staff possess, or have access to, the necessary skills to do so appropriately.
Context: AI systems have differing capabilities and limitations. Practitioners should not use such systems to undertake professional work without a realistic understanding of those limits and a process to mitigate risk.
Appropriate management may include:
- Ensuring relevant people within the law practice understand the fundamental capabilities, limitations, and appropriate application of the AI tools in use.
- The ability to detect when a service in use has an AI component.
- Sufficient knowledge of the relevant area of law and facts pertinent to each matter to detect any errors in the output.
Practitioners should be particularly aware of:
- The frequent changes in AI technology likely to be used in their practice.
- The different types of AI technology that might be used by staff.
4.2. Confidentiality
Ethical Duty: Practitioners should take reasonable steps to ensure that any AI tool they employ does not misuse access to confidential data or unduly expose it to dissemination.
There are two aspects:
- avoiding dissemination of high-risk or Personally Identifiable Information (“PII”); and
- protecting commercial and other value inherent in client data.
Context: AI development depends on access to data, especially data recording how human experts have solved problems. Law firms store large quantities of such information, reflecting both their own work[3] and data supplied by clients and counterparties. The greater the volume of a dataset, and the expertise and effort required to produce it, the more valuable it is likely to be. Privacy regimes designed to protect PII, and copyright regimes designed to prevent reproduction of proprietary data, are not well adapted to preventing misuse of information for AI training.
Reliance on the assumption that a respected brand will not act unlawfully is therefore insufficient.
If the client’s data contains a significant body of technical information, it may be damaging to their interests if it is used to train an AI tool in competition with the client’s business.[4] In addition, all the usual risks of cloud-based storage and processing systems apply to an AI provider.[5]
The measures required to adequately protect confidentiality or privilege may depend on the nature of the information and circumstances of the client. Particularly sensitive or confidential matters may require enhanced protection beyond the capability of a particular provider or service.
Where data use and model training are not addressed in the user agreement, a proper risk assessment is unlikely to be possible and the tool should be avoided.
Appropriate management may include:
A risk assessment to consider (among other issues):
- Data access and use:[6]
  - Where is the data being processed?
  - How and where is the data being stored?
  - Who has access to the data and for what purposes?
  - Does the provider give an express guarantee of confidentiality, other than for uses permitted by the Terms of Service?
If the user agreement does not provide a clear answer to these questions, practitioners should be very hesitant before allowing any access to confidential data at all.
- If the provider uses third party services (often more than one), all issues relevant to the provider also apply to such third parties.
- The nature of data to be submitted and whether it contains privileged or significant technical information;
- Possible impact on Legal Professional Privilege:[7]
  - Are the Terms of Service incompatible with an intention to keep the material confidential?
- Is the provider’s cybersecurity methodology certified or audited?[8]
- Whether anonymising the data (for example, by not using party names in prompts or in drafts submitted for revision by AI) or running the tool in a sandbox environment is feasible and would reduce confidentiality risks (a simple illustrative sketch appears after this list).
- Whether uploading data for processing via AI may violate your firm’s or client’s data use obligations. Examples include copyright,[9] Privacy Act 1988 (Cth) data handling obligations, licence restrictions, non-disclosure agreements and a party’s obligation not to misuse discovery materials.
- Be aware that many AI products (and online services generally) collect information about the practitioner as the user, and may share the user’s personal information and metadata with unspecified third parties without informing the user.[10]
- Ideally, data obtained under compulsion should not be uploaded to third party AI tools unless the data owner has been informed and any objections considered.
- Determine whether the provider retains information submitted by the practitioner before and after the discontinuation of services, or asserts proprietary rights over that information. Some AI providers retain ownership of the output data.
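As foreshadowed in the anonymisation point above, one simple way to reduce what an external service sees is to substitute placeholders for party names before a prompt leaves the firm, and restore them locally afterwards. The sketch below (Python, with invented names) is illustrative only; real matters usually require far more robust de-identification.

```python
# A minimal sketch of anonymising a prompt before it leaves the firm.
# Party names are swapped for placeholders and restored locally afterwards.
# Names and text are invented; production use would need far more robust
# de-identification (addresses, dates, account numbers, other identifiers).
replacements = {
    "Smith Pty Ltd": "[PARTY A]",
    "Jane Citizen": "[PARTY B]",
}

def anonymise(text: str) -> str:
    for real, placeholder in replacements.items():
        text = text.replace(real, placeholder)
    return text

def reidentify(text: str) -> str:
    for real, placeholder in replacements.items():
        text = text.replace(placeholder, real)
    return text

draft = "Smith Pty Ltd agrees to indemnify Jane Citizen against all claims."
safe_prompt = anonymise(draft)   # only this leaves the firm's systems
print(safe_prompt)               # placeholders instead of party names
print(reidentify(safe_prompt))   # restored locally after the AI responds
```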
4.3. Transparency and disclosure
Ethical Duty: Where feasible, clients should be advised if an AI tool will be used when performing their work. The more significant the AI contribution to a client’s work will be, the more stringent the obligation to bring this to their attention.
Context: Disclosure to clients is always a balancing exercise. Some clients, especially those with particularly sensitive or valuable information, may have stringent policies or views concerning AI use. Others may have little interest in how you intend to deliver the services contracted.
There is no reason to suggest that, if selected and used appropriately, an AI tool cannot supply reliable work. However, particularly with emerging technology, any special risks or characteristics should be discussed.
Appropriate management may include:
- Disclosing AI tools the practice uses in the retainer agreement or letters of engagement. In many cases, this may be sufficient.
- Where the tool will be used to undertake substantial tranches of work that would ordinarily be done manually, the client should be apprised of this fact, the types of data to which the system will be given access and how relevant risks will be managed.
- In addition to compliance with applicable Practice Directions, if AI generated material is supplied during court proceedings (whether this originates from the client or the legal practice), reasonable efforts should be made to disclose the AI involvement and to outline controls in place to ensure accuracy if called upon to do so.
4.4. Supervision and accountability
Ethical Duty: AI tools should be used as an aid to, and not a replacement for, professional judgement. As with any legal work, legal practitioners remain responsible for supervising the use of AI technologies and ensuring alignment with ethical standards and client interests.
An error in a technical system may explain, but will not excuse, negligent client service delivery or misleading a third party. The AI vendor is not the provider of the legal service, and the firm is likely to remain responsible for errors.
It may be prudent to check with your Professional Indemnity Insurer to ensure that contemplated use is covered.
Appropriate management may include:
Supervision: Each piece of discrete work produced using an AI system (or any other production method) should be checked. This does not necessarily require duplicating all work manually to confirm the result, although very close initial supervision and regular auditing of output are essential.
Analogous to supervising a junior employee, the level of appropriate supervision is informed by a variety of factors including:
- The complexity of the task;
- How long the tool has been in operation; and
- Whether the tool was specifically trained for the task or has been repurposed.
The practitioner checking AI system input and output must have sufficient knowledge of the area of law to be capable of detecting errors.
The product (or bundle of products) supplied is likely to undergo continual revision and updates. This may improve the output, but may also have the opposite effect. Periodic auditing and evaluation of the system as a whole is an important aspect of ongoing supervision.
Control: AI tools are attractive, especially to those who see a shortcut in completing tedious, routine work. Each firm should clearly articulate that any use of AI in legal work or on firm systems must be undertaken only with approval, and what conditions of use apply to approved products. A template policy is available on the QLS Website.[11]
Accountability: The Legal Practitioner Director or Partners are responsible for the quality and nature of the work produced using AI systems. Such responsibility is wider than checking the accuracy of individual items, and will extend to such issues as:
- Duties to the Court and the Administration of Justice;
- Avoiding conflicts of interest;
- Detecting fraud and money laundering; and
- Exercise of professional judgement prior to making allegations or engaging the court’s processes.
Conflict: While AI use may ultimately change the approach to conflict of interest, in the short and medium term practitioners should treat services supplied via AI like any other for the purposes of current and former client conflict.[12] Conflict checks prior to commencing legal work remain important.
AI material supplied by a client: If a client is relying on evidence produced using AI, it must (like all evidence) be scrutinised before being relied upon. Arguably, given the increased risk of error, a party is obliged to disclose the involvement of AI in creating business records, reports and similar materials.
4.5. Legal costs
Ethical Duty: Legal costs must be:
- fair and reasonable;
- disclosed if required by the Legal Profession Act 2007 (Qld); and
- calculated in accordance with the contract of retainer.
Appropriate management may include:
Client work vs practice overhead: Adopting and maintaining an AI tool in legal practice requires extensive due diligence and ongoing supervision. Most of this work, and any AI licence fees charged on a “per user”, lump sum or annual licence basis, are a practice overhead that cannot be billed to a client.[13]
Time spent prompting an AI and checking the output with respect to a specific client’s matter can be recorded and billed to that client in the usual way if charging on a time basis.
An AI licence fee (or other software fee) charged on a “per matter” basis fits the usual definition of “disbursement” more easily.[14] To avoid uncertainty, such costs should be included in the definition of “disbursement” in the costs agreement.
Basis of charging: If the retainer provides for remuneration on a time basis, only time spent on work that is quantifiable and attributable to the specific matter can be charged. Time-based billing entries must remain accurate and may not be adjusted upwards to reflect the time the work would usually take if done manually.
The financial benefit of time saved using an AI tool is therefore the client’s, not the firm’s, if billing on a time basis. For example, if a tool completes in ten minutes a drafting task that would take two hours manually, only the ten minutes (plus time spent checking the output) may be recorded. It is a common criticism of time-based billing that it discourages investment in law practice efficiency.
For this reason, fixed fee or hybrid charging may be more appropriate when pricing work that will include a significant AI component.
However, even a fixed fee must remain “fair and reasonable”. One of the factors traditionally considered is the time spent producing the work.[15] It may be prudent to retain records of the indirect production costs of the AI system to explain why work completed in seconds nevertheless justifies a substantial fee.
5. More Information
Solicitors are also referred to Queensland Law Society, The Australian Solicitors Conduct Rules 2012 in Practice: A Commentary for Australian Legal Practitioners (2014).
Further resources:
The QLS Guide to choosing and using cloud services.
For Lexon-insured practices, the Lexon Software Risk Analysis & AI Last Check.
For further assistance, please contact an Ethics Solicitor in the QLS Ethics and Practice Centre on 07 3842 5843 or ethics@qls.com.au or a QLS Senior Counsellor.[16]
* Updated 24 October 2024
[1] Colleen Chien and Miriam Kim, ‘Generative AI and Legal Aid: Results from a Field Study and 100 Use Cases to Bridge the Access to Justice Gap’ (Research Paper forthcoming, Berkeley School of Law, University of California, 11 April 2024).
[2] Broken into constituent parts. Tokenised data is often not readable by humans.
[3] The embodied “value” of which is arguably the firm’s, even if the documents themselves are the client’s.
[4] David Bowles, ‘Artificial intelligence: Do you have a usage policy?’, QLS Proctor (online, 24 April 2023).
[5] For more information, see Queensland Law Society, Choosing and using cloud services (Reference Guide, 7 December 2022) (‘QLS Guide: Choosing and using cloud services’).
[6] Analysing data use permissions may be challenging and requires careful examination of both the Terms of Service and the Privacy Statement. For more information, see the Companion Guide.
[7] See Glencore International AG & Ors v Federal Commissioner of Taxation & Ors [2019] HCA 26. Client legal privilege is a shield to resist disclosure, not a tool to restore confidentiality to information which is no longer so.
[8] For more guidance, see QLS Guide: Choosing and using cloud services (n 5).
[9] At the time of publication, there is extensive litigation in multiple jurisdictions exploring the intersection between AI training, web scraping and copyright law.
[10] Legal Practitioners’ Liability Committee, Limitations and risks of using AI in legal practice (Guidelines, 17 August 2023).
[11] Queensland Law Society, Artificial Intelligence Policy Template (Guide, 19 April 2023).
[12] Queensland Law Society, Australian Solicitors’ Conduct Rules (at 27 September 2023) rr 10, 11.
[13] Legal Services Commission, Charging Outlays and Disbursements (Regulatory Guide No 1, July 2021).
[14] Equuscorp Pty Ltd v Wilmoth Field Warne (No 4) [2006] VSC 28 [53]–[58] (Byrne J).
[15] Legal Profession Act 2007 (Qld), s 341(2)(d).
[16] ‘QLS Senior Counsellors’, Queensland Law Society (Web Page).