
The Singapore Law Gazette

Navigating the Risks – Generative AI and Legal Practice

In recent years, the legal industry has experienced a notable transformation with the emergence of generative artificial intelligence (GenAI) built on large language model (LLM) systems such as ChatGPT, Bard, Bing and Copilot. These and other similar LLM tools present exciting prospects for streamlining legal research and drafting documents.

While GenAI holds great potential for enhancing legal practice, its integration requires careful consideration. This article discusses the legal and regulatory implications of using LLM tools in legal practice.

What is Large Language Model (LLM) Software?

Large language model (LLM) software utilises sophisticated algorithms and vast amounts of data to understand, generate and process human language.

It is not a conventional research tool: it does not analyse information, and it is not capable of autonomous thought.

Instead, it functions as an advanced iteration of predictive text systems that we commonly encounter in emails and smartphone chats. The algorithm anticipates the next word likely to be used.

LLM machine learning algorithms are designed such that their responses appear as if they were written by a human being, or as if the machine were capable of thinking for itself.
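To make this concrete, the toy sketch below illustrates, in deliberately simplified form, what “predicting the next word” means. It is a word-frequency model written purely for illustration; real LLMs use neural networks trained on vast corpora, but the underlying task of choosing the most probable next word is the same.

```python
from collections import Counter, defaultdict

# Toy illustration only: real LLMs use neural networks trained on vast
# corpora, not simple word counts, but the core task is the same --
# predict the most likely next word given what came before.
corpus = (
    "the court held that the contract was void "
    "the court found that the contract was valid"
).split()

# Count how often each word follows each preceding word (a "bigram" model).
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("contract"))  # "was" -- seen twice in the corpus
print(predict_next("court"))     # "held" or "found" -- a tie, so the model simply picks one
```

A model like this has no concept of truth: in the sample text, “the contract was” is followed by “void” and “valid” equally often, and the model will happily assert either. The same limitation, at vastly greater scale, underlies the risks discussed below.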

Key Risks When Using GenAI Tools1

  1. Anthropomorphism 

    Due to the human-like characteristics of LLM tools, users are often led to overestimate their capabilities, which results in the following:

    1. false expectations about the AI’s ability to understand context, analyse information or make accurate assessments;
    2. erroneous assumptions that the AI comprehends concepts such as truth or accuracy, leading to reliance on potentially flawed or biased outputs;
    3. inadvertent trust in the outputs of LLMs as if they were reliable sources of information, without recognising the limitations or biases inherent in the algorithms, leading to the propagation of misinformation or the reinforcement of existing biases.
  2. Hallucinations 

    LLMs are prone to “hallucinations”, where the outputs generated may sound plausible but are factually incorrect, unrelated to the given context or even entirely fictitious, leading to information disorder and misinformation. The risk of relying on erroneous information, resulting in wrong legal advice or a compromise of confidentiality, is serious. The sanctions imposed on the lawyers in the 2023 New York case Mata v Avianca, Inc are an example of the damage that can be inflicted on a hard-earned reputation when a court is misled.

    The decision-making processes of LLMs are also opaque, and LLMs hardly ever provide clear explanations of their outputs. Accordingly, never take the system’s output at face value and always verify the output of LLM software.

  3. Bias in Training Data 

    Another key risk lies in the manner in which an LLM is “trained”. Training data is usually trawled from the internet, which results in LLMs producing content that may be biased, discriminatory or offensive. Such biases may include gendered and racist views.

  4. Erroneous or Confidential Training Data 

    LLMs use the inputs from users’ prompts to continue to develop and refine the system. As a result, anything a user types into the system may be used to train the software and may be repeated verbatim in future results. Issues arise if the information typed into the system is incorrect, confidential or subject to legal professional privilege.

  5. Legal Professional Privilege, Confidential Information and Data Protection 

    Be extremely vigilant not to share any legally privileged or confidential information (including trade secrets) or any personal data with any GenAI tool. The information provided is likely to be used to generate future outputs and could potentially be shared publicly with other users. Any such sharing of confidential information is likely to breach Rules 5 and 6 of the Legal Profession (Professional Conduct) Rules 2015, which could also result in disciplinary proceedings and/or legal liability.

  6. Intellectual Property2 

    Always assess whether content generated by LLMs violates intellectual property rights, especially third-party copyright. As a sizable amount of text data, such as books, papers and other written materials, was used to train LLMs, it is possible that content produced may violate copyright or other IP rights in previously published materials.

    Further, be mindful not to use, in response to system prompts, words which may breach trade marks or give rise to a passing-off claim. Often, the terms of service of an LLM give the company that owns the LLM tool unlimited use of information entered into the system.

Some Risk Mitigation Measures

With GenAI LLM technologies developing rapidly and new models and advances being introduced regularly, it is critical, before use, to understand the underlying model, acknowledge its limitations and be aware that the legal and regulatory landscape on the use of AI is subject to constant change.

Lawyers who wish to use GenAI LLM systems should make an effort to understand the strengths and weaknesses of these systems so that they can be used with control and integrity.

Lawyers and staff should receive comprehensive training on the use of GenAI and its legal and ethical implications. Proper understanding and responsible use of reliable AI tools are essential to ensuring accuracy and compliance in legal services.

Consider designating supervisors or setting up an ethics committee in the firm to oversee and monitor compliance, guide practitioners and address emerging ethical dilemmas.

Update or develop internal firm policies to deal with the risks of using GenAI tools in legal practice.

Practical Pointers When Using GenAI Tools

Users need to remember that GenAI tools are not adept at finding facts. They are best used as a supplement to, or starting point for, your work product, or for summarising content, rather than as a definitive source.

GenAI tools excel at “thinking creatively” and producing content based on the input and context provided by the user, known as prompts.

Hence, do not rely on GenAI tools to find facts; use them to “think” and to “produce content in a creative manner”, based on the input and text provided by you. Prompts should be detailed and tailored to your specific needs.

A GenAI tool’s first response to a prompt is usually coherent and well written. It is tempting to cut and paste the text, run a grammar and spell check, and send it out. This should be avoided.

Instead, you should prompt it a few more times to go deeper until you are satisfied you have the information you need. Even with that done, do not just cut and paste the text. Review, analyse, edit and rewrite the text. Use your judgment (the critical lawyer brain) to delete and amend the text. Put your “own voice” into the document.

When using GenAI tools, it is crucial to always bear in mind the following:

  1. Decide whether the use of GenAI tools is appropriate to the task at hand, bearing in mind the legal and ethical considerations outlined above. GenAI cannot be used as a substitute for the proper exercise of a lawyer’s professional judgment and the lawyer’s own work.
  2. Nothing is confidential. Make sure prompts are drafted in a hypothetical context and without private or personal details (a simple redaction sketch follows this list).
  3. Verify responses. Responses from GenAI tools may not be accurate and may even contain false content. Sometimes, a tool may use segments of content from another source without permission. Avoid replicating text generated by GenAI without verifying the accuracy, reliability and currency of the information. Always ensure the information generated is consistent with your own knowledge and research.
  4. Understand and review the response from the GenAI tool. Determine whether it is merely flipping words and adding variations of the first sentence, as is typical of GenAI-produced text. Do not hesitate to delete and edit meaningless filler and fluff. In particular, do not waste time editing the draft from GenAI to “preserve the writer’s intent” (the GenAI tool has no feelings).
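On pointer 2 above, one practical guardrail is to strip obviously identifying details from a prompt before it leaves the firm’s systems. The sketch below is a minimal illustration of that idea; the patterns shown are hypothetical placeholders, not a complete safeguard, and any real firm policy would need far broader coverage and human review before anything is sent to a GenAI tool.

```python
import re

# Hypothetical, illustrative patterns only; a real policy would need a far
# more comprehensive set (addresses, case numbers, company names, and so on).
REDACTIONS = [
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "[CLIENT NAME]"),  # naive two-word name match
    (re.compile(r"\b[STFG]\d{7}[A-Z]\b"), "[NRIC]"),                # Singapore NRIC format
    (re.compile(r"\b[\w.+-]+@[\w.-]+\b"), "[EMAIL]"),
]

def redact(prompt: str) -> str:
    """Replace obviously identifying details with neutral placeholders
    before the prompt leaves the firm's systems."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

draft = "Advise whether Tan Weiming (S1234567A, weiming@example.com) can terminate his lease early."
print(redact(draft))
# Advise whether [CLIENT NAME] ([NRIC], [EMAIL]) can terminate his lease early.
```

Even with such a filter in place, the safest course remains the one in the pointer itself: draft the prompt hypothetically, so that there is nothing sensitive to leak in the first place.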

Conclusion

Despite the challenges and frustrations associated with using GenAI tools for drafting, these tools are becoming increasingly entrenched in legal practice. Rather than avoiding them, it is important for legal professionals to learn how to utilise them effectively.

When employed appropriately and responsibly, GenAI tools can enhance legal services by offering innovative approaches. While they cannot replace the nuanced skills and judgment of a lawyer, GenAI has the potential to revolutionise how legal expertise is applied.

Integration of GenAI into legal practice is not just advantageous but essential for maintaining competitiveness. By carefully planning integration and implementing policies covering aspects such as confidentiality, data security, and ethical considerations, law firms can harness the benefits of AI technology while upholding ethical standards and regulatory compliance.

Endnotes
1 A detailed discussion of the risks of generative AI is set out in IMDA’s June 2023 discussion paper on “Generative AI: Implications for Trust and Governance”. Discussion_Paper.pdf (aiverifyfoundation.sg)
2 Refer to the Singapore Law Gazette November 2023 issue for a discussion on intellectual property challenges of generative AI. Generative AI – The Singapore Law Gazette

Angeline has more than 30 years’ experience in the legal profession, having practised corporate law before specialising in knowledge and practice management in law firms.