Lawyers of Tomorrow
Embracing Science and Humanity in the AI Age
Dear Learned Friends,
Congratulations on being called to the Bar!
Three years ago, when I was called to the Bar, the world around us was very different. While there was buzz about whether artificial intelligence (AI) would one day replace lawyers, no one could have predicted that a new disruptive technology would so soon change the future of work.
ChatGPT was launched in November 2022. Just two months later, it had 100 million active users and became the fastest-growing consumer internet application in history.[1] ChatGPT gained traction because of its unprecedented[2] natural language processing (NLP) capabilities. The chatbot could provide detailed, instantaneous responses to questions, draft emails and even write code[3] – never had the public seen a chatbot come so close to passing the Turing test.[4] With ChatGPT’s immense potential for further growth and development, the world soon realised that ChatGPT was more than just a fad.
The advent of ChatGPT underscores the importance of recognising that we live in a VUCA[5]/BANI[6] world and accepting that lifelong learning is now more than just a “slogan”[7] – it is a sine qua non. As young lawyers embarking on your legal careers, you must learn to embrace AI.
Embracing AI Tools for Legal Practice
Law firms in Singapore are increasingly adopting AI tools to automate routine and/or basic legal tasks.[8] As junior lawyers, you will be at the forefront of this change. You should thus familiarise yourselves with the suite of AI tools available to help lawyers streamline legal practice. These tools include:
- Legal research tools such as Lexis+ AI, which summarises cases, answers legal questions and generates legal research memorandums.[9] Closer to home, the Infocomm Media Development Authority (IMDA) and the Singapore Academy of Law (SAL) have also co-developed a new large language model (LLM), GPT-Legal, which summarises legal cases. GPT-Legal is slated to be deployed on the local research portal LawNet from September 2024 onwards;[10]
- Project management tools such as Lupl, which allows lawyers to allocate tasks and track completion statuses, and reminds lawyers about upcoming deadlines;[11]
- Contract review tools such as Spellbook, a Microsoft Word add-in which helps lawyers review contracts and suggest new clauses based on prescribed requirements;[12]
- E-discovery tools such as Relativity aiR, which uses NLP and AI-powered predictions to conduct document reviews and identify relevant documents;[13]
- AI-powered virtual assistants such as Microsoft Copilot, which takes call notes, generates meeting minutes and flags action items;[14] and
- Speech-to-text software such as EpiqFAST, which uses AI to provide instantaneous transcription, speaker identification and separation, and audio search functions to aid lawyers in the courtroom.[15]
Adopting these AI tools may help you significantly reduce the time spent on routine, time-intensive and/or non-billable work, enhancing your efficiency and allowing you to focus your attention on conducting higher-level analyses of cases, building client relationships through business development and taking on more cases (or, alternatively, pursuing work-life balance).
Understanding the Science Behind AI
The oft-unspoken caveat is that you will first need a firm understanding of how AI works if you wish to use it effectively. This is because AI is not magic – it is a science.
Take, for instance, the use of AI to conduct e-discovery.[16] Since lawyers have a “duty of involvement in and supervision of the disclosure process”,[17] lawyers who intend to conduct e-discovery should be able to properly instruct e-discovery service providers and understand how relevant documents will be identified and sorted. Having a basic understanding of how AI works will aid you in this process. For instance:
- You will realise the importance of explaining the factual background of your case to the e-discovery service provider before they commence work on your case:
  - Subject Matter: If your case concerns a technical or niche area of law such as construction law or maritime law, the service provider may be able to train or fine-tune the AI model to understand and recognise industry-specific jargon. This will improve the model’s ability to accurately identify relevant documents and sort them into prescribed categories; and
  - Dramatis Personae: Providing the e-discovery service provider with a dramatis personae[18] can facilitate named entity disambiguation (NED), i.e. the process of recognising entities and linking them to their corresponding canonical entities.[19] NED allows the AI model to better understand the nature of the communications between key players and identify relevant information with greater precision, in turn improving the quality of the AI model’s output. For instance, if a document refers to “the vessel” and the only vessel mentioned in the documents is the “MV SHIP”, the AI model will understand that any reference to “the vessel” is likely a reference to the “MV SHIP” (a toy illustration of this alias-linking appears after this list).
- You will understand why the e-discovery service provider will need to know the file types of the documents provided. Depending on the file format, the service provider may need to use:
  - A speech recognition tool to transcribe audio files;
  - Computer vision techniques (such as facial recognition, entity recognition and image or video object segmentation) to identify the content of images and videos. This facilitates the sorting process by helping the service provider track specific persons or objects across images or multiple frames in the video; and/or
  - Optical Character Recognition (OCR) to recognise handwritten or printed characters (through methods such as pattern matching or feature recognition)[20] and extract text from PDF documents and image files that contain text.
- You may consider ways to enhance the accuracy of the automated document identification and sorting process, in order to speed up the final human review of the documents.[21] For instance, you may request that the e-discovery service provider:
  - Cull and clean the data through de-NISTing (i.e. by calculating and comparing the hash values of files to identify and filter out irrelevant documents such as temporary file logs and blank files) and deduplication (i.e. by running a deduplication script to identify and hide/remove duplicate files) – see the hashing sketch after this list;
  - Use Retrieval-Augmented Generation (RAG) to flag and retrieve relevant documents from the file database. For instance, if the Court has requested that your client disclose “all documents relating to the patent application on 1 January 2024”, RAG can be used to retrieve all documents containing the words “patent”, “application” and “1 January 2024”, and generate a list of these documents; and
  - Fine-tune the AI model via reinforcement learning from human feedback (RLHF).
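To make the ideas above more concrete, the two Python sketches below illustrate (i) the alias-linking intuition behind NED and (ii) hash-based culling and deduplication. They are minimal sketches only: production e-discovery platforms rely on trained statistical or neural models and published reference hash lists rather than hand-written lookup tables, and every name, entity and hash set below is hypothetical.

```python
import re

# Hypothetical alias map distilled from a dramatis personae: generic references
# on the left, canonical entities on the right ("ABC Shipping Pte Ltd" is invented).
ALIASES = {
    "the vessel": "MV SHIP",
    "the charterer": "ABC Shipping Pte Ltd",
}

def link_entities(text: str, aliases: dict[str, str]) -> str:
    """Replace known generic references with their canonical entity names."""
    for alias, entity in aliases.items():
        text = re.sub(re.escape(alias), entity, text, flags=re.IGNORECASE)
    return text

print(link_entities("The charterer confirmed that the vessel sailed on 3 May.", ALIASES))
# -> "ABC Shipping Pte Ltd confirmed that MV SHIP sailed on 3 May."
```

The culling step can be sketched in the same spirit, assuming you are handed a set of hash values for known irrelevant system files (in practice this would come from a published reference list rather than being typed out by hand):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def cull_and_deduplicate(folder: str, known_irrelevant_hashes: set[str]) -> list[Path]:
    """Keep one copy of each substantive file: drop files whose hashes appear on a
    known-irrelevant list (the de-NISTing idea) and exact duplicates (deduplication)."""
    seen: set[str] = set()
    kept: list[Path] = []
    for path in sorted(Path(folder).rglob("*")):
        if not path.is_file():
            continue
        file_hash = sha256_of(path)
        if file_hash in known_irrelevant_hashes:  # e.g. temporary file logs, blank files
            continue
        if file_hash in seen:  # identical content already kept once
            continue
        seen.add(file_hash)
        kept.append(path)
    return kept
```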
In addition, with an understanding of the science behind AI, you will be able to recognise its limitations and take steps to mitigate the risks it brings.
You may have heard of the recent case where two lawyers in the US were fined for submitting a memorandum that contained six fictitious cases generated by ChatGPT, some of which even contained bogus quotes and citations.[22] This case illustrates one of the most common risks of using Generative AI (GenAI) – that LLMs may hallucinate (i.e. they may invent new information that is inaccurate and/or nonsensical).[23]
While it is impossible to completely prevent LLMs from hallucinating,[24] users of LLMs can take steps to mitigate the risks of hallucination, for instance through:
- Using the SLM/LLM-as-Judge technique: You can use a second LLM or SLM[25] to serve as a judge and conduct single answer grading. This means that the second LLM or SLM will assign a score to the response provided by the original LLM,[26] allowing you to cross-check the veracity of the original LLM’s response (a minimal sketch of this technique appears after this list).
- Prompt engineering: Prompt engineering techniques can be used to instruct LLMs to generate relevant and accurate responses, reducing the risk of hallucinations (an illustrative prompt appears after this list). For example:
  - You should ensure that your prompt contains four elements:
    - The instruction – the task that the LLM is required to perform;
    - The context – background information to help the LLM provide tailored responses to your question;
    - The input data – the specific question to be answered; and
    - The output indicator – the type of output required to be produced.[27]
  - You should ask the LLM to cite its sources. Note that this will generally only be possible if the model uses RAG.
  - You can use chain-of-thought prompting to guide the LLM in performing complex reasoning, by requesting that the LLM provide a step-by-step breakdown of its reasoning process instead of jumping straight into providing a final answer.[28]
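As a rough illustration of the LLM-as-Judge technique in the first bullet above, the sketch below asks a second (smaller) model to grade the original model’s answer. The ask_model stub, the model names and the grading rubric are all placeholders for whatever provider, models and review policy your firm actually uses; this is a pattern sketch, not a production tool.

```python
JUDGE_PROMPT = """You are a strict legal reviewer. Grade the answer below for factual accuracy.
Question: {question}
Answer to grade: {answer}
Reply with a single integer score from 1 (unreliable) to 10 (fully supported),
followed by a one-sentence justification."""

def ask_model(model_name: str, prompt: str) -> str:
    """Placeholder: swap in whichever SDK or internal gateway your firm actually uses."""
    raise NotImplementedError

def judge_answer(question: str, answer: str, judge_model: str = "hypothetical-judge-slm") -> str:
    """Single answer grading: a second LLM/SLM assigns a score to the original LLM's response."""
    return ask_model(judge_model, JUDGE_PROMPT.format(question=question, answer=answer))

# Illustrative usage:
# draft = ask_model("hypothetical-primary-llm", "Summarise the limitation period for contract claims.")
# print(judge_answer("Summarise the limitation period for contract claims.", draft))
```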
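And to tie together the four prompt elements and chain-of-thought prompting listed above, here is one illustrative way to assemble a prompt; the wording of each element is invented for illustration and should be adapted to your matter and your firm’s AI policy.

```python
# The four elements of a well-formed prompt (all wording below is illustrative).
instruction = "Summarise the key holdings of the attached judgment for a client update."        # the instruction
context = "The client is a main contractor in a construction dispute over liquidated damages."  # the context
input_data = "Judgment text: <paste the judgment here>"                                          # the input data
output_indicator = "Respond in no more than five plain-English bullet points."                   # the output indicator

# Chain-of-thought prompting: ask the model to set out its reasoning before the final answer.
chain_of_thought = "First set out your reasoning step by step, then give the final summary."

prompt = "\n\n".join([instruction, context, input_data, output_indicator, chain_of_thought])
print(prompt)
```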
Having a foundational and up-to-date understanding of how AI works is thus crucial.
Your Human Edge
AI is transforming and will continue to transform how we work and what we do. As you embrace AI to future-proof your careers, do not forget to embrace your humanity.
In July 2016, the Institute for the Advancement of the American Legal System (IAALS) published a report titled Foundations for Practice: The Whole Lawyer and the Character Quotient. According to the report, lawyers ought to have “some threshold intelligence quotient” (IQ), “a favourable emotional quotient” (EQ) and “some level of character quotient” (CQ). In particular, good junior lawyers should possess not just hard legal skills such as legal research and writing skills but also professional competencies (like being able to use technology effectively) and characteristics such as integrity and trustworthiness.[29]
What sets you apart from machines and prevents you from being replaced by machines is the very fact that you are human. You think humanly and behave humanly. While a machine can be trained to become a legal powerhouse, it will (at least in the foreseeable future) always lack the human touch necessary for any lawyer to become a trusted advisor to clients. To stay ahead of the curve, find and demonstrate your added value as a human lawyer.
The Road Ahead
Once again, welcome to the legal profession. As you don your court robes and embark on this new chapter of your lives, remember to embrace both science[30] and humanity in this AI age.
The future of law is yours to create.
This article is written in the author’s personal capacity and all views expressed herein are the author’s own views. The author is grateful to Mr Gary Seet, Strategic Initiatives Lead at Amazon Web Services, for generously sharing his expertise in AI, which has added immense value to this article.
Endnotes
1. https://www.forbes.com/sites/cindygordon/2023/02/02/chatgpt-is-the-fastest-growing-ap-in-the-history-of-web-applications/
2. While the technology behind ChatGPT was not new, previous versions of the technology had not been pitched to the public. See https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/
3. https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/
4. A machine passes the Turing test if an interrogator cannot correctly identify whether the response to a question was provided by a human or a machine.
5. The four components of a VUCA world are volatility, uncertainty, complexity and ambiguity.
6. The BANI framework describes the world as brittle, anxious, nonlinear and incomprehensible.
7. https://www.aei.org/articles/how-new-graduates-can-thrive-in-a-workplace-dominated-by-ai/
8. https://www.judiciary.gov.sg/news-and-resources/news/news-details/chief-justice-sundaresh-menon–keynote-address-at-litigation-conference-2024
9. https://abovethelaw.com/2024/01/inside-lexis-ai-lexisnexis-latest-research-tool/
10. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/factsheets/2024/gpt-legal
11. https://lupl.com/for-lawyers/ and https://lupl.com/blog/lupls-enhanced-user-experience/
12. https://www.spellbook.legal/
13. https://www.youtube.com/watch?v=iTlcPEZajF0
14. https://customers.microsoft.com/en-us/story/1765414330609650283-rajahtann-microsoft-365-copilot-professional-services-en-singapore
15. https://www.businesstelegraph.co.uk/bringing-singapore-born-ai-innovation-to-the-world-singapore-microsoft/
16. E-discovery refers to the process of reviewing and sorting electronic evidence into prescribed categories for the purposes of disclosure to a court or arbitral tribunal.
17. Teo Wai Cheong v Credit Industriel et Commercial [2013] SGCA 33 at [44].
18. A dramatis personae is a document that lists the key persons and entities involved in the dispute and explains the relationship between these persons and entities.
19. https://towardsdatascience.com/improving-named-entity-disambiguation-using-entity-relatedness-within-wikipedia-92f400ee5994
20. https://aws.amazon.com/what-is/ocr
21. Final oversight by lawyers is required: Simon Chesterman, Goh Yihan and Andrew Phang Boon Leong, Law and Technology in Singapore at [06.015].
22. https://www.straitstimes.com/world/united-states/us-lawyer-sorry-after-chatgpt-creates-fake-cases-in-his-court-filing
23. https://www.ibm.com/topics/ai-hallucinations
24. This is especially so from the perspective of users, since hallucinations usually occur because the LLM was not trained with quality data or was developed using faulty model assumptions or architecture. See https://www.digitalocean.com/resources/article/ai-hallucination
25. SLMs refer to small language models, which are streamlined versions of LLMs that are used to perform specific tasks. See https://thenewstack.io/the-rise-of-small-language-models/
26. https://arxiv.org/pdf/2306.05685
27. https://www.promptingguide.ai/introduction/elements
28. https://arxiv.org/abs/2201.11903
29. https://iaals.du.edu/sites/default/files/documents/publications/foundations_for_practice_whole_lawyer_character_quotient.pdf
30. Please ensure that your use of AI complies with your firm’s AI policy and any applicable legal and ethical rules/guidelines, especially your obligations of due diligence and confidentiality under Rule 5(2)(c) and Rule 6 of the Legal Profession (Professional Conduct) Rules 2015.