
The Singapore Law Gazette

Robots and Legal Reasoning: Thinking Like a Lawyer 2.0

This article provides a brief introduction to how artificial intelligence is likely to affect the raison d'être of lawyers – the ability to exercise legal reasoning. With increased computing power and computational research, the way that lawyers and judges reason is likely to change significantly in the future. Lawyers and judges will need to upgrade their legal reasoning skills from 1.0 to 2.0 and make a paradigm shift in how to think like a lawyer.

Introduction

In 1949, in the aftermath of World War 2 and when the first modern computers were being developed, Professor Lon Fuller wrote a famous article for the Harvard Law Review entitled “The Case of the Speluncean Explorers” (“Speluncean Explorers case”).1 Set in the year 4300 in the fictional world of Newgarth, the case involved four cave explorers who were charged with wilfully taking the life of their colleague inside a cave. The entrance to the cave had been completely sealed by a landslide and the five explorers had gone without food for more than 20 days. In order to survive longer, they agreed to cast lots to ascertain who should be eaten by the remaining explorers. Before the roll of the dice, one of the explorers decided to withdraw from the agreement, but as luck would have it, he became the unfortunate victim of cannibalism. The four surviving explorers, who were later rescued, were convicted of the crime of murder and sentenced to the death penalty.

On appeal before the Supreme Court of Newgarth, the five judges came to different conclusions about the fate of the four defendants. Two judges voted to set aside the conviction, one voted to affirm the conviction, one decided to abstain and the final judge (the Chief Justice) preferred that the defendants should seek executive clemency. Controversies over the proper role of the Executive and the Judiciary, whether the survivors’ conduct should be governed by the law of nature or the law of the land and what the proper purpose and scope of the criminal provision should be were all debated in the five intriguing judicial opinions. Professor Fuller’s essay has often been highlighted as illustrative of the breadth of legal reasoning.2

Today, with the rapid rise of artificial intelligence (“AI”), one may wonder whether, in the year 4300, we will still have or need human judges or lawyers. Some may even say that we need not wait that long, as robots may take over the practice of law by the year 2050!

In the past few years, there has been an explosion of literature on the possibility of robots replacing lawyers, or robots becoming lawyers. The frissons created by recent AI developments in the global legal industry have triggered speculation and concerns that lawyers will soon be made redundant. While a number of commentators have observed that robots may be able to automate many of the legal services that lawyers currently provide to their clients, what has been less examined is the more fundamental fear that robots will greatly undermine the raison d'être of lawyers – the ability to exercise legal reasoning, or simply put, the ability to think like a lawyer.

This article provides a brief introduction to how developments in AI, which have already made a substantial impact in rule-based games such as chess, are likely to change significantly the way that lawyers and judges reason in the future.

The Complexity of Legal Reasoning

It has been observed that legal reasoning is a form of “expert reasoning”, with “its own terminology, its own universe of acceptable data, and its own rules”.3 In civil law systems, at its earliest conception during the Roman empire, legal reasoning was focused on whether a claimant had a legal remedy.4 There were no legal rules as such, and the facts of each case determined whether a legal action existed.5 Gradually, legal reasoning in civil law became more rule-based, as legal scholars and jurists formulated legal definitions and propositions.6

In the English common law system, legal reasoning was traditionally concerned with whether the “right form of action had been brought”, for example, actions of debt or trespass.7 However, since the 19th century, the common law system replaced the forms of action with doctrinal law such as contracts and torts.8 With the rise of the regulatory state, the interpretation and application of legal rules in legislation also became prominent.9 Together, common law reasoning and statutory interpretation form the bedrock of what law students today typically learn in law school.

In view of the modern rule-based nature of law, law has often been compared with games governed by clear and identifiable rules, such as chess. Writers on the subject of legal reasoning have typically viewed legal rules as much more complex than the rules of chess because of the broad universe that the law occupies. In his classic primer on legal reasoning, Thinking Like a Lawyer: A New Introduction to Legal Reasoning (“Thinking Like a Lawyer”),10 Professor Frederick Schauer noted that:

“… law cannot plausibly be seen as a closed system, in the way that games like chess might be. All of the moves of a game of chess can be found in the rules of chess, but not all of the moves in legal argument and legal decision-making can be found in the rules of law.”11

Similarly, it has been observed that legal rules are more flexible than the rules of chess in view of the greater number of permutations in organising legal rules and facts to reach a decision:

“… in complex cases there are often many possible rules and precedents from which to choose, and both the facts and the rules can be interpreted and reinterpreted in relation to each other until the judge is satisfied with the total combination – satisfied with the fitness or coherence of the overall picture, and satisfied that the decision is just”.12

The complexity of legal reasoning is also demonstrated by the fact that different judges can have diverse plausible views on the same legal issue. While the Speluncean Explorers case mentioned in the Introduction is fictional, dissenting judgments are issued by judges from time to time in the real world. A recent example of a dissenting opinion in the Singapore courts is Attorney-General v Ting Choon Meng,13 where Chief Justice Sundaresh Menon disagreed with the majority view which held that the Government is not a “person” under section 15 of the Protection from Harassment Act that can obtain an order to prevent or stop another person from publishing false statements of fact.

The Impact of AI on Chess

To understand the likely impact of AI on legal reasoning in the future, it is useful first to consider how AI has affected the game of chess in the past 60 years or so. Although the rules of chess are far less complex than legal rules, chess and law share a common characteristic in that chess players, as well as lawyers and judges, have to evaluate, reason and make decisions based on the information available.

In chess, such information is presented through the configuration of the pieces and pawns on the chessboard; in law, such information is obtained through the facts of the case and the relevant legal materials such as precedent cases and academic commentary. Chess players also refer to relevant materials (for instance, records of games played previously and chess books) before the game to help them in the decision-making process over the chessboard. What is notable is that before the advent of computers, chess players did not have a machine to tell them what the good or bad moves in a given position may be. These moves had to be worked out through a process of analysis, trial-and-error and experience, much like what any lawyer has to do in the practice of law. Today, any chess player with a computer can easily turn on a chess engine to find good moves and detect bad moves, some of which will even confound the strongest chess players in the world.

In his recent book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (“Deep Thinking”),14 former World Chess Champion Garry Kasparov outlined the history of how machines eventually beat the strongest chess player in the world (Kasparov himself at that time). This process was not a straightforward one. As with many human endeavours, it took years of experimentation before the correct formula was found. Initially, in the aftermath of World War 2, computer scientists concentrated on developing a chess program based on what was called a “Type B” algorithm, which tried to follow “the way a human player thinks by focusing only on a few good moves and looking deeply at those instead of checking everything”.15 Because of the slow computer processing power at that time, this “intelligent search” technique was preferred to the “brute force” of a “Type A” algorithm, an “exhaustive search method that examines every possible move and variation, deeper and deeper with each pass”.16
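The contrast between the two search strategies can be sketched in code. The toy example below is a hypothetical illustration (not drawn from Deep Thinking): it runs a minimax search over an abstract game tree, where internal nodes are lists of child subtrees and leaves are numeric position evaluations. The Type A version examines every branch exhaustively, while the Type B version keeps only a few “promising” branches ranked by a deliberately crude one-ply heuristic.

```python
def minimax_type_a(node, maximising=True):
    """Type A: brute force - examine every child at every level."""
    if not isinstance(node, list):  # leaf: static evaluation of the position
        return node
    values = [minimax_type_a(child, not maximising) for child in node]
    return max(values) if maximising else min(values)

def minimax_type_b(node, maximising=True, beam=2):
    """Type B: 'intelligent search' - look deeply at only a few candidate
    moves, ranked here by a shallow heuristic (a deliberate simplification)."""
    if not isinstance(node, list):
        return node
    def shallow_score(child):
        # Crude one-ply guess: a leaf scores itself, a subtree scores zero.
        return child if not isinstance(child, list) else 0
    candidates = sorted(node, key=shallow_score, reverse=maximising)[:beam]
    values = [minimax_type_b(child, not maximising, beam) for child in candidates]
    return max(values) if maximising else min(values)

tree = [[3, 5], [2, 9], [0, 1]]  # a two-ply toy game tree
print(minimax_type_a(tree))  # exhaustive answer: 3
print(minimax_type_b(tree))  # may differ from Type A when the heuristic misjudges
```

As faster hardware made deeper exhaustive search feasible, the Type A approach stopped missing the moves that a pruning heuristic could overlook, which is the shift the paragraph above describes.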

With quicker hardware and improved programming over the next few decades, Type A programs overtook Type B ones such that by the 1980s, they were able to defeat strong chess players.17 In 1996, the first face-off in a 6-game chess match between Garry Kasparov and Deep Blue (the strongest chess program developed by IBM) resulted in a 4-2 win for the human world champion. But the tables were turned in the rematch in 1997, when he lost to a much-improved version of Deep Blue by just one point.

A similar revolution is likely to occur in the legal industry. In Tomorrow’s Lawyers: An Introduction to Your Future,18 Professor Richard Susskind, the well-known legal futurist, noted that AI is not attempting to mimic the reasoning of the best lawyers (Susskind calls this the “AI fallacy”),19 which is analogous to using the outdated Type B chess algorithm. Rather, brute-force computing (similar to the Type A chess algorithm) will be used to conduct legal analysis:

“We saw this in 1997 when IBM’s Deep Blue system beat the world chess champion Garry Kasparov. It did so not by copying the thought processes of grandmasters but by calculating up to 330 million moves per second. So too in law – human lawyers will be outgunned by brute processing power and remarkable algorithms, operating on large bodies of data.”20

Can Lawyers and Judges Reason Better with AI?

At this point, the impact of AI on legal reasoning would appear to be in its infancy, despite the hype surrounding legal answering systems such as ROSS and chatbots such as DoNotPay. Nevertheless, significant progress has been made, through computational research, to model common law and statutory reasoning. As explained by Professor Kevin D. Ashley in Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age,21 the field of AI & Law has been studying how to develop computer programs which are able to “reason logically with legal rules from statutes and regulations”,22 and “reason about whether [legal cases] are analogous to a case to be decided”.23 In future, it may no longer be science fiction for computer programs to perform legal reasoning based on information extracted from legal sources.24 With the help of such programs, lawyers may be able to “pose and test legal hypotheses, make legal arguments or predict outcomes of legal disputes”.25 There is therefore cause for optimism that computational tools will help lawyers and judges to sharpen their legal reasoning skills, well before they step into a courtroom.

However, there may be limits to what AI can do to improve legal reasoning by lawyers and judges. Firstly, given the complexity of legal reasoning, it remains an interesting question whether AI can adapt legal reasoning to new scenarios which are not found in available legal sources. Professor Schauer presciently noted in Thinking Like a Lawyer that “law is inevitably and especially subject to the unforeseeable complexity of the human condition”.26 He added that:

“[a]s the world continues to throw the unexpected at us, law will find itself repeatedly forced to go outside of the existing rules in order to serve the society in which it exists”.27

Secondly, AI may create the illusion that there is only one way of reasoning to solve difficult legal problems. But as seen from the Speluncean Explorers case, human reasoning is able to accommodate a wide range of plausible legal views. Lawyers and judges who treat AI as, in the words of Garry Kasparov in Deep Thinking, an “oracle”28 may find themselves over-reliant on what the computer says, which may ironically narrow their appreciation of the complexity of legal reasoning.

Finally, it cannot be presumed that there will only be one computer program that can conclusively determine the right way to think like a lawyer. In the chess world, multiple chess engines have been developed where different good moves may be found in a particular chessboard position. Likewise, lawyers and judges may have to analyse the legal reasoning outputs obtained from different computer programs and make their own conclusions as to which legal argument works best for their case.

Conclusion  

Legal reasoning has had a rich and varied tradition, going back all the way to Roman law. Lawyers and judges today should justly be proud to be part of a profession that has addressed the difficult problems presented by an ever-evolving society not through superficial analysis and off-the-cuff answers, but through the complex process of legal reasoning. The rise of AI should be viewed more as an opportunity to raise legal reasoning to the next level than as a threat to the livelihood of lawyers and judges. At the same time, the limitations of AI should be recognized, given our increasingly uncertain world. Still, just as AI has changed the way that the game of chess is played, so will AI likely transform the rules of the game in law and legal reasoning. The time has come for lawyers and judges to upgrade their legal reasoning skills from 1.0 to 2.0 and make a paradigm shift in how to think like a lawyer.

 

Endnotes

1. Lon R. Fuller, “The Case of the Speluncean Explorers” (1949) 62(4) Harvard Law Review 616.
2. See e.g. Peter Suber, The Case of the Speluncean Explorers: Nine New Opinions (London: Routledge, 1998) at ix and 3.
3. Phoebe C. Ellsworth, “Legal Reasoning” in Keith J. Holyoak & Robert G. Morrison (eds.), The Cambridge Handbook of Thinking and Reasoning (New York: Cambridge University Press, 2005) 685, at 700.
4. Geoffrey Samuel, A Short Introduction to Judging and to Legal Reasoning (United Kingdom: Edward Elgar Publishing Limited, 2016) at 5-7.
5. Ibid.
6. Ibid, at 15-19.  
7. Ibid, at 23-25.
8. Ibid, at 28.  
9. Ibid, at 29.
10. Frederick Schauer, Thinking Like a Lawyer: A New Introduction to Legal Reasoning (United States of America: Harvard University Press, 2009).
11. Ibid, at 5-6.
12. Supra, note 3.
13. [2017] 1 SLR 373.
14. Garry Kasparov, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (United States of America: Public Affairs, 2017).
15. Ibid, at 30.
16. Ibid.
17. Ibid, at 38-39.
18. Richard Susskind, Tomorrow’s Lawyers: An Introduction to Your Future (2nd ed, Oxford: Oxford University Press, 2017).
19. Ibid, at 187.
20. Ibid, at 187-88.
21. Kevin D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (United Kingdom: Cambridge University Press, 2017).  
22. Ibid, at 39.
23. Ibid, at 73.
24. Ibid, at 4-5.  
25. Ibid.
26. Supra, note 10, at 6.  
27. Ibid.
28. Supra, note 14, at 227.

Director, Legal Research & Development
The Law Society of Singapore
E-mail: alvinchen@lawsoc.org.sg