
The Singapore Law Gazette

Liability Arising from the Use of Artificial Intelligence

Are There Difficulties in Applying Conventional Principles of Liability?

What happens when artificial intelligence does not perform as expected? This article explores the key features of artificial intelligence that may affect the application of conventional principles of liability, such as in tort and in contract. This is a developing area of law around the world, and we will look at the challenges faced and the proposed solutions.

As artificial intelligence (AI) is increasingly integrated into our lives, guidelines and legislation have been promulgated across the world to reduce its risks to safety (e.g. to prevent death, injury and damage to property) and to fundamental rights (e.g. to prevent discrimination, manipulation and a loss of privacy).1 For example, a decision-making process using AI should be “explainable”, “transparent” and “fair”,2 there must be human oversight, there must be strong personal data protection and cybersecurity practices in place to safeguard the data, and there must be good data governance to ensure that biased, inaccurate and outdated datasets are not used in developing an AI system.

But despite all these safeguards, our reality is that things malfunction. For example, an autonomous vehicle may crash, a trading algorithm may cause financial loss, or an AI-enabled medical device may fail to diagnose a malignant tumour. In fact, AI systems3 face challenges that other products do not, because they make predictions based on the data they are trained on, but that data may not always be reflective of the real world, which can result in the system producing an outcome outside of what is expected.4

Therefore, when our protective measures fail, we need to look into remedies for the parties affected. Are our existing laws sufficient, or will new legal principles need to be developed? This is a question countries around the world are trying to answer. The UK stated recently (March 2023) that the issue of liability (for the different actors in the AI life cycle) is a complex one, and that it will not make any changes to existing liability rules before consulting a range of experts, including technicians and lawyers, to ensure that any intervention is proportionate, effective, fair and does not stifle innovation or the adoption of AI.5 The European Commission published proposals for an AI Liability Directive and a revised Product Liability Directive in September 2022 to make it easier for plaintiffs to bring claims given the complexity of AI systems, having found that existing national liability rules are not suited to handling liability claims for damage caused by AI-enabled products and services.6

We will thus discuss liability arising from the use of AI, in two parts:

  1. Part 1: What is AI, and which of its features may make it more difficult for existing liability principles to apply; and
  2. Part 2: Theories of liability, where we examine (1) negligence (fault-based liability), (2) product liability (liability independent of fault) and (3) contract, highlighting the issues (and potential solutions) in proving the elements of each type of liability given the nature of AI.

Part 1: What is Artificial Intelligence and What are its Features that May Affect the Application of Conventional Principles of Liability?

AI is broadly defined as “a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation and/or classification)”.7

The most prevalent form of AI is “machine learning”, where statistical techniques are applied to identify patterns in large amounts of data.8 Unlike rule-based systems, where a human must conceive the rules so that there are predetermined responses to a set of conditions, a machine learning system is set up to “learn” its own responses to conditions under a training regime, and gets better with time (think of a learner driver improving with practice).9
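To make the contrast concrete, the following is a minimal, purely illustrative sketch (in Python, using the open-source scikit-learn library; the credit-decision scenario and all figures are invented). The first function is a rule a human has written in advance; the second is a model that derives its own rule from example data, so its responses are learned rather than programmed.

```python
# Illustrative sketch only (hypothetical data): a hand-written rule versus a
# model that "learns" its own rule from labelled examples.
from sklearn.tree import DecisionTreeClassifier

# Rule-based: a human conceives the rule in advance (predetermined response).
def rule_based_credit_decision(income, debt):
    return "approve" if income > 3 * debt else "reject"

# Machine learning: the system derives its own decision boundary from examples.
# Each row is [income, debt]; the labels are past outcomes it learns from.
training_features = [[9000, 1000], [4000, 3500], [7000, 2000], [2500, 2400]]
training_labels = ["approve", "reject", "approve", "reject"]

model = DecisionTreeClassifier(max_depth=2).fit(training_features, training_labels)

# The learned response to a new set of conditions is not written by a human;
# it follows from whatever patterns the model found in the training data.
print(rule_based_credit_decision(5000, 2000))   # human-authored rule
print(model.predict([[5000, 2000]])[0])         # learned behaviour
```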

There are three unique features of AI which will require us to evaluate whether our existing legal principles (common law and legislation) can still deal with the use of AI to effectively allocate risk between all involved persons:10

  1. AI is a “black box”/opaque (which affects how we can explain its workings) – because an AI system can learn its own rules and uncover hidden relationships in the data beyond what unaided human observation can do, it can be difficult to understand how or why the AI system reached a particular outcome.11 The type of model chosen will affect how easily its workings are understood, although there is no fixed threshold at which a model becomes a “black box”: a simple model with easy-to-understand structures and a limited number of parameters (such as linear regression or a decision tree) can be understood more easily than a complex model like a deep neural network with thousands of parameters.12 (A simple illustration follows after this list.)
  2. AI is self-learning/autonomous – with machine learning, the AI model has the ability to automatically learn and improve from experience without being explicitly programmed, which means that its behaviour may not be fully foreseeable in all situations even if the algorithm that directs the learning is known.13 This is further compounded in the case of AI systems with continuous learning capabilities, which continue to learn outside of the training environment when deployed in the real world and can change their behaviour in response to the real-world data input into them.14 They are in contrast to AI systems that are “locked” once deployed, akin to a person’s knowledge being frozen at a point in time.
  3. Many persons are involved in its development rather than a single clearly identifiable person – from selecting the datasets, to training the AI, to designing the algorithm, to monitoring the output – so who should be held responsible in the event the AI output is not as expected? The person (procurer) who engages another person to develop an AI system for them could also have a role in constructing the AI system by selecting and inputting the relevant data into the system (and the procurer might even feed in erroneous or insufficient data), in contrast to ordinary computer systems that are immediately ready for use by a procurer who is not involved in their construction.15
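By way of illustration of the “black box” point in item 1 above, the sketch below (in Python, using the scikit-learn library; the data is synthetic and invented for this example) shows why a simple model is easier to explain than a complex one: the linear model’s learned coefficients can be read directly, while the neural network’s thousands of learned weights have no individually meaningful interpretation.

```python
# Illustrative sketch only (synthetic data): an interpretable model versus an
# opaque one. A linear model's workings can be read off its coefficients; a
# neural network's cannot be read off so easily.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                  # three input features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

linear = LinearRegression().fit(X, y)
print(linear.coef_)        # roughly [2, -1, 0]: each feature's influence is explicit

mlp = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=2000).fit(X, y)
print([w.shape for w in mlp.coefs_])   # thousands of weights, none with a direct meaning
```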

Part 2: Theories of Liability

We now turn to examine liability.

(1) Negligence (fault-based)

Negligence focuses on establishing a duty of care, a breach of that duty (falling below the standard of care), and that the breach caused the loss. Proving these elements where the use of AI is involved can be difficult in the ways illustrated below.

However, it must be highlighted that the mere fact that the AI does not perform as expected does not mean there was negligence (and, conversely, performing as expected does not mean there was none); rather, the poor performance of the AI may suggest taking a closer look into whether there was negligence in designing, training and deploying the AI.16

1.1 Who to sue?

There are many parties in the AI life cycle, ranging from the person who procures the datasets for training, to the person who trains the AI model, to the person who selects which algorithms to use. Practically, the plaintiff could proceed against the person with the deepest pockets. However, it would be natural for the party sued to disclaim liability and blame another party upstream or downstream in the AI life cycle. In effect, everyone would be saying “it was someone else’s fault”, and the plaintiff would have to spend considerable time and expense to find the right person.

To resolve this, the European Parliament17 and the European Law Institute18 recommend that the operator, i.e. the person “in control of the risks connected with the [AI system] and who benefits from its operation in the context of a particular activity”,19 play the role of gatekeeper to ensure that the system is fit for purpose, trustworthy, safe, etc.20 As the operator controls the AI system, it is in the best position to assess and manage the risks, and also has the greatest incentive to prevent them.21 If there is more than one operator, they would be jointly and severally liable, with a proportionate right of recourse against each other.22 Therefore, in the event of any harm caused, the plaintiff only has to go after the operator.

1.2 What is the standard of care to apply?

First, the opacity of AI means that if it is difficult to explain how the AI system reached an outcome, it would be difficult to trace that behaviour back to a “defect” in the code or a shortcoming of the development process, and thus difficult to assess whether there was “reasonable care” in the programming of the software.23

To resolve this, it has been suggested that the focus should shift to the beginning and the end of the process – to the selection of the data input provided to the machine learning software for it to “learn” from (e.g. ensuring the data is not biased), and to testing the output of the program to ensure the model is working as intended.24 For example, testing the output could involve monitoring it against a benchmark or against a non-machine-learning model, or even experimenting with the model by feeding it different data inputs to understand how it makes its predictions.25
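As an illustration of the kind of output testing described above, the following is a minimal, hypothetical sketch (in Python, using scikit-learn; the data and the accuracy figures it prints are synthetic) comparing a trained model against a trivial non-machine-learning baseline on held-out data. It is only one of many ways such testing could be done.

```python
# Illustrative sketch only (synthetic data): testing a model's output against a
# simple benchmark before relying on it.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # hypothetical "ground truth" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

# The held-out test set acts as the benchmark: if the trained model cannot
# clearly beat a trivial baseline, that is a warning sign worth investigating.
print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
print("model accuracy:   ", accuracy_score(y_test, model.predict(X_test)))
```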

Second, the use of AI is still developing, so what is the standard of care? Where new applications of the technology are involved, there may not be a substantial body of accepted practice to benchmark against in the first place.26

Third, when the AI system used does not merely aim to replicate human capabilities but to exceed them, its user may be unable to determine in real time whether the AI is making an error, and thus be unable to prevent it.27

Lastly, will complying with existing guidelines, legislation or standards on the use of AI28 (e.g. ensuring that the datasets used for training are representative) mean that the developer has met the standard of care and can thus avoid liability? Compliance is unlikely to be a shield against liability, but it will help AI developers show that they acted in accordance with a widely accepted standard of care,29 and can certainly go towards mitigation.

1.3 Causation

Having shown that the conduct fell below the standard of care (i.e. a breach), it is also necessary to show that the breach caused the loss. However, the “but for” test may be difficult to apply in light of the varying levels of autonomy of AI. It could be argued that the autonomous nature of AI breaks the chain of causation, but for public policy reasons the courts are unlikely to accept a “the machine did it” defence.

It is also possible for the AI solution provider to argue that the procurer’s input of inaccurate data into the AI system broke the chain of causation leading to the procurer’s loss, so that the provider is not responsible.30

To aid plaintiffs in bringing claims in light of the unique nature of AI, which poses problems for existing liability rules, the EU has proposed a new AI Liability Directive,31 which applies to non-contractual civil law claims for damages caused by AI-enabled products or services under fault-based liability regimes. First, the AI Liability Directive will ease the burden of proof for plaintiffs through a rebuttable presumption of causality between the defendant’s fault and the damage arising from the AI system’s output (or its failure to produce an output). This addresses the difficulties victims face when having to explain in detail how the damage was caused by a fault or omission in the case of a complex AI system.32

Second, courts may order the disclosure of relevant evidence about specific high-risk AI systems suspected of having caused damage, helping plaintiffs identify the potentially liable persons and the relevant evidence, and saving time and costs in litigation.

(2) Product Liability

Singapore’s product liability laws are not like those in the UK (Consumer Protection Act 1987) or the EU. Our remedies are available under various statutes (e.g. the Unfair Contract Terms Act 1977, the Sale of Goods Act 1979, the Consumer Protection (Fair Trading) Act 2003, and specific legislation such as the Health Products Act 2007) and the common law (e.g. claims in contract and tort). In contrast, product liability laws in the UK and the EU are more traditionally understood as providing redress to consumers, who have to prove that —

  1. the product was defective (i.e. the safety of the product is not such as persons generally are entitled to expect);33 and
  2. the defect caused the damage (i.e. death or personal injury or the loss of or damage to any property including land),

following which liability is established (on the producer, the person holding themselves out as the producer, or the original importer into the UK34 or into the EU from a non-member State35) in relation to the damage, without the need to prove fault.

However, there are limitations to the application of product liability laws to artificial intelligence, as —

  1. the CPA is unlikely to apply to claims in a commercial context;
  2. it is not settled whether “software” (not supplied on a physical medium like a disc) is a “product” under the CPA 1987;
  3. there is still a requirement to prove there is a defect (which poses the same difficulty as proving fault in negligence);36
  4. it is not available for pure economic loss;37
  5. the statutory defences for the manufacturer may not always be applicable in the context of AI – for example, showing that the defect did not exist in the product at the relevant time is based on the assumption that the product will not have any unpredictable changes after it is supplied. However, AI systems may need continual updates to stay relevant, and some AI systems are continual learning models which means their parameters can change even after being deployed.

The EU has also proposed a revision of the Product Liability Directive, to adapt liability rules to the use of AI. Crucially, it makes clear that software, including AI systems, will be covered.

(3) Contract Law

With contract, parties would have the opportunity to pre-allocate risk. This may alleviate some of the issues of who is responsible, given the opacity of AI and the number of participants involved.

However, it is not always straightforward to show that the performance does not live up to the contractual promise. For example, if an AI system is giving unexpected results, it could be because the type of data it processes has changed since the system was trained (owing to changing circumstances), and have nothing to do with a wrong choice of algorithms or poor programming.
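To illustrate the point about changing data, the following is a minimal, hypothetical sketch (in Python, using the SciPy library; the “training” and “production” data are invented) of one way a deployer might check whether the data the system now receives still resembles the data it was trained on.

```python
# Illustrative sketch only (synthetic data): checking whether real-world input
# has drifted away from the training data. Such a shift can degrade performance
# without any fault in the choice of algorithm or the programming.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)    # data at training time
production_feature = rng.normal(loc=0.8, scale=1.0, size=5000)  # data seen after deployment

# The Kolmogorov-Smirnov test compares the two distributions; a very small
# p-value suggests the deployed system is now seeing unfamiliar data.
statistic, p_value = ks_2samp(training_feature, production_feature)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3g}")
if p_value < 0.01:
    print("Possible data drift: the model may be operating outside its training data.")
```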

3.1 Is there actually a breach?

Establishing a breach would thus depend on what the parties have agreed in the contract, including but not limited to the following:

  1. Whether there are verifiable and measurable standards that the AI system must meet, instead of just relying on generic quality and fitness terms, whether statutorily implied or otherwise.38 In the same vein, parties should be careful not to overhype what the AI system can do, to avoid allegations of misrepresentation,39 and should ideally set out the system’s limitations in writing to avoid later disputes over what was said.40
  2. The autonomous and black-box nature of AI makes it difficult to determine whether the (unexpected) outcome was due to a lack of care and skill on the part of the developer41 – this is the same issue that arises in negligence;
  3. Whether there is a process set out in the contract for developing the AI system – e.g. that the developer must follow certain local or international guidelines or standards;
  4. How the AI system compares with other similar AI systems (e.g. whether they encounter similar performance issues, which indicates the state of the art/level of technology available).

3.2 Can terms of quality and fitness be implied?

Parties would likely have specified terms relating to the performance of the AI system in the contract – but if they did not, section 14(2) of the Sale of Goods Act42 (SOGA) provides that “[w]here the seller sells goods in the course of a business, there is an implied condition that the goods supplied under the contract are of satisfactory quality”.

Section 14(2C)(a) of the SOGA provides that the condition implied by section 14(2) does not extend to “any matter making the quality of goods unsatisfactory which is specifically drawn to the buyer’s attention before the contract is made”. The extent to which the seller must warn the buyer in the context of AI has not yet been tested by the courts. However, it is likely that the court will require the seller to give the buyer sufficiently specific information to make an informed decision, so merely drawing the buyer’s attention to the black-box nature of machine learning will not exempt the seller from this requirement.43

3.3 Can liability be limited or excluded?

Exclusion and limitation of liability clauses in contracts relating to the development or use of AI have yet to be tested by the Singapore courts. However, parties would certainly want to include them — for example, AI developers may want to limit their liability for the outcomes of AI systems trained on incomplete or inaccurate data, if the data is supplied by another party. All such terms will be subject to the requirement of reasonableness under the Unfair Contract Terms Act 1977.

3.4 Requirement to mitigate?

Plaintiffs are required to mitigate their losses, which poses a dilemma: they may not always be able to stop using the AI system, especially when the system performs tasks which may not so readily (or practicably) be carried out manually. At the same time, plaintiffs do not want to perpetuate errors by using an obviously flawed system!

Conclusion

With the increasing use of AI, it is crucial for us to understand the principles that govern its use, which seek to minimise fundamental rights risks (e.g. failing to secure an interview because of discriminatory AI recruitment software).44 At the same time, we must be familiar with what to do when traditional safety risks (e.g. death, injury, property damage, financial loss) materialise. Understanding how liability arises will help anyone who uses AI to take preventive steps to protect themselves from such outcomes.

The views expressed in this article are the personal views of the author and do not represent the views of Drew & Napier LLC. The author would like to thank associate See Too Hui Min for her research assistance.

Endnotes

1 The EU distinguishes between “safety risks” and “fundamental rights risks” – see the Explanatory Memorandum to the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence. See also Christiane Wendehorst, “Liability for Artificial Intelligence – The Need to Address Both Safety Risks and Fundamental Rights Risks” (“Wendehorst”), available at: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/12A89C1852919C7DBE9CE982B4DE54B7/9781009207867c12_187-209.pdf/liability-for-artificial-intelligence.pdf in drawing a distinction between such risks.
2 See the IMDA/PDPC’s Model Artificial Intelligence Governance Framework (2nd Edition) (“Model Framework”).
3 An “AI system” is the AI model that has been selected and deployed for real-world use. An AI model is created when algorithms analyse data, leading to an output or result that is examined and the algorithm iterated, until the most appropriate model emerges. An AI model is thus akin to what is “learnt” by an algorithm, after it has been trained on data and its parameters adjusted during training. The definitions are based on (3.20) and (3.21) of the Model Framework.
4 Andrew D. Selbst, Negligence and AI’s Human Users (“Selbst”) at 1137.
5 A pro-innovation approach to AI regulation (published 29 March 2023), available at: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper at paragraphs 80 – 85.
6 See the Explanatory Memorandum to the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence, page 1.
7 Model Framework at (2.15).
8 Matt Hervey and Matthew Lavy, The Law of Artificial Intelligence (“TLIA”) at page 1.
9 TLIA at page 9.
10 Similar features are also identified by the UK Government in the “A pro-innovation approach to AI regulation” white paper at paras 39 and 40, published on 29 March 2023 and available at https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper. Briefly, AI has 2 characteristics that require a “bespoke regulatory response” towards it, namely that it is “adaptive” (i.e. when trained, it can perform new forms of inference not directly envisioned by human programmers) so that it is difficult to explain the intent or logic of the system’s outcomes, and that it has “autonomy” (making decisions without express intent or ongoing control of a human), making it difficult to assign responsibility for outcomes.
11 TLIA at page 28.
12 https://engineering.dynatrace.com/blog/understanding-black-box-ml-models-with-explainable-ai/
13 TLIA at page 121.
14 See (3.33) of the Model Framework; section 9.2 of HSA’s Regulatory Guidelines for Software Medical Devices – A Life Cycle Approach (April 2022); preamble (66) of the EU AI Act; and Max Versace, The Next-Generation AI Brain: How AI Is Becoming More Human, available at: https://www.forbes.com/sites/forbestechcouncil/2018/04/09/the-next-generation-ai-brain-how-ai-is-becoming-more-human/?sh=26340475733b.
15 Ernest Lim, B2B Artificial Intelligence Transactions: A Framework for Assessing Commercial Liability ((2022) SJLS 46-74) (“Ernest”) at 47.
16 TLIA at 175.
17 See the European Parliament Resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence. The European Parliament stated that it was “appropriate for this report to focus on civil liability claims against the operator of an AI-system; affirms that the operator’s liability is justified by the fact that he or she is controlling a risk associated with the AI-system, comparable to an owner of a car; considers that due to the AI-system’s complexity and connectivity, the operator will be in many cases the first visible contact point for the affected person”.
18 See the European Law Institute Innovation Paper: Guiding Principles for Automated Decision-Making in the EU (“ELI Paper”).
19 ELI Paper at page 10.
20 As shared by Professor Teresa Rodriguez de las Heras Ballell, who authored the ELI Paper, at a seminar on 9 February 2023 titled The Use of Algorithms and Artificial Intelligence in Commercial Transactions: Guiding Principles for ‘Algorithmic Contracting’.
21 See Guiding Principle 7 in the ELI paper, page 20.
22 See the European Parliament Resolution of 20 October 2020 at paragraph 13.
23 Wendehorst at p. 195.
24 TLIA at 121.
25 TLIA at 122.
26 Chan, Gary Kok Yew, Medical AI, standard of care in negligence and tort law at 179, available at: https://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=5415&context=sol_research
27 Selbst at 1331.
28 Such as the proposed EU Artificial Intelligence Act, or the Model Framework in Singapore.
29 Gary Marchant, “Autonomous Vehicles, Liability and Private Standards”.
30 Ernest at pp. 65 – 66.
31 The new proposed EU Directive may be found at https://commission.europa.eu/document/f9ac0daf-baa3-4371-a760-810414ce4823_en.
32 https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807. This means that the causal link between the defendant’s fault and the AI system’s output (or failure to produce an output) will be presumed if: (a) the defendant’s conduct did not meet a duty of care under EU or national law directly intended to protect against the damage that occurred; (b) it was reasonably likely that the fault influenced the AI system’s output (or lack thereof); and (c) the AI system’s output (or lack thereof) gave rise to the damage.
33 The UK Consumer Protection Act 1987 elaborates (in section 3(2)) that in determining what persons are entitled to expect in relation to a product, all the circumstances shall be taken into account, including (a) the manner in which, and the purposes for which, the product has been marketed, its get-up, the use of any mark in relation to the product and any instructions for, or warnings with respect to, doing or refraining from doing anything with or in relation to the product; (b) what might reasonably be expected to be done with or in relation to the product; and (c) the time when the product was supplied by its producer to another, and nothing in this section shall require a defect to be inferred from the fact alone that the safety of a product which is supplied after that time is greater than the safety of the product in question.
34 In the case of the UK CPA 1987, it is the person who imported the product into the UK.
35 In the case of the EU Product Liability Directive, it is the person who imported the product into the EU from a non-member State.
36 Singapore Academy of Law’s Law Reform Committee in the Report on the Attribution of Civil Liability for Accidents Involving Autonomous Cars (published September 2020) at (5.17) and (5.18).
37 See section 5 of the CPA 1987.
38 TLIA at 157.
39 The case of Tyndaris v VWM was settled out of court, where Li (of VWM) sued Tyndaris for misrepresenting what Tyndaris’ AI system could do. The AI system was supposed to make investment decisions by scanning through online sources (news, social media) to gauge investor sentiment and predict US stock futures. Tyndaris presented to Li simulations that showed the AI system making double-digit returns, but when Li agreed to let the AI system manage his money, it made losses, losing up to $20 million in a day. See the report at https://www.insurancejournal.com/news/national/2019/05/07/525762.htm.
40 Japan Governance Guidelines for Implementation of AI Principles (ver 1.1) at p. 30.
41 Wendehorst at p. 195.
42 There is ongoing debate as to whether AI-based software (not embedded in hardware) is considered a good or service. If it is, the plaintiff can rely on the implied condition in the SOGA.
43 Ernest at p. 68.
44 https://ec.europa.eu/commission/presscorner/detail/en/QANDA_22_5791 (see Question 5).

Director
Drew & Napier LLC
[email protected]

Cheryl advises clients on a variety of artificial intelligence matters, from compliance with Singapore’s data protection laws when using artificial intelligence to process personal data, to intellectual property and liability issues arising from the implementation of a ChatGPT-like system in the workplace. In her previous role as a legislative drafter, she has drafted legislation across a wide variety of subjects, with a focus on transport (including autonomous vehicles), infrastructure, technology and civil procedure.