
The Singapore Law Gazette

Legal Issues in AI Deployment

The object of this article is to discuss the legal issues faced by companies when they embark on incorporating artificial intelligence (AI) technology into their work processes or to power functions in their products or services. To facilitate discussion of the issues, the article will mainly take the perspective of a buyer of AI technology and services, although the perspectives of suppliers of such technology and services will be discussed where relevant. The ensuing discussion is also relevant when an organisation undertakes the deployment of AI technology through its in-house IT department, because some of the technology (and possibly data) may be sourced externally.

What is AI?

AI is not a new technology: the term can be traced to a 1956 workshop at Dartmouth College, where it was first coined.1 Since then, AI has seen several "winters", brought on by the limitations of compute power and public data availability at the time, which led to the cessation of research funding, and by expectations outstripping what the technology of the day could actually accomplish.2 The current frisson of development and adoption, it has been said, can be attributed to recent advances in computing power (in general, and especially in Graphics Processing Unit or GPU capabilities),3 developments in Big Data, and the increased availability of Open Public Data. There is a range of technologies that can be classified as AI, and the subset that has gained recent attention comes under the category of machine learning, and especially deep learning (which is the focus of this article). This set of technologies essentially applies compute power to large datasets in order to discern generalisations and patterns, which are then used to offer a prediction (in the statistical sense, not clairvoyance) or to assist with categorisation. This is fundamental to understanding the legal issues that will be discussed subsequently.

For our purposes, we will adopt the handy, if perhaps not the most technically accurate, definition that AI comprises broad categories of technologies that simulate three human cognitive processes: perception, reasoning and learning (or, more precisely, generalisation). AI that simulates perception is in use in myriad applications today, eg facial recognition, biometrics, voice recognition, etc. Perception-based AI technologies enable driverless cars and the electronic gantries controlling visitor ingress and egress. Reasoning AI powers chatbots, which are increasingly common as a supplement to the more traditional keyword searches of online information like FAQs. These deploy natural language processing to understand queries and knowledge representation to suggest possible answers. LawNet will soon provide enhanced search capabilities by deploying this type of AI to augment basic keyword searches.

What is Machine Learning?

Machine learning is the class of technologies that enables learning. These come in three forms: supervised, unsupervised and reinforcement learning. Supervised learning has found its way into document review tools like Luminance, with which the legal community is au fait. Unsupervised machine learning is helpful in surfacing patterns within datasets that are inherent in the data itself.
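
To make the distinction concrete, the following is a minimal sketch (in Python, using the scikit-learn library purely for illustration; the toy feature vectors and labels are invented and do not refer to any product mentioned above): a supervised classifier learns from examples that carry human-supplied labels, whereas an unsupervised algorithm surfaces groupings from unlabelled data.

    # Minimal sketch: supervised vs unsupervised learning (illustrative data only).
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    # Supervised: each training example carries a label supplied by a human reviewer.
    documents = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]  # toy feature vectors
    labels = ["relevant", "relevant", "irrelevant", "irrelevant"]
    classifier = LogisticRegression().fit(documents, labels)
    print(classifier.predict([[0.15, 0.85]]))  # -> ['relevant']

    # Unsupervised: no labels; the algorithm surfaces groupings inherent in the data.
    clusterer = KMeans(n_clusters=2, n_init=10).fit(documents)
    print(clusterer.labels_)  # cluster assignment for each document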

The important ingredients in that subset of AI technologies known as machine learning (hereinafter abbreviated to AI/ML) are data, model selection and training. A company's AI/ML journey commences when it decides to introduce AI/ML into its operating environment or to enhance its product or service. There are internal corporate governance and risk management issues that are not within the intended scope of this essay, which focuses on legal risks. It is sufficient for present purposes to say that addressing governance and risk is essential, because the company will potentially be liable if a decision made by or with the assistance of AI goes wrong, or if the AI-powered function of its product or service malfunctions. The focus of this essay is on how a company, having made the decision to incorporate AI/ML into its processes, products or services, can pre-emptively identify and address legal risks as it goes about doing so.

What are the Considerations Around Commercial Deployment of AI/ML?

The first decision that the company has to make is how it chooses to deploy AI/ML. Recall that AI/ML is quintessentially a technology that offers a prediction. How that prediction comes about and is utilised is a conscious design decision. For example, recommender engines offer predictions of books to read, songs to listen to, and authors and singers to follow. The design decision that was made was to deploy AI/ML to offer a recommendation function that enhances the user experience in an online bookstore or music app; the decision to act upon the recommendation is still very much left to the end user. On the other hand, applying AI/ML to an internal decision-making process, eg for assessing and deciding upon an insurance claim, can yield positive results in helping to achieve consistency with past decisions, but may lead to a dissatisfied insured person challenging the insurer's decision to decline a claim. Equally, a decision to approve an application for a financial product for someone with the wrong risk profile can have dire consequences for both the individual and the financial institution.

Legal issues can arise in the performance of the AI/ML engine (see discussion below) and in how the decision to reject the claim or approve the application was made: eg was the decision made autonomously without human review, or was AI/ML used to offer a predictive inference (ie a prediction based on past data) with a human agent making the final decision? Different legal issues can arise that have to be analysed through the lens of product liability or vicarious liability respectively. The appropriateness of each decision-making approach has to be assessed with reference to the impact of the decision on the individual affected by it. These are issues that are traversed in the Personal Data Protection Commission's "Proposed Model AI Governance Framework".4

Having selected an appropriate decision-making approach for the deployment of AI, the next phase entails the selection of a dataset that is used to train the selected AI/ML model. Training of AI/ML models is an iterative process that commences with the selection of an appropriate model and puts it through training which eventually, after validation and testing, yields a trained model fit for the intended use case.5 The trained or fitted model is then deployed into the intelligent system to enable the specific functions that rely on it. The remainder of this essay will delve into the legal issues in detail, but before we dive into those depths, a few observations ought to be made.
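
By way of a hedged sketch of this iterative cycle (the synthetic dataset, candidate values and metric below are assumptions made purely for illustration, not the specific steps described in the endnote): candidate models are fitted on a training split, compared on a validation split, and only the selected model is finally assessed on a held-out test split.

    # Sketch of the train / validate / test cycle (illustrative, synthetic data).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    # Hold out a test set that plays no part in training or model selection.
    X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25, random_state=0)

    # Iterate over candidate model forms; keep the one that validates best.
    best_model, best_score = None, 0.0
    for c in (0.01, 0.1, 1.0, 10.0):          # candidate hyper-parameter values (assumed)
        model = LogisticRegression(C=c).fit(X_train, y_train)
        score = accuracy_score(y_val, model.predict(X_val))
        if score > best_score:
            best_model, best_score = model, score

    # The selected, fitted model is finally assessed on unseen test data.
    print("test accuracy:", accuracy_score(y_test, best_model.predict(X_test)))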

First, how AI/ML is to be deployed in decision-making determines the feature set of the intelligent system. For example, making recommendations to customers of an online bookstore or music app implements a human-over-the-loop decision-making approach that leaves the decision to the customer; automating decisions to approve travel insurance applications in an online service implements a human-out-of-the-loop approach; and using AI/ML to ensure consistency in insurance claims decisions processed by members of staff implements a human-in-the-loop approach. All of these decisions result in differences in how the AI/ML prediction is presented (eg to the customer or member of staff) and what action follows (eg automated processing in the travel insurance approval application or human intervention in the other scenarios). The legal principles that can be deployed to allocate legal risks and liability also differ: eg vicarious liability in the insurance claims processing scenario.

Second, AI/ML is implemented to enable a specific function or to operate at an identified juncture in a process flow. AI/ML is not a genie that roams out of control once it is released. It carries out the task that it is set, and there is a task master who decides the nature of the task and its scope. This involves a design decision that ought to be made by someone appropriately empowered within the corporate structure. Flowing from this is the secondary point that design decisions can be made as to which types of scenarios ought to be handed over to the AI/ML engine and which ought not. This is the present author's answer to the trolley problem. It makes for piquant post-prandial discussion, but in real-world systems design, if a scenario can be predicted, its occurrence assessed to be sufficiently likely and the risks associated with its occurrence sufficiently dire (or non-trivial, depending on risk appetite), then it is incumbent on the owner of the intelligent system or process to design appropriate safeguards or responses and to take the scenario out of the AI/ML engine's hands. For the purposes of discussion, driverless cars can be programmed to travel at a reduced speed in areas proximate to schools during the specific times when school children are expected to be out and about, enabling them to come to a complete halt without having to make moral choices. Whilst it is impossible to cater for all scenarios, some of the more likely or higher-risk ones are foreseeable, and the standard of care that the tort of negligence demands may require the owner of the intelligent system or process to design specific responses.

Issues Surrounding the Use of Data for Model Training

Data is the feedstock of AI/ML. The legal issues around the use of data for model training can be analysed in two spheres: first, whether data – particularly personal data – can be used for the intended purpose; and second, whether sufficient measures have been implemented to identify and address the risks associated with the particular dataset.

Issues Relating to Provenance

When the dataset includes or comprises personal data, the first question to consider is whether the model training can be equally effective if the dataset is anonymised. Anonymisation is a technique that removes individually identifying features from the dataset. The key consideration is whether the technique of anonymisation, together with contractual (if the data is from a third-party source), process and administrative controls, is sufficient to minimise the risk of re-identification. These are issues discussed in depth in other literature, to which the interested reader may refer.6 Depending on the nature of the problem statement, a pseudonymised dataset may be effective, since the net output that model training is concerned with is not the data but the vectors, weights and bias that form the trained model (more on these below; for the moment, it is sufficient to know that these are essentially parameters).7
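
As a purely illustrative sketch of pseudonymisation before model training (the field names, the choice of a keyed hash and the record layout are assumptions of this author, not prescriptions drawn from the PDPC guides cited above):

    # Illustrative pseudonymisation: replace direct identifiers with a keyed hash
    # before the records are passed to model training.
    import hashlib
    import hmac

    SECRET_KEY = b"rotate-and-store-this-key-separately"   # assumption: keyed hashing chosen

    def pseudonymise(record: dict) -> dict:
        out = dict(record)
        for field in ("name", "nric", "email"):             # assumed identifying fields
            if field in out:
                digest = hmac.new(SECRET_KEY, str(out[field]).encode(), hashlib.sha256)
                out[field] = digest.hexdigest()
        return out

    record = {"name": "Tan Ah Kow", "nric": "S1234567A", "claim_amount": 1200, "approved": True}
    print(pseudonymise(record))   # identifiers replaced; training-relevant fields untouched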

Where anonymisation is not an option, the question that arises will be whether the intended use of the personal data is justifiable. Much depends on the purpose for which the personal data is processed (and AI/ML training is a form of processing).8 Does the intended processing come within one of the exceptions to consent for use under the PDPA? For example, the research exception can be helpful to justify processing for model training. If none of the exceptions is applicable, has consent been obtained for this type of processing, or can consent be obtained? Much depends on the specific wording of the consent clause. It is anticipated that the impending introduction of deemed consent through a notification and opt-out process, and of the legitimate interest exception, will be helpful for organisations considering secondary uses of personal data that were hitherto unanticipated but are now enabled by technological advances.

The issue is complicated when the data is sourced from an external third party. Apart from the considerations in the preceding paragraph, there is the additional dimension of whether the disclosure by the third-party source (and the concomitant collection) is justifiable. Due diligence investigations into the scope of consent obtained from the data subjects ought to be conducted if the disclosure and collection cannot be justified under an exception to consent under the PDPA. Additionally, the licensing terms of the external dataset should be checked to ensure that the intended use is not prohibited and to ascertain how the use of the licensed dataset to create derivative works is dealt with. This can result in at least two complications.

The first is when the licensed dataset is combined with the company’s own data (or it may even include datasets from other third-party sources) and the combined dataset is processed before a training dataset is extracted. There is commingling of datasets and perhaps merger of similar records, and transformation of data in some fields (eg converting data relating to the university that the data subject graduated from into the country from which the individual obtained his academic qualification). The question whether the merged dataset is a derivative work is answered by considering the extent to which the effort put towards its transformation resulted in an original work of authorship with distinct copyright.

The second occasion is when the dataset is used for model training. The trained model is a derivative work that is protected as an original work (see discussion below). It will therefore be prudent to ensure not only that the licensing terms permit such use, but also that ownership of the derivative work is adequately clear. For example, will copyright in the trained model be owned by the licensee under common law principles, or have ownership rights been assigned to the licensor by the terms of the licence? And apart from ownership, does either party have a licence to use the derivative works? These are familiar intellectual property topics that can be easily addressed through commercial negotiations once identified.

Issues Relating to Quality

The quality of the dataset gives rise to another set of issues that companies embarking on AI/ML implementation ought to be alive to. Truth be told, these are essentially the proverbial old wine in new bottles: they have been around since we started to make use of data but, by reason of the impenetrability of how some trained AI/ML models make specific predictions, they take on an accentuated tone. Data quality can be assessed in the following (traditional) dimensions.

First, the veracity of the data, in the sense of whether the data is accurately captured, recently updated and properly defined. These are not new concepts for the person experienced in managing data; parenthetically, they are also the core fundamentals of a robust data protection management programme. Perhaps one point that ought to be highlighted is that care should be taken to fully understand the data dictionary, so that the right inferences are drawn. For example, an address in a dataset can mean several things: the data subject's residence, place of business, billing address or delivery address. Sometimes, direct data points are not available and inferences are made from proxies.9 For example, inferring from the fact that an individual holds a degree from the University of London that he studied in the United Kingdom may not always hold true, since it could have been an external degree. Some of these inferences appear as data fields in the dataset (in this example, country of study), and the fact that a data field was inferred may not be obvious. The discipline of an updated and accurate data dictionary should record such details, but practice and theory often diverge.
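
The proxy-data point can be made concrete with a small, invented sketch (the field names and mapping are hypothetical): an inferred field enters the dataset looking exactly like a collected one, and the inference silently fails in the external-degree case.

    # Illustrative only: inferring "country_of_study" from the awarding university.
    # Nothing in the resulting record flags that the field was inferred rather than collected.
    AWARDING_BODY_TO_COUNTRY = {"University of London": "United Kingdom"}   # assumed mapping

    def enrich(record: dict) -> dict:
        out = dict(record)
        out["country_of_study"] = AWARDING_BODY_TO_COUNTRY.get(record.get("degree_from"), "Unknown")
        return out

    # An external-degree candidate who studied entirely in Singapore:
    print(enrich({"degree_from": "University of London", "studied_in": "Singapore"}))
    # -> country_of_study is recorded as "United Kingdom", which is wrong for this individual.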

Second, the fitness for purpose of the dataset, in the sense of whether the dataset is representative of the population under consideration and suitable for the intended purpose. An unrepresentative dataset contains hidden bias, which can be transposed to the trained model and, depending on how the model is deployed in the decision-making approach, can result in unintended discrimination. Instances of hidden bias resulting in unintended discriminatory decisions have gained recent notoriety.10
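
A simple, assumed illustration of how unrepresentativeness can be surfaced before training is to compare the distribution of a sensitive attribute in the training data against the population it is meant to represent (the attribute names and figures below are invented):

    # Illustrative check of dataset representativeness (figures are invented).
    from collections import Counter

    training_records = (["group_a"] * 900) + (["group_b"] * 100)   # attribute values in the dataset
    population_share = {"group_a": 0.60, "group_b": 0.40}           # assumed population proportions

    counts = Counter(training_records)
    total = sum(counts.values())
    for group, expected in population_share.items():
        observed = counts[group] / total
        print(f"{group}: dataset {observed:.0%} vs population {expected:.0%}")
    # A large gap (here 90% vs 60%) is a warning sign of hidden bias in the training data.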

The legal issues that emanate from data quality, from the perspective of a company deploying AI/ML, can be trifurcated into the following: issues relating to data licensed from third-party sources, issues relating to the professional care and skill of service providers, and the company's responsibilities to its customers.

Where data is licensed from third-party sources, data quality and fitness for the intended purpose should be reduced to commercial terms and incorporated into the licence. These could take the form of representations and warranties as to quality and fitness for purpose, which can be supplemented with negotiated indemnity clauses. Complications arise when third-party data is commingled with the company's own data (or even with data from other third-party sources). When this happens, one may validly question the effectiveness of the representations and warranties given for each component dataset.

If the company is relying on a service provider to pre-process the data (this could be the same service provider undertaking model training and selection), the split of roles and responsibilities between service provider and client should be properly defined. The nature of the issues is no different from a typical contract for professional services, but there are unique domain-specific issues. The identification and management of bias in the dataset is one such issue. The company contracting for such services will have to decide whether to include this within the scope of services to be contracted or to handle this in-house. While it may not be feasible to extract from the service provider a warranty that it can eliminate all bias from the dataset, the company can articulate its commercial objectives (and risks to be avoided) and negotiate with the service provider a set of contractual undertakings that provides sufficient assurance that the service provider will carry out its duty to identify and eliminate or manage the risk of bias in the dataset.

Ultimately, the company that implements AI/ML in its processes, products or services is answerable to its customers in the event of any malfunction or harm. Its liability will lie in the contract for the sale of goods or provision of services with its customer. Whether it can exclude or limit liability depends very much on the nature of the product or service and the application of the Unfair Contract Terms Act.11 Having this exposure to its customers in mind, and perhaps also any reputational or other commercial risks that may materialise in the event of a failure (whether technical or market) of the AI/ML-enabled functionality, the company may consider seeking contributions or indemnities from its service provider and the third party from whom the dataset was sourced.

Issues Surrounding Model Training and Selection

Having considered issues around the training dataset, we turn our focus to the process of training and model selection. AI/ML models are essentially mathematical formulae expressed as algorithms that operate in association with a set of parameters.12 The aim of model training is to produce a set of parameters that operate with the algorithm to give the company the best set of results: ie to improve accuracy. Accuracy, in this sense, is expressed as either increased precision – viz more selective, returning a smaller set of more relevant results – or increased recall – viz more inclusive, thereby returning a larger set of all potentially relevant results. To put it another way, increased precision has less tolerance for false positives, while increased recall has a greater tolerance for false positives.13
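
A short worked sketch of these two measures (the labels are illustrative only): precision is the share of positive predictions that turn out to be correct, while recall is the share of actual positives that the model finds.

    # Worked example of precision and recall (labels are illustrative).
    y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # actual outcomes
    y_pred = [1, 1, 1, 0, 1, 1, 0, 0, 0, 0]   # model predictions

    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives  = 3
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives = 2
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives = 1

    precision = tp / (tp + fp)   # 3 / 5 = 0.60: how many flagged items were truly relevant
    recall = tp / (tp + fn)      # 3 / 4 = 0.75: how many relevant items were found
    print(precision, recall)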

A keen reader will probably surmise by this stage that the set of parameters is heavily influenced by the training, validation and testing datasets. The parameters differ depending on the nature of the AI/ML model. In certain categories of supervised machine learning, these parameters are expressed as feature vectors that assist with prediction (eg linear regression) or classification (eg logistic regression and support vector machines). In neural networks, which comprise multiple layers of neurons (quintessentially, the input layer, hidden layers and output layer) that are interconnected with neurons in adjacent layers (and in some cases within the same layer, or even with previous layers in so-called recurrent neural networks), the parameters are expressed as connections, the weights of each connection14 and the bias of each neuron.
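
A minimal numerical sketch of these parameters (the layer sizes are arbitrary and assumed): in a small feed-forward network, the connections between layers carry learned weights and each neuron carries a learned bias, while the layer sizes themselves are hyperparameters fixed before training.

    # Tiny feed-forward network showing what "weights" and "bias" are (illustrative sizes).
    import numpy as np

    rng = np.random.default_rng(0)

    n_input, n_hidden, n_output = 4, 3, 2       # hyperparameters: fixed before training
    W1 = rng.normal(size=(n_input, n_hidden))   # learned weights on input -> hidden connections
    b1 = np.zeros(n_hidden)                     # learned bias, one per hidden neuron
    W2 = rng.normal(size=(n_hidden, n_output))  # learned weights on hidden -> output connections
    b2 = np.zeros(n_output)                     # learned bias, one per output neuron

    def forward(x):
        hidden = np.maximum(0, x @ W1 + b1)     # hidden layer with ReLU activation
        return hidden @ W2 + b2                 # output layer

    print(forward(np.array([0.5, -1.0, 2.0, 0.0])))
    print("parameter count:", W1.size + b1.size + W2.size + b2.size)   # 4*3 + 3 + 3*2 + 2 = 23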

Model training is an iterative process that eventually yields a trained or fitted model, ie an algorithm coupled with a set of learned parameters. The legal issues arising from the model selection and training process fall within three broad areas for discussion. The first area will, by now, be familiar – the professional skill and care of the service provider; the second concerns the ownership of intellectual property rights that may be created in the trained model; and the third has to do with ongoing discussions on algorithmic transparency.

Professional Care and Skill in Model Training and Selection

The dependence on an external service provider's professional skill and care is, by now, a common refrain in this essay. It bears repeating that the company implementing AI/ML ought to ensure that the service provider's scope of responsibilities is clearly articulated and reduced to commercial terms in the contract for services. The scoping questions should encompass: whether the service provider is supplying the AI/ML models or sourcing them from a third-party provider; the extent to which the service provider will be providing consultancy services and advising on the appropriate decision-making approach for the intended AI/ML deployment within the company's processes, products or services; the service provider's responsibilities in model training and in recommending suitable trained models for deployment (the company having defined its commercial objectives in the AI/ML deployment); and, if required of the model training and selection service provider, the obligation to work with the software development house that may have been engaged to develop the intelligent system.

What has just been described are areas for consideration and negotiation in the context of AI/ML deployed within an intelligent system. The issues are slightly different if it is the implementation of a commercial off-the-shelf AI/ML solution that a company seeks to deploy within its organisation to improve the efficiency of its processes. The issues will adopt a business process analysis and re-engineering tenor as the consultant will be more focused on integrating the solution into existing work processes and advising how these may need to be adjusted.

Apart from the initial training and recommendations of models for selection and deployment, the service provider’s role in monitoring the initial performance of the deployed model in real world usage and performance tuning cannot be ignored. Just as model training is iterative, models that are deployed will also need to be monitored and tuned to achieve the desired results. These will have to be factored into the contract for services.

In the longer term, deployed models can be enhanced by collecting input data from real-world usage and extracting an appropriate dataset for further training and performance enhancement. This will require the implementation of a system for collecting input data and a schedule for further training. If there is an intent to make use of input data for further training, this should form part of the main AI/ML deployment project to ensure that data capture is properly catered for. As for the scheduling of further training, parallels can be drawn with a maintenance agreement for software development projects in terms of the potential scope of services; if a similar approach is adopted, it is not uncommon for the option of a maintenance agreement to be included in the main service contract for consultancy services, model training and selection.

Intellectual Property Rights in Trained Machine Learning Models

The process of selecting the appropriate training dataset, pre-processing training data, multiple iterations of model training, adjusting the feature vectors, connections, weights and bias during model training, and finally selecting a set of trained models for initial deployment can and should result in original works that are capable of copyright independent from the underlying dataset. This author proposes that the trained model, comprising the algorithm and associated parameters (eg feature vectors, connections and weights, or bias), qualifies as a "computer program" within the definition of this term in section 7 of the Copyright Act.15 Collectively, the trained model is "an expression … of a set of instructions (… with … related information) intended … to cause a device having information processing capabilities to perform a particular function". Accordingly, the trained AI/ML model will enjoy protection under the Copyright Act as a literary work: section 7A of the Copyright Act.

The training process results in trained models that are original expressions of how patterns inherent within the dataset have been extracted and the models tuned in order to be deployed with the AI/ML algorithm to solve the problem statements confronting the organisation deploying AI/ML. The resultant trained models will not always be the same if the training dataset or problem statements are different; but crucially, the iterations during model training and the decision in finalising the fitted model are intellectual input of the persons engaged in the process and should therefore be capable of treatment as original works.

Following from the foregoing analysis, there are a number of legal issues that have to be addressed through commercial negotiations. It has been mentioned above that the use of a dataset for training has an associated set of considerations. The terms of licence or use of a dataset from an external third-party source have particular pertinence to the question of copyright creation. The licensor of the dataset that is used (whether solely or commingled with other data sources) may seek to extend its reach into some form of ownership or right to license derivative works created from its dataset.

The terms of engagement of service providers should also be clear as to the basis of their engagement. Unless there is a clear agreement otherwise, the default position is that any original copyright will be owned by the author (ie the service provider). It is therefore advisable that the contract for services properly and effectively deals with the treatment of copyright in the trained models generated during the term of engagement. From the client organisation's perspective, ownership, or at the very least an irrevocable and perpetual licence to use the trained model for its business (and in the geographical regions in which it operates or will be operating), is important. This is because it can expect that the trained models that have been deployed in production will have to be monitored, fine-tuned and probably enhanced in future. From the perspective of professional services providers in this area, there can be immense value in being able to continue using the trained models as learning models for future projects with other clients, particularly if there is commonality in the problems that they seek to solve. Ownership of the trained model will facilitate such future use; at the very least, an irrevocable licence from the present client permitting use of the trained model to create derivative works in future engagements on other clients' projects would be needed.

Another dimension to the question of copyright creation and ownership lies in the role of employees. The foregoing discussion is equally pertinent when employees conduct the model training and selection, or when employees' intellectual input is commingled with that of the service provider and issues around joint authorship (and ownership) arise. Without belabouring the point, it is essential that the employer understands this and reviews its terms of employment in order to ensure that its copyright interests are safeguarded (as discussed above) while employees' contributions are adequately acknowledged and attributed. The decision in Asia Pacific Publishing Pte Ltd v Pioneers & Leaders (Publishers) Pte Ltd16 is an apt reminder of the fundamental principle that copyright in works requires human authorship. Employers should also be aware that administrative record keeping, or at least records of work assignments, must be maintained in order to meet the evidential burden of proving that its employees authored the works and that, by dint of the assignments in their employment contracts, ownership of copyright is transferred to the employer.

Before leaving the topic of copyright in AI/ML models, a few words about the learning models that are sourced from third parties or supplied by the service provider: these are third-party IP or the service provider's background IP respectively. The copyright issues that arise are no different from those concerning IPR in third-party or background software: eg the licence to use, particularly to make derivative works (since the trained model is probably a derivative work); whether the licence is an open source licence, particularly the GPL or another "copyleft" type of licence that has the viral effect of rendering any system that incorporates the learning model open source as well; and whether there are any terms in the open source licence that may have an impact on intentions to patent the final intelligent system or specific processes within it.

Algorithmic Transparency

The ability of AI/ML to detect patterns in large datasets that are humanly impossible to process, and in doing so to propose classifications and predictions that may appear unexpected at first blush, has fomented discussions about the need to explain how the AI/ML model functioned and why it made a particular classification or prediction. When AI/ML is used to augment human decision-making, the question becomes whether the human decision-maker applied his independent analysis before accepting an AI/ML model's prediction. When AI/ML is deployed to make autonomous decisions, the question often leads to who should be held accountable and liable for the decision, particularly when things go wrong.

Narrow algorithmic transparency focuses on explaining how the AI/ML model functioned and why it made a particular prediction or autonomous decision. From the foregoing discussion, one may identify a number of factors that are relevant to this inquiry. First, the quality of the dataset has immense influence on the trained AI/ML model. The concern here is whether the appropriate dataset was selected for training; this involves not only understanding the dataset schema and dictionary, but also other issues like how up to date it is and whether it is sufficiently representative of the expected and likely real-world scenarios that the trained AI/ML model has to deal with. Bias in the dataset will lead to discriminatory predictions, and identifying and reducing the risk of bias is therefore an important consideration in training dataset preparation. Hidden bias leads to unintended discriminatory decisions, whether made autonomously or by a human who has been misguided by the AI/ML prediction. When this happens, the company that has deployed AI/ML to assist in decision-making will be hard pressed to justify its decision.

The second and closely associated topic in this narrow inquiry is algorithm audits. While this sounds sexy, those of us who have had the experience of going through a technical audit always remember the morning after. But algorithm audit is a topic that has to be discussed, both to understand what it is about and to appreciate its limitations. In a trained AI/ML model, there will be the software code that expresses the mathematical formula for the model and the associated parameters that are derived from the training process. Presumably, an algorithm audit will encompass an examination of both the software code and the associated parameters (although the traditional definition of algorithm only accommodates the former).17 However, looking only at these will not be helpful in explaining why a particular prediction was made. The input data also needs to be included in the inquiry; and for this to be possible, some form of audit trail (or black box recorder) has to be kept. When it is completed, the audit can explain (or at least provide a highly probable explanation of) why a specific prediction was made. As can be seen, an algorithm audit is a technical and laborious process, resulting in a technical audit report that will probably be beyond the ken of most except those who are skilled in the art and science of AI/ML. Its usefulness is clearer as an investigative tool, but its resource intensity and the limited accessibility of the final output suggest that it will be deployed only in limited circumstances and infrequently.
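
The following is a hedged sketch of what such an audit trail or black box recorder might look like (the log format, field names and stand-in model are assumptions, not a prescribed standard): each call to the deployed model records the input, the prediction and the model version so that a later audit can reconstruct why a particular output was produced.

    # Illustrative audit trail: record model version, input and prediction for each call.
    import json
    import time
    import uuid

    AUDIT_LOG = "predictions_audit.jsonl"      # assumed append-only log file

    def predict_with_audit(model, features, model_version="v1.0"):
        prediction = model(features)           # the deployed model is assumed to be callable
        entry = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            "input": features,
            "prediction": prediction,
        }
        with open(AUDIT_LOG, "a") as fh:       # append-only record for later audit
            fh.write(json.dumps(entry) + "\n")
        return prediction

    # Usage with a stand-in model:
    toy_model = lambda feats: "approve" if feats["claim_amount"] < 1000 else "refer"
    print(predict_with_audit(toy_model, {"claim_amount": 450}))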

When we consider that regulators may start requiring algorithm audits as part of investigations, it is foreseeable that the company deploying AI/ML in its processes, products or services will look to its external service provider to step up and provide explanations not just of the model training and selection process, but also of how the algorithm that was trained, selected and deployed functions. We can expect to start seeing such clauses in services agreements. Where the service provider licensed the learning algorithm from a third-party provider, this can foreseeably lead to back-to-back obligations extracted by the service provider from the AI/ML learning model provider to likewise step up and assist in regulatory investigations or other litigation. It does not take much imagination to anticipate that some of these explanations, at least at a more general level, may start making their way into white papers and product documentation published by the AI/ML provider. Although it is early days yet, it will not be surprising to start seeing attempts at establishing industry standards around algorithmic transparency, both in terms of a professional code of conduct that AI/ML service providers adhere to in managing the data preparation, model training and selection process, and in terms of technical standards for the design and testing of AI/ML models.

When we shift our objective from investigations to consumer confidence in decisions made autonomously or augmented by AI, what matters is not so much that a single prediction be explainable, but how consistently the trained AI/ML model behaves and whether its predictions are acceptable. Repeatability testing could be a more effective substitute for algorithm audits. Consumer confidence is probably more directly and effectively addressed by taking a slightly broader view of algorithmic transparency and looking holistically at how algorithmic decisions are made, the transparency of the decision-making process, and the avenues provided for the consumer to participate in the decision-making process.
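
By way of a sketch of repeatability testing (the function and stand-in model are hypothetical): the same inputs are fed to the deployed model repeatedly and the predictions checked for consistency.

    # Illustrative repeatability test: the same input should yield the same prediction.
    def repeatability_test(model, test_inputs, runs=100):
        for features in test_inputs:
            baseline = model(features)
            for _ in range(runs):
                assert model(features) == baseline, f"inconsistent prediction for {features}"
        return True

    # Stand-in deployed model (assumed to be deterministic by design):
    toy_model = lambda feats: "approve" if feats["claim_amount"] < 1000 else "refer"
    print(repeatability_test(toy_model, [{"claim_amount": 450}, {"claim_amount": 2400}]))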

This is the focus of the PDPC's Model Framework (referred to above), which proposes a governance framework for the ethical use of AI and data. It puts forth an accountability-based framework that guides companies deploying AI at scale within their processes, products or services to integrate the right motivations and risk mitigation into their existing corporate governance structures.

Ethical principles ought to be articulated and, where possible, integrated into the company's corporate values; and risks arising from the use or misuse of deployed AI/ML should be identified and incorporated into its enterprise risk management framework. The roles and responsibilities of the different stakeholders – those who provide the training data, those working on data preparation and model training, the department that owns the process, product or service and decides which models are selected for deployment, and those responsible for managing the communications channels and touch points with customers – all have to be identified and assigned. The model framework also proposes a risk severity and impact matrix for analysing the impact of any malfunction on the affected customer (and possibly employees), in order to match it with the right decision-making approach.

Finally, the framework advocates an appropriate degree of disclosure and transparency to customers, providing them with sufficient information to assure them that they are respected as individuals in any algorithmic decision that affects them. For the dissatisfied customer, there is a need to consider how to provide a feedback channel, or even an appeal process, through which he may seek redress.

As a matter of legal or regulatory compliance, the model framework is a set of voluntary standards. Adoption will go a long way towards showing that the company has good accountable practices that it can rely on in the event of any complaint about misuse of personal data or other forms of data breach. Perhaps more importantly, the company that deploys AI/ML and adopts this model framework is more likely to have the discussions of risks, mitigation and decisions take place at the right level within its management structure and with the right persons in the room. Poor decisions can never be completely eliminated, but good discipline goes a long way towards weeding them out.

Concluding Remarks

This essay is an attempt to map the legal issues and considerations as we move towards more broad-based commercial deployment of AI/ML. The discourse has only just commenced, and the views of the present author are proffered to catalyse debate. As better logic and more persuasive arguments are offered, we can look forward to dissertations of greater sophistication.

The author wishes to express his heartfelt thanks to Wilson Ang, Dr Brian Ang and Albert Pichlmaier for their comments on an earlier draft of this paper. All errors that remain in this paper are mine. The views and opinions expressed in this paper are personal to the author and should not be taken as representing or attributable to his employer.

Endnotes

1 See Agrawal, Gans & Goldfarb, Prediction Machines, “Why it’s called intelligence” (Chapter 4).
2 See “History of Artificial Intelligence” <https://en.m.wikipedia.org/wiki/History_of_artificial_intelligence> (last accessed 30 December 2018) for one version of the boom and bust cycle of AI over the decades.
3 See Janakiram, “In The Era Of Artificial Intelligence, GPUs Are The New CPUs” (Forbes, 7 August 2017) <https://www.forbes.com/sites/janakirammsv/2017/08/07/in-the-era-of-artificial-intelligence-gpus-are-the-new-cpus/#71bab1a15d16> (last accessed 7 January 2019).
4 Accessible at <https://www.pdpc.gov.sg/Resources/Model-AI-Gov> (last accessed 26 January 2019).
5 There are three steps in this process: first, selecting the right model family; next, selecting the right model form (or mathematical expression of the model); and third, the fitted model that emerges after the training process has optimised the parameters that can be estimated from the training dataset, such that the trained model can be used to make predictive inferences. See Wickham, Cook & Hofmann, “Visualising Statistical Models: Removing the Blindfold” Statistical Analysis and Data Mining: The ASA Data Science Journal, vol. 8, no. 4, pp. 203–225, 2015; at section 2.1 <http://vita.had.co.nz/papers/model-vis.html> (last accessed 7 January 2019).
6 See PDPC’s publication on anonymisation: “Anonymisation (Chapter 3)” in Advisory Guidelines on the Personal Data Protection Act for Selected Topics <https://www.pdpc.gov.sg/Legislation-and-Guidelines/Guidelines/Main-Advisory-Guidelines> (last accessed 30 December 2018) and “Guide to Basic Data Anonymisation Techniques” <https://www.pdpc.gov.sg/Legislation-and-Guidelines/Guidelines/Other-Guides> (last accessed 30 December 2018).
7 While anonymised data can be used for training and validation, an intelligent system designed with data protection in mind can also anonymise the input data that is fed to the trained AI/ML model.
8 Since it involves the “carrying out of (an) operation or a set of operations in relation to the (training dataset containing) personal data”: see definition of “processing” in section 2 of the Personal Data Protection Act.
9 For an introduction to the concept of proxy or stand-in data, see Cathy O’Neil, Weapons of Math Destruction (2016), pp 17–18.
10 See, for example, Angwin, et al “Machine Bias” in Pro Publica (23 May 2016) <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing> (last accessed 1 January 2019).
11 Chapter 396.
12 In fact there are learned parameters, like the weights, and there are also hyper-parameters, which influence the model very strongly yet they are not part of the learning process; such hyper-parameters are design decisions and may also be a source of bias.
13 Will Koehren, “Beyond Accuracy: Precision and Recall” (3 March 2018) <https://towardsdatascience.com/beyond-accuracy-precision-and-recall-3da06bea9f6c> (last accessed 7 January 2019).
14 Connections are hyperparameters, which are parameters that are set before model training and whose values cannot be estimated from the training data, while weights are learned parameters.
15 Chapter 63.
16 [2011] 4 SLR 381; [2011] SGCA 37.
17 It has been suggested that almost all ML models have a component which (during learning) calculates the error of the current model and then adjusts for that error. The reason for this is that almost every ‘prediction’ in ML is ‘false’, as it is only an estimation based on probability. There is therefore a subjectivity in the tolerance for the margin of error which is attributable to the data analyst responsible for training, since he makes the decision as to when model training should stop.

Assistant Chief Executive, Data Innovation & Protection, Infocomm Media Development Authority;
Deputy Commissioner, Personal Data Protection Commission