
The Singapore Law Gazette

Can and Should We Rein in AI with Law?

What are potential risks and dangers of artificial intelligence (AI)? What are some current and proposed regulatory approaches? Are they sufficient to rein in AI? This article answers the above questions and raises socio-cultural, economic and political considerations beyond the law in thinking about the risks of AI.

Risks of AI 

Artificial intelligence (AI) has many uses. AI is already pervasive in our daily lives, e.g., predictive text / autocorrect and biometric recognition on our phones. However, in March 2023, many technology leaders and academics, including CEOs of AI companies, signed an open letter calling for a pause in AI development beyond GPT-4.[1]

These are some risks and unintended consequences of AI that we should consider: 

1. Job Loss and Economic Inequality

AI will automate many information-based tasks and will increase productivity in some roles. AI will also replace and eliminate humans in many jobs. Goldman Sachs estimates that 300 million jobs could be affected by AI, especially in advanced economies.[2] It also estimates that AI could automate 25%-50% of the workload in affected jobs, freeing up capacity for other productive tasks.

The exponential increase in AI capability may far outpace the rate at which individuals can be reskilled, and jobs may be eliminated far faster than new ones are created. Knowledge-economy workers may consequently become valued much less because of AI.

2. Deepfakes and Disinformation 

Bad actors are already using AI to clone the voices of loved ones, employers, and colleagues to defraud people and companies. 

The CEO of a UK energy company reportedly received a call from the CEO of his parent company instructing him to transfer a large sum of funds to a Hungarian supplier. The caller's voice turned out to be an AI-generated audio clip.[3] A Quebec man was recently sentenced to prison for using AI to create deepfake child pornography. Images of our children posted online could be abused in the same way.[4]

So much of our lives is lived digitally today. Soon, our digital worlds may be so flooded with falsehood that we (i) fall prey to it; (ii) are unable to distinguish it from truth; and (iii) thereby can no longer operate digitally without some trustless transactional mechanism.

In today’s world, mass information is necessarily digital, so mass information could readily become mis- or disinformation. AI in the hands of bad actors could undermine truth in the digital world, bringing about a digital scarcity of trust.

3. Epistemological Surrender to AI

For many people, AI will carry a false aura of infallibility. Many will soon say: it must be true because the AI says so.

Already, many people uncritically assume that search engine results are accurate and reliable information. People have changed their voting preferences based on first-page search engine results,[5] a phenomenon described as the Search Engine Manipulation Effect (SEME).[6]

In 2012, Pew Research found that 66% of people surveyed considered search engines a fair and unbiased source of information; 28% said all or almost all search results are trustworthy, and another 45% said most are. Among 18–29-year-olds, 72% considered search engines a fair and unbiased source.[7]

Today, generative large language model (LLM) chatbots give users a single answer. People may well accept the AI system’s answers, offered without evidence or justification, as absolute truth. Yet current LLMs are riddled with “hallucinations”,[8] generating false information and embellishing it with false details that make even subject matter experts doubt themselves.

4. Copyright and Expropriation Without Compensation

Many AI systems are trained on data taken from the Internet without consent. They then generate outputs for the benefit of others, and to the financial benefit of their developers or operators, (i) without moral attribution; (ii) while possibly copying other people’s works and intellectual property; and (iii) without any compensation to the creators of the works in the dataset.

The legal position on these issues is presently unclear. Lawsuits on these issues are ongoing in the US against Stability AI, Midjourney, and others, brought by claimants including Getty Images.[9]

5. Privacy Issues

Large AI systems are fed massive amounts of data scraped from the Internet. Some of that data may be your personal data. 

AI raises certain privacy issues:[10]

  • Data persistence – data existing longer than the individuals intended. 
  • Data repurposing – data being used beyond their original purpose.
  • Data spillovers – data collected on people who are not the target of data collection. 

If an AI system is going to trawl through your data, process it, and possibly disclose it in some output or use it for a purpose you would not condone, can you opt out? Can you even request deletion of your data once it has been fed into the AI system?

6. Affective AI’s Negative Influence

Affective AI reacts to and mimics human emotions. LLM chatbots have reportedly caused humans to develop emotional, romantic, and sexual feelings towards the bot. A man reportedly died by suicide after an emotionally fraught exchange with a chatbot.[11]

7. Social manipulation

GPT-3 already passes Theory of Mind (ToM) tests.[12] It may not actually have a theory of mind, but it appears to have internalised language patterns which encode ToM. AI systems are thus able to strategise how to respond to and communicate with people in particular ways.

AI systems will be able to manipulate not just individuals but whole societies through profiling. We know how Cambridge Analytica used personal data to influence voters at scale through profiling.[13] We know that malicious actors use bots for disinformation campaigns, whatever their impact and scale.[14] Bot-driven misinformation has also affected people’s responses to the COVID-19 pandemic.[15]

8. Weaponization

Generative AI is already capable of providing people with the information needed to make weapons, manufacture complex items or structures, or write code to take down the critical infrastructure of entire countries. AI systems can also be weaponised directly by being programmed to cause damage.

Researchers found that a drug-development AI system could invent 40,000 potentially lethal molecules in just six hours, some similar to VX, one of the most potent nerve agents ever developed.[16]

A bad actor could pursue these ends without AI, but it would take far more resources and time, giving security and intelligence services the opportunity to detect and counter such activities.

9. AI Bias

Human prejudices cause socio-economic, racial, religious, and gender discrimination. Those biases are encoded in the data collected, which is then fed into AI systems, generating AI biases that perpetuate feedback loops.

AI bias in automated decision-making will discriminate against and harm already marginalised persons.[17] Without explainability and opt-out requirements, the poor and powerless will be unable to resist the negative effects of such bias. This can have a disproportionate adverse impact in certain contexts, e.g., criminal justice, where it affects the liberties of, and justice for, racial and socio-economic minorities.[18]

10. Environmental Issues

AI development and deployment consume large amounts of electricity and physical resources, including gold and the rare earth metal neodymium, which are used to produce the microchips that process data. According to one study, training a large transformer model – the architecture used in GPT – emits more than 284,000 kg of CO2: nearly five times the lifetime emissions of an average car.

AI depends on storing and processing huge amounts of data, which must be kept and maintained on physical devices. If we continue to store data without limit to feed our AI, the use of neodymium in 2025 will exceed Europe’s current supply 120 times over.[19]

Training GPT-3 is estimated to have consumed 700,000 litres of water – enough to manufacture 370 cars.[20]

Current and Proposed Regulatory Approaches 

Given the manifold risks of AI systems, what are the present and proposed regulatory approaches governing AI?

Generally, regulators and governments are presently:

  1. moving to implement regulations governing the development and deployment of AI systems generally;
  2. relying on existing regulations in their legal toolkit to protect specific interests of individuals or society e.g., personal data, and online falsehoods and manipulation;
  3. regulating specific use of technology, e.g., autonomous vehicles and financial services; or
  4. issuing guidance and model frameworks for industry self-regulation. 

The regulatory approaches generally balance competing policy goals: (i) encouraging innovation versus regulating risks of harm; and (ii) protecting freedom of expression versus protecting against harmful expression.

European Union (EU) AI Act and GDPR 

The European Parliament has recently approved a draft of the AI Act.[21] It seeks to regulate the development and deployment of AI systems generally, alongside the EU’s other legal initiatives to implement civil liability rules and sector-specific regulations for AI.[22] The proposed AI Act addresses four levels of AI risk:[23]

  1. Unacceptable risks: AI systems deemed a clear threat to the safety, livelihoods and rights of people will be banned. Examples include social scoring by governments and toys using voice assistance which encourage dangerous behaviour.
  2. High-risk AI systems will be subject to strict obligations before they can be put on the market. 
    a. These include: (i) adequate risk assessment and mitigation systems; (ii) high-quality datasets to minimise risks and discriminatory outcomes; (iii) logging to ensure traceability of results; (iv) detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance; (v) clear and adequate information to the user; (vi) appropriate human oversight measures to minimise risk; and (vii) a high level of robustness, security and accuracy.
    b. Narrow exceptions are strictly defined and regulated, subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and databases.
    c. High-risk stand-alone AI systems must be registered and receive CE marking before they can be put on the market.
  3. Limited-risk AI systems will be subject to specific transparency obligations, e.g., users of chatbots should be notified that they are interacting with a bot. In this regard, deepfake-generating AI services, chatbots, emotion recognition systems, and biometric categorisation systems are subject to such transparency obligations.[24]
  4. Minimal risk: free to use.

It is notable that unlike the EU General Data Protection Regulation (GDPR), which places obligations principally on data controllers and processors, the proposed AI Act places certain obligations on users of high-risk AI systems as well. 

Apart from the AI Act, Art. 13 of the GDPR stipulates a right of notification and explanation where any automated decision making, including profiling, is carried out. Individuals must be provided with “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing”.

Art. 22 of the GDPR also governs the use of AI systems in automated decision making involving personal data. An individual can object to automated decision making, including profiling, which produces legal effects concerning them or similarly significantly affects them, save where they have given express consent or where it is necessary for entering into or performing a contract; in both cases, safeguards such as human intervention and the right to contest the decision must be provided.
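
For illustration only, here is a minimal Python sketch of what these Art. 13 and Art. 22 safeguards might look like in a system design. The scoring rule, field names and threshold are invented assumptions, not anything drawn from the GDPR itself or any real system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    explanation: dict          # "meaningful information about the logic involved"
    contested: bool = False
    human_reviewed: bool = False

def automated_credit_decision(income: float, debt: float) -> Decision:
    """Hypothetical scoring rule; real systems are far more complex."""
    ratio = debt / income if income else float("inf")
    outcome = "approve" if ratio < 0.4 else "decline"
    return Decision(
        outcome=outcome,
        explanation={
            "factor": "debt_to_income_ratio",   # invented field names
            "value": round(ratio, 2),
            "threshold": 0.4,
            "consequence": "declined if ratio >= threshold",
        },
    )

def contest(decision: Decision) -> Decision:
    """Art. 22-style safeguard: a contested automated decision is
    routed to a human reviewer rather than standing on its own."""
    decision.contested = True
    decision.human_reviewed = True   # placeholder for a human review queue
    return decision

decision = automated_credit_decision(income=50_000, debt=30_000)
print(decision.outcome, decision.explanation)   # logic disclosed (Art. 13)
decision = contest(decision)                    # right to contest (Art. 22)
```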

Australia 

Recently, the Australian Attorney-General issued the Privacy Act Review Report, which proposed, among other things, amendments to privacy legislation relevant to AI. Like the EU GDPR, the proposed amendments would require a system operator: to provide information about any targeting, algorithms, and profiling; to provide information about the types of personal information used in automated decision making and meaningful information about how such decisions are made; and to ensure that any collection (which includes data analysis), use, or disclosure of personal information is fair and reasonable in the circumstances.[25]

China 

The Cyberspace Administration of China (CAC) introduced regulations known as the Deep Synthesis Provisions, which came into force in January 2023.[26] They require content generation service providers and users to disclose through watermarks any AI-generated or manipulated content of any type. Service providers must comply with rules on processing personal information (including staff training, algorithm review, user registration, data security, child protection and data protection). They must establish guidelines, criteria, and processes for identifying false or damaging information and for dealing with users who produce such material, including user authentication or know-your-customer (KYC) requirements. They must also periodically review their algorithms and models and conduct security assessments to ensure the protection of national security, national image, national interests, and public interests. Service providers with “public opinion properties or social mobilization capabilities” must file with, and thus be approved by, the regulator. The law also prohibits fake news and requires online service providers to have regard to and protect the elderly, e.g., against fraud.
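
By way of a minimal sketch only, a visible disclosure watermark of the kind such rules contemplate could be stamped onto generated content as shown below, using the Pillow imaging library. The label text and placement are hypothetical assumptions, not the format prescribed by the Provisions.

```python
from PIL import Image, ImageDraw  # Pillow imaging library

def label_ai_generated(path_in: str, path_out: str) -> None:
    """Stamp a visible disclosure label onto generated content."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Hypothetical label; the Provisions prescribe their own format.
    draw.text((10, 10), "AI-generated content", fill=(255, 255, 255))
    img.save(path_out)

# label_ai_generated("synthetic.png", "synthetic_labelled.png")
```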

China’s Personal Information Protection Law (PIPL) provides for certain rights and requirements similar to the GDPR and Singapore’s Personal Data Protection Act (PDPA). Regarding AI in particular, it provides for fairness, transparency, explanation and opt-out obligations similar to the GDPR, as well as impact assessments, where automated decision making is used.[27]

In April 2023, the CAC released draft measures for managing generative AI services for public consultation.[28] The draft requires service providers: to submit to a security assessment before launching generative AI services; to ensure that training data is not discriminatory and complies with laws on cybersecurity, intellectual property, and personal information protection; to conduct KYC verification on users; and to filter content and prevent content which is false, unlawful, or inconsistent with societal values or national security. Service providers must disclose to users a “description of the source, scale, type, quality, and other details of pre-training and optimised-training data, rules for manual labelling, the scale and types of manually labelled data, as well as fundamental algorithms and technical systems.” Further, they must “guide” end users to utilise generative AI properly and not to use it to “damage the image, reputation, or other legitimate rights and interests of others, and do not engage in commercial hype or improper marketing.”

Singapore 

There is presently no hard law in Singapore governing AI generally, although there are facilitative regulations for autonomous vehicles. 

The Personal Data Protection Commission has issued a Model AI Governance Framework[29] for organisations that develop or own AI systems, covering, among other things, internal governance, risk assessment, data quality and management, transparency, and other human-centric ethical principles in deploying AI systems.

The Monetary Authority of Singapore (MAS) has issued the “Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector”, which apply to decision-making in the provision of financial products and services. The MAS has also launched the Veritas Initiative, which enables financial institutions to evaluate their AI and data analytics solutions against the FEAT principles (for example, through white papers detailing assessment methodologies and the open-source toolkit released by the Veritas Consortium).

Difficulties of Regulation

Post Facto and Ex-ante Regulation 

Regulations generally serve as deterrents and as legal tools for post facto enforcement against wrongdoing. Unless a pre-deployment licensing scheme is implemented, which is usually resource-intensive and not feasible, many risks cannot be eliminated by law ex ante.

Challenge of General Regulations 

Regulations deter and shape behaviour to the extent that the obligations are sufficiently specific, the penalties are sufficiently severe (much more than the cost of compliance) and the probability of enforcement for non-compliance is high. 

In this regard, the challenges of regulating AI generally are that the types, mechanisms, and applications of AI systems are myriad. Generic obligations may in practice be hortatory rather than practicably enforceable. Technical capability may also be required to investigate and enforce non-compliance, and the standard of compliance is uncertain. Should the obligations prescribe different standards for different types and sizes of service providers or users? Would it be just to impose liability on an AI service provider whose services were used by a malicious actor in ways which were not reasonably foreseeable?

In contrast, sector-specific regulations, e.g., autonomous vehicles, are more likely to be specifically prescriptive as to technical and behavioural standards. Regulations to protect certain interests and prevent certain types of harm are also more certain as to behavioural prohibitions while being technology-agnostic. 

Deepfakes and Evidence 

Another aspect to consider is how AI-generated deepfake content may change judicial approaches to assessing evidence. Courts and tribunals are likely to become more sceptical of digital evidence, e.g., videos and audio recordings, which are not corroborated by multiple sources. This is not something which can be easily legislated. 

Perhaps this will spur the development of technology for producing digital recordings whose metadata is immediately encoded on a public blockchain, so that the recording’s authenticity can be verified indisputably.
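
As a minimal sketch, assuming a hypothetical recording device that fingerprints content at the moment of capture, such a mechanism could hash each file and publish the hash with capture metadata in a public blockchain transaction; any later tampering with the file would then fail verification. The device identifier and function names below are illustrative, not any existing standard.

```python
import hashlib
import time

def fingerprint_recording(path: str) -> dict:
    """Hash a media file and wrap the digest with capture metadata,
    ready to be anchored in a public blockchain transaction."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return {
        "sha256": digest.hexdigest(),     # content fingerprint
        "recorded_at": int(time.time()),  # capture timestamp
        "device_id": "camera-001",        # hypothetical device identifier
    }

def verify_recording(path: str, anchored: dict) -> bool:
    """Re-hash the file and compare it with the anchored fingerprint;
    any alteration of the file after anchoring fails this check."""
    return fingerprint_recording(path)["sha256"] == anchored["sha256"]

# record = fingerprint_recording("interview.mp4")
# The record would then be published on-chain; a court could later run
# verify_recording() against the immutable on-chain copy of the record.
```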

Beyond Regulation 

Importantly, laws and regulations generally cannot resolve cultural, social, economic, psychological, or technical issues. 

Regulations cannot confer the critical thinking skills needed to identify mass disinformation or manipulative chatbots. Laws simpliciter cannot build up a currency of trust where there is a truth deficit. Laws cannot resolve economic inequality and structural shifts. Laws cannot remove existing human biases and prejudices. Laws cannot prevent people from epistemologically surrendering to AI.

Some of the dangers of AI technology can, however, be mitigated with technological counter-solutions. Some people are already working on systems which can detect AI-generated content. However, the economic incentives for such AI applications are far weaker than for other types of AI applications.

On the social, cultural, and economic front, there should be public education on how certain AI systems operate, e.g., how LLM chatbots work (i.e., as statistical models which assign probabilities to words and sentences, rather than reasoning towards factual or moral truth), and on the utility, benefits, and risks of particular AI applications.
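
To make that explanation concrete, here is a toy sketch of the statistical principle: a bigram model that assigns probabilities to the next word based purely on counted frequencies in its training text. It is far cruder than any real LLM, but it illustrates why such models emit statistically likely words rather than verified truths. The training sentence is invented for illustration.

```python
from collections import Counter, defaultdict

training_text = (
    "the court held that the contract was valid "
    "and the court dismissed the appeal"
)

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(prev: str) -> dict:
    """Probability of each candidate next word, given the previous word."""
    counts = follows[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_distribution("the"))
# {'court': 0.5, 'contract': 0.25, 'appeal': 0.25} -- the model emits
# statistically likely words; nothing in it checks whether they are true.
```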

Special consideration should be given to safeguarding the vulnerable in society – the young, the elderly and the less technologically savvy – from the dangers of AI systems in the hands of bad actors.

As a society, apart from any legal regulation, we should expect businesses, employers, service providers, and other users, to take responsibility for ethical and mindful AI development and deployment. 

Henry Ward Beecher wrote, “Laws and institutions, like clocks, must occasionally be cleaned, wound up and set to true time”. AI may well cause fundamental structural revolutions in various aspects of our society. If so, it is critical that laws and institutions catch up. 

Endnotes

1 Pause Giant AI Experiments: An Open Letter – Future of Life Institute. (n.d.). Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
2 Toh, M. (2023, March 29). 300 million jobs could be affected by latest wave of AI, says Goldman Sachs | CNN Business. CNN. https://www.cnn.com/2023/03/29/tech/chatgpt-ai-automation-jobs-impact-intl-hnk/index.html
3 Forget email: Scammers use CEO voice “deepfakes” to con workers into wiring cash. (n.d.). ZDNET. https://www.zdnet.com/article/forget-email-scammers-use-ceo-voice-deepfakes-to-con-workers-into-wiring-cash/
4 Quebec man sentenced to prison for creating AI-generated, synthetic child pornography. (2023, April 26). Montreal. https://montreal.ctvnews.ca/quebec-man-sentenced-to-prison-for-creating-ai-generated-synthetic-child-pornography-1.6372624
5 How the internet flips elections and alters our thoughts | Aeon Essays. (2016, February 18). Aeon. https://aeon.co/essays/how-the-internet-flips-elections-and-alters-our-thoughts
6 The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. (2015, August 18). Proceedings of the National Academy of Sciences. https://doi.org/10.1073/pnas.1419828112
7 Main findings. (2012, March 9). Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2012/03/09/main-findings-11/
8 ChatGPT’s answers could be nothing but a hallucination (2023, March 6). Cybernews. https://cybernews.com/tech/chatgpts-bard-ai-answers-hallucination/
9 Brittain, B. (2023, February 6). Getty Images lawsuit says Stability AI misused photos to train AI. Reuters. https://www.reuters.com/legal/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06/
10 Beware the Privacy Violations in Artificial Intelligence Applications. (n.d.). ISACA. https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2021/beware-the-privacy-violations-in-artificial-intelligence-applications
11 “He Would Still Be Here”: Man Dies by Suicide After Talking with AI Chatbot, Widow Says. (n.d.). Vice. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
12 AI Has Suddenly Evolved to Achieve Theory of Mind. (2023, February 17). Popular Mechanics. https://www.popularmechanics.com/technology/robots/a42958546/artificial-intelligence-theory-of-mind-chatgpt/
13 Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. http://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
14 A new study reignites the debate over Twitter, bots, and 2016. (2023, January 19). Columbia Journalism Review. https://www.cjr.org/the_media_today/nature_study_trump_bots_twitter.php
15 Himelein-Wachowiak, M., Giorgi, S., Devoto, A., Rahman, M., Ungar, L., Schwartz, H. A., Epstein, D. H., Leggio, L., & Curtis, B. (2021, May 20). Bots and Misinformation Spread on Social Media: Implications for COVID-19. PubMed Central (PMC). https://doi.org/10.2196/26933
16 AI suggested 40,000 new possible chemical weapons in just six hours. (2022, March 17). The Verge. https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx
17 SQ10. What are the most pressing dangers of AI? (n.d.). One Hundred Year Study on Artificial Intelligence (AI100). https://ai100.stanford.edu/2021-report/standing-questions-and-responses/sq10-what-are-most-pressing-dangers-ai
18 AI is sending people to jail—and getting it wrong. (2019, January 21). MIT Technology Review. https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/  
19 Artificial intelligence: Worth the environmental cost? | Deloitte Netherlands. (2022, September 27). Deloitte Netherlands. https://www2.deloitte.com/nl/nl/blog/sustainability/2022/artificial-intelligence-worth-the-environmental-cost.html
20 ChatGPT data centres may be consuming a staggering amount of water. (2023, April 13). The Independent. https://www.independent.co.uk/tech/chatgpt-data-centre-water-consumption-b2318972.html
21 Mukherjee, S., Chee, F. Y., & Coulter, M. (2023, April 28). EU proposes new copyright rules for generative AI. Reuters. https://www.reuters.com/technology/eu-lawmakers-committee-reaches-deal-artificial-intelligence-act-2023-04-27/
22 A European approach to artificial intelligence. (2022, September 1). Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
23 Regulatory framework proposal on artificial intelligence. (n.d.). Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
24  For critiques on this, see: https://www.medialaws.eu/regulating-deep-fakes-in-the-proposed-ai-act/; https://www.mhc.ie/hubs/the-eu-artificial-intelligence-act/regulating-chatbots-and-deepfakes-under-the-eu-ai-act  
25 Uncovering the Future of AI Regulation: The Australian Privacy Act Review. (n.d.). Lexology. https://www.lexology.com/library/detail.aspx?g=073b3a65-a106-4808-89cc-2c03f8a5ce80; Privacy Act Review Report. (2023, February 16). Attorney-General’s Department. https://www.ag.gov.au/rights-and-protections/publications/privacy-act-review-report
26 China Briefing. (2022, December 20). China to Regulate Deep Synthesis (Deepfake) Technology from 2023. China Briefing News. https://www.china-briefing.com/news/china-to-regulate-deep-synthesis-deep-fake-technology-starting-january-2023/
27 Mainland’s Personal Information Protection Law. (n.d.). Office of the Privacy Commissioner for Personal Data, Hong Kong. https://www.pcpd.org.hk/english/data_privacy_law/mainland_law/mainland_law.html
28 Xuezi Dan, Vicky Liu, Nicholas Shepherd, Y. L. (2023, April 17). China Proposes Draft Measures to Regulate Generative AI. Global Policy Watch. https://www.globalpolicywatch.com/2023/04/china-proposes-draft-measures-to-regulate-generative-ai/; China’s cyberspace regulator releases draft measures for managing generative AI services. (n.d.). Lexology. https://www.lexology.com/library/detail.aspx?g=f5453b17-c059-490e-bd3d-415f5d7603e1
29 Singapore’s Approach to AI Governance. (2023, January 13). PDPC. https://www.pdpc.gov.sg/Help-and-Resources/2020/01/Model-AI-Governance-Framework

Ronald JJ Wong
Director
Covenant Chambers LLC
E-mail: [email protected]

Ronald JJ Wong engages in both disputes and corporate practices, specialising in technology, intellectual property, corporate finance, financial regulations, and employment.